content
stringlengths
228
999k
pred_label
stringclasses
1 value
pred_score
float64
0.5
1
Now that we have our billing API all set up, let’s do a quick test in our local environment. Create a mocks/billing-event.json file and add the following. { "body": "{\"source\":\"tok_visa\",\"storage\":21}", "requestContext": { "identity": { "cognitoIdentityId": "USER-SUB-1234" } } } We are going to be testing with a Stripe test token called tok_visa and with 21 as the number of notes we want to store. You can read more about the Stripe test cards and tokens in the Stripe API Docs here. Let’s now invoke our billing API by running the following in our project root. $ serverless invoke local --function billing --path mocks/billing-event.json The response should look similar to this. { "statusCode": 200, "headers": { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Credentials": true }, "body": "{\"status\":true}" } Commit the Changes Let’s commit these to Git. $ git add . $ git commit -m "Adding a mock event for the billing API" Now that we have our new billing API ready. Let’s look at how to setup unit tests to ensure that our business logic has been configured correctly.
__label__pos
0.92164
blob: 92744b30921ab5ca5951e89b186ccf9003689569 [file] [log] [blame] // © 2016 and later: Unicode, Inc. and others. // License & terms of use: http://www.unicode.org/copyright.html /* * Copyright (C) 2015, International Business Machines Corporation and * others. All Rights Reserved. */ #if defined(STARBOARD) #include "starboard/client_porting/poem/string_poem.h" #endif // defined(STARBOARD) #include "unicode/unistr.h" #include "charstr.h" #include "cstring.h" #include "pluralmap.h" U_NAMESPACE_BEGIN static const char * const gPluralForms[] = { "other", "zero", "one", "two", "few", "many"}; PluralMapBase::Category PluralMapBase::toCategory(const char *pluralForm) { for (int32_t i = 0; i < UPRV_LENGTHOF(gPluralForms); ++i) { if (uprv_strcmp(pluralForm, gPluralForms[i]) == 0) { return static_cast<Category>(i); } } return NONE; } PluralMapBase::Category PluralMapBase::toCategory(const UnicodeString &pluralForm) { CharString cCategory; UErrorCode status = U_ZERO_ERROR; cCategory.appendInvariantChars(pluralForm, status); return U_FAILURE(status) ? NONE : toCategory(cCategory.data()); } const char *PluralMapBase::getCategoryName(Category c) { int32_t index = c; return (index < 0 || index >= UPRV_LENGTHOF(gPluralForms)) ? NULL : gPluralForms[index]; } U_NAMESPACE_END
__label__pos
0.901607
HTML Essential Training (2012) Illustration by Richard Downs HTML Essential Training (2012) with Bill Weinman Video: Using relative URLs Relative URLs are URLs that don't specify a complete host and path. Let's go ahead and open up relative.html. I'm not going to make a working copy here because we're not going to be changing anything, but I want to show it to you in the editor. And here we have a normal little HTML document. You'll notice that this link here, instead of having a whole URL, it just has a file name, page1.html. So what happens with that is that the browser comes along and it says, oh, a relative URL. So it'll construct a complete URL and it'll use this as the basis of that, and so what it does is it says, well, where did I get this document? I got this document on this host and at this path. Expand all | Collapse all 1. 5m 24s 1. Welcome 56s 2. Using the exercise files 1m 37s 3. What you need to know about this course 2m 51s 2. 22m 0s 1. What is HTML? 4m 12s 2. Examining the structure of an HTML document 7m 50s 3. Understanding tags and containers 6m 4s 4. Exploring content models in HTML5 2m 23s 5. Looking at obsolete elements 1m 31s 3. 27m 19s 1. Understanding whitespace and comments 3m 53s 2. Displaying text with paragraphs 3m 37s 3. Applying style 8m 5s 4. Using block and inline tags 6m 34s 5. Displaying characters with references 5m 10s 4. 16m 36s 1. Exploring the front matter of HTML 2m 9s 2. Applying CSS to your document 3m 59s 3. Adding scripting elements 4m 54s 4. Using the meta tag 3m 34s 5. Optimizing your page for search engines 2m 0s 5. 24m 59s 1. Controlling line breaks and spaces 2m 46s 2. Exploring phrase elements 1m 44s 3. Using font markup elements 1m 5s 4. Highlighting text with mark 1m 29s 5. Adding headings 1m 38s 6. Using quotations and quote marks 3m 2s 7. Exploring preformatted text 1m 45s 8. Formatting lists 2m 28s 9. Forcing text direction 3m 49s 10. Suggesting word-break opportunities 2m 29s 11. Annotating East Asian languages 2m 44s 6. 29m 15s 1. Introducing CSS 55s 2. Understanding CSS placement 6m 55s 3. Exploring CSS syntax 10m 34s 4. Understanding CSS units of measure 3m 3s 5. Some CSS examples 7m 48s 7. 22m 5s 1. Using images 4m 13s 2. Flowing text around an image 4m 55s 3. Breaking lines around an image 3m 3s 4. Aligning images 5m 25s 5. Mapping links in an image 4m 29s 8. 22m 28s 1. Understanding URLs 2m 41s 2. Working with hyperlinks 3m 28s 3. Using relative URLs 4m 20s 4. Specifying a base URL 2m 19s 5. Linking within a page 4m 12s 6. Using image links 5m 28s 9. 17m 2s 1. Exploring list types 3m 52s 2. List elements in depth 7m 44s 3. Using text menus with unordered lists 5m 26s 10. 15m 30s 1. Introduction to HTML semantics 4m 9s 2. Exploring an example 4m 56s 3. Marking up figures and illustrations 2m 33s 4. Creating collapsible details 3m 52s 11. 11m 18s 1. Embedding audio 5m 19s 2. Embedding video 5m 59s 12. 11m 53s 1. Creating ad-hoc Document Object Model (DOM) data with the data-* attribute 4m 53s 2. Displaying relative values with meter 2m 57s 3. Creating dynamic progress indicators 4m 3s 13. 4m 49s 1. Overview of HTML5 microdata 1m 8s 2. Exploring an example with microdata 3m 41s 14. 7m 3s 1. Understanding outlines 52s 2. A demonstration of outlining 6m 11s 15. 13m 1s 1. Table basics 7m 29s 2. Exploring the semantic parts of a table 2m 32s 3. Grouping columns 3m 0s 16. 9m 55s 1. Frames overview 54s 2. Using traditional frames 4m 26s 3. Exploring inline frames using iframe 2m 7s 4. Simulating frames with CSS 2m 28s 17. 53m 7s 1. Introducing forms 10m 24s 2. Using text elements 10m 12s 3. Using checkboxes and radio buttons 2m 37s 4. Creating selection lists and dropdown lists 5m 14s 5. Submit and button elements 8m 48s 6. Using an image as a submit button 2m 15s 7. Keeping context with the hidden element 3m 0s 8. Setting tab order 2m 7s 9. Preloading an autocomplete list using the datalist feature 5m 26s 10. Displaying results with output 3m 4s 18. 19m 47s 1. Touring a complete site 2m 14s 2. Touring the HTML 8m 44s 3. Touring the CSS 8m 49s 19. 29s 1. Goodbye 29s Start your free trial now, and begin learning software, business and creative skills—anytime, anywhere—with video instruction from recognized industry experts. Start Your Free Trial Now please wait ... Watch the Online Video Course HTML Essential Training (2012) 5h 34m Beginner Sep 11, 2012 Updated Jan 05, 2015 Viewers: in countries Watching now: This course introduces web designers to the nuts and bolts of HTML (HyperText Markup Language), the programming language used to create web pages. Author Bill Weinman explains what HTML is, how it's structured, and presents the major tags and features of the language. Discover how to format text and lists, add images and flow text around them, link to other pages and sites, embed audio and video, and create HTML forms. Additional tutorials cover the new elements in HTML5, the latest version of HTML, and prepare you to start working with Cascading Style Sheets (CSS). Topics include: • What is HTML? • Using HTML tags and containers • Understanding block vs. inline tags • Controlling line breaks and spaces in text • Aligning images • Linking within a page • Using relative links • Working with tables • Creating progress indicators with HTML5 • Adding buttons and check boxes to forms • Applying CSS • Optimizing your pages for search engines • Building document outlines Subjects: Developer Web Software: HTML Author: Bill Weinman Using relative URLs Relative URLs are URLs that don't specify a complete host and path. Let's go ahead and open up relative.html. I'm not going to make a working copy here because we're not going to be changing anything, but I want to show it to you in the editor. And here we have a normal little HTML document. You'll notice that this link here, instead of having a whole URL, it just has a file name, page1.html. So what happens with that is that the browser comes along and it says, oh, a relative URL. So it'll construct a complete URL and it'll use this as the basis of that, and so what it does is it says, well, where did I get this document? I got this document on this host and at this path. I am going to take that host and path all the way up to the file name and I am going to replace the file name with whatever is here. So we have a path here. Let me go ahead and open this in the browser so you can see what that looks like. It's a file path. See, it starts with the file scheme and then it's got this path/users/billweinman blah, blah, blah, all the way up to Chap07/relative.html. So what the browser will do is it says, I'm looking for this page1.html in the path where the current document is, and so just take this relative.html and it'll replace that with page1.html. And if I hover my mouse over this, you'll notice down here at the bottom of the browser, you'll see that constructed URL. It's everything up to Chap07 and it's page1.html. So when I click on that, I get this page1 document. Let's take a look at that. We'll open that in our text editor, page1.html, and we see here, it's the same document basically, and we have a couple of things. We have a link to, and here it is, a link to page2, but you'll notice that this has a subdirectory. Again it's a relative URL. It doesn't begin with a slash. It doesn't begin with HTTP or anything like that, and it says subdir/page2.html. So the browser will go through the same process. It'll take the current path to this page1.html that it's opened up and it's found this URL in, and it'll replace page1.html with whatever it sees here, which starts with "subdir/page2.html." So if we look at this in the browser, see, we have our current path has everything up to Chap07. And if I hover with this page 2, you see down here at the bottom it says Chap07/subdir/page2. And if we look here in our file system, you see we have a subdir, and there is the page2.html. So when I click on this link, it brings up that page2. And let's just bring that up in the editor, and we can look at that. And you'll notice a couple of things in here. One is the style sheet. You'll notice back in our other documents, if I bring up page1.html, you notice our style sheet here, it says main.css in the href. And if I bring up page2 you'll notice it says ../main.css, and the link back to page1 has ../page1. So this .. is a special thing. Actually, it comes from UNIX file systems, and it means the directory of the previous level relative to this document. In other words, when we look up here at this whole path up to page 2 and we see that the current directory is subdir, what it'll do is it sees that .. so it takes one out, and it'll go back to Chap07, and it'll construct that URL, so you see the URL says Chap07 page1.html down there at the bottom of the screen and here it says ../page1. The same thing for the CSS, because our CSS file is in the previous directory. See, it's right there. So this href for the CSS works exactly the same way, and you can have relative URLs there too as well. We've been doing that all along. If we look at the URL in page1, it just says main.css. That's a relatively URL. It means in the current directory. So let's go ahead and click on our back-to-page-1 link, and you see now we're back in Chap07 and click on original document. See, now we are back at the original document. So relative URLs are a great way to refer to objects within the same file space. Be careful though; it takes some effort to maintain relative links as you move your documents around on your site. Find answers to the most frequently asked questions about HTML Essential Training (2012) . Expand all | Collapse all please wait ... Q: The horizontal nab bar built in Chapter 8 doesn't work correctly in Internet Explorer 8. Do you have a solution? A: Internet Explorer 8 does not support HTML5 and the NAV element. The nab bar can work in IE 8 if you change the nav element to div, and update the CSS accordingly. You will also need to move the "display: inline" from the "ul.menu li a" rule to the "ul.menu li" rule.   Share a link to this course What are exercise files? Exercise files are the same files the author uses in the course. Save time by downloading the author's files instead of setting up your own files, and learn by following along with the instructor. Can I take this course without the exercise files? Yes! If you decide you would like the exercise files later, you can upgrade to a premium account any time. Become a member Download sample files See plans and pricing Please wait... please wait ... Upgrade to get access to exercise files. Exercise files video How to use exercise files. Learn by watching, listening, and doing, Exercise files are the same files the author uses in the course, so you can download them and follow along Premium memberships include access to all exercise files in the library. Exercise files Exercise files video How to use exercise files. For additional information on downloading and using exercise files, watch our instructional video or read the instructions in the FAQ . This course includes free exercise files, so you can practice while you watch the course. To access all the exercise files in our library, become a Premium Member. * Estimated file size Are you sure you want to mark all the videos in this course as unwatched? This will not affect your course history, your reports, or your certificates of completion for this course. Mark all as unwatched Cancel Congratulations You have completed HTML Essential Training (2012). Return to your organization's learning portal to continue training, or close this page. OK Course retiring soon HTML Essential Training (2012) will be retired from the lynda.com library on June 15, 2015. Training videos and exercise files will no longer be available, but the course will still appear in your course history and certificates of completion. For updated training, check out the all new HTML Essential Training in the lynda.com Online Training Library. Become a member to add this course to a playlist Join today and get unlimited access to the entire library of video courses—and create as many playlists as you like. Get started Already a member ? Exercise files Learn by watching, listening, and doing! Exercise files are the same files the author uses in the course, so you can download them and follow along. Exercise files are available with all Premium memberships. Learn more Get started Already a Premium member? Exercise files video How to use exercise files. Ask a question Thanks for contacting us. You’ll hear from our Customer Service team within 24 hours. Please enter the text shown below: The classic layout automatically defaults to the latest Flash Player. To choose a different player, hold the cursor over your name at the top right of any lynda.com page and choose Site preferences from the dropdown menu. Continue to classic layout Stay on new layout Exercise files Access exercise files from a button right under the course name. Mark videos as unwatched Remove icons showing you already watched videos if you want to start over. Control your viewing experience Make the video wide, narrow, full-screen, or pop the player out of the page into its own window. Interactive transcripts Click on text in the transcript to jump to that spot in the video. As the video plays, the relevant spot in the transcript will be highlighted. Learn more, save more. Upgrade today! Get our Annual Premium Membership at our best savings yet. Upgrade to our Annual Premium Membership today and get even more value from your lynda.com subscription: “In a way, I feel like you are rooting for me. Like you are really invested in my experience, and want me to get as much out of these courses as possible this is the best place to start on your journey to learning new material.”— Nadine H. Thanks for signing up. We’ll send you a confirmation email shortly. Sign up and receive emails about lynda.com and our online training library: Here’s our privacy policy with more details about how we handle your information. Keep up with news, tips, and latest courses with emails from lynda.com. Sign up and receive emails about lynda.com and our online training library: Here’s our privacy policy with more details about how we handle your information.     submit Lightbox submit clicked Terms and conditions of use We've updated our terms and conditions (now called terms of service).Go Review and accept our updated terms of service.
__label__pos
0.827397
9 votes 2answers 433 views Can there be a power basis for a totally real field of high degree? A number field $K$ is said to have a power basis if there is an $\alpha \in K$ such that the full ring of integers $O_K$ is the $\mathbb{Z}$-linear span of ... 4 votes 2answers 131 views unit group of biquadratic fields In the unit group of a real biquadratic field, what is the index of the product of the unit groups of its three quadratic subfields? Is the index 1 if discriminant of these three subfields are always ... 6 votes 1answer 243 views Degree of Kummer extensions of number fields Let $K$ be a number field and $a\in K^*$ of infinite order in $K^*$. How do I show that $$[K(\sqrt[n]{a},\zeta_n):K]\geq C\cdot n\cdot\varphi(n)$$ holds for all positive integers $n$, with a positive ... 0 votes 1answer 69 views Significance of the sign of the field norm for units in real quadratic fields Let $k = \mathbb{Q}(\sqrt{m})$, where $m \equiv 1 \pmod{8}$. Let $\epsilon$ be the fundamental unit of $k$ satisfying $\epsilon > 1$. A paper I'm reading involves studying the 2-torsion fields ... 0 votes 0answers 74 views Divisor bounds of ideals in number fields Let $K$ be an algebraic number field and let $I$ be an ideal in $O_K$ (the ring of integers). Denote by $d(I)$ the number of ideals that divide $I$. So if $I= \prod_{i=1}^k p_i^{e_i}$ is the ... 1 vote 1answer 158 views Lower Degree Elements in an Algebraic Number Field Fix an algebraic integer $\alpha$ of degree $n$ such that the extension $K=\mathbf{Q}(\alpha)/\mathbf{Q}$ has intermediate fields. (We can assume $K$ is Galois with non-simple Galois group.) This ... 7 votes 2answers 544 views Quintic polynomial solution by Jacobi Theta function. Does someone have a good and rigorous reference for the solution of quintic ploynomial equation with Jacobi Theta function, in English? Mathworld and Wikipedia don't give a good English reference, at ... 20 votes 0answers 928 views Orders in number fields Let $K$ be a degree $n$ extension of ${\mathbb Q}$ with ring of integers $R$. An order in $K$ is a subring with identity of $R$ which is a ${\mathbb Z}$-module of rank $n$. Question: Let $p$ be an ... 2 votes 1answer 132 views ramification of discrete valuation field Let $K$ be a discrete valuation field with valuation $v:K\rightarrow \mathbb Z\cup \{\infty\}$ which is normalized by $v(\pi)=1$ for a prime element $\pi$. Let $v:\overline K\rightarrow \mathbb ... 2 votes 0answers 110 views Comparing ideal class numbers of different orders Let $P$ be a monic irreducible integral polynomial. Let $K=\mathbf Q[X]/(P)$ be the associated number field, $\mathcal O$ be its ring of integers and $R$ be the order $\mathbf Z[X]/(P)$. (In general, ... 0 votes 1answer 556 views A question on Cebotarev's density theorem Let $K$ be a number field, $d$ a positive integer and $S$ a finite set of places of $K$. By Cebotarev, there exists a finite set of finite places $T$ disjoint from $S$ such that the conjugacy classes ... 0 votes 0answers 167 views Ring of Integers as subring with most irreducibles Let $L$ be a number field. Is it possible to define its ring of integers $R$ by saying it's the subring with (in a fuzzy sense) the "most" irreducibles? 1 vote 1answer 298 views The union of the totally split primes Let $R$ be a Dedekind domain with quotient field $K$, let $L$ be a finite separable extension of $K$, and let $S$ be the integral closure of $R$ in $L$. If $\mathfrak{p}$ is a nonzero prime ideal of ... 0 votes 2answers 486 views Which numbers appear as discriminants of cubics? I'm trying to show that all possible splitting fields occur for a class of cubic polynomials, so have started by looking at the discriminants. Clearly, given that they are products of squares, all ... 18 votes 1answer 862 views What is the ring of integers of the Pythagorean field? Following Hilbert, we call the complex numbers constructible via compass and straight-edge the field of Euclidean numbers, and the totally real such numbers the field of Pythagorean numbers. (Among ... 6 votes 1answer 588 views Is there an analog of class field theory over an arbitrary infinite field of algebraic numbers? Recently, I found a paper by Schilling http://www.jstor.org/pss/2371426, which mentions that for certain infinite field of algebraic numbers there is an analog of class field theory. By infinite field ... 2 votes 3answers 744 views Unit in a number field with same absolute value at a real and a complex place I was asked whether it was possible to produce a monic polynomial with integer coefficients, constant coefficient equal to $1$, having a real root $r > 1$ and a pair of complex roots with absolute ... 13 votes 5answers 2k views Given a number field $K$, when is its Hilbert class field an abelian extension of $\mathbb{Q}$? Given a number field $K$, when is its Hilbert class field an abelian extension of $\mathbb{Q}$? I am going to be on the road soon, so pleas don't be offended if I don't respond quickly to a comment. 14 votes 3answers 2k views sum of squares in ring of integers Lagrange proved that every (positive) rational integer is a sum of 4 squares. Are there general results like this for ring of integers of a number field? Is this class field theory? Explicity, ... 4 votes 3answers 1k views How many primes stay inert in a finite (non-cyclic) extension of number fields? In the following suppose L/K is a finite Galois extension of number fields, (maybe it works for other cases also, I don't know) By the Chebotorev density theorem when Gal(L/K) is cyclic, there are ... 10 votes 4answers 2k views A problem on Algebraic Number Theory, Norm of Ideals A problem on Algebraic Number Theory $K$ and $L$ are number fields over $\Bbb Q$.($\Bbb Q$ is rational number field) $K\subseteq L$. $\mathcal{O}_K$ is the ring of integers of $K$. and ... 0 votes 1answer 604 views On algebraic field extensions Let $L:K$ be a field extension. Let $A$ be a set of elements in $L$, all of which are algebraic over $K$. Construct the field extension $M=K(A)$. I have two questions: [1] Is $M:K$ an algebraic field ...
__label__pos
0.819032
Cancel resource pending planedit Cancels the pending plan of a Resource belonging to a given Deployment. Requestedit DELETE /api/v1/deployments/{deployment_id}/{resource_kind}/{ref_id}/plan/pending Path parametersedit Name Type Required Description deployment_id string Y Identifier for the Deployment ref_id string Y User-specified RefId for the Resource resource_kind string; allowed values: [elasticsearch, kibana, apm, appsearch, enterprise_search, integrations_server] Y The kind of resource Query parametersedit Name Type Required Description force_delete boolean; default: false N When true, deletes the pending plan instead of attempting a graceful cancellation. The default is false. ignore_missing boolean; default: false N When true, returns successfully, even when plans are missing. The default is false. Responsesedit 200 (DeploymentResourceCrudResponse) Standard Deployment Resource Crud Response 400 (BasicFailedReply) The Resource does not have a pending plan. (code: deployments.resource_does_not_have_a_pending_plan) Headers x-cloud-error-codes (string; allowed values: [deployments.resource_does_not_have_a_pending_plan]) The error codes associated with the response 404 (BasicFailedReply) The Deployment specified by {deployment_id} cannot be found. (code: deployments.deployment_not_found) Headers x-cloud-error-codes (string; allowed values: [deployments.deployment_not_found]) The error codes associated with the response 449 (BasicFailedReply) Elevated permissions are required. (code: root.unauthorized.rbac.elevated_permissions_required) Headers x-cloud-error-codes (string; allowed values: [root.unauthorized.rbac.elevated_permissions_required]) The error codes associated with the response 500 (BasicFailedReply) We have failed you. (code: deployments.deployment_resource_no_longer_exists) Headers x-cloud-error-codes (string; allowed values: [deployments.deployment_resource_no_longer_exists]) The error codes associated with the response Request exampleedit curl -XDELETE https://{{hostname}}/api/v1/deployments/{deployment_id}/{resource_kind}/{ref_id}/plan/pending \ -H "Authorization: ApiKey $ECE_API_KEY"
__label__pos
0.567074
How do you get overhead view on Google Maps? How do you get overhead view on Google Maps? Use Google Maps aerial view Manually drag the map to a location or add it into the search box and hit the magnifying glass icon. If you are on mobile, you can also click the compass icon to use your current location. Click the Satellite box in the bottom left of the map screen. The map should now change to aerial view. How can I see an overhead view of my house? Google Earth (and Google Maps) is the easiest way to get a satellite view of your house and neighborhood. This gives you a fascinating application that enables anyone to view nearly any part of the world, get instant geographic information for that area, and even see your house with an aerial view. How do I get live view of my house on Google? Navigate with Live View 1. On your Android phone or tablet, open the Google Maps app . 2. In the search bar, enter a destination or tap it on the map. 3. Tap Directions . 4. Above the map in the travel mode toolbar, tap Walking . 5. In the bottom center, tap Live View . How do I get Google Earth to show top view? Go to the View tab, and click the Snapshot Current View button. Now, when you double-click on that placemark in the 3D viewer or in the Places panel, Google Earth will fly to that saved view, or perspective. How do you tilt in Google? When someone is trying to search something on google send them the askew link and watch the reaction. Askew/Tilt will tilt the google page and the user watching it for the first time will be shocked. Try it with your friends to see their reaction. Can you get a Live View on Google Earth? Google Earth will now play live video feeds from select locations across the globe. The feature is available outright on different platforms supported by Google Earth. With the live video feed, viewers will be able to watch live activities from different locations, first one being the Katmai National Park in Alaska. How do I get compass bearing on Google Maps desktop? To do this, tap the compass icon in the top-right corner of the Google Maps map view. Your map position will move, with the icon updating to show that you’re pointing north. Can I get a birds eye view on Google Maps? Using Google’s Bird’s-Eye View Map Click the “Satellite” thumbnail at the bottom of the map. If you have a slow internet connection, it may take a minute or two for the satellite photo to load. Click the “3D” button on the map’s sidebar to get a bird’s-eye view of the location. How do I get live satellite images? Free Satellite Imagery Sources: Zoom In Our Planet 1. USGS EarthExplorer: Free-To-Use Satellite Imagery. 2. Landviewer: Free Access To Satellite Images. 3. Copernicus Open Access Hub: Up-to-date Free Satellite Imagery. 4. Sentinel Hub: Free High-Quality Satellite Images From Multiple Sources. Which is better Google Earth or Google Earth Pro? Google Earth lets you print screen resolution images, whereas Google Earth Pro offers premium high-resolution photos. Google Earth requires you to manually geo-locate geographic information system (GIS) images, while Google Earth Pro helps you automatically find them. How to find your house on Google Street View? Street View, by Google Maps, is a virtual representation of our surroundings on Google Maps, consisting of millions of panoramic images. Street View’s content comes from two sources – Google and Can I see a live satellite view of my house? A live satellite view of your house, is still a few years off. There are some services which will give you a live view of Earth from space. For example, you can access a live broadcast from NASA How to Hide Your House from Google Street View? Go to Google Maps or the Street View gallery • Find your house in Street View • Select ‘Report a problem’ from the three-dot (kebab) menu (upper left of Google Maps) or the link (bottom left on Google Earth) • How can you see a satellite view of Your House? Use the search field in the top left to enter your street address. • You’ll see your address in the search results. • Zoom in closer to get a detailed overhead satellite view of your home. • You can drag the man icon to the street to get down to ground view.
__label__pos
0.998076
Python Pandas Write DataFrame to Excel In this tutorial, we will learn about Python Pandas Write DataFrame to Excel. Also, we will cover these topics. • Python Pandas Write DataFrame to Excel • Python Pandas Write DataFrame to Excel Without Index • Python Pandas Write DataFrame to CSV • Python Pandas Write DataFrame to CSV without Index • Python Pandas Write DataFrame to CSV Example • Python Pandas Write DataFrame to Existing Excel • Python Pandas Export Multiple DataFrames to Excel Python Pandas Write DataFrame to Excel In this section, we will learn about Python Pandas Write DataFrame to Excel. • Using .to_excel() we can convert the DataFrame to an Excel file in Pyhton Pandas. • No module named ‘openpyxl’, in case this error is appearing that means you need to install openpyxl package on your system. • Use conda install openpyxl if you are using anaconda environment. • Use pip install openpyxl if using pip. • In our example, we have created a dataframe of cars and this dataframe is written to excel file using Python Pandas Read: Python Pandas DataFrame Iterrows Python Pandas Write DataFrame to Excel Without Index In this section, we will learn about Python Pandas write DataFrame to Excel without Index. • Everytime we export the dataframe index is also exported which becomes problem when same file is exported multiple times. • As it will have index multiple times which makes no sense. So it is advised to remove the index value before exporting it to a file. • Using option index=False, will not export an index value in the new file. • Here is the syntax of writing dataframe to excel without index. df.to_excel(new_file.xlsl, index=False) • Here is the implementation on Jupyter Notebook. Read: Pandas Delete Column Python Pandas Write DataFrame to CSV In this section, we will learn about Python Pandas Write DataFrame to CSV. • Using .to_csv() method in Python Pandas we can convert Dataframe to a csv file. • A CSV (Comma Seperated Value) are the best way of projecting the dataset due to it’s simplicity. • In our example, we have created a car dataframe and using .to_csv() function we are going to write dataframe to csv file in Python Pandas. • Here is the implementation on Jupyter Notebook. Read: How to Convert Pandas DataFrame to a Dictionary Python Pandas Write DataFrame to CSV without Index In this section, we will learn about Python Pandas Write DataFrame to CSV without Index. • Index numbers are automatically exported everytime dataframe is exported. • These index numbers can be stopped from exported by setting the parameter index = False while exporting the file. • In our example, we have created car dataframe and we have exported it with no index. • Here is the implementation on Jupyter Notebook. Read: Count Rows in Pandas DataFrame Python Pandas Write DataFrame to CSV Example In this section, we will cover an example for writing a dataframe to CSV in Python Pandas. • We have demonstrated one example in ‘Python Pandas Write DataFrame to CSV’ • To write CSV in Python Pandas we have .to_csv(), this function allows to export the dataframe to csv. • In our example we will create a dataframe using Series and then we will export it to csv. • Here is the implementation on Jupyter Notebook. Read: Python Pandas Drop Rows Example Python Pandas Write DataFrame to Existing Excel In this section, we will learn about Python Pandas Write DataFrame to Existing Excel. • First things in the process is to read or create dataframe that you want to add to the excel file. • then read the excel file using pd.ExcelWriter('exisitingfile.xlsx') and save the information in a variable in our case variable name is ‘writer’. • export the data from sources to writer like this: • df1.to_excel(writer, sheet_name='Sheet1') • here df1 is the first dataframe that we want to add in exisiting dataframe, and we are adding it in a sheet1. Here ‘writer’ will be same for all the dataframes that we want to add into existing dataframe. • In this end to mark the transaction complete type writer.save() • Here is the implementation on Jupyter Notebook. Read: Groupby in Python Pandas Python Pandas Export Multiple DataFrames to Excel In this section, we will learn about Python Pandas Export Multiple DataFrames to Excel. • Using pd.ExcelWriter() we can read the excel file in Python Pandas. • Using df.to_excel() we can write the excel in Python Pandas. This function accepts writer and sheet_name. Here writer is the valriable assigned to pd.ExcelWriter() function. • Using writer.save() we can commit the changes. • In our example, we have created 3 dataframes and we have created a excel file in which all the data of these 3 dataframe is stored in each sheet. • Here is the implementation on Jupyter Notebook. You may also like to read the following tutorials. In this tutorial, we have learned about Python Pandas Write DataFrame to Excel. Also, we have covered these topics. • Python Pandas Write DataFrame to Excel • Python Pandas Write DataFrame to Excel Without Index • Python Pandas Write DataFrame to CSV • Python Pandas Write DataFrame to CSV without Index • Python Pandas Write DataFrame to CSV Example • Python Pandas Write DataFrame to Existing Excel • Python Pandas Export Multiple DataFrames to Excel
__label__pos
0.993549
File Handling in C Share Your Love In this topic today I am going to teach about File Handling In C programming. What is a file? A file is an object which holds data, information or commands. Like many other programming languages, C also provides operations in files such as creating, reading, writing, opening and closing. Let’s dive deep and see how a file is created: Syntax: FILE *desired_name; *desired_name=fopen(“filename” , ”mode”); Hence the above syntax is used to create a file. There are several file management functions in C, which are: FunctionPurpose fopen()To create or open a file fclose()To close a file fprintf()Writing a block of data to a file fscanf()Reading a block of data from a file getc()Reading a single character from a file putc()Writing a single character in a file getw()Reading an integer from a file putw()Writing an integer to a file fseek()Sets the position of the file pointer to a specific location ftell()Returns the current position of the file pointer rewind()Sets the file pointer at the beginning of the file Join Our WhatsApp Group Join our WhatsApp Group To know more About Programming Language tips and how to start learning any programming language. Like the file management functions there are several modes while dealing with files in C, they are: File modeDescription rOpen a file for reading. If a file is in reading mode, then no data is deleted if a file is already present on a system wOpen a file for writing. If a file is in writing mode, then a new file is created if a file doesn’t exist at all. If a file is already present on a system, then all the data inside the file is truncated, and it is opened for writing purposes aOpen a file in append mode. If a file is in append mode, then the file is opened. The content within the file doesn’t change r+Open for reading and writing from the beginning w+Open for reading and writing, overwriting a file a+Open for reading and writing, appending to file These are the important modes and file management functions in C which helps in the file operations. This was an introductory part to file management in C and will deal with them practically in the next article. Share Your Love Default image Soham Malakar Hello my name is Soham Malakar am currently pursuing BCA Coding is one of my favorite hobbies. I love to code and an enthusiast to learn different types of programming languages. I love to get intoxicated in the world of technology and video games. A quick learner possessing better results. Articles: 40 Newsletter Updates Enter your email address below to subscribe to our newsletter
__label__pos
0.581143
OmniSciDB  d2f719934e  All Classes Namespaces Files Functions Variables Typedefs Enumerations Enumerator Friends Macros Groups Pages EventCb Class Reference + Inheritance diagram for EventCb: + Collaboration diagram for EventCb: Public Member Functions void event_cb (RdKafka::Event &event) override   Detailed Description Definition at line 257 of file KafkaImporter.cpp. Member Function Documentation void EventCb::event_cb ( RdKafka::Event &  event) inlineoverride Definition at line 259 of file KafkaImporter.cpp. References logger::ERROR, logger::INFO, LOG, run, and VLOG. 259  { 260  switch (event.type()) { 261  case RdKafka::Event::EVENT_ERROR: 262  LOG(ERROR) << "ERROR (" << RdKafka::err2str(event.err()) << "): " << event.str(); 263  if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN) { 264  LOG(ERROR) << "All brokers are down, we may need special handling here"; 265  run = false; 266  } 267  break; 268  269  case RdKafka::Event::EVENT_STATS: 270  VLOG(2) << "\"STATS\": " << event.str(); 271  break; 272  273  case RdKafka::Event::EVENT_LOG: 274  LOG(INFO) << "LOG-" << event.severity() << "-" << event.fac().c_str() << ":" 275  << event.str().c_str(); 276  break; 277  278  case RdKafka::Event::EVENT_THROTTLE: 279  LOG(INFO) << "THROTTLED: " << event.throttle_time() << "ms by " 280  << event.broker_name() << " id " << (int)event.broker_id(); 281  break; 282  283  default: 284  LOG(INFO) << "EVENT " << event.type() << " (" << RdKafka::err2str(event.err()) 285  << "): " << event.str(); 286  break; 287  } 288  } #define LOG(tag) Definition: Logger.h:205 static bool run #define VLOG(n) Definition: Logger.h:305 The documentation for this class was generated from the following file:
__label__pos
0.655604
Skip to content ColoredElevationMap Repository source: ColoredElevationMap Description A tutorial on how to setup a Windows Forms Application utilizing ActiViz.NET can be found here: Setup a Windows Forms Application to use ActiViz.NET Note: As long as ActiViz.NET is not build with VTK version 6.0 or higher you must define the preprocessor directive VTK_MAJOR_VERSION_5. Other languages See (Cxx), (Python) Question If you have a question about this example, please use the VTK Discourse Forum Code ColoredElevationMap.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Windows.Forms; using System.Diagnostics; using Kitware.VTK; namespace ActiViz.Examples { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void renderWindowControl1_Load(object sender, EventArgs e) { try { ColoredElevationMap(); } catch(Exception ex) { MessageBox.Show(ex.Message, "Exception", MessageBoxButtons.OK); } } private void ColoredElevationMap() { // Create a grid of points (height/terrian map) vtkPoints points = vtkPoints.New(); uint GridSize = 20; double xx, yy, zz; for(uint x = 0; x < GridSize; x++) { for(uint y = 0; y < GridSize; y++) { xx = x + vtkMath.Random(-.2, .2); yy = y + vtkMath.Random(-.2, .2); zz = vtkMath.Random(-.5, .5); points.InsertNextPoint(xx, yy, zz); } } // Add the grid points to a polydata object vtkPolyData inputPolyData = vtkPolyData.New(); inputPolyData.SetPoints(points); // Triangulate the grid points vtkDelaunay2D delaunay = vtkDelaunay2D.New(); #if VTK_MAJOR_VERSION_5 delaunay.SetInput(inputPolyData); #else delaunay.SetInputData(inputPolyData); #endif delaunay.Update(); vtkPolyData outputPolyData = delaunay.GetOutput(); double[] bounds = outputPolyData.GetBounds(); // Find min and max z double minz = bounds[4]; double maxz = bounds[5]; Debug.WriteLine("minz: " + minz); Debug.WriteLine("maxz: " + maxz); // Create the color map vtkLookupTable colorLookupTable = vtkLookupTable.New(); colorLookupTable.SetTableRange(minz, maxz); colorLookupTable.Build(); // Generate the colors for each point based on the color map vtkUnsignedCharArray colors = vtkUnsignedCharArray.New(); colors.SetNumberOfComponents(3); colors.SetName("Colors"); Debug.WriteLine("There are " + outputPolyData.GetNumberOfPoints() + " points."); #if UNSAFE // fastest way to fill color array colors.SetNumberOfTuples(outputPolyData.GetNumberOfPoints()); unsafe { byte* pColor = (byte*)colors.GetPointer(0).ToPointer(); for(int i = 0; i < outputPolyData.GetNumberOfPoints(); i++) { double[] p = outputPolyData.GetPoint(i); double[] dcolor = colorLookupTable.GetColor(p[2]); Debug.WriteLine("dcolor: " + dcolor[0] + " " + dcolor[1] + " " + dcolor[2]); byte[] color = new byte[3]; for(uint j = 0; j < 3; j++) { color[j] = (byte)( 255 * dcolor[j] ); } Debug.WriteLine("color: " + color[0] + " " + color[1] + " " + color[2]); *( pColor + 3 * i ) = color[0]; *( pColor + 3 * i + 1 ) = color[1]; *( pColor + 3 * i + 2 ) = color[2]; } } #else for(int i = 0; i < outputPolyData.GetNumberOfPoints(); i++) { double[] p = outputPolyData.GetPoint(i); double[] dcolor = colorLookupTable.GetColor(p[2]); Debug.WriteLine("dcolor: " + dcolor[0] + " " + dcolor[1] + " " + dcolor[2]); byte[] color = new byte[3]; for(uint j = 0; j < 3; j++) { color[j] = (byte)( 255 * dcolor[j] ); } Debug.WriteLine("color: " + color[0] + " " + color[1] + " " + color[2]); colors.InsertNextTuple3(color[0], color[1], color[2] ); //IntPtr pColor = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(byte)) * 3); //Marshal.Copy(color, 0, pColor, 3); //colors.InsertNextTupleValue(pColor); //Marshal.FreeHGlobal(pColor); } #endif outputPolyData.GetPointData().SetScalars(colors); // Create a mapper and actor vtkPolyDataMapper mapper = vtkPolyDataMapper.New(); #if VTK_MAJOR_VERSION_5 mapper.SetInputConnection(outputPolyData.GetProducerPort()); #else mapper.SetInputData(outputPolyData); #endif vtkActor actor = vtkActor.New(); actor.SetMapper(mapper); // get a reference to the renderwindow of our renderWindowControl1 vtkRenderWindow renderWindow = renderWindowControl1.RenderWindow; // renderer vtkRenderer renderer = renderWindow.GetRenderers().GetFirstRenderer(); // set background color renderer.SetBackground(0.2, 0.3, 0.4); // add our actor to the renderer renderer.AddActor(actor); } } }
__label__pos
0.997273
Filtering When you add Sentry to your app, you get a lot of valuable information about errors and performance. And lots of information is good -- as long as it's the right information, at a reasonable volume. The Sentry SDKs have several configuration options to help you filter out events. We also offer Inbound Filters to filter events in sentry.io. We recommend filtering at the client level though, because it removes the overhead of sending events you don't actually want. Learn more about the fields available in an event. Configure your SDK to filter error events by using the beforeSend callback method and configuring, enabling, or disabling integrations. All Sentry SDKs support the beforeSend callback method. Because it's called immediately before the event is sent to the server, this is your last chance to decide not to send data or to edit it. beforeSend receives the event object as a parameter, which you can use to either modify the event’s data or drop it completely by returning null, based on custom logic and the data available on the event. Copied import io.sentry.Sentry; Sentry.init(options -> { options.setBeforeSend((event, hint) -> { // Modify the event here: event.setServerName(null); // Don't send server names. return event; }); }); Note also that breadcrumbs can be filtered, as discussed in our Breadcrumbs documentation. The beforeSend callback is passed both the event and a second argument, hint, that holds one or more hints. Typically, a hint holds the original exception so that additional data can be extracted or grouping is affected. In this example, the fingerprint is forced to a common value if an exception of a certain type has been caught: A BiFunction<SentryEvent, Object, SentryEvent> can be used to mutate, discard (return null), or return a completely new event. Copied import io.sentry.Sentry; Sentry.init(options -> { options.setBeforeSend((event, hint) -> { if (hint instanceof MyHint) { return null; } else { return event; } }); }); When the SDK creates an event or breadcrumb for transmission, that transmission is typically created from some sort of source object. For instance, an error event is typically created from a log record or exception instance. For better customization, SDKs send these objects to certain callbacks (beforeSend, beforeBreadcrumb or the event processor system in the SDK). Hints are available in two places: 1. beforeSend / beforeBreadcrumb 2. eventProcessors Event and breadcrumb hints are objects containing various information used to put together an event or a breadcrumb. Typically hints hold the original exception so that additional data can be extracted or grouping can be affected. For events, hints contain properties such as event_id, originalException, syntheticException (used internally to generate cleaner stack trace), and any other arbitrary data that you attach. For breadcrumbs, the use of hints is implementation dependent. For XHR requests, the hint contains the xhr object itself; for user interactions the hint contains the DOM element and event name and so forth. Copied import io.sentry.Sentry; import java.sql.SQLException; import java.util.Arrays; Sentry.init(options -> { options.setBeforeSend((event, hint) -> { if (event.getThrowable() instanceof SQLException) { event.setFingerprints(Arrays.asList("database-connection-error")); } return event; }); }); originalException The original exception that caused the Sentry SDK to create the event. This is useful for changing how the Sentry SDK groups events or to extract additional information. syntheticException When a string or a non-error object is raised, Sentry creates a synthetic exception so you can get a basic stack trace. This exception is stored here for further data extraction. event For breadcrumbs created from browser events, the Sentry SDK often supplies the event to the breadcrumb as a hint. This can be used to extract data from the target DOM element into a breadcrumb, for example. level / input For breadcrumbs created from console log interceptions. This holds the original console log level and the original input data to the log function. response / input For breadcrumbs created from HTTP requests. This holds the response object (from the fetch API) and the input parameters to the fetch function. request / response / event For breadcrumbs created from HTTP requests. This holds the request and response object (from the node HTTP API) as well as the node event (response or error). xhr For breadcrumbs created from HTTP requests made using the legacy XMLHttpRequest API. This holds the original xhr object. When used together with one of the logging framework integrations, the Java SDK captures all error logs as events. If you see a particular kind of error very often that has a logger tag, you can ignore that particular logger entirely. For more information see our Logback or Log4j 2.x integration. To prevent certain transactions from being reported to Sentry, use the tracesSampler or beforeSendTransaction configuration option, which allows you to provide a function to evaluate the current transaction and drop it if it's not one you want. Note: The tracesSampler and tracesSampleRate config options are mutually exclusive. If you define a tracesSampler to filter out certain transactions, you must also handle the case of non-filtered transactions by returning the rate at which you'd like them sampled. In its simplest form, used just for filtering the transaction, it looks like this: Copied import io.sentry.Sentry; Sentry.init(options -> { options.setTracesSampler(context -> { // If this is the continuation of a trace, just use that decision (rate controlled by the caller). Boolean parentSampled = context.getTransactionContext().getParentSampled(); if (parentSampled != null) { return parentSampled ? 1.0 : 0.0; } if (/* make a decision based on `samplingContext` */) { // Drop this transaction, by setting its sample rate to 0% return 0.0; } else if (/* ... */) { // Override sample rate for other cases (replaces `options.TracesSampleRate`) return 0.1; } // Can return `null` to fallback to the rate configured by `options.tracesSampleRate` return null; }); }); It also allows you to sample different transactions at different rates. If the transaction currently being processed has a parent transaction (from an upstream service calling this service), the parent (upstream) sampling decision will always be included in the sampling context data, so that your tracesSampler can choose whether and when to inherit that decision. In most cases, inheritance is the right choice, to avoid breaking distributed traces. A broken trace will not include all your services. See Inheriting the parent sampling decision to learn more. Learn more about configuring the sample rate. Copied import io.sentry.Sentry; Sentry.init(options -> { options.setBeforeSendTransaction((transaction, hint) -> { // Modify or drop the transaction here: if ("/unimportant/route".equals(transaction.getTransaction())) { // Don't send the transaction to Sentry return null; } else { return transaction; } }); }); Help improve this content Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) or suggesting an update ("yeah, this would be better").
__label__pos
0.778506
wholemovement Toggle Login Thursday, 01 April 2010 15:46 Folding A Circle iIn Half: Part 2. Principles Written by  Rate this item (0 votes) Let’s get back to the first fold in the circle (Feb. entry) and talk about the principles behind the parts that determine the interrelationships between those parts and all subsequent folding with a circle. In deciding that the circle is a mathematical symbol representing nothing, and using fragments to draw 2-D constructions to prove abstract formula, we have failed to understand that the circle is both Whole and part. Understanding the circle is Whole allows us to observe what is principle. Principles happen first; they affect all parts and all folds, reconfigurations, and joining of multiple circles. If we do not know what is principle, what comes first, we do not really know comprehensively what we are doing. The sphere is Whole before all other forms. Compression transforms the sphere to circle. Spherical unity is principle to the nature of the circle; it comes first. It is triunity; a disk in space showing three circle planes. By adding the two edges there are five generalized parts. We can add the inside volume and the external space, seven individualized associations of unity. There are seven observable qualities that happen first in this act of compression and again in decompressing spherical information by folding the circle. Stating with the WHOLE, there is MOVEMENT that creates DIVISION forming a DUALITY in TRIANGULATION, where there is a CONSISTANCY of all parts to the movement of the Whole, and each part is INNER-DEPENDENT to the Whole. Every fold in the circle reflects these seven principle qualities. They apply to all aspects of our lives. Notice the first five observations are about the mechanics. They reveal the manifest functionality of structural order. The last two observations are relational. They give us the most trouble. We are not consciously consistent in developing progressive habits, and we do not like the idea of being dependent on anyone or anything. We have trouble connecting with the inner, the unseen intention of first cause, the absolute pattern that regulates all purposeful evolutionary formation. The degree of clarity about these two qualities has everything to do with how we relate and interact with each other, and the connections we make that give meaning and value to our lives. These ideas about principles are not just a philosophy illustrated by folding a circle. It comes from direct observation about what happens when a sphere is compressed and again when the circle is folded. Cutting the circle into parts violates the principles and destroys unity causing disruption, and confusion; then breakdown occurs. We are left with separated pieces and must rely on the inconsistency of the human mind and construction methods. Unity can not be constructed; it is a function found only with the Whole. Principles are not a function of parts, but of the Whole. We only have to look at what we have done to this planet and the condition of people’s lives to know there is destruction, lack of clarity, and little understanding of principles or purpose. Humanity lacks knowledge of unity, is confused about the Whole, and has become addicted to fragmented construction using bits and pieces for our own short-sighted pleasure. We are under the illusion that we create unity with parts. Yet we do make extraordinary bigger parts from smaller parts, and incredible smaller and useful parts. Understanding these principles help us to recognize unity, support the beauty of endless differences and to progressively benefit from such diverse expressions of inner goodness. We lack responsibility to the interconnections between all people for our lack of understanding our dependency on the common source of all being. To better understand what is inclusively principled can help clarify and reconcile the confusion between widely differing forms of social and religions cultures and our individual experience. The Wholemovement approach to geometry and pattern development differs from traditional understanding based in a mixture of false assumptions, inconsistencies, and an over abundance of self interest. Ask mathematicians what the principles of mathematics are and keep tract of all the different answers you get. We do not have an understanding of what is principle beyond what we think is more important than what somebody else thinks. Folding circles offers a demonstration of principles that are inclusively dynamic to all pattern development of formation, in all fields of study as far as I can tell. Knowing what is principle helps to increase our capacity to a clearer understanding about consistency of appropriate behavior about our place in the universe. The sphere/circle reformation and folding is the only principled, experiential, hands-on activity that demonstrates anything about the idea of a comprehensive and inclusive Whole.     Read 7361 times Last modified on Tuesday, 12 June 2012 00:54
__label__pos
0.852782
5 I want something like this: enter image description here made with this MWE: \documentclass{memoir} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{duckuments} \newsubfloat{figure} \begin{document} \begin{figure} \centerfloat \begin{minipage}{10cm} \subbottom[huey]{\includegraphics[width=10cm]{example-image-duck}} \end{minipage} \begin{minipage}{0.5\textwidth} \subtop[duey]{\includegraphics[width=5cm]{example-image-duck}} \subbottom[luey]{\includegraphics[width=5cm]{example-image-duck}} \end{minipage} \caption{Donalds nephews} \label{fig:nephew} \end{figure} \end{document} But where the width of the first figure is unknown, so I cannot make a minipage, then I get: enter image description here Made using the MWE: \documentclass{memoir} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{duckuments} \newsubfloat{figure} \begin{document} \begin{figure} \centerfloat % \begin{minipage}{10cm} \subbottom[huey]{\includegraphics[width=10cm]{example-image-duck}} %The 10 cm is unknown % \end{minipage} \begin{minipage}{0.5\textwidth} \subtop[duey]{\includegraphics[width=5cm]{example-image-duck}} \subbottom[luey]{\includegraphics[width=5cm]{example-image-duck}} \end{minipage} \caption{Donalds nephews} \label{fig:nephew} \end{figure} \end{document} Is there a way to either find the width from the \includegraphics command, or align the figures without the minipage? • \includegraphics produces a box, therefore you can always get the dimensions of an included image using \settowidth, \settoheight and \settodepth (use a search engine). – frougon May 9 at 12:26 • your question isn't so clear,you set the minipage to .5\textwidth, so don't you want the left larger figure to be .5\textwidth (or a bit less if you want a gap) why do you need to know 10cm? (similarly why force the smaller figures to 5cm rather than use \linewidth (teh width of the minipage) – David Carlisle May 9 at 12:35 • @DavidCarlisle I don't really wan't to force the width of any of the figures, but example images are all (unless forced) the same size. I wanted to simulate having a large (tall), and two small (short) figures, and the wish to arrange them as shown in the first picture (without knowing the width of any one figure). – Thorbjørn E. K. Christensen May 9 at 12:41 • @DavidCarlisle I need to know the width of the first figure, so I don't make the minipage too shallow or wide (which would push/pull the small figures to the right). I wan't the left figure to have it's "natural" size – Thorbjørn E. K. Christensen May 9 at 12:45 • You presumably want to align the center of the minipage with the center of the \subbottom. The baseline of a minipage is the center by default, but \subbotom appears to be using the baseline of the image. \raisebox{\dimexpr 0.5\depth-0.5\height}{...} will center anything. – John Kormylo May 9 at 13:07 4 Here is a proposal for the computation of the desired width. I am using pgf here to make the computation more comprehensible, in principle it can be stripped off. And one can modify things to respect the heights of the captions, too, but this is the basic computation. \documentclass{memoir} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{duckuments} \newsubfloat{figure} \usepackage{pgf} \begin{document} We are given the aspect ratios $r_i$ of three figures, $r_i$. Let's call their final heights $y_i$ and widths $x_i$, i.e.\ $r_i=y_i/x_i$. We want the sum of heights of figures 2 and 3 to coincide with the height of figure 1, \[y_2+y_3=r_2x_2+r_3x_3=x_2(r_2+r_3)\stackrel{!}{=}y_1=r_1x_1\;,\] where we have used that figures 2 and 3 should be equally wide, $x_2=x_3$. We also want the width of figures 1 and 2 (or, equivalently, 1 and 3) to add up to some target width $t$, $x_1+x_2=t$. Therefore \[ x_1=\frac{r_2+r_3}{r_1+r_2+r_3}t\;.\] The following example uses $t=0.95$\textbackslash\texttt{textwidth}. In this version, the heights of the captions have not been taken into account. % in general the ratios may not coincide \pgfmathsetmacro{\rOne}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\rTwo}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\rThree}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\xOne}{((\rOne+\rTwo)/(\rOne+\rTwo+\rThree))*0.95*\textwidth} \pgfmathsetmacro{\xTwo}{0.95*\textwidth-\xOne} \begin{figure} \centerfloat \begin{minipage}{\xOne pt} \subbottom[huey]{\includegraphics[width=\xOne pt]{example-image-duck}} \end{minipage} \begin{minipage}{\xTwo pt} \subtop[duey]{\includegraphics[width=\xTwo pt]{example-image-duck}} \subbottom[luey]{\includegraphics[width=\xTwo pt]{example-image-duck}} \end{minipage} \caption{Donalds nephews} \label{fig:nephew} \end{figure} \clearpage Assume now that we want to take into account the caption heights. For simplicity, assume they are universal and denote them by $h$. More precisely, at this level $h$ is a tuning parameter that can be adjusted to account for the captions. Then \[2h+y_2+y_3=2h+r_2x_2+r_3x_3=2h+x_2(r_2+r_3)\stackrel{!}{=}h+y_1=h+r_1x_1\] and thus \[ x_1=\frac{(r_2+r_3)t+h}{r_1+r_2+r_3}\;.\] \pgfmathsetmacro{\rOne}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\rTwo}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\rThree}{height("\includegraphics{example-image-duck}")/width("\includegraphics{example-image-duck}")} \pgfmathsetmacro{\hCaption}{17.5} %<-just a guess \pgfmathsetmacro{\xOne}{(((\rOne+\rTwo)*0.95*\textwidth+\hCaption)/(\rOne+\rTwo+\rThree))} \pgfmathsetmacro{\xTwo}{0.95*\textwidth-\xOne} \begin{figure} \centerfloat \begin{minipage}{\xOne pt} \subbottom[huey]{\includegraphics[width=\xOne pt]{example-image-duck}} \end{minipage} \begin{minipage}{\xTwo pt} \subtop[duey]{\includegraphics[width=\xTwo pt]{example-image-duck}} \subbottom[luey]{\includegraphics[width=\xTwo pt]{example-image-duck}} \end{minipage} \caption{Donalds nephews} \label{fig:nephewtuned} \end{figure} \end{document} enter image description here I also added a way to take into account the captions (but this requires tuning since I do not know what \subbottom and \subtop precisely do, i.e. which dimensions they use. If this is known, the following discussion may allow one to compute h rather than guessing it. enter image description here 3 \documentclass{memoir} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{duckuments} \newsubfloat{figure} \usepackage{showframe} \begin{document} \begin{figure} \centerfloat \begin{minipage}{8cm} \strut\\ \subbottom[huey]{\includegraphics[width=\linewidth]{example-image-duck}} \end{minipage}\quad \begin{minipage}{\dimexpr\textwidth-8cm-1em} \subtop[duey\strut]{\includegraphics[width=\linewidth]{example-image-duck}} \subbottom[luey\strut]{\includegraphics[width=\linewidth]{example-image-duck}} \end{minipage} \caption{Donalds nephews}\label{fig:nephew} \end{figure} \end{document} enter image description here • Sorry to ask but where does the length 8cm come from? What happens if the user uses a different page geometry and/or different figures with different aspect ratios? – user121799 May 9 at 19:07 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
1
Anda di halaman 1dari 4 Accuracy and precision Accuracy, in science, engineering, industry and statistics, is the degree of conformity of a measured/calculated quantity to its actual (true) value. Precision (also called reproducibility or repeatability) is the degree to which further measurements or calculations will show the same or similar results. The results of a measurement or calculations can be accurate but not precise, precise but not accurate, neither, or both; if a result is both accurate and precise, it is called valid. The related terms in surveying are error (random variability in research) and bias (nonrandom or directed effects caused by a factor or factors unrelated by the independent variable). Accuracy vs precision - the target analogy High accuracy, but low precision High precision, but low accuracy Accuracy is the degree of veracity while precision is the degree of reproducibility. An analogy used to explain the difference between accuracy and precision is the target comparison. Repeated measurements are compared to arrows that are fired at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements to the accepted value, the more accurate the system is considered to be. To continue the analogy, if a large number of arrows are fired, precision would be the size of the arrow cluster. (When only one arrow is fired, precision is the size of the cluster one would expect if this was repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate. However, it is not possible to reliably achieve accuracy in individual measurements without precision - if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimation of the bullseye, but the individual arrows are inaccurate.) See also Circular error probable for application of precision to the science of ballistics. Accuracy and precision in logic level modeling and IC simulation As described in the SIGDA Newsletter[Vol 20. Number 1, June 1990] a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality. Another reference for this topic is "Logic Level Modelling", by John M. Acken, Encyclopedia of Computer Science and Technology, Vol 36, 1997, page 281-306. Quantifying accuracy and precision Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the known value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology. Precision is usually characterised in terms of the standard deviation of the measurements, sometimes called the measurement process's standard error. The interval defined by the standard deviation is the 68.3% ("one sigma") confidence interval of the measurements. If enough measurements have been made to accurately estimate the standard deviation of the process, and if the measurement process produces normally distributed errors, then it is likely that 68.3% of the time, the true value of the measured property will lie within one standard deviation, 95.4% of the time it will lie within two standard deviations, and 99.7% of the time it will lie within three standard deviations of the measured value. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish: the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration. the combined effect of that and precision A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of '8430 m' would imply a margin of error of 5 m (the last significant place is the tens place), while '8000 m' would imply a margin of 500 m. To indicate a more accurate measurement that just happens to lie near a round number, one would use scientific notation: '8.000 x 10 m' indicates a margin of 0.5 m. However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. Looking at this in another way, a value of 8 would mean that the measurement has been made with a precision of '1' (the measuring instrument was able to measure only up to 1's place) whereas a value of 8.0 (though mathematically equal to 8) would mean that the value at the first decimal place was measured and was found to be zero. (The measuring instrument was able to measure the first decimal place.) The second value is more precise. Neither of the measured values may be accurate (the actual value could be 9.5 but measured inaccurately as 8 in both instances). Thus, accuracy can be said to be the 'correctness' of a measurement, while precision could be identified as the ability to resolve smaller differences. Precision is sometimes stratified into: Repeatability - the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and Reproducibility - the variation arising using the same measurement process among different instruments and operators, and over longer time periods. As stated before, you can be both accurate and precise. For instance, if all your arrows hit the bull's eye of the target, they are all both near the "true value" (accurate) and near one another (precise). Something to think about: In the NFL, a place kicker makes 9 of 10 field goals, and another makes 6 of 10. Even if the 6 that the second kicker made were straight down the middle and the first kicker just made his in, he is still less accurate and less precise than the first kicker. This differs from the darts example because either you make it or you do not; there are not different levels of points that can be scored.
__label__pos
0.878202
1 /* 2 * Copyright (c) 1999, 2015, Oracle and/or its affiliates. All rights reserved. 3 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 4 * 5 * This code is free software; you can redistribute it and/or modify it 6 * under the terms of the GNU General Public License version 2 only, as 7 * published by the Free Software Foundation. 8 * 9 * This code is distributed in the hope that it will be useful, but WITHOUT 10 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License 12 * version 2 for more details (a copy is included in the LICENSE file that 13 * accompanied this code). 14 * 15 * You should have received a copy of the GNU General Public License version 16 * 2 along with this work; if not, write to the Free Software Foundation, 17 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 18 * 19 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA 20 * or visit www.oracle.com if you need additional information or have any 21 * questions. 22 * 23 */ 24 25 // no precompiled headers 26 #include "asm/macroAssembler.hpp" 27 #include "classfile/classLoader.hpp" 28 #include "classfile/systemDictionary.hpp" 29 #include "classfile/vmSymbols.hpp" 30 #include "code/icBuffer.hpp" 31 #include "code/vtableStubs.hpp" 32 #include "decoder_windows.hpp" 33 #include "interpreter/interpreter.hpp" 34 #include "jvm_windows.h" 35 #include "memory/allocation.inline.hpp" 36 #include "mutex_windows.inline.hpp" 37 #include "nativeInst_x86.hpp" 38 #include "os_share_windows.hpp" 39 #include "prims/jniFastGetField.hpp" 40 #include "prims/jvm.h" 41 #include "prims/jvm_misc.hpp" 42 #include "runtime/arguments.hpp" 43 #include "runtime/extendedPC.hpp" 44 #include "runtime/frame.inline.hpp" 45 #include "runtime/interfaceSupport.hpp" 46 #include "runtime/java.hpp" 47 #include "runtime/javaCalls.hpp" 48 #include "runtime/mutexLocker.hpp" 49 #include "runtime/osThread.hpp" 50 #include "runtime/sharedRuntime.hpp" 51 #include "runtime/stubRoutines.hpp" 52 #include "runtime/thread.inline.hpp" 53 #include "runtime/timer.hpp" 54 #include "utilities/events.hpp" 55 #include "utilities/vmError.hpp" 56 57 # include "unwind_windows_x86.hpp" 58 #undef REG_SP 59 #undef REG_FP 60 #undef REG_PC 61 #ifdef AMD64 62 #define REG_SP Rsp 63 #define REG_FP Rbp 64 #define REG_PC Rip 65 #else 66 #define REG_SP Esp 67 #define REG_FP Ebp 68 #define REG_PC Eip 69 #endif // AMD64 70 71 extern LONG WINAPI topLevelExceptionFilter(_EXCEPTION_POINTERS* ); 72 73 // Install a win32 structured exception handler around thread. 74 void os::os_exception_wrapper(java_call_t f, JavaValue* value, const methodHandle& method, JavaCallArguments* args, Thread* thread) { 75 __try { 76 77 #ifndef AMD64 78 // We store the current thread in this wrapperthread location 79 // and determine how far away this address is from the structured 80 // execption pointer that FS:[0] points to. This get_thread 81 // code can then get the thread pointer via FS. 82 // 83 // Warning: This routine must NEVER be inlined since we'd end up with 84 // multiple offsets. 85 // 86 volatile Thread* wrapperthread = thread; 87 88 if (os::win32::get_thread_ptr_offset() == 0) { 89 int thread_ptr_offset; 90 __asm { 91 lea eax, dword ptr wrapperthread; 92 sub eax, dword ptr FS:[0H]; 93 mov thread_ptr_offset, eax 94 }; 95 os::win32::set_thread_ptr_offset(thread_ptr_offset); 96 } 97 #ifdef ASSERT 98 // Verify that the offset hasn't changed since we initally captured 99 // it. This might happen if we accidentally ended up with an 100 // inlined version of this routine. 101 else { 102 int test_thread_ptr_offset; 103 __asm { 104 lea eax, dword ptr wrapperthread; 105 sub eax, dword ptr FS:[0H]; 106 mov test_thread_ptr_offset, eax 107 }; 108 assert(test_thread_ptr_offset == os::win32::get_thread_ptr_offset(), 109 "thread pointer offset from SEH changed"); 110 } 111 #endif // ASSERT 112 #endif // !AMD64 113 114 f(value, method, args, thread); 115 } __except(topLevelExceptionFilter((_EXCEPTION_POINTERS*)_exception_info())) { 116 // Nothing to do. 117 } 118 } 119 120 #ifdef AMD64 121 122 // This is the language specific handler for exceptions 123 // originating from dynamically generated code. 124 // We call the standard structured exception handler 125 // We only expect Continued Execution since we cannot unwind 126 // from generated code. 127 LONG HandleExceptionFromCodeCache( 128 IN PEXCEPTION_RECORD ExceptionRecord, 129 IN ULONG64 EstablisherFrame, 130 IN OUT PCONTEXT ContextRecord, 131 IN OUT PDISPATCHER_CONTEXT DispatcherContext) { 132 EXCEPTION_POINTERS ep; 133 LONG result; 134 135 ep.ExceptionRecord = ExceptionRecord; 136 ep.ContextRecord = ContextRecord; 137 138 result = topLevelExceptionFilter(&ep); 139 140 // We better only get a CONTINUE_EXECUTION from our handler 141 // since we don't have unwind information registered. 142 143 guarantee( result == EXCEPTION_CONTINUE_EXECUTION, 144 "Unexpected result from topLevelExceptionFilter"); 145 146 return(ExceptionContinueExecution); 147 } 148 149 150 // Structure containing the Windows Data Structures required 151 // to register our Code Cache exception handler. 152 // We put these in the CodeCache since the API requires 153 // all addresses in these structures are relative to the Code 154 // area registered with RtlAddFunctionTable. 155 typedef struct { 156 char ExceptionHandlerInstr[16]; // jmp HandleExceptionFromCodeCache 157 RUNTIME_FUNCTION rt; 158 UNWIND_INFO_EH_ONLY unw; 159 } DynamicCodeData, *pDynamicCodeData; 160 161 #endif // AMD64 162 // 163 // Register our CodeCache area with the OS so it will dispatch exceptions 164 // to our topLevelExceptionFilter when we take an exception in our 165 // dynamically generated code. 166 // 167 // Arguments: low and high are the address of the full reserved 168 // codeCache area 169 // 170 bool os::register_code_area(char *low, char *high) { 171 #ifdef AMD64 172 173 ResourceMark rm; 174 175 pDynamicCodeData pDCD; 176 PRUNTIME_FUNCTION prt; 177 PUNWIND_INFO_EH_ONLY punwind; 178 179 BufferBlob* blob = BufferBlob::create("CodeCache Exception Handler", sizeof(DynamicCodeData)); 180 CodeBuffer cb(blob); 181 MacroAssembler* masm = new MacroAssembler(&cb); 182 pDCD = (pDynamicCodeData) masm->pc(); 183 184 masm->jump(ExternalAddress((address)&HandleExceptionFromCodeCache)); 185 masm->flush(); 186 187 // Create an Unwind Structure specifying no unwind info 188 // other than an Exception Handler 189 punwind = &pDCD->unw; 190 punwind->Version = 1; 191 punwind->Flags = UNW_FLAG_EHANDLER; 192 punwind->SizeOfProlog = 0; 193 punwind->CountOfCodes = 0; 194 punwind->FrameRegister = 0; 195 punwind->FrameOffset = 0; 196 punwind->ExceptionHandler = (char *)(&(pDCD->ExceptionHandlerInstr[0])) - 197 (char*)low; 198 punwind->ExceptionData[0] = 0; 199 200 // This structure describes the covered dynamic code area. 201 // Addresses are relative to the beginning on the code cache area 202 prt = &pDCD->rt; 203 prt->BeginAddress = 0; 204 prt->EndAddress = (ULONG)(high - low); 205 prt->UnwindData = ((char *)punwind - low); 206 207 guarantee(RtlAddFunctionTable(prt, 1, (ULONGLONG)low), 208 "Failed to register Dynamic Code Exception Handler with RtlAddFunctionTable"); 209 210 #endif // AMD64 211 return true; 212 } 213 214 void os::initialize_thread(Thread* thr) { 215 // Nothing to do. 216 } 217 218 // Atomics and Stub Functions 219 220 typedef jint xchg_func_t (jint, volatile jint*); 221 typedef intptr_t xchg_ptr_func_t (intptr_t, volatile intptr_t*); 222 typedef jint cmpxchg_func_t (jint, volatile jint*, jint); 223 typedef jbyte cmpxchg_byte_func_t (jbyte, volatile jbyte*, jbyte); 224 typedef jlong cmpxchg_long_func_t (jlong, volatile jlong*, jlong); 225 typedef jint add_func_t (jint, volatile jint*); 226 typedef intptr_t add_ptr_func_t (intptr_t, volatile intptr_t*); 227 228 #ifdef AMD64 229 230 jint os::atomic_xchg_bootstrap(jint exchange_value, volatile jint* dest) { 231 // try to use the stub: 232 xchg_func_t* func = CAST_TO_FN_PTR(xchg_func_t*, StubRoutines::atomic_xchg_entry()); 233 234 if (func != NULL) { 235 os::atomic_xchg_func = func; 236 return (*func)(exchange_value, dest); 237 } 238 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 239 240 jint old_value = *dest; 241 *dest = exchange_value; 242 return old_value; 243 } 244 245 intptr_t os::atomic_xchg_ptr_bootstrap(intptr_t exchange_value, volatile intptr_t* dest) { 246 // try to use the stub: 247 xchg_ptr_func_t* func = CAST_TO_FN_PTR(xchg_ptr_func_t*, StubRoutines::atomic_xchg_ptr_entry()); 248 249 if (func != NULL) { 250 os::atomic_xchg_ptr_func = func; 251 return (*func)(exchange_value, dest); 252 } 253 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 254 255 intptr_t old_value = *dest; 256 *dest = exchange_value; 257 return old_value; 258 } 259 260 261 jint os::atomic_cmpxchg_bootstrap(jint exchange_value, volatile jint* dest, jint compare_value) { 262 // try to use the stub: 263 cmpxchg_func_t* func = CAST_TO_FN_PTR(cmpxchg_func_t*, StubRoutines::atomic_cmpxchg_entry()); 264 265 if (func != NULL) { 266 os::atomic_cmpxchg_func = func; 267 return (*func)(exchange_value, dest, compare_value); 268 } 269 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 270 271 jint old_value = *dest; 272 if (old_value == compare_value) 273 *dest = exchange_value; 274 return old_value; 275 } 276 277 jbyte os::atomic_cmpxchg_byte_bootstrap(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { 278 // try to use the stub: 279 cmpxchg_byte_func_t* func = CAST_TO_FN_PTR(cmpxchg_byte_func_t*, StubRoutines::atomic_cmpxchg_byte_entry()); 280 281 if (func != NULL) { 282 os::atomic_cmpxchg_byte_func = func; 283 return (*func)(exchange_value, dest, compare_value); 284 } 285 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 286 287 jbyte old_value = *dest; 288 if (old_value == compare_value) 289 *dest = exchange_value; 290 return old_value; 291 } 292 293 #endif // AMD64 294 295 jlong os::atomic_cmpxchg_long_bootstrap(jlong exchange_value, volatile jlong* dest, jlong compare_value) { 296 // try to use the stub: 297 cmpxchg_long_func_t* func = CAST_TO_FN_PTR(cmpxchg_long_func_t*, StubRoutines::atomic_cmpxchg_long_entry()); 298 299 if (func != NULL) { 300 os::atomic_cmpxchg_long_func = func; 301 return (*func)(exchange_value, dest, compare_value); 302 } 303 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 304 305 jlong old_value = *dest; 306 if (old_value == compare_value) 307 *dest = exchange_value; 308 return old_value; 309 } 310 311 #ifdef AMD64 312 313 jint os::atomic_add_bootstrap(jint add_value, volatile jint* dest) { 314 // try to use the stub: 315 add_func_t* func = CAST_TO_FN_PTR(add_func_t*, StubRoutines::atomic_add_entry()); 316 317 if (func != NULL) { 318 os::atomic_add_func = func; 319 return (*func)(add_value, dest); 320 } 321 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 322 323 return (*dest) += add_value; 324 } 325 326 intptr_t os::atomic_add_ptr_bootstrap(intptr_t add_value, volatile intptr_t* dest) { 327 // try to use the stub: 328 add_ptr_func_t* func = CAST_TO_FN_PTR(add_ptr_func_t*, StubRoutines::atomic_add_ptr_entry()); 329 330 if (func != NULL) { 331 os::atomic_add_ptr_func = func; 332 return (*func)(add_value, dest); 333 } 334 assert(Threads::number_of_threads() == 0, "for bootstrap only"); 335 336 return (*dest) += add_value; 337 } 338 339 xchg_func_t* os::atomic_xchg_func = os::atomic_xchg_bootstrap; 340 xchg_ptr_func_t* os::atomic_xchg_ptr_func = os::atomic_xchg_ptr_bootstrap; 341 cmpxchg_func_t* os::atomic_cmpxchg_func = os::atomic_cmpxchg_bootstrap; 342 cmpxchg_byte_func_t* os::atomic_cmpxchg_byte_func = os::atomic_cmpxchg_byte_bootstrap; 343 add_func_t* os::atomic_add_func = os::atomic_add_bootstrap; 344 add_ptr_func_t* os::atomic_add_ptr_func = os::atomic_add_ptr_bootstrap; 345 346 #endif // AMD64 347 348 cmpxchg_long_func_t* os::atomic_cmpxchg_long_func = os::atomic_cmpxchg_long_bootstrap; 349 350 #ifdef AMD64 351 /* 352 * Windows/x64 does not use stack frames the way expected by Java: 353 * [1] in most cases, there is no frame pointer. All locals are addressed via RSP 354 * [2] in rare cases, when alloca() is used, a frame pointer is used, but this may 355 * not be RBP. 356 * See http://msdn.microsoft.com/en-us/library/ew5tede7.aspx 357 * 358 * So it's not possible to print the native stack using the 359 * while (...) {... fr = os::get_sender_for_C_frame(&fr); } 360 * loop in vmError.cpp. We need to roll our own loop. 361 */ 362 bool os::platform_print_native_stack(outputStream* st, const void* context, 363 char *buf, int buf_size) 364 { 365 CONTEXT ctx; 366 if (context != NULL) { 367 memcpy(&ctx, context, sizeof(ctx)); 368 } else { 369 RtlCaptureContext(&ctx); 370 } 371 372 st->print_cr("Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)"); 373 374 STACKFRAME stk; 375 memset(&stk, 0, sizeof(stk)); 376 stk.AddrStack.Offset = ctx.Rsp; 377 stk.AddrStack.Mode = AddrModeFlat; 378 stk.AddrFrame.Offset = ctx.Rbp; 379 stk.AddrFrame.Mode = AddrModeFlat; 380 stk.AddrPC.Offset = ctx.Rip; 381 stk.AddrPC.Mode = AddrModeFlat; 382 383 int count = 0; 384 address lastpc = 0; 385 while (count++ < StackPrintLimit) { 386 intptr_t* sp = (intptr_t*)stk.AddrStack.Offset; 387 intptr_t* fp = (intptr_t*)stk.AddrFrame.Offset; // NOT necessarily the same as ctx.Rbp! 388 address pc = (address)stk.AddrPC.Offset; 389 390 if (pc != NULL && sp != NULL && fp != NULL) { 391 if (count == 2 && lastpc == pc) { 392 // Skip it -- StackWalk64() may return the same PC 393 // (but different SP) on the first try. 394 } else { 395 // Don't try to create a frame(sp, fp, pc) -- on WinX64, stk.AddrFrame 396 // may not contain what Java expects, and may cause the frame() constructor 397 // to crash. Let's just print out the symbolic address. 398 frame::print_C_frame(st, buf, buf_size, pc); 399 st->cr(); 400 } 401 lastpc = pc; 402 } else { 403 break; 404 } 405 406 PVOID p = WindowsDbgHelp::SymFunctionTableAccess64(GetCurrentProcess(), stk.AddrPC.Offset); 407 if (!p) { 408 // StackWalk64() can't handle this PC. Calling StackWalk64 again may cause crash. 409 break; 410 } 411 412 BOOL result = WindowsDbgHelp::StackWalk64( 413 IMAGE_FILE_MACHINE_AMD64, // __in DWORD MachineType, 414 GetCurrentProcess(), // __in HANDLE hProcess, 415 GetCurrentThread(), // __in HANDLE hThread, 416 &stk, // __inout LP STACKFRAME64 StackFrame, 417 &ctx, // __inout PVOID ContextRecord, 418 NULL, // __in_opt PREAD_PROCESS_MEMORY_ROUTINE64 ReadMemoryRoutine, 419 WindowsDbgHelp::pfnSymFunctionTableAccess64(), 420 // __in_opt PFUNCTION_TABLE_ACCESS_ROUTINE64 FunctionTableAccessRoutine, 421 WindowsDbgHelp::pfnSymGetModuleBase64(), 422 // __in_opt PGET_MODULE_BASE_ROUTINE64 GetModuleBaseRoutine, 423 NULL); // __in_opt PTRANSLATE_ADDRESS_ROUTINE64 TranslateAddress 424 425 if (!result) { 426 break; 427 } 428 } 429 if (count > StackPrintLimit) { 430 st->print_cr("...<more frames>..."); 431 } 432 st->cr(); 433 434 return true; 435 } 436 #endif // AMD64 437 438 ExtendedPC os::fetch_frame_from_context(const void* ucVoid, 439 intptr_t** ret_sp, intptr_t** ret_fp) { 440 441 ExtendedPC epc; 442 CONTEXT* uc = (CONTEXT*)ucVoid; 443 444 if (uc != NULL) { 445 epc = ExtendedPC((address)uc->REG_PC); 446 if (ret_sp) *ret_sp = (intptr_t*)uc->REG_SP; 447 if (ret_fp) *ret_fp = (intptr_t*)uc->REG_FP; 448 } else { 449 // construct empty ExtendedPC for return value checking 450 epc = ExtendedPC(NULL); 451 if (ret_sp) *ret_sp = (intptr_t *)NULL; 452 if (ret_fp) *ret_fp = (intptr_t *)NULL; 453 } 454 455 return epc; 456 } 457 458 frame os::fetch_frame_from_context(const void* ucVoid) { 459 intptr_t* sp; 460 intptr_t* fp; 461 ExtendedPC epc = fetch_frame_from_context(ucVoid, &sp, &fp); 462 return frame(sp, fp, epc.pc()); 463 } 464 465 // VC++ does not save frame pointer on stack in optimized build. It 466 // can be turned off by /Oy-. If we really want to walk C frames, 467 // we can use the StackWalk() API. 468 frame os::get_sender_for_C_frame(frame* fr) { 469 return frame(fr->sender_sp(), fr->link(), fr->sender_pc()); 470 } 471 472 #ifndef AMD64 473 // Returns an estimate of the current stack pointer. Result must be guaranteed 474 // to point into the calling threads stack, and be no lower than the current 475 // stack pointer. 476 address os::current_stack_pointer() { 477 int dummy; 478 address sp = (address)&dummy; 479 return sp; 480 } 481 #else 482 // Returns the current stack pointer. Accurate value needed for 483 // os::verify_stack_alignment(). 484 address os::current_stack_pointer() { 485 typedef address get_sp_func(); 486 get_sp_func* func = CAST_TO_FN_PTR(get_sp_func*, 487 StubRoutines::x86::get_previous_sp_entry()); 488 return (*func)(); 489 } 490 #endif 491 492 493 #ifndef AMD64 494 intptr_t* _get_previous_fp() { 495 intptr_t **frameptr; 496 __asm { 497 mov frameptr, ebp 498 }; 499 return *frameptr; 500 } 501 #endif // !AMD64 502 503 frame os::current_frame() { 504 505 #ifdef AMD64 506 // apparently _asm not supported on windows amd64 507 typedef intptr_t* get_fp_func (); 508 get_fp_func* func = CAST_TO_FN_PTR(get_fp_func*, 509 StubRoutines::x86::get_previous_fp_entry()); 510 if (func == NULL) return frame(); 511 intptr_t* fp = (*func)(); 512 if (fp == NULL) { 513 return frame(); 514 } 515 #else 516 intptr_t* fp = _get_previous_fp(); 517 #endif // AMD64 518 519 frame myframe((intptr_t*)os::current_stack_pointer(), 520 (intptr_t*)fp, 521 CAST_FROM_FN_PTR(address, os::current_frame)); 522 if (os::is_first_C_frame(&myframe)) { 523 // stack is not walkable 524 return frame(); 525 } else { 526 return os::get_sender_for_C_frame(&myframe); 527 } 528 } 529 530 void os::print_context(outputStream *st, const void *context) { 531 if (context == NULL) return; 532 533 const CONTEXT* uc = (const CONTEXT*)context; 534 535 st->print_cr("Registers:"); 536 #ifdef AMD64 537 st->print( "RAX=" INTPTR_FORMAT, uc->Rax); 538 st->print(", RBX=" INTPTR_FORMAT, uc->Rbx); 539 st->print(", RCX=" INTPTR_FORMAT, uc->Rcx); 540 st->print(", RDX=" INTPTR_FORMAT, uc->Rdx); 541 st->cr(); 542 st->print( "RSP=" INTPTR_FORMAT, uc->Rsp); 543 st->print(", RBP=" INTPTR_FORMAT, uc->Rbp); 544 st->print(", RSI=" INTPTR_FORMAT, uc->Rsi); 545 st->print(", RDI=" INTPTR_FORMAT, uc->Rdi); 546 st->cr(); 547 st->print( "R8 =" INTPTR_FORMAT, uc->R8); 548 st->print(", R9 =" INTPTR_FORMAT, uc->R9); 549 st->print(", R10=" INTPTR_FORMAT, uc->R10); 550 st->print(", R11=" INTPTR_FORMAT, uc->R11); 551 st->cr(); 552 st->print( "R12=" INTPTR_FORMAT, uc->R12); 553 st->print(", R13=" INTPTR_FORMAT, uc->R13); 554 st->print(", R14=" INTPTR_FORMAT, uc->R14); 555 st->print(", R15=" INTPTR_FORMAT, uc->R15); 556 st->cr(); 557 st->print( "RIP=" INTPTR_FORMAT, uc->Rip); 558 st->print(", EFLAGS=" INTPTR_FORMAT, uc->EFlags); 559 #else 560 st->print( "EAX=" INTPTR_FORMAT, uc->Eax); 561 st->print(", EBX=" INTPTR_FORMAT, uc->Ebx); 562 st->print(", ECX=" INTPTR_FORMAT, uc->Ecx); 563 st->print(", EDX=" INTPTR_FORMAT, uc->Edx); 564 st->cr(); 565 st->print( "ESP=" INTPTR_FORMAT, uc->Esp); 566 st->print(", EBP=" INTPTR_FORMAT, uc->Ebp); 567 st->print(", ESI=" INTPTR_FORMAT, uc->Esi); 568 st->print(", EDI=" INTPTR_FORMAT, uc->Edi); 569 st->cr(); 570 st->print( "EIP=" INTPTR_FORMAT, uc->Eip); 571 st->print(", EFLAGS=" INTPTR_FORMAT, uc->EFlags); 572 #endif // AMD64 573 st->cr(); 574 st->cr(); 575 576 intptr_t *sp = (intptr_t *)uc->REG_SP; 577 st->print_cr("Top of Stack: (sp=" PTR_FORMAT ")", sp); 578 print_hex_dump(st, (address)sp, (address)(sp + 32), sizeof(intptr_t)); 579 st->cr(); 580 581 // Note: it may be unsafe to inspect memory near pc. For example, pc may 582 // point to garbage if entry point in an nmethod is corrupted. Leave 583 // this at the end, and hope for the best. 584 address pc = (address)uc->REG_PC; 585 st->print_cr("Instructions: (pc=" PTR_FORMAT ")", pc); 586 print_hex_dump(st, pc - 32, pc + 32, sizeof(char)); 587 st->cr(); 588 } 589 590 591 void os::print_register_info(outputStream *st, const void *context) { 592 if (context == NULL) return; 593 594 const CONTEXT* uc = (const CONTEXT*)context; 595 596 st->print_cr("Register to memory mapping:"); 597 st->cr(); 598 599 // this is only for the "general purpose" registers 600 601 #ifdef AMD64 602 st->print("RIP="); print_location(st, uc->Rip); 603 st->print("RAX="); print_location(st, uc->Rax); 604 st->print("RBX="); print_location(st, uc->Rbx); 605 st->print("RCX="); print_location(st, uc->Rcx); 606 st->print("RDX="); print_location(st, uc->Rdx); 607 st->print("RSP="); print_location(st, uc->Rsp); 608 st->print("RBP="); print_location(st, uc->Rbp); 609 st->print("RSI="); print_location(st, uc->Rsi); 610 st->print("RDI="); print_location(st, uc->Rdi); 611 st->print("R8 ="); print_location(st, uc->R8); 612 st->print("R9 ="); print_location(st, uc->R9); 613 st->print("R10="); print_location(st, uc->R10); 614 st->print("R11="); print_location(st, uc->R11); 615 st->print("R12="); print_location(st, uc->R12); 616 st->print("R13="); print_location(st, uc->R13); 617 st->print("R14="); print_location(st, uc->R14); 618 st->print("R15="); print_location(st, uc->R15); 619 #else 620 st->print("EIP="); print_location(st, uc->Eip); 621 st->print("EAX="); print_location(st, uc->Eax); 622 st->print("EBX="); print_location(st, uc->Ebx); 623 st->print("ECX="); print_location(st, uc->Ecx); 624 st->print("EDX="); print_location(st, uc->Edx); 625 st->print("ESP="); print_location(st, uc->Esp); 626 st->print("EBP="); print_location(st, uc->Ebp); 627 st->print("ESI="); print_location(st, uc->Esi); 628 st->print("EDI="); print_location(st, uc->Edi); 629 #endif 630 631 st->cr(); 632 } 633 634 extern "C" int SpinPause () { 635 #ifdef AMD64 636 return 0 ; 637 #else 638 // pause == rep:nop 639 // On systems that don't support pause a rep:nop 640 // is executed as a nop. The rep: prefix is ignored. 641 _asm { 642 pause ; 643 }; 644 return 1 ; 645 #endif // AMD64 646 } 647 648 649 void os::setup_fpu() { 650 #ifndef AMD64 651 int fpu_cntrl_word = StubRoutines::fpu_cntrl_wrd_std(); 652 __asm fldcw fpu_cntrl_word; 653 #endif // !AMD64 654 } 655 656 #ifndef PRODUCT 657 void os::verify_stack_alignment() { 658 #ifdef AMD64 659 // The current_stack_pointer() calls generated get_previous_sp stub routine. 660 // Only enable the assert after the routine becomes available. 661 if (StubRoutines::code1() != NULL) { 662 assert(((intptr_t)os::current_stack_pointer() & (StackAlignmentInBytes-1)) == 0, "incorrect stack alignment"); 663 } 664 #endif 665 } 666 #endif 667 668 int os::extra_bang_size_in_bytes() { 669 // JDK-8050147 requires the full cache line bang for x86. 670 return VM_Version::L1_line_size(); 671 }
__label__pos
0.992913
How to use "join()" to create a string from an array using JavaScript? Q How to use "join()" to create a string from an array using JavaScript? ✍: Guest A "join" concatenates the array elements with a specified seperator between them. <script type="text/javascript"> var days = ["Sunday","Monday","Tuesday","Wednesday", "Thursday","Friday","Saturday"]; document.write("days:"+days.join(",")); </script> This produces days:Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday 2011-07-26, 3116👍, 0💬
__label__pos
0.944036
PageRenderTime 67ms CodeModel.GetById 2ms app.highlight 60ms RepoModel.GetById 1ms app.codeStats 0ms /silversupport/appconfig.py https://bitbucket.org/ianb/silverlining/ Python | 454 lines | 440 code | 8 blank | 6 comment | 15 complexity | 5d3053814ae47da63d67f9cdb614094c MD5 | raw file 1"""Application configuration object""" 2 3import os 4import pwd 5import sys 6import warnings 7from site import addsitedir 8from silversupport.env import is_production 9from silversupport.shell import run 10from silversupport.util import asbool, read_config 11from silversupport.disabledapps import DisabledSite, is_disabled 12 13__all__ = ['AppConfig'] 14 15DEPLOYMENT_LOCATION = "/var/www" 16 17 18class AppConfig(object): 19 """This represents an application's configuration file, and by 20 extension represents the application itself""" 21 22 def __init__(self, config_file, app_name=None, 23 local_config=None): 24 if not os.path.exists(config_file): 25 raise OSError("No config file %s" % config_file) 26 self.config_file = config_file 27 self.config = read_config(config_file) 28 if not is_production(): 29 if self.config['production'].get('version'): 30 warnings.warn('version setting in %s is deprecated' % config_file) 31 if self.config['production'].get('default_hostname'): 32 warnings.warn('default_hostname setting in %s has been renamed to default_location' 33 % config_file) 34 if self.config['production'].get('default_host'): 35 warnings.warn('default_host setting in %s has been renamed to default_location' 36 % config_file) 37 if app_name is None: 38 if is_production(): 39 app_name = os.path.basename(os.path.dirname(config_file)).split('.')[0] 40 else: 41 app_name = self.config['production']['app_name'] 42 self.app_name = app_name 43 self.local_config = local_config 44 45 @classmethod 46 def from_instance_name(cls, instance_name): 47 """Loads an instance given its name; only valid in production""" 48 return cls(os.path.join(DEPLOYMENT_LOCATION, instance_name, 'app.ini')) 49 50 @classmethod 51 def from_location(cls, location): 52 """Loads an instance given its location (hostname[/path])""" 53 import appdata 54 return cls.from_instance_name( 55 appdata.instance_for_location(*appdata.normalize_location(location))) 56 57 @property 58 def platform(self): 59 """The platform of the application. 60 61 Current valid values are ``'python'`` and ``'php'`` 62 """ 63 return self.config['production'].get('platform', 'python') 64 65 @property 66 def runner(self): 67 """The filename of the runner for this application""" 68 filename = self.config['production']['runner'] 69 return os.path.join(os.path.dirname(self.config_file), filename) 70 71 @property 72 def update_fetch(self): 73 """A list (possibly empty) of all URLs to fetch on update""" 74 places = self.config['production'].get('update_fetch') 75 if not places: 76 return [] 77 return self._parse_lines(places) 78 79 @property 80 def default_location(self): 81 """The default location to upload this application to""" 82 return self.config['production'].get('default_location') 83 84 @property 85 def services(self): 86 """A dictionary of configured services (keys=service name, 87 value=any configuration)""" 88 services = {} 89 c = self.config['production'] 90 devel = not is_production() 91 if devel: 92 devel_config = self.load_devel_config() 93 else: 94 devel_config = None 95 for name, config_string in c.items(): 96 if name.startswith('service.'): 97 name = name[len('service.'):] 98 mod = self.load_service_module(name) 99 service = mod.Service(self, config_string, devel=devel, 100 devel_config=devel_config) 101 service.name = name 102 services[name] = service 103 return services 104 105 def load_devel_config(self): 106 from silversupport.develconfig import load_devel_config 107 return load_devel_config(self.app_name) 108 109 @property 110 def service_list(self): 111 return [s for n, s in sorted(self.services.items())] 112 113 @property 114 def php_root(self): 115 """The php_root location (or ``/dev/null`` if none given)""" 116 return os.path.join( 117 self.app_dir, 118 self.config['production'].get('php_root', '/dev/null')) 119 120 @property 121 def writable_root_location(self): 122 """The writable-root location, if it is available. 123 124 If not configured, ``/dev/null`` in production and ``''`` in 125 development 126 """ 127 if 'service.writable_root' in self.config['production']: 128 ## FIXME: do development too 129 return os.path.join('/var/lib/silverlining/writable-roots', self.app_name) 130 else: 131 if is_production(): 132 return '/dev/null' 133 else: 134 return '' 135 136 @property 137 def app_dir(self): 138 """The directory the application lives in""" 139 return os.path.dirname(self.config_file) 140 141 @property 142 def static_dir(self): 143 """The location of static files""" 144 return os.path.join(self.app_dir, 'static') 145 146 @property 147 def instance_name(self): 148 """The name of the instance (APP_NAME.TIMESTAMP)""" 149 return os.path.basename(os.path.dirname(self.config_file)) 150 151 @property 152 def packages(self): 153 """A list of packages that should be installed for this application""" 154 return self._parse_lines(self.config['production'].get('packages')) 155 156 @property 157 def package_install_script(self): 158 """A list of scripts to call to install package stuff like apt 159 repositories""" 160 return self._parse_lines( 161 self.config['production'].get('package_install_script'), 162 relative_to=self.app_dir) 163 164 @property 165 def after_install_script(self): 166 """A list of scripts to call after installing packages""" 167 return self._parse_lines( 168 self.config['production'].get('after_install_script'), 169 relative_to=self.app_dir) 170 171 @property 172 def config_required(self): 173 return asbool(self.config['production'].get('config.required')) 174 175 @property 176 def config_template(self): 177 tmpl = self.config['production'].get('config.template') 178 if not tmpl: 179 return None 180 return os.path.join(self.app_dir, tmpl) 181 182 @property 183 def config_checker(self): 184 obj_name = self.config['production'].get('config.checker') 185 if not obj_name: 186 return None 187 if ':' not in obj_name: 188 raise ValueError('Bad value for config.checker (%r): should be module:obj' % obj_name) 189 mod_name, attrs = obj_name.split(':', 1) 190 __import__(mod_name) 191 mod = sys.modules[mod_name] 192 obj = mod 193 for attr in attrs.split('.'): 194 obj = getattr(obj, attr) 195 return obj 196 197 def check_config(self, config_dir): 198 checker = self.config_checker 199 if not checker: 200 return 201 checker(config_dir) 202 203 @property 204 def config_default(self): 205 dir = self.config['production'].get('config.default') 206 if not dir: 207 return None 208 return os.path.join(self.app_dir, dir) 209 210 def _parse_lines(self, lines, relative_to=None): 211 """Parse a configuration value into a series of lines, 212 ignoring empty and comment lines""" 213 if not lines: 214 return [] 215 lines = [ 216 line.strip() for line in lines.splitlines() 217 if line.strip() and not line.strip().startswith('#')] 218 if relative_to: 219 lines = [os.path.normpath(os.path.join(relative_to, line)) 220 for line in lines] 221 return lines 222 223 def activate_services(self, environ=None): 224 """Activates all the services for this application/configuration. 225 226 Note, this doesn't create databases, this only typically sets 227 environmental variables indicating runtime configuration.""" 228 if environ is None: 229 environ = os.environ 230 for service in self.service_list: 231 environ.update(service.env_setup()) 232 if is_production(): 233 environ['SILVER_VERSION'] = 'silverlining/0.0' 234 if is_production() and pwd.getpwuid(os.getuid())[0] == 'www-data': 235 tmp = environ['TEMP'] = os.path.join('/var/lib/silverlining/tmp/', self.app_name) 236 if not os.path.exists(tmp): 237 os.makedirs(tmp) 238 elif not environ.get('TEMP'): 239 environ['TEMP'] = '/tmp' 240 environ['SILVER_LOGS'] = self.log_dir 241 if not is_production() and not os.path.exists(environ['SILVER_LOGS']): 242 os.makedirs(environ['SILVER_LOGS']) 243 if is_production(): 244 config_dir = os.path.join('/var/lib/silverlining/configs', self.app_name) 245 if os.path.exists(config_dir): 246 environ['SILVER_APP_CONFIG'] = config_dir 247 elif self.config_default: 248 environ['SILVER_APP_CONFIG'] = self.config_default 249 else: 250 if self.local_config: 251 environ['SILVER_APP_CONFIG'] = self.local_config 252 elif self.config_default: 253 environ['SILVER_APP_CONFIG'] = self.config_default 254 elif self.config_required: 255 raise Exception('This application requires configuration and config.devel ' 256 'is not set and no --config was given') 257 return environ 258 259 @property 260 def log_dir(self): 261 if is_production(): 262 return os.path.join('/var/log/silverlining/apps', self.app_name) 263 else: 264 return os.path.join(self.app_dir, 'silver-logs') 265 266 def install_services(self, clear=False): 267 """Installs all the services for this application. 268 269 This is run on deployment""" 270 for service in self.service_list: 271 if clear: 272 service.clear() 273 else: 274 service.install() 275 276 def load_service_module(self, service_name): 277 """Load the service module for the given service name""" 278 __import__('silversupport.service.%s' % service_name) 279 mod = sys.modules['silversupport.service.%s' % service_name] 280 return mod 281 282 def clear_services(self): 283 for service in self.service_list: 284 service.clear(self) 285 286 def backup_services(self, dest_dir): 287 for service in self.service_list: 288 service_dir = os.path.join(dest_dir, service.name) 289 if not os.path.exists(service_dir): 290 os.makedirs(service_dir) 291 service.backup(service_dir) 292 293 def restore_services(self, source_dir): 294 for service in self.service_list: 295 service_dir = os.path.join(source_dir, service.name) 296 service.restore(service_dir) 297 298 def activate_path(self): 299 """Adds any necessary entries to sys.path for this app""" 300 lib_path = os.path.join( 301 self.app_dir, 'lib', 'python%s' % sys.version[:3], 'site-packages') 302 if lib_path not in sys.path and os.path.exists(lib_path): 303 addsitedir(lib_path) 304 sitecustomize = os.path.join( 305 self.app_dir, 'lib', 'python%s' % sys.version[:3], 'sitecustomize.py') 306 if os.path.exists(sitecustomize): 307 ns = {'__file__': sitecustomize, '__name__': 'sitecustomize'} 308 execfile(sitecustomize, ns) 309 310 def get_app_from_runner(self): 311 """Returns the WSGI app that the runner indicates 312 """ 313 assert self.platform == 'python', ( 314 "get_app_from_runner() shouldn't be run on an app with the platform %s" 315 % self.platform) 316 runner = self.runner 317 if '#' in runner: 318 runner, spec = runner.split('#', 1) 319 else: 320 spec = None 321 if runner.endswith('.ini'): 322 try: 323 from paste.deploy import loadapp 324 except ImportError: 325 print >> sys.stderr, ( 326 "To use a .ini runner (%s) you must have PasteDeploy " 327 "installed in your application." % runner) 328 raise 329 from silversupport.secret import get_secret 330 runner = 'config:%s' % runner 331 global_conf = os.environ.copy() 332 global_conf['SECRET'] = get_secret() 333 app = loadapp(runner, name=spec, 334 global_conf=global_conf) 335 elif runner.endswith('.py'): 336 ## FIXME: not sure what name to give it 337 ns = {'__file__': runner, '__name__': 'main_py'} 338 execfile(runner, ns) 339 spec = spec or 'application' 340 if spec in ns: 341 app = ns[spec] 342 else: 343 raise Exception("No application %s defined in %s" 344 % (spec, runner)) 345 else: 346 raise Exception("Unknown kind of runner (%s)" % runner) 347 if is_production() and is_disabled(self.app_name): 348 disabled_appconfig = AppConfig.from_location('disabled') 349 return DisabledSite(app, disabled_appconfig.get_app_from_runner()) 350 else: 351 return app 352 353 def canonical_hostname(self): 354 """Returns the 'canonical' hostname for this application. 355 356 This only applies in production environments.""" 357 from silversupport import appdata 358 fp = open(appdata.APPDATA_MAP) 359 hostnames = [] 360 instance_name = self.instance_name 361 for line in fp: 362 if not line.strip() or line.strip().startswith('#'): 363 continue 364 hostname, path, data = line.split(None, 2) 365 line_instance_name = data.split('|')[0] 366 if line_instance_name == instance_name: 367 if hostname.startswith('.'): 368 hostname = hostname[1:] 369 hostnames.append(hostname) 370 hostnames.sort(key=lambda x: len(x)) 371 if hostnames: 372 return hostnames[0] 373 else: 374 return None 375 376 def write_php_env(self, filename=None): 377 """Writes out a PHP file that loads up all the environmental variables 378 379 This is because we don't run any Python during the actual 380 request cycle for PHP applications. 381 """ 382 assert self.platform == 'php' 383 if filename is None: 384 filename = os.path.join(self.app_dir, 'silver-env-variables.php') 385 fp = open(filename, 'w') 386 fp.write('<?\n') 387 env = {} 388 self.activate_services(env) 389 for name, value in sorted(env.iteritems()): 390 fp.write('$_SERVER[%s] = %r;\n' % (name, value)) 391 fp.write('?>') 392 fp.close() 393 394 def sync(self, host, instance_name): 395 """Synchronize this application (locally) with a remote server 396 at the given host. 397 """ 398 dest_dir = os.path.join(DEPLOYMENT_LOCATION, instance_name) 399 self._run_rsync(host, self.app_dir, dest_dir) 400 401 def sync_config(self, host, config_dir): 402 """Synchronise the given configuration (locally) with a remote server 403 at the given host (for this app/app_name)""" 404 dest_dir = os.path.join('/var/lib/silverlining/configs', self.app_name) 405 self._run_rsync(host, config_dir, dest_dir) 406 407 def _run_rsync(self, host, source, dest): 408 assert not is_production() 409 exclude_from = os.path.join(os.path.dirname(__file__), 'rsync-exclude.txt') 410 if not source.endswith('/'): 411 source += '/' 412 ## FIXME: does it matter if dest ends with /? 413 cmd = ['rsync', 414 '--recursive', 415 '--links', # Copy over symlinks as symlinks 416 '--copy-unsafe-links', # Copy symlinks that are outside of dir as real files 417 '--executability', # Copy +x modes 418 '--times', # Copy timestamp 419 '--rsh=ssh', # Use ssh 420 '--delete', # Delete files thta aren't in the source dir 421 '--compress', 422 #'--skip-compress=.zip,.egg', # Skip some already-compressed files 423 '--exclude-from=%s' % exclude_from, 424 '--progress', # I don't think this does anything given --quiet 425 '--quiet', 426 source, 427 os.path.join('%s:%s' % (host, dest)), 428 ] 429 run(cmd) 430 431 def check_service_setup(self, logger): 432 import traceback 433 for service in self.service_list: 434 try: 435 warning = service.check_setup() 436 except Exception, e: 437 logger.notify( 438 'Error with service %s:' % service.name) 439 logger.indent += 2 440 try: 441 logger.info( 442 traceback.format_exc()) 443 logger.notify('%s: %s' % 444 (e.__class__.__name__, str(e))) 445 finally: 446 logger.indent -= 2 447 else: 448 if warning: 449 logger.notify('Warning with service %s:' % service.name) 450 logger.indent += 2 451 try: 452 logger.notify(warning) 453 finally: 454 logger.indent -= 2
__label__pos
0.975314
Staff and system text Updated 2 years ago These are old instructions for MuseScore 2 For MuseScore 3 users, see Checking for updates For general-purpose text, use Staff Text or System Text. The difference between these two types of text is whether you want it to apply to a single staff, or the whole system. This matters when extracting parts: staff text will only appear in a part that contains the specific instrument the text is attached to, while system text will appear in all parts. Additionally, if you choose to hide empty staves, any staff text belonging to an empty staff will also be hidden. System text is never hidden by the "hide empty staves" feature. Staff text Staff text is general purpose text associated with a particular staff at a particular location in the score. To create staff text, choose a location by selecting a note or rest and then use the menu option AddTextStaff Text, or use the shortcut Ctrl+T (Mac: +T). A small text box appears and you can immediately start typing. You can exit the text box at any time (even without typing anything) by pressing Esc. Staff text can, for example, be used to apply indications such as "Solo" or "Pizzicato" to one staff in a score. Depending on what the instructions of the staff text are, MIDI playback of that staff at the text location can be altered to match the instructions by right-clicking on the staff text and selecting Staff Text Properties…. See Mid-staff sound change. System text System text is used when you wish to apply text indications to a whole system rather than just to one staff line. This makes a difference when extracting parts, or if you choose to hide empty staves. To create system text, chose a location by selecting a note or rest and then use the menu option AddTextSystem Text, or use the shortcut Ctrl+Shift+T (Mac: +Shift+T). A small text box appears and you can immediately start typing. You can exit the text box at any time (even without typing anything) by pressing Esc. See also Do you still have an unanswered question? Please log in first to post your question.
__label__pos
0.94593
(Redirected from Vector Editor) The Vector Editor The Bitmap Editor The Paint Editor is Scratch's built-in image editor. Many Scratchers create their own sprites and backdrops using it. These images can be used in many ways, each having its own impact on its project. This is one of the features that makes Scratch different from many other programming tools, because many others do not provide a built-in image creator. Costume Pane The costume pane. Main article: Costume Pane The left-most part of the paint editor is the costume pane. It consists of the buttons for creating new costumes as well as icons for each costume and the costume number. In the costume pane, each costume of the selected sprite is listed. To edit the different costumes in the paint editor, simply click on the desired costume from the pane. The pane can scroll if there are too many costumes to vertically fit its size. Creating New Costumes (or Backdrops) At the bottom of the costume pane there is a blue cat icon. Hover over it and four options will pop up. You can choose a sprite from the library, draw your own, upload an image file from your computer, take a picture, or choose a "surprise costume", which picks a random costume from the costumes library. Switching Among Costumes/Backdrops In the costume pane, the currently selected costume has a blue box around it and you can see an "x" button on it. The canvas of the paint editor only displays the currently selected costume. To access the different costumes of a sprite, click on the thumbnail of the desired costume from the costume pane. Then, the canvas will display the newly selected costume and allow for its modification. The three sliders to change the selected color Paint Editor Conversion At the bottom-left of the paint editor is the option to switch to the other (bitmap or vector) editor. When converting the images to the new editor, or new format, the program has to manipulate them. Converting Bitmap to Vector The option to convert to the vector editor. When converting a bitmap image to the vector editor, the entire bitmap image becomes one united, single object in the vector editor. It contains its bitmap appearance, but the difference is when resizing the converted bitmap image. The vector editor resizes all objects differently than the bitmap editor, often more accurately to the original display. Any shapes converted from bitmap to vector do not transform into a vector shape or obtain splines; the program reads it as before. Converting Vector to Bitmap The option to convert to the bitmap editor. When converting a vector image to bitmap, any objects that extend off the canvas will no longer be included; only a 480x360 resolution image can be created at maximum in the bitmap editor. Unlike from bitmap to vector, vector graphics lose the properties that are unique to them. Specifically, anti-aliasing is removed. Therefore, a smooth vector object may become very jagged and pixelated in appearance. Note Caution: After converting from vector to bitmap, converting back to vector will not retain the vector properties as the objects had before; however, undoing will bring back the properties. Common Options These are all located above the costume. All of these are common to both the vector and bitmap editors. Changing Colors The Scratch paint editor has a color dropdown that has three sliders that can be used to select colors: color, saturation, and brightness. It is found in the middle-left side of the editor. Two Colors Above the three sliders, there are four options. These allow you to blend an area between the two selected colors. If you click one of the options other than the solid color option, two selected colors are shown. Click on each one to edit it seperately. If you fill in an area using one of these options, it is called a gradient. Selecting "Swap" between the two colors switches their order. Color Changing the color slider changes the hue of the color (for example, from red to blue). This tool is the most used, as it has the largest difference between colors. Saturation The saturation of a color is how light it is: 100 saturation is the color selected, 50 saturation is a lighter color, and 0 saturation is totally white. Brightness This option changes how light or dark the color is. The left is completely black, while the right is the selected color. Picking Up a Color At the bottom right, there is an icon that allows you to pick up a color from the costume, sometimes called the "eyedropper". It will magnify the the area the mouse-pointer is near. On the outside is the color you are hovering over. Click to select it. Changing Pen Size In the middle of the paint editor is the pen size bar. It is marked by a paintbrush icon. There is an input to select the pen size. Type in the size or use the arrows on the side to change it. The higher the number is, the thicker the line will be. Naming Costumes To name a costume, just click on the text bar at the top-left of the paint editor and enter the new name. The name of the costume is important for organization and also in programming the project at times. It is best not to name costumes as just numbers without any other characters because it causes confusion with the costume blocks; this is due to each costume having both a name and numerical order value in the costumes pane, so switching to costume "2" could mess up if the third costume was named "2". Copying and Pasting To the right of the outlines there are two options: copy and paste. The copy option copies the region that is selected, while the paste puts it somewhere else. This is especially useful when one needs to duplicate an item. You can use ctrl + c to copy as well and ctrl + v to paste. Undo and Redo To the right of the costume name are two buttons called undo and redo. These buttons allow you to make it as if the last action never happened and repeat the undone action respectively. The redo button cannot be used unless the undo button has been used. If no actions have been taken, neither button can be used. If you change from the costume editor to something else, your actions will be permanent and can only be undo manually. The keyboard shortcut for undo is ctrl + z. Vector-only Options These options are unique to the vector editor and are not found in the bitmap editor. Precise Object Movement In the vector editor, when an object is selected, the arrow keys can be used to move it precisely one pixel. In the bitmap editor, a section grabbed with the select tool can also be moved with the arrow keys. This can be useful for making exact, precise measurements or aligning objects in an organized, structural pattern. Outlines To the right of the color dropdown, there is a dropdown that changes the outlines of objects. It also has the three sliders; however, you cannot blend two colors in currently. To the right of that dropdown is an input to select the thickness of the outline. You can type in a number or use the arrows to change it. Layers The layering of Giga in front of Nano. The paint editor also includes the feature of layering objects in the vector editor. Layering objects is placing them in front or behind one another. At the top-right will be four buttons which allow you to layer objects in front or behind another. On the left side, Forward and Backward move objects one layer at a time, while Front and Back move them to the very front or back, in front of or behind all other objects in that costume. Note Note: You can only layer objects, not individual splines. Grouping When there are many objects that you need to move at once, it can be useful to group them. Use the select tool to select a specific region of the costume, and click group. These objects are now one object and can be moved together. The ungroup button does the opposite; one grouped item selected can be broken up into smaller portions. Note Note: When you group objects, it does not take away the splines of the objects; they can still be modified Horizontal and Vertical Flipping From left to right, top to bottom: No flip, horizontal flip, vertical flip, horizontal and vertical flip. When you select an object, there is the option to horizontally and/or vertically flip it. Flipping an object reverses it. In advanced terms, when an object is flipped, it takes each pixel and sets it in the opposite location of a center with the origin of (0,0). At the top of the editor, there are two tool that look like two arrows pointing to a dotted line. The one on the left flips the selected object horizontally, and the one on the right vertically. Curved/Pointed When using the reshape tool, there are two options to the right of the outlines that makes the splines either curved, like an oval, or pointed, like a rectangle. This is useful when creating shapes with both curved and pointed edges. If more than one point is selected (hold the shift key and click more than one), all selected will apply the specified change. Vector Tools Vector tools, unlike bitmap, create splines instead of an array of pixels to store costumes and backdrops. Many of the tools, though, work in a very similar style. They are found to the left of the costume. Select The first tool, the mouse-pointer tool, is used for modifying the location of an object, stretching or compressing it, or rotating it. When an object is selected with this tool by a mouse-click, a blue box will appear around it and the object can be moved by grabbing the center and moving the mouse. Rotation is accomplished by dragging the two small arrows below the box. The object rotates in relation from the object's center to the rotation circle's position. Lastly, objects can be stretched and compressed with the measurement boxes that appear on the outer edge of the selection box. Note Note: to select multiple, ungrouped objects as a whole, click the mouse in a blank area and a box will drag around the desired area; then all objects the box touches will be selected. Reshape Tool An image, edited with the reshape tool. This tool is used for bending or changing the shapes of a spline in the editor by grabbing the points with the mouse and moving them around. In Scratch 2.0 and earlier, there was a "Smooth" button that removed some splines. • Click on a spline without moving it to remove it. • Hold shift while selecting splines to select multiple. • If you have a closed shape that is filled in, if you break the shape (open the polygon) the fill color will go away. • You can remove the spline you are editing with the delete key or backspace key. • Click on a line or border of a shape where there is no point to add one. • When a spline is clicked, two options will show above on the tab: curved and pointed. These change the spline to be either curved or pointed. • When a spline is clicked, a blue line with diamonds at each end will show up. Pull this to affect the line on either side of the spline. Spline Tool (Drawing) The spline tool acts as the paint brush tool for the vector editor. However, instead of creating pen marks on the canvas like in bitmap, it creates splines, mathematically calculated and stored shapes, which will not become pixelated and can be modified using the reshape tool. To draw, hold down the mouse on the canvas. Drawing from an existing point on an existing spline will automatically match the size and color to the existing spline and also connect the splines together. Eraser The eraser pushes splines within its range outside of the circled area. This tool was first introduced in Scratch 3.0. It is different than the bitmap eraser tool because it leaves the same outline as the spline it erased had. Paint Bucket The paint bucket tool can often be confusing because instead of filling in a region it only fills in vector objects. For example, you cannot fill in the blank background of a costume or backdrop with the paint bucket in vector because it is not an object. When the paint bucket is selected, click on any object's interior or outline to manipulate the color to the currently selected one. Note Note: One cannot fill in any closed area with the vector paint bucket because it must be an object. When filling in backgrounds, it may be best to perform the action in the bitmap editor first and then switch to vector. In bitmap, any region, closed or unclosed to the background, or the borders themselves, may be filled, but in the vector editor the splines must be connected into a uniform, closed-in object. The paint bucket cannot be used to fill in outlines, as it could in previous versions. Text Tool The fonts in the vector editor. The text tool is used to type characters onto the canvas, which can be resized after completion. To type text, click anywhere on the canvas; when a cursor appears, begin typing. One can set the horizontal and vertical boundaries of text by moving the resolution boxes that appear around the text box. Click on a blank area to exit the text editor once finished. To resize the text afterward, select it with the mouse-pointer and drag the measurement boxes to have the text reach the desired size. Text can be edited after exiting the text editor by clicking on existing text with the text tool. The Text Tool has nine available fonts available in the bottom-left of the paint editor to select from: Sans Serif, Serif (Times), Handwriting, Marker, Curly, Pixel, 中文 (Chinese), 日本語 (Japanese), and 한국어 (Korean). More fonts can be obtained using Google Drawings by making a drawing, typing the desired text in, choosing the font, downloading the image, then importing the image into the desired sprite in Scratch and resizing or moving the text to fit one's need. Note Note: Due to a bug, Sans Serif fonts may display as a large Serif font. Edit the font and save the project to fix this. Sometimes, that doesn't work, so just use the Chinese, Japanese, or Korean fonts instead, as they all look very similar to the Sans Serif font. Line Tool The line tool is used for drawing straight lines in the vector editor. A line consists of two points of the spline: one in the beginning and one at the end. To draw a line, click and hold the mouse, and release to draw the line from the starting point of the mouse-click to the release point. Note Note: to draw a curved line, you must first draw a straight line, select the reshape tool, and shift-click anywhere on the line to create a new point which forms a curve on the line. Oval Tool (Circle) The circle tool is used to draw ovals or perfect circles. This can be done by clicking and holding the mouse on the canvas. Then, an oval will form in relation to the mouse's starting and ending coordinates. To draw a perfect circle, hold the shift key while drawing with the oval tool. Rectangle Tool (Square) The rectangle tool is used to create a geometric rectangle (4-sided with right-angle corners). When the tool is selected, the rectangle can be drawn clicking and holding down the mouse-pointer, then releasing. The rectangle has four points on it, each at a corner. To draw a perfect square, hold the shift key while drawing with the rectangle tool. Bitmap Tools The bitmap editor's tools are similar to the vector editor's, but instead use a grid of pixels on a region instead of spline creation. All the bitmap tool icons are pixelated. Due to not using splines, these images are pixelated and do not have as many tools available. Paint Brush The paint brush is a tool simply for drawing wherever the mouse-pointer is clicked. The color and size modify the display of the paint brush's pen marks. To change the brush's size, simply go to the slider at the bottom left-hand corner and change it to your desired size. Line Tool The line tool is used for drawing straight lines. Prior to Scratch 3.0, holding shift would make the line perfectly horizontal or vertical. There was a bug with the shift feature where the line actually ended up where the mouse-pointer was, not perfectly straight. Currently, holding shift allows you to make lines at perfect benchmark angles (such as 90 degrees, -180 degrees, or -45 degrees). Oval Tool (Circle) The oval tool, commonly known as the "circle" or "ellipse" tool, is used for creating ovals of any shape and size. Just like the rectangle tool, when the oval tool is selected there will be two buttons in the bottom-left of the paint editor. The first is used to create an oval with a hollow center, and the adjacent one is used to create a solid, filled-in oval. The oval tool can also create perfect circles by holding down the shift key while drawing. Note Note: The oval tool icon is actually an octogon, because of how the image was pixelated. Rectangle Tool (Square) The rectangle tool, commonly known as the "square tool", is used for drawing rectangles (4-sided, geometric shape with all right angles). These rectangles can either be solid or transparent in the center. When the tool is selected, to the right of the color selection will be two buttons, one consisting of an outlined rectangle, and one consisting of a filled one. By default, the filled one is selected. This means that any drawn rectangle will be one solid mass. If the button consisting of the outline rectangle is selected, the shape drawn will have an open, see-through center. Text Tool The nine usable fonts The text tool is used for typing text into a costume or backdrop. When the tool is selected, click anywhere on the canvas for a cursor to appear. Then, you can type in a desired text. To modify the size of the text, you must drag the small size buttons (tiny squares) to the desired measurement. You can also, with these buttons, stretch and compress your text, but this can only be done after finishing typing. However, you can only resize the text once. When typing, the text appears just like vector, but when the text is deselected, the words become pixelated. There are nine different fonts available to the right of the color selectors. In Scratch 1.4 and before, fonts were loaded from the computer, which meant that the user could choose to use any font they please. However, this was removed in Scratch 2.0, as it included an online editor that could not load the fonts from the computer. Paint Bucket The paint bucket is used to fill in any closed region of a consistent color with one solid color. This can be accomplished by clicking in the desired area on the canvas. The color spreads everywhere that has the same color on it. Note Caution: If your shapes have small holes in them, the color will spread out of the shape. Check for any holes before using this tool. Eraser The eraser tool is used to remove (or erase) a clicked area on the canvas. The colors that are erased are replaced with no color, meaning that area is see-through. Unlike the vector eraser, it does not leave an outline. Select The select tool (formerly the screen region grabber) is used to grab an area on the canvas and relocate, stretch and compress, or modify it in some way. This can be done by clicking and dragging around the desired area. Then, a blue box will appear around that area. If you grab the center of the dotted box with the mouse, you can move the section around. Also, you can stretch and compress it with the measurement boxes that appear around the outside. Rotate the section with the blue arrows located below the selected region. The vector editor tools Helpful Tips and Hints • The resolutions (width by height in pixels) of your costumes and backdrops are shown underneath the costume names in the costume pane. • The scroll bar can be used to quickly pan around an image in the paint editor. It typically will pan vertically, but if the shift key is held down, the scroll bar will pan about the image horizontally. • Costumes and backdrops can be renamed at the top by typing in the text box. • To zoom in or out, click on the magnifying glass tools in the bottom-right. The one with the "+" zooms in and the one with "–" zooms out. The button between them sets the zoom to 100%. Zooming in when drawing can help to create smoother lines than when zoomed out. • Holding the Ctrl key while scrolling is a shortcut for zooming. • If you make a mistake, you can click the undo or redo button at the top. • Ctrl+Z or ⌘ Cmd+Z is a shortcut for undoing actions. • To delete a costume, click on the "x" located at the top-right of the icon, or right-click or shift-click on the costume's icon in the costume pane and select "delete". • To duplicate a costume, right-click the desired costume thumbnail. From the pop-up menu, select "Duplicate". • To place costumes and backdrops in a desired order, drag their icons to another location in the costume pane. • To fill in the background of a costume or backdrop in the vector editor, create a large rectangle around the borders of the editor, and then fill the space and edges in with the desired color using the paint bucket tool. • Objects and pixels can be placed or drawn outside of the canvas in vector mode, although they may be cut off, as in backdrops, on the stage. • To break the stage edges in Scratch 2.0, meaning for a sprite to move freely past the borders, you would create four vector shapes, fill them transparently, and drag them to each outermost edge of the canvas. Doing so would not physically allow the sprite to move at an infinite location off the stage but far enough to remain unseen, portraying the same sense. Example Uses • Simply to draw something • Attach one image to another image • Resize an image • Censor content, such as the user's face History See also: Scratch Update History The following table provides a list of publicly announced updates to the paint editor mainly on the forums. There are many other edits unspecified directly by the Scratch Team. Year Month Day Update(s) 2019 January 2 Paint editor updated to Scratch 3.0. The information on this page is about this version. All previous information can be found at Paint Editor (2.0). See Also References
__label__pos
0.636637
Is It Worth Upgrading To I9 9900k? What is better than i9 9900k? Clock speeds on the i9-9900K start off a bit lower with a base speed of 3.6 GHz (vs. 3.8 GHz), but Intel’s turbo functionality brings it to 5.0 GHz for two cores, which are well past AMD’s peak turbo spec….Features.Intel Core i9-9900KAMD Ryzen 9 3900XCores / Threads8 / 1612 / 24Base Frequency (GHz)3.63.810 more rows•Jul 12, 2019. Is an i9 overkill? Yes the Core i9 will be an overkill for most of the users. Processors like Pentium and core i3 or even i5 with good clock speeds and RAM will be enough for doing daily office work. … In that case you won’t generally need to look beyond core i7. Is Ryzen 7 better than i7? While its true that Intel’s Core i7-9700K offers more raw performance than AMD’s Ryzen 7 2700X, AMD’s chip offers the better value overall because of its dramatically lower price. … For most consumers, the extra performance you get with the i7-9700K isn’t worth the money. Is Intel Core i9 9900k good for gaming? Intel says its 8-core, 9th-gen Core i9-9900K is the “world’s best gaming CPU,” but it’s wrong. It’s easily the world’s fastest gaming CPU. Gordon and Adam sit down to talk about the Intel Core i9-9900K review – and of course Gordon brought some benchmarking charts that show performance versus the 8700K and Ryzen 2700X! Who is AMD owned by? As of March 30, 2020, the top three institutional holders of AMD are The Vanguard Group with 112 million shares, Blackrock Inc. with 82.6 million shares, and Fidelity Management and Research with 50 million shares. Is the 3900x overkill? As previously mentioned, the R9 3900X is overkill for purely gaming as you’re not going to see any real difference between Ryzen 5 and Ryzen 9 in terms of gaming performance. … If you would like to be playing at very high framerates while streaming, the 3900x is a perfectly reasonable choice. Is AMD Ryzen 9 better than i9? The Core i9-9900K has a base frequency of 3.6GHz and a maximum boost frequency of 5GHz, compared with the Ryzen 9 3900X’s base 3.8GHz speed and 4.6GHz maximum boost. … Here, at least on pure specs, the Ryzen 9 3900X wins hands down. It comes with support for 3,200MHz DDR4 memory and a whopping 70MB L3 cache on the die. Why are AMD CPUs so cheap? AMD is able to offer lower prices by thinking that even though the margins are lower, the amount of CPUs sold should make up for the difference – at least somewhat. … AMD is cheaper because of brand name (recognition) in the CPU department, and cheaper in the GPU department because of a worse product. Should I buy an i9 9900k? If you love tinkering and overclocking to squeeze more value out of your CPU, the i9-9900K is probably not for you. That doesn’t mean you shouldn’t buy it. Just don’t get it with the hopes of getting next-level performance through your sick OC skills. Is the i9 9900k future proof? In the here and now, the Core i9 9900K is indeed a proper halo product from Intel – astonishingly fast, highly capable and the most future-proof mainstream CPU on the market. The value proposition is questionable, but for those that want the best of the best from mainstream hardware, it’s a stunning product. Should I buy 9900k or 3900x? If you use your PC for literally anything other than pure gaming: 3900X. If you care about having 5 FPS more at most in 1440p: 9900K. It’s up to 15% more performance at 1080p and as GPUs become more powerful, the CPU is more likely to become the bottleneck at 1440p and 4K. Is Ryzen 7 better than i9? Amazingly, the Core i9-9900K matches Intel’s Core i9-7900X at stock speed – a CPU that costs hundreds of dollars more and has two more cores. It’s also 17% quicker than AMD’s Ryzen 7 2700X a little quicker than the Threadripper 1920X too. How much RAM do I need for i9 9900k? If you are exclusively gaming and have no plans to make or render videos 16gb of ram is the sweet spot. On Intel CPUs ram provides very minimal performance gains so there isn’t much reason to go for ram speeds above 3000 or 3200. 16 GB 3200mhz is the best bang for your buck. Is 3900x good for gaming? Since the 9900K is Intel’s “best gaming CPU,” this makes the Ryzen 9 3900X one of the best AMD processors for gaming. … This unlocks potential for not only solid gaming performance, but also intense workloads like video editing and streaming. Why is AMD bad? The Processors Are Power-Hungry However, at load, the processor uses about 65 watts more than an Intel Core i7-2600K, which is over 40% more. Because of this, AMD processor-based systems actually lose much of their value equation over time, as they cost slightly more to run. Is i9 9900k better than i7 9700k? A 1-5% speed boost in gaming performance for the i9-9900k, even though the i9-9900k costs roughly 25% more than the i7-9700k! … The near-identical gaming performance and lack of hyper-threading makes the i7-9700k the better-value choice for a gamer. Is AMD better than Intel? Overall, both companies produce processors within striking distance of one another on nearly every front — price, power, and performance. Intel chips tend to offer better performance per core, but AMD compensates with more cores at a given price and better onboard graphics. Should I buy i9 or i7? In Intel’s simple terms, the Core i9 is faster than the Core i7, which in turn is faster than the Core i5. But “faster” isn’t always “better” for you. A lot of people don’t need that extra horsepower, which also affects battery life in laptops. Is i7 9700k future proof? There isn’t any processor that is future proof. … i7 9700K is a good CPU. But I believe in coming years more games, programs will make use of more threads, and its 8 threads will make you to want to replace it sooner. Sooner than processor with the same price tag of AMD’s. Why is i9 so expensive? In the end a microchip is only some rock with electricity flowing through it. … Sometimes they fail (so called yield percentage), which makes high end chips like i9 more expensive, because there is less of them, the partially-failed chips are then used in i7, i5, and i3’s, because there is no point in wasting that works. Is AMD Threadripper better than Intel i9? However, AMD provides higher base clock frequencies and more CPU memory cache than Intel® does. The Core i9 outperforms Threadripper in boost speeds and power. PCs with i9 processor technology will run cooler, quieter, and cheaper in the long run.
__label__pos
0.963888
Video Tutorial – Useful Relationships Between Any Pair of h(t), f(t) and S(t) I first started my video tutorial series on survival analysis by defining the hazard function.  I then explained how this definition leads to the elegant relationship of h(t) = f(t) \div S(t). In my new video, I derive 6 useful mathematical relationships that exist between any 2 of the 3 quantities in the above equation.  Each relationship allows one quantity to be written as a function of the other. I am excited to continue adding to my Youtube channel‘s collection of video tutorials.  Please stay tuned for more! Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function In an earlier video, I introduced the definition of the hazard function and broke it down into its mathematical components.  Recall that the definition of the hazard function for events defined on a continuous time scale is h(t) = \lim_{\Delta t \rightarrow 0} [P(t < X \leq t + \Delta t \ | \ X > t) \ \div \ \Delta t]. Did you know that the hazard function can be expressed as the probability density function (PDF) divided by the survival function? h(t) = f(t) \div S(t) In my new Youtube video, I prove how this relationship can be obtained from the definition of the hazard function!  I am very excited to post this second video in my new Youtube channel. Video Tutorial: Breaking Down the Definition of the Hazard Function The hazard function is a fundamental quantity in survival analysis.  For an event occurring at some time on a continuous time scale, the hazard function, h(t), for that event is defined as h(t) = \lim_{\Delta t \rightarrow 0} [P(t < X \leq t + \Delta t \ | \ X > t) \ \div \ \Delta t], where • t is the time, • X is the time of the occurrence of the event. However, what does this actually mean?  In this Youtube video, I break down the mathematics of this definition into its individual components and explain the intuition behind each component. I am very excited about the release of this first video in my new Youtube channel!  This is yet another mode of expansion of The Chemical Statistician since the beginning of 2014.  As always, your comments are most appreciated!  
__label__pos
0.904365
PageRenderTime 23ms CodeModel.GetById 19ms app.highlight 1ms RepoModel.GetById 1ms app.codeStats 0ms /specs/runtime/switch.ds http://github.com/wilkie/djehuty Unknown | 122 lines | 104 code | 18 blank | 0 comment | 0 complexity | d93b72cd7716b0723c6cf8e2dd4de07c MD5 | raw file 1module specs.runtime.switch; 2 3import core.util; 4 5import math.random; 6 7describe runtime() { 8 describe _d_switch_string { 9 it should_handle_empty_switch_statements { 10 string foo = "hello"; 11 12 switch(foo) { 13 default: 14 break; 15 } 16 } 17 18 it should_handle_one_case { 19 string foo = "hello"; 20 21 switch(foo) { 22 case "hello": 23 should(true); 24 break; 25 default: 26 should(false); 27 break; 28 } 29 } 30 31 it should_handle_three_cases { 32 string foo = "hello"; 33 34 switch(foo) { 35 case "abc": 36 case "zzt": 37 should(false); 38 break; 39 case "hello": 40 should(true); 41 break; 42 default: 43 should(false); 44 break; 45 } 46 } 47 48 template StringList(int idx) { 49 static if (idx == 50) { 50 const char[] StringList = `"` ~ IntToStr!(idx, 16) ~ `" 51 `; 52 } 53 else { 54 const char[] StringList = `"` ~ IntToStr!(idx, 16) ~ `", 55 ` ~ StringList!(idx+1); 56 } 57 } 58 59 template StringArray() { 60 const char[] StringArray = ` 61 string[] foo = [ 62 ` ~ StringList!(0) ~ ` 63 ];`; 64 } 65 66 template CaseList(int idx) { 67 static if (idx == 50) { 68 const char[] CaseList = `case "` ~ IntToStr!(idx, 16) ~ `": 69 picked = "` ~ IntToStr!(idx, 16) ~ `"; 70 break;`; 71 } 72 else { 73 const char[] CaseList = `case "` ~ IntToStr!(idx, 16) ~ `": 74 picked = "` ~ IntToStr!(idx, 16) ~ `"; 75 break; 76 ` ~ CaseList!(idx+1); 77 } 78 } 79 80 template SwitchList() { 81 const char[] SwitchList = ` 82 switch(foo[idx]) { 83 ` ~ CaseList!(0) ~ ` 84 default: 85 picked = ""; 86 break; 87 }`; 88 } 89 90 it should_handle_many_cases { 91 92 mixin(StringArray!()); 93 94 auto r = new Random(); 95 for (size_t i=0; i<50; i++) { 96 size_t idx = cast(size_t)r.nextLong(50); 97 string picked; 98 99 mixin(SwitchList!()); 100 101 should(picked == foo[idx]); 102 } 103 } 104 105 it should_handle_empty_string { 106 switch("") { 107 case "": 108 should(true); 109 break; 110 case "abc": 111 case "zsdf": 112 case "asdf": 113 case "afsfdfas": 114 should(false); 115 break; 116 default: 117 should(false); 118 break; 119 } 120 } 121 } 122}
__label__pos
0.999791
Techniques for WCAG 2.0 Skip to Content (Press Enter) - F39: Failure of Success Criterion 1.1.1 due to providing a text alternative that is not null. (e.g., alt="spacer" or alt="image") for images that should be ignored by assistive technology Applicability Applies to HTML and XHTML. This failure relates to: Description This describes a failure condition for text alternatives for images that should be ignored by AT. If there is no alt attribute at all assistive technologies are not able to ignore the non-text content. The alt attribute must be proved and have a null value (i.e., alt="" or alt=" ") to avoid a failure of this Success Criterion. Note: Although alt=" " is valid, alt="" is recommended. Examples Resources No resources available for this technique. (none currently listed) Tests Procedure 1. Identify any img and applet elements that are used for purely decorative content; 2. Check that the alt attribute for these elements exists. 3. Check that the alt attribute for these elements is null. Expected Results
__label__pos
0.610865
{{announcement.body}} {{announcement.title}} How to Synchronize Blocks by the Value of the Object in Java DZone 's Guide to How to Synchronize Blocks by the Value of the Object in Java Ever had trouble synchronizing blocks of code? · Java Zone · Free Resource The Problem Sometimes, we need to synchronize blocks of code by the value of a variable. In order to understand this problem, we will consider a simple banking application that makes the following operations on each transfer of money by the client: 1. Evaluates the amount of the cash back by this transfer from external web service (CashBackService) 2. Performs a money transfer in the database (AccountService) 3. Updates the data in the cash-back evaluation system (CashBackService) The money transfer operation looks like this: public void withdrawMoney(UUID userId, int amountOfMoney) { synchronized (userId) { Result result = externalCashBackService.evaluateCashBack(userId, amountOfMoney); accountService.transfer(userId, amountOfMoney + result.getCashBackAmount()); externalCashBackService.cashBackComplete(userId, result.getCashBackAmount()); } } The base components of the application are shown in the following diagram: component diagram of the application I tried to make an example as clear as possible. The transfer of money in the payment service depends on the other two services: • The first one is a CashBackService that interacts with another (external) web application under the REST protocol. And, in order to calculate the actual cash-back, we need to synchronize transactions with this application. This is because the next amount of the cash-back may depend on the total amount of user payments. • The second is an AccountService that communicates with an internal database and stores data related to accounts of its users. In this service, we can use a JPA transaction to make some actions as atomic operations in the database. In a real life, I’d strongly recommend refactoring such systems to avoid this situation, if possible. But in our example, imagine that we have no choice. Let’s look at the draft code of this application: @Service public class PaymentService { @Autowired private ExternalCashBackService externalCashBackService; @Autowired private AccountService accountService; public void withdrawMoney(UUID userId, int amountOfMoney) { synchronized (userId) { Result result = externalCashBackService.evaluateCashBack(userId, amountOfMoney); accountService.transfer(userId, amountOfMoney + result.getCashBackAmount()); externalCashBackService.cashBackComplete(userId, result.getCashBackAmount()); } } } @Service public class ExternalCashBackService { @Autowired private RestTemplate restTemplate; public Result evaluateCashBack(UUID userId, int amountOfMoney) { return sendRestRequest("evaluate", userId, amountOfMoney); } public Result cashBackComplete(UUID userId, int cashBackAmount) { return sendRestRequest("complete", userId, cashBackAmount); } private Result sendRestRequest(String action, UUID userId, int value) { URI externalCashBackSystemUrl = URI.create("http://cash-back-system.org/api/" + action); HttpHeaders headers = new HttpHeaders(); headers.set("Accept", MediaType.APPLICATION_JSON_VALUE); RequestDto requestDto = new RequestDto(userId, value); HttpEntity<?> request = new HttpEntity<>(requestDto, headers); ResponseDto responseDto = restTemplate.exchange(externalCashBackSystemUrl, HttpMethod.GET, request, ResponseDto.class) .getBody(); return new Result(responseDto.getStatus(), responseDto.getValue()); } } @Service public class AccountService { @Autowired private AccountRepository accountRepository; @Transactional(isolation = REPEATABLE_READ) public void transfer(UUID userId, int amountOfMoney) { Account account = accountRepository.getOne(userId); account.setBalance(account.getBalance() - amountOfMoney); accountRepository.save(account); } } However, you can have several objects with the same value (userId in this example), but the synchronization works on the instance of the object and not on its value. The code below does not work well. Because it’s incorrectly synchronized; the static factory method UUID.fromString(..) makes a new instance of the UUID class on each call, even if you pass an equal string argument. So, we get different instances of the UUID for equal keys. If we run this code from multiple threads, then we have a good chance to get a problem with synchronization: public void threadA() { paymentService.withdrawMoney(UUID.fromString("ea051187-bb4b-4b07-9150-700000000000"), 1000); } public void threadB() { paymentService.withdrawMoney(UUID.fromString("ea051187-bb4b-4b07-9150-700000000000"), 5000); } In this case, you need to obtain the same reference for equals objects to synchronize on it. Wrong Ways to Solve This Issue Synchronized Methods You can move the synchronized on a method: public synchronized void withdrawMoney(UUID userId, int amountOfMoney) { .. } This solution has a bad performance. You will block transfers of money for absolutely all users. And if you need to synchronize different operations in the different classes by the same key, this solution does not help you at all. String Intern In order to ensure that the instance of the class, which contains a user ID, will be the same in all synchronized blocks, we can serialize it into a String and use the String.intern() to obtain the same link for equals strings. String.intern uses a global pool to store strings that are interned. And when you request intern on the string, you get a reference from this pool if such string exists there or else this string puts in the pool. You can find more details about String.intern in The Java Language Specification - 3.10.5 String Literals or in the Oracle Java documentation about the String.intern public void withdrawMoney(UUID userId, int amountOfMoney) { synchronized (userId.toString().intern()) { .. } } Using the intern is not a good practice because the pool of Strings is difficult to clean with the GC. And, your application can consume too many resources with the active use of the String.intern. Also, there is a chance that a foreign code is synchronized on the same instance of the string as your application. This can lead to deadlocks. In general, the use of intern is better left to the internal libraries of the JDK; there are good articles by Aleksey Shipilev about this concept. How Can We Solve This Problem Correctly? Create Your Own Synchronization Primitive We need to implement a behavior that describes the next diagram: diag 0672834a7737bb323990aabe3bcb5ce6 At first, we need to make a new synchronization primitive — the custom mutex. That will work by the value of the variable, and not by the reference to the object. It will be something like a "named mutex," but a little wider, with the ability to use the value of any objects for identification, not just the value of a String. You can find examples of synchronization primitives to locking by the name in other languages (C++, C#). Now, we will solve this issue in Java. The solution will look something like this: public void withdrawMoney(UUID userId, int amountOfMoney) { synchronized (XMutex.of(userId)) { .. } } In order to ensure that the same mutexes are obtained for equal values of variables, we will make the mutex factory. public void withdrawMoney(UUID userId, int amountOfMoney) { synchronized (XMutexFactory.get(userId)) { .. } } public void purchase(UUID userId, int amountOfMoney, VendorDescription vendor) { synchronized (XMutexFactory.get(userId)) { .. } } In order to return the same instance of mutex on the each of requests with equal keys, we will need to store the created mutexes. If we will store these mutexes in the simple HashMap, then the size of the map will increase as new keys appear. And we don’t have a tool to evaluate a time when a mutex not used anywhere. In this case, we can use the WeakReference to save a reference to the mutex in the map, just when it uses. In order to implement this behavior, we can use the WeakHashMap data structure. I wrote an article about this type of references a couple of months ago; you can consider it in more details here: Soft, Weak, Phantom References in Java Our mutex factory will be based on the WeakHashMap. The mutex factory creates a new mutex if the mutex for this value(key) is not found in the HashMap. Then, the created mutex is added to the HashMap. Using of the WeakHashMap allows us to store a mutex in the HashMap while existing any references to it. And, the mutex will be removed from a HashMap automatically when all references to it are released. We need to use a synchronized version of WeakHashMap; let’s see what’s described in the documentation about it: This class is not synchronized. A synchronized WeakHashMap may be constructed using the Collections.synchronizedMap method. It’s very sad, and a little later, we’ll take a closer look at the reason. But for now, let’s consider an example of implementation, which is proposed by the official documentation (I mean the use of  Collections.synchronizedMap): public final Map<XMutex<KeyT>, WeakReference<XMutex<KeyT>>> weakHashMap = Collections.synchronizedMap(new WeakHashMap<XMutex<KeyT>, WeakReference<XMutex<KeyT>>>()); public XMutex<KeyT> getMutex(KeyT key) { validateKey(key); return getExist(key) .orElseGet(() -> saveNewReference(key)); } private Optional<XMutex<KeyT>> getExist(KeyT key) { return Optional.ofNullable(weakHashMap.get(XMutex.of(key))) .map(WeakReference::get); } private XMutex<KeyT> saveNewReference(KeyT key) { XMutex<KeyT> mutex = XMutex.of(key); WeakReference<XMutex<KeyT>> res = weakHashMap.put(mutex, new WeakReference<>(mutex)); if (res != null && res.get() != null) { return res.get(); } return mutex; } What About Performance? If we look at the code of the Collections.synchronizedMap, then we find a lot of synchronizations on the global mutex, which is created in pair with a SynchronizedMap instance. SynchronizedMap(Map<K,V> m) { this.m = Objects.requireNonNull(m); mutex = this; } And all other methods of the SynchronizedMap are synchronized on mutex: public int size() { synchronized (mutex) {return m.size();} } public boolean containsKey(Object key) { synchronized (mutex) {return m.containsKey(key);} } public V get(Object key) { synchronized (mutex) {return m.get(key);} } public V put(K key, V value) { synchronized (mutex) {return m.put(key, value);} } public V remove(Object key) { synchronized (mutex) {return m.remove(key);} } ... This solution does not have the best performance. All of these synchronizations lead us to permanent locks on each operation with a factory of mutexes. ConcurrentHashMap With a WeakReference as a Key We need to look at the using of the ConcurrentHashMap. It has a better performance than Collections.synchronizedMap. But we have one problem — the ConcurrentHashMap doesn’t allow the use of weak references. This means that the garbage collector cannot delete unused mutexes. I found two ways to solve this problem: • The first is to create my own ConcurrentMap implementation. This is the right decision, but it will take a very long time. • The second one is the use of the ConcurrentReferenceHashMap implementation from the Spring Framework. This is a good implementation, but it has a couple of nuances. We will consider them below. Let’s change the XMutexFactory implementation to use a ConcurrentReferenceHashMap: public class XMutexFactory<KeyT> { /** * Create mutex factory with default settings */ public XMutexFactory() { this.map = new ConcurrentReferenceHashMap<>(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL, DEFAULT_REFERENCE_TYPE); } /** * Creates and returns a mutex by the key. * If the mutex for this key already exists in the weak-map, * then returns the same reference of the mutex. */ public XMutex<KeyT> getMutex(KeyT key) { return this.map.compute(key, (k, v) -> (v == null) ? new XMutex<>(k) : v); } } That’s cool! Less code, but more performance than before. Let’s try to check the performance of this solution. Create a Simple Benchmark I made a small benchmark in order to select an implementation. There are three implementations of the Map involved in the test: •  Collections.synchronizedMap based on the WeakHashMap  •  ConcurrentHashMap  •  ConcurrentReferenceHashMap  I use the ConcurrentHashMap in benchmark just for comparing in measurements. This implementation is not suitable for use in the factory of mutexes because it does not support the use of weak or soft references. All benchmarks are written with using the JMH library. # Run complete. Total time: 00:04:39 Benchmark Mode Cnt Score Error Units ConcurrentMap.ConcurrentHashMap thrpt 5 0,015 ? 0,004 ops/ns ConcurrentMap.ConcurrentReferenceHashMap thrpt 5 0,008 ? 0,001 ops/ns ConcurrentMap.SynchronizedMap thrpt 5 0,005 ? 0,001 ops/ns ConcurrentMap.ConcurrentHashMap avgt 5 565,515 ? 23,638 ns/op ConcurrentMap.ConcurrentReferenceHashMap avgt 5 1098,939 ? 28,828 ns/op ConcurrentMap.SynchronizedMap avgt 5 1503,593 ? 150,552 ns/op ConcurrentMap.ConcurrentHashMap sample 301796 663,330 ? 11,708 ns/op ConcurrentMap.ConcurrentReferenceHashMap sample 180062 1110,882 ? 6,928 ns/op ConcurrentMap.SynchronizedMap sample 136290 1465,543 ? 5,150 ns/op ConcurrentMap.ConcurrentHashMap ss 5 336419,150 ? 617549,053 ns/op ConcurrentMap.ConcurrentReferenceHashMap ss 5 922844,750 ? 468380,489 ns/op ConcurrentMap.SynchronizedMap ss 5 1199159,700 ? 4339391,394 ns/op In this micro-benchmark, I create a situation when several threads compute values in the map. You can consider the source code of this benchmark in more details here at Concurrent Map benchmark Put it on the graph: benchmark result So, the ConcurrentReferenceHashMap justifies its use in this case. Getting Started With the XSync Library I packed this code into the XSync library, and you can use it as a ready solution for the synchronization on the value of variables. In order to do it, you need to add the next dependency: <dependency> <groupId>com.antkorwin</groupId> <artifactId>xsync</artifactId> <version>1.1</version> </dependency> Then, you are able to create instances of the XSync class for synchronization on types that you need. For the Spring Framework, you can make them as beans: @Bean public XSync<UUID> xSync(){ return new XSync<>(); } And now, you can use it: @Autowired private XSync<UUID> xSync; public void withdrawMoney(UUID userId, int amountOfMoney) { xSync.execute(userId, () -> { Result result = externalPolicySystem.validateTransfer(userId, amountOfMoney, WITHDRAW); accountService.transfer(userId, amountOfMoney, WITHDRAW); }); } public void purchase(UUID userId, int amountOfMoney, VendorDescription vendor) { xSync.execute(userId, () -> { .. }); } Concurrent Tests In order to be sure that this code works well, I wrote several concurrent tests. There is an example of one of these tests: public void testSyncBySingleKeyInConcurrency() { // Arrange XSync<UUID> xsync = new XSync<>(); String id = UUID.randomUUID().toString(); NonAtomicInt var = new NonAtomicInt(0); // There is a magic here: // we created a parallel stream and try to increment // the same nonatomic integer variable in each stream IntStream.range(0, THREAD_CNT) .boxed() .parallel() .forEach(j -> xsync.execute(UUID.fromString(id), var::increment)); // Asserts await().atMost(5, TimeUnit.SECONDS) .until(var::getValue, equalTo(THREAD_CNT)); Assertions.assertThat(var.getValue()).isEqualTo(THREAD_CNT); } /** * Implementation of the does not thread safe integer variable: */ @Getter @AllArgsConstructor private class NonAtomicInt { private int value; public int increment() { return value++; } } Let’s see the result of this test: concurrent test result References XSync library on GitHub: https://github.com/antkorwin/xsync Examples using the XSync library: https://github.com/antkorwin/xsync-example The original article can be found here. Topics: concurrency, concurrenthashmap, java, mutex, synchronized, weakreference Published at DZone with permission of Anatoliy Korovin . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.urlSource.name }}
__label__pos
0.90994
Главная Математика 6 класс Е.А.Бунимович, Л.В.Кузнецова, С.С.Минаева ГДЗ учебник по математике 6 класс Е.А.Бунимович, Л.В.Кузнецова, С.С.Минаева авторы: , , . издательство: "Просвещение" Раздел: Номер №17 а) 5 24 + 3 8 ; б) 7 10 2 5 ; в) 3 4 2 5 ; г) 8 11 + 1 2 . Решение а 5 24 + 3 8 = 5 + 3 3 24 = 5 + 9 24 = 14 24 = 7 12 Решение б 7 10 2 5 = 7 2 2 10 = 7 4 10 = 3 10 Решение в 3 4 2 5 = 3 5 2 4 20 = 15 8 20 = 7 20 Решение г 8 11 + 1 2 = 8 2 + 1 11 22 = 16 + 11 22 = 27 22 = 1 5 22 Instagram line
__label__pos
0.65367
Draw a pair of tangents to a circle of radius 5 cm which are inclined to each other at an angle of 60° Give the justification of the construction To construct a tangent from given conditions 1. Let us draw a circle of radius 5 cm and with the centre as O. 2. Considering a point Q on the circumference of the circle and join OQ. 3. Draw a perpendicular to QP at point Q. 4. Draw a radius OR, making an angle of 120° i.e(180°−60°) with OQ. 5. Draw a perpendicular to RP at point R. 6. Now both the perpendiculars intersect at point P. 7. Therefore, PQ and PR are the required tangents at an angle of 60°. Ncert solutions class 10 Chapter 11-12 Justification We have to prove that ∠QPR = 60° By construction ∠OQP = 90° ∠ORP = 90° And ∠QOR = 120° We know that the sum of all interior angles of a quadrilateral = 360° ∠OQP+∠QOR + ∠ORP +∠QPR = 360o 90°+120°+90°+∠QPR = 360° Therefore, ∠QPR = 60° Hence Justified Was this answer helpful?        0 (0) Upvote (0) Choose An Option That Best Describes Your Problem Thank you. Your Feedback will Help us Serve you better. Leave a Comment Your Mobile number and Email id will not be published. Required fields are marked * * * BOOK Free Class Ask Question
__label__pos
0.997521
Site US and Canada Statistics Hypothesis Tests A statistical hypothesis test is a method for making decisions using data from an experiment, survey, or an observational study, deciding whether the evidence is sufficient to reject the status quo as described by the null hypothesis. The lessons in this unit focus on developing understanding of central inferential concepts. Students generate random samples, set up hypotheses tests and examine the meaning of p-values and alpha levels, Type I and II errors and their connection to power. They also investigate several different tests and when they are applicable. Statistics: Hypothesis Tests Activities Title Type Meaning of Alpha • Students will recognize that the alpha value (significance level of a test) is the relative frequency for sample statistic values that lead to a "reject the null" conclusion, given that the null hypothesis is actually true. • Students will recognize that a sample mean can lead to a "reject the null" or "fail to reject the null" depending on the alpha level. Alignments  Standards  |  Textbook   • 4723 Contingency Tables and Chi-Square This lesson involves deriving and interpreting the chi-square as an indication of whether two variables in a population are independent or associated. Alignments  Standards  |  Textbook   • 3038 Chi-Square Tests This lesson involves investigating chi-squared tests and distributions. Alignments  Standards  |  Textbook   • 3252 Difference in Means This activity involves investigating whether a difference really seems to exist between two sample means. Alignments  Standards  |  Textbook   • 3003 Meaning of Power In this lesson, samples are generated from a population for a particular hypothesis test, leading to the conjecture that the null hypothesis is actually false. Alignments  Standards  |  Textbook   • 4454 Type 2 Error This activity allows students to experiment with different alpha levels and alternative hypotheses to investigate the relationship among types of error and power. Alignments  Standards  |  Textbook   • 3157 What is a p-value? This lesson involves beginning with a null hypothesis specifying the mean of a normally distributed population with a given standard deviation. Alignments  Standards  |  Textbook   • 4224 Resampling This lesson involves approximate sampling distributions obtained from simulations based directly on a single sample. The focus of the lesson is on conducting hypothesis tests in situations for which the conditions of more traditional methods are not met. Alignments  Standards  |  Textbook   • 2903 TI-Nspire is a trademark of Texas Instruments. Adobe, Flash Player and Reader, and Microsoft Word are registered trademarks of their owners.
__label__pos
0.974636
  • Zenith Education Studio Overview of the H2 Math syllabus and quick tips on preparing for the exams The jump from O Level to A Level Math is pretty significant for most students, and this is especially so for those pursuing JC Math at a H2 level. The increase in levels of complexity, technicality, and content required to tackle the questions more often than not leave students frustrated because they are tuned to thinking and answering at a lower level in their secondary school papers. Nevertheless, taking up H2 JC Math is an informed choice that will open many doors for you in the future, since a large and growing proportion of careers seek talented individuals who can analyze and critique contemporary issues from a mathematical lens. That is why our dedicated educators here at Zenith, the top JC Math tuition center in Singapore, have pieced together an informative guide to ease your transition into the A Level Math environment. Overview The foremost difference that most JC Math students notice is the lengthier time limits given by Cambridge to complete the A Level Math papers. Standing at a grand total of 6 hours, with papers 1 and 2 taking up each 50% of its duration, it’s safe to say that much more is expected from you, in terms of active recollection and application of the learned concepts. In contrast, the O Level paper’s time limit totals four and a half hours - which is much less rigorous by comparison. A noticeable difference is that at the A Levels, mathematical skills have to not just be learnt but actively applied to novel question types. On top of that, each question may require an integration of skills from more than one topic, which is what makes JC Math fluid and dynamic. As for the two main categories of topics, H2 Math is split into Pure Mathematics (relating to theories, concepts and formulae), and Probability & Statistics (more applied data-gathering tools). With regard to the total mark distribution, Pure Mathematics takes up 70% of the final A Level Math grade compared to just 30% for Probability and Statistics. Therefore, it is of paramount importance that you place increased emphasis on scoring well for the Pure Math portions of the paper, since that is where the bulk of your grade is allocated to. This is not to say that you should neglect the Probability and Statistics section of your paper, because that has traditionally been what most JC Math students rely on to boost their final grade (due to the arguably less challenging nature of the questions). But fret not, Zenith’s specially curated JC Math tuition programme will equip you with the necessary skills to navigate the choppy waters of A Level Math! Understanding the nature of the questions Arguably one of the most fundamental steps that help propel A Level Math students to their desired distinctions is the ability to categorize and analyze the requirements of each chapter in the A Level Math syllabus. That is an important skill that Zenith’s JC Math tuition programme aims to impart to you, because the wide variety of chapters in the syllabus calls for a systematic and coherent approach to tackle every specific one of them. The first skill that we often emphasize is being able to accurately visualize certain topics that are more inclined towards 3D model building and/or graphic representation. This relates immensely to the chapter on vectors, where intense levels of imagination are needed to visualize the representation of the vector models in space, and subsequently solve them. Simply reading the theoretical concepts (i.e the vector’s basic properties; its scalar and vector products; and the distance between lines and/or planes) would not suffice; you need to actively understand their relationships visually in order to be prepared for the tricky questions the A Level Math papers inevitably come up with. Another chapter that our JC Math tuition programme identifies as visualization-centric is the graphs and transformations chapter under the pure math portion of the A Level Math syllabus, where questions often require students to sketch complicated graph models. Apart from visualization topics, there are chapters inclined towards the application of content in a real-world scenario. For example, those in Probability and Sampling, where very real-world samples of data are given to students for them to extract and interpret relevant inferences from the data in question. This ties back to the concept of application because such questions are often related to the fields of healthcare, banking, science, and a whole host of others. Thus, it is crucial that you understand the context in which the questions are steeped in so as to be able to combine their theoretical knowledge learnt with the ability to apply such concepts in the broad and authentic fields that chapters like Probability and Sampling span. Lastly, the third branch of chapters are concerned with mindful practice. Chapters like Maclaurin’s series and complex numbers emphasise heavily on practice, because the same formulas are tested question after question with few modifications to its underlying structure. As the top JC Math tuition programme in Singapore, Zenith empowers you to be able to differentiate and dissect the various question types effortlessly. Fig 1. A brief summation of the A Level Math chapters Having a study plan You may be adept at classifying the above chapters (as in Fig.1) by their nature, but that skill has to be coupled with a sound and constructive study plan. That entails long-term planning in terms of revision and learning. Without an orderly list of work to complete, it is inevitable that you will arrive at the A Level Math examinations tired and all burnt out. For example, you should allocate dedicated slots of time to study sessions, preferably once in the morning, afternoon, and night on weekdays, in between meals. That should be more than adequate time to brush up on your weak points and put in consistent hours of mindful practice. You could also, for instance, study different chapters at the various time slots of the day, so as to ensure that you do not get too overwhelmed by any one specific chapter. Perhaps you could aim to clear visualization-heavy topics in the day while the mind is still afresh, and seek to complete application and practice centric topics during nightfall. It never hurts to be well prepared for your A Level Math papers, because it adds structure to your learning and allows you to tackle each chapter with increased foresight and analysis. Adopt good note-taking habits too, to complement and add value to your study sessions. Note-taking would be especially pertinent for topics that make use of many different sets of concepts and equations (i.e. calculus and hypothesis-testing), as it ensures that you understand the key formulas of theory-heavy chapters. On top of that, identifying popular questions types/trends in the A Level Math ten-year series may be extremely beneficial, especially for the correlation and linear regression chapter, where real-world explanations for the patterns in the graph can be recycled from previous year papers. The H2 JC Math tuition programme offered at Zenith aims to foster such healthy study habits, while providing a vibrant and supportive learning environment to ensure that students feel at home during study sessions. Be exam smart Another critical tip Zenith recommends is to make the most of your time for the duration of the H2 A Level Math paper. This includes managing your time well, which many students tend to overlook. In reality, over-allocating time on one question will inevitably have a domino effect on later parts of your answers, since questions at the H2 level tend to take up 2-3 foolscap papers’ worth of space. It is also of paramount importance that you do not take shortcuts and skip important steps in your workings, especially for the chapter on integrations where “show” questions (Fig. 2) are abundant. Fig 2. A typical “show” question at the A Level Math examinations Such question types demand for a systematic and meticulous display of workings. Any shortcuts may lead to a correct final answer being penalized because workings are absolutely essential in the Cambridge mark scheme. Minimizing the amount of careless mistakes and calculation errors can also make or break the distinction for aspiring students, as entire lines of calculations can be affected by silly errors at the start of the question. Above all, be certain to write neatly and legibly so that the A Level Math marker can read and interpret your workings without difficulty, for workings that are an eyesore tend to have a negative impact on your grade. That is why Zenith’s JC Math tuition programme trains students to develop positive exam-management skills, allowing them to minimize unwanted and unnecessary errors. At Zenith, our experienced tutors will go the extra mile to guide you in this aspect. With the above tips in mind, some might ask: why study H2 Math? For one, the job prospects are bountiful. You can dive into the world of tech as a data scientist, machine-learning engineer, or software developer; you can venture into the realm of finance, becoming a quantitative analyst or accountant; you can pivot into the field of architecture as a construction project manager... The possibilities are endless! Such is the beauty of Mathematics, and the H2 A Level Math syllabus sets you up to pursue such career options in university. Being the top JC Math tuition centre in Singapore, Zenith makes sure to nurture the mathematical side in each and every student so that they can tackle their A Level Math exams with confidence and diligence. We hope you’ve found this A Level Math guide and tips insightful. Click here to be part of Zenith’s JC Math tuition program today! 19 views0 comments  
__label__pos
0.863954
Timestamp from table to db Need to add a timestamp to each calibration entry. Should the timestamp orginate from Ignition or the DB? If ignition, how do I go about adding it to the query? [code]tabledata=event.source.parent.getComponent(‘Table’).data row = event.source.parent.getComponent(‘Table’).selectedRow site = tabledata.getValueAt(row, “site”) cal = tabledata.getValueAt(row, “calibration”) t_stamp=system.db.dateFormat(now(),“yyyy-MM-dd HH:mm:ss”) print site, cal system.db.runPrepUpdate(“INSERT INTO pump_cal (site, calibration, t_stamp) VALUES (?,?,?)”,[site,cal,t_stamp])[/code] Got it to work using this instead. Is this the correct way of entering the timestamp? Had to differentiate between expressions and scripts in order to get it to work. tabledata=event.source.parent.getComponent('Table').data row = event.source.parent.getComponent('Table').selectedRow site = tabledata.getValueAt(row, "site") cal = tabledata.getValueAt(row, "calibration") t_stamp= event.source.parent.getComponent('datetime').text print site, cal, t_stamp system.db.runPrepUpdate("INSERT INTO pump_cal (site, calibration,t_stamp) VALUES (?,?,?)",[site,cal,t_stamp]) You’re exactly right. Grabbing a date has to be formatted into a string usable by the db. I think the default datetime string in Ignition includes the weekday when it’s pulled into the script, or some such. You can also get the current date in scripting like this:from java.util import Date currentDateTime = Date() print currentDateTime
__label__pos
0.916372
General User Discussions Linking to another chunked topic Hey folks. I have the following bit of structure: Top Topic chunk=to-content --Subtopic A --Subtopic B  --Subtopic C Subtopics B and C are reference topics, and I want to link to them from the relevant point in Subtopic A.  I think that I can link <xref href="#top-topic-id/subtopic-B-id>, because at publishing the page id will be the top topic id?  Links Broken Why are all the links broken?.... I was trying to read some case studies about DITA but none of the links are working. XMetaL check in vs case I have an <xref> element whose href attribute is automatically turned to ALL CAPITALS when it's checked in. How do I stop this happening? I need to preserve the case I entered or links don't work. Andrew. How do I add a summary or title attribute for simpletable or choicetable elements? Our documentation requires HTML table tags to include summaries or title attributes for accessibility requirements, but I haven't been able to find an existing attribute that I use for this purpose. We are only using simpletable and choicetable. How have you addressed accessibility needs for tables in your documentation? Sample DITA end-user documentation? Hi, I am a student in the Technical Communication program at Red River College in Winnipeg, Manitoba. As part of my graduation project, I am exploring the prospect of using DITA to design an end-user guide to develop a web application for a cancer treatment centre in the province. Does anyone have a link to any substantial documentation samples that were designed using the DITA architecture? I was hoping to find.pdfs or Windows Help Files.  Must appreciated. Allan  XML.org Focus Areas: BPEL | DITA | ebXML | IDtrust | OpenDocument | SAML | UBL | UDDI OASIS sites: OASIS | Cover Pages | XML.org | AMQP | CGM Open | eGov | Emergency | IDtrust | LegalXML | Open CSA | OSLC | WS-I
__label__pos
0.932272
Cisco CCNP Service Provider 350-501 SPCOR Topic: Border Gateway Protocol December 21, 2022 1. BGP Introduction So welcome to BGP video trainings. Now, in this section, we’ll start with some of the basics of PGP, like basic concepts and terminology. So, if you just want to look at the basics, we’ll start with the autonomous system number. What is an autonomous system number? And then we’ll see the differences between IGP and EGP protocols and some of the basic features of BGP. And then we’ll see the loop prevention mechanism in the BGP. And then finally, we’ll wrap up this video by understanding when it is more appropriate to use BGP and when it is not really recommended to use BGP. So let’s get started. first with the autonomous system number. Now, from our basic CCN studies, we have already learned what an autonomous system number is Autonomous System Number. The autonomous system number is simple. It’s a collection of networks under a single common administrative domain. So that’s what we say. It’s a collection of routers under a single administrative domain. As an example, I go to an organisation where it has specific locations that are all part of the same organization. Let’s say it is a number 100, and this organisation is, let’s say, ABC, and all the routers within that organisation will be represented by one number. And we call that an autonomous system er. And we cow, the next thing is similar to the way we can have multiple autonomous system numbers, and if you want to communicate within the same autonomous system number or within the same organization, generally we use IGP protocols. Like whatever the protocols we learn in ourprevious videos, in the basic CCNA CCNP videos. As a result, we refer to Rip. Rip version two (iGRP is no longer used). EHRP? ISS’s OSP These are all IGP protocols. So the main difference is that IGB protocols operate within the same autonomous system number. So if you want to communicate within the same organisation or within the same autonomous systems, then you use something called interior gateway protocols. We call them IGP protocols. So all these protocols listed here are typically your IGP protocols. But let’s say you want to communicate between two or more different autonomous system numbers, then we use an exterior gateway routing protocol. And the only protocol that is running on the Internet that allows you to exchange routes between two or more different autonomous system members is the BGP protocol. So BGP is the only protocol running on the Internet backbone and is primarily responsible for maintaining the routes or exchanging the routes between two or more autonomous systems. And the service providers use BGB protocol and busing BGP, they control all the routing information. So BGP is the only protocol that is designed for the huge network that is the Internet. So we’ll see some of the points anyway, more into that, more in detail about the BGP features. Now, in this section, we have seen the Autonomous System number. It suggests routers are under a common administration. Now, if you are communicating between two or more different autonomous systems, like we are trying to see an exchange route between two or more different Adam system numbers. We use a protocol called PGP, okay? So the next thing we’ll see is that we’ll get into IGP routing. Like when we talk about IGP routing, we have seen static default and dynamic routings. And we’ve seen Rap Ehrposp of ISS, all these protocols, inside that dynamic routing. As a result, all of these protocols operate under the same autonomous system numbers. So we call them IGP protocols. But if you want to communicate between two or more different autonomous system numbers, we use the BGP protocol. So first, we’ll start with some of the basic features of BGP. Like BGP, this is a standard protocol. You can run this protocol on any vendor device, and it is an exterior gateway routing protocol. exterior in the sense that it is going to exchange routes between two or more different autonomous system members. So we call it the “exterior gateway protocol,” and it is specially designed for inter-domain routing. “Inters” means communication between two or more different autonomous system members. Assume this is 100 and you want to communicate with 200 people. As shown in the previous diagram, we use the BGP protocol, which is specifically designed to scale a large network like the Internet. It supports class lists, and then it supports all your FLSM VLMs here, the Dr. Manual, and auto summarizations. All these things are supported in the BGP, just like they are in all our IGP protocols. And it’s similar to your internal protocol updates, where incremental and trigger updates are supported. And we call BGP a path-vector protocol. What exactly does “prod vector” mean? So, a path vector is a method of extending routes along with path information. Consider the following example: I have an adiagram here; you can see the diagram here. Now let’s take an example. This router will publicise this zero-dot, zero-dot network. So now this router is going to advertise this network into the next autonomous system, which means it is going from one as to another as. So it is going to carry that ten-point, one-point dot network, and at the same time, it is going to carry the path information. So it is coming from, which is 65,400 as Now when it sends, it is going to pass on this information to this router. And when this router is going to pass on the information to the next router or the next autonomous system, it is going to carry that network information just like your normal IGB protocol, but it is going to carry the autonomous system path information. So it’s going to say that it’s coming from 65,400, which is this, and then it is coming from these two S. So it’s crossing these two autonomous systems and reaching here. And finally, when this router is going to send it to another router, it is going to pass on that information like it is originating from this, then it has reached the next, and finally it has reached this, which is the last s. So we call this “path information,” and when it is going to carry any routing update, it’s going to make sure that it is going to carry this autonomous system path information. This is extremely useful, especially if you’re in a loop prevention mechanism. There is something called a “loop prevention mechanism,” which I’ll be discussing in our next video, in the next sections. I’ll probably get into that more in detail. Okay, so we call it “path vector protocol,” where it is going to carry the path information from which autonomous system hops it is moving on. So that’s the reason we call it “path vector protocol.” And then there were some more differences and some more features of BGP. It uses unicast to send updates to manually selected neighbors. So at this point, I’ll get more clarity when I get into BGP neighbors. Now this point will tell you that, like manually, you have to convert the neighborhood. Now, what is “manually”?Let’s take an example. We have two routers, and if I’m running OSP or another protocol, say EHRP, by default, when I advertise this interface, it will send a hello message to the other side of the interface, and then it will respond to that hello message. Based on those “hello” messages, they automatically establish a neighbouring relationship. So they’ll automatically become neighbors, and they’ll automatically build a neighbour table. There is something like automatic neighborship, but in BGP we don’t have that. In BGP, we have to manually configure the neighbor. which means we need to say that router two, you are man enough. Let’s say this router one is the neighbour of router two. On router one, you have to configure a neighbour command saying that you are maneuvering, and on router two, you have to configure a neighbour command saying that router one is the neighbour of router two. And if the neighbour commands on both sites match, and everything is properly configured, the neighborship will appear; this is what manual neighborship configurations are. We established some initial labs that were entirely dedicated to this. So I’m not getting into practical things. Probably in our next couple of videos, in the third or fourth video, we will directly jump to this lab. where I will also show you the configuration, how it works, and some more detail on whatever we discussed. So next thing, it’s a BGP application-level protocol for reliability. It operates on port 179 in the metric system and employs the TCP protocol. It uses a lot of attributes. So we have a weight attribute, local preference, and path origin. Next up, we’ve got a lot of attributes. BGP supports very rich emotional attributes, which can affect the path manipulation process. Like in OSPF, it uses bandwidth, and the default formula it uses is ten to the power of eight divided by bandwidth. In the case of EHRP, it is going to use bandwidth delay, load, and MTU reliability. Hopkins will be used in the case of Rap. But in BGP, the pass selection process varies and relies heavily on these attributes. So, more on these attributes. We’ll definitely get into this in our advanced BGP concepts. So as of now, I can simply say that it supports rich attributes, which can really affect the path manipulations. And then finally, the administrative distance is 20 if the route is coming from external BGP; again, we’ll see the differences between external BGP and external BGP. Also, if the route is coming from external BGP or from a different autonomous system name, that will have an administrative distance of 20. If the route is coming from internal BGP, we have an administrative distance of 200. So again, more detail about IBGP and BGP. I’ll definitely go into more detail in our following sections. So here I’m just getting into some basic information that is going to define the features of BGP. So, as you may recall, I recently discussed loop prevention mechanisms in BGP. There’s a chance you’re connecting to the Internet and connecting to your service folder from one side while also connecting to another service folder from the other. So you might be advertising your route; come to our network. And this route is advised to a service forwarder or to some other autonomous system numbers, and from there it reaches the Internet Cloud, with the possibility of the same route returning. Now, that way, I can create the loop. However, in BGP, there is a rule that whenever a router sees its own number, such as when I discussed path vector behaviour earlier. Take, for example, this network (1810 plus 16). This network is now starting with the number 100. When it sends to the A router the second AIS, it is going to carry the network information, like 180 or 10, whatever it is, and the autonomous system path information, saying that it is coming from 100. And from there, when it reaches the other Now the same network: 180, 1016. It’s going to carry the as Path information that it is coming from 100 here and then reaching200 and then finally reaching to my S.When this occurs, it will be forwarded to ZIP code 500. It will forward that network information with 180 and state that it is coming from 100. So in general, it will be reverse and then it’s reaching a S 200 here and then coming to 300.So these routers will now understand that there are 500 routers inside this network; they will understand that this network starts with 100, then 200, then 300, and finally reaches my autonomous system members. This is now advertised as 100 from here onwards. Now, when it advertises this information, when a router receives this information, it is already running as route 100, and whenever it sees its own as route number within that update, it will reject the route. So this is the default loop prevention mechanism in the BGP protocol. When a router behaves as shown in Figure 180, the tendot network is not accepted by the 100 prefix or in its as path. In the BGP, this is a loop detection and default loop prevention mechanism. So BGP will never instal any route when it sees its own as a member. So I’m running as 100 and I’m getting a route called the “Ten Dot Network,” which also has 100. So it’s something originating in MYS and coming back to MYS again. So, because when you are connecting to the internet, you are definitely connecting to multiple routes, and there is a possibility that the routes will be coming from multiple sites or that the same route is coming back again to MYS. So this is some kind of loop prevention mechanism. You must understand the BGP’s default behavior.  2. When BGP is More Appropriate Now, finally, the last thing we’ll see here now that we have seen some of the basic features First thing, we started with a number and then IGP routing BGP features, and this is what we discussed just now: the path vector behaviour and then the loop prevention mechanism here. Now the last thing we’ll see in this video is when BGP is more appropriate to use. You may now find yourself in a position where you are in charge of an organization, and your organisation is most likely using BGP. So if you’re running a big organisation, you might be running BGP. Large organisations typically use BGP for path manipulations in order to exchange out of service for them. Now, first, we need to understand when it is more appropriate to use BGP. So, if I decide to use BGP, I must first determine whether I can use it, cannot use it, or whether it is required. So the first condition for using BGP is that you should be a service folder. So if I’m the service provider, let’s say I’m working for any service for the network, then servicefour is going to provide services to many customers, which means he is going to connect many customers to different members of the autonomous system members. So to exchange the information between these multiple autonomous system numbers, it becomes mandatory for the service provider to run BGP because BGP is the only protocol designed to exchange the route between the multiple layers and it can do some path manipulation. So it is specially designed for that. So the entire internet So, whatever the network, we can say that BGP is used by the world’s largest network, the Internet. As a result, service folders keep the Internet running. So service folders connect to each other, and they maintain the biggest network. Now, if you want to access the Internet, let’s say I’m going to [email protected]. When my request reaches my ISP, and from the ISP, we say it is going to the Internet, So the Internet is a public network where everyone is connected. So that means this ISP is going to maintain where your Yahoo server is, and it’s going to maintain the route of the Yahoo server. And the Yahoo server is located elsewhere—possibly with a different company or organization. So it’s going to send a request to the Yahoo server, get the reply, and finally send it to ours. That is, if you are a service provider, you must maintain the routes of various autonomous systems, or everything necessary to maintain the routes. So this can be done by using the BGP protocol, because in this scenario we are not going to use any of the IGP protocols because IGP protocols are only designed within the same autonomous system number. But when you talk about the Internet, we are moving between different servers to reach the Yahoo server and coming back. So maybe I’m just moving around multiple addresses—probably more than 56—to reach that particular Yahoo server. So that’s the first case. If you run an ISP, or if you are an ISP, you must use the BGP protocol. Okay? So in case I’m not an ISP, let’s say my company decides to run BGP even though it’s not an ISP. Now it is really recommended if you have multiple autonomous systems and you are connecting to multipliers, which we call a “multi-homing environment” and where you need to do some path manipulations. So let me take one simple scenario to understand this second point: when is BGP more appropriate to use, okay?So I’ve got a diagram here; I’m going to keep on diagramming. Let’s say this is my organization, which is connected to I got many routers, plenty of routers here, okay? So now your organisation is simply connecting to your ISP, and we are connecting to the Internet from there. So we just got only one connection to ISP. In that case, it’s really not recommended to use PCP because normally what we do to access the Internet is use the default route. So towards the ISP and from the ISP, we confer a static route, and we redistribute that distorted route into your IGP, and you will simply exchange the route without any problem. This is one common scenario where we use it. This is the most common scenario without BGP. So especially in small or medium-sized networks, we generally follow this scenario where we have some IGP protocol running, probably EHRP, OSPF, or any of the other protocols. And we have a default route pointing to our ISP. And then we are redistributing the static route into our IGB protocols, and we are just allowing all the routers in MYS to access the Internet via the same ISP. So we call this a single-home environment, where you have a single exit path towards the service or order, and we just have one exit path. As a result, we do not recommend using PGP in this case. But let’s say this is my organization. It’s a very big organization. In my organization, I have a large number of autonomous routers. I also have several exit options. I’m most likely connected to two service portals, ISP one and ISP two. So maybe for redundancy, I have gone with multiple service providers. From there, I’m going to access my Internet, or probably the resources of different organizations. This is my ABC organization. So we call this a multi-homing environment. As a result, you now have multiple exit routes from two different directions. You can now try the same thing here, such as just a default route or a primary default route backup. And you can use a normal IGP to reach any one of these. But what I can do in this type of scenario is use BGP to do some path manipulations. What I can do is say that some of the traffic should use this route. Assume I have a 10-dot network, a 20-dot network, a 30-dot network, and a 40-dot network in my production network. So I can manipulate the path so that this tendot network and 20-dot network use this route to go outside my autonomous system numbers, and then I can tell the remaining two networks to go from this route. So we call this path manipulation. Now this is something BGP is going to provide you with. So if you want to do something like this, then you can use BGP and instal the route. So now we are not going to use a default route here. We are just trying to exchange the routes in the service folder. And, despite the fact that I’m also running BGP and the service board is running BGP, I’m going to instal all of the routes in the service folder, or routes that come from the service folder, into my BGP routing table. As a result, some of the larger organizations, in particular, adopt these practices. We call this a “multihoming environment,” because you have multiple exit paths and we’re going to manipulate them. So this is the next case where we can use PGP. So, in greater depth I’m getting into that, like the different types of connections to an ISP and the different types of configurations. I’m getting into detail about that in my next video. So here I’m just trying to figure out when it is more appropriate to use BGP. So if you are a service porter, it becomes mandatory for you to use BGP. And if you’re not a service portal, if you’re running a business in a multi-homing environment, and you want to manipulate the path of traffic entering or leaving your au, So in this type of scenario, I can use BGP, and it is not recommended to use BGP if you’re running a single-homing environment. If you recall, as I mentioned in another example, MYS is only connecting to one connection to one of the ISPs, and there is only one exit path. So, in this case, BGP is not required because you simply configure a default route to the service for order because you already have one route. So we don’t really need to do any path manipulation. So it’s really not recommended to use BGP in this type of scenario. And if you lack resources, Because if you are running BGP in your production networkmeans you are going to run the BGP and youare going to instal all the routes of internet probablycoming from service portal, which means you should have avery high end devices, probably your core routers, to maintain such a big routing table and to process it. And also, if you have a limited understanding of PGP or if you’re really not sure how the BGP behaves, how it selects the best path, or how it’s going to work, troubleshooting BGP So it’s really not recommended to use BGP. So that is something that is recommended to keep in mind when it is more appropriate and when it is not appropriate to use BGP. So finally, we have seen some of the basic concepts of BGP here. So initially, we started with the autonomous system number. What is an autonomous system number? It’s a number that identifies a company. Then we saw the fundamental differences between high-GP and BGP, as well as some of the fundamental features later on. And we have seen the blue prevention mechanism. The default loop prevention mechanism in the BGP is wheneverany route is getting into the as, it will check if it sees its own as number inside that it’s not going to instal that route. And then finally, we have seen when to use and when not to use BP. So probably in our next video, we’ll see some more and get into some more detail about the BGP configurations. 3. BGP Options – Connecting to Internet Now we are ready to get into some more details about the BGP here. So, if you remember from our previous video, we learned some of the basics of BGP, where we started with autonomous system numbers, IPGP, and BGP differences. And then BGP features a loop prevention mechanism and explains when to use BGP and when not to use BGP. So this is something that we have seen in a number of first videos. Now, we are going to continue from there. Now, in this video, we are going to learn about the different types of ISP connections. So depending on the size of the company and the company’s requirements, we can have different types of connections to the ISP. And we have something called single-home, dual-home, multi-home, and dual-multi-home connections. We’ll try to see the differences between these connections and determine which type of connection, which type of routing, or which type of BGP configuration is more appropriate. We’ll try to look into that, and then finally, we’ll see if you’re connecting to your ISP. Let’s say I’m connecting to my ISP via a BGP route. What are the different methods we try to connect? Okay, that’s what it is again; we’ll get into more detail about it later. For example, we have three options: default routes, specific routes, and exchanging all routes. So we’ll see these two major themes in this video of training. So let’s start with the different types of ISP connections. So the first one is a single home connection. Now, if you remember from the previous video, I already discussed single homes. So in a single-family home environment, we have just one exit path. So this is my autonomous system number—let’s say autonomous system number 80. It’s connecting to my service folder, which is my ISP one. And I only have one connection. So in this type of connection, we can just have a single default route. I can connect to this router. I can simply say “IP route zero zero” and whatever. The next stop is, let’s say, one or two. And then from the ISP side, he’s going to configure a static route for the public IP, whatever we are using here. So it’s just a normal default configuration, like what we did in our basic CCN. So in this section, in this single home environment, it’s really not recommended to run BGP. We just have a site with a single ISP connection and this is fine. That is not heavily reliant on the Internet or a broadband connection. So we can use static routes, which are the most commonly used, or even some of the default routes provided by the ISP. The second connection is something called a “dual homing environment.” This is now more common in the majority of companies that do not use BGP. We generally have two links connecting to service portals. What is the benefit of connecting to two links? The main benefit is that if one of the links fails, we can go to this router and enter IP route zero here, as well as zero, zero, zero. I’ll say next stop addresses 1, 1, and 2, which is my first link, and I’m going to confirm one more default route. On the same router, I’m going to say IP, and then I’m going to say two with some different administrator distance. Now, by default, it will use the first link because of administrative distance, default is one. If that primary link fails, it is going to use the second link. Now, the main advantage we get from this dual home site connection is redundancy. The major advantage We can now do this in cases where you are connecting to two different routers or have a connection to two different routers for redundancy. And we generally run some HSRP, VR, and RPG to have multiple gateways again in the land, so that’s something different in terms of configurations, and then that’s how we do it in the case of dual homing. And in this scenario also, it’s not really recommended to use PGP. So we’re not going to run any BGP because we’re just connecting to the same service for orders. As a result, if any of the links fails, only the backup link is used, and we have very little path manipulation here. As you can see, we have very limited path manipulation options. So in this scenario also, it’s not recommended to use PGP, but when it comes to multihoming environments, like in this scenario, we can use BGP here. Now, in a multi-homing environment, with two exit paths from the same as this is my S, it’s connecting to ISP one and connecting to ISP two, and I can tell some of the traffic. Let’s say I have four networks, such as a 10-dot network, a 20-dot network, a 30-dot network, and a 40-dot network, and I know that the 10-dot network should use a primary ISP and the other two, which are my 3-dot networks, can use an alternate secondary ISP. If this link fails, everyone is routed through the secondary link. So this is what we can do: some path manipulation where we can decide how the traffic should enter the IS or exit the S. So we call this path manipulation. So in this type of scenario, BGP is more typically used, and we generally exchange the routes. In this scenario, we’re not simply getting the default route; we are not sending only the default route here. Instead, we are also exchanging all the routes from the service folder through the PGP protocol. But in the case of the previous two networks, we are just sending the default route. Any unknown packet will be sent to the ISP, and the ISP will return that packet based on the static route. But here we are trying to exchange all the routes through BGP or maybe some specific routes and then we have something called dual home environment.  We just got some extra links that will provide more redundancy—the most redundancy. And, once again, we can use BGP to perform path manipulations. So these are the four different types of implementations we can use, and especially in the first two networks, the first two types BGP is not really recommended because in the first two scenarios, what we do is simply configure a default route, like default routing, if you remember the basic default routing in the basic CCN studies. And from the ISP side, we get a static route for the public IP because our router also does some NAT translations. That’s how it works. But whereas in the case of the remaining two setups, we are not only connecting to ISP one for redundancy, we are also going to connect to ISP two also. So I can tell you that some networks should go from here and the rest should go from there. If you want to do that path manipulation within your application, we need to run BGP. Anyway, the service folder is going to run BGP. Now you need to have a registered number, and based on that number, I can exchange the routes between them from the service folder, and I can do some path manipulation where I will decide how my routes or how my network should go to the internet via this route or that route. So this is something we get if you are using BGP, especially in the big networks, where they use BGP for these things. Or if you are working for a service folder, you may have some customers with this type of connection and need to do some path manipulation. That okay. For customer 1, they should exit from one path, and for customer 2, they should use the alternate path. So that’s something really useful. BGP is designed for that only. So the next thing we need to figure out is how we’re going to connect to the service and exchange routes. Now there are three different methods of exchanging routes, communicating with the service forwarder or routes, or communicating via the internet. So one option is to just have a single default route, and the next option is to have some more specific routes plus the default route. And the third option is to have the complete routing table. Now, how it is going to work. So I’ve got some diagrams here. So these diagrams look a little bit complicated. So I’ll try to make it simple here by using my own diagrams, as you can see. So what I’m going to do is use this as my autonomous system number, and I’ve got plenty of routers, probably more than 50 or 60 routers connected in my S, and my S number is 65,060 5000 subneumber. And I’m going to connect to two different ISPs. Already, I have taken some tools from ISP One and ISP Two. Okay? So one option is what I can do if I want to communicate with my internet or with anyone else outside. Now one option I can simply do isI can simply use a single default route. That is something I can do; all I have to do is go to my router and confirm a default route here pointing to my ISP. And then, from the ISP, he will confirm a static route. And then I’ll redistribute that default route into my IGP. Now, based on the IGP, the same thing I’ll do here Also, I’ll point out a default here, and we have a static route from the ISP that I’ll redistribute into BGP. And now I’m getting the default route from both sides. I’m getting the default route from both sides. So based on the IGP metric, like the OSP of EHRP, it is going to use any one default route as my primary route. If that route goes down, then automatically it is going to use the second route. That is the first option. This option has an advantage, but it also has a disadvantage. Now the advantage of this first type of implementation is that we don’t really need to know the routes. Let’s say I’m going to try to access a Yahoo! or Gmail server on the Internet. I have no reason to keep any information about that Yahoo! or Gmail account. So all I’m going to keep is a default route. So any unknown packet will be sent to the ISP, and the ISP will take it and forward it to Yahoo. And your communication will begin between them. That’s a very good advantage. With that, if you don’t have enough resources, you can really try this. But the drawback is that we cannot do any path manipulations. So path manipulation is not possible in this scenario. Because even though you are going to use two default routes, it’s going to use any one of the route as primary route based on IGP metric. And if that primary route fails, then only the alternate route will be used. That is not possible. That is the main disadvantage here. Now it depends upon whether it’s a very good option. Especially if you don’t have enough resources in your network, you don’t have high-speed routers to maintain the routes to run BGP. You can simply continue with this option. This is extremely beneficial, particularly for small and medium-sized businesses. They go with this kind of solution. That is the first option we can use as a simple default route. The second option is that we can have a default route; that is one thing. We can also have some specific routes. This is an additional method of implementing this one or more implementations, which we can do if you’re using BGP. Now, what I’m going to do is have a default route from this side, and from both sides, I’m going to have a default route that is just like a case. At the same time, I am also receiving some specific routes. Specific routes imply that, for example, I frequently access some of my servers on the Internet. There are some ten servers, or my company’s servers, or anything. I have some regular servers, possibly Yahoo Gmail, but it could be any other server. Now I’ll try to only receive this network information into my BGP table, which means that I have the information for those ten servers or networks in my BGP table. But I am not going to maintain the complete internet routes for everyone. And if so, to reach those ten routes or ten networks, I’m going to receive the information, and I can run BGP. And I believe the 10 and 20 dot networks should share the same primary ISP. Alternatively, I can say they will use an ISP route in my LAN. So, in my case, I have four different networks, and I can manipulate the paths so that when these four networks reach these specific servers or networks, they use ISP 1, and the two remaining networks use ISP 2. So I’m going to do some path manipulation based on those specific routes. And if I’m trying to access any other networks other than this, like maybe I’m trying to access some normal Internet traffic or I’m trying to do something other than accessing these specific servers, they just go with the default route. So in this scenario, I’m getting a default route, which means it is reducing the overhead on my router. I’m not going to instal all the routes at the same time. I have some specific routes that allow me to do path manipulation only for specific destinations or networks. at the same time. For the remaining networks, I am going to use the default route that I have. That’s the second option. Now the third option is exchanging all the internet routes. In the third option, we will not use any default routes; instead, we will exchange everything from the service folder, which means that on my as, any router in my S is connecting to both ISPs, this is my ISP one, and connect to ISP two. I’m receiving all the routes from the service folder, including all the internet routes from this service folder. Now I’ll do path manipulation for each and every network. Any network I go to, I want to access on the internet or any other, as I’ll do my own path manipulation. So based on that path manipulation, it’s going to have the major drawback of being very hard on your routers. Like your routers, they really need a lot of processing. is required to maintain all the route information to run the BGP. And you need to have some high-end devices. So especially this is more appropriate for very big size companies. Or if you’re continuously communicating, especially with ISPs, I can say that the service board does need to maintain each and every route information that comes in. So if you’re running a specific organization—a big server organization—we can go with the second option. In the case of small companies, we generally don’t use PGP; we simply go with the default route with the primary and secondary options. So these are some of the three different types of implementations for which we use the default routes from the service portal, which is very easy on the routers. Internet traffic is routed through the nearest PGP router. So it’s just forwarded based on your IGP information. Or we can just allow for the selection of some paths, with others falling back to the default route. And the third one is that we are going to exchange all the routes, which is very hard on the resources of the router, but again, it’s going to carry the most direct path or the shortest path. It’s going to carry a shorthand. Now, depending upon the requirement, you can go with any one of these options. So we discuss two things here. We attempt to comprehend the various methods of connecting to ISPs, including single-home connections, dual-home connections, and the more appropriate use of a default route without BGP. And if you’re running a multi-homed or dual-homed environment, then it’s really recommended to go with BGP. At the same time, we translate some paths. When you connect to BGP, when you’re running, when you’re exchanging routes through the Internet, the ISP, and the service folder, you can have a single default route coming from the service folder, and we’re not going to run BGP here; we’re just getting the default route from the service folder. Or we can have some specific routes from the service folder and some specific routes, and then the remaining will automatically switch back to the default route, and we can have all the routes exchanged from the service folder. So, three different types of implementations So probably we’ll try to jump into our labs, do some basic configurations in our next video. So these are some of the basic things we need to understand before we start getting into BGP configurations. Leave a Reply How It Works img Step 1. Choose Exam on ExamLabs Download IT Exams Questions & Answers img Step 2. Open Exam with Avanset Exam Simulator Press here to download VCE Exam Simulator that simulates real exam environment img Step 3. Study & Pass IT Exams Anywhere, Anytime!
__label__pos
0.582239
ovito.data This module contains various types of data objects, which are produced and processed within OVITO’s data pipeline system. It also provides the DataCollection container class for such data objects and some additional utility classes to compute neighbor lists to iterate over the bonds of particles. Data container: Data objects: Auxiliary data classes: Utility classes: class ovito.data.DataCollection A DataCollection is a container that holds together multiple data objects, each representing a different facet of a dataset. Data collections are the main entities that are generated and processed in OVITO’s data pipeline system. DataCollection instances are typically returned by the Pipeline.compute() and the FileSource.compute() methods and contain the results of a data pipeline. Within a data collection, you will typically find a bunch of data objects, which collectively form the dataset, for example: All these types derive from the common DataObject base class. A DataCollection comprises two main parts: 1. The objects list, which can hold an arbitrary number of data objects of the types listed above. 2. The attributes dictionary, which stores auxialliary data in the form of simple key-value pairs. Data object access The find() and find_all() methods allow you to look up data objects in the objects list of a data collection by type. For example, to retrieve the SimulationCell from a data collection: data = pipeline.compute() cell = data.find(SimulationCell) The find() method yields None if there is no instance of the given type in the collection. Alternatively, you can use the expect() method, which will instead raise an exception in case the requested object type is not present: cell = data.expect(SimulationCell) It is possible to programmatically add or remove data objects from the data collection by manipulating its objects list. For instance, to populate a new data collection with a SimulationCell object we can write: cell = SimulationCell() mydata = DataCollection() mydata.objects.append(cell) There are certain conventions regarding the numbers and types of data objects that may be present in a data collection. For example, there should never be more than one SimulationCell instance in a data collection. In contrast, there may be an arbitrary number of ParticleProperty instances in a data collection, but they all must have unique names and the same array length. Furthermore, there must always be one ParticleProperty named Position in a data collection, or no ParticleProperty at all. When manipulating the objects list of a data collection directly, it is your responsibility to make sure that these conventions are followed. Particle and bond access To simplify the work with particles and bonds, which are represented by a bunch of ParticleProperty or BondProperty instances, respectively, the DataCollection class provides two special accessor fields. The particles field represents a dictionary-like view of all the ParticleProperty data objects that are contained in a data collection. It thus works like a dynamic filter for the objects list and permits name-based access to individual particle properties: positions = data.particles['Position'] assert(positions in data.objects) Similarly, the bonds field is a dictionary-like view of all the BondProperty instances in a data collection. If you are adding or removing particle or bond properties in a data collection, you should always do so through these accessor fields instead of manipulating the objects list directly. This will ensure that certain invariants are always maintained, e.g. the uniqueness of property names and the consistent size of all property arrays. Attribute access In addition to data objects, which represent complex forms of data, a data collection can store an arbitrary number of attributes, which are simple key-value pairs. The attributes field of the data collection behaves like a Python dictionary and allows you to read, manipulate or newly insert attributes, which are typically numeric values or string values. Data ownership One data object may be part of several DataCollection instances at a time, i.e. it may be shared by several data collections. OVITO’ pipeline system uses shallow data copies for performance reasons and to implement efficient data caching. Modifiers typically manipulate only certain data objects in a collection. For example, the ColorCodingModifier will selectively modify the values of the Color particle property but won’t touch any of the other data objects present in the input data collection. The unmodified data objects will simply be passed through to the output data collection without creating a new copy of the data values. As a consequence of this design, both the input data collection and the output collection of the pipeline may refer to the same data objects. In such a situation, no data collection owns the data objects exclusively anymore. Thus, in general it is not safe to manipulate the contents of a data object in a data collection, because that could lead to unwanted side effects or corruption of data maintained by the pipeline system. For example, modifying the particle positions in a data collection that was returned by a system function is forbidden (or rather discouraged): data = pipeline.compute() positions = data.particles['Position'] with positions: # NEVER DO THIS! You may accidentally be manipulating positions[...] += (0,0,2) # shared data that is owned by the pipeline system! Before manipulating the contents of a data object in any way, it is crucial to ensure that no second data collection is referring to the same object. The copy_if_needed() method helps you ensure that a data object is exclusive owned by a certain data collection: # First, pass the data object from the collection to the copy_if_needed() method. # A data copy is made if and only if the collection is not yet the exclusive owner. positions = data.copy_if_needed( data.particles['Position'] ) # Now it's guaranteed that the data object is not shared by any other collection # and it has become safe to modify its contents: with positions: positions[...] += (0,0,2) # Displacing all particles along z-direction. copy_if_needed() first checks whether the given object is currently shared by more than one data collection. If yes, a deep copy of the object is made and the original object in the data collection is replaced with the copy. Now we can be confident that the copied data object is exclusively owned by the data collection and it’s safe to modify it without risking side effects. attributes A dictionary of key-value pairs that represent global tokens of information which are not associated with any specific data object in the data collection. An attribute is a value of type int, float, or str with a unique identifier name such as "Timestep" or "ConstructSurfaceMesh.surface_area". The attribute name serves as keys for the attributes dictionary of the data collection. Attributes are dynamically generated by modifiers in a data pipeline or by a data source as explained in the following. Attributes loaded from input files The Timestep attribute is loaded from LAMMPS dump files and other simulation file formats that store the simulation timestep. Such input attributes can be retrieved from the .attributes dictionary of a pipeline’s FileSource: >>> pipeline = import_file('snapshot_140000.dump') >>> pipeline.source.attributes['Timestep'] 140000 Other attributes read from an input file are, for example, the key-value pairs found in the header line of extended XYZ files. Dynamically computed attributes Analysis modifiers like the CommonNeighborAnalysisModifier or the ClusterAnalysisModifier output scalar computation results as attributes. The reference documentation of each modifier type lists the attributes it produces. For example, the number of clusters identified by the ClusterAnalysisModifier can be queried as follows: pipeline.modifiers.append(ClusterAnalysisModifier(cutoff = 3.1)) data = pipeline.compute() nclusters = data.attributes["ClusterAnalysis.cluster_count"] Exporting attributes to a text file The ovito.io.export_file() function supports writing attribute values to a text file, possibly as functions of time: export_file(pipeline, "data.txt", "txt", columns = ["Timestep", "ClusterAnalysis.cluster_count"], multiple_frames = True) User-defined attributes The PythonScriptModifier allows you to generate your own attributes that are dynamically computed (typically on the basis of some other input information): pipeline.modifiers.append(CommonNeighborAnalysisModifier()) def compute_fcc_fraction(frame, input, output): n_fcc = input.attributes['CommonNeighborAnalysis.counts.FCC'] output.attributes['fcc_fraction'] = n_fcc / input.particles.count pipeline.modifiers.append(PythonScriptModifier(function = compute_fcc_fraction)) print(pipeline.compute().attributes['fcc_fraction']) The CommonNeighborAnalysisModifier used in the example above generates the attribute CommonNeighborAnalysis.counts.FCC to report the number of atoms that form an FCC lattice. To compute the fraction of FCC atoms from that, we need to divide by the total number of atoms in the system. To this end, we insert a PythonScriptModifier into the pipeline behind the CommonNeighborAnalysisModifier. Our custom modifier function generates a new attribute named fcc_fraction. Finally, the value of the user-defined attribute can be queried from the pipeline or exported to a text file using the export_file() function as described above. bonds Returns a BondsView, which allows to access all BondProperty objects stored in this data collection by name. Furthermore, it provides convenience functions for adding new bond properties to the collection. copy_if_needed(obj, deepcopy=False) Makes a copy of a data object from this data collection if the object is not exclusively owned by the data collection but shared with other collections. After the method returns, the data object is exclusively owned by the collection and it becomes safe to modify the object without causing unwanted side effects. Typically, this method is used in custom modifier functions (see PythonScriptModifier) that participate in OVITO’s data pipeline system. A modifier function receives an input collection of data objects from the system. However, modifying these input objects in place is not allowed, because they are owned by the pipeline and modifying them would lead do unexpected side effects. This is where this method comes into play: It makes a copy of a given data object and replaces the original in the data collection with the copy. The caller can now safely modify this copy in place, because no other data collection can possibly be referring to it. The copy_if_needed() method first checks if obj, which must be a data object from this data collection, is shared with some other data collection. If yes, it creates an exact copy of obj and replaces the original in this data collection with the copy. Otherwise it leaves the object as is, because it is already exclusively owned by this data collection. Parameters:obj (DataObject) – The object from this data collection to be copied if needed. Returns:An exact copy of obj if it was shared with some other data collection. Otherwise the original object is returned. expect(object_type) Looks up the first data object in this collection of the given class type. Raises a KeyError if there is no instance matching the type. Use find() instead to test if the data collection contains the given type of data object. Parameters:object_type – The DataObject subclass specifying the type of object to find. Returns:The first instance of the given class or its subclasses from the objects list. find(object_type) Looks up the first data object from this collection of the given class type. Parameters:object_type – The DataObject subclass that should be looked up. Returns:The first instance of the given class or its subclasses from the objects list; or None if there is no instance. Method implementation: def find(self, object_type): for o in self.objects: if isinstance(o, object_type): return o return None find_all(object_type) Looks up all data objects from this collection of the given class type. Parameters:object_type – The DataObject subclass that should be looked up. Returns:A Python list containing all instances of the given class or its subclasses from the objects list. Method implementation: def find_all(self, object_type): return [o for o in self.objects if isinstance(o, object_type)] objects The list of data objects that make up the data collection. Data objects are instances of DataObject-derived classes, for example ParticleProperty, Bonds or SimulationCell. You can add or remove objects from the objects list to insert them or remove them from the DataCollection. However, it is your responsibility to ensure that the data objects are all in a consistent state. For example, all ParticleProperty objects in a data collection must have the same lengths at all times, because the length implicitly specifies the number of particles. The order in which data objects are stored in the data collection does not matter. Note that the DataCollection class also provides convenience views of the data objects contained in the objects list: For example, the particles dictionary lists all ParticleProperty instances in the data collection by name and the bonds does the same for all BondProperty instances. Since these dictionaries are views, they always reflect the current contents of the master objects list. particles Returns a ParticlesView object representing the particles, which provides name-based access to the ParticleProperty instances stored in this DataCollection. Furthermore, the view object provides convenience functions for creating new particle properties. class ovito.data.DataObject Abstract base class for all data objects. A DataObject represents a data fragment processed and produced by a data pipeline. See the ovito.data module for a list of the different types of data objects in OVITO. Typically, a data object is contained in a DataCollection together with other data objects, forming a data set. Furthermore, data objects may be shared by several data collections. Certain data objects are associated with a DataVis object, which is responsible for generating the visual representation of the data and rendering it in the viewports. The vis field provides access to the attached visual element, which can be configured as needed to change the visual appearance of the data. The different visual element types of OVITO are all documented in the ovito.vis module. vis The DataVis element associated with this data object, which is responsible for rendering the data visually. If this field contains None, the data is non-visual and doesn’t appear in rendered images or the viewports. class ovito.data.SimulationCell Base class:ovito.data.DataObject Stores the geometric shape and the boundary conditions of the simulation cell. A SimulationCell data object is typically part of a DataCollection and can be retrieved through its expect() method: from ovito.io import import_file from ovito.data import SimulationCell pipeline = import_file("input/simulation.dump") cell = pipeline.compute().expect(SimulationCell) # Print cell matrix to the console. [...] is for casting to Numpy. print(cell[...]) The simulation cell geometry is stored as a 3x4 matrix (with column-major ordering). The first three columns of the matrix represent the three cell vectors and the last column is the position of the cell’s origin. For two-dimensional datasets, the is2D flag ist set. In this case the third cell vector and the z-coordinate of the cell origin are ignored by OVITO. import numpy.linalg # Compute simulation cell volume by taking the determinant of the # left 3x3 submatrix of the cell matrix: vol = abs(numpy.linalg.det(cell[0:3,0:3])) # The SimulationCell.volume property computes the same value. assert numpy.isclose(cell.volume, vol) The SimulationCell object behaves like a standard Numpy array of shape (3,4). Data access is read-only, however. If you want to manipulate the cell vectors, you have to use a with compound statement as follows: # Make cell twice as large along the Y direction by scaling the second cell vector: with cell: cell[:,1] *= 2.0 A SimulationCell instance are always associated with a corresponding SimulationCellVis, which controls the visual appearance of the simulation box. It can be accessed through the vis attribute inherited from DataObject. # Change display color of simulation cell to red: cell.vis.rendering_color = (1.0, 0.0, 0.0) is2D Specifies whether the system is two-dimensional (true) or three-dimensional (false). For two-dimensional systems the PBC flag in the third direction (z) and the third cell vector are ignored. Default:false pbc A tuple of three boolean values, which specify periodic boundary flags of the simulation cell along each cell vector. volume Computes the volume of the three-dimensional simulation cell. The volume is the absolute value of the determinant of the 3x3 submatrix formed by the three cell vectors. volume2D Computes the area of the two-dimensional simulation cell (see is2D). class ovito.data.Property Base class:ovito.data.DataObject Stores the values for an array of elements (e.g. particle or bonds). In OVITO’s data model, an arbitrary number of properties can be associated with data elements such as particle or bonds, each property being represented by a Property object. A Property is basically an array of values whose length matches the number of data elements. Property is the common base class for the ParticleProperty and BondProperty specializations. Data access A Property object behaves almost like a Numpy array. For example, you can access the property value for the i-th data element using indexing: property = data.particles['Velocity'] print('Velocity vector of first particle:', property[0]) print('Z-velocity of second particle:', property[1,2]) for v in property: print(v) Element indices start at zero. Properties can be either vectorial (e.g. velocity vectors are stored as an N x 3 array) or scalar (1-d array of length N). Length of the first array dimension is in both cases equal to the number of data elements (number of particles in the example above). Array elements can either be of data type float or int. If necessary, you can cast a Property to a standard Numpy array: velocities = numpy.asarray(property) No data is copied during the conversion; the Numpy array will refer to the same memory as the Property. By default, the memory of a Property is write-protected. Thus, trying to modify property values will raise an error: property[0] = (0, 0, -4) # "ValueError: assignment destination is read-only" A direct modification is prevented by the system, because OVITO’s data pipeline uses shallow data copies and needs to know when data objects are being modified. Only then results that depend on the changing data can be automatically recalculated. We need to explicitly announce a modification by using Python’s with statement: with property: property[0] = (0, 0, -4) Within the with compound statement, the array is temporarily made writable, allowing us to alter the per-particle data stored in the Property object. components The number of vector components if this is a vector property; or 1 if this is a scalar property. name The name of the property. class ovito.data.PlotData Base class:ovito.data.DataObject This data object type stores a series of XY data points. It is generated by certain modifiers that compute histograms and function plots, e.g. the CoordinationNumberModifier and the HistogramModifier. title The title of the plot. class ovito.data.SurfaceMesh Base class:ovito.data.DataObject This data object type stores a triangle mesh describing a surface or, more precisely, a two-dimensional manifold that is closed and orientable. Typically, surface meshes are produced by modifiers such as the ConstructSurfaceModifier, CreateIsosurfaceModifier or CoordinationPolyhedraModifier. Periodic domains What is special about surface meshes is that they may be embedded in a periodic domain, i.e. a simulation cell with periodic boundary conditions. That means triangles of a surface mesh can connect vertices on opposite sides of a simulation box and wrap around correctly. OVITO takes care of computing the intersections of the periodic surface with the box boundaries and automatically produces a non-periodic representation of the triangle mesh when it comes to visualizing the surface. The domain the surface mesh is embedded in is represented by a SimulationCell object, which is attached to the SurfaceMesh instance. You can access it through the domain attribute. Visual representation The visual appearance of the surface mesh in rendered images is controlled by its attached SurfaceMeshVis element, which is accessible through the vis base class attribute. Interior and exterior region As surface meshes are closed orientable manifolds, one can define an interior and an exterior region of space that are separated by the manifold. For example, if the surface mesh is constructed by the ConstructSurfaceModifier from a set of particles, then the region enclosed by the surface is the “solid” region and the outside region is the one containing no particles. It can be that there is no interior region and the exterior region is infinite and fills all space. In this case the surface mesh is degenerate and comprises no triangles. The opposite extreme is also possible in periodic domains: The interior region extends over the entire domain and there is no outside region. Again, the surface mesh will consist of zero triangles in this case. To discriminate between the two situations, the SurfaceMesh class provides the all_interior flag, which is set when the interior region fills the entire periodic domain. The locate_point() method can be used to test whether some point in space belongs to the interior or the exterior region. File export A surface mesh can be written to a file in the form of a conventional triangle mesh. For this, a non-periodic version is produced by truncating triangles at the domain boundaries and generating “cap polygons” to fill the holes that occur at the intersection of the interior region with the domain boundaries. To export the mesh, use the ovito.io.export_file() function and select vtk/trimesh as output format: from ovito.io import import_file, export_file from ovito.data import SurfaceMesh from ovito.modifiers import ConstructSurfaceModifier # Load a particle set and construct the surface mesh: pipeline = import_file("input/simulation.dump") pipeline.modifiers.append(ConstructSurfaceModifier(radius = 2.8)) mesh = pipeline.compute().expect(SurfaceMesh) # Export the mesh to a VTK file for visualization with ParaView. export_file(mesh, 'output/surface_mesh.vtk', 'vtk/trimesh') Cutting planes An arbitrary number of cutting planes can be attached to a SurfaceMesh, which allow to cut away parts of the mesh for visualization purposes. This is sometimes useful, if you want to open a hole in a closed surface to allow a look inside. The SurfaceMesh maintains a list of cutting planes, which are accessible through the get_cutting_planes() and set_cutting_planes() methods. Note that the cuts are non-destructive and dynamically computed only on the transient version of the mesh produced for visualization and data export purposes. The SliceModifier, which can act on a SurfaceMesh, performs the slice by simply adding a new entry to the SurfaceMesh’s list of cutting planes. Mesh data access The methods get_vertices(), get_faces() and get_face_adjacency() methods provide access to the internal data of the surface mesh. all_interior Boolean flag indicating that the SurfaceMesh is degenerate and the interior region extends over the entire domain. domain The SimulationCell describing the (possibly periodic) domain in which this surface mesh is embedded in. get_cutting_planes() Returns a N x 4 array containing the definitions of the N cutting planes attached to this SurfaceMesh. Each plane is defined by its unit normal vector and a signed displacement magnitude, which determines the plane’s distance from the coordinate origin along the normal, giving four numbers per plane in total. Those parts of the surface mesh which are on the positive side of the plane (in the direction the normal vector) are cut away. Note that the returned Numpy array is a copy of the internal data stored by the SurfaceMesh. get_face_adjacency() Returns a M x 3 array listing the indices of the three faces that are adjacent to each of the M triangle faces in the mesh. This information can be used to traverse the neighbors of triangle faces. Every triangle face has exactly three neighbors, because surface meshes are closed manifolds. get_faces() Returns a M x 3 array with the vertex indices of the M triangles in the mesh. Note that the returned Numpy array is a copy of the internal data stored by the SurfaceMesh. Also keep in mind that a triangle face can cross domain boundaries if PBCs are used. get_vertices() Returns a N x 3 array with the xyz coordinates of the N vertices in the mesh. Note that the returned Numpy array is a copy of the internal data stored by the SurfaceMesh. locate_point(pos, eps=1e-6) Determines whether a spatial location is inside the region enclosed by the surface, outside of it, or exactly on the surface itself. Parameters: • pos – The (x,y,z) coordinates of the test point • eps – Numerical precision threshold for point-on-surface test Returns: -1 if pos is inside the region enclosed by the surface, +1 if outside, 0 if exactly on the surface set_cutting_planes(planes) Sets the cutting planes to be applied to this SurfaceMesh. The array planes must follow the same format as the one returned by get_cutting_planes(). class ovito.data.DislocationNetwork Base class:ovito.data.DataObject This data object types stores the network of dislocation lines extracted by a DislocationAnalysisModifier. Instances of this class are associated with a DislocationVis that controls the visual appearance of the dislocation lines. It can be accessed through the vis attribute of the DataObject base class. Example: from ovito.io import import_file, export_file from ovito.modifiers import DislocationAnalysisModifier from ovito.data import DislocationNetwork pipeline = import_file("input/simulation.dump") # Extract dislocation lines from a crystal with diamond structure: modifier = DislocationAnalysisModifier() modifier.input_crystal_structure = DislocationAnalysisModifier.Lattice.CubicDiamond pipeline.modifiers.append(modifier) data = pipeline.compute() total_line_length = data.attributes['DislocationAnalysis.total_line_length'] cell_volume = data.attributes['DislocationAnalysis.cell_volume'] print("Dislocation density: %f" % (total_line_length / cell_volume)) # Print list of dislocation lines: network = data.expect(DislocationNetwork) print("Found %i dislocation segments" % len(network.segments)) for segment in network.segments: print("Segment %i: length=%f, Burgers vector=%s" % (segment.id, segment.length, segment.true_burgers_vector)) print(segment.points) # Export dislocation lines to a CA file: export_file(pipeline, "output/dislocations.ca", "ca") # Or export dislocations to a ParaView file: export_file(pipeline, "output/dislocations.vtk", "vtk/disloc") File export A dislocation network can be written to a data file in the form of polylines using the ovito.io.export_file() function (select the vtk/disloc output format). During export, a non-periodic version is produced by clipping dislocation lines at the domain boundaries. segments The list of dislocation segments in this dislocation network. This list-like object is read-only and contains DislocationSegment objects. class ovito.data.DislocationSegment A single dislocation line from a DislocationNetwork. The list of dislocation segments is returned by the DislocationNetwork.segments attribute. cluster_id The numeric identifier of the crystal cluster of atoms containing this dislocation segment. The true Burgers vector of the segment is expressed in the local coordinate system of this crystal cluster. id The unique identifier of this dislocation segment. is_infinite_line This property indicates whether this segment is an infinite line passing through a periodic simulation box boundary. A segment is considered infinite if it is a closed loop and its start and end points do not coincide. See also the is_loop property. is_loop This property indicates whether this segment forms a closed dislocation loop. Note that an infinite dislocation line passing through a periodic boundary is also considered a loop. See also the is_infinite_line property. length Returns the length of this dislocation segment. points The list of space points that define the shape of this dislocation segment. This is a N x 3 Numpy array, where N is the number of points along the segment. For closed loops, the first and the last point coincide. spatial_burgers_vector The Burgers vector of the segment, expressed in the global coordinate system of the simulation. This vector is calculated by transforming the true Burgers vector from the local lattice coordinate system to the global simulation coordinate system using the average orientation matrix of the crystal cluster the dislocation segment is embedded in. true_burgers_vector The Burgers vector of the segment, expressed in the local coordinate system of the crystal. Also known as the True Burgers vector. class ovito.data.ParticleProperty Base class:ovito.data.Property Stores an array of per-particle values. This class derives from Property, which provides the base functionality shared by all property types in OVITO. In OVITO’s data model, an arbitrary number of properties can be associated with the particles, each property being represented by a separate ParticleProperty object. A ParticleProperty is basically an array of values whose length matches the number of particles. The set of properties currently associated with all particles is exposed by the DataCollection.particles view, which allows accessing them by name and adding new properties. Standard properties OVITO differentiates between standard properties and user-defined properties. The former have a special meaning to OVITO, a prescribed name and data layout. Certain standard properties control the visual representation of particles. Typical examples are the Position property, the Color property and the Radius property. User-defined properties, on the other hand, may have arbitrary names (as long as they do not collide with one of the standard names) and the property values have no special meaning to OVITO, only to you, the user. Whether a ParticleProperty is a standard or a user-defined property is indicated by the value of its type attribute. Creating particle properties New properties can be created and assigned to particles with the ParticlesView.create_property() factory method. User-defined modifier functions, for example, use this to output their computation results. Typed particle properties The standard property 'Particle Type' stores the types of particles encoded as integer values, e.g.: >>> data = node.compute() >>> tprop = data.particles['Particle Type'] >>> print(tprop[...]) [2 1 3 ..., 2 1 2] Here, each number in the property array refers to a defined particle type (e.g. 1=Cu, 2=Ni, 3=Fe, etc.). The defined particle types, each one represented by an instance of the ParticleType auxiliary class, are stored in the types array of the ParticleProperty object. Each type has a unique id, a human-readable name and other attributes like color and radius that control the visual appearance of particles belonging to the type: >>> for type in tprop.types: ... print(type.id, type.name, type.color, type.radius) ... 1 Cu (0.188 0.313 0.972) 0.74 2 Ni (0.564 0.564 0.564) 0.77 3 Fe (1 0.050 0.050) 0.74 IDs of types typically start at 1 and form a consecutive sequence as in the example above. Note, however, that the types list may store the ParticleType objects in an arbitrary order. Thus, in general, it is not valid to directly use a type ID as an index into the types array. Instead, the type_by_id() method should be used to look up the ParticleType: >>> for i,t in enumerate(tprop): # (loop over the type ID of each particle) ... print('Atom', i, 'is of type', tprop.type_by_id(t).name) ... Atom 0 is of type Ni Atom 1 is of type Cu Atom 2 is of type Fe Atom 3 is of type Cu Similarly, a type_by_name() method exists that looks up a ParticleType by name. For example, to count the number of Fe atoms in a system: >>> Fe_type_id = tprop.type_by_name('Fe').id # Determine ID of the 'Fe' type >>> numpy.count_nonzero(tprop == Fe_type_id) # Count particles having that type ID 957 Note that OVITO supports multiple type classifications. For example, in addition to the 'Particle Type' standard particle property, which stores the chemical types of atoms (e.g. C, H, Fe, …), the 'Structure Type' property may hold the structural types computed for atoms (e.g. FCC, BCC, …) maintaining its own list of known structure types in the types array. type The type of the particle property. One of the following constants: Type constant Property name Data type Component names ParticleProperty.Type.User (a user-defined property with a non-standard name) int/float   ParticleProperty.Type.ParticleType Particle Type int   ParticleProperty.Type.Position Position float X, Y, Z ParticleProperty.Type.Selection Selection int   ParticleProperty.Type.Color Color float R, G, B ParticleProperty.Type.Displacement Displacement float X, Y, Z ParticleProperty.Type.DisplacementMagnitude Displacement Magnitude float   ParticleProperty.Type.PotentialEnergy Potential Energy float   ParticleProperty.Type.KineticEnergy Kinetic Energy float   ParticleProperty.Type.TotalEnergy Total Energy float   ParticleProperty.Type.Velocity Velocity float X, Y, Z ParticleProperty.Type.Radius Radius float   ParticleProperty.Type.Cluster Cluster int   ParticleProperty.Type.Coordination Coordination int   ParticleProperty.Type.StructureType Structure Type int   ParticleProperty.Type.Identifier Particle Identifier int   ParticleProperty.Type.StressTensor Stress Tensor float XX, YY, ZZ, XY, XZ, YZ ParticleProperty.Type.StrainTensor Strain Tensor float XX, YY, ZZ, XY, XZ, YZ ParticleProperty.Type.DeformationGradient Deformation Gradient float XX, YX, ZX, XY, YY, ZY, XZ, YZ, ZZ ParticleProperty.Type.Orientation Orientation float X, Y, Z, W ParticleProperty.Type.Force Force float X, Y, Z ParticleProperty.Type.Mass Mass float   ParticleProperty.Type.Charge Charge float   ParticleProperty.Type.PeriodicImage Periodic Image int X, Y, Z ParticleProperty.Type.Transparency Transparency float   ParticleProperty.Type.DipoleOrientation Dipole Orientation float X, Y, Z ParticleProperty.Type.DipoleMagnitude Dipole Magnitude float   ParticleProperty.Type.AngularVelocity Angular Velocity float X, Y, Z ParticleProperty.Type.AngularMomentum Angular Momentum float X, Y, Z ParticleProperty.Type.Torque Torque float X, Y, Z ParticleProperty.Type.Spin Spin float   ParticleProperty.Type.CentroSymmetry Centrosymmetry float   ParticleProperty.Type.VelocityMagnitude Velocity Magnitude float   ParticleProperty.Type.Molecule Molecule Identifier int   ParticleProperty.Type.AsphericalShape Aspherical Shape float X, Y, Z ParticleProperty.Type.VectorColor Vector Color float R, G, B ParticleProperty.Type.ElasticStrainTensor Elastic Strain float XX, YY, ZZ, XY, XZ, YZ ParticleProperty.Type.ElasticDeformationGradient Elastic Deformation Gradient float XX, YX, ZX, XY, YY, ZY, XZ, YZ, ZZ ParticleProperty.Type.Rotation Rotation float X, Y, Z, W ParticleProperty.Type.StretchTensor Stretch Tensor float XX, YY, ZZ, XY, XZ, YZ ParticleProperty.Type.MoleculeType Molecule Type int   type_by_id(id) Looks up the ParticleType with the given numeric ID in the types list. Raises a KeyError if the ID does not exist. type_by_name(name) Looks up the ParticleType with the given name in the types list. If multiple types exists with the same name, the first type is returned. Raises a KeyError if there is no type with such a name. types A (mutable) list of ParticleType instances. Note that the particle types may be stored in arbitrary order in this list. Thus, it is not valid to use a numeric type ID as an index into this list. class ovito.data.ParticleType Represents a particle type or atom type. ParticleType instances are typically part of a typed ParticleProperty, but this class is also used in other contexts, for example to define the list of structural types identified by the PolyhedralTemplateMatchingModifier. color The display color to use for particles of this type. enabled This flag only has a meaning in the context of structure analysis and identification. Modifiers such as the PolyhedralTemplateMatchingModifier or the CommonNeighborAnalysisModifier manage a list of structural types that they can identify (e.g. FCC, BCC, etc.). The identification of individual structure types can be turned on or off by setting their enabled flag. id The unique numeric identifier of the particle type. name The display name of this particle type. This may be an empty string if the type was loaded from an input file that doesn’t contain named particle types. radius The display radius to use for particles of this type. class ovito.data.BondProperty Base class:ovito.data.Property Stores an array of per-bond values. This class derives from Property, which provides the base functionality shared by all property types in OVITO. In OVITO’s data model, an arbitrary set of properties can be associated with bonds, each property being represented by a BondProperty object. A BondProperty is basically an array of values whose length matches the numer of bonds in the data collection (see BondsView.count). BondProperty objects have the same fields and behave the same way as ParticleProperty objects. Both property classes derives from the common Property base class. Please see its documentation on how to access per-bond values. The set of properties currently associated with the bonds is exposed by the DataCollection.bonds view, which allows accessing them by name and adding new properties. Note that the topological definition of bonds, i.e. the connectivity between particles, is stored in the BondProperty named Topology. type The type of the bond property (user-defined or one of the standard types). One of the following constants: Type constant Property name Data type BondProperty.Type.User (a user-defined property with a non-standard name) int/float BondProperty.Type.BondType Bond Type int BondProperty.Type.Selection Selection int BondProperty.Type.Color Color float (3x) BondProperty.Type.Length Length float BondProperty.Type.Topology Topology int (2x) BondProperty.Type.PeriodicImage Periodic Image int (3x) BondProperty.Type.Transparency Transparency float type_by_id(id) Returns the BondType with the given numeric ID from the types list. Raises a KeyError if the ID does not exist. type_by_name(name) Returns the BondType with the given name from the types list. If multiple types exists with the same name, the first type is returned. Raises a KeyError if there is no type with such a name. types A (mutable) list of BondType instances. Note that the bond types may be stored in arbitrary order in this type list. class ovito.data.BondType Describes a bond type. color The display color to use for bonds of this type. id The unique numeric identifier of the bond type. name The display name of this bond type. This may be an empty string if the type was loaded from an input file that doesn’t contain named bond types. class ovito.data.BondsEnumerator Utility class that permits efficient iteration over the bonds connected to specific particles. The constructor takes a DataCollection object as input. From the unordered list of bonds in the data collection, the BondsEnumerator will build a lookup table for quick enumeration of bonds of particular particles. All bonds connected to a given particle can be subsequently visited using the bonds_of_particle() method. Warning: Do not modify the underlying bonds list in the data collection while the BondsEnumerator is in use. Adding or deleting bonds would render the internal lookup table of the BondsEnumerator invalid. Usage example from ovito.io import import_file from ovito.data import BondsEnumerator from ovito.modifiers import ComputePropertyModifier # Load a dataset containing atoms and bonds. pipeline = import_file('input/bonds.data.gz', atom_style='bond') # For demonstration purposes, let a modifier calculate the length of each bond. pipeline.modifiers.append(ComputePropertyModifier(operate_on='bonds', output_property='Length', expressions=['BondLength'])) # Obtain pipeline results. data = pipeline.compute() positions = data.particles['Position'] # array with atomic positions bond_topology = data.bonds['Topology'] # array with bond topology bond_lengths = data.bonds['Length'] # array with bond lengths # Create bonds enumerator object. bonds_enum = BondsEnumerator(data) # Loop over atoms. for particle_index in range(data.particles.count): # Loop over bonds of current atom. for bond_index in bonds_enum.bonds_of_particle(particle_index): # Obtain the indices of the two particles connected by the bond: a = bond_topology[bond_index, 0] b = bond_topology[bond_index, 1] # Bond orientations can be arbitrary (a->b or b->a): assert(a == particle_index or b == particle_index) # Obtain the length of the bond from the 'Length' bond property: length = bond_lengths[bond_index] print("Bond from atom %i to atom %i has length %f" % (a, b, length)) bonds_of_particle() Returns an iterator that yields the indices of the bonds connected to the given particle. The indices can be used to index into the BondProperty arrays. class ovito.data.CutoffNeighborFinder(cutoff, data_collection) A utility class that computes particle neighbor lists. This class allows to iterate over the neighbors of a given particle within a specified cutoff distance. You can use it to build neighbors lists or perform computations that require neighbor information. The constructor takes a positive cutoff radius and a DataCollection containing the input particle positions and the cell geometry (including periodic boundary flags). Once the CutoffNeighborFinder has been constructed, you can call its find() method to iterate over the neighbors of a particle, for example: from ovito.io import import_file from ovito.data import CutoffNeighborFinder # Load input simulation file. pipeline = import_file("input/simulation.dump") data = pipeline.compute() # Initialize neighbor finder object: cutoff = 3.5 finder = CutoffNeighborFinder(cutoff, data) # Prefetch the property array containing the particle type information: ptypes = data.particles['Particle Type'] # Loop over all particles: for index in range(data.particles.count): print("Neighbors of particle %i:" % index) # Iterate over the neighbors of the current particle: for neigh in finder.find(index): print(neigh.index, neigh.distance, neigh.delta, neigh.pbc_shift) # The index can be used to access properties of the current neighbor, e.g. type_of_neighbor = ptypes[neigh.index] Note: In case you rather want to determine the N nearest neighbors of a particle, use the NearestNeighborFinder class instead. find(index) Returns an iterator over all neighbors of the given particle. Parameters:index (int) – The index of the central particle whose neighbors should be iterated. Particle indices start at 0. Returns:A Python iterator that visits all neighbors of the central particle within the cutoff distance. For each neighbor the iterator returns an object with the following attributes: • index: The global index of the current neighbor particle (starting at 0). • distance: The distance of the current neighbor from the central particle. • distance_squared: The squared neighbor distance. • delta: The three-dimensional vector connecting the central particle with the current neighbor (taking into account periodicity). • pbc_shift: The periodic shift vector, which specifies how often each periodic boundary of the simulation cell is crossed when going from the central particle to the current neighbor. The index value returned by the iterator can be used to look up properties of the neighbor particle as demonstrated in the example above. Note that all periodic images of particles within the cutoff radius are visited. Thus, the same particle index may appear multiple times in the neighbor list of a central particle. In fact, the central particle may be among its own neighbors in a sufficiently small periodic simulation cell. However, the computed vector (delta) and PBC shift (pbc_shift) taken together will be unique for each visited image of a neighboring particle. class ovito.data.NearestNeighborFinder(N, data_collection) A utility class that finds the N nearest neighbors of a particle or around a spatial location. The constructor takes the (maximum) number of requested nearest neighbors, N, and a DataCollection containing the input particles and the cell geometry (including periodic boundary flags). N must be a positive integer not greater than 30 (which is the built-in maximum supported by this class). Once the NearestNeighborFinder has been constructed, you can call its find() method to iterate over the sorted list of nearest neighbors of a specific particle, for example: from ovito.io import import_file from ovito.data import NearestNeighborFinder # Load input simulation file. pipeline = import_file("input/simulation.dump") data = pipeline.compute() # Initialize neighbor finder object. # Visit the 12 nearest neighbors of each particle. N = 12 finder = NearestNeighborFinder(N, data) # Prefetch the property array containing the particle type information: ptypes = data.particles['Particle Type'] # Loop over all input particles: for index in range(data.particles.count): print("Nearest neighbors of particle %i:" % index) # Iterate over the neighbors of the current particle, starting with the closest: for neigh in finder.find(index): print(neigh.index, neigh.distance, neigh.delta) # The index can be used to access properties of the current neighbor, e.g. type_of_neighbor = ptypes[neigh.index] Furthermore, the class offers the find_at() method, which lets you determine the N nearest particles around an arbitrary spatial location: # Find particles closest to some spatial point (x,y,z): coords = (0, 0, 0) for neigh in finder.find_at(coords): print(neigh.index, neigh.distance, neigh.delta) Note: In case you rather want to find all neighbor particles within a certain cutoff range of a particle, use the CutoffNeighborFinder class instead. find(index) Returns an iterator that visits the N nearest neighbors of the given particle in order of ascending distance. Parameters:index (int) – The index of the central particle whose neighbors should be iterated. Particle indices start at 0. Returns:A Python iterator that visits the N nearest neighbors of the central particle in order of ascending distance. For each visited neighbor the iterator returns an object with the following attributes: • index: The global index of the current neighbor particle. • distance: The distance of the current neighbor from the central particle. • distance_squared: The squared neighbor distance. • delta: The three-dimensional vector connecting the central particle with the current neighbor (correctly taking into account periodic boundary conditions). The global index returned by the iterator can be used to look up properties of the neighbor as demonstrated in the first example code above. Note that several periodic images of the same particle may be visited. Thus, the same particle index may appear multiple times in the neighbor list of a central particle. In fact, the central particle may be among its own neighbors in a sufficiently small periodic simulation cell. However, the computed neighbor vector (delta) will be unique for each visited image of a neighboring particle. The number of neighbors actually visited may be smaller than the requested number, N, if the system contains too few particles and has no periodic boundary conditions. find_at(coords) Returns an iterator that visits the N nearest particles around a spatial point given by coords in order of ascending distance. Unlike the find() method, which queries the nearest neighbors of a physical particle, the find_at() method allows searching for neareby particles at arbitrary locations in space. Parameters:coords – A (x,y,z) coordinate triplet specifying the spatial location where the N nearest particles should be queried. Returns:A Python iterator that visits the N nearest neighbors in order of ascending distance. For each visited particle the iterator returns an object with the following attributes: • index: The index of the current particle (starting at 0). • distance: The distance of the current neighbor from the query location. • distance_squared: The squared distance to the query location. • delta: The three-dimensional vector from the query point to the current particle (correctly taking into account periodic boundary conditions). If there exists a particle that is exactly located at the query location given by coords, then it will be returned by this function. This is in contrast to the find() function, which does not visit the central particle itself. The number of neighbors actually visited may be smaller than the requested number, N, if the system contains too few particles and has no periodic boundary conditions. class ovito.data.ParticlesView(data_collection) A dictionary view of all ParticleProperty objects in a DataCollection. An instance of this class is returned by DataCollection.particles. It implements the collections.abc.Mapping interface. That means it can be used like a standard read-only Python dict object to access the particle properties by name, e.g.: data = pipeline.compute() positions = data.particles['Position'] has_selection = 'Selection' in data.particles name_list = data.particles.keys() New particle properties can be added with the create_property() method. count This read-only attribute returns the number of particles in the DataCollection. create_property(name, dtype=None, components=None, data=None) Adds a new particle property to the data collection and optionally initializes it with the per-particle data provided by the data parameter. The method returns the new ParticleProperty instance. The method allows to create standard as well as user-defined particle properties. To create a standard particle property, one of the standard property names must be provided as name argument: colors = numpy.random.random_sample(size = (data.particles.count, 3)) data.particles.create_property('Color', data=colors) The length of the provided data array must match the number of particles, which is given by the count attribute. You can also set the values of the property after its construction: prop = data.particles.create_property('Color') with prop: prop[...] = numpy.random.random_sample(size = prop.shape) To create a user-defined particle property, use a non-standard property name: values = numpy.arange(0, data.particles.count, dtype=int) data.particles.create_property('myint', data=values) In this case the data type and the number of vector components of the new property are inferred from the provided data Numpy array. Providing a one-dimensional array creates a scalar property while a two-dimensional array creates a vectorial property. Alternatively, the dtype and components parameters can be specified explicitly if initialization of the property values should happen after property creation: prop = data.particles.create_property('myvector', dtype=float, components=3) with prop: prop[...] = numpy.random.random_sample(size = prop.shape) If the property to be created already exists in the data collection, it is replaced with a new one. The existing per-particle data from the old property is however retained if data is None. Note: If the data collection contains no particles yet, that is, even the Position property is not present in the data collection yet, then the Position standard property can still be created from scratch as a first particle property by the create_property() method. The data array has to be provided in this case to specify the number of particles to create: # An empty data collection to begin with: data = DataCollection() # Create 10 particles with random xyz coordinates: positions = numpy.random.random_sample(size = (10,3)) data.particles.create_property('Position', data=positions) After the initial Positions property has been created, the number of particles is now specified and any subsequently added properties must have the exact same length. Parameters: • name – Either a standard property type constant or a name string. • data – An optional data array with per-particle values for initializing the new property. The size of the array must match the particle count in the data collection and the shape must be consistent with the number of components of the property. • dtype – The element data type when creating a user-defined property. Must be either int or float. • components (int) – The number of vector components when creating a user-defined property. Returns: The newly created ParticleProperty class ovito.data.BondsView(data_collection) This class provides a dictionary view of all BondProperty objects in a DataCollection. An instance is returned by the bonds attribute of the data collection: data = pipeline.compute() print("Number of bonds:", data.bonds.count) The count attribute of the BondsView class reports the number of bonds. Bond properties Bonds can possess an arbitrary set of bond properties, just like particles possess particle properties. The values of each bond property are stored in a separate BondProperty data object. The BondsView class operates like a Python dictionary and provides access to all BondProperty objects based on their unique name: print("Bond property names:") print(data.bonds.keys()) if 'Length' in data.bonds: length_prop = data.bonds['Length'] assert(len(length_prop) == data.bonds.count) New bond properties can be added using the create_property() method. Removal of a property is possible by deleting it from the DataCollection.objects list. Bond topology If bonds exist in a data collection, then the Topology bond property is always present. It has the special role of defining the connectivity between particles in the form of a N x 2 array of indices into the particles list. In other words, each bond is defined by a pair of particle indices. topology = data.bonds['Topology'] for a,b in topology: print("Bond from particle %i to particle %i" % (a,b)) Bonds are stored in no particular order. If you need to enumerate all bonds connected to a certain particle, you can use the BondsEnumerator utility class for that. Bond display settings The Topology bond property has a BondsVis element attached to it, which controls the visual appearance of the bonds in rendered images. It can be accessed through the vis attribute: data.bonds['Topology'].vis.enabled = True data.bonds['Topology'].vis.shading = BondsVis.Shading.Flat data.bonds['Topology'].vis.width = 0.3 Computing bond vectors Since each bond is defined by two indices into the particles array, we can use this to determine the corresponding spatial bond vectors. They can be computed from the positions of the particles: topology = data.bonds['Topology'] positions = data.particles['Position'] bond_vectors = positions[topology[:,1]] - positions[topology[:,0]] Here, the first and the second column of the bonds topology array are used to index into the particle positions array. The subtraction of the two indexed arrays yields the list of bond vectors. Each vector in this list points from the first particle to the second particle of the corresponding bond. Finally, we might have to correct for the effect of periodic boundary conditions when a bond connects two particles on opposite sides of the box. OVITO keeps track of such cases by means of the the special Periodic Image bond property. It stores a shift vector for each bond, specifying the directions in which the bond crosses periodic boundaries. We can use this information to correct the bond vectors computed above. This is done by adding the product of the cell matrix and the shift vectors from the Periodic Image bond property: cell = data.expect(SimulationCell) bond_vectors += numpy.dot(cell[:,:3], data.bonds['Periodic Image'].T).T The shift vectors array is transposed here to facilitate the transformation of the entire array of vectors with a single 3x3 cell matrix. To summarize: In the two code snippets above we have performed the following calculation for every bond (a, b) in parallel: v = x(b) - x(a) + dot(H, pbc) where H is the cell matrix and pbc is the bond’s PBC shift vector of the form (nx, ny, nz). count This read-only attribute returns the number of bonds in the DataCollection. It always matches the lengths of all BondProperty arrays in the data collection. create_property(name, dtype=None, components=None, data=None) Adds a new bond property to the data collection and optionally initializes it with the per-bond data provided by the data parameter. The method returns the new BondProperty instance. The method can create standard and user-defined bond properties. To create a standard bond property, one of the standard property names must be provided as name argument: colors = numpy.random.random_sample(size = (data.bonds.count, 3)) data.bonds.create_property('Color', data=colors) The size of the provided data array must match the number of bonds in the data collection given by the count attribute. You can also set the property values after construction: prop = data.bonds.create_property('Color') with prop: prop[...] = numpy.random.random_sample(size = prop.shape) To create a user-defined bond property, use a non-standard property name: values = numpy.arange(0, data.bonds.count, dtype=int) data.bonds.create_property('myint', data=values) In this case the data type and the number of vector components of the new property are inferred from the provided data Numpy array. Providing a one-dimensional array creates a scalar property while a two-dimensional array creates a vectorial property. Alternatively, the dtype and components parameters can be specified explicitly if initialization of the property values should happen after property creation: prop = data.bonds.create_property('myvector', dtype=float, components=3) with prop: prop[...] = numpy.random.random_sample(size = prop.shape) If the property to be created already exists in the data collection, it is replaced with a new one. The existing per-bond data from the old property is however retained if data is None. Note: If the data collection contains no bonds yet, that is, even the Topology property is not present in the data collection yet, then the Topology property can still be created from scratch as a first bond property by the create_property() method. The data array has to be provided in this case to specify the number of bonds to create. After the initial Topology property has been created, the number of bonds is now specified and any subsequently added properties must have the exact same length. Parameters: • name – Either a standard property type constant or a name string. • data – An optional data array with per-bond values for initializing the new property. The size of the array must match the bond count and the shape must be consistent with the number of components of the property. • dtype – The element data type when creating a user-defined property. Must be either int or float. • components (int) – The number of vector components when creating a user-defined property. Returns: The newly created BondProperty class ovito.data.TrajectoryLines Base class:ovito.data.DataObject This data object type stores the traced trajectory lines of a group of particles. It is typically generated by a GenerateTrajectoryLinesModifier. Each TrajectoryLines object is associated with a TrajectoryVis element, which controls the visual appearance of the trajectory lines in rendered images. It is accessible through the vis base class attribute.
__label__pos
0.961639
Managing Transformations It is sometimes very difficult to have an overview of all the transformations that are required to calculate another transformation. Suppose you have a robot with a camera that can observe the robot’s end-effector and an object that we want to manipulate. We would like to know the position of the end-effector in the object’s frame so that we can control it. The TransformManager can handle this for you. """ ====================== Transformation Manager ====================== In this example, we will use the TransformManager to infer a transformation automatically. """ print(__doc__) import numpy as np import matplotlib.pyplot as plt from pytransform3d import rotations as pr from pytransform3d import transformations as pt from pytransform3d.transform_manager import TransformManager random_state = np.random.RandomState(0) ee2robot = pt.transform_from_pq( np.hstack((np.array([0.4, -0.3, 0.5]), pr.random_quaternion(random_state)))) cam2robot = pt.transform_from_pq( np.hstack((np.array([0.0, 0.0, 0.8]), pr.q_id))) object2cam = pt.transform_from( pr.active_matrix_from_intrinsic_euler_xyz(np.array([0.0, 0.0, -0.5])), np.array([0.5, 0.1, 0.1])) tm = TransformManager() tm.add_transform("end-effector", "robot", ee2robot) tm.add_transform("camera", "robot", cam2robot) tm.add_transform("object", "camera", object2cam) ee2object = tm.get_transform("end-effector", "object") ax = tm.plot_frames_in("robot", s=0.1) ax.set_xlim((-0.25, 0.75)) ax.set_ylim((-0.5, 0.5)) ax.set_zlim((0.0, 1.0)) plt.show() (Source code, png, hires.png, pdf) _images/plot_transform_manager.png We can also export the underlying graph structure as a PNG with tm.write_png(filename) _images/graph.png A subclass of TransformManager is UrdfTransformManager which can load robot definitions from URDF files. The same class can be used to display collision objects or visuals from URDF files. The library trimesh will be used to load meshes. Here is a simple example with one visual that is used for two links: """ ================ URDF with Meshes ================ This example shows how to load a URDF with STL meshes. This example must be run from within the examples folder or the main folder because it uses a hard-coded path to the URDF file and the meshes. """ import os import matplotlib.pyplot as plt from pytransform3d.urdf import UrdfTransformManager BASE_DIR = "test/test_data/" data_dir = BASE_DIR search_path = "." while (not os.path.exists(data_dir) and os.path.dirname(search_path) != "pytransform3d"): search_path = os.path.join(search_path, "..") data_dir = os.path.join(search_path, BASE_DIR) tm = UrdfTransformManager() with open(os.path.join(data_dir, "simple_mechanism.urdf"), "r") as f: tm.load_urdf(f.read(), mesh_path=data_dir) tm.set_joint("joint", -1.1) ax = tm.plot_frames_in( "lower_cone", s=0.1, whitelist=["upper_cone", "lower_cone"], show_name=True) ax = tm.plot_connections_in("lower_cone", ax=ax) tm.plot_visuals("lower_cone", ax=ax) ax.set_xlim((-0.1, 0.15)) ax.set_ylim((-0.1, 0.15)) ax.set_zlim((0.0, 0.25)) plt.show() (Source code, png, hires.png, pdf) _images/plot_urdf_with_meshes.png
__label__pos
0.905399
glibmm 2.80.0 Public Types | Public Member Functions | Static Public Member Functions | Protected Attributes | List of all members Glib::Module Class Reference Dynamic loading of modules. More... #include <glibmm/module.h> Public Types enum class  Flags {   LAZY = 1 << 0 ,   LOCAL = 1 << 1 ,   MASK = 0x03 }   Public Member Functions  Module (const std::string &file_name, Flags flags=Flags(0))  Opens a module.    Module (const Module &)=delete   Moduleoperator= (const Module &)=delete    Module (Module &&other) noexcept   Moduleoperator= (Module &&other) noexcept   virtual ~Module ()  Close a module.    operator bool () const  Check whether the module was found.   void make_resident ()  Ensures that a module will never be unloaded.   bool get_symbol (const std::string &symbol_name, void *&symbol) const  Gets a symbol pointer from the module.   std::string get_name () const  Get the name of the module.   GModulegobj ()   const GModulegobj () const   Static Public Member Functions static bool get_supported ()  Checks if modules are supported on the current platform.   static std::string get_last_error ()  Gets a string describing the last module error.   static std::string build_path (const std::string &directory, const std::string &module_name)  A portable way to build the filename of a module.   Protected Attributes GModulegobject_   Detailed Description Dynamic loading of modules. These functions provide a portable way to dynamically load object files (commonly known as 'plug-ins'). The current implementation supports all systems that provide an implementation of dlopen() (e.g. Linux/Sun), as well as HP-UX via its shl_load() mechanism, and Windows platforms via DLLs. Member Enumeration Documentation ◆ Flags Enumerator LAZY  Specifies that symbols are only resolved when needed. The default action is to bind all symbols when the module is loaded. LOCAL  Specifies that symbols in the module should not be added to the global name space. The default action on most platforms is to place symbols in the module in the global name space, which may cause conflicts with existing symbols. MASK  Mask for all flags. Constructor & Destructor Documentation ◆ Module() [1/3] Glib::Module::Module ( const std::string file_name, Flags  flags = Flags(0)  ) explicit Opens a module. If the module has already been opened, its reference count is incremented. If not, the module is searched in the following order: 1. If file_name exists as a regular file, it is used as-is; else 2. If file_name doesn't have the correct suffix and/or prefix for the platform, then possible suffixes and prefixes will be added to the basename till a file is found and whatever is found will be used; else 3. If file_name doesn't have the ".la"-suffix, ".la" is appended. Either way, if a matching .la file exists (and is a libtool archive) the libtool archive is parsed to find the actual file name, and that is used. At the end of all this, we would have a file path that we can access on disk, and it is opened as a module. If not, file_name is opened as a module verbatim in the hopes that the system implementation will somehow be able to access it. Use operator bool() to see whether the operation succeeded. For instance, Glib::Module module("plugins/helloworld"); if(module) { void* func = nullptr; bool found = get_symbol("some_function", func); } Dynamic loading of modules. Definition module.h:45 bool get_symbol(const std::string &symbol_name, void *&symbol) const Gets a symbol pointer from the module. Parameters file_nameThe name or path to the file containing the module, or an empty string to obtain a module representing the main program itself. flagsThe flags used for opening the module. ◆ Module() [2/3] Glib::Module::Module ( const Module ) delete ◆ Module() [3/3] Glib::Module::Module ( Module &&  other) noexcept ◆ ~Module() virtual Glib::Module::~Module ( ) virtual Close a module. The module will be removed from memory, unless make_resident has been called. Member Function Documentation ◆ build_path() static std::string Glib::Module::build_path ( const std::string directory, const std::string module_name  ) static A portable way to build the filename of a module. The platform-specific prefix and suffix are added to the filename, if needed, and the result is added to the directory, using the correct separator character. The directory should specify the directory where the module can be found. It can be an empty string to indicate that the module is in a standard platform-specific directory, though this is not recommended since the wrong module may be found. For example, calling build_path() on a Linux system with a directory of /lib and a module_name of "mylibrary" will return /lib/libmylibrary.so. On a Windows system, using \Windows as the directory it will return \Windows\mylibrary.dll. Parameters directoryThe directory the module is in module_nameThe name of the module Returns The system-specific filename of the module Deprecated: 2.76: You will get the wrong results most of the time. Use the constructor instead with module_name as the basename of the file_name argument. ◆ get_last_error() static std::string Glib::Module::get_last_error ( ) static Gets a string describing the last module error. Returns The error string ◆ get_name() std::string Glib::Module::get_name ( ) const Get the name of the module. Returns The name of the module ◆ get_supported() static bool Glib::Module::get_supported ( ) static Checks if modules are supported on the current platform. Returns true if available, false otherwise ◆ get_symbol() bool Glib::Module::get_symbol ( const std::string symbol_name, void *&  symbol  ) const Gets a symbol pointer from the module. Parameters symbol_nameThe name of the symbol to lookup symbolA pointer to set to the symbol Returns True if the symbol was found, false otherwise. ◆ gobj() [1/2] GModule * Glib::Module::gobj ( ) inline ◆ gobj() [2/2] const GModule * Glib::Module::gobj ( ) const inline ◆ make_resident() void Glib::Module::make_resident ( ) Ensures that a module will never be unloaded. Any calls to the Glib::Module destructor will not unload the module. ◆ operator bool() Glib::Module::operator bool ( ) const explicit Check whether the module was found. ◆ operator=() [1/2] Module & Glib::Module::operator= ( const Module ) delete ◆ operator=() [2/2] Module & Glib::Module::operator= ( Module &&  other) noexcept Member Data Documentation ◆ gobject_ GModule* Glib::Module::gobject_ protected
__label__pos
0.707061
Charts & Graphics in Excel 2010 for Windows Excel 2010 (Windows)    |    Intermediate • 15 videos | 1h 2m 26s • Earns a Badge Using charts is an effective way to present data in your Excel spreadsheet. Discover how to insert, format, and customize different chart types in Excel, and how to insert and work with trendlines. WHAT YOU WILL LEARN • Inserting your first chart Labeling your chart Formatting your chart Customizing your chart axes Customizing your chart titles Creating a chart template Presenting negative values in your chart Creating a pie chart • Creating a scatter chart Creating a bubble chart Creating a radar chart Combining two types of chart Creating a comparative chart using symbols Inserting trendlines Customizing your trend lines IN THIS COURSE • 3m 28s Charts can be used to analyze and present your data in a visually attractive manner. Excel 2010 comes complete with a large number of different chart types, all of which can be adjusted to suit your preferences. FREE ACCESS • 4m 36s Your Excel 2010 charts will often contain a large amount of information. You can make your charts easier to read by adding and adjusting different information elements, including labels, legends and titles. These elements can all be moved, resized and formatted. FREE ACCESS • Locked 3.  Formatting your chart in Excel 2010 for Windows 4m 8s Once you have created your chart in Excel 2010, you can begin to change its appearance by using the formatting tools. You can, for example, change the color setup for each element, adjust the font styles of any text item that you have added, and change the overall visual style of your chart. FREE ACCESS • Locked 4.  Customizing your chart axes in Excel 2010 for Windows 4m 26s Your chart axes can be modified to display a large amount of information. In Excel 2010, you can adjust the measurement unit, move the maximum and minimum values, and even change where the two axes cross in the chart. FREE ACCESS • Locked 5.  Customizing your chart titles in Excel 2010 for Windows 4m 24s You can use the WordArt formatting tools to create a unique title or chart element. See how to enhance your chart text with WordArt in Excel 2010. FREE ACCESS • Locked 6.  Creating a chart template in Excel 2010 for Windows 3m 10s If you frequently use the same type of chart, then you may find it useful to know how to create a chart template. Once you have created your chart template using your specified formatting options, you can very quickly apply it to your data entries in Excel 2010. FREE ACCESS • Locked 7.  Presenting negative values in your chart in Excel 2010 for Windows 3m 26s If your chart contains negative values, your Excel 2010 chart will need to be adjusted. Once your chart contains negative values, you can change the way in which these individual values are formatted and arranged on your axis. FREE ACCESS • Locked 8.  Creating a pie chart in Excel 2010 for Windows 4m A pie chart is used to compare the different proportions of an overall total. This is particularly useful when you are working with percentage data in Excel 2010. FREE ACCESS • Locked 9.  Creating a scatter chart in Excel 2010 for Windows 4m 2s In Excel 2010, a scatter chart is used to cross reference values and combine different data types. One data type can be represented by the x-axis, and a different data type can be represented by the y-axis. You can then use the legend to explain what each point on the chart means. FREE ACCESS • Locked 10.  Creating a bubble chart in Excel 2010 for Windows 5m 43s In Excel 2010, a bubble chart can be used to cross reference three different types of information in a single chart. Different data values can be visualized according to where the point is placed on the x- and y-axes and the size of the point itself. FREE ACCESS • Locked 11.  Creating a radar chart in Excel 2010 for Windows 5m 43s The radar chart can be used to visually plot five different data values in Excel 2010. Each axis represents a particular value or measurement. Once the chart has been inserted, the individual can compare the form and size of the shape created along the five different axes to obtain an overall representation of the data series. FREE ACCESS • Locked 12.  Combining two types of chart in Excel 2010 for Windows 3m 41s If you have two different types of data, then you may find yourself experiencing a problem with your chart's scale. In Excel 2010, you can create a chart that combines multiple types. You can even create a secondary axis for your other data series. FREE ACCESS • Locked 13.  Creating a comparative chart using symbols in Excel 2010 for Windows 4m 12s If you have a large number of data values in a table and you want to create a visual representation of them, you can create your own symbol chart. In Excel 2010, you can use the REPT formula to insert a certain number of symbols in a cell to represent the total units or value that is being referenced. FREE ACCESS • Locked 14.  Inserting trendlines in Excel 2010 for Windows 3m 31s In Excel 2010, you can insert miniature charts that track the overall trend of a data series. These are called Sparklines and they can help your reader to grasp the general progression of your data series. FREE ACCESS • Locked 15.  Customizing your trend lines in Excel 2010 for Windows 3m 55s Sparklines that have been inserted into your Excel 2010 worksheet can be formatted. You can adjust the color scheme that is applied, activate and deactivate different data points, and even visualize negative values in your trends. FREE ACCESS EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE Skillsoft is providing you the opportunity to earn a digital badge upon successful completion on some of our courses, which can be shared on any social network or business platform. Digital badges are yours to keep, forever.
__label__pos
0.832424
| How To Tell If Someone Is Ignoring You On Snapchat Asenqua Tech is reader-supported. When you buy through links on our site, we may earn an affiliate commission. ✎ Key Takes: Determining if someone is ignoring you on Snapchat involves observing their activities, including posting stories, viewing your snaps, or interacting with others, especially if their location on the Snap Map does not align with their actions. If you suspect someone is ignoring you, it’s crucial to have an open conversation with them. This allows you to gain insights into the reasons behind their behavior, whether they are personal or not, and helps clarify the situation. How To Tell If Someone Is Ignoring You On Snapchat: In addition to speculation, there are specific and reliable methods to determine the answer. Some of these methods are outlined below: 1. Check Snap Score On Snapchat, each time a person sends or receives a snap, their snap score increases by one. Fortunately, the snap score of everyone is visible to their Snap friends. Therefore, you can check whether the person is currently active on Snapchat. If the snap score values are low and remain constant, it indicates that the individual is not actively using Snapchat during that time, and they are not ignoring you. However, if you observe a higher snap score, it suggests that they may be ignoring you and your messages. 🔴 Steps To Follow: Step 1: Open your Snapchat and navigate to the inbox section. Step 2: Open the chat of the person whose snap score you want to check. Step 3: Inside the chat, click on the profile icon/bitmoji at the top. Step 4: Below the name, you will find the snap icon and a numerical value, which represents the snap score of the targeted person. By following these steps, you can easily check the snap score of a person on Snapchat. If the score increases without them responding to your snaps, it indicates that they are engaged in other activities, potentially ignoring your messages. 2. Check the Updated Story This method is the most effective and straightforward way to gauge someone’s engagement with you. If the individual is actively posting updates or stories but remains unresponsive to your messages and snaps, it’s likely that they are intentionally ignoring you. When a person updates a story on Snapchat, they inherently engage with the app, scrolling through snap messages, viewing other users’ stories, and encountering notifications of incoming messages and snaps. Additionally, message notifications appear in a highlighted blue, making them easily noticeable. If you observe someone consistently uploading stories but not responding to your messages or snaps, it suggests they may be intentionally ignoring your communications. 3. Check Snap Map behavior Examining Snap Map behavior involves scrutinizing the individual’s location, indicating whether they have recently opened Snapchat. If the person has accessed Snapchat within the last 24 hours, their last visit time will be visible beneath their bitmoji icon on the Map. By assessing this information, you can deduce whether the person is neglecting your messages and snaps. If the last visit time corresponds to a timeframe shortly after you sent a snap or message, it strongly suggests that the person is intentionally disregarding your communications. Similar Posts
__label__pos
0.929233
Enable contrast version Tutor profile: Lulu Y. Inactive Lulu Y. Statistical Programmer for a Top 20 Pharmaceutical Company Tutor Satisfaction Guarantee Questions Subject: SAS TutorMe Question: What is the difference of "PROC SORT NODUP " and "PROC SORT NODUPKEY"? Inactive Lulu Y. Answer: NODUP will delete the observations which are exactly the same in all variables. NODUPKEY will delete the observations which are identical in the "BY" variable(s). Subject: Calculus TutorMe Question: When we want to know a maximum value of a curve, y = 5+ x^2 (x square), what is the solution by using derivative? Inactive Lulu Y. Answer: Find the derivative of y = 5+ x^2 , which is 2*x, and when this equals to 0, we have the max value of y Subject: Statistics TutorMe Question: When we are using t-test to compare the mean of two groups, for example, we want to find out, whether there is a significant difference in the height of two groups of girls. What assumption we need before we are using t-test? Inactive Lulu Y. Answer: The mean of the height of two groups is asymptoticly normally distributed. Contact tutor Send a message explaining your needs and Lulu will reply soon. Contact Lulu Request lesson Ready now? Request a lesson. Start Lesson FAQs What is a lesson? A lesson is virtual lesson space on our platform where you and a tutor can communicate. You'll have the option to communicate using video/audio as well as text chat. You can also upload documents, edit papers in real time and use our cutting-edge virtual whiteboard. How do I begin a lesson? If the tutor is currently online, you can click the "Start Lesson" button above. If they are offline, you can always send them a message to schedule a lesson. Who are TutorMe tutors? Many of our tutors are current college students or recent graduates of top-tier universities like MIT, Harvard and USC. TutorMe has thousands of top-quality tutors available to work with you.
__label__pos
0.954563
Question web server + separate mail server = same domain name possible? Posted April 12, 2015 16.5k views UbuntuEmailDNSLEMP My situation: 1. one web server for my public website (mydomain.org) 2. one mail server for my website (mail.mydomain.org) My goal: Emails on mail server look like this: [email protected] (not [email protected]) Is this possible? These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues leave a comment, or add your own answer to help others. × Submit an Answer 3 answers Yes, You have to add a MX record to the DNS for “mydomain.org”. The MX record points to “mail.mydomain.org”. When mail servers resolve “[email protected]” they will query the MX record for “mydomain.org” and get told to fetch email from “mail.mydomain.org”. That DNS can point to any IP you want to use as your email server. I like that kind of QA. It’s so clear and Expressive and understandable :)
__label__pos
0.993794
Skip to main content Version: 8.0.2 Selector What is a Selector? A selector is a function that identifies an element in your application. The selector API(full api here) provides methods and properties to select elements on the screen and get their state. You can use selectors to inspect elements state on the page, define Action targets, and Assertion actual values. For example, when we are testing a web page, that web page consists of elements, like username field, password field, label, paragraph of text, etc. We will definitely need to interact with those elements, you will need to provide a Selector to for example to execute I.click() action. This is where Test Maker Selector API comes into play. Test Maker Selector API, is not just about finding elements and stabilizing this process, it provides many Actions to get information about the Selector like value, inner text, style, visibility, etc. The Selector consists of 2 parts mainly: • The Init • The Actions Selector Example Let us explain the image above: 1. We ask Test Maker to find an element with a "username" id. 2. Now that we have found the element, we ask Test Maker to tell us if that element is visible or not. The Init This is where you let test Maker understand how would you want to locate the element on the screen. This is usually a standard syntax that is dependent on the Adapter you are using. For example, if you are using an adapter that tests web pages like the Playwright Adapter, then you have two ways to define the init: 1. CSS 2. XPath CSS and XPath are really rich set of grammar that would help you to find an Element on the screen using many conditions. It is out of the scope of Test Maker documentation to teach CSS and XPath, but there are thousands of articles, videos on the net to cover that. It is highly recommended to go over some of them to grasp the basic concept of them. Here are some helpful resources for you: What is css. The Actions Test Maker Selector API comes with an extensive collection of Actions. These Actions can help you to further pinpoint the Selector or to get information about the Selector. Smart Waiting As with anything in life, things don't always happen when we want them 🙂 And this applies to Test Automation too. Your Application is subject to many variables that affect its performance (like network, bad code, etc). This means that sometimes, Elements on the screen don't show up at the exact time you want them! This is really one of the biggest issues that Test Automation Engineers face. And most of the time they resort to implicit waits, which is a kind of brute force solution for the issue. Something like: await I.wait(200); In the test automation theory, you can find two types of waits: implicit (also known as hard wait) waits and explicit (also known as soft wait) waits. But! Implicit wait introduces a few drawbacks as well. It increases the test script execution time as each of the commands would be ceased to wait for a stipulated amount of time before resuming the execution. Thats why we really recommend applying explicit waits whenever its possible. Explicit waits are used to halt the execution until a particular condition is met or the maximum time has elapsed. Unlike Implicit waits, Explicit waits are applied for a particular instance only. Having an implicit await is not always bad, sometimes you have no option and your Application behaves this way. But if there is a built-in solution for this issue why not to use it 😎. Test Maker makes this a none existing issue with its Smart Waiting System 🤩 The way it works is that we auto wait for your Selector for a defined amount of time (it can be set globally and per Selector). If the element is not found immediately, Test Maker retries its attempts to find the element. Test fails if a defined time limit was reached and the element was never found. If the element is present and was successfully found, test execution will be continued with next steps.
__label__pos
0.821658
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Square and Cube Root Function Families ( Read ) | Analysis | CK-12 Foundation Dismiss Skip Navigation You are viewing an older version of this Concept. Go to the latest version. Square and Cube Root Function Families % Progress Practice Progress % Practice Now Extracting the Equation from a Graph The graph of a cubic function starts at the point (2, 2). It passes through the point (10, -2). What is the equation of the function? Guidance This concept is the opposite of the previous two. Instead of graphing from the equation, we will now find the equation, given the graph. Example A Determine the equation of the graph below. Solution: From the previous two concepts, we know this is a square root function, so the general form is y=a \sqrt{x-h}+k . The starting point is (-6, 1) . Plugging this in for h and k , we have y=a \sqrt{x+6}+1 . Now, find a , using the given point, (-2, 5) . Let’s substitute it in for x and y and solve for a . 5&=a \sqrt{-2+6}+1 \\4&=a \sqrt{4} \\4&=2a \\2&=a The equation is y=2 \sqrt{x+6}+1 . Example B Find the equation of the cubed root function where h=-1 and k=-4 and passes through (-28, -3) . Solution: First, plug in what we know to the general equation; y= \sqrt[3]{x-h}+k \Rightarrow y=a \sqrt[3]{x+1}-4 . Now, substitute x=-28 and y=-3 and solve for a . -3&=a \sqrt[3]{-28+1}-4 \\1&=-3a \\- \frac{1}{3}&=a The equation of the function is y=- \frac{1}{3} \sqrt[3]{x+1}-4 . Example C Find the equation of the function below. Solution: It looks like (0, -4) is (h, k) . Plug this in for h and k and then use the second point to find a . -6&=a \sqrt[3]{1-0}-4 \\-2&=a \sqrt[3]{1} \\-2&=a The equation of this function is y=-2 \sqrt[3]{x}-4 . When finding the equation of a cubed root function, you may assume that one of the given points is (h, k) . Whichever point is on the “bend” is (h, k) for the purposes of this text. Intro Problem Revisit First, plug in what we know to the general equation; y= \sqrt[3]{x-h}+k \Rightarrow y=a \sqrt[3]{x - 2} + 2 . Now, substitute x=10 and y=-2 and solve for a . -2&=a \sqrt[3]{10-2}+2 \\-2&=a\sqrt[3]{8}+2\\-2&=2a +2\\-4&=2a\\a = -2 The equation of the function is y=-2 \sqrt[3]{x-2}+2 . Guided Practice Find the equation of the functions below. 1. 2. 3. Find the equation of a square root equation with a starting point of (-5, -3) and passes through (4, -6) . Answers 1. Substitute what you know into the general equation to solve for a . From Example C, you may assume that (5, 8) is (h, k) and (-3, 7) is (x, y) . y&=a \sqrt[3]{x-5}+8 \\7&=a \sqrt[3]{-3-5}+8 \\-1&=-2a \\\frac{1}{2}&=a The equation of this function is y= \frac{1}{2} \sqrt[3]{x-5}+8 . 2. Substitute what you know into the general equation to solve for a . From the graph, the starting point, or (h, k) is (4, -11) and (13, 1) are a point on the graph. y&=a \sqrt{x-4}-11 \\1&=a \sqrt{13-4}-11 \\12&=3a \\4&=a The equation of this function is y=4 \sqrt{x-4}-11 . 3. Substitute what you know into the general equation to solve for a . From the graph, the starting point, or (h, k) is (-5, -3) and (4, -6) are a point on the graph. y&=a \sqrt{x+5}-3 \\-6&=a \sqrt{4+5}-3 \\-3&=3a \\-1&=a The equation of this function is y=- \sqrt{x+5}-3 . Practice Write the equation for each function graphed below. 1. Write the equation of a square root function with starting point (-6, -3) passing through (10, -15) . 2. Write the equation of a cube root function with (h, k) = (2, 7) passing through (10, 11) . 3. Write the equation of a square root function with starting point (-1, 6) passing through (3, 16) . 4. Write the equation of a cubed root function with (h, k)=(-1, 6) passing through (7, 16) . 5. Write the equation of a cubed root function with (h, k)=(7, 16) passing through (-1, 6) . 6. How do the two equations above differ? How are they the same? Image Attributions Explore More Sign in to explore more, including practice questions and solutions for Square and Cube Root Function Families. Reviews Please wait... Please wait... Original text
__label__pos
0.999629
Skip to content Manage cluster logs Running logs of NebulaGraph cluster services (graphd, metad, storaged) are generated and stored in the /usr/local/nebula/logs directory of each service container by default. View logs To view the running logs of a NebulaGraph cluster, you can use the kubectl logs command. For example, to view the running logs of the Storage service: // View the name of the Storage service Pod, nebula-storaged-0. $ kubectl get pods -l app.kubernetes.io/component=storaged NAME READY STATUS RESTARTS AGE nebula-storaged-0 1/1 Running 0 45h ... // Enter the container storaged of the Storage service. $ kubectl exec -it nebula-storaged-0 -c storaged -- /bin/bash // View the running logs of the Storage service. $ cd /usr/local/nebula/logs Clean logs Running logs generated by cluster services during runtime will occupy disk space. To avoid occupying too much disk space, the NebulaGraph Operator uses a sidecar container to periodically clean and archive logs. To facilitate log collection and management, each NebulaGraph service deploys a sidecar container responsible for collecting logs generated by the service container and sending them to the specified log disk. The sidecar container automatically cleans and archives logs using the logrotate tool. In the YAML configuration file of the cluster instance, set spec.logRotate to enable log rotation and set timestamp_in_logfile_name to false to disable the timestamp in the log file name to implement log rotation for the target service. The timestamp_in_logfile_name parameter is configured under the spec.<graphd|metad|storaged>.config field. By default, the log rotation feature is turned off. Here is an example of enabling log rotation for all services: ... spec: graphd: config: # Whether to include a timestamp in the log file name. # You must set this parameter to false to enable log rotation. # It is set to true by default. "timestamp_in_logfile_name": "false" metad: config: "timestamp_in_logfile_name": "false" storaged: config: "timestamp_in_logfile_name": "false" logRotate: # Log rotation configuration # The number of times a log file is rotated before being deleted. # The default value is 5, and 0 means the log file will not be rotated before being deleted. rotate: 5 # The log file is rotated only if it grows larger than the specified size. The default value is 200M. size: "200M" Collect logs If you don't want to mount additional log disks to back up log files, or if you want to collect logs and send them to a log center using services like fluent-bit, you can configure logs to be output to standard error. The Operator uses the glog tool to log to standard error output. Note Currently, NebulaGraph Operator only collects standard error logs. In the YAML configuration file of the cluster instance, you can configure logging to standard error output in the config and env fields of each service. ... spec: graphd: config: # Whether to redirect standard error to a separate output file. The default value is false, which means it is not redirected. redirect_stdout: "false" # The severity level of log content: INFO, WARNING, ERROR, and FATAL. The corresponding values are 0, 1, 2, and 3. stderrthreshold: "0" env: - name: GLOG_logtostderr # Write log to standard error output instead of a separate file. value: "1" # 1 represents writing to standard error output, and 0 represents writing to a file. image: vesoft/nebula-graphd replicas: 1 resources: requests: cpu: 500m memory: 500Mi service: externalTrafficPolicy: Local type: NodePort version: vmaster metad: config: redirect_stdout: "false" stderrthreshold: "0" dataVolumeClaim: resources: requests: storage: 1Gi storageClassName: ebs-sc env: - name: GLOG_logtostderr value: "1" image: vesoft/nebula-metad ... Last update: January 30, 2024
__label__pos
0.531992
1. Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community. Yikes, my 7970 only scores 61.5 on ASIC Discussion in 'AMD / ATI' started by Frag Maniac, Dec 7, 2012. 1. MxPhenom 216 MxPhenom 216 Corsair Fanboy Joined: Aug 31, 2010 Messages: 10,053 (6.64/day) Thanks Received: 2,270 Location: Seattle, WA People are so up in arms about overclocking these days its almost ridiculous. I mean i know i love overclocking my shit, but graphics cards this generation don't even need it, one because most overclock on their own with boost clocks like kepler, or they are fast enough out of the box, to not warrant the need for it.   INSTG8R says thanks. 2. drdeathx drdeathx Joined: May 14, 2009 Messages: 2,132 (1.07/day) Thanks Received: 479 Location: Chicago burbs Whats funny is this guy is looking at a rating that means nothing. I bet he look at Windows experience score too.:laugh::laugh::laugh::banghead::banghead: Run some real benchmarks and see how the card performs. Then get back to us.   Darkleoco and EarthDog say thanks. 3. Frag Maniac Frag Maniac Joined: Nov 9, 2010 Messages: 2,643 (1.83/day) Thanks Received: 551 If it means nothing why did W1zzard write the GUI that tests the ASIC score by labeling low scores as having lower OCing on air, which is the way most people run them. Lower OCs is also the most common scenario for those reporting their results with such ratings. Average OCs and certain above average are more the exception than the norm for high leakage cards on air. Acting like it's not isn't going to change that. And if you'd paid attention to any of my other posts lately, you'd know I'm already running Cat 12.11, and have already benched the card. If you're going to take cheap shots, at least know wtf you're talking about, and no, you personally I don't care to get back to, because you can't seem to handle discussing it with respect. "I bet he look", you can't even SAY your cheap shots intelligently.   4. INSTG8R INSTG8R Joined: Nov 26, 2004 Messages: 2,910 (0.80/day) Thanks Received: 471 Location: Lost in Norway Well I will just throw in my ASIC Quality on my Sapphire OC Edition(non Boost 1000/1450) It weighs in at 69.8%. I have never made any serious effort to OC it. Only OC I have ever done is thru the CCC which amounted to 1100/1500. The difference was pretty minimal outside of the extra heat. Heck Far Cry 3 has set the new standard for my cards Max Temp. It's easily pushing Mid 70's running FC3 well over 10C then what I'm used to for my Max Temp. I will take the loss of what literally will amount to no more than 5FPS vs. the extra heat any kind of OC is going to generate. You mentioned the Vapor-X IF I was to go for a new card(or if I had waited just a bit longer) THAT would be my card of choice hands down. Better components, better cooler and the kicker is I looked it up couple weeks back and my OC Edition is actually more expensive than the Vapor-X :banghead: But like MxPhenom said, The card doesn't really need any OC anyway. Maybe a year or two down the road the option is there when it might actually struggle.   Last edited: Dec 8, 2012 5. jboydgolfer jboydgolfer Joined: Oct 17, 2012 Messages: 619 (0.84/day) Thanks Received: 106 Location: Amherst , MA ASIC is MOOT man(in terms of O/C'ing), My LCS 7970 had an ASIC of 78 , and OC'd like SHYT. Although , it had TERRIBLE Coil whine, so I returned it, and got a new one, 71 ASIC, So So OC, nothing to write home about.   6. INSTG8R INSTG8R Joined: Nov 26, 2004 Messages: 2,910 (0.80/day) Thanks Received: 471 Location: Lost in Norway The ONLY time I hear any kind of Coil Whine is when I start a game that say for example doesn't use V-Sync in the Intros(My latest that comes to mind is Alan Wake) Well Hellz Yeah the coils are gonna whine when it's pushing almost 6000 FPS. That's what V-sync is for... Coil Whine is pretty aptly named your card is begging you to stop overworking it for no good reason at all ;)   7. W1zzard W1zzard Administrator Staff Member Joined: May 14, 2004 Messages: 14,961 (3.92/day) Thanks Received: 11,762 You guys realize that "ASIC" just means "GPU"? So without the word "quality" after it, it isn't what you are talking about.   Steevo and INSTG8R say thanks. 8. INSTG8R INSTG8R Joined: Nov 26, 2004 Messages: 2,910 (0.80/day) Thanks Received: 471 Location: Lost in Norway Sigh... Yes your right. Now I gotta edit my post...   9. buildzoid buildzoid Joined: Oct 29, 2012 Messages: 1,273 (1.76/day) Thanks Received: 341 Location: CZ My ASIC quality is somewhere around 65 not sure and can't check now but I can get mine up to 1150/1525 on 1.112v   10. m1dg3t m1dg3t Joined: May 22, 2010 Messages: 2,247 (1.39/day) Thanks Received: 513 Location: Canada This just in; Next round of GFX cards will launch with ridiculous pricing due to plethora of users returning working cards claiming "low ASIC" or "coil whine". People make me laugh.   11. Frick Frick Fishfaced Nincompoop Joined: Feb 27, 2006 Messages: 10,783 (3.41/day) Thanks Received: 2,344 So how is it measured? WHAT is measured? EDIT @ midget: If the whine is loud enough I'd return it. That kind of whining always gets to me.   Last edited: Dec 8, 2012 12. Frogger Frogger Joined: Feb 12, 2006 Messages: 2,180 (0.69/day) Thanks Received: 313 oc's like there's no tommorrow MSI R7970 Power Edition @1200/1600 24/7 Flashed to LIGHTING   13. drdeathx drdeathx Joined: May 14, 2009 Messages: 2,132 (1.07/day) Thanks Received: 479 Location: Chicago burbs I know WTF I am talking about. Run 3Dmark 11 and post your damn score. If your getting hung up on this score the community really does not use much your crackers. :nutkick: Who cares about air or watercooloing. Cooling is what it is.   14. Frag Maniac Frag Maniac Joined: Nov 9, 2010 Messages: 2,643 (1.83/day) Thanks Received: 551 ORLY? So you're psychic and know what I have or haven't done without reading all my posts? OK Cpt wiz kid, you can chill now and understand that I don't give feedback to those whom talk down to people they apparently know nothing about. You're still not getting that a pretty tech savvy guy, the REAL W1zzard, wrote that ASIC interpretation chart indicating lower ASIC typically means lower air OC, and higher ASIC higher OC on air. You also seem oblivious to the fact that spending $200 or more on WCing totally changes the cost effectiveness of even buying one 7970 in the first place. Hell, at the $280 they start at, two 7950s would be only $30 more than what I paid for my 7970 plus WCing. Doesn't take a genious to figure out which it the better deal. If it hasn't sunk in by now, I'll just have to ignore your responses because I don't feel like dealing with your attitude. W1zzard's first response tells it all. Best I can hope for is extreme OCing with spendy cooling, and that = not cost effective. Please tolerate our abbreviating ASIC "quality" with just ASIC though W1zzard. Pretty much everyone here knows what is meant when people say ASIC score, and to say quality when some of us clearly aren't actually getting a quality OC is salt in the wound.   15. W1zzard W1zzard Administrator Staff Member Joined: May 14, 2004 Messages: 14,961 (3.92/day) Thanks Received: 11,762 it's like saying "car" when you are talking about "car wash". If you don't like your card, how it OCs, the color of it, the smell. By all means, return it, that's what restocking fees are for.   brandonwh64, m1dg3t and Naito say thanks. 16. Naito Naito Joined: Oct 10, 2009 Messages: 423 (0.23/day) Thanks Received: 160 Location: Australia Ahahahaha :roll: Definitely should take smell into consideration next time I buy a new GPU :laugh: :toast:   17. the54thvoid the54thvoid Joined: Dec 14, 2009 Messages: 3,372 (1.90/day) Thanks Received: 1,602 Location: Glasgow - home of formal profanity ASIC = Application-Specific Integrated Circuit. (I googled). Is what you are saying is that the ASIC itself is the whole thing relating to the cards circuitry? If so, does this infer the chip itself (the Tahiti Core or GK104) is only a component of the ASIC? The only way what you are saying makes sense (your car wash analogy) is if the ASIC quality does not simply refer to the chip. (Whereas the OP is using ASIC as a description of the chip itself). I am getting the jist of things here? :confused:   18. Frick Frick Fishfaced Nincompoop Joined: Feb 27, 2006 Messages: 10,783 (3.41/day) Thanks Received: 2,344 If ASIC = GPU I think wiz is broadening it a bit. Or is ASIC part of the GPU? Good questions. Anyway found this on another forum (ironcally): EDIT: It feels like this feature generates more questions than answers. It feels people are confused about what it actually means.   19. Naito Naito Joined: Oct 10, 2009 Messages: 423 (0.23/day) Thanks Received: 160 Location: Australia I think W1zzard means that referring to ASIC quality by just saying ASIC or ASIC score is meaningless. What are you scoring in regards to the ASIC? What aspects are you measuring? And so on. Just as saying 'car' would be meaningless, when referring to a car wash, as 'car' could be referring to many aspects.   Last edited: Dec 9, 2012 20. the54thvoid the54thvoid Joined: Dec 14, 2009 Messages: 3,372 (1.90/day) Thanks Received: 1,602 Location: Glasgow - home of formal profanity Found out some more. I'll paraphrase. All components are given an ASIC quality but the ASIC that gpu-z reads is the AMD or Nvidia assigned chip ASIC. This is given at manufacture and is a way to describe the operating temps/voltage and frequency parameters of the chip. A higher ASIC chip works within the parameters of the assigned frequency with a lower voltage and heat output. I think. However, the rest of the PCB is made of assembled components that also rely on ASIC 'quality'. So you may have a high ASIC chip but a poor selection of PCB components (Mosfets, chokes, etc) would hamper overclocking. The high ASICs work well with lower voltages (less leakage) and therefore work well on air. The lower ASICs have higher leakage and require higher volts to hit the same frequencies but obviously at higher temps (water or LN2). My confusion is why do high ASICs not clock even better under water or ice? Is it that the higher leakage chips are tested up to a certain volt range and are therefore known to tolerate them? Whereas the high ASIC's have a lower tested operating voltage and may not respond to more voltage? Original description here. http://forums.anandtech.com/showpost.php?p=33445806&postcount=21   21. Naito Naito Joined: Oct 10, 2009 Messages: 423 (0.23/day) Thanks Received: 160 Location: Australia I think cadaveca gave the best answer:   the54thvoid says thanks. 22. BlackZero BlackZero New Member Joined: Jan 28, 2011 Messages: 168 (0.12/day) Thanks Received: 103 Location: England Which brings me to another question, if you don't mind, regrading your MSI 7970 not overclocking that well even when water cooled. What is the ASIC quality reading for that card? Also, I'll just add that from my understanding a higher ASIC quality means that the chip is also at it's limit in terms of how much voltage the circuitry can tolerate. I could be wrong, though. Edit: I'll also add that my current graphics card exhibits all the signs of being very high ASIC and doesn't respond as well to cooling.   23. the54thvoid the54thvoid Joined: Dec 14, 2009 Messages: 3,372 (1.90/day) Thanks Received: 1,602 Location: Glasgow - home of formal profanity lol, I shouldn't tell you, you'll probably try and cancel your own MSI order. 59.2 But remember it's still a stock 7970 pcb, just overclocked at bios level. I reached about 1100-1125 stable. The CCC limits were also set at 1125 (but i tried via the tweak to go higher). 1125 is 200MHz above stock (21.6% overclock) nothing to be sad about and my ASIC is unusually low. Flipside, less coil whine than my powercolor!   BlackZero says thanks. 24. BlackZero BlackZero New Member Joined: Jan 28, 2011 Messages: 168 (0.12/day) Thanks Received: 103 Location: England Thanks, that's actually re-assuring as it shows that the lower overclock is likely due to the ASIC quality of your particular card rather than something inherently wrong with the model in question. Which brings me to the next logical question, what kind of voltage did you test with?   25. the54thvoid the54thvoid Joined: Dec 14, 2009 Messages: 3,372 (1.90/day) Thanks Received: 1,602 Location: Glasgow - home of formal profanity Right up to 1300mv. But it did bad things on 3DMark11. Thing is it ran at stock volts at 1125core. This particular card just didn't like the juice. My Powercolor card did 1300MHz core on 1300mv. Both my cards chug along at 1050core, 1500 memory, stock volts, 100% stable.   BlackZero says thanks. Currently Active Users Viewing This Thread: 1 (0 members and 1 guest) Share This Page
__label__pos
0.52998
Convert MD to PPSX using Python or Online App MD to PPSX conversion in your Python Applications without installing Microsoft Word® or PowerPoint MD Conversion via C# .NET MD Conversion via Java MD Conversion via C++ MD Conversion in Android Apps   Why to Convert MD to PPSX Python developers often need to convert MD files to PPSX format for various reasons. MD files are plain text files that are used to write documents in a simple and easy to read format. However, PPSX files are PowerPoint presentations that are used to create slideshows and presentations. Converting MD files to PPSX format allows developers to create slideshows and presentations from the MD files. How Aspose.Total Helps for MD to PPSX Conversion Aspose.Total for Python via .NET API is a full package of various APIs that can help developers automate the conversion process from MD to PPSX. The process is mainly in two steps. Firstly, the Aspose.Words for Python via .NET API is used to convert the MD file to PDF. After that, the PowerPoint Python API, Aspose.Slides for Python via .NET, is used to save the created PDF into a Presentation as a PPSX format. The Aspose.Total for Python via .NET API is a comprehensive package of APIs that can help developers automate the conversion process from MD to PPSX. It is easy to use and provides a simple and efficient way to convert MD files to PPSX format. The API also provides a wide range of features and functions that can help developers create high-quality presentations from MD files. How to Convert MD to PPSX in Python • Step 1 Open the source MD file using Document class • Save MD file to PDF by using Save method by providing the file name and desired directory path. • Step 2 Load PDF file with an instance of Presentation class • Call the save method while specifying output file path & SaveFormat.PPSX as parameters. So your MD file is converted to PPSX at the specified path. Conversion Requirements • For MD to PPSX conversion, Python 3.5 or later is required • Reference APIs within the project directly from PyPI ( Aspose.Slides and Aspose.Words ) • Or use the following pip commands pip install aspose.slides and pip install aspose.words. • Moreover, Microsoft Windows or Linux based OS (see more for Slides and Words ) and for Linux check additional requirements for gcc and libpython and follow step by step instructions .   Save MD To PDF in Python - Step 1     Save PDF To PPSX in Python - Step 2   Free Online Converter for MD to PPSX FAQ • How can I convert MD to PPSX Online? The online MD conversion app is available above. To begin, you can add your MD file for conversion by either dragging and dropping it or clicking inside the white area to import the document. Once your MD file is uploaded, click on the Convert button to start the conversion process. After the MD to PPSX conversion is complete, you can download your converted file. With just one click, you will get your output PPSX files. • How long does it take to convert MD? This online MD converter operates quickly, but the speed largely depends on the size of the MD file. Small MD files can be converted to PPSX in just a few seconds. If you have integrated the conversion code within a .NET application, the speed of the process will depend on how well you have optimized your application. • Is it safe to convert MD to PPSX using free Aspose.Total converter? Of course! After conversion, the download link for the PPSX file will be available immediately. Uploaded files are deleted after 24 hours, and download links will no longer work after that time. Your files are safe, and no one has access to them. The free app is mainly integrated for testing purposes, so you can verify the results before integrating the code. • What browser should I use to convert MD? You can use any modern web browser like Google Chrome, Firefox, Opera, or Safari for this online MD to PPSX conversion. However, if you are building a desktop application, the Aspose.Total MD Conversion API will work seamlessly. Explore MD Conversion Options with Python Convert MD to EMAIL (Email Files) Convert MD to EML (E-Mail Message) Convert MD to EMLX (Apple Mail Message) Convert MD to ICS (Calendar File) Convert MD to MBOX (Email Mailbox File) Convert MD to MSG (Outlook Message Item File) Convert MD to ODP (OpenDocument Presentation Format) Convert MD to OFT (Outlook File Template) Convert MD to OST (Outlook Offline Storage Table) Convert MD to POT (Microsoft PowerPoint Template Files) Convert MD to POTM (Microsoft PowerPoint Template File) Convert MD to POTX (Microsoft PowerPoint Template Presentation) Convert MD to POWERPOINT (Presentation Files) Convert MD to PPS (PowerPoint Slide Show) Convert MD to PPSM (Macro-enabled Slide Show) Convert MD to PPT (PowerPoint Presentation) Convert MD to PPTM (Macro-enabled Presentation File) Convert MD to PPTX (Open XML presentation Format) Convert MD to PST (Outlook Personal Storage Table) Convert MD to VCF (vCard File) What is MD File Format? MD, or Markdown, is a lightweight markup language commonly used for formatting plain text documents. It was created by John Gruber in 2004 with the goal of allowing writers to focus on content without the distractions of complex formatting. Markdown uses simple and intuitive syntax that can be easily converted into HTML or other document formats. In Markdown, you can apply formatting to text by using a combination of special characters and plain text. For example, you can use asterisks or underscores to create italic or bold text, hashtags to create headings, and hyphens or asterisks to create lists. Markdown also supports adding links, images, code snippets, and tables. One of the advantages of Markdown is its readability in its raw form, as it closely resembles plain text. It can be written in any text editor and easily converted into HTML or other formats using various tools and converters. Markdown files have the .md or .markdown file extension. Markdown is widely used for writing documentation, creating blog posts, and even in version control systems like Git. Its simplicity and versatility have made it a popular choice among writers, developers, and content creators for creating structured and well-formatted documents with minimal effort. What is PPSX File Format? The PPXS file format is a file extension used by Microsoft PowerPoint to save presentations in a compressed format. It is an extension of the PPTX file format, which is the default file format for PowerPoint presentations. The “S” in PPXS stands for “compressed”, indicating that the file has undergone compression to reduce its size. The purpose of the PPXS format is to create smaller file sizes for PowerPoint presentations, making them easier to share, transfer, and store. The compression algorithm used in PPXS files helps to reduce the overall file size while maintaining the content and formatting of the presentation. PPXS files can be created and edited using Microsoft PowerPoint or other software that supports the PowerPoint file format. They contain various multimedia elements such as text, images, videos, audio, and animations, just like regular PowerPoint presentations. When opening a PPXS file, PowerPoint decompresses the file to restore it to its original PPTX format, allowing users to view and edit the presentation as they would with any other PowerPoint file. The use of the PPXS format can be beneficial when working with large presentations or when there is a need to optimize file sizes for sharing or storage purposes. However, it’s worth noting that decompressing the PPXS file to its original PPTX format may require additional processing time.
__label__pos
0.780868
2 $\begingroup$ I am developing a software with hybrid encryption (RSA-4096 and AES-256-CBC). The iv and key of an AES encrypted file will be RSA encrypted end exchanged through a different way than the encrypted file. Thats ok for me to this point. I am thinking about where it is most easy to break the encryption of my own files and always come to the point where a key and iv guessing would take ages but is theoretically easy. It just takes time with the computing power we have today. In my application the transfer of the encrypted file will not be possible if there are bits lost in any case. Version 1: Since CBC is always based on the the pre block, why do I not just change the order of the blocks in the file and write a map together with the key and iv into the RSA part? Is this a stupid idea? Version 2: Why do I not just replace a random character in each AES block with its "value" + 1 or so and write into my RSA a map of the chawed characters? In my opinion in Version 1 the "guessing power" would be the "AES blockcount"*"time for a single file" and after that the order would sill be wrong. I have no idea what effort it would take to get the right order... In the second Version it would be even to hard for me to calculate. It could be any bit in a block with a wrong value and a guessing of iv and key per block. Does that make sense? Who knows about the computing power we will have in a few years. And I also don't want people to read the files in a few years ;-) $\endgroup$ 7 $\begingroup$ TL,DR: No, don't do it. Use a standard protocol and don't write your own implementation. If you're thinking at the level of AES, CBC and RSA, you're doing it wrong. I recommend crypto_box. I am developing a software with hybrid encryption (RSA-4096 RSA-4096 is ok if done right. Done right means OAEP or RSA-KEM. PKCS#1v1.5 is theoretically possible to get right, but very difficult because it is prone to oracle attacks. A lack of padding or homegrown padding is plain broken. ECIES gives you as much or better security at much better performance, and typically fewer risks of getting it wrong due to insecure parameters. and AES-256-CBC). Unauthenticated modes are dangerous because they are prone to oracle attacks. Even PGP got it wrong. Don't use unauthenticated modes. And especially don't use CBC, which has padding, which allows even more oracle attacks. You might think you don't care about authentication, but you're probably wrong. If someone can send fake messages, that does help them obtain the content of confidential messages. The efail attack on PGP works by piecing parts of legitimate messages into fake messages and having the fake messages (attempt to be) decrypted. Use GCM or CCM instead (or use ChaCha20_Poly1305 instead of AES-GCM or AES-CCM). (But as I said earlier, what you should really be using is something like crypto_box.) Also note that using authenticated encryption prevents oracle attacks at the level of the symmetric cryptography, but it doesn't prevent similar attacks at a higher level, since an attacker can make up their own symmetric key and send whatever they want by encrypting their own symmetric key with the public key. You should sign messages (sign (a hash of) the plaintext) with a separate set of asymmetric keys, where the sender holds the private key and the recipient knows the sender's public key (or can get it from a reliable source). The iv and key of an AES encrypted file will be RSA encrypted end exchanged through a different way than the encrypted file. You can, and typically should send the IV with the file. The IV doesn't need to be confidential. The AES key is encrypted with RSA (that's how hybrid encryption works) and the encrypted AES key is typically sent with the file. The RSA private key is typically never transmitted at all: that's the point of asymmetric cryptography. You do need to take care of transmitting the public RSA key separately, but that's a whole different issue which I won't go into here. I am thinking about where it is most easy to break the encryption of my own files and always come to the point where a key and iv guessing would take ages but is theoretically easy. It just takes time with the computing power we have today. No, guessing the key is the hardest way to break the encryption. Nobody can break AES by guessing the key. It's theoretically easy, but practically impossible because it takes far, far more time than you have. If you took all the existing computing power on Earth now (yes, including NSA's) and ran it for the current age of the universe, you would still not stand a good chance of guessing an AES-128 key. The way cryptography gets broken in the real world is not because the building blocks are weak, but because people take strong building blocks and assemble them badly. Version 1: Since CBC is always based on the the pre block, why do I not just change the order of the blocks in the file and write a map together with the key and iv into the RSA part? Is this a stupid idea? You can only send a limited amount of extra data in the RSA part, so that would limit the size of the message. For a very short message (up to about 15–20 blocks — $\log_2(15!) \approx 40$, $\log_2(20!) \approx 61$), all possible permutations can easily be enumerated, so you have no security gain. For longer messages, what do you gain? The attacker has undecipherable blocks in some unspecified order. They're undecipherable anyway. The attacker can't even distinguish between the blocks. So maybe you have extra security in case something else is broken? Think again. If the permutation gives extra security, it can only be because the attacker cannot distinguish between the blocks: otherwise that means the attacker can reconstruct the permutation. And if something else is broken, broken means that the attacker does have extra knowledge about the blocks. This extra knowledge isn't absolutely guaranteed to let the attacker know which order to put them in, but it's likely to help. So the offshoot is that no, you're very unlikely to gain anything. What do you have to lose? Well, once again, where cryptography gets broken is because the building blocks are misused. Every extra complexity that you add increases the attack surface of both your protocol, and your implementation. Doing extra stuff makes your protocol less secure, not more. Version 2: Why do I not just replace a random character in each AES block with its "value" + 1 or so and write into my RSA a map of the chawed characters? For pretty much the same reasons as version 1, this has a cost and no advantage. You'd just be increasing the attack surface to make indistinguishable blocks indistinguishable. Who knows about the computing power we will have in a few years. To be resistant to quantum computers in case they actually become useful to break cryptography, use AES-256. Quantum computers basically halve the effective key size of symmetric algorithms. AES-128 is good against classical computing but fails to a quantum computer that's able to perform about $2^{64}$ (a few billion billions) operations. The asymmetric part is more problematic. We don't have a widely accepted asymmetric encryption algorithm that would resist quantum computers. But by far, the biggest risk in what you want to do is design and implementation mistakes. The worst thing you can do is to devise your own protocol. What you should do is: • Use an existing, vetted implementation of an existing, vetted protocol. To encrypt some data with an asymmetric key scheme, crypto_box has a good reputation. • Be agile: prepare to upgrade your protocol if it gets broken. | improve this answer | | $\endgroup$ • $\begingroup$ CBC sucks not only because of padding oracle attacks but also because it doesn't provide any authentication whatsoever. $\endgroup$ – SEJPM Jul 28 '19 at 13:44 • $\begingroup$ @SEJPM Right, I didn't even get into authentication. But the padding oracle attacks are possible pretty much because CBC doesn't do authentication. Though I should mention what do use instead of CBC (well, ok, it's crypto_box, but that's at a different level). $\endgroup$ – Gilles 'SO- stop being evil' Jul 28 '19 at 13:46 • 1 $\begingroup$ To have full message authenticity you should sign the ciphertext or plaintext. An adversary could otherwise simply generate his own ciphertext using the public key. AEAD schemes or MAC is not going to protect you against that kind of attack. An AEAD scheme could be used to maintain confidentiality (no CBC style plaintext/padding attacks) and integrity, but not authenticity. $\endgroup$ – Maarten Bodewes Jul 28 '19 at 14:41 • $\begingroup$ I thought PGP used CFB, not CBC. It's S/MIME that uses CBC. $\endgroup$ – forest Jul 29 '19 at 6:05 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.798767
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> Dismiss Skip Navigation Our Terms of Use (click here to view) have changed. By continuing to use this site, you are agreeing to our new Terms of Use. 2.2: Circular Functions Difficulty Level: At Grade Created by: CK-12 Turn In Radian Measure In-Text Examples One very common mistake to make when working in radians is to forget from time to time that a complete rotation is \begin{align*}2 \pi \;\mathrm{radians}\end{align*}, not \begin{align*}\pi\end{align*} radians. This doesn’t happen often when one is thinking of a whole circle, but it is very easy to think of a quarter of a circle as being \begin{align*}\frac{\pi}{4}\;\mathrm{radians}\end{align*} instead of \begin{align*}\frac{\pi}{2}\end{align*}, a sixth of a circle as being \begin{align*}\frac{\pi}{6}\end{align*} instead of \begin{align*}\frac{\pi}{3}\end{align*}, and so on. Students may make this type of error frequently, and may continue making it for quite some time, so remind them more than once to be particularly careful about checking their angle measures when working in radians. (As evidence that not only students are prone to this particular error, see the illustration to example 4 in the lesson immediately following this one.) Another common error is to get the degrees-to-radians formula and the radians-to-degrees formula mixed up. The note at the bottom of page 101 should help students with this, though: basically, they should multiply by \begin{align*}\frac{\pi}{180}\end{align*} when they want \begin{align*}\pi\end{align*} in their answer (that is, when converting to radians), and should multiply by \begin{align*}\frac{180}{\pi}\end{align*} when they have a \begin{align*}\pi\end{align*} to get rid of (that is, when converting from radians to degrees). Of course, there won’t always be a \begin{align*}\pi\end{align*} involved when working in radians (see problem 5 below, for example), but the mnemonic will remind them which way the conversion goes if they don’t take it too literally. Review Questions 6) This problem should read “sine” instead of “cosine”; students may therefore think that Gina’s typing in “sin” instead of “cos” is the problem, instead of noticing that the calculator is in the wrong mode. They may also get \begin{align*}\frac{-\sqrt{3}}{2}\end{align*} instead of \begin{align*}\frac{-1}{2}\end{align*} as the actual value, and shouldn’t be penalized for this. Additional Problems 1) What is \begin{align*}\sqrt{2}\;\mathrm{radians}\end{align*} in degrees? (Round to the nearest tenth.) 2) What is \begin{align*}90\;\mathrm{radians}\end{align*} in degrees, and what is its reference angle? 3) What is \begin{align*}\pi \;\mathrm{degrees}\end{align*} in radians? (Round to four decimal places.) Answers to Additional Problems 1) \begin{align*}81.03^\circ\end{align*} 2) \begin{align*}5156.6^\circ\end{align*}, which is coterminal with \begin{align*}116.6^\circ\end{align*}, so its reference angle is \begin{align*}63.4^\circ\end{align*}. 3) \begin{align*}0.0548.\end{align*} Applications of Radian Measure In-Text Examples 1) The most common mistake to make here is to forget that the hour hand is not right on the \begin{align*}11\end{align*}, but a third of the way between \begin{align*}11\end{align*} and \begin{align*}12\end{align*}. Students who forget this may jump to the conclusion that the hands are \begin{align*}\frac{5}{12}\end{align*} of the circle apart, and then may also forget (as mentioned above) that \begin{align*}\frac{5}{12}\end{align*} of the circle does not equal \begin{align*}\frac{5 \pi}{12}\;\mathrm{radians.}\end{align*} 2) A common error has actually been made in the text here: the numbers \begin{align*}12\end{align*} and \begin{align*}11.81\end{align*} have been substituted for \begin{align*}r\end{align*} in the arc-length formula when each of those numbers is in fact a diameter and not a radius. Students should be cautioned against making the same error on future problems, but should probably be forgiven for making it here. 4) As mentioned earlier, another common error appears in the in-text illustration here: \begin{align*}\frac{2 \pi}{3}\end{align*} would of course be \begin{align*}\frac{1}{3}\end{align*} of the circle, not \begin{align*}\frac{2}{3}\end{align*}. This will only lead to student error, though, if students try finding the area of the whole circle first and then taking \begin{align*}\frac{2}{3}\end{align*} of it instead of \begin{align*}\frac{1}{3}\end{align*} of it. If they apply the formula in the text instead, they will not go wrong, since the formula is not based on the illustration. Review Questions 1) On part a.iii, students should not try to convert the approximate angle measure they found in part a.ii to degrees (as they may try to do with their calculators, especially since they have just used them to find that approximation). Instead they should start with the exact angle measure from part a.i, and convert it by hand using the formula they learned earlier. 3) Simply miscounting the dots may lead to error here. (There are 32.) Also, in finding the distance between the two dots selected in part b, students may include the dots at both the beginning and the end and conclude that the distance between them is \begin{align*}\frac{14}{32}\end{align*} of the circle when it is really \begin{align*}\frac{13}{32}\end{align*}. 4) Students may forget to take into account both the radius of the outer circle and the radius of the inner circle when calculating the area of each section. 5) This problem is a particularly easy place to make the routine error of plugging in the diameter in place of the radius. Circular Functions of Real Numbers Review Questions 1) A quick hint to use similar triangles should help students who get stuck on this problem. 3) Students may get a little confused about where to do the labeling; point out if necessary that the largest circle in the diagram is the unit circle. The other circles are just there to give them a convenient place to write the angle measures without having to write them right on top of the coordinates of the points; if they get mixed up and write the radian measures on the inner circle and the degree measures on the middle circle, there’s no need to penalize them as long as they’ve matched the correct degree and radian measures with the correct angles. 4) Perceptive students may draw the cosine segment in a different place than the book suggests; they may draw it as a horizontal segment extending from the \begin{align*}y-\end{align*}axis to the point where the sine segment meets the unit circle. This isn’t an error; in fact, drawing the segment this way makes it easier to see that the sine and cosine segments have a relationship similar to the relationship between the tangent and cotangent or secant and cosecant segments. (However, drawing it the standard way is fine too, and probably easier for most.) 5) The correct answer to this question is actually any combination of \begin{align*}\sin, \tan,\end{align*} and \begin{align*}\sec\end{align*}. 6) Some students may get the idea that answers b and d must both be true if either one of them is true—that the tangent must get infinitely large when the cotangent gets infinitely small, because they are “opposites” in a sense. The ambiguity of the phrase “infinitely small” doesn’t help; it can be taken to mean “approaching zero,” but in this case it really means “approaching negative infinity.” When the tangent approaches infinity, the cotangent approaches zero, not negative infinity; when the cotangent approaches negative infinity, the tangent approaches zero, and this latter case is what happens when \begin{align*}x\end{align*} increases from \begin{align*}\frac{3 \pi}{2}\end{align*} to \begin{align*}2 \pi\end{align*}. (Referring to the graphs earlier in the lesson will confirm this.) Additional Problems 1) Why does it make sense that the ranges of the secant and cosecant functions include all numbers except those between \begin{align*}1\end{align*} and \begin{align*}-1\end{align*}? (Hint: think in terms of sine and cosine.) Answers to Additional Problems 1) The sine and cosine functions only take values between \begin{align*}1\end{align*} and \begin{align*}-1\end{align*}—in other words, only numbers whose absolute value is less than or equal to \begin{align*}1\end{align*}. The secant and cosecant are the reciprocals of the sine and cosine, and the reciprocal of a number whose absolute value is less than or equal to \begin{align*}1\end{align*} will have an absolute value greater than or equal to \begin{align*}1\end{align*}. (For example, think of a fraction between \begin{align*}0\end{align*} and \begin{align*}1\end{align*}: its numerator is less than its denominator, so its reciprocal will have a numerator greater than its denominator and so will be greater than \begin{align*}1\end{align*}.) So, the secant and cosecant functions can only take values whose absolute value is greater than or equal to \begin{align*}1\end{align*}. Linear and Angular Velocity In-Text Examples 1) A possible overcomplication of this problem is to think that \begin{align*}15\;\mathrm{feet}\end{align*} is the “length” of the oval shape formed by the track (i.e. the major axis of an ellipse), rather than simply the track’s circumference and thus the distance the car travels per circuit. 2) Students may forget to convert their final answer from miles per minute to miles per hour. If they do remember, they are likely to try to divide the miles per minute result by \begin{align*}60\end{align*}, instead of multiplying it by \begin{align*}60\end{align*}, to get the answer in miles per hour. Encourage them to slow down and think about what the answer really means: if Lois goes \begin{align*}.047\;\mathrm{miles}\end{align*} in one minute, then how far will she go in \begin{align*}60\;\mathrm{times}\end{align*} one minute? Review Questions 1) Careless reading may lead students to jump to the answer \begin{align*}\frac{7}{9}\;\mathrm{cm/sec}\end{align*}. Remind them that the speed of the dial is the circumference, not the radius, divided by \begin{align*}9\;\mathrm{seconds.}\end{align*} 3) Note that Doris’ horse is not \begin{align*}7\;\mathrm{m}\end{align*} from the center, but rather \begin{align*}7\;\mathrm{m}\end{align*} farther from the center than Lois’, meaning it is \begin{align*}10\;\mathrm{m}\end{align*} total from the center. 4) We haven’t covered scientific notation in a while, so students may be prone to make mistakes with it. Watch for answers that are off by one or two orders of magnitude. Additional Problems 1) Two gears mesh with each other so that they rotate in opposite directions, with both their outer edges moving at the same linear velocity. The radius of the larger gear is \begin{align*}10\;\mathrm{cm}\end{align*} and the radius of the smaller gear is \begin{align*}6\;\mathrm{cm}\end{align*}. The smaller gear makes two revolutions per second. a) What is the angular velocity of the smaller gear? b) What is the linear velocity of a point on its outside edge? c) What is the angular velocity of the larger gear? d) How many revolutions does the larger gear make per second? e) A peg is attached to the larger gear at a point \begin{align*}2\;\mathrm{cm}\end{align*} from its outer edge. What is the peg’s linear velocity? Answers to Additional Problems 1) a) \begin{align*}4 \pi\;\mathrm{radians/sec}\end{align*}, or approximately \begin{align*}12.57\;\mathrm{radians/sec}\end{align*}. b) \begin{align*}24 \pi\;\mathrm{cm/sec}\end{align*}, or approximately \begin{align*}75.40\;\mathrm{cm/sec.}\end{align*} c) The linear velocity of a point on its outside edge is the same as that of the smaller gear, or \begin{align*}24 \pi\;\mathrm{cm/sec}\end{align*}, so its angular velocity is \begin{align*}2.4\pi\;\mathrm{radians/sec}\end{align*}, or approximately \begin{align*}7.54\;\mathrm{radians/sec}\end{align*}. d) \begin{align*}1.2\end{align*} revolutions per second. e) The peg is \begin{align*}8\;\mathrm{cm}\end{align*} from the center of the gear, so its linear velocity is \begin{align*}19.2 \pi \;\mathrm{cm/sec}\end{align*}, or approximately \begin{align*}60.32 \;\mathrm{cm/sec}\end{align*}. Graphing Sine and Cosine Functions In-Text Examples 2) Students may think the period is \begin{align*}2\;\mathrm{units}\end{align*}, because each “portion” of the graph is \begin{align*}2\;\mathrm{units}\end{align*} wide. Explain that it takes one “high” portion and one “low” portion to make up one complete cycle of the graph. 3-6) Despite the explanations in the text, it is likely that some students will still habitually mix up the period with the frequency. Repeated drilling may be the only way to fix this. Review Questions 2) A few students may be tripped up by part d; since there are no minimum and maximum values, they may get the idea they’re supposed to be looking for something else, and supply an answer like “\begin{align*}\frac{\pi}{2}\end{align*} and \begin{align*}\frac{-\pi}{2}\end{align*}” because those are the beginning and ending \begin{align*}x-\end{align*}values of one cycle of the graph. The error they are most likely to make on problems like \begin{align*}e\end{align*} and \begin{align*}f\end{align*} is to think that the number within the parentheses affects the maximum and minimum \begin{align*}y-\end{align*}values. However, solving problems a through \begin{align*}d\end{align*} first should help reinforce that it is only the multiplier out front that matters. 3) Students may try to simply divide both sides by \begin{align*}\sin(x)\end{align*}; they will then end up with the equation \begin{align*}4 = 1\end{align*}, for which there is no solution. However, this technique is incorrect because \begin{align*}\sin(x)\end{align*} sometimes equals zero on the interval of \begin{align*}x-\end{align*}values given for the problem, and dividing by an expression that can equal zero often eliminates possible solutions. Graphing the two functions instead (or simply thinking about them in the right way) will show that whenever \begin{align*}\sin(x)\end{align*} equals zero, \begin{align*}4 \ \sin(x)\end{align*} also equals zero and so the two expressions are equal. This happens at three places on the given interval, including the interval’s endpoints. A more appropriate application of algebra will yield this solution as well: instead of dividing both sides by \begin{align*}\sin(x)\end{align*}, subtract \begin{align*}\sin(x)\end{align*} from both sides to yield \begin{align*}3 \ \sin(x) = 0\end{align*}, and then divide by \begin{align*}3\end{align*} to get \begin{align*}\sin(x) = 0\end{align*}. 4) Getting the period and frequency mixed up is the most likely error here; reviewing the definitions may help. 5) Students may get twice the correct value for the amplitude here, forgetting that it’s the distance from the minimum or maximum to the middle of the graph rather than to the maximum or minimum. Also, when writing out the equation, they may again get mixed up about which number goes where. 6) At this point, students may still be stretching the graph when they should be shrinking it, or vice versa. Translating Sine and Cosine Functions In-Text Examples 1) A few students may think we’re still dealing with amplitude here, and that the maximum and minimum are \begin{align*}6\end{align*} and \begin{align*}-6\end{align*}. Remind them that we are now shifting the graph rather than stretching it; the maximum and minimum remain the same distance apart, but have both moved together \begin{align*}6\;\mathrm{units}\end{align*} from where they started out. Review Questions 1-5) Starting with the questions and trying to match them to the functions right away is tempting, but almost certainly the wrong way to approach this set of problems; it’s much better to sketch out graphs of the functions and then match them to the appropriate descriptions. Also, the questions about \begin{align*}y-\end{align*}intercepts may be confusing, as we haven’t directly discussed those with regard to trig functions specifically. The \begin{align*}y-\end{align*}intercepts can’t be derived directly from the amplitude, frequency, period, phase shift, or vertical shift, as students may be tempted to try, but they can often be read off the graph fairly easily. The best way to find them, however, is simply to plug in \begin{align*}0\end{align*} for \begin{align*}x\end{align*} and then calculate the value of \begin{align*}y\end{align*}. (Remember, the \begin{align*}y-\end{align*}intercept is simply the value of \begin{align*}y\end{align*} when the graph crosses the \begin{align*}y-\end{align*}axis, which is to say when \begin{align*}x\end{align*} equals \begin{align*}0\end{align*}.) 6) “Express the equation as both a sine and cosine function” may be misconstrued to mean “write one equation that involves both sine and cosine” rather than “write one equation that involves sine and another that involves cosine.” Also, watch out for students getting the sine and cosine graphs mixed up—not just on this problem, but on any problem that involves phase-shifted sinusoids (like the next four problems!). 7-10) Students may unfortunately be thrown by the fact that graphs A and C are incorrectly drawn: they are shifted up \begin{align*}2\;\mathrm{units}\end{align*} when they should be shifted 1 unit. Explain this to avoid confusing them. Students may also still be getting mixed up about whether a shift to the left (or to the right) should be described with a negative or positive number, and this is a particularly bad time to make that error because it may be compounded with the error of mixing up sine and cosine graphs, resulting in students mistaking a sine graph shifted one way for a cosine graph shifted the other way (or even shifted the same way). 11) It may be hard to tell here that the tick marks on the \begin{align*}x-\end{align*}axis represent \begin{align*}1\;\mathrm{unit}\end{align*} each, rather than \begin{align*}\pi\end{align*} or \begin{align*}\frac{\pi}{2}\;\mathrm{units}\end{align*}. (The multiples of \begin{align*}\pi\end{align*} are indicated in approximately the right positions, but without tick marks.) General Sinusoidal Graphs In-Text Examples 1) On problems like this, students may still be getting mixed up about which transformations correspond to which parts of the equation. In particular, they may confuse the amplitude with the vertical shift and the frequency with the phase shift, or get the frequency and the period mixed up. 3) One knee-jerk error here is to assume that the amplitude is the same as the maximum value, or in this case \begin{align*}60\end{align*}. (In reality, the amplitude is indeed the same as the maximum value whenever the vertical shift is \begin{align*}0\end{align*}, but otherwise the maximum value is equal to the amplitude plus the vertical shift—in this case, the vertical shift is \begin{align*}20\end{align*}, so the amplitude is \begin{align*}40\end{align*}.) Another common error is, once again, to forget that the amplitude is only half the total height of the graph. Review Questions 1-5) Trying to find the maximum and minimum after finding the amplitude but before finding the vertical shift may lead students to get the wrong values, as they may default to treating the graph as if it were centered at \begin{align*}0\end{align*}. Additional Problems 1) The graph of \begin{align*}y = 2 + \cos (3(x - \pi))\end{align*} is translated an additional \begin{align*}\frac{\pi}{2}\;\mathrm{units}\end{align*} to the left and \begin{align*}3\;\mathrm{units}\end{align*} down. What is the equation of the new graph? 2) The graph of \begin{align*}y = 3 + 2 \sin \left (6 \left (x - \frac{\pi}{3}\right )\right )\end{align*} is stretched so that its period is twice as long. What is the frequency of the new sine wave? 3) Which of the following yields the same graph as \begin{align*}y = 2 + \cos \left (2 \left (x + \frac{\pi}{2}\right )\right )\end{align*}? a) \begin{align*}y = 2 + \cos \left (2 \left (x - \frac{\pi}{2}\right )\right )\end{align*} b) \begin{align*}y = 2 + \sin \left (2 \left (x - \frac{\pi}{2}\right )\right )\end{align*} c) \begin{align*}y = 2 + \sin \left (2 \left (x + \frac{\pi}{2}\right )\right )\end{align*} d) \begin{align*}y = 2 - \cos \left (2 \left (x + \frac{\pi}{2}\right )\right )\end{align*} Answers to Additional Problems 1) \begin{align*}y = -1 + \cos \left (3 \left (x - \frac{\pi}{2}\right )\right )\end{align*} 2) If the period is doubled, the frequency is halved. The old frequency is \begin{align*}6\end{align*}, so the new frequency is \begin{align*}3\end{align*}. 3) a Notes/Highlights Having trouble? Report an issue. Color Highlighted Text Notes Please to create your own Highlights / Notes Show More Image Attributions Show Hide Details Save or share your relevant files like activites, homework and worksheet. To add resources, you must be the owner of the section. Click Customize to make your own copy. Please wait... Please wait... Image Detail Sizes: Medium | Original   CK.MAT.ENG.TE.1.Trigonometry.2.2 Here
__label__pos
1
Access Deny & Privilege in Pega Introduction This is the continuation of Authorization topic. Please go through ‘Access roles‘ (http://myknowpega.com/2017/05/15/67/) post first. Access roles & ARO – Configuration Access Deny You can explain it with the name. Yes, we are denying the access. Simply saying, it is the exact opposite of Access of Role to Object (ARO). Access Deny = Access Denial of Object (ADO) ADO is my own term, please forget it 🙂 As we saw before, objects refer to class instances. So here, we deny access to particular class instances. Privilege • It is a granular part to ARO or Access deny. See ARO, Access deny control the access for the class instances, whereas Privilege controls the access for particular rules. • Say for example in an organization, we have manager and a set of developers.We need to allow executing appraisal flow only for managers and not for users. It means that we can control executing the flow by using privilege. You need to specify the privilege in 2 places: 1. In the rule form 2. In the Access of Role to Object -> Access role Say, you have created a new privilege ‘ExecuteAppraisal’ and included it in Appraisal flow. Now, this flow can be executed only by people who hold the privilege in their access roles. Are you confused? Cool, you will be well cleared by the following examples 🙂 What is an Access Deny rule? • It is the reverse of Access of Role to Object. • Rule form is exact replica to ARO. • Access deny is part of security category. How do we configure a Access Deny rule? Step 1: Create a new Access Deny rule. Step 2: Configure the rule form. It has a single main tab. Security tab If you see the right bottom corner, then you can see, • 0 – Do not deny access. • 5 – Access will be denied till production Access controls – You specify the access control for various options. I just copied the same from ‘ARO‘ (http://myknowpega.com/2017/05/15/67/) lesson below 😉 In the fields, you can provide either level values (see at the right) or access when rule (Replica of when rule). Say, you provided Level value 5. Then it will be in application till production environment. 1. Open Instances – Controls whether you can Open FKT-Fkart-Work-Sales cases 2. Modify instances – Controls whether you can Save FKT-Fkart-Work-Sales cases 3. Delete instances – Controls whether you can delete FKT-Fkart-Work-Sales cases 4. Run reports – Controls whether you can run reports of applies to class FKT-Fkart-Work-Sales 5. Execute activities – Controls whether you can run reports of applies to class FKT-Fkart-Work-Sales 6. Open rules – Controls whether you can open rules of applies to class FKT-Fkart-Work-Sales 7. Modify rules – Controls whether you can modify rules of applies to class FKT-Fkart-Work-Sales 8. Delete rules – Controls whether you can delete rules of applies to class FKT-Fkart-Work-Sales. Let’s test it 🙂 Step 1: Create a new Access deny rule for User role – Fkart:Users It is already created above. Step 2: Configure access control for open instances to level value 5. Step 3: Open the FKart:User access role and verify the access class in the grid. We have successfully configured to deny access to open sales case. Step 4: Have a test user pointing to that Users access group – Fkart:Users Note: This access group should contain the same access role – Fkart:User, where we created access deny. Step 5: Login the User and create a new sales case. We have created a case S-142. Step 6: Open the case from recent/worklist. Yes, we did it. 🙂 You can remove that access level and test again. Keep on testing different scenarios. What is a Privilege rule? • It provides access control on rules based on access role. • It is part of security category. How do we configure a Privilege rule? Step 1: Create a new Privilege rule. Step 2: Nothing 😀 There is no need to configure anything in Privilege rule form. How do we refer a Privilege rule? Imagine, we have a requirement like sales user can only create a sales case. Managers cannot create the case. This is the key area in privilege rule. You need to configure in 2 places. 1. Rules – Restricts In the sales flow rule – Process tab Privilege class – This will be default to Flow class. Privilege name – Specify the privilege name here. 2. Access role – > ARO – conveys Step 1: Open the ARO on sales class that belongs to sales user and open the Privilege tab. Step 2: Add the Privilege created above. Now, we have configured the sales user with the privilege to create a new case from sales flow. For Sales manager, we didn’t add any privilege in their Access role, so they can’t create a new sales case. Let’s jump to test. Step 1: Make sure rules are configured with the Privilege created and Privilege is added with ARO. Step 2: Configure the test user to ‘FKart:SalesManager’ access group – > role Step 3: Check the manager portal, if you are able to create a new sales case. You can’t. Step 4: Now update the test user to sales user role – Fkart:User role. Step 5: Now check the user portal. You should be able to create a new case. We have successfully configured privilege in flow rule and restricted user based on their roles. Restricting Flow actions Scenario: For a sales case, only sales users can change the stage. Sales manager will not have privilege to change the stage. Note: Change stage flow action will be available through out the case life cycle in the other actions button. We shall see about those configuration in ‘Cases‘ lesson. Step 1: First, save the flow action in application class. Step 2: We can use the same privilege, we used for testing flow. Configure it in security tab – Privileges. Step 3: We  have already added the privilege in user role. Make sure it is added. Step 4: Move to user portal and check the flow action from other actions. Step 5: Now configure the test operator to sales manager portal and check the Actions button. What are the other rules that can be restricted using privilege? 1. Activity 2. Correspondence 3. Flows 4. Flow actions 5. Report definitions 6. Attachment categories 7. Parse structured I wanted to show you report definition restriction, but already it’s a very long post 🙂 You can test the above rules. Summary: • Access Deny is the exact opposite to ARO. Normally, we use ARO in many places. • Privileges need to be configured in 2 places: 1. Rules 2. Access role of the users We are at the end of the post. We will discuss how to use Access Manager in next lesson 🙂 20 thoughts on “Access Deny & Privilege in Pega 1. Hi Vinod, Activities and Data transforms are coming in next week posts 🙂 Please subscribe and stay tuned for more posts. 1. Can you describe the scenarios where only access deny is used and scenarios where only access role to object is used? To get the difference between the two rule types. 1. Hi Vyas, Access Deny gets precedence over ARO. Imagine a scenario – Manager access group contains three access roles – Manager, User, Approver. You need to restrict access to particular class. 1. If you use ARO, then you should make sure ARO s in all three roles should be restricted to access level 0. 2. If you use Access Deny, then you can wisely update any 1 access roles with access deny restrictions. Adv : Rule count is minimized and easy management. 2. Nice post.. Keep up the great work.. Just want to add a point to this topic: Access deny takes precedence over access when if both returns true. 3. I have a requirement to give permission to display a particular filed only to a particular role. I can implement this by adding visibility condition to the field to check the access role. But I want admin user to configure this permission to the roles. Is there any way we can configure and manage these type of permission using Access Manager. 1. Hi Mathew, Thanks for you comment. I don’t think controlling a particular field using access role is the right way. If you need admin to control this, have a decision table and delegate the rule to admin access group. Inside the decision table, you can administer the visibility conditions for different roles!! 4. Hi Prem, Nice work!! Can you please share detailed explanation about Flows, flow actions and case management. Thanks, Vasu 5. Great hard work shown by you thanks a lot PREM My skills are upgrading in day to day life because of highly profession presentations. 6. Hi Prem, Great work!!! Can this be achieved with single access group with diff level of users. My requirement is like have 2 levels of Sales Users. 1st level of User should access/create sales case and 2nd level of user should not access sales case. 1. Yes Suman, First you need to identify which parameter differentiate level 1 and level 2. It can be within Operator page or acces group somewhere. Say for example, 1st level user directly reports to sales head. Then I can create a access when and provide a special privilege in the sales class ARO using access when rule. So when it return true, then they can create / modify sales case 7. Hi Prem, Nice article Can this be achieved with single access group with level of users. My requirement like we have 2 levels of sales users. 1st Level of user should access/create sales case and 2nd level of user should not access Sales case. 1. First you need to identify which parameter differentiate level 1 and level 2. It can be within Operator page or acces group somewhere. Say for example, 1st level user directly reports to sales head. Then I can create a access when and provide a special privilege in the sales class ARO using access when rule. So when it return true, then they can create / modify sales case Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.821833
World Apple Eu Family Sharingpotuck9to5mac Apple Eu Family Sharingpotuck9to5mac offers a comprehensive suite of advantages that cater to the modern family’s digital needs. From safeguarding privacy to facilitating cost-effective sharing among multiple family members, this feature transforms the Apple experience into a seamless and collaborative journey. By exploring the intricacies of setting up Family Sharing in Europe and mastering the art of content management within this framework, users can unlock a wealth of potential for enhancing their digital lifestyle. The intricacies of this feature promise a world of shared possibilities, making it a compelling option for tech-savvy families looking to optimize their Apple ecosystem. Benefits of Apple EU Family Sharing Apple EU Family Sharing offers a range of advantages that make it a valuable feature for users looking to enhance their digital experiences within the Apple ecosystem. The feature provides privacy protection by allowing families to share purchases without sharing personal accounts. Additionally, it enables cost savings as it allows up to six family members to share purchases, subscriptions, and iCloud storage, making it a cost-effective option for families. Setting Up Family Sharing in Europe To facilitate the setup process for Family Sharing in Europe, users can follow a straightforward series of steps within their Apple devices. Setting up this feature allows family members to share purchases, subscriptions, and iCloud storage. While convenience is a key benefit, users should also consider privacy concerns when sharing personal data with family members. By carefully managing sharing settings, users can enjoy the advantages of Family Sharing while safeguarding their privacy. Read Also Chimera Chinalinked Nxpshilov Tomhardware Managing Shared Content Efficiently Efficiently managing shared content within the Family Sharing feature on Apple devices involves strategic organization and thoughtful consideration of privacy implications. By embracing efficient organization techniques and fostering a culture of collaborative consumption, families can streamline access to shared apps, media, and subscriptions while respecting individual privacy preferences. This approach maximizes the benefits of Family Sharing, ensuring smooth coordination and a harmonious digital experience for all members. Tips for Maximizing Family Sharing Benefits Maximizing the benefits of Family Sharing requires a strategic approach that emphasizes effective utilization of shared resources and clear communication among family members. To maximize savings, consider sharing subscriptions for services like Apple Music or iCloud storage. Allocate responsibilities for managing shared purchases and ensure everyone understands the guidelines for using shared content. Conclusion In conclusion, Apple Eu Family Sharingpotuck9to5mac, including privacy protection, cost savings, and shared access to purchases and subscriptions. Setting up and managing Family Sharing efficiently requires careful consideration of privacy concerns, strategic organization, and effective techniques. By maximizing the benefits of Family Sharing, families can enjoy a harmonious digital experience while also ensuring privacy and convenience. Related Articles Leave a Reply Your email address will not be published. Required fields are marked * Check Also Close Back to top button
__label__pos
0.707345
mapado/doctrine-blender-bundle Bundle in change of the mapado/doctrine-blender package configuration v0.4.1 2016-04-12 12:35 UTC This package is auto-updated. Last update: 2020-01-15 11:57:40 UTC README This bundle in charge of the https://github.com/mapado/doctrine-blender integration into a Symfony project. Installation composer require "mapado/doctrine-blender-bundle:0.*" Usage Entities Taken from doctrine mongodb documentation First lets define our Product document: /** @Document */ class Product { /** @Id */ private $id; /** @String */ private $title; public function getId() { return $this->id; } public function getTitle() { return $this->title; } public function setTitle($title) { $this->title = $title; } } Next create the Order entity that has a $product and $productId property linking it to the Product that is stored with MongoDB: namespace Entities; use Documents\Product; /** * @Entity * @Table(name="orders") */ class Order { /** * @Id @Column(type="integer") * @GeneratedValue(strategy="AUTO") */ private $id; /** * @Column(type="string") */ private $productId; /** * @var Documents\Product */ private $product; public function getId() { return $this->id; } public function getProductId() { return $this->productId; } public function setProduct(Product $product) { $this->productId = $product->getId(); $this->product = $product; } public function getProduct() { return $this->product; } } Configuration mapado_doctrine_blender: doctrine_external_associations: order: source_object_manager: 'doctrine.orm.order_entity_manager' classname: 'Acme\DemoBundle\Entity\Order' references: product: # this is the name of the property in the source entity reference_id_getter: 'getProductId' # optional, method in the source entity fetching the ref.id reference_setter: 'setProduct' # optional, method in the source entity to set the reference reference_object_manager: 'doctrine_mongodb.odm.product_document_manager' reference_class: 'Acme\DemoBundle\Document\Product' tags: # can also be an array (or an iterator) reference_id_getter: 'getTagIds' # must return an array (or an iterator) of identifiers reference_setter: 'setTags' reference_object_manager: 'doctrine_mongodb.odm.tag_document_manager' reference_class: 'Acme\DemoBundle\Document\Tag' another_reference: # ... another_source: # ...
__label__pos
0.520475
People used to get so worked up whenever they encounter a Blue Screen of Death error. However, this problem has become so common in Windows that people have now become unfazed when their PC freezes and flashes a BSOD error. These days, when issues like the PNP_Deteсted_Fatal_Error show up on their computer, they naturally know that the first thing to do is find a solution online. Well, if you are wondering how to fix the PNP_Detected_Fatal_Error on Windows 10, you’ve come to the right place. In this post, we are going to share the steps you need to take to get rid of the error message. What Does the PNP_Detected_Fatal_Error Blue Screen Mean? PNP is an abbreviated form of the term ‘Plug and Play.’ PNP functions as the interface of your system, allowing devices like headphones and USB drives to work. When Windows fails to function properly because of damaged, missing, or corrupted system files or drivers, the critical PDP_DETECTED_FATAL_ERROR message can show up on your screen. So, make sure you follow our instructions below to get rid of the problem permanently. Solution 1: Restoring Older Versions of your Drivers As we’ve mentioned, the error has something to do with your drivers. So, we recommend rolling back your drivers to their older versions. Since the error message is preventing you from accessing your system properly, we suggest booting into Safe Mode. To access this feature, you need to let your PC fail to start for at least three times until you trigger the automatic repair environment. From there, follow these steps: 1. Navigate to this path: 2. Troubleshoot ->Advanced Options ->Startup Settings 3. Click the Restart button. 4. Once your PC reboots, press F4 or 4 on your keyboard to activate Safe Mode. After booting into Safe Mode, you need to follow the instructions below: 1. Go to Device Manager. 2. Identify the driver you have recently updated, then right-click it. 3. Select Properties from the options. 4. Go to the Driver tab. 5. Click Roll Back Driver. 6. Click OK. You need to do this for all the drivers you recently updated. If you think that a newly installed software program is causing the error, we recommend going to Programs and Features to remove it. After you’ve rolled back the drivers or uninstalled the problematic program, check if the error is gone. If it is, we suggest updating your drivers correctly, using Auslogics Driver Updater.You do not have to worry about installing the wrong versions of your drivers. Auslogics Driver Updater will automatically recognize your operating system version and processor type. All you need to do is click a button and this tool will find the latest driver versions recommended by the manufacturers. RECOMMENDED Resolve PC Issues with Driver Updater Unstable PC performance is often caused by outdated or corrupt drivers. Auslogics Driver Updater diagnoses driver issues and lets you update old drivers all at once or one at a time to get your PC running smoother Auslogics Driver Updater is a product of Auslogics, certified Microsoft® Silver Application Developer DOWNLOAD NOW Solution 2: Going Back to a Previous Restore Point One of the best ways to get rid of the PNP_Detected_Fatal_Error is to restore your system to a previous workable version. Do not worry about losing personal files because this method will not affect your stored data. Before we proceed with the steps, you need to boot into Safe Mode. Once you’ve done that, follow these instructions: 1. Open the Run dialog box by pressing Windows Key+R on your keyboard. 2. Inside the Run dialog box, type “rstrui.exe” (no quotes), then click OK. 3. Now, select a restore point where your PC still functioned properly. 4. Click Next to proceed. Solution 3: Running the System File Checker You can also run an SFC scan to repair damaged or missing system files. To do that, simply follow the steps below: 1. On your keyboard, press Windows Key + S. 2. Type “cmd” (no quotes), then right-click Command Prompt from the results. 3. Select Run as Administrator. 4. If prompted to give permission to the app, click Yes. 5. Run this command: sfc/ scannow Solution 4: Cleaning System Junk It is possible that the unnecessary system junk is causing the PNP_Detected_Fatal_Error to appear. So, we suggest removing them by following these steps: 1. Go to your taskbar, then click the Search icon. 2. Inside the Search box, type “cmd” (no quotes). 3. Right-click Command Prompt from the results, then select Run as Administrator. 4. You will be asked if you want to give permission to the app. Click Yes. 5. Inside Command Prompt, type “cleanmgr” (no quotes), then hit Enter. Disk Cleanup will identify the amount of disk space you can reclaim. 6. You will see a list of items you can remove. Select Temporary Files, then click OK. Pro Tip: You can hit two birds with one stone when you use Auslogics BoostSpeed. When you take advantage of this tool, you can get rid of PC junk and restore system stability. It will tweak non-optimal system settings, improving your computer’s speed. With Auslogics BoostSpeed, your PC can perform faster and more efficiently. Quick solution To quickly resolve «PNP Detected Fatal Error» issue, use a safe FREE tool developed by the Auslogics team of experts. The app contains no malware and is designed specifically for the problem described in this article. Just download and run it on your PC. free download Developed by Auslogics Auslogics is a certified Microsoft® Silver Application Developer. Microsoft confirms Auslogics' high expertise in developing quality software that meets the growing demands of PC users. Can you suggest other ways of fixing the PNP_Detected_Fatal_Error? Share your ideas by joining the discussion below!
__label__pos
0.848474
La presentazione è in caricamento. Aspetta per favore La presentazione è in caricamento. Aspetta per favore Corso di informatica Athena – Periti Informatici Liste Code e Pile! Docente: Daniele Prevedello. Presentazioni simili Presentazione sul tema: "Corso di informatica Athena – Periti Informatici Liste Code e Pile! Docente: Daniele Prevedello."— Transcript della presentazione: 1 Corso di informatica Athena – Periti Informatici Liste Code e Pile! Docente: Daniele Prevedello 2 Corso di informatica Athena – Periti Informatici In informatica, una lista concatenata (o linked list) è una delle strutture dati fondamentali usate nellaprogrammazione. Essa consiste di una sequenza di nodi, ognuno contenente campi di dati arbitrari ed uno o due riferimenti ("link") che puntano al nodo successivo e/o precedente.informaticastrutture datiprogrammazionecampi Docente: Daniele Prevedello 3 Corso di informatica Athena – Periti Informatici Il modo più semplice di creare una lista è una lista semplicemente concatenata, che utilizza un collegamento per nodo. Questo collegamento punta al nodo successivo della lista o a un valore nullo.valore nullo Docente: Daniele Prevedello 4 Corso di informatica Athena – Periti Informatici Un tipo più sofisticato di lista è una lista doppiamente concatenata o lista concatenata a due vie. Ogni nodo possiede due collegamenti: uno punta al nodo precedente o ad un valore nullo se è il primo nodo;valore nullo l'altro punta al nodo successivo o ad un valore nullo se è il nodo finale. valore nullo Docente: Daniele Prevedello 5 Corso di informatica Athena – Periti Informatici Le liste concatenate hanno vari vantaggi nei confronti degli array:array Docente: Daniele Prevedello Gli elementi possono essere inseriti nelle liste indefinitamente; Le operazioni di inserimento e cancellazione sono molto più veloci. 6 Corso di informatica Athena – Periti Informatici Però presentano anche alcuni svantaggi rispetto agli array:array Docente: Daniele Prevedello Mentre gli array permettono un accesso casuale, le liste concatenate permettono soltanto un accesso sequenziale agli elementi ;accesso casualeaccesso sequenziale Nelle liste concatenate viene usato più spazio per memorizzare i riferimenti agli elementi successivi (e precedenti). 7 Corso di informatica Athena – Periti Informatici typedef struct ns { int data; struct ns* next; } node; Docente: Daniele Prevedello int o qualunque altro tipo semplice o complesso (array o altra struttura) 8 Corso di informatica Athena – Periti Informatici node *list_add(node** p, int i) { node *n = (node*)malloc(sizeof(node)); n->next = *p; *p = n; n->data = i; return n; } Docente: Daniele Prevedello Inserimento 9 Corso di informatica Athena – Periti Informatici void list_remove(node** p) { /* rimuove head */ if (*p != NULL) { node *n = *p; *p = (*p)->next; free(n); } Docente: Daniele Prevedello Cancellazione 10 Corso di informatica Athena – Periti Informatici node** list_search(node** n, int i) { while (*n != NULL) { if ((*n)->data == i) { return n; } n = &(*n)->next; } return NULL; } Docente: Daniele Prevedello Ricerca 11 Corso di informatica Athena – Periti Informatici void list_print(node* n) { if (n == NULL) { printf("la lista è vuota\n"); } while (n != NULL) { printf("print %p %p %d\n", n, n->next, n->data); n = n->next; } Docente: Daniele Prevedello Scorrimento 12 Corso di informatica Athena – Periti Informatici int main(void) { node *n = NULL; list_add(&n, 0); /* lista: 0 */ list_add(&n, 1); /* lista: 1 0 */ list_add(&n, 2); /* lista: */ list_add(&n, 3); /* lista: */ list_add(&n, 4); /* lista: */ list_print(n); list_remove(&n); /* rimuove il primo elemento (4) */ list_remove(&n->next); /* rimuove il nuovo secondo (2) */ list_remove(list_search(&n, 1)); /* rimuove la cella che contiene 1 (primo) */ list_remove(&n->next); /* rimuove il successivo (0) */ list_remove(&n); /* rimuove l'ultimo (3) */ list_print(n); return 0; } Docente: Daniele Prevedello 13 Corso di informatica Athena – Periti Informatici Code Docente: Daniele Prevedello 14 Corso di informatica Athena – Periti Informatici In Informatica per coda si intende una struttura dati di tipo FIFO, First In First Out (il primo in ingresso è il primo ad uscire).FIFO Un esempio pratico sono le code che in un paese civile si fanno per ottenere un servizio, come pagare al supermercato o farsi tagliare i capelli dal parrucchiere: idealmente si viene serviti nello stesso ordine con cui ci si è presentati. Docente: Daniele Prevedello 15 Corso di informatica Athena – Periti Informatici Operazioni su una coda: Accodamento di un elemento Detta anche operazione di enqueue, serve a mettere un elemento in coda. Estrazione di un elemento Detta anche operazione di dequeue, serve a rimuovere un elemento dalla testa della coda. Docente: Daniele Prevedello 16 Corso di informatica Athena – Periti Informatici Per implementare una coda viene utilizzato normalmente una lista concatenata.lista concatenata Ogni elemento della lista è un elemento della coda. Inoltre, viene conservata la testa coda come un puntatore. L'operazione di accodamento consiste nella normale operazione di inserimento di un elemento nella lista. L'operazione di estrazione consiste nel sostituire il puntatore che conserva la testa della coda con il secondo elemento della coda. È possibile inoltre utilizzare un array che può essere trattato come array circolare. Docente: Daniele Prevedello 17 Corso di informatica Athena – Periti Informatici Pile o Stack Docente: Daniele Prevedello 18 Corso di informatica Athena – Periti Informatici In informatica, il termine stack o pila viene usato per riferirsi a strutture dati le cui modalità d'accesso seguono una politica LIFO (Last In First Out).informaticaLIFO I dati vengono estratti (letti) in ordine rigorosamente inverso rispetto a quello in cui sono stati inseriti (scritti). Docente: Daniele Prevedello 19 Corso di informatica Athena – Periti Informatici Si immagini ad esempio una "pila di piatti" o una "pila di giornali; l'idea è che quando si pone un piatto nella pila lo si metta in cima, e che quando si preleva un piatto si prelevi, analogamente, quello in cima. In generale la pila è un particolare tipo di lista in cui le operazioni di inserimento ed estrazione si compiono dallo stesso estremo. Docente: Daniele Prevedello 20 Corso di informatica Athena – Periti Informatici Docente: Daniele Prevedello 21 Corso di informatica Athena – Periti Informatici Esercizi Realizzare una lista che permetta di memorizzare un numero non definito di valori interi inseriti da un utente. Stampare la lista. Permettere la ricerca di un valore nella lista. Permettere la cancellazione di un elemento. Docente: Daniele Prevedello 22 Corso di informatica Athena – Periti Informatici Ultimo compito: IMPARARE A MEMORIA COME IMPLEMENTARE UNA LISTA (CODA O PILA) E LE OPERAZIONI SU DI ESSE! Docente: Daniele Prevedello Scaricare ppt "Corso di informatica Athena – Periti Informatici Liste Code e Pile! Docente: Daniele Prevedello." Presentazioni simili Annunci Google
__label__pos
0.760208
Top definition The approach to problem solving that uses overwhelming mass (of money, people (particularly programmers) military power, etc) to solve a problem but ignores any requirement to consider how best to apply the mass of resources available or alternatives to its mass use. Three examples: 1. NASA Astronauts pen: Russian solution to NASA multi million Dollar 'works in zero gravity' pen - the pencil. 2. The second Gulf War. 3. In IT brute force and ignorance is a design technique adopted by many software houses; brute force coding that ignores (retrievable) knowledge of how problems have been solved before in more elegant ways. by Madrigor October 27, 2013 Mug icon The Urban Dictionary Mug One side has the word, one side has the definition. Microwave and dishwasher safe. Lotsa space for your liquids. Buy the mug
__label__pos
0.75123
TextMeshPro displays white blocks when creating prefab or changing localscale There are two scenarios in which my text is turning into white boxes/blocks that are the same width as the character I’m trying to display. 1) I serialize (save) a prefab with my text component as a child of the root object. When I instantiate that prefab at runtime, this happens. 2) I setup a UI image. I add a child with a TextMeshProUGUI component. I add a script that changes the localscale of the image on hover. At runtime, the text displays fine at first. When hovering over it (triggering the script), the text turns to white blocks. Both cases are using the “Text Mesh Pro UGUI” component. Unity 2017.1.3f1 | TextMesh Pro 1.0.56.0b3 Edit: Other than these cases, text shows up perfectly fine. Turns out the problem was that I was setting the z component of the local scale to zero. :slight_smile:
__label__pos
0.94953
MY CART (5) Use Coupon: CART20 and get 20% off on all online Study Material ITEM DETAILS MRP DISCOUNT FINAL PRICE Total Price: R There are no items in this cart. Continue Shopping Menu Get extra R 320 off USE CODE: MOB20 DEFINITE INTEGRAL AS LIMIT OF A SUM We have already discussed the concept of definite integral in the previous sections. Definite integral is closely related to concepts like antiderivative and indefinite integrals. In this section, we shall discuss this relationship in detail. Definite integral consists of a function f(x) which is continuous in a closed interval [a, b] and the meaning of definite integral is assumed to be in context of area covered by the function f from (say) ‘a’ to ‘b’. An alternative way of describing \int_{a}^{b}f(x)dx is that the definite integral \int_{a}^{b}f(x)dx is a limiting case of the summation of an infinite series, provided f(x) is continuous on [a, b] i.e., \int_{a}^{b}f(x)dx = lim_{n\rightarrow \infty} h\sum_{r=0}^{n-1}f(a+rh), where h = (b-a)/n  The converse is also true i.e., if we have an infinite series of the above form, it can be expressed as a definite integral.        • Express the given series in the form ∑ 1/n f (r/n) • Then the limit is its sum when n→∞, i.e. lim n→∞ h ∑1/n f(r/n) • Replace r/n by x and 1/n by dx and lim n→∞ ∑ by the sign of ∫. • The lower and the upper limit of integration are the limiting values of r/n for the first and the last term of r respectively. Some particular cases of the above are • lim n→∞ ∑nr =1 1/n f(r/n) or lim n→∞ ∑n–1r = 0 1/n f(r/n) = ∫10 f(x)dx • lim n→∞ ∑pnr =1 1/n f(r/n) = ∫βα f(x)dx where α = lim n→∞ r/n = 0 (as r = 1) and β = lim n→∞ r/n = p (as r = pn)  Illustration 1: Show that (A) lim n→∞ {1/(n+1) + 1/(n+2) + 1/(n+3) + ... + ... 1/(n+n)} = ln 2. (B) lim n→∞ 1p + 2p + 3p + ... + np/(np +1) = 1/(p +1)   (p > 0) Solution:  (A) Let I = lim n→∞ (1/n+1 + 1/n+2 + 1/n+3 + ... + 1/n+n)                           = lim n→∞ {1/(n+1) + 1/(n+2) + 1/(n+3) + ... + 1/(n+n)}                           = lim_{n\rightarrow \infty} 1/n \sum_{r=1}^{n}1/(1+r/n) Now α = lim n→∞ 1/n = 0  (as r = 1)  and β = lim n→∞ r/n = 1  (as r = n) ⇒ l = ∫10. 1/1+x dx = [In (1+x)] 10  ⇒ I = ln 2. (B) 1p + 2p + 3p + ... + np/(np +1) = ∑nr=1 1p/n.np = ∑nr=1 1+n(r/n)p  Take f(x) = xp; Let h = 1/n so that as n → ∞; h → 0 ∴    limn→∞ ∑nr =1 1/n f(0 + r/n) = ∫10 f(x)dx = ∫10 xpdx  = 1/p+1 Leibnitz’s Rule If g is continuous on [a, b] and f1 (x) and f2 (x) are differentiable functions whose values lie in [a, b], then d/dx ∫ f2(x) f1(x) g(t)dt = g (f2(x)) f2'(x) – g (f1(x)) f1'(x) Illustration 1:          If f(x) = cos x - \int_{0}^{x} (x-t), then show that {f}''(x) + f(x) = - cos x                                  Solution:                          f ‘(x) = – sin x – (x f (x) + ∫x0 f(t) dt) + x f(x)        = –sin x – ∫x0 f(t)dt            f “(x) = – cos x – f(x) Hence, this gives f “(x) + f(x) = cos x. ____________________________________________________________________________ Illustration 2:          If a function f(x) is defined ∀x ∈ R such that g(x) = \int_{x}^{a}f(t)/t dt, then prove that \int_{0}^{a}g(x) dx = \int_{0}^{a}f(x) dx. Solution: g(x) = \int_{x}^{a}f(t)/t dt Diffrentiate w.r.t. x g’(x) = – F(x)/x F(x) = -x g’(x) \int_{0}^{a}f(x)dx = - \int_{0}^{a}x {g}'(x)dx NOw, we know that g(a) = 0 and hence we get \int_{0}^{a}f(x) dx = -ag(a) + \int_{0}^{a}g(x) dx = \int_{0}^{a}g(x) dx Hence, \int_{0}^{a}g(x) dx = \int_{0}^{a}f(x) dx. ______________________________________________________________________________ Illustration 3:     Determine a positive integer n < 5, such that \int_{0}^{1}e^{x} (x-1)^{n} dx = 16 - 6e Solution:   Let I_{n} = \int_{0}^{1}e^{x} (x-1)^{n} dx           Integrating by parts, \left [e^{x}(x-1)^{n} \right ]^{1}_{0} - \int_{0}^{1}e^{x}n(x-1)dx = (-1)^{n}dx I_{n} = \left [e^{x}(x-1)^{n} \right ]^{1}_{0} - \int_{0}^{1}e^{x}n(x-1)dx = (-1)^{n+1} = nI_{n-1} …... (1) Also, I_{1} = \int_{0}^{1}e^{x}n(x-1)^{n-1}dx = \left [((x-1)e^{x}) \right ]^{1}_{0}- \int_{0}^{1}e^{x}ex 1 dx – (–1) – [ex]10 = 1 – (e–1) = 2 – e From (i), I2 = ( -1)3 - 2I1 = -1 - 2(2 - e) = -5 + 2e and I3 = (-1)4 - 3I2 = 1 - 3(-5 + 2e) = 16 - 6e Which is given . ∴ n = 3. Antiderivative Concept with the Area Problem This theorem state that If f(x) is a continuous function on [a, b] and F(x) is any anti derivative of f(x) on [a, b] i.e. F'(x) = f (x) ∀ x ∈ (a, b), then \int_{a}^{b}f(x) dx = F(b) - F(a)  The function F(x) is the integral of f(x) and a and b are the lower and the upper limits of integration. Illustration 1: Evaluate \int_{-2}^{2}dx/(4+x^{2})  directly as well as by substitution x = 1/t. Evaluate why the answers don't tally. Solution: I = \int_{-2}^{2}dx/(4+x^{2}) = [1/2 tan–1 (x/2)]2–2 = 1/2 [tan–1(1) – tan–1 (–1)]    = 1/2 [π/4 – (–π/4)] = π/4 ⇒ l = π/4 On the other hand; if x = 1/t then, I = \int_{-2}^{2}dx/(4+x^{2}) Solving this and simplifying, we get I = – [1/2 tan–1 (2t)]1/2–1/2 = –1/2 tan–11 – (–1/2 tan–1 (–1)) = –π/8 – π/8 = –π/4 ∴ I = π/4 when x = 1/t  In above two results l = -π/4 is wrong. Since the integrand ¼ + x2 > 0 and therefore the definite integral of this function cannot be negative. Since x = 1/t is discontinuous at t = 0, the substitution is not valid (∴ I = π/4). Note: It is important the substitution must be continuous in the interval of integration. ________________________________________________________________________________________ Illustration 2: Let \alpha = \int_{0}^{\infty}dx/(x^{4}+ 7x^{2}+1) and \beta = \int_{0}^{\infty}x^{2}dx/(x^{4}+ 7x^{2}+1) then show that α = β. Solution: \alpha = \int_{0}^{\infty}dx/(x^{4}+7x^{2}+1) Put x = 1/t ⇒ dx = –1/t2 then \alpha = \int_{0}^{\infty}-1/t^{2}dt/ (1/t^{4} + 7/t^{2} + 1) = \int_{0}^{\infty} dt. t^{2}/ (t^{4} + 7t^{2} + 1) = \int_{0}^{\infty}t^{2} dt/ (t^{4} + 7t^{2} + 1) = \beta . Q1. The Leibnitz’s rule can be applied only if  (a) the function g is continuous in [a, b] and the functions f1 and f2 are differentiable functions whose values may lie within or outside [a, b]. (b) the function g is continuous in [a, b] and the functions f1 and f2 are differentiable functions whose values lie outside [a, b]. (c) the function g is continuous in [a, b] and the functions f1 and f2 are differentiable functions whose values lie in [a, b]. (d) the function g is discontinuous in [a, b] and the functions f1 and f2 are differentiable functions whose values may lie withing or outside [a, b]. Q2. Fundamental theorem of calculus states that \int_{a}^{b}f(x)dx = F(b) – F(a), where (a) f(x) is a continuous function on [a,b] and F(x) is the derivative of f(x) on [a, b]. (b) f(x) is a continuous function on [a,b] and F(x) is the anti derivative of f(x) on [a, b]. (c) f(x) is a discontinuous function on [a,b] and F(x) is the derivative of f(x) on [a, b]. (d) F(x) is a continuous function on [a,b] and f(x) is the derivative of f(x) on [a, b]. Q3. In order to express infinite series as definite integral,  (a) sign of summation is replaced by integration. (a) sign of integration is replaced by summation. (a) infinite series can’t be expressed as definite integral. (a) None of the above. Q4. The definite integral  \int_{a}^{b}f(x)dx is a limiting case of  (a) infinite series (b) finite series (c) infinite sequence (d) indefinite integral. Q5. lim _{n\rightarrow \infty} \sum_{r=0}^{n-1}1/n f(r/n) = (a)  \int_{0}^{2}f(x)dx (b)  \int_{1}^{0}f(x)dx (c)  \int_{0}^{2}f(x)dx   (d) \int_{0}^{1}f(x)dx Q1 Q2 Q3 Q4 Q5 (c) (b) (a) (a) (d) Related Resources: To read more, Buy study materials of Definite integral comprising study notes, revision notes, video lectures, previous year solved questions etc. Also browse for more study materials on Mathematics here. • Complete JEE Main/Advanced Course and Test Series • OFFERED PRICE: R 15,000 • View Details Get extra R 3,000 off USE CODE: MOB20
__label__pos
0.99786
Data Transformation (Part 1) | Normalisation Techniques Data Transformation | Normalisation Techniques Normalisation Techniques Almost always when we get raw data in any project, it is unfit for direct consumption for analysis or modelling . It is a especially a concern when the data volume is huge for example in a big data analytics project . In this blog post I cover a few of the most common transformations and their use. Normalisation The process of normalization entails converting numerical values into a new range using a mathematical function. There are two primary reasons why this may be used. To make two variables in different scales comparable In a profile of a customer where I may have two variables - years of education and income. We might want both these to be treated equally but their ranges are very different. Plotting them on a graph may make it impossible to decipher any correlation between these two variables. However, normalization would bring them on to the scale and the relationship would clearly stand out. Some models may need the data to be normalized before modeling KNN models, for example, require a pre-requisite normalization for the model to produce effective results. Refer to this article for greater details. Some common normalization methods are as follows. Min-Max Min-Max is probably the most commonly used transformation. This transforms the numerical variable into a new range, for example, 0 to 1. It is calculated by the formula given below. min max For example, consider the range of marks that a set of students have scored by roll number given below Roll NumberMarks 110 215 350 460 If we were to normalize it between the ranges of 0 to 1 we would get the following Roll NumberCalculationNormalised marks 1 min max 0 2 min max .1 3 min max .8 4 min max 1 As we can see above that we have taken max as the maximum marks as obtained by the student as opposed to the maximum marks possible. However if from the original data set, it is possible to determine the maximum ranges then that is what we should be using and 60 in the above formulae should have been replaced by 100. z-score Simply put the z-score is the number of standard deviations a data point is from the mean of the data set. To be able to understand this we must understand what is standard deviation. The formula for z-score is as below zscore where zscore is the mean and zscore is the standard deviation For the above data set the z-score calculation of each observation is as follows. mean is 33.75 and Standard Deviation is 24.95 Roll NumberMarksZ-Score 110-0.7037037037 215-0.5555555556 3500.4814814815 4600.7777777778 Box-Cox Transformation It is not necessary for a data set to adhere to a normal distribution. However many data analysis methods require the data distribution to be normal. Box-Cox is a transformation that can be used to convert any distribution to a normal distribution. Every dataset may not benefit from a Box-Cox transformation, for example, if there are significant outliers box-cos may not help. The box-cox transformation in mathematical form is denoted as boxcox where λ is the exponent (power) and δ is a shift amount that is added when X is zero or negative. When λ is zero, the above definition is replaced by boxcox As you can very well imagine the trick is to find the right value of λ to get a normal distribution. Usually, the standard λ values of -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, and 2 are investigated to determine which, if any, is most suitable. However, a maximum likelihood estimation can be used to determine the best possible value of λ to get a more normal distribution. To understand Box-Cox transformation lets look at a non normal data-set and see the impact of the transformation on it. boxcox In part II of this series of normalisation, we will discuss Aggregations, Value Mapping and Discretization.
__label__pos
0.877661
Batch Apex in Salesforce A Batch Apex in Salesforce allows you to define a single job that can be broken up into manageable chunks that will be processed separately. Batch Apex is a global class that implements the Database.Batchable interface. In this post, we will learn about what is Batch Apex and when to use Batch Apex example. What is Batch Apex in Salesforce? The Batch class is used to process millions of records within normal processing limits. With Batch Apex, we can process records asynchronously to stay within platform limits. If you have a lot of records to process, for example, data cleansing or archiving, Batch Apex is probably your best solution. In Batch Apex each transaction starts with a new set of governor limits, making it easier to ensure that your code stays within the governor execution limits. When to use Batch Apex Here is the real-time scenario to use the Apex Batch job. 1. If you want to send an email 90 days before the Contract expiration, we want to notify the User that the Contract is going to expire. 2. Create a renewal opportunity base on contract end date. 3. Batch Apex is also used to recalculate Apex Sharing for large data volume. Batch Job Execution Let understand how batch job execution happened and in which order all method execute. 1) Start method is identifies the scope (list of data to be processed) and automatically called at the beginning of the apex job. This method will collect record or objects on which the operation should be performed. global Database.QueryLocator start(Database.BatchableContext BC){} 2) Execute Method processes a subset of the scoped records and performs operation which we want to perform on the records fetched from start method. global void execute(Database.BatchableContext BC, list<sobject<) {} 3) Finish method executes after all batches are processed. This method use for any post job or wrap-up work like send confirmation email notifications. global void finish(Database.BatchableContext BC) {} Batch Apex in Salesforce Batch Job Execution Batch Apex Example In Salesforce Let take one example for batch job in Salesforce. If you need to make a field update to every Account in your organization. If you have 10,001 Account records in your org, this is impossible without some way of breaking it up. 1. So in the start() method, you define the query you’re going to use in this batch context ‘select Id from Account’. 2. Then the execute() method runs, but only receives a relatively short list of records (default 200). Within the execute(), everything runs in its own transactional context, which means almost all of the governor limits only apply to that block. Thus each time execute() is run, you are allowed 150 queries and 50,000 DML rows and so on. When that execute() is complete, a new one is instantiated with the next group of 200 Accounts, with a brand new set of governor limits. 3. Finally the finish() method wraps up any loose ends as necessary, like sending a status email. Batch Apex Example In Salesforce Sample Batch Job here is sample batch job. global class AccountUpdateBatchJob implements Database.Batchable<sObject> { global Database.QueryLocator start(Database.BatchableContext BC) { String query = 'SELECT Id,Name FROM Account'; return Database.getQueryLocator(query); } global void execute(Database.BatchableContext BC, List<Account> scope) { for(Account a : scope) { a.Name = a.Name + 'Updated by Batch job'; } update scope; } global void finish(Database.BatchableContext BC) { } } Using Callouts in Batch Apex To do the callout in batch Apex, use Database.AllowsCallouts in the class definition. Like below example. public class SearchAndReplace implements Database.Batchable<sObject>, Database.AllowsCallouts{ } What is stateful in the batch apex Batch Apex is stateless by default. If a batch process needs information that is shared across transactions, then you need to implement a Stateful interface. If you specify Database.Stateful in the class definition, you can maintain state across these transactions. For example, if your batch job is executing one logic and you need to send an email at end of the batch job with all successful records and failed records. For that, you can use Database.Stateful in the class definition. public class MyBatchJob implements Database.Batchable<sObject>, Database.Stateful{ public integer summary; public MyBatchJob(String q){ Summary = 0; } public Database.QueryLocator start(Database.BatchableContext BC){ return Database.getQueryLocator(query); } public void execute( Database.BatchableContext BC, List<sObject> scope){ for(sObject s : scope){ Summary ++; } } public void finish(Database.BatchableContext BC){ } } How to execute a Batch job? You can execute a batch job with DataBase.executeBatch(obj,size) method or we can schedule a batch job with scheduler class. Let see how to execute a batch job AccountUpdateBatchJob obj = new AccountUpdateBatchJob(); DataBase.executeBatch(obj); How to execute a Batch job Scheduler Class For Batch Apex Schedulable Class is a global class that implements the Schedulable interface. That includes one execute method. Here is example of a scheduler class global class AccountUpdateBatchJobscheduled implements Schedulable { global void execute(SchedulableContext sc) { AccountUpdateBatchJob b = new AccountUpdateBatchJob(); database.executebatch(b); } } How to Schedule scheduler class There are two option we have schedule the scheduler classes. 1) By System Scheduler. 2) By Developer console System Scheduler Step 1) Click on Setup->Apex class. Then search Schedule Apex button. Step 2) Select the scheduler class and set Time like below screen shot. Scheduler Class For Batch Apex By Developer console Execute below code from developer console. AccountUpdateBatchJobscheduled m = new AccountUpdateBatchJobscheduled(); String sch = '20 30 8 10 2 ?'; String jobID = system.schedule('Merge Job', sch, m); Test Class for Batch Job How to write test class for Batch Job. @isTest public class AccountUpdateBatchJobTest { static testMethod void testMethod1() { List<Account> lstAccount= new List<Account>(); for(Integer i=0 ;i <200;i++) { Account acc = new Account(); acc.Name ='Name'+i; lstLead.add(acc); } insert lstAccount; Test.startTest(); AccountUpdateBatchJob obj = new AccountUpdateBatchJob(); DataBase.executeBatch(obj); Test.stopTest(); } } Limits in Batch Apex 1. Up to five queued or active batch jobs are allowed for Apex. 2. Cursor limits for different Force.com features are tracked separately. For example, you can have 50 Apex query cursors, 50 batch cursors, and 50 Visualforce cursors open at the same time. 3. A maximum of 50 million records can be returned in the Database.QueryLocator object. If more than 50 million records are returned, the batch job is immediately terminated and marked as Failed. 4. If the start method returns a QueryLocator, the optional scope parameter of Database.executeBatch can have a maximum value of 2,000. If set to a higher value, Salesforce chunks the records returned by the QueryLocator into smaller batches of up to 2,000 records. If the start method returns an iterable, the scope parameter value has no upper limit; however, if you use a very high number, you may run into other limits. 5. If no size is specified with the optional scope parameter of Database.executeBatch, Salesforce chunks the records returned by the start method into batches of 200, and then passes each batch to the execute method.Apex governor limits are reset for each execution of execute. 6. The start, execute, and finish methods can implement up to 10 callouts each 7. The maximum number of batch executions is 250,000 per 24 hours. 8. Only one batch Apex job’s start method can run at a time in an organization. Batch jobs that haven’t started yet remain in the queue until they’re started. Note that this limit doesn’t cause any batch job to fail and execute methods of batch Apex jobs still run in parallel if more than one job is running. Summary In this post we learn about what is Batch Apex in Salesforce and when we should use Batch Apex. I hope this helped you to learn how batch class is used to process millions of records with in normal processing limits. Amit Chaudhary Amit Chaudhary Amit Chaudhary is Salesforce Application & System Architect and working on Salesforce Platform since 2010. He is Salesforce MVP since 2017 and have 17 Salesforce Certificates. He is a active blogger and founder of Apex Hours. Articles: 460 9 Comments Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.960228
TopCoder problem "Telltale" used in TCHS SRM 63 (Division I Level Two) Problem Statement      Your friend likes to go to parties and tell stories. At each party, he tells his favorite tale. The tale can be told in two variants - true or exaggerated. The exaggerated variant of the story sounds much cooler, so he wants to use it as often as possible, but he doesn't want to be known as a liar. The problem is that some people already know the true variant of this story, so if any of those people are at the same party as him, he must tell the story truthfully. Also, if a person hears different variants of the story at different parties, she's also able to detect that your friend is a liar, so he must avoid this as well. You are given an int n, the total number of people (numbered 1 through n). The people who already know the true variant of the story are given in the int[] a. The people at each party that your friend will attend are given in the String[] parties, where the i-th element is a space-separated list of all the people attending the i-th party. Return the maximum number of parties where your friend can tell the exaggerated variant of his story and not have anybody detect that he's a liar.   Definition      Class:Telltale Method:doNotTellTheTruth Parameters:int, int[], String[] Returns:int Method signature:int doNotTellTheTruth(int n, int[] a, String[] parties) (be sure your method is public)        Constraints -n will be between 1 and 50, inclusive. -a will contain between 0 and n elements, inclusive. -Each element of a will be between 1 and n, inclusive. -All elements of a will be distinct. -parties will contain between 1 and 50 elements, inclusive. -Each element of parties will contain between 1 and 50 characters, inclusive. -Each element of parties will contain a single-space separated list of integers without leading or trailing spaces. -Integers in each element of parties will be between 1 and n, inclusive, with no leading zeros. -Integers in each elements of parties will be distinct.   Examples 0)      4 {} {"1 2", "3", "2 3 4"} Returns: 3 Nobody knows the true story, so your friend can tell the exaggerated variant at all 3 parties. 1)      4 {1} {"1 2 3 4"} Returns: 0 Person 1 knows the true story. There's only one party, and she is present there, so your friend can never tell the exaggerated variant. 2)      4 {} {"1 2 3 4"} Returns: 1 Now person 1 doesn't know the true story, so your friend can tell the exaggerated variant. 3)      4 {1} {"1", "2", "3", "4", "4 1"} Returns: 2 Person 1 knows the true story. She's present at parties 0 and 4 (0-indexed), so he must tell the true variant at those parties. Person 4 will hear the true story at party 4, and she will also be at party 3, so he must tell the true variant at party 3. So, the exaggerated variant can only be told at parties 1 and 2. 4)      10 {1, 2, 3, 4} {"1 5", "2 6", "7", "8", "7 8", "9", "10", "3 10", "4"} Returns: 4 5)      8 {1, 2, 7} {"3 4", "5", "5 6", "6 8", "8"} Returns: 5 6)      3 {3} {"1", "2", "1 2", "1 2 3"} Returns: 0 Problem url: http://www.topcoder.com/stat?c=problem_statement&pm=10205 Problem stats url: http://www.topcoder.com/tc?module=ProblemDetail&rd=13532&pm=10205 Writer: DStepanenko Testers: PabloGilberto , Olexiy , ivan_metelsky Problem categories: Dynamic Programming, Graph Theory, Search, Simple Search, Iteration, Simulation
__label__pos
0.711423
Skip to main content Process JSON Various functions are provided to extract values from JSON documents. You can also use the shortcut <json>::<path> to extract the string value for specified JSON path, e.g. raw::b.c to get value "1" from {"a":true,"b":{"c":1}}. Then you can convert it to other data types using to_int() or ::int shortcut. json_extract_int json_extract_int(json, key) to get the integer value from the specified JSON document and key. For example json_extract_int('{"a":10,"b":3.13}','a') will get the number 10. You can also use the shortcut col:a::int. json_extract_float json_extract_float(json, key) to get the float value from the specified JSON document and key. For example json_extract_int('{"a":10,"b":3.13}','b') will get the float value 3.13. You can also use the shortcut col:a::float. json_extract_bool json_extract_bool(json, key) to get the boolean value from the specified JSON document and key. For example json_extract_bool('{"a":true}','a') will get the boolean value true or 1. You can also use the shortcut col:a::bool. json_extract_string json_extract_string(json, key)to get the string value from the specified JSON document and key. For example json_extract_string('{"a":true,"b":{"c":1}}','b') will get the string value {"c":1} and you can keep applying JSON functions to extract the values. You can also use the shortcut col:b to get the string value, or col:b.c::int to get the nested value. json_extract_array json_extract_array(json, key)to get the array value from the specified JSON document and key. For example json_extract_array('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') will get the array value ['-100','200','"hello"'] If the entire JSON document is an array, the 2nd parameter key can be omitted to turn the json string as an array, e.g. json_extract_array(arrayString). You can also use the shortcut col:b[*]. json_extract_keys json_extract_keys(jsonStr) to parse the JSON string and extract the keys. e.g. select '{"system_diskio_name":"nvme0n1"}' as tags,json_extract_keys(tags) will get an array: [ "system_diskio_name" ] is_valid_json is_valid_json(str) to check whether the given string is a valid JSON or not. Return true(1) or false(0) json_has json_has(json, key) to check whether a specified key exists in the JSON document. For example json_has('{"a":10,"b":20}','c')returns 0(false). json_value json_value(json, path) allows you to access the nested JSON objects. For example, json_value('{"a":true,"b":{"c":1}}','$.b.c') will return the number 1 json_query json_query(json, path) allows you to access the nested JSON objects as JSON array or JSON object. If the value doesn't exist, an empty string will be returned. For example, json_query('{"a":true,"b":{"c":1}}','$.b.c') will return an array with 1 element [1] In a more complex example, json_query('{"records":[{"b":{"c":1}},{"b":{"c":2}},{"b":{"c":3}}]}','$.records[*].b.c') will get [1,2,3]
__label__pos
0.919973
What is Augmented Data Management? What is Augmented Data Management? Augmented data management refers to the automation of data quality checks and the improvement of manual data cleansing processes. Data mesh and data fabric are two new data management techniques that aim to tackle the issues of understanding, managing, and dealing with company data in a hybrid multi-cloud environment. The excellent news is both approaches to data architecture are complementary. But what do a data mesh and data fabric mean, and how do you utilize these data governance tools to make better decisions with your company data? What is a Data Fabric? According to Gartner, a design approach functions as a consolidated layer of connecting processes and data. Data fabrics employ continual analytics over available, discoverable, and inferred metadata to facilitate the creation, distribution, and use of consolidated and re-usable datasets in all settings, including hybrid and multi-cloud platforms. The data fabric architectural concept help organizations ease data access and enable large-scale personal data usage. This method simplifies data silos, opening new possibilities for data management, integration, specific customer perspectives, and reliable AI deployments, among many industry application cases. The abstraction layer of data fabrics makes it simpler and more convenient to model, consolidate, and analyze any data source, design data pipelines, and consolidate data in real-time because it is metadata-driven. By automating manual operations throughout data platforms using machine learning, a data fabric facilitates obtaining insights from data by improving data observability and quality. For the data users, this enhances data engineering productivity and time-to-value. What is a Data Mesh? A data mesh is a decentralized socio-technical strategy to distribute, access, and control analytical data in sophisticated and massive ecosystems—within or across enterprises employing, according to Forrester. The data mesh infrastructure aligns data sources with data proprietors based on business categories or functionalities. Data proprietors can generate data products for their domains due to data ownership decentralization, which means data users, both data scientists and business customers, can employ a blend of these data products for data analytics and science. The benefit of the data mesh is that it delegates data product generation to subject matter experts upstream who are most acquainted with the business domains, rather than depending on data engineers to cleanse and integrate data products downstream. Furthermore, by supporting a publish-and-subscribe paradigm and using APIs, the data mesh speeds up the re-use of data products, making it more straightforward for data users to obtain the needed data, including reliable updates. What is the Difference Between Data Fabrics and a Data Mesh? Both can exist in the same system. Data fabrics facilitate the deployment of a data mesh in three ways: ● Allows data proprietors to create data products by cataloging assets, translating resources into products, and adhering to federated management principles. ● Permit owners and users utilize data products in various ways, including publishing data products to a catalog, searching and finding data products, and querying or visualizing data products using APIs or data virtualization. ● As part of the data product creation process or data product monitoring, use insights from data fabric metadata to automate tasks by learning from patterns. Data fabrics allow you to begin with a use case and get quick time-to-value irrespective of data location. In data management, data fabrics can help you adopt and use a data mesh to its full potential by automating a lot of the problems involved in creating data products and managing their lifetime. You may build a data mesh leveraging the data fabric foundation's agility, continuing to benefit from a use case-centric data architecture irrespective of your data location. Related Articles Explore More Special Offers 1. Short Message Service(SMS) & Mail Service 50,000 email package starts as low as USD 1.99, 120 short messages start at only USD 1.00
__label__pos
0.985483
Introduction to Docker and Kubernetes: Part I diagram vm.png Few explanations about Docker, differences between Image and Container and Overview of Kubernetes Concept Long story before containerization arises, we tend to use VMs approach on deploying applications. Every machine will have their OS with its libraries, makes the apps size is pretty huge as its contains OS. The diagram is closely like below: VM runs on top of Hypervisor, if you have heard of VMware, Virtual Box, Hyper-V that are some examples of hypervisor. Each app is isolated within VM alongside with the OS and this is similar to container, the fundamentals that must understand by developers nowadays. Then, what is container? Basic concept of container is similar to container in real life, which is the way you package everything into one. Like this cargo container below: container.jpeg Cargo container. Source: wikipedia (source) You might often hearing these questions from your colleagues: My app works smoothly on your laptop, but why it doesn’t work on my machine? Most likely they haven’t implemented containerization on their applications that forces them to do installations step by step every time they want to test the app in local machine. Moreover the installations of its dependencies are differ on each Operating Systems. Let’s say that when you install NodeJS in Windows has different steps when you install it on Linux. But with container, we won’t get bother by doing such repetitive things that will save us much time and could focus on the development instead. More specifically, container wraps and package the code, dependencies, configurations in one file and makes it isolate from each other and with the OS. Though they are isolated but they can communicate by networks which is the happy part of containerzation! As we saw before that VMs run on top of Hypervisor meanwhile container runs on top of some sort of it we call it as container runtime. There are some container runtimes but the most popular is Docker. In world of virtualization and container, actually the things discussed above is more considered as definition of Image. Then what’s the difference between image and container? Image vs Container To get easier on understanding both terms, we can use Object Oriented Programming analogy, think of Image as Class and container as Object. We can instantiate many Objects from a Class, and so does container. We could create and run many containers from an image. Then in short we could say that actually containers is image that running. Let’s move straight to Docker, one of container runtime that popular among developers. Docker There is a brief explanation from Wikipedia about Docker’s definition: Docker is set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Wikipedia (source) With Docker, you can deliver your apps much easier despite what OS you used when development. Good things is Docker is available in Windows, Mac OS and Linux. This article won’t cover the installation step so if you insist to try on your machine, kindly follow these steps and choose based on your OS. Just like containerized app diagram, Docker play its role as container runtime that manages the containers app and runs on top of Operating System. Some key features when running container images in Docker — called as Docker Engine are: • Standard: Docker created the industry standard for containers, so they could be portable anywhere • Lightweight: Containers share the machine’s OS system kernel and therefore do not require an OS per application, driving higher server efficiencies and reducing server and licensing costs • Secure: Applications are safer in containers and Docker provides the strongest default isolation capabilities in the industry These are written on Official Docker’s page. That’s all about Docker and let’s go to practices section. Pull Public Image First of all, to interact with Docker we will use Docker CLI that are included on your Docker installation. To ensure that Docker has been installed successfully, check its version by executing docker --version on your terminal: Then, we can try to pull images from docker hub , let’s take Postgres Official Image as an example. To pull the image, run docker pull postgres:13.5 If you haven’t pull the image, then it will download from docker hub then store it to your local image cache. To ensure that the image has been pulled successfully, execute docker images on your terminal: As you might see there is postgresimage with tag 13.5, but you might be confused as well why there is another postgres image. If you looked at it more details, the other image has different tag that is 13-alpine . Every docker image has tag, consider this tag as versioning system. So we could say that now we have postgres installed in our machine that acts as Image with version 13.5, fortunately Postgres provide us the docs that straight to version 13.5, kindly check here. Run the Docker Image In order to running system/app/program , simply just need to run the pulled image — called it by Docker Container with docker run command: docker run --name <container-name> -e POSTGRES_PASSWORD=<pg-password> -d postgres:13.5 There are some parameters here: 1. --name is argument that used to specify the container name when it's running. Good things, docker is excellent at recognizing each container by giving name that results from auto generate if we didn't specify the container name. 2. -e is argument to specify environment variable that want to override when running the image. Postgres image does support some of environment variables, go check them here. 3. -d is to run the image in detach, means that the container will run in background and you can do other stuffs while container is initiating and running. 4. postgres:13.5 is the image name, don't forget to specify the tag after colon symbol : otherwise it will recognize to use latest image tag which some of part we don't want that to happened. docker run also has more than those options, check them here Other Way: Create Container then Start It What docker run actually did is create container from images then running them. For some purpose we could create the container only, and run them later by executing docker create command. The definition of docker create from Docker Official Documentations The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command Docker Official Documentation (source) Then create the container from image with: docker create --name <container-name> -t -i -e POSTGRES_PASSWORD=<pg-password> postgres:13.5 To learn more about doker create please go here. After you executed the command it will return container ID as the STDOUT in your terminal. To ensure that the container has been created, use docker ps -a You might see that the STATUS of container is Created means that the container has been created, but its not running yet. To run it, use docker start docker start <container-name>|<container-id> This command will instruct Docker server to run the created container into Up status, or Exited if there is error. Run docker ps again to check: The postgres container state now changed from Created to Up for 6 minutes in this case, and the Ports is allocated as well. We already know that the container has been running but how to see what’s the container doing back there? This is where logging play its role, Docker has provide us the command to see our container’s log by using docker logs docker logs <container-name>|<container-id> Here is the output would like: Since we used Docker Image of Database tools, make sure the container logs showing that they are ready to accept connections before you did any transactions. For some reason, you might want to stop the container as it uses your machine resource as long as the container is running. You can do it easily by docker stop docker stop <container-name>|<container-id> This will put the container into Exited status but still remains in container list when we execute docker ps -a, to completely remove it we can use docker rmcommand like below: docker rm <container-name>|<container-id> Note that you can’t remove the container unless you stop it first otherwise it will lead you to error: Though it’s possible to force remove but it’s not kind of thing that’s recommend to do. Run Command in Container It’s also possible to execute a command inside running container to know what’s actually inside it. To do that, use docker exec docker exec -it <container-name>|<container-id> <command> To give you more understanding on this command, let’s try to explain some of its arguments: -i is same as interactive that will help us to interact with the container as they provide STDIN to the user. If we remove the -i argument then we won't get any feedback as our STDIN can't be processed by the container. -t in exec helps to produce pretty output to the user that interacts with the container. If we remove this argument then we will think that the container is hanging at first, while they are waiting for our input. bash is one of common command that you can choose if you want to run bash script inside the container. docker exec often helps us to understand what are things inside a container as we might having blurry of understanding what's container actually is. Also, we could interact with the running container as what we can do in UNIX terminal. Refer here for docker execreference Though there are still other topics related to Docker i.e Volume, Networking, Docker Compose but in this post I tried to give the basic things when we learning Docker for first time. Kubernetes Okay, so we already learned about Docker in introduction. It provides you easiness to deploy and deliver your apps without worrying of setup and installing libraries, however using Docker only ain’t scalable at all. And here comes Kubernetes to scale our apps! From Kubernetes.io documentation, the definition is: Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation Kubernetes Official Documentation (source) To put it simply, kubernetes is open source platform to managed containerized application. It’s also known as k8s because there are 8 letters between k and s. So, how’s k8s works and its architecture? From architecture above, it gives us some terms that relates to k8s components, they are: Cluster Components • control plane: is part of k8s cluster which responsibles to manage nodes and pods, it also expose the kube api. • api (kubeapi-server): it plays role as Front End for kube control plane. Simply say, to interact with kube cluster we will communicate through kube-apiserver. • sched (kube scheduler): watches newly created pods with unassigned node, and then choose a node for pods to run on, since pods can’t run jobs without node. • c-m (kube controller manager): to run controller process that watches state of our cluster. It will try to move the current cluster state to desired state. • c-c-m (cloud controller manager): to link our cluster with cloud provider API (GCP, AWS), it only runs controller that specific to our cloud provider in which they won’t run AWS controller if we are using GCP, vice versa. • etcd: key value storage uses for persistent store, it’s like k8s database. Node Components Node is actually a running virtual machine that resides in kube cluster, and we could scale the node vertically and horizontally. Vertical scaling means that we upgrade the specification of node i.e its CPU, memory, or storage but speaking the facts that horizontally scale has limit and at certain point it can’t be scale in horizontal way anymore. So, in the end it’s recommend to do horizontal scaling because it has no limit as long as we can afford the price. Horizontal scaling means that we spawn new nodes in the cluster to share the loads with another node, usually we face this case when there is high traffic or some data needs to be processed in parallel thus we need more nodes by using data processing concept like Map Reduce. Speaking about node, it has some components that we need to know about: • kubelet: agent that runs on each node to make sure containers running smoothly in a pod. • kube-proxy: is a network proxy to maintain networking rules on nodes, and allowing communications to your pod from network session inside or outside of your cluster. • container runtime/container manager: it’s a software that responsible for spawn & running containers. There are some of container runtimes such as Docker, containerd, CRI-O, and runC. The first thing I noted when learning Kubernetes is about their hierarchial components and it’s helping me to have general understanding about k8s. I could say that: In hierarchial, k8s has one or more cluster, each cluster has many nodes. Each node has one or more pods, each pod has one or more containers. Container terms in k8s has exactly same meaning with container in Docker that we discussed before, that’s why we need to understand first about container. That’s all for introduction to Kubernetes, let’s put more focus on the hands on session. When reading about k8s you might got a question in mind. Do we have to setup the cluster from scratch, or using cloud like Google Kubernetes Engineor Amazon Kubernetes Service for the sake of learning and exploring kube?Happily, we don’t need to do both. The community has developed “mini cluster” to facilitate us in terms of learning k8s in practical way. There are some options and personally I didn’t have recommendation on which is best to use but I’m using minikube until this article is written, and this post will using minikube as our demo. Minikube helps us to setup local cluster with single node and already supported in Windows, Linux, and macOS. To install it on your machine, take a look on their docs here. Ensure that minikube has been installed by checking its version with minikube version While minikube acts to spawning cluster with its node, we need a way to communicate to the cluster and that’s kubectl plays the part! kubectl is tools developed by community to interact with kube-apiserver. Remember that kube-apiserver is k8s front end, it means that we can interact with kubernetes cluster using kubectl . If you haven’t install it on your machine, please follow this guide and install it based on your Operating System. Once you have kubectl in your local, be sure to check it’s version by typing kubectl version in terminal. We could see wo informations in the output: • Client version states the kubectl version. kubectl acts as client. • Server version states the kubernetes cluster version. Now we have minikube and kubectl in our machine, what’s next? In kubernetes, there are some objects/resources that we need to configure using kubectl, actually you might not need to use all of them in your apps/projects but having knowledge on it surely will help you a lot and gives you reasoning on when we use certain object and why we use it instead of others. To help you understand about kube objects, here I got some articles for you: If you are looking for tutorial in Video version, I have some recommendations to take a look. Don’t forget to support their channel as well! Indonesian Version English Version After you feel more familiar with kubectl and kube objects, I found cheatsheet provided by k8s docs that might be help you later Finally we reached the end of this article, but don’t worry we will continue this with implementing Kubernetes case in Airflow in Part II! Only registered users can post comments. Please, login or signup. Start blogging about your favorite technologies and get more readers Join other developers and claim your FAUN account now! Avatar Okza Pradhana Data Engineer, DANA @okzapradhana Data Engineering Enthusiast Stats 16 Influence 494 Total Hits 1 Posts Discussed tools
__label__pos
0.530387
WordPress как на ладони Недорогой хостинг для сайтов на WordPress: wordpress.jino.ru Получай пассивный доход от сайта с помощью браузерных PUSH уведомлений класс не описан WPSEO_Statistics{} Yoast 1.0 Class that generates interesting statistics about things. Хуков нет. Возвращает null. Ничего. Использование $WPSEO_Statistics = new WPSEO_Statistics(); // use class methods Методы 1. get_post_count( $rank ) Код WPSEO_Statistics{} Yoast 16.2 <?php class WPSEO_Statistics { /** * Returns the post count for a certain SEO rank. * * @todo Merge/DRY this with the logic virtually the same in WPSEO_Metabox::column_sort_orderby(). * * @param WPSEO_Rank $rank The SEO rank to get the post count for. * * @return int */ public function get_post_count( $rank ) { if ( WPSEO_Rank::NO_FOCUS === $rank->get_rank() ) { $posts = [ 'meta_query' => [ 'relation' => 'OR', [ 'key' => WPSEO_Meta::$meta_prefix . 'focuskw', 'value' => 'needs-a-value-anyway', 'compare' => 'NOT EXISTS', ], ], ]; } elseif ( WPSEO_Rank::NO_INDEX === $rank->get_rank() ) { $posts = [ 'meta_key' => WPSEO_Meta::$meta_prefix . 'meta-robots-noindex', 'meta_value' => '1', 'compare' => '=', ]; } else { $posts = [ 'meta_key' => WPSEO_Meta::$meta_prefix . 'linkdex', 'meta_value' => [ $rank->get_starting_score(), $rank->get_end_score() ], 'meta_compare' => 'BETWEEN', 'meta_type' => 'NUMERIC', ]; } $posts['fields'] = 'ids'; $posts['post_status'] = 'publish'; if ( current_user_can( 'edit_others_posts' ) === false ) { $posts['author'] = get_current_user_id(); } $posts = new WP_Query( $posts ); return (int) $posts->found_posts; } }
__label__pos
0.846731
ZMUohMy_MessageFactory.html 06dec09 CmdrZin 17dec09   References 1. Kevin Glass's sgs-tank (org.newdawn) code used with the Darkstar Java Game Server. Overview The Message Factory concept is a general purpose message formatting and handling system. It supports different types of messages identified by the first byte in the message. The rest of the message is formatted and parsed based on this message type ID byte. In this way, not every thing has to be String text based. The basic message transport is still the ByteBuffer object. The base class is Message and all messages extend this and add their own formatting and parsing methods. Messages are data structures passed between the Client and the Server. They are stored in a ByteBuffer object. This object maintains an index pointer that is incremented on each get() call. On receive, the MessageFactory pulls the first byte off to see what type of message it is. It then instantiates a new object of this message type and calls the decode() method for that object. It then returns the message object to the caller. This seems like a fairly clean way to handle messages. Unified message system Send and Receive use the same Message Class for a common code base. Send and Receive use the same extended Message Class objects for encoding and decoding message body data and inserting message type codes. Send Process 1. Instantiate a message object and pass its body data. (The message is not built up at this time. This allows a single common msg.encode call to be made at the time of sending rather than every time a message is instantiated.) 2. Pass a large enought ByteBuffer to buildMessage to encode the message into. (This is where the message is actually built up.) 3. Pass the trimmed down ByteBuffer to sendToUser() for output. Receive Process 1. Upon detecting a full data packet in the stream buffer, call dataArrived to pull the first byte out and use it to determine the message type. This is done by MessageFactory.getMessage(). The ByteBuffer object being passed around remembers when it is accessed and keep this status in its ponters. (The appropriate message object is instantiated and the message is converted from a byte array into the elements of the new message object.) 2. The new message object is then sent to a handler for game processing. Organization sun.com.sgs.darkmud.messages (package) Message.java  // Common abstract message class MessageFactory.java // Central distributor for incoming messages UniqueMessage1.java // message object using the abstract Message class  ... UniqueMessageN.java // message object using the abstract Message class Enhancement Ideas Messages to the server that are game related could be sent to a queue based on the LinkedList class. System related messages, such as LogOff, would be handled immediately. Encryption: Using byte buffers for messages allows easy implementation of encryption. Message body can be padded out to 16 or 64 byte boundaries with random numbers for block encryption. The header (type) could be expanded to two bytes and given error correction encoding. Sending a Text Message to the Client MudUser.sendToUser() is the primary way to send a message to the Client. It currently sends a text string through use of a ByteBuffer. This protocol will be changed to use a ByteBuffer as the input parameter. Sending messages will now be a two step process: 1. Build the Message object into a ByteBuffer with xxx.encode (xxx is a class that extends the Message class). 2. Send the ByteBuffer to sendToUser the way text is now sent. First step is to change over all of the text message generators to use this protocol and then change the Client to be compatable. TextMessage class Add TextMessage.java that extends the Message class to sun.com.sgs.darkmud.messages. This class just encodes and decodes a text string. Changes to MudUser.java Add a sendMsgToUser( Message message ) method that allocates a general purpose ByteBuffer to be used for sending a Message object. (see code) NOTE: Also added some test code to decode an encoded String to test the MessageFactory. Add another sendToUser() method that has a ByteBuffer as the input parameter. Changes to MudMain.java Changed loggedIn() to use a TextMessage object to send text to Client.     // We send the welcome message to the client //OLD     user.sendToUser(welcome);     TextMessage msg = new TextMessage(welcome); //NEW     user.sendMsgToUser(msg);                    //NEW DEBUG: Works with test code in sendMsgToUser(), so on to modifying Client. Changes to MudClient.java Changed receivedMessage() to use the test code from sendMsgToUser() to decode the message back into a String. Once this works, the support for other Message types can be added. ChangeOver step 1: Test the first byte of the ByteBuffer to see if its a TextMessage.ID. If is is, send it to the MessageFactory for decoding and handle as a Message object. If not, then pass on as a String as before. Added code (see code)     if (message[0] == TextMessage.ID) {         try {             m.rewind();             Message bmsg = MessageFactory.getMessage(buffer);             // Need to check the message type then cast to proper object.             TextMessage tmsg = (TextMessage) bmsg;             msg = tmsg.getMessage();         } catch (IOException e) {         }     } else {         msg = new String(message); This and the old type testing code will be replace with a new method called handleMessage(). Ok..text messages work. Back to the Server to change over all othe text outputs to Client. Changed FlightSpell.cancelSpell() //OLD  ((MudUser)getContainer()).sendToUser("Your blue glow fades.\n");     TextMessage msg = new TextMessage("Your blue glow fades.\n"); //NEW     ((MudUser)getContainer()).sendMsgToUser(msg);                 //NEW Changed Zombie.doSomething() //OLD   mu[0].sendToUser(sOut);     TextMessage msg = new TextMessage(sOut); //NEW     mu[0].sendMsgToUser(msg);                //NEW Changed MudUser.buildExperienceMessage() //OLD   sendToUser("@01"+experience);     TextMessage msg = new TextMessage("@01"+experience); //NEW     sendMsgToUser(msg);                                  //NEW Other modules changed in the same way. Ok...all other text output to Client is changed over. Now to remove the test code in the Client so that it only uses the new handleMessage method for the new protocol... Added method handleMessage() (see code). Might as well go all out and add InfoMessage and QuestMessage types rather than hack more of the recieveMessage() method. Add InfoMessage.java to handle old '@' messages. Leave as Strings for now. Need to figure out the best way to format the data. (see code) Add QuestMessage.jave to handle old '[' messages. Leave as Strings for now.  Need to figure out the best way to format the data. (see code) DEBUG: Client receiveMessage() uses handleMessage() and only Message objects..oops, forgot to add new message types to MessageFactory switch case list...doh..and not using in Server to encode messages..arrrgh..Lots of other sendToUser() calls in server. Mostly for command handling..so just cast these all into TextMesssage objects for now.. Changes to sendToUser Add         TextMessage msg = new TextMessage(text);         sendMsgToUser(msg); Delete the rest. DEBUG: rats..forgot to disable test code in sendMsgToUser..hmm..type 0 being sent..try default these to Strings..maybe need to rewind before testing..rewind before sending??..YES..ok..a little clean up..and forgot to add break for each case in handleMessage()..ok..everything seems to work again.. TODO: Still need to figure out the command output and how it gets to sendToUser(). Next steps are to do the same thing for Client to Server messages and to use the encoding features to format the data instead of string token coding and parsing. Going to do the encoding first.. 14dec09 ok..on with the encoding..The first area to use it is in the Info lists that are sent to the Client. These contain stats, skills, item, and quest data. These are name:value pair right now so encoding them should be a piece of cake. InfoMessage.java Need four lists of name:value pairs: Stats, Skills, Inventory, and Quests and four for the Strings. The InfoMessage class will do the name extraction from the database now and send the name strings in the message. Otherwise, the Client would have to have a copy of the database also. For now, the InfoMessage constructor will be sent all four lists. Later, a single list can be sent for updating with very little change to the code. This is where the Message Factory concept provides flexablility. (see code) MudClient.java More changes here. Need to change the way the display lists are loaded. Added method ClientInfo.updateInfo() to put list data into display lists. (see code) This is a lot cleaner than all the text parsing. DEBUG: Haven't done buildExperienceMessage() yet..so disable..hmm...maybe buffer too small??.. BUG: Wrong order of encoding of Info..didn't get key size..WOO HOO...all back to normal. Now the Client to Server needs to be done, but this is mostly just text right now so should be easy. 17dec09 Update the ASK command and the EP message. ExpMesssage.java This message is used to update the User's experience points text display. Its sent as an int instead of a String just to annoye packet sniffers. (see code). Add this message type to MessageFactory, MudUser, and MudClient.handleMessage. Test exp message..works like its suppose too...make an AskMessage that brings up NPC dialog box. AskMessage.java This message will bring up the general purpose NPC dialog box. For now, it displays the text that is static on the Client side. It can easily be expanded to output text sent from the server. hmm...stuck in commit..don't really need to have an AskTask. If used, need to get invoker..ok now Trying to use AskTask, but need invoker, but MudCommand is not seralizable..so kernel fails.. Going to do it without Task for now..tried by holding the ManagedReference for the user..works..so will use AskTask to send message..another example if nothing else. (see QuestNPC.java code) All working. Now to come up with a way to send user input back to this NPC without using the command protocol..maybe an ID for each generated QuestNPC or an association table..                                 TODO: Continue with exapmle. This is all old stuff. Example: Sending a text message // text comes from the user input text box as before. UserCommandMessage msg = new UserCommandMessage(text); sendMessage(msg); The first messgae type will be TEXT to replace the general text command messages and outputs that are used by the Client and Server.     Details: ID is a static byte defined in the Class for each message type. These must be different for each message type. This constructor is called to instantiate a Class which Extends a common Message Class The constructor calls the original (super) Message Class constructor to set the message type code. /**  * Create a new fire message  */ public FireMessage(float x, float y) { Call the constructor of the super class (Message) to set the message type.   super(ID); Save the parameters for constructing the message body later.   this.x = x;   this.y = y; } This is the constructor for Message. It is evoked by the super( ) call. The parameter code is the message class ID from the super( ) call. Message.class code /**   * Create a new message based on its unique identifier   */ public Message(byte code) {   this.code = code; } This code calls the message encoder to construct a data array for the message body the then sends the message. SendBuffer is a global 2k ByteBuffer object used as a holding buffer for output. /**  * Send a message to the channel.  */ private synchronized void sendMessage(Message msg) {   try {     // encode the message into a byte buffer so SGS     // client libs can send it     msg.encode(sendBuffer); This must be the SGS send support call. This would be replaced with the socket send method call.     channel.sendBroadcastData(sendBuffer, true);   } catch (IOException e) {     Log.log(e);   } catch (Throwable e) {     Log.log(e);   } } Build up the message body in a temporary buffer. Clear the holding buffer then put in the type and then add the message body. The buffer can now be sent to the channel for sending. Message.class code /**  * Encode this message into the byte buffer provided  */ public void encode(ByteBuffer buffer) throws IOException {   ByteArrayOutputStream bout = new ByteArrayOutputStream();   DataOutputStream dout = new DataOutputStream(bout);   // I like data output streams, they're easier to work with than buffers   encodeImpl(dout);   buffer.clear();   buffer.put(code);   buffer.put(bout.toByteArray()); } This was an abstract method of Message that is overwritten by the new Class. It writes the message body data into the stream provided. (See ChatLobby for string use examples.) /**  * @see org.newdawn.tank.messages.Message#encodeImpl(java.io.DataOutputStream)  */ public void encodeImpl(DataOutputStream dout) throws IOException {   dout.writeFloat(x);   dout.writeFloat(y); } Receiving a Message Example: receiving a message and process it Message message = MessageFactory.getMessage(data); handleMessage(message); Code from GameView.java puts the handler inside the try loop. This code from LobbyDialog.java is a bit cleaner with them separate. Details: Pass the buffer to the MessageFactory to try to generate the right type on message object. This code is assumed to be called by the SGS CLient. Must be from implementing ClientChannelListener interface. /**  * @see com.sun.gi.comm.users.client.ClientChannelListener#dataArrived(byte[], java.nio.ByteBuffer, boolean)  */ public void dataArrived(byte[] userID, ByteBuffer data, boolean reliable) {   try {     Message message = MessageFactory.getMessage(data);     handleMessage(message);   } catch (IOException e) {     Log.log(e);   } } The MessageFactory pulls the first byte off to determine the message type. It then instantiates a message object of that type or throws an error back. If the type is valid, it calls the message's decode method to format the data into elements associated to the specific message type and returns the message object for processing. /**  * A simple factory to take a ByteBuffer recieved from SGS and decode  * it into a well formed message object.  * @author Kevin Glass  */ public class MessageFactory {  /**   * Decode the byte buffer into a message object. A runtime exception   * is thrown to indicate an unrecognised message type.   *   * @param buffer The buffer to decode   * @return The message object decoded   * @throws IOException Indicates that data could not be read from the   * buffer.   */   public static Message getMessage(ByteBuffer buffer) throws IOException {     byte code = buffer.get();     Message message;     switch (code) {       case ChatMessage.ID:       {         message = new ChatMessage();         break;       }       case ArenaListMessage.ID:       {         message = new ArenaListMessage();         break;       }       case FireMessage.ID:       {         message = new FireMessage();         break;       }       default:       {         Log.log("Unrecognised message code: "+code);         throw new RuntimeException("Unrecognised message code: "+code);       }     }     message.decode(buffer);     return message;   } } Assuming a FireMessage was sent, then message.decode would call FireMessage.decode which is a Message.decode method that FireMessage inherited. /**  * Decode the message from a byte buffer  */ public void decode(ByteBuffer buffer) throws IOException {   byte[] array = new byte[buffer.remaining()];   buffer.get(array);   ByteArrayInputStream bin = new ByteArrayInputStream(array);   DataInputStream din = new DataInputStream(bin);   decodeImpl(din); } The decodeImpl method is abstract and supplied by FireMessage. The method is customized for the FireMessage class and extracts the data into the appropriate elements. /**  * @see org.newdawn.tank.messages.Message#decodeImpl(java.io.DataInputStream)  */ public void decodeImpl(DataInputStream din) throws IOException {   x = din.readFloat();   y = din.readFloat(); } The data has now been formatted into the FireMessage object and can be handled by processes unique to that type of message. Up to this point, NO game actions has been taken based on the contents of the message. It has only been unravelled back into its original form or data structure that was sent by the sender. The Handler can now decide what game action(s) should take place based on the message content. Since this code snippet handler came from the chat room code JobbyDialog.java, a FireMessage would just be ignored. The GameView.jave handler code shows how this message type would be processed. /**  * Handle a message recieved form the server  *  * @param message The message recieved from the server  */ public void handleMessage(Message message) {   switch (message.getID()) {     case ChatMessage.ID:     {       ChatMessage msg = (ChatMessage) message;       chat.append(msg.getMessage()+"\n");       int index = chat.getText().length();       chat.select(index,index);       break;     }     case ArenaListMessage.ID:     {       ArenaListMessage msg = (ArenaListMessage) message;       arenaList.clear();       for (int i=0;i<msg.getArenaCount();i++) {         arenaList.addElement(msg.getArenaName(i));      }      break;     }   } }  
__label__pos
0.653161
Public Key Infrastructure Explained | Everything you need to know Save to My DOJO Public Key Infrastructure Explained | Everything you need to know Here on the Hyper-V Dojo Blog I, of course, focus on Hyper-V, but it frequently intersects several other technologies that administrators need to work with which means those are also highly relevant for Hyper-V users. Of these, public key infrastructure (PKI) and its components – most importantly, certificates – remain poorly understood, and therefore poorly implemented. In Hyper-V, we employ PKI for Shielded VMs and replication. Beyond that, we have hordes of other uses for certificates; websites, code signing, secure e-mail, and Windows Admin Center, to name a few. With the complexity behind encryption technologies and the arcane nature of related tools, I find that many in IT tend to avoid the entire technology group as too complex. In reality, I could probably stand in front of a classroom and deliver all of the most important aspects of PKI in fifteen minutes. Since I haven’t got a classroom handy and I would like to distribute this information far and wide, I will use this article to explain the technology. I will refer back to it in future articles. Before we get started, here’s a quick table of contents if you want to skip to a certain section. 1. What is Public Key Infrastructure (PKI) 2. The Core Use of PKI and Certificates 3. A Very Brief Introduction to Digital Encryption 1. Encoding and Encryption Ciphers 2. Keyed Encryption Ciphers 3. Symmetric Encryption 4. Asymmetric Encryption 4. PKI Identification 5. The PKI Certificate Issuance Process 6. The PKI Certificate Validation Process 1. Why Not Contact the Certification Authority During Validation? 7. The PKI Certificate Revocation Process 8. PKI Identity Verification Visualization 9. Certificate Signing Operations 10. SSL Encrypted Communications 11. Offline Certification Authority 1. The Risks of an Online Root Certification Authority 2. Offline CA Creation Process 3. An Offline CA Without a CRL 12. What IS a Certification Authority? 1. Does a Certification Authority Require a CRL? 13. The Dangers of Self-Signed Certificates 14. Going Further with PKI 1. What is Public Key Infrastructure (PKI)? A public key infrastructure or PKI establishes a digital trust hierarchy in which a central authority securely verifies the identity of objects. We commonly use PKI to certify users and computers. It functions by maintaining, distributing, validating, and revoking SSL/TLS certificates built from the public key of public/private key pairs. There are many associated terms connected to public key infrastructure you’ll need to be familiar with so I’ll lay them out here. You don’t necessarily need to memorize these, or even understand all of them at this stage – you might even want to skim or even skip this section and just use it as a reference later. I have deliberately kept these descriptions simple to stay within the scope of the article. • SSL: “SSL” stands for “Secure Sockets Layer”. SSL was designed to secure digital communications traveling over insecure channels. TLS has supplanted SSL, but we still use the term SSL, mostly for familiarity reasons. It has cemented itself into the common vernacular, so there’s little use in fighting it. After this point, I will use “SSL” in its generic sense. • TLS: “TLS” stands for “Transport Layer Security”. This technology group serves the same fundamental purpose as SSL and depends upon the same basic components and concepts, but the technologies cannot be used interchangeably. • Cipher: an algorithm used for encoding or encryption. In SSL, the term often refers to a collection, or “suite” of ciphers, each with a different purpose in an SSL conversation. • Key: A digital key used with SSL/TLS is just a sequence of bits, usually expressed in hexadecimal characters. Ciphers use keys to encrypt and decrypt data. Keys used in standard PKI are expected to be a certain number of bits, a power of 2 starting at 1024 (ex: 2048, 4096, etc.). A longer key provides stronger defense against brute-force cracking when used in a cipher, but also requires more computing overhead. In PKI, keys come in pairs: • Private key: a key that is held by its owner and never shared with anyone else. The private key in a private/public pair is the only key that can be used to decrypt data that was encrypted by the public key. A private key that is accessed by anyone other than its owner is considered “compromised”. • Public key: a key that can be shared with anyone. The public key in a private/public pair is the only key that can be used to decrypt data that was encrypted by the private key. The “PK” in “PKI” comes from its usage of public keys. It also serves as a reminder: we use “public key” in PKI but never “private key”, because no private key should ever enter the infrastructure. • Certificate: A certificate is a digital file used for identity and authorization. You will often see these referred to as “SSL certificates”. However, SSL implies communications, whereas certificates have more purposes. The term has lodged itself in common jargon, so it too will continue despite its technical inaccuracy. When I remember, I say “PKI certificate” instead. Certificates contain many components. Some of these items: • Identifying information. There are several defined fields, and most certificates contain only a subset. Examples: • Common Name: The name of the object that the certificate identifies. Sometimes that is a fully qualified domain name, such as www.altaro.com. Sometimes, it is just a name, such as “Eric Siron”. • Locality: The city, or equivalent, of the entity represented by the certificate • Organization: The name of the organization that owns the certificate • A public key • Validity period • Encoding: Passing data through an algorithm to transform it for the purpose of facilitating a process or conforming to a standard. For instance, base-64 encoding can turn character string sequences from a human-readable form that might cause problems for simple string handlers (like URLs) into strings that computers can easily process but humans would struggle with. Text can be encoded in UTF8 (and others) so that it fits a common standard. “Decoding” is a convenience term that we use to mean “reversing encoding”, although it could be argued that there is no such thing. We simply perform a different encoding pass on the new data that generates output that matches the original data. The most important thing to understand: encoding does not provide any meaningful security. We only use encoding for convenience. • Encryption: Encryption is similar to encoding, but uses algorithms (usually called ciphers in this context) to obscure the data as opposed to adapting it for a functional purpose. “Decryption” reverses encryption. • Cracking: a term that traces its origins to the same concepts behind physical-world activities such as “cracking a safe”. It refers to the action of decrypting data without having access to the private key. I previously mentioned “brute-force” cracking, which means trying all possible keys one at a time until finding the correct one. I’ll leave further research on other techniques to you. • Certification Authority: Sometimes shortened to “certificate authority”. Often abbreviated to “CA”. An entity that signs and revokes certificates. • Self-signed certificate: A certificate in which the identity represented by the certificate also signed and issued the certificate. The term “self-signed” is often used erroneously to describe a PKI that an organization maintains internally. A certificate signed by any authority other than the certificate holder is not self-signed, even if that authority is not reachable on the public Internet or automatically trusted by computers and devices. • Root Certification Authority: The top-most entity of the PKI, and the only entity that expects others to blindly trust it. Uses a self-signed certificate. Can sign, issue, and revoke certificates. • Intermediate Certification Authority: Sometimes referred to as a “subordinate CA”. A  CA whose certificate was signed and issued by another CA. Generally identical in function to a root CA, although the root or a superior intermediate CA can place restraints on it. • Certificate chain: a single unit that contains all of the information needed to trace through all intermediate CAs back to and including the root CA. • Server certificate and client certificate: technically incorrect, yet commonly used terms. In typical usage, these terms mean “the certificate used by the server in a given SSL communication” and “the certificate used by the client in a given SSL communication”, respectively. However, you cannot correctly say, “this certificate file is a client certificate”. “Server” and “client” are arbitrary designations for a digital transmission and have no meaning whatsoever when you’re only referring to a single entity (the certificate holder). A certificate is a certificate. • Constraints, key usage, and enhanced key usage: actions that a CA has authorized the certificate holder to perform. For instance, consider a development application that uses a private key to sign a piece of code. If the CA has signed the matching certificate for code signing usage, then a computer that runs that code and trusts the CA will treat the code as properly signed. However, a private key can be used for any purpose — constraints only limit the actions the issuing certification authority will validate. That means that you still cannot correctly refer to a certificate with the Client Authentication key usage as a “client certificate”. • Certificate Revocation List (CRL): A list of certificates that the CA has marked invalid. If a certificate appears on this list, then no client should consider it reliable. The CA signs the CRL to make it tamper-proof so that it can be freely distributed and trusted. • Online Certificate Status Protocol responder (OCSP responder): CRL’s are just simple lists. A client must download the entire CRL and search through it to check on any given certificate. For very long CRLs and/or low-powered clients, that can take a lot of time. An OCSP responder keeps a copy of the revoked certificate list and can perform the search for any client that asks. 2. The Core Use of PKI and Certificates We use PKI and certificates for a multitude of purposes. Functionally, though, they all derive from two central needs: identification and encryption. I first learned about PKI through encryption, so I’ll start there. 3. A Very Brief Introduction to Digital Encryption I only intend to talk enough about encryption to explain the problems that PKI solves. A fulfilling lifetime career could be made from this subject. I recommend that you find experts if you want to know more. 3.1 Encoding and Encryption Ciphers At its simplest, a cipher is an algorithm. We apply the term “cipher” to an algorithm when it has uses in encoding or encryption. Let’s look at a trivial cipher: ROT13. It involves only the 26 characters used in the English alphabet. It encrypts character-by-character, replacing the character to be encrypted with the character 13 places forward, wrapping around at “A” if passing “Z”. “A” becomes “N”, “B” becomes “O”, etc. The ROT13 cipher exhibits two major problems: • You only need to know that ROT13 was used to encrypt in order to restore the original data. Even if you don’t know that, it usually does not require a great deal of effort to discover. In simpler terms, ROT13 does not require a key for decryption. • You cannot effectively use ROT13 on any message that contains characters outside the 26 characters in the English alphabet. It does not define a way to encrypt a “1” or a “ñ”, nor does it define any way to output those characters. Therefore, ROT13 depends on a specific, limited range of source material and cannot provide any guarantee of safety. Keyless ciphers find their primary value in blocking casual access to data. You might use a keyless cipher in a puzzle game in which you want the player to eventually figure out the message. In the larger world of encoding, keyless algorithms find a great many more uses, such as base-64 encoding and the various text encoding schemes. I briefly covered those in the terminology section above. For meaningful secrecy, we turn to keyed ciphers. 3.2 Keyed Encryption Ciphers A keyed cipher differs from the previously-discussed ciphers in that it depends upon at least one key. These ciphers come in two types: • Symmetric ciphers: The algorithm uses the same key to encrypt and decrypt. Furthermore, the encrypted data is usually the same size as its unencrypted source. • Asymmetric ciphers: The algorithm uses one key to encrypt and a different key to decrypt. Data encrypted asymmetrically tends to be larger than its unencrypted source 3.3 Symmetric Encryption Symmetric encryption is the easiest to understand. Imagine if you decided to use ROT13, but moved to the character immediately after. You could consider the 1 to be a key in an algorithm defined as “ROT13 + k“.  Just knowing the cipher is no longer enough; you also need to know (or figure out) the key. A real-world corollary to symmetric encryption would be a standard home safe. The physical locking mechanism correlates to a digital algorithm. It can be opened with a physical key or combination. Anyone in possession of an identical copy of the physical key or knowledge of the combination can open the safe. A practical discussion involves a “real” symmetric encryption cipher. Let’s choose 3DES (3-pass Data Encryption Standard). Jim wants to share data with Jane and only Jane. So, he chooses to encrypt it with 3DES. He then gives the encrypted data (usually called ciphertext) to Jane. In order for Jane to decrypt it, she must know the key. Once he gives it to her, it becomes a “shared key”. Symmetric Encryption The good parts: DES3 depends only on a key and an algorithm. Technically, it can work with any kind of data. Contrast that against ROT13, which can only work with the 26 characters that make up its algorithm. Because DES3’s algorithm does not depend on the data, the ciphertext is useless without the key. Knowing that DES3 created the ciphertext does nearly nothing for an attacker. The other nice thing: the ciphertext from most symmetric algorithms is the same size as the plaintext. However, we still have a problem, which I hope the diagram made clear. How can Jim securely deliver the key to Jane? Symmetric encryption works well for protecting information intended only for personal consumption. But, we want to communicate with others. So, we turn to asymmetric encryption. 3.4 Asymmetric Encryption Asymmetric encryption solves the secret key problem. The encryption cipher requires one key while the decryption cipher requires a different one. Asymmetric Encryption Asymmetric encryption has no perfectly analogous physical-world examples. You can see similarities in the mailbox of an apartment complex or other multi-tenant building. The mail carrier uses one key to load the boxes with incoming mail and the tenants use their individual keys to retrieve their mail. The analogy fails in that even though any given key, whether the carrier’s or a tenant’s, can only open one door, the contents of the mailbox (the data) can be equally accessed from either side. You can see a different analogy on your own front door. You can control the inside of a lock with a simple twist of the knob. You could call that a public key — anyone can turn it. However, the outside of a lock requires a specific key that you protect — you could call that a private key. This analogy fails to align with asymmetric encryption in that either “key” can freely lock and unlock the door. The important thing to keep in mind for asymmetric encryption: data encrypted by one key can only be decrypted using another. Even the key that was used to create the ciphertext cannot be used to return it to plaintext. This fact serves as the basis for PKI. Keys are created in pairs. The owner permanently holds on to one key (the private key) and freely distributes the other (the public key). When the key holder wants to securely distribute data, it uses the private key to encrypt it. When someone wants to send data that only the key holder can read, they can encrypt the data with that entity’s public key. This all looks really good right? The downside: asymmetric ciphertext cannot be the same size as the plaintext. It is usually much larger — often over double. That translates to increased storage, transmission, and decryption effort costs. 4. PKI Identification We have more to talk about in encryption, but first, we need to cover the greater purpose of public key infrastructure: identity. In our earlier example, Jim wants to share data with Jane. How can Jane be certain that Jim sent the data that she received? If they only use symmetric encryption, she only knows that the sender has the correct key. She also has that key, so she knows that it has been shared at least once. She has no certainty that Jim did not share it with a malicious third party. Similarly, Jim can’t be certain that Jane is the person that received his transmission, or that a person that intercepted the secret key does not also have a way to intercept the ciphertext. We use PKI certificates to solve the identity problem. Certificates represent the very core of PKI. A certificate’s primary purpose is to establish identity, verified by a central authority. One perfect real-world analogy is state-issued identification: State Identification At the very top of this “certificate”, we find the central authority. We find identifying information for the individual. We also have a validity period and a bar-coded sequence that we could think of like a serial number. The picture corresponds to a public key. When “James P. Keiai” needs to prove his identity, he presents this identification card. The person evaluating it goes through these elements: • Is the “certification authority” trusted? A government-issued ID might only be accepted within that government’s borders. In any case, the authority must be known and trusted in order for the certificate to have value. In PKI, we typically address that by pre-installing CA certificates. On Windows systems, you can see them in the Certificates snap-in under Trusted Root Certification Authorities and Intermediate Certification Authorities. • Is the “certificate” genuine? Government-issued IDs typically include some tamper-resistant elements. PKI certificates include a signature created by the certification authority that provides tamper-proofing • Was the “certificate” issued to the person presenting it? This “certificate” includes a photograph which can be compared to the person holding the “certificate”. A PKI certificate includes a public key. Only the matching private key can supply data that can be decrypted using that public key. • Is the certificate within its validity period? Given sufficient time, any tamper-proofing can be circumvented. An issuer might also lose trust in the entity. We use validity periods to address those problems. 5. The PKI Certificate Issuance Process To best understand PKI certificates, let’s start by looking at the issuance process. 1. An entity (computer, user, device, etc.) generates its own private and public key pair 2. The entity generates a certificate signing request (CSR) including the public key and identifying information to include on the certificate (common name, locality, subject alternate names, etc.) 3. The CSR is submitted to a certification authority 4. The authority generates a certificate including all of the above information and usage authorization (such as server authentication and code signing) and writes a record of the issuance into its database Some things to note: • The entity never discloses the private key, not even to the certification authority • The entity decides on the keys, not the CA. A CA can refuse to issue a certificate for a key of insufficient length, but it has no other say on the composition of the key • The CA decides which key usages it will apply to the final certificate The entity can now present that certificate to anyone that asks. By presenting a certificate, the entity makes a statement: “I am who I say am and the certification authority will vouch for me”. 6. The PKI Certificate Validation Process A simple process verifies a certificate. When a certification authority creates a certificate, it includes signing information. It creates the signature from its own private key, which means that only the authority’s public key can be used to verify it. With that, it effectively states that it vouches for the validity of the presented certificate. Because it has signed the certificate that includes the entity’s public key, the CA also vouches that any encrypted data that can be read or any signature that can be verified by the entity’s public key must have been created by the entity’s private key. An important thing to note: no one contacts the certification authority during this process to validate the certificate; its signature on the certificate has done that. The system, trying to verify the certificate, either trusts the issuer or it does not. We expect the system to keep a local list of certificate authorities that it trusts. We also expect that it will automatically accept certificates signed by them. If the issuer does not exist in the system’s local list, then we expect the system to prompt for an override or reject the certificate. These are conventions; nothing in the technology requires any of this. Even though the system won’t contact the certificate authority, it can look for revocation information. The issuer must have included revocation information in the certificate for that check to occur. Revocation information might be published in a location other than the certification authority. 6.1 Why Not Contact the Certification Authority During Validation? It might seem like an oversight — shouldn’t someone trying to validate a certificate have the ability to contact the issuer? That requirement would create two problems: • The certification authority would need to be online all the time — we’ll look at problems with that in a bit • The client seeking verification would need to be online. Even though that seems like a guaranteed condition today, it certainly wasn’t at the dawn of PKI and should never be taken as a given even in today’s connected world Requiring the CA to always be available represents a problem, but PKI already solved it. If the client has the CA’s certificate and the certificate that it signed, then it already has all the information it needs to know that the CA signed the certificate. The revocation process deals with bad certificates. 7. The PKI Certificate Revocation Process If a private key becomes compromised, anything that it ever signed or encrypted becomes suspect. The certification authority must be deliberately told to revoke the certificate; it has no automatic way to know of compromise. To revoke a certificate, the CA marks it as revoked in its own database. It can then issue or update a Certificate Revocation List (CRL). Alternatively, it can make revocation information available to an Online Certificate Status Protocol responder (OCSP). The location of a CRL and/or OCSP responder must be included in all certificates signed by the certification authority or they will never be checked. Important: any system can host CRLs and OCSP responders. The certification authority only needs to generate the revocation information. CRLs must carry the CA’s signature, so they need no particular security. Therefore, you can take the CA offline but keep the system(s) that host its CRLs and perform OCSP operations online. 8. PKI Identity Verification Visualization The following image shows the most salient components of the preceding explanations: PKI Identity Verification In order: 1. Entity generates a private/public key pair 2. Entity crafts a certificate signing request and submits it to the certification authority 3. The certification authority issues a certificate and records it in the database 4. Entity presents the certificate to the client 5. The client presumably has the signing certification authority’s certificate or can get it 6. Client checks that the certificate does not appear on the CRL 7. If 4, 5, and 6 all checks out, the client will accept the certificate That wraps up PKI identity. With identity established, we can continue our encryption discussion. 9. Certificate Signing Operations Remember how we wanted certainty that Jim was sending data and Jane was receiving it? We can do that, and add tamper-proofing, using signatures created by private keys. We have two general ways to accomplish our goals. Authenticated Sender, Anonymous Receiver In some cases, we only want to authenticate the sender, or signer, of an object. To continue our example, Jim wants to ensure that anyone who reads his e-mail knows that he wrote it and sent it. The process to enable that works like this (simplified): 1. Jim creates a message. 2. Jim passes the message through a “hashing” algorithm — a fancy word meaning that the algorithm crunched the data and produced a number. 3. Jim uses his private key to encrypt the hash. 4. Jim attaches the resulting cipher to the end of the message and transmits it to Jane. 5. Jane uses the same hashing algorithm on the message. 6. Jane uses Jim’s public key to decrypt the signature. 7. If the hash computed by Jane matches the hash in the decrypted plaintext, Jane has verified the condition and authenticity of the message; Jim’s private key signed the message and no one altered it. Most e-mail applications can do all of that automatically. You just tell Outlook to sign your e-mail and tell it which certificate to use. As long as your system has a matching private key, Outlook will handle the rest. You can’t just pick any old certificate from your store though; this all depends on ownership of the correct private key. This type of signing works for sending signed e-mail to anyone. You do not need to know anyone else’s public key; some recipients might not even have one. We use the same technique when we want to sign PowerShell scripts, driver packages, and software. Authenticated Sender, Authenticated Receiver In our example, we did want Jane to know that Jim sent her the message, but we also wanted Jim to know that only Jane received it. Signing got us halfway there. To get it all, we need encryption. If we only wanted the other half, Jim could encrypt the message with Jane’s public key. That would ensure that only Jane could open it. However, Jane cannot verify anything about the message except that the encryptor used her public key — by definition, anyone might have that. To fully authenticate both sender and receiver, Jim would sign the message with his private key and then encrypt the message with Jane’s public key. Only Jane can decrypt the message, but only Jim’s public key can verify the signature. I want to reiterate that I am describing this in simple terms. We rarely use asymmetric keys for general-purpose encryption. End-to-end authentication and encryption almost always involves the key pairs of both parties and a temporary symmetric key during the encryption phase. The actual process looks more like the steps in the SSL Encrypted Communications section. 10. SSL Encrypted Communications After all that, we can now absorb the gist of SSL communications rather easily. I don’t want to dig too far into the depths of encrypted communication because that extends beyond my goal of simplicity. I’m going to trim down to the minimal steps of typical communications in an HTTPS conversation, such as reading a web page: 1. A client contacts the server. 2. The client and server exchange information about the communications they intend to perform, such as the ciphers to use (SSL handshake). 3. The server transmits its certificate to the client. 4. The client checks that it trusts the certification authority that issued the certificate. If it does not recognize the CA and does not get an override, the communication ends. 5. The client checks for revocation information on the certificate. If the certificate is revoked or revocation information is unavailable, then the client might attempt to obtain an override. Implementations vary on how they deal with null or unreachable CRL information, but almost all will refuse to communicate with any entity using a revoked certificate. 6. The client generates a portion of a temporary key for symmetric encryption. 7. The client uses the server’s public key to encrypt the partial temporary key. 8. The client sends the encrypted partial key to the server. 9. The server decrypts the partial key using its own private key. 10. The server completes the secret key. 11. The client and server agree to use the secret key. All communications in the same conversation are encrypted with that key. It would be possible to use asymmetric encryption for the entire conversation. However, as we talked about earlier, asymmetric encryption results in ciphertext that greatly exceeds the size of the unencrypted source. To solve that problem without exposing a plaintext key, SSL only uses asymmetric encryption while the client and server establish identity and work together to create a symmetric shared key. From that point forward, they only use symmetric encryption. That keeps the size of transmitted data to a minimum. Even better, if an attacker manages to break any point of the transmission besides the initial negotiation, they will only gain a temporary key. All of that explains why we use suites of ciphers: we need multiple algorithms to make this work. [thrive_leads id=’17165′] 11. Offline Certification Authority If you’re going to build a PKI, it will have a root certification authority at its heart. Keeping that authority safe must be your primary concern. Administrators commonly take their root certification authority offline to protect it. 11.1 The Risks of an Online Root Certification Authority You face a dilemma: if you keep the certification authority online, that increases the odds of a compromised private key. If you take it offline, it can’t sign certificates. Using an offline root certificate in a multi-CA PKI resolves the dilemma with minimal side effects. PKI always carries some risk of private key compromise. However, there are two problems specific to the root authority: • If the root is compromised, every certificate that carries its signature is untrustworthy. That includes every certification authority in the PKI. Every single CA, and therefore every certificate they issued, becomes suspect. • The root certification authority uses a self-signed certificate. No CA database contains it and no one has the authority to revoke it. Even if the CA could revoke its own certificate, it would no longer be trusted to sign the CRL, thereby invalidating its own invalidation. In simpler terms, the root CA’s certificate cannot legitimately appear in any CRL, therefore it cannot be revoked. As previously mentioned, most PKI certificates say: “I am who I say I am and the certification authority will vouch for me.” The root certification authority says: “I am who I am because I say so.” The only certain thing protecting the root CA from compromise is its validity period. Therefore, stronger steps must be taken to safeguard the root CA’s private key. 11.2 Offline CA Creation Process I will publish an article containing a complete walk-through on building a standalone and subordinate certification authority set. The essential steps: 1. Create a key pair. 2. Create a self-signed certificate from that pair. 3. Create another key pair. 4. Use the private key from step 1 to issue a certificate for the new pair. Ensure the certificate is authorized to act as a certification authority and has information to reach a CRL for certificates that it signs. The key pair and certificate in steps 1 and 2 represent the root certification authority. The key pair and certificate in steps 3 and 4 represent an intermediate (subordinate) authority. The CRL information on the subordinate CA’s certificate (in step 4) points to a CRL created by the root CA. 1. Generate a CRL from the root CA. Publish it at the location specified in the intermediate CA’s information. 2. Take the root CA offline. 3. At regular intervals, bring the root CA “online” (not necessarily reachable) and update the CRL. 4. Perform standard certificate issuance and revocation operations with the intermediate CA. The CRLs for both authorities must be kept online and reachable at all times. ImportantThe CRL information on a certificate always refers to the CRL of that certificate’s issuer. That seems straightforward enough on endpoint certificates. However, it can get confusing for CA certificates. If it helps your memory, a certificate contains a reference to the CRL that might list it. Therefore, a well-formed root CA certificate will not contain CRL information because no CRL could ever contain a root certificate. Each intermediate CA’s certificate will contain CRL information for its parent CA, not for itself. 11.3 An Offline CA Without a CRL You can create an offline CA without a CRL. You simply issue the intermediate CA certificate(s) with no CRL distribution information. If the subordinate’s certificate contains no CRL information, then it will be trusted until it expires. However, doing so is only marginally more secure than just using a single online root CA. If the subordinate CA’s private key ever becomes compromised, then, even though the root CA has the power to revoke it, no one will know how to reach a CRL to find out. Removing the intermediate CA’s CRL would help to remove trust in the certificates that it issued, but not every process checks a CRL and most will ignore a missing CRL. I understand the temptation of creating an offline root CA without generating a CRL. You would not need to maintain anything. You could even delete the root CA’s files. However, you gain almost no security. 12. What IS a Certification Authority? I think this article presented a clear idea of the concept of a certification authority and the function of a certification authority. I don’t believe that it cleared up all the mystique of a certification authority, though. When you’re running the wizard in Windows Server to set up a CA, it throws up all kinds of warnings about the permanence of the computer name and domain membership. I believe that gives the impression that a CA is a dreadful, magical beast. But, a simpler explanation underlies all those warnings. When you make a domain member into a CA, the wizard builds a lot of scaffolding around it in the directory. Microsoft could have fashioned a bunch of brittle triggers and conditional checks and resolution steps on the system’s name and domain membership status. Or, they could just tell you never to change either and (correctly) blame any problems on a failure to comply. Whatever impression that leaves on the unwary, the CA truly has a very simple structure. A functional PKI certification authority must contain these things: 1. A public and private key pair. It uses the private key to sign things and the public key to prove that it signed things. 2. A certificate that it or a parent CA signed. 3. A list of issued certificates. That’s all. Implementations vary, of course. For instance, OpenSSL depends on a couple of files to tell it what the next numbered certificate and CRL will carry. However, all CAs need the three listed components. If you want to take a CA offline but only keep the most important parts, you can’t get by without those. Since #2 can be freely distributed to anyone, only #1 and #3 require security. Be aware that I don’t know how to fully regenerate a failed Windows CA using only those components. When you use the Windows wizard to build a CA, it allows you to re-use a previously existing private key. I don’t know of a way to make it use a previous certificate database (feel free to use the comment form if you do know). If you want to use Windows Server as your offline root CA, I recommend that you take the entire Windows Server installation offline and keep it safe. Personally, I use something else entirely… details in another post. 12.1 Does a Certification Authority Require a CRL? We talked about this only a few sections ago, but I focused on security concerns there. Structurally, a CA does not require a CRL. Even more technically, a CA does not include a CRL at all. A CA includes a list of certificates that it issued; its revocation process marks a certificate as revoked on that list. An external tool generates a CRL by making a sublist of only the revoked certificates on the master and having the CA sign the resulting list. Do go back and read the part about dramatically reduced security when a CA does not publish revocation information if you missed it. Please. It really does matter. 13. The Dangers of Self-Signed Certificates As more technology requires certificates (e.g., Windows Admin Center), I see more requests for information on creating self-signed certificates. STOP USING SELF-SIGNED CERTIFICATES. People say, “Oh, it’s OK, I only use them in my test lab.” How can you call it a “test” lab if you do things that you would never do in production? What exactly do you believe you are testing? Great, process “alpha” works with self-signed certificates, which the manufacturer probably already stated… so what does that prove? How can you apply that knowledge when you transition to production? A proper test environment duplicates a production environment to the greatest possible extent. Self-signed certificates represent an overt danger if used in production. Above, you saw how we have no choice but to use them for root CAs, so we go to great lengths to protect them. Think about those problems in the context of a self-signed endpoint: • No one can revoke a self-signed certificate. If you lost control of the private key, data encrypted by an attacker would forever be just as good as data encrypted by you. Data meant for you could always be signed by your public key and read by your attacker. • Why should anyone trust a self-signed certificate, even members of your own organization? Literally, anyone can create a certificate and stick your name on it. • You can’t lock up the private key. You’re actively using it. And, you imported its public key on at least one other machine which now blindly trusts it. That does not qualify as “secure” in any sense of the word. • A compromised root CA would be terrible, but tearing down the entire PKI would at least address the problem. You can use centralized tools to remove the compromised CA from Windows’ trust lists. If you manually imported a self-signed certificate, you will have to manually hunt it down. I understand that self-signed certificates seem easy. But, you can almost as easily learn PKI. You can quickly set up and configure PKI. Windows Server PKI offers auto-enroll and auto-renew, so a tiny bit of early pain saves all sort of ongoing effort. You will never regret adding “PKI” to your skills list. Do the right thing, not the expedient thing. 14. Going Further with PKI I know that the topic of public key infrastructure can seem daunting, but administrators can no longer afford to ignore it. I am appalled by the proliferation of self-signed certificates, especially when it takes such little effort to build a fully functional PKI. If you don’t know how to do that, watch this space for a forthcoming how-to. If you can’t wait that long, head on over to the Altaro Dojo Forums and start a discussion. I’m actively part of that community and answering questions on a daily basis. Altaro Hyper-V Backup Share this post Not a DOJO Member yet? Join thousands of other IT pros and receive a weekly roundup email with the latest content & updates! 47 thoughts on "Public Key Infrastructure Explained | Everything you need to know" • Tim Gilroy says: Article is a very handy tool for basic essentials of PKI. Only issue I have seen so far is ROT13 is incorrect ROT13 A = N… • Eric Siron says: It’s always the little things. Fixed. • FM says: I am student and i have présentation about PKI and i have problème of understanding if you can help and thank you i will tell the scenario of what i have understand i am client i contact thet Authority regisration she generate pair key public and private then the authority registration send to me the private key to me and she send also private and public key to authority certification the authority certification generate the certeficat and add the public key and use a hash function to get fogpreint of private key and added to the certeficat then te certeficat will be in lDAP for verification by authority verification the authority send the certificat also so me and which i will you the private key to verfie the certeficat that have been gived to me b hashing the private key and compare to the fogpreint of the certeficat if i’am please correct me i will be very gradfull and also how scenario will be case two person wante to communicate using pki and thank you • Eric Siron says: That’s almost entirely incorrect. Please study the flows. Most importantly, private keys cannot leave their home system. A private key that has been transmitted is considered compromised. • Andrey Klimkin says: You wrote: 3. Jim uses his private key to encrypt the hash. 6. Jane uses Jim’s public key to decrypt the signature. In fact, it’s exactly the opposite: 3. Jim uses Jane’s public key to encrypt the hash. 6. Jane uses her private key to decrypt the signature. Thanks • Eric Siron says: I need to reword the intro to that article section. I only intended to cover the digital signature process, not two-way verification. Actually, I’ll probably just rebuild the section to cover both. Thanks for pointing that out. • Brian says: Great article. Thanks for the info! • Candice dawn chaplin says: Hi thankyou so does this mean I’m ok for pki sinigtar kind regards candice chaplin • Wil Chak says: Great write-up!!! Really appreciate this. It really helped in demystifying the complexities of understanding PKI and certificate concepts. Only one question though, a little confused on the usage of the public and private key pair. Like Andrey Klimkin said in his post, I thought the public key is used for encrypting and the private key is used for decrypting. I’m sure it’s not as simple as that but can you please clarify? • Eric Siron says: Both keys are used for both encryption and decryption, depending on context. Decryption is the easy one to figure out; you only need to know which key encrypted the information. No other key, including the one used to encrypt, can decrypt. So the only question is which key to use to encrypt? Easy ones are the one-way encryption tasks: • If you want to send data for anyone to read and be certain that you encrypted it (like a public web page), then you encrypt with your private key because only your public key can decrypt. • If you want to send data for a specific person to read but not necessarily to verify it was you that sent it, then you encrypt with their public key because only their private key can decrypt. • If you want to encrypt data that only you can read, you encrypt with your own public key (just like someone who wanted to send something to you that only you could read). It’s when you need two-way verification that things can get complicated, but only because of the possible permutations. Ordinarily, you as the sender will want to encrypt the data with the recipient’s public key because that ensures that only they can open it. Then you sign the transmission, but not encrypt it, with your private key. Only your public key can verify the signature, which proves that you sent it. The data is already encrypted so it does not need to be re-encrypted, although you could if you wanted. You could encrypt with your private key and encrypt again with the sender’s public key. You would not encrypt with your private key and sign with the recipient’s key, though, because that does not really give you anything more than encrypting with your private key. • Jane Doe says: Thank you for a simple/simply great explanation! Got the feeling that in paragraph 11.2 “Offline CA Creation Process” the first item on the list meant to say: ‘Create a key pair.’ Leave a comment or ask a question Your email address will not be published. Required fields are marked * Your email address will not be published. Notify me of follow-up replies via email Yes, I would like to receive new blog posts by email What is the color of grass? Please note: If you’re not already a member on the Dojo Forums you will create a new account and receive an activation email.
__label__pos
0.820773
14.16. Maze 14.16.1. Overview Figure 17.359. An example of a rendered maze. An example of a rendered maze. Filter Maze applied This filter generates a random black and white maze pattern. The result completely overwrites the previous contents of the active layer. A typical example is shown below. Can you find the route from the center to the edge? 14.16.2. Activating the filter This filter is found in the image window menu under FiltersRenderPatternMaze…. 14.16.3. Options Figure 17.360. Maze filter options “Maze” filter options Presets, Input Type, Clipping, Blending Options, Preview, Split view [Note] Note These options are described in Section 2, “Common Features”. Width, Height These sliders control how many pathways the maze should have. The lower the values for width and height, the more paths you will get. The same happens if you increase the number of pieces in the Width and Height Pieces fields. The result won't really look like a maze unless the width and height are equal. Algorithm type You can choose between these two algorithms for maze: Depth first and Prim's algorithm. Only a computer scientist can tell the difference between them. Tileable If you want to use it in a pattern, you can make the maze tileable by checking this check-button. Random Seed You can specify a seed for the random number generator, or ask the program to generate one for you. Unless you need to later reproduce exactly the same maze, you might as well have the program do it. Foreground color, Background color You can choose colors for the maze and its background. Defaults are Toolbox colors.
__label__pos
0.950231
charging the Surface Discussion in 'Microsoft Surface Pro' started by Arizona Willie, Mar 7, 2013. 1. Arizona Willie Arizona Willie Active Member Joined: Feb 24, 2013 Messages: 685 Likes Received: 51 Trophy Points: 28 Location: Mesa, Arizona I have struggled with the teeny tiny little plug used to charge the Surface. My big fingers make it hard to hold onto that tiny little thing and get it in the right position. This morning I discovered that it works really well to hold onto the cord instead of the plug. I grab the cord about 3 inches behind the plug and hold the plug over the magic hole and WHAM the magnets pull it into place just fine. :)   1 person likes this. 2. TeknoBlast TeknoBlast Active Member Joined: Oct 16, 2012 Messages: 764 Likes Received: 134 Trophy Points: 43 Location: Irving, TX Yeah, I dont see how some people are having issues connecting the charger. From time to time, I have to set it jus right to get it to take, but not really a big deal to me. Everyone is different I guess.   3. Arizona Willie Arizona Willie Active Member Joined: Feb 24, 2013 Messages: 685 Likes Received: 51 Trophy Points: 28 Location: Mesa, Arizona Well, I can understand having trouble with the charger, simply because it is so small and the hole it goes in is also tiny and someone with big hands / fingers can have trouble holding the tiny plug. Not only because it is so small but when holding it in my fingers, my hand gets in the way of seeing the spot I'm trying to put it. But holding the cord a few inches behind the power head works just fine. But I do also turn it on it's side so the right side with the magic magnetic grabber ( technical term ) is easy to see.   4. jacewt jacewt New Member Joined: Feb 26, 2013 Messages: 24 Likes Received: 1 Trophy Points: 3 I found your solution to be the best so far. I hold the chord and the plug clicks right in. It only takes me one shot, and sometimes I can even do it without looking. The key is to holding the chord with the plug aligned and ready to just go, I find the plug is always kind of twisting away.   5. lostlogik lostlogik New Member Joined: Feb 22, 2013 Messages: 31 Likes Received: 2 Trophy Points: 0 Ditto, good tip and the way I have found to be the easiest.   6. HD_Dude HD_Dude New Member Joined: Mar 3, 2013 Messages: 129 Likes Received: 21 Trophy Points: 0 Yes, thanks for the excellent tip. It works much better than holding the connector itself. Good job!   Share This Page Search tags for this page charging the surface pro , microsoft surface pro not charging , surface 2 battery not keep charge , surface battery not charging , surface charger hard to plug , surface not charging , surface pro 2 battery not charging , surface pro 2 not charging , surface pro charging , surface pro not charging
__label__pos
0.72195
Python message: "Failed to import PythonQt.qSlicerxxxxModuleWidgets" Dear All, I’m Andrea Vitali from University of Bergamo (Italy) and our students faced a specific problem during the development of a C++ module based on the template “loadablecustommarkups”. In particular, we have problems with creating the loadable-custom-markups module in 3Dslicer version 4.13.0 for Windows 10 with Visual Studio 2019, Qt5 version 5.0.2 and Cmake version 3.15.4. We followed the instructions in the 3D-slicer documentation: • Setting up the source and debug-mode build folder respectively named "S4_13" and "S4_13D", • Creating an extension and adding a "custom loadable markup" module from “Extension Wizard”, • Compiling files using c-make by entering the path to source and debug-mode build folder, where it’s necessary the .git file in our module, • Opening the project in Visual Studio and compiling "All-build" project, Despite this, when we try to open the Slicer.exe file, we get errors with the message “Failed to import PythonQt.qSlicercustomMarkupModuleWidgets”. While if we create a standard loadable module with the same procedure, no error appears. I also found this post “Build error caused by qSlicerSubjectHierarchyPlotsPlugin” in which the same error was relative to the maximum lenght of the file name. Coudl it be the same issue? Thanks in advance, Regards, Andrea Vitali Yes, it could be the path. Be sure to use something very short. I use c:/s for debug and c:/sr for release builds in windows. Hi Steve, thank you for your replay. However, the error still remain. We have tried with the generation of a standard loadable module and everything works fine. I think that it should be better to move the discussion in the section “Development”. Regards, Hi, We are experiencing the same situation on Windows 10, with a library compiled with our extension. The compiled .lib lives in the solution directory, close to C:\ to avoid any path-length related issues. Yet, the python script to build the UI fails to find the library and halts the execution, leaving the UI partly built. 3DSlicer v4.13.0, VS 2019 v16, Qt 5.15.02 and CMake 3.22.1. The library is exported using the directive Q_SLICER_MODULE_${MODULE_NAME_UPPER}_WIDGETS_EXPORT and built using the flag WRAP_PYTHONQT (SlicerMacroBuildModuleWidget) @RafaelPalomar Hi Javier, I have also tried: 1. Generate the markups module library directly inside C: 2. Name the module with a single letter (“m”) The error still occur. I hope in the developers’ help. @AndreaVitali86, @jpdefrutos, the problem looks similar, but we cannot conclude it is the same problem yet. What we have seen with @jpdefrutos is that PythonD libraries generated are .lib files (I’m not a Windows user of Slicer, but I would expect a .dll to be generated in order to be loaded in run-time). @pieper, @lassoan, @jcfr does this make sense? This is sounding like a cmake library issue either not generating, building, or copying the python wrapper files correctly. It might also be that they aren’t being put in the path correctly for the launcher. Maybe a good debug strategy would be to look at the files generated for loadable module code that works as expected in python and then see the corresponding pattern of wrapped code and libraries for any differences with the custom markup code. Can you post a link to your source code? Do you only see the errors if you try to use the qSlicerexePModuleWidgets from a Python scripted module? Hi all, I conferm the same issue mentioned by @RafaelPalomar: we have a qSlicerCustomModuleWidgetsPythonQt.lib instead of the .dll file We have to check how the files are generated in Visual Studio as suggested by @pieper. I will do it the next week. About the questions of @lassoan: • Can you post a link to your source code? I can post the code but we did not develop custom code inside the software modules automatically generated by the custom markups widget template. Anyway, this is the github repo: https://github.com/AndreaVitali86/SlicerCustomMarkups • Do you only see the errors if you try to use the qSlicerexePModuleWidgets from a Python scripted module? The code is relative to a loadable module (C++) so, the builded files are imported during the initiliazation of 3D slicer: when the initialization phase of all the modules finishes the python console give us the reported error. In other words, we are not using a Python scripted module. Thank you very much for your efforts. I’ve submitted a pull request that fixes the module: The problem was that the code pieces that allow the template to be used as a test need to be removed. I could not find the root cause of why Python-wrapping of the widgets does not work correctly (and thus the Python module cannot be imported and the Failed to import PythonQt.qSlicerCustomMarkupsModuleWidgets error is logged). But this extension does not seem to have this issue, so by making this extension more similar to it should fix the issue: 1 Like
__label__pos
0.851784
How to call a function every N seconds in Python? Automation of python functions is very useful, like creating a backup every n seconds. In this python article, we will learn How to repeatedly execute a function every N seconds in Python. Table Of Contents Method 1: Using time module First method we can use to repeatedly execute a function every N seconds in Python is the sleep() function of time module. The time module comes pre-installed with python programming language and provides the functions for working with time, including retrieving the current time, converting between different time formats, and sleeping (pausing) the execution of a program. time.sleep() : This function allows you to pause the execution of your program for a specified number of seconds. This takes only one argument which is the number of seconds. See an Example code below: CODE : # importing time module. import time # main function which will be initialized after n seconds. def main() : # time.asctime() prints the current date & time in string. print('Program Executed @',time.asctime()) # initialized a counter. n = 1 # runs until n < 5,just to avoid the infinite loop. # this will execute the main() func in every 5 seconds. while n < 5: main() time.sleep(5) n = n+1 OUTPUT : Program Executed @ Wed Jan 4 01:27:37 2023 Program Executed @ Wed Jan 4 01:27:42 2023 Program Executed @ Wed Jan 4 01:27:47 2023 Program Executed @ Wed Jan 4 01:27:52 2023 .... .... So, In the above example, by using the sleep() function from time module, we executed a function after every N seconds. Here the main() function runs at every 5 seconds. Also a counter has been used with the variable n, which ensures that this does not go into an infinite loop. Method 2: Using sched module Another module we can use to repeatedly execute a function every N seconds in Python is sched module which comes pre-installed with Python programming language. It allows you to schedule a function to be called at a specific time, or to be called repeatedly at a specified interval. See an Example code below: CODE : # importing time and sched module import sched, time # main function which will be initialized after n seconds. def main(sc) : # time.asctime() prints the current date & time in string. print('Program Executed @',time.asctime()) # Create a scheduler instance sc = sched.scheduler() # initialized a counter. n = 1 # runs until n < 5,just to avoid the infinite loop. # this will execute the main() func in every 5 seconds. while n < 5: # Schedule the event to be executed in 5 seconds sc.enter(5, 1, main, (sc,)) # Run the scheduler sc.run() n = n+1 OUTPUT : Program Executed @ Wed Jan 4 12:24:04 2023 Program Executed @ Wed Jan 4 12:24:10 2023 Program Executed @ Wed Jan 4 12:24:15 2023 Program Executed @ Wed Jan 4 12:24:20 2023 In the above code, the sched module has been used to repeatedly execute a function every N seconds in Python. Here sc has been used as an argument in sc.enter() while scheduling the event. The sc is an instance of the scheduler class. The scheduler class is used to schedule the execution of events at certain times. The enter() method is used to schedule an event to be executed after a certain number of seconds. Method 3: Using threading module Threading module comes pre-installed with Python programming language, which can be used to repeatedly execute a function every N seconds in Python. Threading module in Python provides a simple interface for working with threads. Threads are used to run multiple pieces of code concurrently within a single program. Here we will be using threading module to create a new thread that executes the function, and then use the threading.Timer class to execute the function every N seconds. See an Example code below: CODE : # Importing threading and time module. import threading,time def main(): # time.asctime() prints the current date & time in string. print('Program Executed @',time.asctime()) def run(): # created a global variable. global counter counter += 1 # stop after 5 executions if counter < 5: # where 5 is the number of seconds to wait threading.Timer(5, run).start() # running the main function main() counter = 0 run() OUTPUT : Program Executed @ Wed Jan 4 13:33:53 2023 Program Executed @ Wed Jan 4 13:33:58 2023 Program Executed @ Wed Jan 4 13:34:03 2023 Program Executed @ Wed Jan 4 13:34:08 2023 Program Executed @ Wed Jan 4 13:34:13 2023 Summary In this Python article, we learned about modules like: time, sched and threading. Also we used methods from three different modules to execute a function every N seconds in Python. You can always use any of the methods/module you want but the most easy to understand and use is sleep() function of time module which just takes one argument which is N seconds after which you want to execute a certain block of code. Thanks. Leave a Comment Your email address will not be published. Required fields are marked * This site uses Akismet to reduce spam. Learn how your comment data is processed. Scroll to Top
__label__pos
0.927519
Do you want to pick up from where you left of? Take me there Pattern Matching Pattern matching ist ein mächtiger Teil Elixirs. Es erlaubt uns einfache Werte, Datenstrukturen und sogar Funktionen zu matchen. In dieser Lektion werden wir anfangen zu sehen, wie pattern matching benutzt wird. Table of Contents Match Operator Bereit für einen Hirnverdreher? In Elixir ist der =-Operator eigentlich ein match-Operator. Durch diesen können wir Werte zuweisen und matchen: iex> x = 1 1 Lass uns nun versuchen etwas einfaches zu matchen: iex> 1 = x 1 iex> 2 = x ** (MatchError) no match of right hand side value: 1 Lass uns das mit ein paar der collections probieren, die wir kennen: # Lists iex> list = [1, 2, 3] [1, 2, 3] iex> [1, 2, 3] = list [1, 2, 3] iex> [] = list ** (MatchError) no match of right hand side value: [1, 2, 3] iex> [1 | tail] = list [1, 2, 3] iex> tail [2, 3] iex> [2|_] = list ** (MatchError) no match of right hand side value: [1, 2, 3] # Tupel iex> {:ok, value} = {:ok, "Successful!"} {:ok, "Successful!"} iex> value "Successful!" iex> {:ok, value} = {:error} ** (MatchError) no match of right hand side value: {:error} Pin Operator Wir haben gerade gelernt, dass der match-Operator eine Zuweisung ausführt, wenn die linke Seite des matches eine Variable beinhaltet. In manchen Fällen ist das Verhalten neu zu definieren unerwünscht. Für diese Fälle gibt es den pin-Operator: ^. Wenn wir eine Variable pinnen matchen wir auf den vorhandenen Wert statt einen neuen zu definieren. Lass uns sehen, wie das funktioniert: iex> x = 1 1 iex> ^x = 2 ** (MatchError) no match of right hand side value: 2 iex> {x, ^x} = {2, 1} {2, 1} iex> x 2 Elixir 1.2 hat Unterstützung für pins in map keys und Funktionsklauseln eingeführt: iex> key = "hello" "hello" iex> %{^key => value} = %{"hello" => "world"} %{"hello" => "world"} iex> value "world" iex> %{^key => value} = %{:hello => "world"} ** (MatchError) no match of right hand side value: %{hello: "world"} Ein Beispiel von pinning einer Funktionsklausel: iex> greeting = "Hello" "Hello" iex> greet = fn ...> (^greeting, name) -> "Hi #{name}" ...> (greeting, name) -> "#{greeting}, #{name}" ...> end #Function<12.54118792/2 in :erl_eval.expr/5> iex> greet.("Hello", "Sean") "Hi Sean" iex> greet.("Mornin'", "Sean") "Mornin', Sean" Caught a mistake or want to contribute to the lesson? Edit this lesson on GitHub!
__label__pos
0.929481
IUT R&T - 1ère année - Informatique Programmation et algorithmique 1 TP n°3 A la fin du TP, envoyer sur l'ENT un compte-rendu contenant les solutions des exercices marqués d'une étoile. (compte-rendu au format texte, sur le modèle de modeleTP1.txt) EXERCICE 1* 1.1. Ecrire un algorithme précisé résolvant le problème suivant : Procédure Ligner Donnée : un caractère C de code ASCII supérieur à 32, un entier naturel p. Résultat : affichage de p fois le caractère C. Exemple. Ligner('x',5) affiche xxxxx. 1.2. Ecrire un algorithme précisé résolvant le problème suivant : Procédure Border Donnée : un caractère C de code ASCII supérieur à 32, un entier naturel q. Résultat : affichage de deux caractères C séparés par q espaces. 1.3. Ecrire un programme C utilisant ces deux procédures dans un programme affichant un carré de dimension n×n caractères. L'intérieur du carré est composé d'espaces et l'extérieur de caractères C, n et C étant donnés par l'utilisateur. Exemple. Carré de taille 5×5 composé de 'x' : xxxxx x   x x   x x   x xxxxx 1.4. Compléter ce programme pour afficher successivement différents carrés. Choisir ce qui arrêtera le programme. EXERCICE 2* Quand on demande à un programme en C de saisir un entier naturel, la fonction scanf utilise le formatage "%u" pour transformer en entier naturel la chaine de caractères saisie (composées de caractères compris entre '0' et '9'). Nous nous proposons ici de réaliser une fonction analogue pour le cas où la chaine saisie est une représentation binaire du nombre (composés uniquement de caractères '0' et '1'). Voici un algorithme précisé qui réalise cette transformation : Fonction ScanBinaire Entrée : une chaine S composée de n caractères '0' et '1'. Sortie : la valeur décimal D de l'entier naturel correspondant à S. Variable d'usage : un entier i. D←0; Pour i de 0 à n−1 faire D←2*D; Si S[i]='1' alors D←D+1; Retourner D. 2.1. Réaliser une trace de cet algorithme (c'est-à-dire faire le tourner) pour S="1011", en complétant le tableau suivant : initialement : D=0. i= ?  | i<n ? | modifications de D : ...   | ...   | ... 2.2. Traduire en C cette fonction et l'intégrer dans un programme qui donne la valeur décimale d'une chaine ch saisies. Le prototype de la fonction sera unsigned int ScanBinaire(char S[]); et la variable ch du programme principal sera déclarée par char ch[10]={0}; N.B. On pourra utiliser la fonction strlen de la bibliothèque string.h pour avoir la taille n d'une chaine. 2.3. Compléter ce programme afin qu'il affiche les valeurs de chaines successives saisies. Ce programme s'arrêtera quand une chaine de caractère non conforme sera saisie. 2.4. Tester ce programme. EXERCICE 3* On donne l'algorithme précisé suivant : Fonction Opère Entrée : deux entiers naturels n et p et un caractère c de l'ensemble {'+','−','*','/'}. Sortie : le résultat de l'opération correspondante : n+p, n−p, n*p, ou n/p (division entière). Selon c faire Cas '+' : Retourner n+p; Cas '−" : Si n>p alors Retourner n−p sinon Retourner 0; Cas '*' : Retourner n*p; Cas '/' : Retourner n div p. 3.1. Traduire en C cette fonction et l'intégrer dans un programme qui réalise une sorte de calculette en calculant les opérations demandées par l'utilisateur, jusqu'à rencontrer une impossibilité de calcul. Le prototype de la fonction sera unsigned int Opere(unsigned int n, char c, unsigned int p); La saisie se fera dans l'ordre n, puis c, puis p. L'affichage du résultat se fera sous la forme n+p=résultat 3.2. Tester ce programme sur des exemples bien choisi. Ce programme est-il robuste ? convivial ? ergonomique ? EXERCICE 4 (pour aller plus loin) Reprendre l'exercice 3 en : – utilisant les possibilités de décalage de bit du langage C pour éviter des opérations de multiplication, et – saisissant les caractères entrés un par un avec la fonction getchar. N.B. La touche ENTREE donne le caractère '\n'.
__label__pos
0.617197
6 $\begingroup$ Background: Generalizing the notion of upper half plane to compact Riemann surfaces: Suppose $p(x,y) \in \mathbb{R}[x,y]$ is a polynomial in 2 variables with real coefficients, defining a smooth complex plane algebraic curve $C_0 = \{(x,y) \in \mathbb{C}^2:p(x,y)=0\}$. Let $C$ be the projective closure of $C_0$ in $P^2\mathbb{C}$, and assume that $C$ is also smooth. Since $C$ is defined over the real numbers, it comes equipped with an involution $\sigma:C\rightarrow C$, $\sigma(x,y) = (\overline{x},\overline{y})$. Denote by $X$ the compact Riemann surface associated to $C$, and let $X_\mathbb{R}$ be the set of fixed points of $\sigma$. If the space $X - X_\mathbb{R}$ has exactly two connected components, then $X$ is called a real compact Riemann surface of dividing type, and the two connected components are denoted by $X_+$ and $X_{-}$ (the decision between "the positive half plane" and "the negative half plane" arbitrarily). And finally, to the question: I am given a real compact Riemann surface of dividing type $X$, and interested in interpolation problems of meromorphic functions with conditions such as "all the poles of $f$ lie in the upper half plane". Does anybody knows of any previous work in the area? Any known techniques to relate these topological and algebraic constructions? $\endgroup$ 3 • 1 $\begingroup$ Liran, your question seems to be a bit vague... Could you add a bit more details? $\endgroup$ – Dmitri Panov Aug 1 '10 at 21:19 • $\begingroup$ For example, I would like to know when for a given set of points \{a_1,\dots,a_n\}, there is a meromorphic function whose zeros are exactly \{a_1,\dots,a_n\}, and all of whose poles belong to $X_{+}$. Does this clarify? $\endgroup$ – the L Aug 1 '10 at 21:22 • $\begingroup$ Great, this is much more concreet! $\endgroup$ – Dmitri Panov Aug 1 '10 at 21:48 2 $\begingroup$ Let me give a version of the question in the comment: Let $X$ be a curve of genus $g$ with a real separting involution, and conisder the map $Sym^n(X_+)\to Jac^n(X)$. For wich $n$ is this map surjective? Or, in other words, what is the minimal number of poles of a meromorphic function with poles in $X_+$ that garanties that zeros can happen at any collection of points? This sounlds like a very nice question. In the case $g=1$ you can always take $n=2$. Also for any $g$ you sould take $n>g$ because $Sym^g(X)$ maps to $Jac^g(X)$ with degree $1$. Added. The notation $Sym^n(X)$ means the symmetric power of $X$. Let me explain also why what is above is a reformulation of the original question. Indeed, a divisor $\sum_i x_i-\sum_i y_i$ on $X$ is a divisor of a meromorfic function iff it represent zero in $Jac^0(X)$. So if we want to chose arbitraly zeros $x_i$ of a meromorphic function $f$ keeping the poles $y_i$ in $X_+$ it is enouth to know that $\sum_i y_i$ can take any value in $Jac^n(X)$ (to cancel the point $\sum_i x_i$). This is eactly the condition that $Sym^n(X_+)\to Jac^n(X)$ is surjective. $\endgroup$ 1 • $\begingroup$ I am sorry, but could you explain your notation "Sym"? and why is this formulation equivalent? thanks! $\endgroup$ – the L Aug 1 '10 at 22:26 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.982904
Instantly share code, notes, and snippets. Embed What would you like to do? use std::collections::HashMap; use std::collections::HashSet; use std::cmp::Ordering; #[derive(Debug, Clone)] pub struct Transaction { pub txid: String, pub fee: u64, // in satoshis pub size: u64, // bytes pub fee_rate: f64, // for debugging pub ancestor_size: u64, // in bytes pub ancestor_fees: u64, pub depends: Vec<String>, // references by txid } #[derive(Debug, Clone)] pub struct TransactionCluster<'a> { pub transactions: Vec<&'a Transaction> } impl<'a> TransactionCluster<'a> { pub fn new(capacity: usize) -> TransactionCluster<'a> { TransactionCluster { transactions: Vec::with_capacity(capacity) } } pub fn size(&self) -> u64 { let mut sum = 0; for transaction in &self.transactions { sum += transaction.size; } sum } pub fn fee(&self) -> u64 { let mut sum = 0; for transaction in &self.transactions { sum += transaction.fee; } sum } pub fn fee_rate(&self) -> f64 { (self.fee() as f64) / (self.size() as f64) } pub fn txids_string(&self) -> String { let mut res = "[".to_string(); for transaction in &self.transactions { res.push('"'); res.push_str(transaction.txid.as_str()); res.push_str("\","); } res.push(']'); res } } // pool is a txid -> transaction pub fn process(pool: &HashMap<String, Transaction>) -> Vec<TransactionCluster> { let mut transactions: Vec<&Transaction> = pool.values().collect(); expand_transactions(&mut transactions, pool) } // Goes through `transactions` and if they have any dependencies, it expands them // unless it has already seen that dependency. // Requires a mutable reference to transactions, because it in-place sorts it fn expand_transactions<'a>( transactions: &mut Vec<&'a Transaction>, pool: &'a HashMap<String, Transaction> ) -> Vec<TransactionCluster<'a>> { transactions.sort_by(|a, b| sort_by_ancestoral_fee_rate(*a, *b)); let mut expanded = vec!(); let mut picked: HashSet<*const Transaction> = HashSet::new(); for transaction in transactions { let mut ancestors = expand_transaction(transaction, pool); ancestors.transactions.retain(|&ancestor| picked.insert(ancestor)); if !ancestors.transactions.is_empty() { expanded.push(ancestors);; // no point adding an empty cluster } } // now we need to resort expanded.sort_by(|a, b| sort_by_fee_rate(a, b)); expanded } // ancestor includes itself, this flattens! fn expand_transaction<'a>( transaction: &'a Transaction, pool: &'a HashMap<String, Transaction> ) -> TransactionCluster<'a> { // must includes the current transaction! let mut deps = Vec::with_capacity(transaction.depends.len()); // Push transactions one layer deep for txid in &transaction.depends { let dep = pool.get(txid); match dep { None => println!("Could not find {} in pool", txid), Some(t) => { deps.push(t); } } } let expanded = expand_transactions(&mut deps, pool); // Now we're going to flatten this, to make a result... let mut result = TransactionCluster::new(deps.len() + 1); for cluster in expanded { for transaction in cluster.transactions { result.transactions.push(transaction) } } result.transactions.push(transaction); result } fn sort_by_ancestoral_fee_rate(a: &Transaction, b: &Transaction) -> Ordering { let fee_rate_a = a.ancestor_fees / a.ancestor_size; let fee_rate_b = b.ancestor_fees / b.ancestor_size; // Note we are comparing b against a, to get the exact reverse (high fee shit should be first) fee_rate_b.partial_cmp(&fee_rate_a).unwrap_or(Ordering::Equal) } fn sort_by_fee_rate(a: &TransactionCluster, b: &TransactionCluster) -> Ordering { // Note we are comparing b against a, to get the exact reverse (high fee shit should be first) b.fee_rate().partial_cmp(&a.fee_rate()).unwrap_or(Ordering::Equal) } @karel-3d This comment has been minimized. Copy link karel-3d commented Oct 4, 2017 Hey. I am author of https://estimatesmartfee.com/, I found your website by random :) on my website I just have the fees, but I like the graph. But I never wrote anything in rust. Any pointer what is the input to this and what is the output and how to run it? :D Or I can try to run it myself, but well, I will appreciate any help before starting experimenting @karel-3d This comment has been minimized. Copy link karel-3d commented Oct 4, 2017 also, your website shows much higher fees than what estimate smart fee does; do you keep updating it at all? :D @karel-3d This comment has been minimized. Copy link karel-3d commented Oct 4, 2017 sorry, no, I looked wrong, it still returns the correct fee, I clicked wrong I guess :) good job Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
__label__pos
0.959535
Categories RHCE 7 Study Guide STDIN, STDOUT, STDERR and File Redirectors Explained This tutorial explains I/O (Input/Output) redirection with examples. Learn what the STDIN, STDOUT and STDERR are and how they work in Linux along with common file redirectors such as >, >>, 2>&1, <, /dev/null, /dev/tty1 and /dev/sda in detail. This tutorial is the last part of the article \”Linux file system and shell explained with file redirectors\”. It explains following RHCSA/RHCE objective. Use input-output redirection (>, >>, |, 2>, etc.) Other parts of this tutorial are following. Linux file system and kernel version explained This tutorial is first part of the article. It explains Linux file system structure (Linux directory structure) in detail along with naming convention used in kernel name. Linux shell and command types explained This tutorial is the second part of the article. It explains shell and its function in detail along with Linux command types such as internal commands, external commands, non-privilege commands and privilege commands. Input / Output (I/O) Redirection STDIN, STDOUT and STDERR represent the devices from which shell takes input command and displays result and error respectively. Let’s take a simple example. STDIN & STDOUT: – User types a command from keyboard (STDIN) at shell prompt and hit the Enter key. If command exists, shell executes that command and displays the result on monitor (STDOUT). stdin and stdout explained STDIN & STDERR: – User types a command from keyboard (STDIN) at shell prompt and hit Enter key. If command does not exist, shell displays the error on monitor (STDERR). stdin and stderr explained When we access a file, Linux creates an entry point in Kernel which is used to uniquely identify that file in running session. This identifier is known as file descriptor. File descriptor is a non-negative integer value. First three file descriptors 0, 1 and 2 are reserved for STDIN, STDOUT and STDERR respectively. File descriptors not only allow shell to accept input commands from any source but also allow shell to send the output or error of these commands on any destination. Unless we manually specify the device for STDIN, SDTOUT and STDERR, shell uses the default devices. File Descriptor Name Data Flow direction Default Device 0 STDIN (Standard Input) < Keyboard 1 STDOUT (Standard Output) > Monitor 2 STDERR (Standard Error) > Monitor If non-standard source (STDIN) or destination (STDOUT/STDERR) is used, we have to specify that manually. Let’s take an example. A script automatically executes, when a user is logged in, and sends its output to the log server. In this case, since shell is receiving commands from the script instead of a standard input device, it must have to know where it should send the output generated from those commands. stdin stdout and stderr explained For input redirection < symbol is used while > symbol is used for output redirection. When we use input redirection symbol (<), Linux replaces it with file descriptor and read the script and retrieves the commands as if they were typed from the keyboard. I/O (Input/output) redirection practical examples Access shell and run following commands $ls $ls > test $cat test $echo \"this text will overwrite existing content\" $echo \"this text will overwrite existing content\" > test $echo \"this text will append existing content\" >> test $cat < test Let’s understand each command in deail. First command lists the content of current directory. Second command also lists the content of current directory. But instead of displaying output on monitor (default standard output device), it sends that output in a file named test. Third command displays the content of specified file on monitor. Fourth command prints the specified string on monitor. Fifth command also prints the specified string. But instead of printing it on standard output device (monitor), it prints that in a file named test. Since the file test already contains data, its content will be overwritten. We can verify this by running third command again. Sixth command also prints the specified string in a file named test. But instead of overwriting, this command appends the file. Seventh command takes its input from file instead of standard input device (keyboard). Following figure illustrates above practice with output. file redirector practical example Common I/O file redirectors Redirector Description > Store output in specified file. If file exist, content will be overwritten. If file does not exist, a new file will be created with specified name and output will be stored. >> Store output in specified file. If file exist, content will be appended. If file does not exist, a new file will be created with specified name and output will be stored. 2>&1 Send error messages and command output on same destination. < Read command from file instead of keyboard. /dev/null Send output to null. (Discard the output.) /dev/tty1 Send output to terminal number one. (Require root permission) /dev/sda Send output to first hard disk (sda). (Require root permission) Pipes in Linux Pipes make I/O redirection more flexible. It allows us to redirect the output of one command into other command as input. how pipe works in linux at command prompt Shell allows us to combine multiple commands using pipes. To combine commands use pipe (|) sign between them. Let’s take an example. The cat command prints the contents of specified file on monitor. The wc command calculates the number of line, word and characters from the specified file and prints the result on monitor. Both commands need source file to work. Access shell and run following command. $cat test | wc This command redirects the output of first command (cat) in second command (wc) as input. pipe example When using pipes, only the output of last command will be displayed on standard output device (monitor). That’s all for this tutorial. In next tutorial we will learn another RHCSA/RHCE tutorial in detail with examples. If you like this tutorial, please don’t forget to share it with friends. Full Version EX300 Dumps Try EX300 Dumps Demo
__label__pos
0.69321
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. When I concatenate the following two unicode characters I see both but there is a space between them. Is there anyway to get rid of this space? StringBuilder sb = new StringBuilder(); int characterCode; characterCode = Convert.ToInt32("2758", 16); sb.Append((char)characterCode); characterCode = Convert.ToInt32("25c4", 16); sb.Append((char)characterCode); share|improve this question add comment 4 Answers up vote 4 down vote accepted If you examine sb, you will see that it has Length of 2. There is no space between the characters. I think the issue is that you wish the "on" pixels of the 2 characters were closer to each other so the 2 "characters" look more "next to" each other, no? Edit: Like you said, you can see if those 2 characters look any "closer" to each other in a different font. share|improve this answer   You are correct, yes I would like them to appear closer together. But I guess it is just a font issue. –  OutOFTouch Mar 30 '10 at 21:08 add comment Would not var str = "\x2758\x25c4" work? share|improve this answer   While this is the "right" way, it doesn't address the OP's original issue that the two specific characters appear too far apart. –  JeffH Mar 30 '10 at 21:01 add comment There's no space, it's an artifact of your display font. share|improve this answer add comment Character U+2758 looks very wide in MS Gothic, but it's narrow in Arial Unicode MS. Try changing your font. share|improve this answer add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.516611
阅读 130 关于 防抖 let debounce = (fn,wait)=>{ let timer = 0; return (...args)=>{ if(timer)clearTimeout(timer); timer = setTimeout(()=>{ fn.apply(this,args) },wait) } } 复制代码 节流 const throttle = (fn, wait) => { let finished = true; return (...args) => { if(!finished)return; finished = false; setTimeout(() => { fn.apply(this, args); finished = true; }, wait); }; }; 复制代码 深拷贝 // 思路:拷贝的数据类型分为:基本数据类型、 const deepClone = (target, map = new Map()) => { // 首先处理基本数据类型 if((typeof target !== 'object' || target !== 'function') || target !== null) return target; // 判断具体的引用数据类型 let type = Object.prototype.toString.call(target); let cloneTarget; } 复制代码 ES6新增的东西 一些细微的拓展 letconstvar 存在变量提升和会绑定到全局window对象 解构赋值 :数组结构和对象结构 箭头函数 特点:形参只有一个可以省略小括号、函数体只有一句话,可以省略大括号,写在一行、不用写return、没有 argumentsthis为外部作用域的this、不能作为构造函数 // 扩展运算符 复制代码 对象拓展 keys(obj):获取对象的所有key形成的数组 values(obj):获取对象的所有value形成的数组 entries(obj):获取对象的所有key和value形成的二维数组。格式:[[k1,v1],[k2,v2],...] 复制代码 数组的拓展 find() 返回找出第一个符合条件的数组成员,否则返回undefined。 [1, 4, -5, 10].find((n) => n < 0) -5 findIndex() 返回找到第一个符合条件的数组成员索引值,不存在则返回-1。 [1, 5, 10, 15].findIndex((value)=> value > 9) 2 Array.from() 把伪数组转成数组 let arr = Array.from({0: 'a', 1: 'b', 2: 'c', length: 3});// ['a', 'b', 'c'] includes()方法 判断数组是否包含某个值 [1, 4, 3, 9].includes(4, 2);// false, 从2的位置开始查,所以没有找到4 数组实例的 flat(),flatMap() [1, [2, [3]]].flat(Infinity) // [1, 2, 3] flatMap()方法对原数组的每个成员执行一个函数,然后对返回值组成的数组执行flat()方法。 // 相当于 [[2, 4], [3, 6], [4, 8]].flat() [2, 3, 4].flatMap((x) => [x, x * 2]) // [2, 4, 3, 6, 4, 8] 复制代码 字符串拓展 字符串的includes(), startsWith(), endsWith(),repeat() includes(str, [position]) 返回布尔值,表示是否找到了参数字符串 startsWidth(str, [position]) 返回布尔值,表示参数字符串是否在原字符串的头部或指定位置 endsWith(str, [position]) 返回布尔值,表示参数字符串是否在原字符串的尾部或指定位置。 console.log('hello world'.includes('e', 2)); // false 从位置2开始查找e,没有找到 console.log('hello world'.includes('e')); // true console.log('hello world'.startsWith('h')); // 未指定位置,看开头是否是h,返回true console.log('hello world'.startsWith('l', 2)); // 指定位置的字符是l,返回true console.log('hello world'.endsWith('d')); // 未指定位置,结尾是d,返回true console.log('hello world'.endsWith('r', 9)); // 指定位置的字符是r,返回true let html = '<li>itheima</li>'; html = html.repeat(10); 复制代码 闭包实现模块化 实现链式操作 函数想要实现链式调用就必须返回一个实例,比如说数组map之后可以继续调用... //创建一个类 class Person{ setName(){ this.name = name; return this; } setAge(){ this.age = age; return this; } }; //实例化 var person= new Person(); person.setName("Mary").setAge(20); 链式操作的缺点:比如说我原型上的某个方法想返回某个值,就不能再链式调用下去,或者只能放到最后去调用。 链式操作的有点:可以使得异步编程的流程更加清晰,不会像回调函数一样相互耦合, jQuery和ES6中的Promise也正是沿用了这一思想,Promise每一个异步任务返回一个Promise对象, 通过then方法指定回调函数。 复制代码 属性的.和[] this.xxx 和 this['xxx' + a] 是一样的 一开始JS引擎只有this['xxx'],现在写this.xxx其实JS引擎也会内部转成this['xxx'] 复制代码 常见的运算符 算术运算符 加减乘除取余、自增、自减 复制代码 赋值运算符 =、+=、-=、*=、/=、%=、按位与赋值(&=)、按位或赋值(|=)、按位非赋值(~=)、 复制代码 位运算符 按位与(&)、按位或(|)、按位非(~)、按位异或(^)、<<左移、>>右移、>>>有符号右移 二进制:512 256 64 32 | 16 8 4 2 1(左右手) 负数:将正数10互换再加上1 -9 =>1001=>0110+1=> 1111 1111 1111 1111 1111 1111 1111 0111 按位非(NOT) ~25 => -(25)-1 => -26 ~~25 => -(25)-1 => -26 => -(-26)-1 => 25 按位与(AND) 25 & 3 => 11001 & 00011 => 找同时存在1的位数,才取1 => 00001 => 1 按位或(OR) 25 | 3 => 11001 & 00011 => 找存在1的位数,就取1 => 11011 => 27 按位异或(XOR) 25 ^ 3 => 11001 & 00011 => 找存在1的位数,就取1,同时存在,则不取 => 11010 => 26 var a=10,b=9; a ^= b, b ^= a, a ^= b; //a=9,b=10 <<左移 -2<<5 => 2的二进制为10,全部向做移5位 => 0000010-> 1000000 => -64 >>右移 该操作符会将第一个操作数向右移动指定的位数。向右被移出的位被丢弃 -9 >> 2 => 1001 -> 10 => -2 >>>有符号右移 正数>>>和>>结果一样,负数>>>数值会很大 取整: ~~3.9 => 3 3.9 | 0 => 3 3.9 ^ 0 => 3 3.9 << 0 => 3 复制代码 逗号运算符(,) 能够先执行运算符左侧的操作数,然后再执行右侧的操作数,最后返回右侧操作数的值。 a = (b = 1,c = 2); //连续执行和赋值 console.log(a); //返回2 console.log(b); //返回1 console.log(c); //返回2 复制代码 需要注意的关键字 in 除了配合for遍历对象,还可以判断某个属性是否属于某个对象(包括原型链上的原型) 如: "make" in {make: "Honda", model: "Accord", year: 1998} // returns true 0 in ["redwood", "bay", "cedar", "oak", "maple"] // returns true "length" in trees // returns true (length is an Array property) 复制代码 instanceof 一个对象能否通过原型链找到另一个对象的原型 Class 与 Style 如何动态绑定? Class 对象语法: <div v-bind:class="{ active: isActive, 'text-danger': hasError }"></div> data: { isActive: true, hasError: false } 复制代码 数组语法: <div v-bind:class="[isActive ? activeClass : '', errorClass]"></div> data: { activeClass: 'active', errorClass: 'text-danger' } 复制代码 style 对象语法: <div v-bind:style="{ color: activeColor, fontSize: fontSize + 'px' }"></div> data: { activeColor: 'red', fontSize: 30 } 数组语法: <div v-bind:style="[styleColor, styleSize]"></div> data: { styleColor: { color: 'red' }, styleSize:{ fontSize:'23px' } } 复制代码 JavaScript 语句标识 for - 按条件循环代码块一定的次数 let i = 0; for(;i<10;){ i++ } 复制代码 while - 按条件循环代码块一定的次数 let j = 0; while(j<10){ i++ } 复制代码 forin - 用于遍历对象的属性 复制代码 dowhile - 先执行一个语句块,再按条件循环代码块一定的次数 var text = "" var i = 0; do { i++; } while (i < 5); 复制代码 打印100以内的质数 //只能被1和自己整除的数 let arr = []; for(let i = 2,count = 0; i<100; i++){ //首先1肯定不是,100也不是 for(let j = 1; j<=i; j++)if(i%j==0)count++;//拿到一个数和之前的数取余,如果取余为0,次数count累加 if(count === 2)arr.push(i);//如果取余次数只有两次(只能被1和自己整除的数),则为质数 count = 0;//重置次数 } 复制代码 斐波那契数列 function Fibonacci(n){ let arr = [] let n1 = 1, n2 = 1; arr.push(n1,n2) for (let i = 2; i < n; i++){ [n1, n2] = [n2, n1 + n2]; arr.push(n2) } return arr; } Fibonacci(10) function Fibonacci2 (n , ac1 = 1 , ac2 = 1) { if( n <= 1 ) {return ac2}; return Fibonacci2 (n - 1, ac2, ac1 + ac2); } 复制代码 caller和arguments.callee和fn.length arguments为实参集合,是个类数组,里面有个属性是callee,是函数本身,常用于 一个匿名函数内部想要得到匿名函数本身 function fn(){ console.log(arguments.callee === fn)//true console.log(arguments.callee.length)//0 形参长度 console.log(arguments.length)//3 实参长度 } fn(1,2,3) function f1(){ f2() } function f2(){ console.log(f2.caller)//f1(){} 指向谁在调用它的函数 } f1() 严格模式下caller、callee、arguments都不可以被通过 复制代码 JSON.stringify()和toString()的区别 1. toString()是转化成字符串,因此不带 [ ] 2. JSON.stringify是转成JSON字符串,是带[]的 迭代 1. JavaScript内置的可迭代对象有:String、Array、Map、Set 2. 当迭代这些数据类型的时候(比如说for...of),JavaScript内部就会(不带参数)调用这些数据类型原型上@@iterator方法(可通过常量 Symbol.iterator 访问该属性),然后返回一个符合迭代器协议的对象,再通过迭代器去获得要迭代的值。 3. 一些内置的语法结构——比如for...of循环、展开语法、yield*,和解构赋值——其内部实现也使用了同样的迭代协议: [...someString] // ["h", "i"] 复制代码 [][Symbol.iterator]() + '' //"[object Array Iterator]" ''[Symbol.iterator]() + '' //"[object String Iterator]" new Map()[Symbol.iterator]() + '' //"[object Map Iterator]" new Set()[Symbol.iterator]() + '' //"[object Set Iterator]" let iterator1 = [][Symbol.iterator](); //返回一个迭代器 let iterator2 = ''[Symbol.iterator](); //返回一个迭代器 let iterator3 = new Map()[Symbol.iterator](); //返回一个迭代器 let iterator4 = new Set()[Symbol.iterator](); //返回一个迭代器 iterator1.next(); // { value: xxx, done: false } iterator1.next(); // { value: xxx, done: false } ... 复制代码 使object对象变成可迭代的对象: 也就是说Object上没有@@iterator这个方法,所以object对象不是可迭代,但是我们可以给Object原型上自定义一个[Symbol.iterator]的方法,使object对象变成可迭代的对象。 没有给对象自定义可迭代方法时: var myIterable = {}; for( key of myIterable){ console.log(key) } //Uncaught TypeError: myIterable is not iterable 复制代码 给对象自定义可迭代方法时: var myIterable = {}; myIterable[Symbol.iterator] = function* () { yield 1; yield 2; yield 3; }; for( key of myIterable){ console.log(key) } //1 //2 //3 复制代码 也就是说我们可以改变可迭代对象的方法或者讲任一一个对象变为可迭代。 使一个新创建出来的类实例变成可迭代的对象: 没有给类实例自定义可迭代方法时: class SimpleClass { constructor(data) { this.data = data } } const simple = new SimpleClass([1,2,3,4,5]) for (const val of simple) { console.log(val) } //Uncaught TypeError: simple is not iterable 复制代码 给类实例自定义可迭代方法时: class SimpleClass { constructor(data) { this.data = data } [Symbol.iterator]() { return { next: () => { if (index < this.data.length) { return {value: this.data[index++], done: false} } else { return {done: true} } } } } } const simple = new SimpleClass([1,2,3,4,5]) for (const val of simple) { console.log(val) //'1' '2' '3' '4' '5' } 复制代码 简单迭代器 function makeIterator(array) { let nextIndex = 0; return { next: function () { return nextIndex < array.length ? { value: array[nextIndex++], done: false } : { done: true }; } }; } let it = makeIterator(['哟', '呀']); console.log(it.next().value); // '哟' console.log(it.next().value); // '呀' console.log(it.next().done); // true 复制代码 文章分类 前端 文章标签
__label__pos
0.987716
Python vs Matlab Python vs Matlab: The Essential Differences You Should Know Generally, people get confused about a large number of programming languages and their importance. The same concern goes with Python vs Matlab.  Yes, people seek the comparison between Python and Matlab so that they can choose one from the best two. They search for the reasons why they are different from each other and what tasks they perform. Indeed, all the programming languages are different, but they also share other common features. All the programming languages have their unique points and are used for a specific purpose, such as Python is a general-purpose programming language, whereas Matlab is an expensive language and uses a specific purpose. If you want to know more about Python and Matlab, we are here to guide you on which one you should choose. We will discuss in what aspects they are different from each other. Let’s begin with a brief overview of Python vs Matlab. Best Assignment Help Services Overview Matlab is a mathematical and technical purpose programming language that includes arrays, linear algebra, and matrices. It is broadly known as a high-quality environment for any task. On the other hand, Python is a general-purpose language and is becoming popular daily due to its ease and functionality. Python possesses all the computational power of Matlab for science and computation to develop new applications easily, but it is different from Matlab. Hence, to choose the perfect fit for you need to go through the full comparison of Python vs Matlab. What Is Python? Python is an open-source, general-purpose, high-level programming language. In 1991 it was released and developed by Guido van Rossum. Python uses the OOP approach for helping the developers for writing precise and logical code for small and large projects.  Python is a broadly used language. Mainly it is developed for emphasis on code readability. It supports multiple paradigms of programming, like OOP, procedural programming, and functional programming. It let you work quickly and integrate systems more efficiently. What Is Matlab? Matlab is an abbreviation for Matrix Laboratory, and it is a high-level language. It is one of those programming languages which are well-designed and the most advanced ones for computing. Cleve Moler developed it in the late 1970s.  It is the best tool for data plotting, developing user interfaces, matrix manipulations, and implementing algorithms. Though Matlab is primarily developed for numerical computations, it enables symbolic computation with the MuPAD symbolic engine. See also SAS vs STATA: The Essential Guide By Statistics Experts Python Uses And Features 1. It is easy to learn as it has clean and clear syntax. 2. High portable as it runs almost anywhere. It has high-end servers and workstations. 3. It is free and extensible. 4. To delimit the blocks it uses white space. Matlab Uses And Features 1. It saves time, saves human lives, reduces costs, etc. 2. Image processing. 3. Matlab is a programming interface as well as a programming language. 4. The functionality of Matlab can be greatly expanded by adding toolboxes. These are groups of functions that provide particular functionality. For example, an Excel link allows data to be written in an Excel-compatible format, while Statistics Toolbox allows for more advanced statistical data manipulation (ANOVA, Basic Fits, etc.) Advantages Of Python • Easy to Read, Learn and Write • Interpreted Language • Free and Open-Source • Portability • Dynamically Typed • Improved Productivity • Broad Libraries Support Advantages Of Matlab • Ease of Use • Predefined Functions • Graphical User Interface • Platform Independence • MATLAB Compiler • Device-Independent Plotting Disadvantages Of Python • Slow Speed • Weak in Mobile Computing • Not Memory Efficient • Runtime Errors • Database Access Disadvantages Of Matlab • Cost • Interpreted language Python vs Matlab Python vs matlab Python Vs Matlab: The Key Differences Ease Of Learning If we talk about the ease of learning, there is no clear winner from both languages. The complexity of learning any programming language means ease of learning. Syntax, programming structure, and resources of learning are all part of it. Python is one of the most user-friendly programming languages, with a simple syntax. To work on Python, all you need is a basic understanding of programming. There are various sources available online for learning to program for free. You can share your problems of coding with the other users in the python community. Aside from that, you can obtain proper Python documentation on their official website to get started with Python. In contrast, Matlab is all about the toolboxes. In Matlab, if you want to work, you can use drop and down ways to do so. Users can use the toolboxes in Matlab to do any task they want. You need to have some basic knowledge of the language if you want to get started with Matlab. On their official website, you’ll find the best community assistance, where you can share your problems, questions, and other information with many programmers. Syntax The syntax is one of the most crucial differences between Matlab and Python. In Matlab, everything is an array, whereas, in Python, everything is an object. For example, Strings in Matlab can be either array of characters or arrays of strings, whereas strings in Python are represented by a single object called “str.”  See also The Most Used Programming Languages In 2021 by Experts Nature  Matlab is a proprietary commercial product and not open-source software. As a result, to use it, you must purchase it first. You’ll have to pay an extra amount for any additional Matlab toolbox you want to install and run.  Aside from the expense, it’s important to note that its user base is limited, and Matlab was created specifically for MathWorks. Matlab would also lose its industrial value if MathWorks went out of business. In contrast, Python is an open-source programming language, which means it is completely free. So you can download the Python and install it and make alterations to the source code to meet your needs. Python has a more extensive fan base and user community as a result of this.  Naturally, the Python community is large, with thousands of developers actively contributing to the language’s continuous improvement. Python has a large number of free packages, making it a popular choice among developers all around the world. Cost When it comes to programming, the cost is always a concern. As a result, programmers look for open-source programming languages. They use open-source programming languages to execute a variety of activities in those languages. Python is a programming language that is free and open-source. As a result, there is no need to invest any money to use Python. Programmers frequently use Python. Aside from that, newbie programmers prefer Python because it allows them to get started without spending any money. In contrast, Matlab is one of the most costly programming languages.  To use Matlab, you must pay a large amount of money. Mathworks offers Matlab’s students version, and this version is less expensive than the full version of Matlab. In addition, if your school or college has purchased a campus-wide license, you can use it with the help of access. See also Coding vs Scripting | Major Differences You Should Know? IDE Matlab has an integrated development environment. It has a very clean interface with the console in the center to write commands. Variable explorer lies on the right and directory listing on the left. In the case of Python, it does not have an integrated development environment by default. Users choose an IDE according to their requirements on their own. Anaconda is a very popular Python package that includes two IDEs- one is Spyder, and the other is JupyterLab. Both these IDEs work very efficiently. Popularity In programming languages, the popularity depends on how much programmers prefer the language. And on various aspects, the popularity of the programming language. As we compare Matlab with Python, then Python is the most popular programming language. And you can see the popularity of Python popularity over Matlab. The ease of learning, ease of access, and simplicity make Python popular.  In contrast, Matlab is a costly one, so it is less popular. And as we compare it to Python, it has limited features. You have to buy the additional tool to expand the Matlab features. Python Vs Matlab: In Tabular Form FeaturesPythonMatlab InventionPython is developed in 1991 by Guido Van Rossum.It was invented by a computer programmer and mathematician Cleve Moler in the 1970s. PurposeIt is a general-purpose languageMatlab is a mathematical and technical computational language UsePython is used for web programmingIt is used in the creation of user interfaces, function plotting, and matrix manipulation. AdvantagesOpen-source, free, large, and active community, extensive library.Test algorithms fastly without compilation LibraryIt contains a large standard library.There is no generic programming functionality in the standard library. SupportEmail and phone supportPersonalized real-time support is absent Code Generation for EmbeddedIt does not generate comprehensive, automatic code for embedded systems.It generates readable, portable, C++ and C codes. PortabilityIt is portable There are some restrictions in the case of portability. Salary- Python vs Matlab The average salary of a Python Programmer Per Hour Average Salary of a Matlab Developer per hour Top Companies Which use Python vs Matlab Online sources to learn Python and Matlab Nowadays, there are numerous sources that provide an excellent guide to any programming language online. People who want to learn Python or Matlab programming language do not need to rely on offline classes or any degree and diploma. Some online platforms are hard-working day and night to avail their services to the students. Such platforms are- Udemy Coursera Codeacademy etc. Other than that you can learn these languages through blogs, websites, youtube tutorials, ebooks, etc. Conclusion Hence, both Python and Matlab both have their benefits and drawbacks and rule the market in their own way. The above information defines Python vs Matlab with their features effectively. And we hope that now you should know all about Python vs Matlab.  And you can decide which is the best one between Python vs Matlab. But if in any case, you want assistance from us regarding Python Homework Help or Python Programming Assignment Help, then contact us without any hesitation. We are 24*7 available here to help you. FAQs Is Matlab harder than Python? From my point of view, Matlab is easier to use than Python. We can do vector and matrix operations directly in Matlab (rather than go through NumPy in Python). Can Python replace Matlab? Yes, Python can replace Matlab as most Matlab applications can be found in Python libraries or viably duplicated. Python is a general language, and it is more versatile than Matlab. Leave a Comment Your email address will not be published. Required fields are marked * This site uses Akismet to reduce spam. Learn how your comment data is processed.
__label__pos
0.938472
Excel Formulas for Beginners Joseph Basic, Excel, Formulas 4 Comments Formulas are the basis of getting any calculation done in Excel.  Whether you need to sum a range of values, calculate forecast for next quarter’s sales, or even to perform text manipulation, all these things are done using formulas.  Simply put, Excel without formulas would just be a fancy text editor (ok, maybe a little more than that, but you get my point).  Formulas are a crucial part of being efficient in Excel and if you’re brand new to formulas, this post can be of great use to you. What are Formulas in Excel? As mentioned in my An Introduction to Microsoft Excel, Excel uses a grid of cells to contain data.  Whether that data is text, numbers, or both, you can apply formulas to it in order to produce useful information.  For example, you can sum all of the data in a column, find the difference between two dates, or combine multiple strings together. How to Write a Formula Writing a formula in Excel is easy.  Here are 3 basic rules to remember before you write a formula: 1. A formula always begins with the equal sign (=) 2. All built-in functions (like SUM()) always have parenthesis after the function’s name and after all the arguments have been passed to that function (more on arguments later) 3. Formulas accept numbers, text, or cell references You write your formulas by selecting the cell you wish to have the formula’s output be displayed and you can begin by typing the “=” sign.  You can also click inside the formula bar to enter a formula or you can press F2 (which brings your cursor to the formula bar for the selected cell). Here is an example of a simple formula: Basic Formulas Notice that the formula bar shows the formula while the cell shows the output.  This is because a) the formula bar always shows the formula (hence, its name) and b) I’m currently not in an “Edit Mode” where I could change the formula. To edit a formula, there are 3 convenient ways you can use: 1. Double-click the cell that has the formula (this will temporarily make the cell into a formula bar to edit the formula from within the cell) 2. Select the cell and click in the formula bar 3. Select the cell and press F2 Cell References In my previous example, I added two numbers by entering the actual numbers into the formula.  What if I needed to grab a number from a cell?  That’s easy.  You can either type the cell reference by hand, or you can select the cell you want to reference while still editing the formula. In the following image, notice how A2 has a weird dashed outline to it.  That’s because I recently selected it. Referencing the cells in formulas. =A1+A2 The output will still be 2 as shown previously.  Also, did you notice the colors of the A1 and A2 text?  They are blue and green, respectively.  Their cells are also outlined with those colors when you are editing the formula, but it’s hard to see that in the image because all of the cells are against each other.  Check out the next image below.  This time I clicked in the formula bar to better see the color outline.  This gives you a visual guide to where the references are made.  (Also, since I clicked in the formula bar, the colors are displayed there rather than inside cell B1.) Formulas - Display of the colors outlined for cells A1 and A2. It’s important to know that when Excel evaluates a formula, it starts by looking at the references and converts them into their respective values.  So for the formula above, A1 converts to 1, then Excel converts A2 to 1 as well, and then it sums them together.  When the reference is a range, it is converted into what’s called an array, or a collection of values ordered in the way they were found (left to right, move one row down, then repeat). Related articles: Excel’s Built-In Functions Excel provides a ton of built-in functions you can use in your formulas.  There are formulas for math, finance, statistics, and even engineering.  Excel can be very powerful with its built-in functions. Each function takes in what’s called an argument (or multiple arguments).  An argument is some information that the function needs in order to do what you want it to.  Usually, when a function takes one argument, the argument is just a reference to the data you want to have the function work on.  If there is more than one argument, then some of those arguments are just instructions for the function on how to process the data.  Each argument is separated by a comma.  Also, arguments can also be called parameters. For now, we’ll begin with a simple SUM() function.  In later posts I’ll be sure to get more in-depth with other useful functions. The SUM() Function The SUM() function does just what it says, sums up data.  It requires at least one argument, but can take up to 255 arguments.  Each argument “type” can either be a cell reference, a cell range reference, or a hand-typed number.  The argument types can be mixed and matched.  To follow our simple example of A1+A2, here are some ways you can use the SUM() function. Multiple ways to use the SUM function in formulas Allow me to explain how these 4 formulas work: 1. Two basic cell references, separated by a comma.  So in this formula there are two arguments to the SUM() function.  The result is 2. 2. The argument here is the cell range.  I should mention that you define a cell range as Start:End where Start is the top-left-most cell and End is the bottom-right-most cell.  The two references are joined by a colon.  The result is also 2. 3. In this formula, I put a reference to A1, which is 1, and a reference to A2, which is also 1.  However, for the A2 reference, I also put a dash (or hyphen) in front of the reference.  This dash negates A2'svalue, making it -1.  So 1 + (-1) = 0. 1. Remember that Excel evaluates the reference first, and then applies functions and operators (operators are +, -, *, /, etc.)  For example, when it sees -A2, it first converts A2 to 1, and then negates it. 4. The fourth formula is just a cell range as the first argument, followed by a hand-typed “3” as the second argument.  The result is 1 + 1 + 3 = 5. What’s Next? Start writing formulas! The best way to learn Excel is by actually playing around with it. You can start by check out some of the Commonly Used Excel Formulas. You can also get started with Vlookup, one of the most popular formulas in Excel! Are you ready to master Excel Dashboards? Learn Excel Dashboard Course Power BI Webinar Still not convinced? Check out my review of the course! • Lisa Very nice post! I like the simplicity of your explanation. How about writing something on charts? • Hi Lisa! Thank you for your comment 🙂 I like that idea. I will definitely post about charts in a future post! • Mary Hello, maybe you already posted something about this topic, but I couldn’t find a specific post on it. I need to create a drop down list for each record (row) in a log. How can I have excel place the drop down list in each cell on a specific column? I want to use the drop down list so people that enter records in this log don’t have to type the records every time, instead they would just select them from the list. Thanks so much for your help 🙂 • vinodha how can i put single quotation mark in between text in excel with example This is my string value Allen’s Communications
__label__pos
0.993597
Backup and Recovery Why should I consider backup services for my data? Imagine the following scenario: You are an ecommerce company who deals with customers through a database driven website. You store all the necessary customer & order details on your database. An error from a 3rd party contractor leads to your database becoming corrupted - and whatever you try, you can't get the data back. Now, the same scenario with a backup service: You are an ecommerce company who deals with customers through a database driven website. You store all the necessary customer & order details on your database. You use a RAID array to back up your data servers, in case the worst does happen. An error from a 3rd party contractor leads to your database becoming corrupted - but that's not an issue. You quickly reimport your backup database and you've only lost a few minutes worth of data. Backing up data is essential for many businesses who could incur huge financial consequences if the data was lost or corrupted. This means that backup services are vital and should be strongly considered when purchasing many data centre services.
__label__pos
0.702129
HOW I WAS ABLE TO TAKE OVER ONE OF DELL’S SUBDOMAINS Hi everyone, I found a subdomain takeover recently on Dell, and I wanted to write a write-up of it. I hope you enjoy it! First of all, I chose a bounty program. Then, as usual, I started with a subdomain scan. I scanned all the subdomains using subfinder. The scan finished, and I probed the output using httpx. I separated live and dead subdomains then started my research by checking for subdomain takeovers. WHAT IS A SUBDOMAIN TAKEOVER? Subdomain takeover is the process of registering a non-existing domain name to gain control over another domain. It generally occurs when there is a domain name (e.g. subdomain.example.com) using a CNAME record to another domain (e.g. subdomain.example.com CNAME anotherdomain.com). At some point, anotherdomain.com expires and is available for registration by anyone. If the CNAME record is not deleted from the example.com DNS zone, anyone who registers to anotherdomain.com has full control over subdomain.example.com until the DNS record is present. Since you get a basic understanding of subdomain takeover vulnerability and impact of it, we can now skip to how to find a subdomain takeover, and how I found one. Subdomain takeover is only possible on dead subdomains because the flaw occurs when an expired subdomain’s CNAME records are not deleted. So we can ignore the live subdomains. We need to check every dead subdomain. We can do it manually, or we can use a script that automates the process for us. I’m going to show you both two methods. SEARCHING FOR TAKEOVERS MANUALLY First things first we need to see every dead subdomain and their CNAME records. We can use a simple command to do that: dig. You can use it like the following: Dig Results The output of this command is important and we are going to use that later. And we need to see the responses they return. We can use a browser to do it, but there are tools that take screenshots of the subdomains we specify, like aquatone. Since we can see the contents of dead subdomains, we need to see if the subdomain returns a custom error message or something else. If it returns an error message, we are going to use the error message and the anotherdomain.com (herokuapp.com in this situation) that we have seen in the output of the dig command and learn that if it is possible to takeover a subdomain. You can use can-i-take-over-xyz. SEARCHING FOR TAKEOVERS AUTOMATICALLY The method and process are the same, but tools can do all the process instead of us. I used a tool called subjack which is written in golang and is really fast. You can download it from here: subjack. Once I found the bug, I skipped to the Proof of Concept to write and submit my report. PoC (PROOF OF CONCEPT) After finding a subdomain with a Heroku app in its dig results, I visited the page which led me to the following error page: Heroku Error Message (This is not the same error page that I encountered but I forgot to take a screenshot since I was in a hurry to report it :) ) Then I visited can-i-take-over-xyz and looked to the engine column and found Heroku with the error message it returns, “app not found”. It was the same error as I saw in the dead subdomain. Later, I visited the official website of Heroku engine and tried to take the subdomain over. And I was successful doing it :) To proof it, I left a little message on the subdomain: Proof of Concept I tried to tell you about how I found a subdomain takeover vulnerability on Dell as much as I could. Thank you for reading it :) My Twitter: twitter.com/tahabykl Web Application Security Researcher | twitter.com/tahabykl
__label__pos
0.885451
We want to hear from you!Take our 2021 Community Survey! Glossary of React Terms Single-page Application A single-page application is an application that loads a single HTML page and all the necessary assets (such as JavaScript and CSS) required for the application to run. Any interactions with the page or subsequent pages do not require a round trip to the server which means the page is not reloaded. Though you may build a single-page application in React, it is not a requirement. React can also be used for enhancing small parts of existing websites with additional interactivity. Code written in React can coexist peacefully with markup rendered on the server by something like PHP, or with other client-side libraries. In fact, this is exactly how React is being used at Facebook. ES6, ES2015, ES2016, etc These acronyms all refer to the most recent versions of the ECMAScript Language Specification standard, which the JavaScript language is an implementation of. The ES6 version (also known as ES2015) includes many additions to the previous versions such as: arrow functions, classes, template literals, let and const statements. You can learn more about specific versions here. Compilers A JavaScript compiler takes JavaScript code, transforms it and returns JavaScript code in a different format. The most common use case is to take ES6 syntax and transform it into syntax that older browsers are capable of interpreting. Babel is the compiler most commonly used with React. Bundlers Bundlers take JavaScript and CSS code written as separate modules (often hundreds of them), and combine them together into a few files better optimized for the browsers. Some bundlers commonly used in React applications include Webpack and Browserify. Package Managers Package managers are tools that allow you to manage dependencies in your project. npm and Yarn are two package managers commonly used in React applications. Both of them are clients for the same npm package registry. CDN CDN stands for Content Delivery Network. CDNs deliver cached, static content from a network of servers across the globe. JSX JSX is a syntax extension to JavaScript. It is similar to a template language, but it has full power of JavaScript. JSX gets compiled to React.createElement() calls which return plain JavaScript objects called “React elements”. To get a basic introduction to JSX see the docs here and find a more in-depth tutorial on JSX here. React DOM uses camelCase property naming convention instead of HTML attribute names. For example, tabindex becomes tabIndex in JSX. The attribute class is also written as className since class is a reserved word in JavaScript: <h1 className="hello">My name is Clementine!</h1> Elements React elements are the building blocks of React applications. One might confuse elements with a more widely known concept of “components”. An element describes what you want to see on the screen. React elements are immutable. const element = <h1>Hello, world</h1>; Typically, elements are not used directly, but get returned from components. Components React components are small, reusable pieces of code that return a React element to be rendered to the page. The simplest version of React component is a plain JavaScript function that returns a React element: function Welcome(props) { return <h1>Hello, {props.name}</h1>; } Components can also be ES6 classes: class Welcome extends React.Component { render() { return <h1>Hello, {this.props.name}</h1>; } } Components can be broken down into distinct pieces of functionality and used within other components. Components can return other components, arrays, strings and numbers. A good rule of thumb is that if a part of your UI is used several times (Button, Panel, Avatar), or is complex enough on its own (App, FeedStory, Comment), it is a good candidate to be a reusable component. Component names should also always start with a capital letter (<Wrapper/> not <wrapper/>). See this documentation for more information on rendering components. props props are inputs to a React component. They are data passed down from a parent component to a child component. Remember that props are readonly. They should not be modified in any way: // Wrong! props.number = 42; If you need to modify some value in response to user input or a network response, use state instead. props.children props.children is available on every component. It contains the content between the opening and closing tags of a component. For example: <Welcome>Hello world!</Welcome> The string Hello world! is available in props.children in the Welcome component: function Welcome(props) { return <p>{props.children}</p>; } For components defined as classes, use this.props.children: class Welcome extends React.Component { render() { return <p>{this.props.children}</p>; } } state A component needs state when some data associated with it changes over time. For example, a Checkbox component might need isChecked in its state, and a NewsFeed component might want to keep track of fetchedPosts in its state. The most important difference between state and props is that props are passed from a parent component, but state is managed by the component itself. A component cannot change its props, but it can change its state. For each particular piece of changing data, there should be just one component that “owns” it in its state. Don’t try to synchronize states of two different components. Instead, lift it up to their closest shared ancestor, and pass it down as props to both of them. Lifecycle Methods Lifecycle methods are custom functionality that gets executed during the different phases of a component. There are methods available when the component gets created and inserted into the DOM (mounting), when the component updates, and when the component gets unmounted or removed from the DOM. Controlled vs. Uncontrolled Components React has two different approaches to dealing with form inputs. An input form element whose value is controlled by React is called a controlled component. When a user enters data into a controlled component a change event handler is triggered and your code decides whether the input is valid (by re-rendering with the updated value). If you do not re-render then the form element will remain unchanged. An uncontrolled component works like form elements do outside of React. When a user inputs data into a form field (an input box, dropdown, etc) the updated information is reflected without React needing to do anything. However, this also means that you can’t force the field to have a certain value. In most cases you should use controlled components. Keys A “key” is a special string attribute you need to include when creating arrays of elements. Keys help React identify which items have changed, are added, or are removed. Keys should be given to the elements inside an array to give the elements a stable identity. Keys only need to be unique among sibling elements in the same array. They don’t need to be unique across the whole application or even a single component. Don’t pass something like Math.random() to keys. It is important that keys have a “stable identity” across re-renders so that React can determine when items are added, removed, or re-ordered. Ideally, keys should correspond to unique and stable identifiers coming from your data, such as post.id. Refs React supports a special attribute that you can attach to any component. The ref attribute can be an object created by React.createRef() function or a callback function, or a string (in legacy API). When the ref attribute is a callback function, the function receives the underlying DOM element or class instance (depending on the type of element) as its argument. This allows you to have direct access to the DOM element or component instance. Use refs sparingly. If you find yourself often using refs to “make things happen” in your app, consider getting more familiar with top-down data flow. Events Handling events with React elements has some syntactic differences: • React event handlers are named using camelCase, rather than lowercase. • With JSX you pass a function as the event handler, rather than a string. Reconciliation When a component’s props or state change, React decides whether an actual DOM update is necessary by comparing the newly returned element with the previously rendered one. When they are not equal, React will update the DOM. This process is called “reconciliation”. Is this page useful?Edit this page
__label__pos
0.713021
Dynamic and Nested Templates Hey Logseq, I’ve been thinking about workflow automation. Templates scratch the surface, but they could be so much more. Here’s some ideas: 1. Journals to populate with a different template based on the day of the week, month, or year. Example implementation: image 2. Templates to allow calls to slash commands, such as /today and /template/“template_name”, to dynamically populate the template upon creation, allowing for template reuse and dynamic time and date. Example implementation: ((/today)) ((/current-time)) and ((/template/“Morning Routine”)) would be replaced by Logseq with the results of those slash commands upon use of the parent template. Less important / Nice to have: 1. Extend Time and Date slash commands to include current and next first day of week/month/year. Bonus 1: Be able to add -X or +X for a relative value inside templates. Example implementation: ((/next-week+1)) could populate with the date for next tuesday, or ((/today+14)) could populate with the date 2 weeks from now. Bonus 2: Add commands for the independant month and year values. ex: ((/year+1)) = 2022, [[((/year))-((/month))]] = [[2021-11]]. (Perhaps wrap [[]] in another (()) to avoid collapsing [[]] before template use.) 2. Embed random child block from a parent block or page. Example implementation: {{random [[Inspirational Quotes]]}}. Perhaps wrap in (()) (or whatever) to select randomly at template creation rather than instantly. Bonus: Embed a random Linked reference block of a page. Ex: {{random-linkedref [[Highlights]]}} Do you guys have any other thoughts or ideas about workflow automation? Cheers, MOS Nice ideas. For example, I would like to give myself a reminder on Friday to go back through the week’s notes. And I would like to give myself a reminder on Tuesday to put the bins out on Wednesday morning! 1 Like You can try using the logseq smartblocks plugin for this
__label__pos
0.950901
Yeelight IFTTT Service become offline every day #1 Hello, I am having a serious problem where my Yeelight service within IFTTT is disconnect every single day at a specific time (or sometimes within an hour or so of it). I’m reconnecting the service and everything work normally again, but then in the next day, at that specific time (around 13:00-14:00 in the afternoon) it will disconnect on its own. I’ve read some topics here where you suggest that maybe the lease has expired with the router. However, even when the service within IFTTT disconnected, I can still control the bulbs with the Yeelight app on my phone, so the bulbs are still connected to the internet and your server normally. Is there something that you can suggest or do to resolve this please? I rely on the IFTTT applets to run these bulbs I use, and if it disconnects all the time, it becomes pretty annoying. Please help. If you need me to provide additional info, please feel free to let me know. Thank you. 0 Likes #2 It may be because of you had two IFTTT account but linked with one XIAOMI account. And below is the reply of IFTTT staff: “Yes, the multiple accounts situation you mentioned do exist. It is not unheard of for multiple IFTTT accounts to connect to the same service account. It might not be immediate, but we would eventually stop sending refresh requests for Account A.” Please think of this situation, do you have two IFTTT account? 0 Likes #3 Hi, Thank you for your answer. As far as I know, I only have one IFTTT account. My IFTTT account is [email protected]. I connect to YeelightIFTTT with Mi Account: account id is 1617919090 0 Likes #4 Had email to IFTTT’s staff to search your refresh requests to help us debug your issue. 1 Like #5 Hi, Do you have an answer please? 0 Likes #6 Sorry, IFTTT hadn’t reply me yet… 0 Likes #7 Is there any solution? 0 Likes
__label__pos
0.662844
Prev NEXT How Mac OS X Works The Purpose of Operating Systems What's the big deal about operating systems in the first place? What do they actually do? An operating system is the level of programming that lets you do things with your computer. The operating system interacts with a computer's hardware on a basic level, transmitting your commands into language the hardware can interpret. The OS acts as a platform for all other applications on your machine. Without it, your computer would just be a paperweight. At its heart, a computer is a number-crunching device. It takes input in the form of zeros and ones -- bits -- and channels them through various circuits and processors. The hardware behaves according to strict rules. We define these rules using things like logic gates, which take input and produce an output in a predictable way. Some simple computers have no need of an operating system because they only perform a specific task. But personal computers need to be more versatile. The operating system allows complex programs to access the capabilities of the hardware to get results. Only the hardware's physical properties and our own imaginations can limit what programs can do. Advertisement You could design an operating system by physically programming it into a computer's circuits. This would require building electrical pathways using millions of logic gates. But such an operating system would be inflexible. That's why operating systems like Mac OS X and Windows are software. Software is more malleable than hardware -- you can make changes through software patches and version updates. To do the same with hardware would mean switching out physical chips and circuit boards. Operating systems are like the manager for a computer. It's the job of the OS to monitor what software needs and what the hardware can provide. As you run applications on your computer, the OS allocates the resources necessary to complete the task. That can include processing power, memory allocation and computer storage access, among other things. Ideally, the OS will make sure that your computer's hardware is never overtaxed. The OS also allows programs to run on a computer. Without an OS, a programmer would have to design an application to run on the hardware directly. This isn't very efficient. An operating system acts as an application interface to the hardware. The OS does this through an application program interface (API). Program developers build applications for the API. Assuming the programmer has done a good job at building an application without any serious bugs, it should run just fine on the operating system. One important part of the Mac computer is the firmware. Firmware is a level of programming that exists directly on top of a hardware layer. It's not part of the operating system itself. The Mac firmware is the first stored program that executes when you turn on a Mac computer. Its job is to check the computer's CPU, memory, disk drives and ports for errors. The PC equivalent to the Mac firmware is called BIOS, which stands for basic input-output systems. A second program called a bootloader loads the Mac OS X, assuming there are no errors reported by the firmware. Next, we'll take a closer look at what makes the Mac OS X tick.
__label__pos
0.915077
Keep video running when screen is turned off? Discussion in 'iPad 2 Forum' started by GAMESHARQ, Mar 13, 2014. 1. GAMESHARQ GAMESHARQ Expand Collapse iPad Fan Joined: May 13, 2011 Messages: 281 Thanks Received: 4 When the screen goes off, either manually or automatically, whatever video I'm watching stops playing. Is there a way to change it so that it keeps playing even when the screen is off?   2. Mickey330 Mickey330 Expand Collapse Administrator Staff Member Joined: Aug 30, 2010 Messages: 11,892 Thanks Received: 2,225 Can you tell us what app you are using to play the video? And, I assume you are playing the video to hear music? Is the video on You Tube or another place? We need to know what your intent is and where you are streaming the video from (if you are). Because yeah, normal ops is to turn off a video when the cover is closed... Marilyn   3. GAMESHARQ GAMESHARQ Expand Collapse iPad Fan Joined: May 13, 2011 Messages: 281 Thanks Received: 4 For example, the TBS app or the WWE app. I know it sounds crazy, listening to shows without watching them, but that's a whole different kettle of fish.   4. dhewson777 dhewson777 Expand Collapse iPad Ninja Joined: Feb 23, 2012 Messages: 1,589 Thanks Received: 163 If playing back videos, you can use nPlayer. It has a setting to allow a video to continue playing during screen lock. Check each app for that option, I'm sure there are others.   Share This Page
__label__pos
0.974026
MEC030_V2XInformationService.yaml 45 KB Newer Older Elian Kraja's avatar Elian Kraja committed type: object x-etsi-notes: "NOTE 1:\tOther standardization organizations could be added as needed.\nNOTE 2:\tThe V2X message types of ETSI shall be used as specified in ETSI TS 102 894-2 [6], clause A.114." x-etsi-ref: 6.3.5 V2xServerUsd.sdpInfo: description: SDP with IP multicast address and port number used for V2X communication via MBMS. properties: ipMulticastAddress: description: '' type: string x-etsi-mec-cardinality: '1' x-etsi-mec-origin-type: String portNumber: description: '' type: string x-etsi-mec-cardinality: '1' x-etsi-mec-origin-type: String required: - ipMulticastAddress - portNumber type: object x-etsi-mec-cardinality: '1' V2xServerUsd.tmgi: description: Temporary Mobile Group Identity (TMGI), which is used within MBMS to uniquely identify Multicast and Broadcast bearer services. properties: mbmsServiceId: description: MBMS Service ID consisting of three octets. type: string x-etsi-mec-cardinality: '1' x-etsi-mec-origin-type: String mcc: description: The Mobile Country Code part of PLMN Identity. type: string x-etsi-mec-cardinality: '1' x-etsi-mec-origin-type: String mnc: description: The Mobile Network Code part of PLMN Identity. type: string x-etsi-mec-cardinality: '1' x-etsi-mec-origin-type: String required: - mbmsServiceId - mcc - mnc type: object x-etsi-mec-cardinality: '' V2xServerUsd: properties: sdpInfo: $ref: '#/components/schemas/V2xServerUsd.sdpInfo' serviceAreaIdentifier: description: A list of service area identifier for the applicable MBMS broadcast area. items: type: string minItems: 1 type: array x-etsi-mec-cardinality: 1..N x-etsi-mec-origin-type: String tmgi: $ref: '#/components/schemas/V2xServerUsd.tmgi' required: - tmgi - serviceAreaIdentifier - sdpInfo type: object x-etsi-ref: 6.5.10 VirtualNetworkInterfaceRequirements: type: string LinkType: description: >- 'This data type represents a type of link' type: object required: - href properties: href: $ref: '#/components/schemas/Href' Href: description: >- The URI referring to the subscription. type: string format: uri ProblemDetails: properties: detail: description: A human-readable explanation specific to this occurrence of the problem type: string x-etsi-mec-cardinality: 0..1 x-etsi-mec-origin-type: String instance: description: A URI reference that identifies the specific occurrence of the problem format: uri type: string x-etsi-mec-cardinality: 0..1 x-etsi-mec-origin-type: URI status: description: The HTTP status code for this occurrence of the problem format: uint32 type: integer x-etsi-mec-cardinality: 0..1 x-etsi-mec-origin-type: Uint32 title: description: A short, human-readable summary of the problem type type: string x-etsi-mec-cardinality: 0..1 x-etsi-mec-origin-type: String type: description: A URI reference according to IETF RFC 3986 that identifies the problem type format: uri type: string x-etsi-mec-cardinality: 0..1 x-etsi-mec-origin-type: URI type: object responses: 204: description: No Content 206: description: Partial content 400: description: 'Bad Request : used to indicate that incorrect parameters were passed to the request.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 401: description: 'Unauthorized : used when the client did not submit credentials.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 403: description: 'Forbidden : operation is not allowed given the current status of the resource.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 404: description: 'Not Found : used when a client provided a URI that cannot be mapped to a valid resource URI.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 406: description: 'Not Acceptable : used to indicate that the server cannot provide the any of the content formats supported by the client.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 409: description: 'Conflict : The operation cannot be executed currently, due to a conflict with the state of the resource' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 412: description: 'Precondition failed : used when a condition has failed during conditional requests, e.g. when using ETags to avoid write conflicts when using PUT' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 415: description: 'Unsupported Media Type : used to indicate that the server or the client does not support the content type of the entity body.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 422: description: 'Unprocessable Entity : used to indicate that the server understands the content type of the request entity and that the syntax of the request entity is correct but that the server is unable to process the contained instructions. This error condition can occur if an JSON request body is syntactically correct but semantically incorrect, for example if the target area for the request is considered too large. This error condition can also occur if the capabilities required by the request are not supported.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails' 429: description: 'Too Many Requests : used when a rate limiter has triggered.' content: application/json: schema: $ref: '#/components/schemas/ProblemDetails'
__label__pos
0.985356
15. What is the area of the circle whose equation is (X – 9)2 + (y – 3)2 = 64? O 167 O 647 O 8 2 40 Question 15. What is the area of the circle whose equation is (X – 9)2 + (y – 3)2 = 64? O 167 O 647 O 8 2 40 in progress 0 MichaelMet 2 months 2021-07-15T09:07:01+00:00 1 Answers 2 views 0 Answers ( ) 0 2021-07-15T09:08:34+00:00 Answer: 64π General Formulas and Concepts: Geometry Area of a Circle Formula: A = πr² • r is radius Pre-Calculus Center-Radius Form: (x – h)² + (y – k)² = r² • (h, k) is center • r is radius Step-by-step explanation: Step 1: Define Identify (x – 9)² + (y – 3)² = 64 ↓ Compare to Center-Radius Form Center (9, 3) Radius² = 64 Step 2: Find Area 1. Substitute in variables [Area of a Circle Formula]:                                         A = π(64) 2. Multiply:                                                                                                             A = 64π Leave an answer Browse Giải phương trình 1 ẩn: x + 2 - 2(x + 1) = -x . Hỏi x = ? ( )
__label__pos
0.986762
Advice on updating a plugin for my stateful set Hi, I would like to ask the k8s community what they think would be the best way to accomplish this task I need to complete. I have a database with a plugins directory in which I can drop .jar files, and then restart the database to put them into effect. In k8s, this seems difficult to me. How should I go about this while maintaining high availability, but at the same time updating the plugin successfully? Thanks! Graphtastic
__label__pos
0.889232
Highlights of Calculus (5 videos) Big Picture: Integrals The second half of calculus looks for the distance traveled even when the speed is changing. Finding this “integral” is the opposite of finding the derivative. Professor Strang explains how the integral adds up little pieces to recover the total distance. I know the speed at each moment of my trip, so how far did I go? Professor Strang’s Calculus textbook (1st edition, 1991) is freely available here. Subtitles are provided through the generous assistance of Jimmy Ren. Transcript Related Resources Course Info
__label__pos
0.927803
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. How do you explicitly state the valid settings for an option? Take this example for instance Options[myFunc] = {opt1 -> "SomeString"}; myFunc[OptionsPattern[]] := Print[OptionValue[opt1]]; myFunc prints the value of the option. If we evaluate myFunc[opt1 -> {1, 2}] then it prints {1, 2}. This function will essentially print anything that you set to opt1. My question is, how can I make sure that my function only accept a given number of values for opt1. We can start with something simple like a String and an Integer. To have a better idea of the behavior that we would except when given the wrong values for opt1 we can take a look at what happens when we give the wrong values for PlotRange in the function Plot. enter image description here In the example given in the picture I purposely gave wrong values to the PlotRange options and gave me a message specifying the correct type of values for that particular options. It seems that it PlotRange ended up taking its default value and thus it return the Graphics object. In the simple example what we would like to obtain is something like: myFunc::sometag : Value of option opt1 -> `1` is not a string or interger. Does anyone know how to achieve this? share|improve this question add comment 1 Answer up vote 7 down vote accepted A straightforward solution Here is a simple way: In[304]:= ClearAll[myFunc]; Options[myFunc] = {opt1 -> "SomeString"}; myFunc::badopt = "Value of option opt1 -> `1` is not a string or interger."; myFunc[OptionsPattern[]] := With[{val = OptionValue[opt1]}, With[{error = ! MatchQ[val, _String | _Integer]}, If[error, Message[myFunc::badopt , val]]; (Print[val] /; ! error)]]; For example: In[308]:= myFunc[opt1 -> 1] During evaluation of In[308]:= 1 In[309]:= myFunc[opt1 -> {1, 2}] During evaluation of In[309]:= myFunc::badopt: Value of option opt1 -> {1,2} is not a string or interger. Out[309]= myFunc[opt1 -> {1, 2}] Making it general with custom assignment operators We can use the fact that OptionValue inside a function works with a single argument being an option name, to factor out the error-checking tedium. This is possible by using mma meta-programming facilities. Here is the code for a custom assignment operator: ClearAll[def, OptionSpecs]; SetAttributes[def, HoldAll]; def[f_[args___] :> body_,OptionSpecs[optionSpecs : {(_ -> {_, Fail :> _}) ..}]] := f[args] := Module[{error = False}, Scan[ With[{optptrn = First[# /. optionSpecs], optval = OptionValue[#]}, If[! MatchQ[optval, optptrn ], error = True; Return[(Fail /. Last[# /. optionSpecs])[optval]]]] &, optionSpecs[[All, 1]] ]; body /; ! error]; What it does is to take a function definition as a rule f_[args___]:>body_, and also the specifications for the acceptable options settings and actions to perform upon detection of an error in one of the passed options. We then inject the error-testing code (Scan) before the body gets executed. As soon as the first option with inappropriate setting is found, error flag is set to True, and whatever code is specified in the Fail:>code_ part of specifications for that option. The option specification pattern (_ -> {_, Fail :> _}) should read (optname_ -> {optpattern_, Fail :> onerror_}), where optname is an option name, optpattern is a pattern that the option value must match, and onerror is arbitrary code to execute if error is detected. Note that we use RuleDelayed in Fail:>onerror_, to prevent premature evaluation of that code. Note b.t.w. that the OptionSpecs wrapper was added solely for readability - it is a completely idle symbol with no rules attached to it. Here is an example of a function defined with this custom assignment operator: ClearAll[myFunc1]; Options[myFunc1] = {opt1 -> "SomeString", opt2 -> 0}; myFunc1::badopt1 = "Value of option opt1 -> `1` is not a string or interger."; myFunc1::badopt2 = "Value of option opt2 -> `1` is not an interger."; def[myFunc1[OptionsPattern[]] :> Print[{OptionValue[opt1], OptionValue[opt2]}], OptionSpecs[{ opt1 -> {_Integer | _String, Fail :> ((Message[myFunc1::badopt1, #]; Return[$Failed]) &)}, opt2 -> {_Integer, Fail :> ((Message[myFunc1::badopt2, #]; Return[$Failed]) &)}} ]]; Here are examples of use: In[473]:= myFunc1[] During evaluation of In[473]:= {SomeString,0} In[474]:= myFunc1[opt2-> 10] During evaluation of In[474]:= {SomeString,10} In[475]:= myFunc1[opt2-> 10,opt1-> "other"] During evaluation of In[475]:= {other,10} In[476]:= myFunc1[opt2-> 1/2] During evaluation of In[476]:= myFunc1::badopt2: Value of option opt2 -> 1/2 is not an interger. Out[476]= $Failed In[477]:= myFunc1[opt2-> 15,opt1->1/2] During evaluation of In[477]:= myFunc1::badopt1: Value of option opt1 -> 1/2 is not a string or interger. Out[477]= $Failed Adding option checks to already defined functions automatically You might also be interested in a package I wrote to test the passed options: CheckOptions, available here. The package comes with a notebook illustrating its use. It parses the definitions of your function and creates additional definitions to check the options. The current downside (apart from generation of new definitions which may not always be appropriate) is that it only covers older way to define options through OptionQ predicate (I did not yet update it to cover OptionValue - OptionsPattern. I will reproduce here a part of the accompanying notebook to illustrate how it works: Consider a model function : In[276]:= ClearAll[f]; f[x_, opts___?OptionQ]:= x^2; f[x_, y_, opts___?OptionQ] := x + y; f[x_, y_, z_] := x*y*z; Suppose we want to return an error message when an option FontSize is passed to our function : In[280]:= f::badopt="Inappropriate option"; test[f,heldopts_Hold,heldArgs_Hold]:=(FontSize/.Flatten[List@@heldopts])=!=FontSize; rhsF[f,__]:=(Message[f::badopt];$Failed); We add the option - checking definitions : In[283]:= AddOptionsCheck[f,test,rhsF] Out[283]= {HoldPattern[f[x_,opts___?OptionQ]/;test[f,Hold[opts],Hold[x,opts]]]:> rhsF[f,Hold[opts],Hold[x,opts]], HoldPattern[f[x_,y_,opts___?OptionQ]/;test[f,Hold[opts],Hold[x,y,opts]]]:> rhsF[f,Hold[opts],Hold[x,y,opts]], HoldPattern[f[x_,opts___?OptionQ]]:>x^2, HoldPattern[f[x_,y_,opts___?OptionQ]]:>x+y, HoldPattern[f[x_,y_,z_]]:>x y z} As you can see, once we call AddOptionsCheck, it generates new definitions. It takes the function name, the testing function, and the function to execute on failure. The testing function accepts the main function name, options passed to it (wrapped in Hold), and non-options arguments passed to it (also wrapped in Hold). From the generated definitions you can see what is does. We now check on various inputs : In[284]:= f[3] Out[284]= 9 In[285]:= f[3,FontWeight->Bold] Out[285]= 9 In[286]:= f[3,FontWeight->Bold,FontSize->5] During evaluation of In[286]:= f::badopt: Inappropriate option Out[286]= $Failed In[289]:= f[a,b] Out[289]= a+b In[290]:= f[a,b,FontWeight->Bold] Out[290]= a+b In[291]:= f[a,b,FontWeight->Bold,FontSize->5] During evaluation of In[291]:= f::badopt: Inappropriate option Out[291]= $Failed In[292]:= OptionIsChecked[f,test] Out[292]= True Please note that the test function can test for arbitrary condition involving function name, passed arguments and passed options. There is another package of mine, PackageOptionChecks, available at the same page, which has a simpler syntax to test specifically r.h.s. of options, and can also be applied to entire package. A practical example of its use is yet another package, PackageSymbolsDependencies, whose functions' options are "protected" by PackageOptionChecks. Also, PackageOptionChecks may be applied to functions in Global' context as well, it is not necessary to have a package. One other limitation of the current implementation is that we can not return the function unevaluated. Please see a more detailed discussion in the notebook accompanying the package. If there is enough interest in this, I will consider updating the package to remove some of the limitations I mentioned. share|improve this answer 1   nice package. Is it really true that mma does not have anything built-in to do this? –  acl Jul 5 '11 at 20:10      @acl Not as far as I know (which doesn't mean there isn't such a built-in mechanism, but at least it is not well documented). OTOH, built-in functions clearly use some mechanism to check that. Whether or not that mechanism is factored out from the specifics of each particular function, I don't know. As I mentioned elsewhere, bugs resulting from passing inappropriate option settings were some of the worst I ever encountered, and the struggle with them was the motivation for me to write those two packages. –  Leonid Shifrin Jul 5 '11 at 20:48 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.921013
DEV Community Cover image for Learning Rust: Working with threads Núria for codegram Posted on • Originally published at codegram.com on Learning Rust: Working with threads I used to live in the single-thread JavaScript happy-land where the closest thing to working with threads I ever did was communicating between a website and a Chrome extension. So when people talked about the difficulties of parallelism and concurrency, I never truly got what the fuss was about. As you may have read before, I started learning Rust a few weeks ago, re-writing a text-based game I previously made with Vue. It's a survival game in which you must gather and craft items to eat and drink. It has no winning condition other than trying to survive as many days as possible. I managed to get most of the game features working, but there was an annoying bug: if the user left the game idle for hours, it didn't check for the stats until the user interacted again. You could live for hundreds of days without doing nothing! I knew this could be solved with threads, so I finally gathered the courage and read the chapter Fearless Concurrency of The Rust Programming Language. Tweet saying 'Well well well. Im going to start reading about #Rust concurrency #prayForFloit', followed by a tweet of a picture of Pikachu with a blank face Let's recap: what I needed was to be able to keep track of stats and days count every few seconds, and notify the player as soon as the stats reach 0. Spawning a new thread that runs some code every 10 seconds was easy: thread::spawn(move || loop { thread::sleep(Duration::from_secs(10)); println!("Now we should decrease stats and update day count…"); }); But how could I modify the stats and day count from that thread without running into ownership issues? Turned out to be much easier than I was expecting. You can create a Mutex (mutual exclusion), so only one thread at a time can access that data. Multiple threads need to own that Mutex, so you need to wrap it in an Arc (atomically reference counted) in order for the code to work properly (all this is much better explained in the Shared-State Concurrency chapter). The code ended up looking like this: fn main() { let stats = Arc::new(Mutex::new(Stats { water: Stat::new(100.0), food: Stat::new(100.0), energy: Stat::new(100.0), })); let days = Arc::new(Mutex::new(0)); control_time(&days, &stats); // ... } fn control_time(days: &Arc<Mutex<i32>>, stats: &Arc<Mutex<Stats>>) { let now = Instant::now(); let days = Arc::clone(&days); let stats = Arc::clone(&stats); thread::spawn(move || loop { thread::sleep(Duration::from_secs(10)); let elapsed_time = now.elapsed().as_secs(); let mut elapsed_days = days.lock().unwrap(); *elapsed_days = elapsed_time as i32 / 60; let mut stats_lock = stats.lock().unwrap(); decrease_stats(&mut stats_lock, 10.0); }); } The main thread can keep using days and stats as before, just by adding the .lock(). And this works nice but hey, wait… I still have the same problem: the main thread is busy waiting for the user input! Even though stats and day count are updating successfully every 10 seconds, the main thread is not aware. It was time to add another thread! Thread all the things! This thread is supposed to handle the user input, and send the action to the main one. For this one, it felt better to use the other way the Rust Programming Book explains to communicate between threads: message passing. I needed a channel to send the action to the main thread, but the main thread also needed to let the input thread know when it was ready to receive an action (since some actions take time). The channels that Rust offers in the standard library are multiple producers, single consumer (meaning there is no two-way communication between channels), so I ended up creating two channels. let (tx, rx) = mpsc::channel(); let (tx2, rx2) = mpsc::channel(); (not really creative with the names) Then, spawn a new thread that will wait for the main thread signal, ask the user for input, and send it back to the main one. Note that using rx2.recv() blocks the thread until a message is received: this will allow us to control when the user should be prompted. thread::spawn(move || loop { let _ = rx2.recv(); let action = request_input("\nWhat to do?"); tx.send(action).ok(); }); Then, from the main thread, we send a message to request input, and proceed to create a loop that will continuously check for stats and for user input with rx.try_recv() (this doesn't block the thread). If the stats have reached 0, the loop will break, ending the game; if not, we ask for input again. tx2.send(String::from("Ready for input")).ok(); loop { if let Ok(action) = rx.try_recv() { match action.trim() { // handle all possible actions } } if is_game_over(&stats.lock().unwrap()) { break; } else { tx2.send(String::from("Ready for input")).ok(); } } It felt really natural to me: it's just like dispatching an event with JavaScript, right? Well, no. When you dispatch an event with JavaScript, no one cares if someone is listening to that event. You dispatch it, and if there is no listener, that message is lost forever. In Rust land, if a tree falls in a forest there has to be someone around to listen to the sound it makes. Otherwise, the forest panics and burns itself and the world explodes. And if the person who is around is busy doing other things, all the trees will wait in line and won't fall until that person stops to listen to them. So what was happening was this: asks for input before the last action's result appear As you see, it looks as if the input thread is not waiting until the ready message from the main one. However, the problem is that the main thread is sending messages continuously (remember that the action is being listened to via try_recv, so it's not blocking). Even though when the user inputs sleep, the main thread indeed sleeps for a few seconds, since we had sent tons of messages before, the input thread gets the messages one by one. This might feel natural if you are used to other languages, but coming from JavaScript, it blew my mind and took me some time to wrap my mind around it. In the end, the solution was as easy as sending the ready message only after the last message has been received and handled: tx2.send(String::from("Ready for input")).ok(); loop { if let Ok(action) = rx.try_recv() { match action.trim() { // handle all possible actions } // now we are ready for another action: tx2.send(String::from("Ready for input")).ok(); } if is_game_over(&stats.lock().unwrap()) { break; } } All problems solved! As what I've been seeing these last few weeks, what's really difficult about Rust is not the language itself, but leaving behind the JavaScript way of thinking (feel free to replace JavaScript by your preferred language). However, that's what I like the most, getting out of the comfort zone! You can check the game code here: https://github.com/codegram/live-rust Cover photo by Geran de Klerk Discussion (0)
__label__pos
0.896522
离线下载 获取电子书 Android SDK · 更新于 2018-01-17 22:00:59 Android SDK 上手指南:Java 应用程序编程 在这份教程中,我们不会过度深入细节,不过如果大家对于某些概念还不大清楚,请点击此处参阅甲骨文 Java 指南。这是一份非常优秀的 Java 语言指导材料,非常适合初学者。 介绍 如果大家已经对 Java 非常熟悉,那么不妨直接忽略这部分内容。如果大家的技巧还存在局限或者对 Java 这种语言只闻其名,那么本文将为各位解答很多在 Android 开发当中经常遇到的问题。需要注意的是,这篇文章并不能作为 Java 起步教程来阅读,最多只能算是基础知识汇总。如果对 Java 毫无了解,大家还需要参考其它一些额外的 Java 学习资料。 在这份教程中,我们不会过度深入细节,不过如果大家对于某些概念还不大清楚,请点击此处参阅甲骨文 Java 指南。这是一份非常优秀的 Java 语言指导材料,非常适合初学者。如果在刚刚开始阅读时发现本教程提到的一些内容有些陌生,也请大家千万不要惊慌。只要真正着手开始在 Android 项目中进行尝试,各位很快就能理解本文表达的内容。 1. Java 语法 第一步 大家已经在我们的 Android 项目中见识过一部分 Java 语法了,但为了清楚起见,让我们再从另一个项目重新学习。这一次我们不再使用 Android 项目,而直接换成 Java 项目——这样大家就能更轻松地感受到我们所使用的结构。打开 Eclipse,点击“New”按钮。在弹出的导航窗口中下滚到 Java 文件夹中并将其打开,选择“Java Project”然后单击下一步。 输入“MyJavaProject”作为项目名称并点击“Finish”。Eclipse 接下来会在工作区内创建我们的新项目。在 Package Explorer 当中,打开新项目文件夹,右键点击“src”并选择“New”、然后选“Class”。这时在 Name 框中输入“MyMainClass”。接着勾选旁边的复选项“public static void main”最后点击“Finish”。 Eclipse 会创建出类并在编辑器中打开。大家不必过多关注项目结构或者类中的现有内容,因为我们的 Android 项目所使用的结构与 Java 项目并不相同。各位可以利用这个项目来磨练自己的 Java 编码技能,在这里代码的运行与测试都要比 Android 应用简便得多,而且我们也能更多地关注 Java 语法本身。 我们在类文件中看到的“public static void main”行就是主方法。无论方法的具体内容是什么,它都会在应用程序运行时加以执行。方法的内容就是显示在“public static void main(String[] args)”后面大括号里的部分。Eclipse 可能还生成了一个“to do”行——直接无视就好。在其后创建新行,我们就从这里开始添加自己的代码。 第二步 在 Java 当中,一条变量可以保存一个数据值,例如文本字符串或者数字。当我们在 Java 中创建或者“声明”一个变量时,需要指定其中的数据类型并为其命名。输入以下代码: int myNum; 这一行声明了一个整数变量。我们可以通过以下代码行声明一个变量并为其分配一个值: int myNum = 5; 现在我们可以通过名称引用这条变量了。添加以下代码行,从而将变量值写入到输出控制台: System.out.println(myNum); 大家一般不会在自己的 Android 应用中以这种方式向系统输出写入结果,而是用 LogCat 视图取而代之。不过通过这种输出写入方式,我们能够更为便捷地对 Java 代码进行测试。 第三步 现在让我们运行应用。运行过程与 Android 应用存在些许不同,但我们会在稍后继续进行说明。选择“Run”,而后选择“Run Configurations”。在弹出的列表左侧选择“Java Application”并点击上方的“New launch configuration”。如果这是我们的惟一一个 Java 应用,Eclipse 会自动选择运行刚刚创建完成的小小成果。 现在点击“Run”来运行我们的应用程序。大家会看到,编辑器下方的控制台视图中将显示出数字“5”。大家可以利用这种方式在学习过程中对 Java 代码进行测试。 大家现在可以通过工具栏中的“Run”按钮随时运行上一次启动过的项目。 第四步 无论何时,只要是在 Java 中进行变量声明,我们都会使用相同的语法。为了在以后的编程工作中为变量分配不同的值,我们可以通过名称对其进行引用: myNum = 3; 上述代码会覆盖掉现有值。在 Java 中存在着很多不同的变量类型。其中 int 属于被引用的基本类型,此外还有一些其它数字类型;char 用于字符值,而 boolean 则用于保存真假值。对象的类型也分许多种;关于对象的话题,我们放在以后进行讨论。对于大家来说,最熟悉的基本对象类型应该要数 String 了,它的作用是保存一条文本字符串: String myName = "Sue"; 文本字符串值要用引号括起来。大家可以在正面的例子中看到它的使用方法: System.out.println("number: " + myNum); 添加上述代码并运行,控制台会显示“number:”再加上变量值。 第五步 在上面我们看到了赋值运算符“=”——正面我们再来看其它一些常见运算符: //add myNum = 5+6; //subtract myNum = 7-3; //multiply myNum = 3*2; //divide myNum = 10/5; //remainder myNum = 10%6; //increment (add one) myNum++; //decrement (subtract one) myNum--; 运算符既可以被用在变量当中,也可以作为硬编码数字(如上所示): int myNum = 5; int myOtherNum = 4; int total = myNum+myOtherNum;//9 第六步 作为 Android 基础内容的另一种 Java 结构就是注释。大家可以通过以下两种方式添加注释: //this is a single line comment /* This is a multiline comment * stretching across lines * to give more information */ 最重要的是养成编写代码的同时添加注释的好习惯,这一方面便于我们自己日后查看,另外也能让其他合作者了解我们的编码意图。 2. 控制结构 第一步 我们向主方法中添加的代码会在 Java 应用程序运行时同时执行。而在运行我们所创建的 Android 应用程序时,主 Activity 中 onCreate 方法的代码会同时执行。这些方法中的所有代码行都会按从上到下的顺序依次执行,不过执行的流程并不总是线性的。Java 当中有很多控制结构,正面我们就从条件开始了解其中最常见的几种。条件语句一般用于进行测试从而确定执行流程。在 Java 当中,最简单的条件结构就是 if 语句: if(myNum>3) System.out.println("number is greater than 3"); 这项测试的目的在于检查变量的值是否大于 3。如果确实大于 3,那么字符串将被写入输出结果。如果小于等于 3,则不向输出结果写入任何内容、继续执行程序中的下一行。条件测试会“返回”一个真假值。真与假都属于 boolean 值。我们也可以向其中添加 else,这样其内容只会在返回假值时才执行: if(myNum>3) System.out.println("number is greater than 3"); else System.out.println("number is not greater than 3"); 在我们的示例中,else 语句会在值等于或者小于 3 时执行。尝试在代码的整数变量中添加不同的值,看看条件测试结果会发生哪些变化: if(myNum>10) System.out.println("number is greater than 10"); else if(myNum>7) System.out.println("number is greater than 7"); else if(myNum>3) System.out.println("number is greater than 3"); else System.out.println("number is 3 or less"); 只有在流程中的每一次测试中都返回假值时,所有测试行才会被彻底执行一遍。因此对于大部分数字来说,只会输出一条字符串。如果有必要,大家可以把多条 else if 语句串点起来。大家还可以利用 if 语句与一个或者多个 else if 相结合,而不必每一次都在之后单独设置 else。 下面我们测试一个数字是否大于另一个。尝试使用以下变量: if(myNum<10) System.out.println("number less than 10"); if(myNum==10) System.out.println("number equals 10"); if(myNum!=10) System.out.println("number is not equal to 10"); if(myNum>=10) System.out.println("number either greater than or equal to 10"); if(myNum<=10) System.out.println("number either less than or equal to 10"); 大家也可以利用包含字符串的变量类型进行类似的测试。要同时进行多项测试,可以利用以下语法: if(myNum>=10 && myNum<=50) System.out.println("number is between 10 and 50"); 其中的“&&”是作为“and”运算符存在的,意思是整条语句只有在两项测试都返回真值时才被判定为真。而“or”运算符将在两条测试中任意一条返回真值时判定为真: if(myNum<0 || myNum!=-1) System.out.println("number is less than 0 or not equal to -1"); 为了将代码组成代码块,我们可以使用大括号——两个括号之间的所有代码都会在测试返回真值时执行: if(myNum<10) { System.out.println("number less than 10"); myNum=10; } 这些括号能够在循环、方法以及类中实现代码分组。 第二步 接下来让我们看看循环。下面的 for 循环会进行十次遍历,意味着它的内容将执行十次: for(int i=0; i<10; i++){ System.out.println(i); } 在 for 循环中的第一个表达式旨在将一个整数型计数器变量初始化为零。第二个表达式则是条件测试,检查该变量的值是否小于 10。如果返回的是真值,则循环内容得到执行;如果返回的是假值,则中止循环。一旦循环当中的内容开始执行,第三个表达式就同时执行,即递增计数器。 另一种循环 while 所使用的语法稍有区别。以下代码就是我们利用 while 来实现上面的 for 循环的相同执行效果: int i=0; while(i<10){ System.out.println(i); i++; } 循环当中可以容纳多行代码,其中包括其它循环。 第三步 我们已经接触了主方法与 Android 的 onCreate 方法。下面让我们一起学习如何创建自己的方法。将以下方法放置在主方法的右括号之后: public static void doSomething(){ System.out.println("something"); } 该方法被定义为 public,这意味着项目中的所有类都可以调用其进程。如果它的属性为“private”,则代表只供同一个类内部进行访问(也就是‘visibility’)。一般来说,大家不会在自己的第一个 Android 应用中包含“static”修饰符,因此忽略掉它即可。而“void”代表着返回类型。在我们的示例中,该方法不会返回任何值。为了执行该方法,我们需要在主方法中添加一项调用: doSomething(); 运行应用程序并查看其功能——改变方法以返回一个值: public static int doSomething(){ return 5; } 改变方法调用并再次运行: System.out.println(doSomething()); 返回的值会被写出。方法还可以接收参数: public static int doSomething(int firstNum, int secondNum){ return firstNum*secondNum; } 在调用该方法时,大家必须符合正确的参数类型与数字: System.out.println(doSomething(3, 5)); 方法能够将应用程序进程拆分为逻辑块。如果大家需要多次执行同一项任务,那么它们的作用将非常重要;我们可以简单在方法中进行定义,然后在需要时随时调用。如果各位需要改变处理流程,也只需在方法代码中进行修改。 3. 类与对象 第一步 我们已经了解了方法如何被用于重新使用代码并将其拆分成逻辑部分。类与对象则能够在更大的范围内实现此类功能。大家可以将应用中的任务划分成不同对象,其中每个对象都由它所归属的类为其定义一系列职责。这类似于用一种方法负责一个特定功能区域,不过一个对象可以拥有多个方法而且能够保存数据值。 想象我们正在创建一款游戏——大家可以创建一个专门用来处理用户详细信息的类。在 Package Explorer 中选择我们的应用程序包,右键点击并选择“New”而后是“Class”。输入“GameUser”作为类名称,确保 main method stub 复选框没有被勾选,然后点击“Finish”。Eclipse 会打开这个类文件,在初始状态下其中只包含它的类声明概要: public class GameUser { //class content } 大家所添加的所有内容都应该位于两个大括号之间(除非大家添加导入语句,这部分内容将位于最前方)。我们的 Android 应用会识别出罗列于文件开头的包名称。当然这里我们使用的是默认包,所以前面并没有列出其它内容。 第二步 在这个类当中添加以下变量: private String playerName; rivate int score; 这些被称为“实例变量”,因为它们被定义为我们所创建的类中的实例。在它们之后添加一个构造方法,它会在该类中的某个对象被创建后开始执行: public GameUser(String userName, int userScore){ playerName=userName; score=userScore; 这里的构造永远与类使用同样的名称,而且可能要求也可能不要求使用参数。该构造通常应该向实例变量分配值,一般是通过参数来实现。 第三步 类也可以定义方法。将以下典型集合添加到构造之后: public String getName() {return playerName;} public int getScore() {return score;} public void setName(String newName) {playerName=newName;} public void setScore(int newScore) {score=newScore;} 这些被称为 get 与 set 方法,或者叫 getter 与 setter,因为它们会利用接收及发送实例变量值的能力将外部代码添加到类中来。查看 Eclipse 中的 Outline 视图,理解它如何帮助实现导航类内容。 第四步 保存我们新建的类文件。回到主类当中,为新类在主方法中创建一个对象: GameUser aUser = new GameUser("Jim", 0); 我们符合构造当中的参数要求——以上代码中的“new”关键字将使构造开始执行。现在我们可以使用这个类实例,通过调用其方法访问其中的数据: System.out.println(aUser.getScore()); aUser.setScore(5); System.out.println(aUser.getScore()); 运行程序以查看调用对象上的 public 方法之后,值产生了什么样的变化。大家可以创建多个对象实例,并对它们进行分别管理: GameUser anotherUser = new GameUser("Jane", 5); 4. 继承与界面 第一步 我们已经了解了如何通过创建对象实例来使类定义一系列职责。它的效果不仅作用于我们所创建的类本身,同时也作用于其它我们能够使用的现有 Java 及 Android 类。除此之外,在创建这些平台类实例的同时,大家还可以利用继承对其加以扩展。在继承机制的帮助下,我们可以创建出一个继承现有类功能、同时又拥有自己运行流程的类。在我们所创建的第一个 Android 项目中,主 Activity 类就是一个很好的例子。 现在打开 Android 项目中的这个类。在类声明的开头,大家会看到“extends Activity”。这意味着该类属于 Android Activity 类中的一个子类。这里的 Activity 类用于使 Android 系统处理向用户呈现的屏幕内容,而各方法则用于不同变量状态下的屏幕内容(创建、暂停与消除等)。通过向 Android Activity 类声明中的定义方法添加代码并在必要时增加额外方法的方式,我们能够更专注于实现应用程序的独特风格。 这是我们经常会在 Android 上使用的模式,用于为应用程序的常见需要扩展定义类。大家可以用自己的类适当对其加以补充。 第二步 再来看 Activity 类中的起始行。请记住,我们添加了“implements OnClickLisener”来处理 UI 中的按钮点击操作。这将通过引用被实施在界面当中。界面类似于一个我们利用“extends”继承而来的类,只不过界面声明只需简单罗列方法概述。大家需要对每一项概述进行方法实施。因此当我们实施 OnClickListener 时,需要委托该类提供一个 onClick 方法——正如我们在之前的 Android 项目中所做。因此界面类似于一项协定。在继承机制的辅助下,扩展类能够继承由类声明所提供的、用于实现超类(即经过扩展的类)的方法实施。如果需要,大家可以覆盖这些实施内容。 总结 在今天的教程中,我们简要介绍了一些 Java 语法方面的基本知识。当然,还有很多其它关于 Java 的结构与概念需要了解。如果大家此前没有接触过 Java,又希望保证自己能拥有足以顺利应对 Android 开发工作的必要知识,请务必点击此处认真阅读甲骨文公司发布的 Java 指南。其中需要认真学习的主题包括数组与交换语句。在本系列的后续文章中,我们将探讨一些大家最常用到的 Android 类。而在下一章节中,我们则开始探索 Android 应用项目中的资源。 原文链接: http://mobile.tutsplus.com/tutorials/android/android-sdk-java-application-programming/ 上一篇: 用户交互 下一篇: 应用程序资源
__label__pos
0.594357
1 $\begingroup$ Let $f \colon \mathbb{R}^n \to \mathbb{R} $ and $A=\{x \in [0,1]^n\colon x_1 \leq x_2 \leq \cdots \leq x_n\}$ I wan to compute a integral of the form $$I =\int_A \prod_{i=1}^n f(x_i -x_{i-1})dx = \int_{0}^1\int_{0}^{x_n} \cdots \int_0^{x_2}\prod_{i=1}^n f(x_i -x_{i-1})dx_1 \cdots dx_n$$ With the change of variables $y = F(x)$, $F_i(x)=x_i-x_{i-1}$ ($x_0=0$) we have $$I=\int_{F^{-1}(A)}\prod_{i=1}^n f(y_i)dy, \qquad F_i^{-1}(y) = y_1 + \cdots + y_i$$ Now I'm trying to know the set $F^{-1}(A)$ so I can have a explicit form for the limits of integration. Any help will be appreciated. $\endgroup$ 1 $\begingroup$ $$ F^{-1}(A)=\{y\in\mathbb{R}^n\colon y_j\ge0,\; j=1,\dots, n,\; \sum_j y_j\le 1\}. $$ $\endgroup$ Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.986233
阅读 9061 对不起,我不是针对你,我是说在座的各位都不会写 Java! 技术点 本文不是一个吹嘘的文章,不会讲很多高深的架构,相反,会讲解很多基础的问题和写法问题,如果读者自认为基础问题和写法问题都是不是问题,那请忽略这篇文章,节省出时间去做一些有意义的事情。 开发工具 不知道有多少”老”程序员还在使用 Eclipse,这些程序员们要不就是因循守旧,要不就是根本就不知道其他好的开发工具的存在,Eclipse 吃内存卡顿的现象以及各种偶然莫名异常的出现,都告知我们是时候寻找新的开发工具了。 更换 IDE 根本就不想多解释要换什么样的 IDE,如果你想成为一个优秀的 Java 程序员,请更换 IntelliJ IDEA。使用 IDEA 的好处,请搜索谷歌。 别告诉我快捷键不好用 更换 IDE 不在我本文的重点内容中,所以不想用太多的篇幅去写为什么更换IDE。在这里,我只能告诉你,更换 IDE 只为了更好、更快的写好 Java 代码。原因略。 别告诉我快捷键不好用,请尝试新事物。 Bean bean 使我们使用最多的模型之一,我将以大篇幅去讲解 bean,希望读者好好体会。 domain 包名 根据很多 Java 程序员的”经验”来看,一个数据库表则对应着一个 domain 对象,所以很多程序员在写代码时,包名则使用:com.xxx.domain ,这样写好像已经成为了行业的一种约束,数据库映射对象就应该是 domain。但是你错了,domain 是一个领域对象,往往我们再做传统 Java 软件 Web 开发中,这些 domain 都是贫血模型,是没有行为的,或是没有足够的领域模型的行为的,所以,以这个理论来讲,这些 domain 都应该是一个普通的 entity 对象,并非领域对象,所以请把包名改为:com.xxx.entity。 如果你还不理解我说的话,请看一下 Vaughn Vernon 出的一本叫做《IMPLEMENTING DOMAIN-DRIVEN DESIGN》(实现领域驱动设计)这本书,书中讲解了贫血模型与领域模型的区别,相信你会受益匪浅。 DTO 数据传输我们应该使用 DTO 对象作为传输对象,这是我们所约定的,因为很长时间我一直都在做移动端 API 设计的工作,有很多人告诉我,他们认为只有给手机端传输数据的时候(input or output),这些对象成为 DTO 对象。请注意!这种理解是错误的,只要是用于网络传输的对象,我们都认为他们可以当做是 DTO 对象,比如电商平台中,用户进行下单,下单后的数据,订单会发到 OMS 或者 ERP 系统,这些对接的返回值以及入参也叫 DTO 对象。 我们约定某对象如果是 DTO 对象,就将名称改为 XXDTO,比如订单下发OMS:OMSOrderInputDTO。 DTO 转化 正如我们所知,DTO 为系统与外界交互的模型对象,那么肯定会有一个步骤是将 DTO 对象转化为 BO 对象或者是普通的 entity 对象,让 service 层去处理。 场景 比如添加会员操作,由于用于演示,我只考虑用户的一些简单数据,当后台管理员点击添加用户时,只需要传过来用户的姓名和年龄就可以了,后端接受到数据后,将添加创建时间和更新时间和默认密码三个字段,然后保存数据库。 @RequestMapping("/v1/api/user") @RestController public class UserApi { @Autowired private UserService userService; @PostMapping public User addUser(UserInputDTO userInputDTO){ User user = new User(); user.setUsername(userInputDTO.getUsername()); user.setAge(userInputDTO.getAge()); return userService.addUser(user); } } 复制代码 我们只关注一下上述代码中的转化代码,其他内容请忽略: User user = new User(); user.setUsername(userInputDTO.getUsername()); user.setAge(userInputDTO.getAge()); 复制代码 请使用工具 上边的代码,从逻辑上讲,是没有问题的,只是这种写法让我很厌烦,例子中只有两个字段,如果有 20 个字段,我们要如何做呢? 一个一个进行 set 数据吗?当然,如果你这么做了,肯定不会有什么问题,但是,这肯定不是一个最优的做法。 网上有很多工具,支持浅拷贝或深拷贝的 Utils。举个例子,我们可以使用 org.springframework.beans.BeanUtils#copyProperties 对代码进行重构和优化: @PostMapping public User addUser(UserInputDTO userInputDTO){ User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return userService.addUser(user); } 复制代码 BeanUtils.copyProperties 是一个浅拷贝方法,复制属性时,我们只需要把 DTO 对象和要转化的对象两个的属性值设置为一样的名称,并且保证一样的类型就可以了。如果你在做 DTO 转化的时候一直使用 set 进行属性赋值,那么请尝试这种方式简化代码,让代码更加清晰! 转化的语义 上边的转化过程,读者看后肯定觉得优雅很多,但是我们再写 Java 代码时,更多的需要考虑语义的操作,再看上边的代码: User user = new User(); BeanUtils.copyProperties(userInputDTO,user); 复制代码 虽然这段代码很好的简化和优化了代码,但是他的语义是有问题的,我们需要提现一个转化过程才好,所以代码改成如下: @PostMapping public User addUser(UserInputDTO userInputDTO){ User user = convertFor(userInputDTO); return userService.addUser(user); } private User convertFor(UserInputDTO userInputDTO){ User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return user; } 复制代码 这是一个更好的语义写法,虽然他麻烦了些,但是可读性大大增加了,在写代码时,我们应该尽量把语义层次差不多的放到一个方法中,比如: User user = convertFor(userInputDTO); return userService.addUser(user); 复制代码 这两段代码都没有暴露实现,都是在讲如何在同一个方法中,做一组相同层次的语义操作,而不是暴露具体的实现。 如上所述,是一种重构方式,读者可以参考 Martin Fowler 的《Refactoring Imporving the Design of Existing Code》(重构 改善既有代码的设计) 这本书中的 Extract Method 重构方式。 抽象接口定义 当实际工作中,完成了几个 API 的 DTO 转化时,我们会发现,这样的操作有很多很多,那么应该定义好一个接口,让所有这样的操作都有规则的进行。 如果接口被定义以后,那么 convertFor 这个方法的语义将产生变化,它将是一个实现类。 看一下抽象后的接口: public interface DTOConvert<S,T> { T convert(S s); } 复制代码 虽然这个接口很简单,但是这里告诉我们一个事情,要去使用泛型,如果你是一个优秀的 Java 程序员,请为你想做的抽象接口,做好泛型吧。 我们再来看接口实现: public class UserInputDTOConvert implements DTOConvert { @Override public User convert(UserInputDTO userInputDTO) { User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return user; } } 复制代码 我们这样重构后,我们发现现在的代码是如此的简洁,并且那么的规范: @RequestMapping("/v1/api/user") @RestController public class UserApi { @Autowired private UserService userService; @PostMapping public User addUser(UserInputDTO userInputDTO){ User user = new UserInputDTOConvert().convert(userInputDTO); return userService.addUser(user); } } 复制代码 review code 如果你是一个优秀的 Java 程序员,我相信你应该和我一样,已经数次重复 review 过自己的代码很多次了。 我们再看这个保存用户的例子,你将发现,API 中返回值是有些问题的,问题就在于不应该直接返回 User 实体,因为如果这样的话,就暴露了太多实体相关的信息,这样的返回值是不安全的,所以我们更应该返回一个 DTO 对象,我们可称它为 UserOutputDTO: @PostMapping public UserOutputDTO addUser(UserInputDTO userInputDTO){ User user = new UserInputDTOConvert().convert(userInputDTO); User saveUserResult = userService.addUser(user); UserOutputDTO result = new UserOutDTOConvert().convertToUser(saveUserResult); return result; } 复制代码 这样你的 API 才更健全。 不知道在看完这段代码之后,读者有是否发现还有其他问题的存在,作为一个优秀的 Java 程序员,请看一下这段我们刚刚抽象完的代码: User user = new UserInputDTOConvert().convert(userInputDTO); 复制代码 你会发现,new 这样一个 DTO 转化对象是没有必要的,而且每一个转化对象都是由在遇到 DTO 转化的时候才会出现,那我们应该考虑一下,是否可以将这个类和 DTO 进行聚合呢,看一下我的聚合结果: public class UserInputDTO { private String username; private int age; public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public User convertToUser(){ UserInputDTOConvert userInputDTOConvert = new UserInputDTOConvert(); User convert = userInputDTOConvert.convert(this); return convert; } private static class UserInputDTOConvert implements DTOConvert<UserInputDTO,User> { @Override public User convert(UserInputDTO userInputDTO) { User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return user; } } } 复制代码 然后 API 中的转化则由: User user = new UserInputDTOConvert().convert(userInputDTO); User saveUserResult = userService.addUser(user); 复制代码 变成了: User user = userInputDTO.convertToUser(); User saveUserResult = userService.addUser(user); 复制代码 我们再 DTO 对象中添加了转化的行为,我相信这样的操作可以让代码的可读性变得更强,并且是符合语义的。 再查工具类 再来看 DTO 内部转化的代码,它实现了我们自己定义的 DTOConvert 接口,但是这样真的就没有问题,不需要再思考了吗? 我觉得并不是,对于 Convert 这种转化语义来讲,很多工具类中都有这样的定义,这中 Convert 并不是业务级别上的接口定义,它只是用于普通 bean 之间转化属性值的普通意义上的接口定义,所以我们应该更多的去读其他含有 Convert 转化语义的代码。 我仔细阅读了一下 GUAVA 的源码,发现了 com.google.common.base.Convert 这样的定义: public abstract class Converter<A, B> implements Function<A, B> { protected abstract B doForward(A a); protected abstract A doBackward(B b); //其他略 } 复制代码 从源码可以了解到,GUAVA 中的 Convert 可以完成正向转化和逆向转化,继续修改我们 DTO 中转化的这段代码: private static class UserInputDTOConvert implements DTOConvert<UserInputDTO,User> { @Override public User convert(UserInputDTO userInputDTO) { User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return user; } } 复制代码 修改后: private static class UserInputDTOConvert extends Converter<UserInputDTO, User> { @Override protected User doForward(UserInputDTO userInputDTO) { User user = new User(); BeanUtils.copyProperties(userInputDTO,user); return user; } @Override protected UserInputDTO doBackward(User user) { UserInputDTO userInputDTO = new UserInputDTO(); BeanUtils.copyProperties(user,userInputDTO); return userInputDTO; } } 复制代码 看了这部分代码以后,你可能会问,那逆向转化会有什么用呢?其实我们有很多小的业务需求中,入参和出参是一样的,那么我们变可以轻松的进行转化,我将上边所提到的 UserInputDTO 和 UserOutputDTO 都转成 UserDTO 展示给大家。 **DTO:** public class UserDTO { private String username; private int age; public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public User convertToUser(){ UserDTOConvert userDTOConvert = new UserDTOConvert(); User convert = userDTOConvert.convert(this); return convert; } public UserDTO convertFor(User user){ UserDTOConvert userDTOConvert = new UserDTOConvert(); UserDTO convert = userDTOConvert.reverse().convert(user); return convert; } private static class UserDTOConvert extends Converter<UserDTO, User> { @Override protected User doForward(UserDTO userDTO) { User user = new User(); BeanUtils.copyProperties(userDTO,user); return user; } @Override protected UserDTO doBackward(User user) { UserDTO userDTO = new UserDTO(); BeanUtils.copyProperties(user,userDTO); return userDTO; } } } 复制代码 API: @PostMapping public UserDTO addUser(UserDTO userDTO){ User user = userDTO.convertToUser(); User saveResultUser = userService.addUser(user); UserDTO result = userDTO.convertFor(saveResultUser); return result; } 复制代码 当然,上述只是表明了转化方向的正向或逆向,很多业务需求的出参和入参的 DTO 对象是不同的,那么你需要更明显的告诉程序:逆向是无法调用的: private static class UserDTOConvert extends Converter<UserDTO, User> { @Override protected User doForward(UserDTO userDTO) { User user = new User(); BeanUtils.copyProperties(userDTO,user); return user; } @Override protected UserDTO doBackward(User user) { throw new AssertionError("不支持逆向转化方法!"); } } 复制代码 看一下 doBackward 方法,直接抛出了一个断言异常,而不是业务异常,这段代码告诉代码的调用者,这个方法不是准你调用的,如果你调用,我就”断言”你调用错误了。 说到这里顺便给大家推荐一个Java架构方面的交流学习群:867923845,点击立即加入里面会免费分享一些资深架构师录制的视频录像:有Spring,MyBatis,Netty源码分析,高并发、高性能、分布式、微服务架构的原理,JVM性能优化这些成为架构师必备的知识体系。还能领取免费的学习资源和前辈的面试经验和面试题,相信对于已经工作和遇到技术瓶颈的码友,在这个群里会有你需要的内容。 Bean 的验证 如果你认为我上边写的那个添加用户 API 写的已经非常完美了,那只能说明你还不是一个优秀的程序员。我们应该保证任何数据的入参到方法体内都是合法的。 为什么要验证 很多人会告诉我,如果这些 API 是提供给前端进行调用的,前端都会进行验证啊,你为什还要验证? 其实答案是这样的,我从不相信任何调用我 API 或者方法的人,比如前端验证失败了,或者某些人通过一些特殊的渠道(比如 Charles 进行抓包),直接将数据传入到我的 API,那我仍然进行正常的业务逻辑处理,那么就有可能产生脏数据! “对于脏数据的产生一定是致命”,这句话希望大家牢记在心,再小的脏数据也有可能让你找几个通宵! jsr 303验证 hibernate 提供的 jsr 303 实现,我觉得目前仍然是很优秀的,具体如何使用,我不想讲,因为谷歌上你可以搜索出很多答案! 再以上班的 API 实例进行说明,我们现在对 DTO 数据进行检查: public class UserDTO { @NotNull private String username; @NotNull private int age; //其他代码略 } 复制代码 API 验证: @PostMapping public UserDTO addUser(@Valid UserDTO userDTO){ User user = userDTO.convertToUser(); User saveResultUser = userService.addUser(user); UserDTO result = userDTO.convertFor(saveResultUser); return result; } 我们需要将验证结果传给前端,这种异常应该转化为一个 api 异常(带有错误码的异常)。 @PostMapping public UserDTO addUser(@Valid UserDTO userDTO, BindingResult bindingResult){ checkDTOParams(bindingResult); User user = userDTO.convertToUser(); User saveResultUser = userService.addUser(user); UserDTO result = userDTO.convertFor(saveResultUser); return result; } private void checkDTOParams(BindingResult bindingResult){ if(bindingResult.hasErrors()){ //throw new 带验证码的验证错误异常 } } 复制代码 BindingResult 是 Spring MVC 验证 DTO 后的一个结果集,可以参考spring 官方文档(spring.io/)。 拥抱 lombok 上边的 DTO 代码,已经让我看的很累了,我相信读者也是一样,看到那么多的 Getter 和 Setter 方法,太烦躁了,那时候有什么方法可以简化这些呢。 请拥抱 lombok,它会帮助我们解决一些让我们很烦躁的问题 去掉 Setter 和 Getter 其实这个标题,我不太想说,因为网上太多,但是因为很多人告诉我,他们根本就不知道 lombok 的存在,所以为了让读者更好的学习,我愿意写这样一个例子: @Setter @Getter public class UserDTO { @NotNull private String username; @NotNull private int age; public User convertToUser(){ UserDTOConvert userDTOConvert = new UserDTOConvert(); User convert = userDTOConvert.convert(this); return convert; } public UserDTO convertFor(User user){ UserDTOConvert userDTOConvert = new UserDTOConvert(); UserDTO convert = userDTOConvert.reverse().convert(user); return convert; } private static class UserDTOConvert extends Converter<UserDTO, User> { @Override protected User doForward(UserDTO userDTO) { User user = new User(); BeanUtils.copyProperties(userDTO,user); return user; } @Override protected UserDTO doBackward(User user) { throw new AssertionError("不支持逆向转化方法!"); } } } 复制代码 看到了吧,烦人的 Getter 和 Setter 方法已经去掉了。 但是上边的例子根本不足以体现 lombok 的强大。我希望写一些网上很难查到,或者很少人进行说明的 lombok 的使用以及在使用时程序语义上的说明。 比如:@Data,@AllArgsConstructor,@NoArgsConstructor..这些我就不进行一一说明了,请大家自行查询资料。 bean 中的链式风格 什么是链式风格?我来举个例子,看下面这个 Student 的 bean: public class Student { private String name; private int age; public String getName() { return name; } public Student setName(String name) { this.name = name; return this; } public int getAge() { return age; } public Student setAge(int age) { return this; } } 复制代码 仔细看一下 set 方法,这样的设置便是 chain 的 style,调用的时候,可以这样使用: Student student = new Student() .setAge(24) .setName("zs"); 复制代码 相信合理使用这样的链式代码,会更多的程序带来很好的可读性,那看一下如果使用 lombok 进行改善呢,请使用 @Accessors(chain = true),看如下代码: @Accessors(chain = true) @Setter @Getter public class Student { private String name; private int age; } 复制代码 这样就完成了一个对于 bean 来讲很友好的链式操作。 静态构造方法 静态构造方法的语义和简化程度真的高于直接去 new 一个对象。比如 new 一个 List 对象,过去的使用是这样的: List<String> list = new ArrayList<>(); 复制代码 看一下 guava 中的创建方式: List<String> list = Lists.newArrayList(); 复制代码 Lists 命名是一种约定(俗话说:约定优于配置),它是指 Lists 是 List 这个类的一个工具类,那么使用 List 的工具类去产生 List,这样的语义是不是要比直接 new 一个子类来的更直接一些呢,答案是肯定的,再比如如果有一个工具类叫做 Maps,那你是否想到了创建 Map 的方法呢: HashMap<String, String> objectObjectHashMap = Maps.newHashMap(); 复制代码 好了,如果你理解了我说的语义,那么,你已经向成为 Java 程序员更近了一步了。 再回过头来看刚刚的 Student,很多时候,我们去写 Student 这个 bean 的时候,他会有一些必输字段,比如 Student 中的 name 字段,一般处理的方式是将 name 字段包装成一个构造方法,只有传入 name 这样的构造方法,才能创建一个 Student 对象。 接上上边的静态构造方法和必传参数的构造方法,使用 lombok 将更改成如下写法(@RequiredArgsConstructor 和 @NonNull): @Accessors(chain = true) @Setter @Getter @RequiredArgsConstructor(staticName = "ofName") public class Student { @NonNull private String name; private int age; } 复制代码 测试代码: Student student = Student.ofName("zs"); 复制代码 这样构建出的 bean 语义是否要比直接 new 一个含参的构造方法(包含 name 的构造方法)要好很多。 当然,看过很多源码以后,我想相信将静态构造方法 ofName 换成 of 会先的更加简洁: @Accessors(chain = true) @Setter @Getter @RequiredArgsConstructor(staticName = "of") public class Student { @NonNull private String name; private int age; } 复制代码 测试代码: Student student = Student.of("zs"); 复制代码 当然他仍然是支持链式调用的: Student student = Student.of("zs").setAge(24); 复制代码 这样来写代码,真的很简洁,并且可读性很强。 使用 builder Builder 模式我不想再多解释了,读者可以看一下《Head First》(设计模式) 的建造者模式。 今天其实要说的是一种变种的 builder 模式,那就是构建 bean 的 builder 模式,其实主要的思想是带着大家一起看一下 lombok 给我们带来了什么。 看一下 Student 这个类的原始 builder 状态: public class Student { private String name; private int age; public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public static Builder builder(){ return new Builder(); } public static class Builder{ private String name; private int age; public Builder name(String name){ this.name = name; return this; } public Builder age(int age){ this.age = age; return this; } public Student build(){ Student student = new Student(); student.setAge(age); student.setName(name); return student; } } } 复制代码 调用方式: Student student = Student.builder().name("zs").age(24).build(); 复制代码 这样的 builder 代码,让我是在恶心难受,于是我打算用 lombok 重构这段代码: @Builder public class Student { private String name; private int age; } 复制代码 调用方式: Student student = Student.builder().name("zs").age(24).build(); 复制代码 代理模式 正如我们所知的,在程序中调用 rest 接口是一个常见的行为动作,如果你和我一样使用过 spring 的 RestTemplate,我相信你会我和一样,对他抛出的非 http 状态码异常深恶痛绝。 所以我们考虑将 RestTemplate 最为底层包装器进行包装器模式的设计: public abstract class FilterRestTemplate implements RestOperations { protected volatile RestTemplate restTemplate; protected FilterRestTemplate(RestTemplate restTemplate){ this.restTemplate = restTemplate; } //实现RestOperations所有的接口 } 复制代码 然后再由扩展类对 FilterRestTemplate 进行包装扩展: public class ExtractRestTemplate extends FilterRestTemplate { private RestTemplate restTemplate; public ExtractRestTemplate(RestTemplate restTemplate) { super(restTemplate); this.restTemplate = restTemplate; } public <T> RestResponseDTO<T> postForEntityWithNoException(String url, Object request, Class<T> responseType, Object... uriVariables) throws RestClientException { RestResponseDTO<T> restResponseDTO = new RestResponseDTO<T>(); ResponseEntity<T> tResponseEntity; try { tResponseEntity = restTemplate.postForEntity(url, request, responseType, uriVariables); restResponseDTO.setData(tResponseEntity.getBody()); restResponseDTO.setMessage(tResponseEntity.getStatusCode().name()); restResponseDTO.setStatusCode(tResponseEntity.getStatusCodeValue()); }catch (Exception e){ restResponseDTO.setStatusCode(RestResponseDTO.UNKNOWN_ERROR); restResponseDTO.setMessage(e.getMessage()); restResponseDTO.setData(null); } return restResponseDTO; } } 复制代码 包装器 ExtractRestTemplate 很完美的更改了异常抛出的行为,让程序更具有容错性。在这里我们不考虑 ExtractRestTemplate 完成的功能,让我们把焦点放在 FilterRestTemplate 上,“实现 RestOperations 所有的接口”,这个操作绝对不是一时半会可以写完的,当时在重构之前我几乎写了半个小时,如下: public abstract class FilterRestTemplate implements RestOperations { protected volatile RestTemplate restTemplate; protected FilterRestTemplate(RestTemplate restTemplate) { this.restTemplate = restTemplate; } @Override public <T> T getForObject(String url, Class<T> responseType, Object... uriVariables) throws RestClientException { return restTemplate.getForObject(url,responseType,uriVariables); } @Override public <T> T getForObject(String url, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException { return restTemplate.getForObject(url,responseType,uriVariables); } @Override public <T> T getForObject(URI url, Class<T> responseType) throws RestClientException { return restTemplate.getForObject(url,responseType); } @Override public <T> ResponseEntity<T> getForEntity(String url, Class<T> responseType, Object... uriVariables) throws RestClientException { return restTemplate.getForEntity(url,responseType,uriVariables); } //其他实现代码略。。。 } 复制代码 我相信你看了以上代码,你会和我一样觉得恶心反胃,后来我用 lombok 提供的代理注解优化了我的代码(@Delegate): @AllArgsConstructor public abstract class FilterRestTemplate implements RestOperations { @Delegate protected volatile RestTemplate restTemplate; } 复制代码 这几行代码完全替代上述那些冗长的代码。 是不是很简洁,做一个拥抱 lombok 的程序员吧。 重构 需求案例 项目需求 项目开发阶段,有一个关于下单发货的需求:如果今天下午 3 点前进行下单,那么发货时间是明天,如果今天下午 3 点后进行下单,那么发货时间是后天,如果被确定的时间是周日,那么在此时间上再加 1 天为发货时间。 思考与重构 我相信这个需求看似很简单,无论怎么写都可以完成。 很多人可能看到这个需求,就动手开始写 Calendar 或 Date 进行计算,从而完成需求。 而我给的建议是,仔细考虑如何写代码,然后再去写,不是说所有的时间操作都用 Calendar 或 Date 去解决,一定要看场景。 对于时间的计算我们要考虑 joda-time 这种类似的成熟时间计算框架来写代码,它会让代码更加简洁和易读。 请读者先考虑这个需求如何用 Java 代码完成,或先写一个你觉得完成这个代码的思路,再来看我下边的代码,这样,你的收获会更多一些: final DateTime DISTRIBUTION_TIME_SPLIT_TIME = new DateTime().withTime(15,0,0,0); private Date calculateDistributionTimeByOrderCreateTime(Date orderCreateTime){ DateTime orderCreateDateTime = new DateTime(orderCreateTime); Date tomorrow = orderCreateDateTime.plusDays(1).toDate(); Date theDayAfterTomorrow = orderCreateDateTime.plusDays(2).toDate(); return orderCreateDateTime.isAfter(DISTRIBUTION_TIME_SPLIT_TIME) ? wrapDistributionTime(theDayAfterTomorrow) : wrapDistributionTime(tomorrow); } private Date wrapDistributionTime(Date distributionTime){ DateTime currentDistributionDateTime = new DateTime(distributionTime); DateTime plusOneDay = currentDistributionDateTime.plusDays(1); boolean isSunday = (DateTimeConstants.SUNDAY == currentDistributionDateTime.getDayOfWeek()); return isSunday ? plusOneDay.toDate() : currentDistributionDateTime.toDate() ; } 复制代码 读这段代码的时候,你会发现,我将判断和有可能出现的不同结果都当做一个变量,最终做一个三目运算符的方式进行返回,这样的优雅和可读性显而易见,当然这样的代码不是一蹴而就的,我优化了 3 遍产生的以上代码。读者可根据自己的代码和我写的代码进行对比。 提高方法 如果你做了 3 年+的程序员,我相信像如上这样的需求,你很轻松就能完成,但是如果你想做一个会写 Java 的程序员,就好好的思考和重构代码吧。 写代码就如同写字一样,同样的字,大家都会写,但是写出来是否好看就不一定了。如果想把程序写好,就要不断的思考和重构,敢于尝试,敢于创新,不要因循守旧,一定要做一个优秀的 Java 程序员。 提高代码水平最好的方法就是有条理的重构!(注意:是有条理的重构) 设计模式 设计模式就是工具,而不是提现你是否是高水平程序员的一个指标。 我经常会看到某一个程序员兴奋的大喊,哪个程序哪个点我用到了设计模式,写的多么多么优秀,多么多么好。我仔细去翻阅的时候,却发现有很多是过度设计的。 业务驱动技术 or 技术驱动业务 业务驱动技术 or 技术驱动业务 ? 其实这是一个一直在争论的话题,但是很多人不这么认为,我觉得就是大家不愿意承认罢了。我来和大家大概分析一下作为一个 Java 程序员,我们应该如何判断自己所处于的位置. 业务驱动技术:如果你所在的项目是一个收益很小或者甚至没有收益的项目,请不要搞其他创新的东西,不要驱动业务要如何如何做,而是要熟知业务现在的痛点是什么?如何才能帮助业务盈利或者让项目更好,更顺利的进行。 技术驱动业务:如果你所在的项目是一个很牛的项目,比如淘宝这类的项目,我可以在满足业务需求的情况下,和业务沟通,使用什么样的技术能更好的帮助业务创造收益,比如说下单的时候要进队列,可能几分钟之后订单状态才能处理完成,但是会让用户有更流畅的体验,赚取更多的访问流量,那么我相信业务愿意被技术驱动,会同意订单的延迟问题,这样便是技术驱动业务。 我相信大部分人还都处于业务驱动技术的方向吧。 所以你既然不能驱动业务,那就请拥抱业务变化吧。 代码设计 一直在做 Java 后端的项目,经常会有一些变动,我相信大家也都遇到过。 比如当我们写一段代码的时候,我们考虑将需求映射成代码的状态模式,突然有一天,状态模式里边又添加了很多行为变化的东西,这时候你就挠头了,你硬生生的将状态模式中添加过多行为和变化。 慢慢的你会发现这些状态模式,其实更像是一簇算法,应该使用策略模式,这时你应该已经晕头转向了。 说了这么多,我的意思是,只要你觉得合理,就请将状态模式改为策略模式吧,所有的模式并不是凭空想象出来的,都是基于重构。 Java 编程中没有银弹,请拥抱业务变化,一直思考重构,你就有一个更好的代码设计! 你真的优秀吗? 真不好意思,我取了一个这么无聊的标题。 国外流行一种编程方式,叫做结对编程,我相信国内很多公司都没有这么做,我就不在讲述结对编程带来的好处了,其实就是一边 code review,一边互相提高的一个过程。既然做不到这个,那如何让自己活在自己的世界中不断提高呢? “平时开发的时候,做出的代码总认为是正确的,而且写法是完美的。”,我相信这是大部分人的心声,还回到刚刚的问题,如何在自己的世界中不断提高呢? 答案就是: • 多看成熟框架的源码 • 多回头看自己的代码 • 勤于重构 你真的优秀吗? 如果你每周都完成了学习源码,回头看自己代码,然后勤于重构,我认为你就真的很优秀了。 即使也许你只是刚刚入门,但是一直坚持,你就是一个真的会写java代码的程序员了。 关注下面的标签,发现更多相似文章 评论
__label__pos
0.942422
Sign up × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. For $K$ a compact Lie-group with maximal torus $T$, I'd like to know the cohomology $\text{H}^{\ast}(K/T)$ of the flag variety $K/T$. If I'm not mistaken, this should be isomorphic to the algebra of coinvariants of the associated root system, according to the "classical Borel picture", as it is often called in the literature (sadly, often without a reference). Unfortunately, Borels original paper is quite long and so I have the following Question: Does anybody know a short proof for the theorem? For example, can it be proved using a direct computation of the Lie-algebra cohomology $\text{H}^{\ast}({\mathfrak k},{\mathfrak t})$? As a side-question: Is it correct that for a complex semisimple Lie-group $G$ with Borel $B$, compact real form $K$ and maximal torus $T$ the map $K/T\to\ G/B$ is a homotopy equivalence? What is a good reference for these things? I'm sorry if this is too elementary for MO, but apart from Borel's original paper I couldn't find good sources. share|cite|improve this question 4 Answers 4 up vote 15 down vote accepted Borel's lengthy 1953 Annals paper is essentially his 1952 Paris thesis. It was followed by work of Bott, Samelson, Kostant, and others, which eventually answers your side question affirmatively. For a readable modern account in the setting of complex algebraic groups rather than compact groups, try to locate a copy of the lecture notes: MR649068 (83h:14045) 14M15 (14D25 20F38 57N99 57T15) Hiller, Howard, Geometry of Coxeter groups. Research Notes in Mathematics, 54. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1982. iv+213 pp. ISBN 0-273-08517-4. (This was based on his 1980 course at Yale. Eventually he left mathematics to work for Citibank.) The identification of the cohomology ring with the coinvariant algebra of the Weyl group has continued to be important for algebraic and geometric questions, for instance in the work of Beilinson-Ginzburg-Soergel. While Hiller's notes are not entirely self-contained, they are helpfully written. (But note that his short treatment of Coxeter groups has a major logical gap.) ADDED: In Hiller's notes, Chapter III (Geometry of Grassmannians) is most relevant. For connections with Lie algebra cohomology, the classical paper is: MR0142697 (26 #266) 22.60 (17.30) Kostant, Bertram, Lie algebra cohomology and generalized Schubert cells. Ann. of Math. (2) 77 1963 72–144. Nothing in this rich circle of ideas can be made quick and easy; a lot depends on what you already know. P.S. Keep in mind that Hiller tends to give explicit details just for the general linear group and grassmannians, but he also points out how the main results work in general, with references. I don't know a more modern textbook reference for this relatively old work. But the intuitive connection between the Borel picture and the Bott/Kostant cohomology picture is roughly this: The Lie subalgebra spanned by negative root vectors plays the role of tangent space to the flag manifold/variety. In the Lie algebra cohomology approach you get an explicit graded picture for each degree in terms of number of elements in the Weyl group of a fixed length, whereas the Borel description in terms of Weyl group coinvariants makes the algebra structure of cohomology more transparent. (What I don't know is whether a simpler proof of Borel's theorem can be derived using Lie algebra cohomology.) Concerning the relationship between $K/T$ and $G/B$, this goes back to the work around 1950 on topology of Lie groups (Iwasawa, Bott, Samelson): all the topology of a connected, simply connected Lie group comes from a maximal compact subgroup. So the two versions of the flag manifold are homeomorphic. In later times, emphasis has often shifted to treating $G$ as a complex algebraic group, so that $G/B$ is a projective variety. For me the literature is hard to compactify. One more reference, which treats the Borel theorem in a semi-expository style: MR1365844 (96j:57051) 57T10 Reeder, Mark (1-OK), On the cohomology of compact Lie groups. Enseign. Math. (2) 41 (1995), no. 3-4, 181–200. There is some online access here. share|cite|improve this answer      Thank you, these links were very helpful to me. –  B. Bischof Apr 17 '10 at 19:23 1   Another important paper in the direction of Kostant's paper is that by Bernstein, Gel'fand, Gel'fand, "Schubert cells and Cohomology of the spaces G/P" –  B. Bischof Apr 19 '10 at 14:37 Note that your statement is only true in rational cohomology. For example, $H^\ast(SO(5)/T)$ is not generated in degree $2$ (though it is rationally). The easiest proof I know starts from equivariant cohomology: $ H^\ast_T(K/T) = H^\ast_{T\times T}(K) = H^\ast_{T\times K\times T}(K\times K) = H^\ast_K(K/T \times K/T) $ So far this uses $H^\ast_F(X) = H^\ast(X/F)$ for free actions. Now use the equivariant Künneth formula: $... = H^\ast_K(K/T) \otimes_{H^\ast_K} H_K(K/T) = H^\ast_T \otimes_{H^\ast_K} H^\ast_T$ Rationally, the base ring $H^\ast_K$ is $(H^\ast_T)^W$, the invariants. Since you didn't want equivariant but ordinary cohomology, kill the left factor, leaving ${\mathbb Q} \otimes_{(H^\ast_T)^W} H^\ast_T$, which is your desired ring of coinvariants. (I'm having a bunch of trouble with $H$ vs. $H^\ast$ in typesetting here, sorry!) share|cite|improve this answer      Having had the same trouble myself, I fixed the typesetting by using \ast instead of * for the asterisks. (Backticks also work.) –  Tim Perutz Apr 17 '10 at 16:59 3   I am not sure I agree that equivariant cohomology is supreme as a candidate for the quickest way to go if one wants an algebro-topological proof. One can use the Eilenberg-Moore spectral sequence which has the form (with a field $k$ as coefficient ring say) $$ \mathrm{Tor}^*_{H^*(K)}(H^*(T)),k) \implies H^*(K/T)$$ and as $H^*(T)$ is free as $H^*(K)$-module (when $k$ has characteristic $0$) this degenerates to give what one wants. In any case if either of these proofs qualifies as "short" depends on whether your definition of length is recursive or not.... –  Torsten Ekedahl Apr 17 '10 at 18:10      Comment to my own comment: When I wrote $H^*(K)$ and $H^*(T)$ I meant $H^*(BK)$ and $H^*(BT)$. –  Torsten Ekedahl Apr 19 '10 at 5:26 1   Maybe even faster: $H_K(K/T)=H_T$, so $H(K/T)=\mathbb Q \otimes_{H_K} H_T=\mathbb Q \otimes_{H_T^W} H_T$ the ring of coinvariants. –  Jan Weidner Oct 25 '11 at 12:06      Concerning the side question, the map K/T → G/B is actually an isomorphism of real manifolds, not just a homotopy equivalence. Not sure about the references, but this is essentially textbook material (unfortunately I forgot where I learned about this). Considering the case of U(n) ⊂ GL(n) acting on the flags in Cn is instructive. share|cite|improve this answer 1   Oddly, in the infinite-dimensional case the most relevant map is the other way, and is only a homotopy equivalence, as one can read about in [Pressley-Segal] "Loop Groups". –  Allen Knutson Apr 19 '10 at 1:11 1   One way of getting this isomorphism is that it is a special case of the Iwasawa decomposition for a general semi-simple (real) Lie group. This allows us to get many specific references, Helgason for instance. –  Torsten Ekedahl Apr 19 '10 at 5:34      Oh, so this is called the Iwasawa decomposition. I used to know this, but it was a long time ago. –  Leonid Positselski Apr 19 '10 at 20:03 I may be too late to this particular party, but I thought it should be stated that the result is originally due to Leray, not Borel. For a timeline, due to Borel, of Leray's results in this vein, see "Jean Leray and Algebraic Topology." J. Leray, Selected Papers, Oeuvres Scientifiques 1: 1-21. Relevant works by Leray are Détermination, dans les cas non exceptionnels, de l’anneau de cohomologie de l’espace homogène quotient d'un groupe de Lie compact par un sous-groupe de même rang. C. R. Acad. Sci., Paris, Sér. I 228, pp. 1902-1904; Sur l’homologie des groupes de Lie, des espaces homogènes et des espaces fibrés principaux. Colloque de Topologie du C.B.R.M., Bruxelles. Masson, Paris 1950, pp. 101-115. A fairly short proof in modern notation of this result, close in spirit to the techniques of Borel's thesis, was once installed on Wikipedia by this author: https://en.wikipedia.org/wiki/Generalized_flag_variety#Cohomology. share|cite|improve this answer 1   I'm not sure precisely what should be attributed to Leray, though it should be noted that in his 1952 thesis (with Leray on the examining committee) Borel does discuss the foundational contributions of Leray and others while emphasizing that his own viewpoint is different in crucial ways. It might be useful to sort out the history more carefully than I have, but Borel's formulation seems to have had more widespread influence in later work on compact Lie groups and homogeneous spaces. –  Jim Humphreys Jul 2 at 0:49      I find Borel's viewpoint much more helpful, and much of Leray's output on these matters in CR notices without proofs. I get the impression that it's mostly due to the work of Koszul, Cartan, and Borel that Leray's ideas found the purchase they did. That said, Borel agrees that this particular result predates his proof of it. –  jdc Jul 2 at 5:23      For example, Thm. 2.1(a,b) in the Colloque paper is "a) Notons $P_G$ la sous-algèbre de $P_T$ [this is the algebra $\mathbb{R}[\mathfrak{t}]$ of polynomial functions on the maximal torus] que constituent ses éléments invariants par $\Phi$ [this is the coadjoint action of the Weyl group]; il existe un sous-espace vectoriel $Q_G$ de $P_G$ qui a la dimension $l$ [this is the rank of $T$] et qui engendre les éléments de degrés $> 0$ de $P_G$ (C. Chevalley). –  jdc Jul 2 at 5:30      "b) Soit $R_G$ le plus petit idéal de $P_T$ contenant $Q_G$; on a $H_{G/T} = P_T / R_G$; la représentation linéaire du $\Phi$ que constitue l'espace vectoriel $H_{G/T} = P_T/R_G$ est équivalente à l'algèbra du groupe $\Phi$." –  jdc Jul 2 at 5:32 Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.553357
Trim whitespace (spaces) from a string From CodeCodex Related content: Implementations[edit] Common Lisp[edit] Common lisp has a built-in string-trim function. In fact, you can get rid of arbitrary characters. The first parameter is the character you want to get rid of from the right and left of the string, the second parameter is the string. (string-trim " " " trim me ") ;;output "trim me" Unsurprisingly, if you want to just get rid of characters on just one end of the string, use string-right-trim or string-left-trim. Erlang[edit] string:strip(" 123 xyz ").  % "123 xyz" string:strip(" 123 xyz ", left).  % "123 xyz " string:strip(" 123 xyz ", right).  % " 123 xyz" string:strip("...123 xyz..", both, $.). % "123 xyz" Java[edit] The String class in Java has a built in trim function: String s = " xyz ".trim(); System.out.println(s); // prints out "xyz"; According to Java document: If this String object represents an empty character sequence, or the first and last characters of character sequence represented by this String object both have codes greater than '\u0020' (the space character), then a reference to this String object is returned. If you want to trim an empty character sequence " " to "", trim() works (in this case, trim will return the empty string). JavaScript[edit] Trim is not built into JavaScript, but you can use prototyping to add this to your code: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } Then you can use this code like this example: var input = " xyz "; var output = input.trim(); // output = "xyz" OCaml[edit] let rec trim s = let l = String.length s in if l=0 then s else if s.[0]=' ' || s.[0]='\t' || s.[0]='\n' || s.[0]='\r' then trim (String.sub s 1 (l-1)) else if s.[l-1]=' ' || s.[l-1]='\t' || s.[l-1]='\n' || s.[l-1]='\r' then trim (String.sub s 0 (l-1)) else s another solution: let trim str = if str = "" then "" else let search_pos init p next = let rec search i = if p i then raise(Failure "empty") else match str.[i] with | ' ' | '\n' | '\r' | '\t' -> search (next i) | _ -> i in search init in let len = String.length str in try let left = search_pos 0 (fun i -> i >= len) (succ) and right = search_pos (len - 1) (fun i -> i < 0) (pred) in String.sub str left (right - left + 1) with | Failure "empty" -> "" ;; Perl[edit] s{\A\s*|\s*\z}{}gmsx; # remove leading and trailing whitespace PHP[edit] $string = trim($string); Python[edit] Python strings have the strip(), lstrip(), rstrip() methods for removing any character from both ends of a string. If the characters to be removed are not specified then whitespace will be removed. strip() removes from both ends; lstrip() removes leading characters (Left-strip); rstrip() removes trailing characters (Right-strip). " xyz ".strip() # returns "xyz" " xyz ".lstrip() # returns "xyz " " xyz ".rstrip() # returns " xyz" " x y z ".replace(' ', '') # returns "xyz" Ruby[edit] This is very similar to the python version above. " test 1 ".strip #=> "test 1" " test 1 ".lstrip #=> "test 1 " " test 1 ".rstrip #=> " test 1" " test 1 ".gsub(" ","") #=> "test1" Seed7[edit] The trim function removes leading and trailing spaces and control chars: writeln(trim(" xyz ")); # writes the string "xyz"; Visual Basic[edit] To remove leading and/or trailing spaces from a string: sString = LTrim(sString) ' Remove leading spaces' sString = RTrim(sString) ' Remove trailing spaces' sString = Trim(sString) ' Remove leading and trailing spaces' To remove all spaces from a string: sString = Replace(sString, " ", "") Zsh[edit] string=" 123 abc xyz " ${string// /} # output: 123abcxyz
__label__pos
0.678958
2 $\begingroup$ Let $X$ be a variety (i.e. a reduced scheme of finite type over a field) and let $G$ be an abstract group, finitely generated, acting of $X$ algebraically freely. The example I have in mind is $\mathbb Z$ acting by shifts on $\mathbb A^1$. The quotient in this example clearly does not exist as a noetherian scheme since the fibres are discrete infinite. Is it possible to still make sense of the quotient algebraically? I guess the best way to put it formally is: Is it possible to put the category of varieties over the field k into a bigger "algebraic" category such that the functor of points of the quotient (i.e. orbits of the values of the usual functior of points) is always representable? What I mean by "algebraic" is that I am definitely not interested in a quotient in the sense of complex geometry. Is it possible to achieve this goal by considering a suitable category of non-Noetherian schemes? $\endgroup$ • $\begingroup$ If $k$ has characteristic $0$, then $\mathbb{A}^1_k / \mathbb{Z}$ exists and is just $\mathrm{Spec}(k)$. $\endgroup$ – Martin Brandenburg Nov 26 '12 at 2:16 • $\begingroup$ is it in the sense I've mentioned? it doesn't represent the functor $F(T)$={$\mathbb Z$-orbits of T-points of $\mathbb A^1$}. by the way in characteristic $p>0$ the quotient exits and is just $\mathbb A^1$ $\endgroup$ – Dima Sustretov Nov 26 '12 at 3:26 • $\begingroup$ The functor of points of the quotient is not "points invariant under the action," it's "orbits under the G-action". Of course this cannot be representable in a "geometric" setting, since it's not a sheaf in e.g. the etale topology. Its etale sheafification is representable by a (non-separated) algebraic space, however. $\endgroup$ – Daniel Litt Nov 26 '12 at 3:54 • $\begingroup$ Daniel, yes, my mistake, I should have said "orbits". What is this algebraic space? I.e. if we regard an algebraic space as a quotient of a scheme by an étale equivalence relation, what the scheme and the equivalence relation it would be? $\endgroup$ – Dima Sustretov Nov 26 '12 at 8:52 • $\begingroup$ @ Dima: I don't think you really mean "transitively"! $\endgroup$ – Laurent Moret-Bailly Nov 26 '12 at 20:18 1 $\begingroup$ The quotient exists as an algebraic space (if we ignore the literature that defines algebraic spaces to be separated). The equivalence relation is given by the standard action groupoid. This is étale, since both the projection and the action map from $X \times G$ to $X$ are locally finitely presented and formally étale (indeed, they are locally isomorphisms). I'm not sure what you are looking for by passing to non-Notherian schemes, beyond allowing the "morphism space" $X \times G$ to be used. | cite | improve this answer | | $\endgroup$ • $\begingroup$ shouldn't the equivalenc relation be a closed subscheme of $X \times X$? or is it something that we relax when we do not require the algebraic space to be separated? $\endgroup$ – Dima Sustretov Nov 26 '12 at 17:27 • $\begingroup$ The equivalence relation is a monomorphism of schemes $R\to X\times X$. In your case, $R$ would be $G\times X$. Separated algebraic spaces correspond to the case where $R\to X\times X$ is a closed immersion. In your case, the morphism $R\to X\times X$ is not even quasicompact (as soon as $G$ is infinite), so the quotient is not quasiseparated and therefore does not qualify as an "algebraic space" for many authors. I am not quite sure this is justified, but on the other hand such spaces have not really proved useful yet. $\endgroup$ – Laurent Moret-Bailly Nov 26 '12 at 20:30 • $\begingroup$ The reason I am asking this question is beause I have been told that sometimes passing to formal schemes allows taking quotients that would otherwise not exist (as a scheme), the exemplary construction being the Tate curve. But perhaps this is a wrong analogy. $\endgroup$ – Dima Sustretov Nov 26 '12 at 21:17 • $\begingroup$ Formal schemes allow you to consider uniformization of curves that lie over a formal neighborhood of the maximally degenerate locus (after Mumford), but they form a category that is different from non-Noetherian schemes, since coordinate rings become topologized. If you are looking for a world where you can take sheaf quotients of arbitrary free actions of group schemes or group sheaves, you could just consider the category of sheaves itself. $\endgroup$ – S. Carnahan Nov 26 '12 at 22:11 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.667795
1 $\begingroup$ Let $G$ be a finite, simple, loopless graph with $|V(G)|=n$. We define its edge density as $$ed(G) := \frac{|E(G)|}{n \choose 2}.$$ Moreover we set $$d_n := \text{max}\big\{ed(G): G \text{ is a triangle-free graph with } V(G) = \{1,\ldots, n\} \big\}.$$ Does $\lim_{n\to\infty}d_n$ exist and if yes, what is its value? If not, what is $\lim \text{sup}_{n\to\infty}d_n$? (I would expect this value to be around $2/3$ but I really don't have any idea.) $\endgroup$ 6 $\begingroup$ Turan's theorem says that this limit exists and is equal to $\frac{1}{2}$. The graphs that achieve this are the complete bipartite graphs. Moreover, the theorem gives the answer for the more general question of $K_{r+1}$-free graphs, where maximum edge density comes from complete $r$-partite graphs. $\endgroup$ Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.989534
Location: PHPKode > scripts > TheCartPress > thecartpress/widgets/CheckoutWidget.class.php <?php /** * This file is part of TheCartPress. * * TheCartPress is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * TheCartPress is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with TheCartPress. If not, see <http://www.gnu.org/licenses/>. */ class CheckoutWidget extends WP_Widget { function CheckoutWidget() { $widget_settings = array( 'classname' => 'checkout', 'description' => __( 'Allow to view the Checkout info (for debugging purpose)', 'tcp' ), ); $control_settings = array( 'width' => 300, 'id_base' => 'checkout-widget' ); $this->WP_Widget( 'checkout-widget', 'TCP Checkout', $widget_settings, $control_settings ); } function widget( $args, $instance ) { if ( is_page( get_option( 'tcp_checkout_page_id' ) ) ) { extract( $args ); $title = apply_filters( 'widget_title', $instance['title'] ); echo $before_widget; if ( isset( $_SESSION['tcp_checkout'] ) ) { if ( $title ) echo $before_title, $title, $after_title; ?><pre><?php var_dump( $_SESSION['tcp_checkout'] );?></pre><?php } echo $after_widget; } } function update( $new_instance, $old_instance ) { $instance = $old_instance; $instance['title'] = strip_tags( $new_instance['title'] ); return $instance; } function form( $instance ) { $defaults = array( 'title' => 'Checkout', ); $instance = wp_parse_args( ( array ) $instance, $defaults );?> <p> <label for="<?php echo $this->get_field_id( 'title' ); ?>"><?php _e( 'Title', 'tcp' )?>:</label> <input class="widefat" id="<?php echo $this->get_field_id( 'title' ); ?>" name="<?php echo $this->get_field_name( 'title' ); ?>" type="text" value="<?php echo esc_attr( $instance['title'] ); ?>" /> </p><?php } } ?> Return current item: TheCartPress
__label__pos
0.862391
Create A Fancy Sliding Menu With jQuery Advertisement One of the great advantages of creating interactive websites is being able to dynamically hide and reveal parts of your content. Not only does it make for a more interesting user experience, but it allows you to stuff more onto a single page than would otherwise be possible, but in a very elegant, non-obtrusive way, and without overwhelming the user with too much information at once. In this tutorial, we’ll create a sliding menu using the jQuery framework. You will find the downloadable source files at the end of the tutorial if you wish to use them on your website. But the main goal of this article is to show you some basic techniques for creating these kinds of effects and to provide you with the tools you need to realize your own creative ideas. This tutorial is aimed at beginner jQuery developers and those just getting into client-side scripting. You’ll learn how to progressively build this simple effect from scratch. If all you want is a fancy effect on your website, you can simply use the accordion plug-in1, which implements the same basic thing and allows even more control over it. On the other hand, if you want to get your hands dirty and find out how a system like this works, so that you can develop your own ideas later, this tutorial is for you. Also, in the next part of this series, we’ll take a look at how to enhance this basic sliding menu with various effects and animations to make it more interesting. Why jQuery? When creating any kind of Web application, especially one that contains animation effects and various elements that are implemented differently in various browsers, using a framework that takes care of low-level implementation and lets you focus on the high-level code logic is always a good idea. So, while a JavaScript framework can save you time by simplifying specific commands and letting you type less, the real benefit comes from guaranteed cross-browser compatibility, ensuring that your application works the same everywhere without any additional effort on your part. We chose jQuery, because it’s one of the most popular frameworks out there, with a fairly extensive and easy-to-use (not to mention well-documented) API2. However, you could very likely implement the same techniques demonstrated here in any framework of your choice. Before We Start Before writing a single line of code, always consider how you’re going to integrate the JavaScript code into your HTML, how users will interact with the interface and how the overall solution will affect the user experience. With a menu, for example, you have to consider whether your content is dynamically generated or static. With dynamic content, a menu that animates on mouse click would work perfectly fine; but it wouldn’t look so fancy for static content, where the page has to reload every time the user clicks on a menu item. When should you play the animation then? Before or after the page reloads? The quality and speed of the effect depends on many factors, including the user’s computer, whether the website’s content has been cached, how much content you want to display and so on. You have to consider every possibility of your specific situation to get the best out of it. There’s no golden rule here. For the sake of a simpler demonstration, we’ve decided to focus mainly on the JavaScript code, but we’ll offer some possible solutions at the end of the article. The Basics: HTML And CSS Let’s get started already! We have to build a solid foundation for our JavaScript code first. Never forget that although JavaScript is used almost everywhere nowadays, some users (and search engines, of course) still do not have it enabled. So we have to make sure that everything works fine and looks good even without the fancy effects. <div class="menu"> <div class="menu_button"><img src="button_1.png" alt="" /></div> <div class="menu_slider"><img src="img_1.png" alt="" /></div> <div class="menu_button"><img src="button_2.png" alt="" /></div> <div class="menu_slider"><img src="img_2.png" alt="" /></div> <div class="menu_button"><img src="button_3.png" alt="" /></div> <div class="menu_slider"><img src="img_3.png" alt="" /></div> <div class="menu_button"><img src="button_4.png" alt="" /></div> <div class="menu_slider"><img src="img_4.png" alt="" /></div> <div class="menu_button"><img src="button_5.png" alt="" /></div> <div class="menu_slider"><img src="img_5.png" alt="" /></div> </div> .menu { width : 100%; } .menu .menu_button, .menu .menu_slider { margin : 0; float : left; height : 158px; } .menu .menu_button { width : 33px; cursor : pointer; } .menu .menu_slider { /* Hide the contents by default */ width : 0px; overflow : hidden; } .menu .menu_button:hover { background : #ddd; } The above code generates a simple menu bar that consists of 12 container (div) elements, each containing a single image, alternating the menu button and the menu slider images. We could have used only img elements and simply put them one after another, but valid XHTML requires that we wrap them in another element, so we did that using the div containers. This also allows you to substitute the images with any other element later (see an example near the end of this tutorial), because we’ll be concerned only with the containers. We set the menu_slider class container width to 0px, so they will be invisible by default: and we’ll hide and reveal them dynamically with JavaScript. We use float: left here to keep the div elements on the same line. Also, note that I omitted the width, height, and alt attributes in the img tags for readability (well, they’re there, but empty), but you should always include at least the alt attribute on your website for valid HTML/XHTML. The last line might not be so obvious. We set the background color for hover to #ddd. This ensures that a different color for button titles appears whenever the user mouses over one. If our menu were vertical, we would simply type color: #ddd, but because it’s horizontal and our letters are on a 90°-degree angle, we have to get a little tricky. So, we use transparent PNGs as menu buttons, and we sort of cut out the text from the button by leaving the letters fully transparent. With this method, we can control the color of the text simply by changing the background, which will show through the transparent area. The overflow: hidden line is not necessary when working with images, but it will come in handy if we want to use other slide effects instead (see later in the tutorial). Here’s what our menu currently looks like. (Hover over the items to see the gray background behind the buttons.) jQuery Magic Now for the fun part. Let’s start by binding a slide function to the mouse-click event on each menu button. Remember that each menu_slider‘s width is currently 0px. What slide() does is animate the width of the container right next to the clicked button to make it go from 0px to a specified width, thus creating a slide effect. We use the $(this) expression to convert the clicked button to a jQuery object right away; so we can use jQuery’s next() method to get the div right next to the button. This will be the corresponding menu_slider, which we can now pass on to the slide() function to animate it. From now on, whenever we post a code snippet, the green comments will indicate new or important parts or provide additional explanation. function slide ( menu_slider ) // Function to render the animation. { menu_slider.animate ( { 'width' : '253' }, // The first parameter is a list of CSS attributes that we want to change during the animation. 1000 // The next parameter is the duration of the animation. ); } $(".menu .menu_button").bind ( "click", function ( event ) // We're binding the effect to the click event on any menu_button container. { // Get the item next to the button var menu_slider = $(this).next(); // We convert it to a jQuery object: $(HTMLElement) // Do the animation slide ( menu_slider ); } As you can see, a click event starts the whole process. First, we store the element next to the button (i.e. the corresponding image) in the variable menu_slider. Then we pass it to the slide function. Finally, the slide function uses jQuery’s animate method to create the effect. Its first parameter is a list of CSS attributes that we want to change (in this case, only the width of the image, to 253px). The second parameter is the duration of the animation in milliseconds. We’ve set it to 1 second. Give it a try: You can see that it’s working, though far from perfect. Specifying the height of the images beforehand (as we did in the CSS) is very important, otherwise the height will grow proportionally to the width, resulting in a different effect. Currently, you can still click on each menu item and slide out the corresponding image, which is not what we want, so we’ll fix that now. We just need to introduce a new variable called active_menu that stores the currently active menu_button object, and we’ll also modify the slide function to accept another parameter, which will specify the direction of the slide or, to be more precise, the width property of the animation. So if we pass a parameter greater than 0, it will slide out, and if we pass 0, it will slide back in. // Global variables var active_menu; // We introduce this variable to hold the currently active (i.e. open) element. function slide ( menu_slider, width ) { menu_slider.animate ( { 'width' : width }, // We replaced the specific value with the second parameter. 1000 ); } $(".menu .menu_button").bind ( "click", function ( event ) { // Get the item next to the button. var menu_slider = $(this).next(); // First slide in the active_menu. slide ( active_menu, 0 ); // Then slide out the clicked menu. slide ( menu_slider, 253 ); } // We also slide out the first panel by default and thus set the active_menu variable. active_menu = $($( ".menu_slider" )[0]); // Notice we've converted it to a jQuery object again. slide ( active_menu, 253 ); One thing to keep in mind is that every jQuery object behaves kind of like an array, even if it contains only one element. So to get the DOM object it’s referring to (in our case, the img element), we have to access the array and get the first element from it. We did that with the ($( ".menu_slider" )[0] expression, which selects the very first DOM element of the “menu_slider” class, but you can use the alternative get method as well: $(".menu_slider").get(0). If you refresh the page here, and if you have a browser that automatically jumps to the last read section (or if you scroll fast enough), you can see this menu unfold after the page loads. A Few Fixes All right, finally, our script is working the way we want, except for some glitches, which we’ll address now. They’re not fatal errors. In fact, you may even want to leave them as they are; but if you don’t, here’s a way to resolve them. Forbidding Multiple Open Panels If you’ve played around with the demo above, you probably noticed that if you click more than one panel within 1 second, more than one animation can run at the same time, sometimes making the menu wider than it is supposed to be. To resolve this issue, we can simply introduce another variable that determines whether an animation is playing or not. We’ll call it is_animation_running. We set this variable to true as soon as the slide effect begins, and we set it back to false when the animation finishes. To accomplish this, we use the animation function’s another parameter, which specifies another function to run right after the animation finishes. This is important, because if you just set is_animation_running to false after the animation function, nothing would happen, because it would execute almost instantly, when the sliding has just begun. By using this third parameter, we ensure that the variable will be set to false at exactly the right time, regardless of the effect’s duration. Then we simply allow our application to run only if is_animation_running is false (i.e. when no other panel is sliding at the moment). var active_menu; var is_animation_running = false; // New variable. function slide ( menu_slider, width ) { is_animation_running = true; // Indicates that an animation has started. menu_slider.animate ( { 'width' : width }, 1000, // <- Notice the column! function () // This third parameter is the key here. { is_animation_running = false; // We set is_animation_running to false after the animation finishes. } ); } $(".menu .menu_button").bind ( "click", function ( event ) { // First check if animation is running. If it is, we don't do anything. if ( is_animation_running ) { return; // Interrupt execution. } var menu_slider = $(this).next(); slide ( active_menu, 0 ); slide ( menu_slider, 253 ); } active_menu = $($( ".menu .menu_slider" )[0]); slide ( active_menu, 253 ); Now, if you click multiple buttons, only one animation will run at a time. The Self-Collapse Glitch You may have also noticed what happens when you click on the currently active button. It slides in and then out again. If that’s cool with you, then fine, but perhaps you want to fix the active item’s width. To do this, we just add a little check. Whenever a menu button is clicked, we check if the container right next to it is the active_menu or not. (Remember, the active_menu variable stores the div container that is currently slid out.) If it is, we do nothing; otherwise, we play the animation. Easy as pie! But remember we said that every jQuery object behaves like an array? In fact, because it’s just a collection of DOM elements, there’s really no good way to compare two such objects. Thus, we’ll access the DOM elements directly to see if they’re the same or not (i.e. active_menu[0] and $(this).next()[0]). var active_menu; var is_animation_running = false; function slide ( menu_slider, width ) { is_animation_running = true; menu_slider.animate ( { 'width' : width }, 1000, function () { is_animation_running = false; } ); } $(".menu .menu_button").bind ( "click", function ( event ) { // First, check if the active_menu button was clicked. If so, we do nothing ( return ). if ( $(this).next()[0] == active_menu[0] ) // Here we make the check. { return; } if ( is_animation_running ) { return; } var menu_slider = $(this).next(); slide ( active_menu, 0 ); active_menu = menu_slider; // Set active menu for next check. slide ( active_menu, 253 ); // Now we can pass active_menu if we want. } active_menu = $($( ".menu .menu_slider" )[0]); slide ( active_menu, 253 ); Our menu works perfectly now. Try it: click on a button twice. Nothing should happen on your second click. Cleaning Up All right, we’re almost there. If you put the code on a website right now, it will most likely work just fine. But to make sure everything runs smoothly, let’s get rid of those global variables. Hiding these inside a class is always a good idea, so that they don’t collide with other JavaScript code. This is especially important if you’ve added other JavaScript snippets to your page from different sources. Imagine two coders gave the same name to one of their global variables. Whenever you interacted with one, you’d automatically affect the other. It would be a mess! So now we’ll create a SlidingMenu class and use active_menu and is_animation_running as its variables. This approach also allows you to use the sliding menu more than once on the page. All you need to do is create a new instance of SlidingMenu for every animated menu. And while we’re at it, we may as well make the slider function belong to SlidingMenu, so that it can access and directly modify its variables if needed. function SlidingMenu () { // Let's make these class variables so that other functions (i.e. slide) can access them. this.is_animation_running = false; this.active_menu = $($( ".menu .menu_slider" )[0]); // We do the bindings on object creation. var self = this; $( ".menu .menu_button" ).bind( "click", self, on_click ); // Menu button click binding. // Do the slide. this.slide ( 253 ); } SlidingMenu.prototype.slide = slide; function slide ( width ) { this.is_animation_running = true; var self = this; this.active_menu.animate ( { 'width' : width }, // We replaced the specific value with the second parameter. 1000, function () { self.is_animation_running = false; // We set is_animation_running false after the animation finishes. } ); } function on_click ( event ) { // Notice we access the SlidingMenu instance through event.data! if ( $(this).next()[0] == event.data.active_menu[0] ) { return; } if ( event.data.is_animation_running ) { return; } // Get the item next to the button. var menu_slider = $(this).next(); // First slide in the active_menu. event.data.slide ( 0 ); // Set the active menu to the current image. event.data.active_menu = menu_slider; // Then slide out the clicked menu. event.data.slide ( 253 ); } var sl = new SlidingMenu(); // We pass the three parameters when creating an instance of the menu. The code above needs some explanation. There are three important blocks, so let’s look at them one by one. The SlidingMenu Class function SlidingMenu () // Our new class. { // Let's make these class variables so that other functions (i.e. slide) can access them. this.is_animation_running = false; this.active_menu = $($( ".menu .menu_slider" )[0]); // We do the bindings on object creation. var self = this; $( ".menu .menu_button" ).bind( "click", self, on_click ); // Menu button click binding. // Do the slide. this.slide ( 253 ); } JavaScript, unlike many other programming languages, doesn’t have a class keyword for creating classes. But we can simply create objects that have their own variables and methods by creating a regular JavaScript object. So, we basically define a function here, SlidingMenu, and put inside this function all of the stuff that we would put in a regular class constructor in other languages. We first define the same two variables that we used earlier, is_animation_running and active_menu. With the this keyword, we ensure that they belong to the particular class instance. The next part may not be obvious at first: var self = this; $( ".menu .menu_button" ).bind( "click", self, on_click ); To understand this, we have to talk a bit about how jQuery handles events. jQuery Event Handling 101 (at Least What We Need to Know Here) In short, jQuery uses the bind method to add event listeners to DOM elements. (You could alternatively use the live method, which would update event listeners if new DOM elements are added to the document.) The basic usage of bind requires two parameters: an event type (e.g. click, mouseover) and a function (i.e. callback function) that executes when the given event type occurs on the DOM element. And this is where the this keyword comes into play, because in the callback function we often want to reference the DOM object on which the event happened. And jQuery makes it very convenient to do so; we just need to use this. For example, let’s say we want to change an image element to another image when the user clicks on it. To do so, we can write something like this: $("#example_img").bind ( "click", change_image ); function change_image ( event ) { this.src = "images/another_img.png"; } In the example above, we use the this keyword to reference the DOM object. It’s very convenient for simple applications, but for more complicated ones, you may run into problems. As in the example, we want to access the SlidingMenu instance’s variables somehow. But because the this keyword is already used to reference the DOM object, we have to find another way. Fortunately, jQuery allows us to do this fairly easily. To do it, we can pass another parameter to the bind function, which will be placed right between the event type and the callback function, and this parameter has to be an object. You probably noticed the event parameter in the change_image function above. To every callback function is automatically passed an event parameter that contains a handful of useful information, including which element was clicked. And with the extended call of the bind function, we can pass the SlidingMenu instance object through the event parameter as well. We can then access this object through event.data. Here’s a basic example: function ImageData () // This will be an object that contains information about an image, much like our SlidingMenu class contains information about the sliding menu. { this.width = "500"; this.height = "200"; this.src = "images/example_image.png"; } // We create an instance of ImageData. var image_instance = new ImageData(); // We bind the change_image function to the click event, passing along the image_instance data object as well. $("#example_image").bind ( "click", image_instance, change_image ); // The callback function. function change_image ( event ) { this.src = event.data.width; // event.data refers to the image_instance object this.src = event.data.height; this.src = event.data.src; } This example illustrates well how we can access both the DOM element on which the event occurred and the data object we passed through. We access the former through the this keyword, and we access the latter through event.data. Now it finally makes sense why we used this extended version of the function call when binding the click event to the buttons. And because this will always refer to the DOM element in this context, we used the self variable as a substitute, to pass the SlidingMenu instance to the callback function. Here it is again: var self = this; $( ".menu .menu_button" ).bind( "click", self, on_click ); Moving Along The last part in our class definition simply slides out the first panel using its slide method. But because we haven’t defined such a function yet, the line below the class definition also becomes important: SlidingMenu.prototype.slide = slide; We use JavaScript’s prototype mechanism to extend our SlidingMenu object with the slide method. This approach has two main benefits. First, the slider function can now access the variables of any class instance directly using the this keyword. (Because no event handling is involved directly in this function, this now refers to the SlidingMenu instance. You’ll see with on_click that we will need to access it through event.data). Secondly, using prototype instead of directly writing this method inside the class improves memory usage if we make more than one instance of SlidingMenu, because we don’t have to create the slide functions every time we create a new instance: they’ll always use the external function. The Slide Function As we’ve discussed, slide is responsible for sliding the panels in and out. It will be called from the on_click function (see below) and uses the same parameters as before. function slide ( width ) { this.is_animation_running = true; var self = this; this.active_menu.animate ( { 'width' : width }, this.effect_duration, function () { self.is_animation_running = false; } ); } You can see we put this before every variable, and it now refers to the class instance’s variables. This way, we don’t have to pass the variables as function parameters to access or even modify them, and no matter how many instances we create of SlidingMenu, they’ll always refer to the correct variables. You may have noticed that we introduced a variable called self. We basically did this for the same reason we did it in our class definition: because jQuery handles this last parameter similar to event handling. If we used this instead, it would refer to the DOM object. Try it out: it won’t work properly. By introducing the self variable, the animations run as expected. The last thing worth mentioning is that we replaced the menu_slider parameter with the class instance’s active_menu variable. So from now on, we can just set this variable from anywhere and it will animate the current active_menu automatically. It’s just for convenience: one less parameter to pass. The on_click Function Finally, let’s look at the on_click function. Here we put all the code that describes what happens after the user clicks a menu_button. It performs the same checks as before and uses the slide function to hide and reveal menu objects. This method accesses the class variables through the event.data that we passed along in our class definition. You can also see that we pass only one parameter to our modified slide function (the desired width of the element), so it knows whether to slide it in or out; but the element that needs to be animated will be accessed directly through the active_menu variable. function on_click ( event ) { // First check if the active_menu button was clicked. If so, we do nothing. ( return ) if ( $(this).next()[0] == event.data.active_menu[0] ) // Remember, active_menu refers to the image ( thus next() ). { return; } // Check if animation is running. If it is, we interrupt. if ( event.data.is_animation_running ) { return; } // Get the item next to the button. var menu_slider = $(this).next(); // First slide in the active_menu. event.data.slide ( 0 ); // Set the active menu to the current image. event.data.active_menu = menu_slider; // Then slide out the clicked menu. event.data.slide ( 253 ); } Customization By now, our sliding menu should work exactly as we want, and we don’t have to worry about it interfering with other JavaScript code. One last thing to do before we wrap it up is to make the SlidingMenu class a bit more flexible, because it’s way too rigid. As it is now: • It works only with a container with the class name menu; • It works only with menu images that are 253 pixels long; • It works only when the animation duration is set to 1 second. To fix these, we’ll pass three more variables to the SlidingMenu constructor: container_name will contain the class (or id, see below) of the menu container div; menu_slider_length will specify the width of the images we slide out; and duration will set the animation’s length in milliseconds. function SlidingMenu ( container_name, menu_slider_width, duration ) // Note the new parameters. { var self = this; $( container_name + " .menu_button" ).bind ( "click", self, on_click ); // We bind to the specified element. this.effect_duration = duration; // New variable 1. this.menu_slider_width = menu_slider_width; // New variable 2. this.is_animation_running = false; this.active_menu = $($( container_name + " .menu_slider" )[0]); this.slide ( this.menu_slider_width ); // We replaced 253 with the arbitrary variable. } SlidingMenu.prototype.slide = slide; // Function to render the animation. function slide ( width ) { this.is_animation_running = true; this.active_menu.animate ( { 'width' : width }, this.effect_duration, // Replace 1000 with the duration variable. function () { this.is_animation_running = false; } ); } function on_click ( event ) { if ( $(this).next()[0] == active_menu[0] ) { return; } if ( event.data.is_animation_running ) { return; } var menu_slider = $(this).next(); event.data.slide ( 0 ); this.active_menu = menu_slider; event.data.slide ( event.data.effect_duration ); // Slide with the specified amount. } var sl = new SlidingMenu( ".menu", 253, 1000 ); // We pass the three new parameters when creating an instance. We simply replaced the variables with the three new ones where needed. You can see that we could do a lot more customization here; for example, replacing not just the main container name (.menu) but the ones we’ve been calling .menu_button and menu_slider all along. But we’ll leave that up to you. You could even specify an id for the main container (i.e. #menu) if you’d like. Everything has become a bit friendlier and more flexible all of a sudden. In the demo below you can specify an arbitrary number for the duration of the animation (in milliseconds) and the width of the images. Play around with it: Duration: Width: Of course, changing the width of the images makes more sense when you use images on your website that are not exactly 253 pixels wide. Then you can simply call the SlidingMenu constructor with the correct width, and you’re set. Getting a Bit More Complex Another thing I mentioned at the beginning of this tutorial is that because we’re concerned only about the containers of the elements, menu_slider can be any element that has the class menu_slider. So, as a more complex example, menu_slider could be a div containing a list of sub-menu items: Looking much better, right? Of course, when using it for real, you would add a link to each of those list items, so that when the user clicks on it, it loads the corresponding page. By the way, if you don’t want the text to shrink with the width of the container (as in the example above), then add width: 253px; to your CSS file, where you substitute 253px with the desired width. Here is all of the additional CSS that I used for the demo above: .menu .menu_slider ul { position : relative; top : -100px; left : -35px; font-size : 12px; } .menu .menu_slider img { width : 253px; height : 158px; } .menu .menu_slider ul li { list-style : none; background : #fff; color : #333; cursor : pointer; } .menu .menu_slider p { width : 253px; margin-left : 5px; } The only thing worth mentioning here is the font size. Because we defined the menu’s width, height and pretty much everything else in pixels, also setting the font size to a particular number is better, so that it looks consistent across different browsers. In addition, you can see that on mouse-over the menu items become clearer and more defined. This is achieved by changing the opacity of the list elements on hover. Instead of using various CSS hacks for cross-browser compatibility, we’ll trust jQuery on this one: simply use the fadeTo method to fade in and out: $(".menu .menu_slider ul li").bind ( "mouseover", function () { $(this).fadeTo ( "fast", "1.0" ); } ); $(".menu .menu_slider ul li").bind ( "mouseout", function () { $(this).fadeTo ( "fast", "0.8" ); } ); // This line is used to make them fade out by default. $(".menu .menu_slider li").fadeTo ( "fast", 0.8 ); Wrapping Up Congratulations! You made it! By now you should have a working JavaScript sliding menu. More importantly, you should have some understanding of how it works and how to incorporate your own ideas into this model. Hopefully you’ve learned something useful in this tutorial. Now, to reap the rewards, all you have to do is grab all of this code we’ve written and load it on page load, jQuery-style: $(document).ready( function() { // All code goes here. }); To make it even easier, here are all the source files for both the simple and complex examples you saw above: Remember we talked about integrating your code in your website at the beginning of this tutorial? Well, now you can experiment all you want, see how it works and make the necessary adjustments. For example, if you have static content, you might want to change the trigger from click to mouseover, so that the menu begins sliding as soon as you mouse over it. This way the pages can be loaded when the user clicks on the images or buttons. Or you can play around with various highlighting solutions: maybe put a nice border around the images on mouse-over. It’s totally up to you. Have fun! What Next? Well, we can still do a lot here. We still haven’t covered optimization and customization. And of course, the animation is still just a simple slide; as I mentioned, you could achieve the same effect with the jQuery accordion plug-in. But this is where it gets interesting. Now that we have a solid foundation, we can think about some advanced effects and make our little application even easier to use. We’ll cover some of these and the aforementioned topics in the next part. (al) Footnotes 1. 1 http://docs.jquery.com/UI/Accordion 2. 2 http://api.jquery.com/ 3. 3 http://www.smashingmagazine.com/wp-content/uploads/2010/02/simple_jquery_sliding_menu_demo.zip 4. 4 http://www.smashingmagazine.com/wp-content/uploads/2010/02/complex_jquery_sliding_menu_demo.zip ↑ Back to top Tweet itShare on Facebook Advertising 1. 1 Amazing! This was just what I was looking for. Thank you so much! 0 2. 2 Your code is amazing, but may I request you to go a little bit advance. Instead of any image files in the slides can we use any HTML objects, where some links can be used ? Just a suggestion. Regards 0 3. 3 After a long search, I found it. Thanx body. 0 4. 4 Deirdre Regan May 9, 2013 7:44 am Hi, this is exactly what I am looking for except I only want the words and when you click on the words then the image will slide out. I dont want both until you click and I cant figure it out by myself. Any suggestions!? 0 ↑ Back to top
__label__pos
0.699655
JUNOS ANNOTATE A great way to add your own comments to any part of a Junos config. Let say we have this DHCP config... services {         ssh;         telnet;         xnm-clear-text;         web-management {             http {                 interface vlan.7;             }             https {                 system-generated-certificate;                 interface vlan.7;             }         }         dhcp {             pool 10.1.1.0/24 {                 address-range low 10.1.1.101 high 10.1.1.254;                 default-lease-time 86400;                 domain-name company.com;                 router {                     10.1.1.1;                 }             } And we wish to place some kind of comment on the DHCP pool as there is no description statement associated with DHCP. 1) GO INTO EDIT MODE 2) USE THE ANNOTATE COMMAND blogger@LEFTY# annotate system ? Possible completions:   <string>             String argument [edit] blogger@LEFTY#            Note you cant write the whole path into the command. Doesn't give you the choice. You have to be at the correct spot in the edit path. The correct spot is to be just above where you what you wish to annotate is and to understand that the annotated comment sits right above the section you want to annotate. I.e If you want to annotate the pool you need to be at the DHCP level at the edit prompt. [edit system services dhcp] blogger@LEFTY# annotate pool 10.1.1.0/24 "THIS IS THE COMMENT" [edit system services dhcp] blogger@LEFTY# commit commit complete [edit system services dhcp] End result... services {         ssh;         telnet;         xnm-clear-text;         web-management {             http {                 interface vlan.7;             }             https {                 system-generated-certificate;                 interface vlan.7;             }         }         dhcp {             /* THIS IS THE COMMENT */             pool 10.1.1.0/24 {                 address-range low 10.1.1.101 high 10.1.1.254;                 default-lease-time 86400;                 domain-name company.com;                 router {                     10.1.1.1;                 }             } Note: The system places the delimeters in itself and if the comment has spaces enter it in within double quotes as per a lot of other Junos config elements.  3) REMOVE THE ANNOTATE You go to the same spot in the hierarchy as where you put it in to take it out.  [edit system services dhcp] blogger@LEFTY# annotate pool 10.1.1.0/24 ""                  [edit system services dhcp] Use "" to remove it. Enjoy!     3 comments: 1. Thank you, I was trying to issue the delete annotate only to find out that the command is not there. :) ReplyDelete 2. This comment has been removed by a blog administrator. ReplyDelete 3. This comment has been removed by a blog administrator. ReplyDelete
__label__pos
0.908603
Pengertian Router pada Jaringan Pengertian Router pada Jaringan. Router adalah sebuah alat untuk mengirimkan paket data antar jaringan melalui proses yang disebut dengan routing. Sebagai sebuah komputer, router dapat dijelaskan sebagai berikut : "Komputer yang secara khusus bertugas untuk mengirimkan paket data antar jaringan. Sebuah router bertanggung jawab untuk menghubungkan koneksi antar dua jaringan atau lebih dengan memilih jalur yang terbaik supaya paket yang dikirim sampai ke tujuan". Dari sini dapat dikatakan bahwa router berada di netwok center atau  pusat jaringan. Secara umum router memiliki 2 koneksi yaitu : • Koneksi WAN (Koneksi ke ISP) • Koneksi ke LAN Prinsip kerja router secara umum adalah dengan memeriksa alamat  IP tujuan dari paket yang dikirim, setelah mendapatkan alamat IP tujuan router akan kemudian memilih jalur terbaik, dalam hal ini pemilihan jalur berdasarkan bantuan dari routing tabel.
__label__pos
0.998506
1 $\begingroup$ I understand that one should standardize and normalize the test data (or any "unlabeled" data) with the training mean and sd. How can I implement this in R language? Is there a kind of "fitting" to the training set and a kind of applying to the test data? $\endgroup$ 1 Answer 1 2 $\begingroup$ Check out the preProcess function from the caret library. You can choose the parameters you want to scale/center the training data, and it also saves the transformations it makes so then you can normalize the test set with the same specifications that you normalized the training set with. Could go something like this: library(caret) trainData <- data.frame(v1 = rnorm(15,3,1), v2 = rnorm(15,2,2)) testData <- data.frame(v1 = rnorm(5,3,1), v2 = rnorm(5,2,2)) normParam <- preProcess(trainData) norm.testData <- predict(normParam, testData) now your norm.testData is scaled and centered according to the training data set parameters. Another way to do this without using caret: ## set up data trainData <- data.frame(v1 = rnorm(15,3,1), v2 = rnorm(15,2,2)) testData <- data.frame(v1 = rnorm(5,3,1), v2 = rnorm(5,2,2)) ## find mean and sd column-wise of training data trainMean <- apply(trainData,2,mean) trainSd <- apply(trainData,2,sd) ## centered sweep(trainData, 2L, trainMean) # using the default "-" to subtract mean column-wise ## centered AND scaled norm2.testData <- sweep(sweep(testData, 2L, trainMean), 2, trainSd, "/") $\endgroup$ 7 • $\begingroup$ I must omit the label column from the training data to leave it at original scale, right? (Weird that this step mostly stays unnoticed). $\endgroup$ – Hendrik Sep 13, 2016 at 14:35 • $\begingroup$ Correct. The labels should remain unchanged when normalizing. $\endgroup$ – TBSRounder Sep 13, 2016 at 14:50 • $\begingroup$ Thanks. While I accept the reasoning to take the training set into account as a basis for standardization in general, I wonder if including the otherwise available test set (in case of a competition, for instance) into the process would be better, perhaps. I would combine both the train (minus label) and data sets and "fit" preProcess to this combined sets. Isn't it better? $\endgroup$ – Hendrik Sep 13, 2016 at 15:16 • 1 $\begingroup$ You would probably get a "better" performance, but only because you are overfitting. You would be using the "gold standard" test set to influence how your training data is pre-processed, which is not good, and when you try predicting things past your now-tainted test set, you will probably see a performance drop because of that overfitting. Every case/data are different, but that is the general idea of why thats not a good way to do it. $\endgroup$ – TBSRounder Sep 13, 2016 at 15:43 • $\begingroup$ As for using Caret: apart from the fact that I try to evade the overdependence from specific packages and like to learn mechanisms "by hand", Caret cannot provide all possibly necessary imputation method (in my present case, "mean imputation"). So it would still be handy to have a more general solution to implement any kind of (columnwise!) standardization/normalization. $\endgroup$ – Hendrik Sep 14, 2016 at 7:46 Your Answer By clicking “Post Your Answer”, you agree to our terms of service and acknowledge that you have read and understand our privacy policy and code of conduct. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.504002