content
stringlengths
228
999k
pred_label
stringclasses
1 value
pred_score
float64
0.5
1
What is Computer Graphics? Computer graphics, production of images onto computers for use within virtually any moderate. Graphics used from the picture design of published material are usually produced in computers, as will be moving and still pictures found in comic strips along with animation. Computer images are key to engineering visualization, also a field which uses colors and images to simulate complex phenomena like air pollutants and electric fields, as well as computer-aided engineering and design, by which items are attracted and examined from computer programs. The Windows-based graphical interface, a typical way of getting together with numerous pc apps, is an item of computer images. Shading And Texturing Images contain high content, either concerning information theory (i.e., the variety of all pieces necessary to symbolize pictures ) and regarding semantics (i.e., the significance that pictures can communicate to the audience). From the 1960s ancient computer graphics approaches used vector images to make pictures out of line sections, that have been united for display on technical computer monitors. As the coordinates of its endpoints specify a whole line segment vector images is economical in its use of memory. But it’s unsuitable for exceptionally realistic graphics, because most graphics have some curved borders, and employing most of the right lines to draw curved objects causes a noticeable “stair-step” effect. From the late 1970s and ’80s raster pictures, produced from tv technology, became common, though still restricted by high priced images work station computers. A little pixel suffices for monochrome graphics, while four pieces per pixel define a 16-step grayscale image. PCs are now designed with dedicated video memory for bitmaps that are carrying. 3D Rendering Bitmaps aren’t suitable for tasks, that desire a representation of these items Even though employed for display. 1 standard for its representation of computer models into graphic pictures is that the Utah teapot, created in the University of Utah at 1975. Re-presented as an image, the Utah teapot consists of tiny polygons. Bezier curves, that have the benefit of needing less memory can provide smoother representations. Specimens describe bezier curves; there is a curve set by 2 points and the curve’s slopes, equivalently, by four points in those points. Going for the exact incline at the intersection can smoothly join 2 inches. Curves and curves are referred to as B-splines were introduced into design apps to its modeling of bodies. Rendering provides lots of additional challenges in the search for realism. Things have to be altered since they move in accordance with the view of the observer or rotate. As the view changes objects must obscure them, and also their ones must be obscured by their surfaces. This means of”hidden surface removal” can be carried out by stretching the pixel features to range from the”thickness” of each pixel into a scene, even as determined by the thing which this really is part. Algorithms may subsequently calculate which surfaces at a scene are observable and those are hidden by other people. In computers armed with technical graphics cards such as electronic games, computer simulations, along with other innovative computer software, these calculations have been implemented so fast there is not any audible lag–which is, which makes is achieved at “real-time.” Shading And Texturing Visual appearance comprises more than merely color and shape; feel and surface finish (e.g., lace, matte, glistening ) must also be accurately modeled. The results which these features have within a person’s appearance depend on turn upon the lighting, which might be diffuse, by one source, or even perhaps both. There are lots of methods of producing the conversation of lighting using surfaces. The methods are Phong, Gouraud, and flat. Zero stripes are utilized and just 1 color tone is traditionally used for your object, using various levels of black or white inserted to shading. The model looks unrealistic and horizontal. In Gouraud shading, textures might be utilized (such as timber, rock, stucco, etc ); each advantage of this thing is provided a color that points in light, and also the monitor interpolates (computes intermediate values) to develop a more smooth gradient over each surface. This causes a more realistic image. Gouraud pictures can be rendered by computer graphics systems. Each pixel takes all sources and also any feel. It gives results that are more realistic but is slower. Specular reflection is not modeled by the methods from simulating transparent or surfaces and translucent objects. This is sometimes accomplished by beam tracing, a manufacturing technique which utilizes basic optical legislation of expression and refraction. Ray-tracing follows a light beam from the perspective. After a thing is encountered by the beam, it’s tracked because it’s reflected or refracted. Raytracing is a procedure; each represented or refracted beam is traced in precisely exactly the exact same manner until it creates an insignificant contribution vanishes into the desktop. Raytracing might possibly have quite a number of years –hours or moments might be swallowed in creating a scene. In fact, items are illuminated perhaps not just directly by way of a source of light such as sunlight or even a lamp but in addition more diffusely by reflected light from different items. Such lighting is recreated in computer images which consider the aftereffects of the elements in a spectacle on the visual appeal of each thing and by methods, which version light as energy as opposed to beams. By way of instance, a colored object may throw a shine of the color on surfaces. Radiosity applies optical principles to attain realism–and such as beam tracing, it’s computationally costly. Processors And Apps A way to decrease the time necessary for true representation would be always to make use of parallel processing, to ensure at beam shading, as an instance, multiple beams can be tracked at the same time. Pipelined parallelism, Still another technique, benefit from how images processing might be divided into phases –assembling Bezier surfaces or even polygons, eliminating surfaces, including shading, rasterization and so on. Utilizing parallelism, together with image has been rasterized, still, yet another could be scrapped as well as a third could be assembled. Both forms of parallelism are utilized in high-resolution images of chips. In spite of most of this power, it might take days to leave the various pictures required to get a computer-animated motion-picture. Computer images rely on computer software bundles. The OpenGL (open images library) defines a regular set of graphics routines which might be used from education languages like C or Java. Commercial and bundles deliver extensive modeling capacities for images. More small tools, offering just basic two-dimensional images, would be the “paint” programs usually installed on smartphones.  
__label__pos
0.867842
1138 DÍAS EN SEGUNDOS ¿Cómo convertir 1138 días a segundos? 1138 días es equivalente a 98323200 segundos.[1] Fórmula de conversión ¿Cómo convertir 1138 días a segundos? Sabemos, por definición, que: 1d = 86400sec Podemos establecer una proporción para resolver la cantidad de segundos. 1 d 1138 d = 86400 sec x sec Ahora, multiplicamos de forma cruzada para encontrar el resultado de nuestra variable desconocida x x sec = 1138 d 1 d * 86400 sec x sec = 98323200 sec Conclusión: 1138 d = 98323200 sec Conversión en dirección opuesta El inverso del factor de conversión es que 1 segundo es igual a 1.01705396081495e-08 veces 1138 días. También se puede expresar como: 1138 días es igual a 1 1.01705396081495e-08 segundos. Aproximación Un resultado numérico aproximado es: 98323200.0, o alternativamente, 0.0. Notas [1] La precisión es de 15 dígitos significativos (catorce dígitos a la derecha del punto decimal) El resultado puede contener alguna inexactitud en los digitos significativos debido al uso de la aritmética de punto flotante
__label__pos
0.660542
Answers Solutions by everydaycalculation.com Answers.everydaycalculation.com » Divide fractions Divide 1/15 with 20/98 1/15 ÷ 20/98 is 49/150. Steps for dividing fractions 1. Find the reciprocal of the divisor Reciprocal of 20/98: 98/20 2. Now, multiply it with the dividend So, 1/15 ÷ 20/98 = 1/15 × 98/20 3. = 1 × 98/15 × 20 = 98/300 4. After reducing the fraction, the answer is 49/150 MathStep (Works offline) Download our mobile app and learn to work with fractions in your own time: Android and iPhone/ iPad Related: © everydaycalculation.com
__label__pos
0.827285
1703 I've got the following objects using AJAX and stored them in an array: var homes = [ { "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" } ]; How do I create a function to sort the objects by the price property in ascending or descending order using JavaScript only? 1 27 Answers 27 2201 Sort homes by price in ascending order: homes.sort(function(a, b) { return parseFloat(a.price) - parseFloat(b.price); }); Or after ES6 version: homes.sort((a, b) => parseFloat(a.price) - parseFloat(b.price)); Some documentation can be found here. For descending order, you may use homes.sort((a, b) => parseFloat(b.price) - parseFloat(a.price)); 9 • 239 You can use string1.localeCompare(string2) for string comparison – bradvido May 27, 2014 at 13:51 • 86 Keep in mind that localeCompare() is case insensitive. If you want case sensitive, you can use (string1 > string2) - (string1 < string2). The boolean values are coerced to integer 0 and 1 to calculate the difference. – Don Kirkby May 1, 2015 at 21:58 • 3 Thanks for the update, @Pointy, I don't remember running into this problem, but perhaps the behaviour has changed in the last couple of years. Regardless, the localeCompare() documentation shows that you can explicitly state whether you want case sensitivity, numeric sorting, and other options. – Don Kirkby Aug 28, 2017 at 16:11 • 6 @sg28 I think you've misunderstood the MDN explanation. It does not say that the sort function is not reliable, it says that it is not stable. I understand why this can be confusing, but that is not a claim that it is not suitable for use. In the context of sorting algorithms, the term stable has a specific meaning - that "equal" elements in the list are sorted in the same order as in the input. This is completely unrelated to the idea of code which is unstable (i.e. not yet ready for use). – Stobor Jul 30, 2018 at 1:42 • 3 If you want to sort by a specific string values for example by city you could use: this.homes.sort((current,next)=>{ return current.city.localeCompare(next.city)}); Oct 30, 2018 at 4:13 737 Here's a more flexible version, which allows you to create reusable sort functions, and sort by any field. const sort_by = (field, reverse, primer) => { const key = primer ? function(x) { return primer(x[field]) } : function(x) { return x[field] }; reverse = !reverse ? 1 : -1; return function(a, b) { return a = key(a), b = key(b), reverse * ((a > b) - (b > a)); } } //Now you can sort by any field at will... const homes=[{h_id:"3",city:"Dallas",state:"TX",zip:"75201",price:"162500"},{h_id:"4",city:"Bevery Hills",state:"CA",zip:"90210",price:"319250"},{h_id:"5",city:"New York",state:"NY",zip:"00010",price:"962500"}]; // Sort by price high to low console.log(homes.sort(sort_by('price', true, parseInt))); // Sort by city, case-insensitive, A-Z console.log(homes.sort(sort_by('city', false, (a) => a.toUpperCase() ))); 7 • Even though this is super cool, isn't it terribly inefficient. The speed would be O(n*n). Any way to use a more efficient sort algorithm, like Quick Sort while also keeping it extensible like your answer above? – nickb Oct 31, 2011 at 5:47 • 1 One issue I have with this is that with reverse=false, it will sort numbers as 1,2,3,4... but Strings as z,y,x... – Abby Apr 26, 2012 at 13:42 • 4 A small enhancement: var key = primer ? function (x) { return primer(x[field]); } : function (x) { return x[field]; } – ErikE Aug 7, 2012 at 1:18 • 11 While [1,-1][+!!reverse] looks cool, it's a horrible thing to do. If a user can't call your method properly, punish him, not try to somehow make sense of it, no matter what. – Ingo Bürk Dec 3, 2013 at 21:28 • 2 Wouldn't it be better to prepare the source data, this would cause consecutive parsing when clearly the source data needs some tweaking. Mar 5, 2015 at 14:14 153 To sort it you need to create a comparator function taking two arguments. Then call the sort function with that comparator function as follows: // a and b are object elements of your array function mycomparator(a,b) { return parseInt(a.price, 10) - parseInt(b.price, 10); } homes.sort(mycomparator); If you want to sort ascending switch the expressions on each side of the minus sign. 2 86 for string sorting in case some one needs it, const dataArr = { "hello": [{ "id": 114, "keyword": "zzzzzz", "region": "Sri Lanka", "supportGroup": "administrators", "category": "Category2" }, { "id": 115, "keyword": "aaaaa", "region": "Japan", "supportGroup": "developers", "category": "Category2" }] }; const sortArray = dataArr['hello']; console.log(sortArray.sort((a, b) => { if (a.region < b.region) return -1; if (a.region > b.region) return 1; return 0; })); 1 • 3 This should be at the top, everyone keeps talking about sorting numbers, and no one talks about sorting letters or words alphabetically. Thanks – Emeka Orji Jun 14, 2022 at 16:07 55 If you have an ES6 compliant browser you can use: The difference between ascending and descending sort order is the sign of the value returned by your compare function: var ascending = homes.sort((a, b) => Number(a.price) - Number(b.price)); var descending = homes.sort((a, b) => Number(b.price) - Number(a.price)); Here's a working code snippet: var homes = [{ "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" }]; homes.sort((a, b) => Number(a.price) - Number(b.price)); console.log("ascending", homes); homes.sort((a, b) => Number(b.price) - Number(a.price)); console.log("descending", homes); 0 27 I recommend GitHub: Array sortBy - a best implementation of sortBy method which uses the Schwartzian transform But for now we are going to try this approach Gist: sortBy-old.js. Let's create a method to sort arrays being able to arrange objects by some property. Creating the sorting function var sortBy = (function () { var toString = Object.prototype.toString, // default parser function parse = function (x) { return x; }, // gets the item to be sorted getItem = function (x) { var isObject = x != null && typeof x === "object"; var isProp = isObject && this.prop in x; return this.parser(isProp ? x[this.prop] : x); }; /** * Sorts an array of elements. * * @param {Array} array: the collection to sort * @param {Object} cfg: the configuration options * @property {String} cfg.prop: property name (if it is an Array of objects) * @property {Boolean} cfg.desc: determines whether the sort is descending * @property {Function} cfg.parser: function to parse the items to expected type * @return {Array} */ return function sortby (array, cfg) { if (!(array instanceof Array && array.length)) return []; if (toString.call(cfg) !== "[object Object]") cfg = {}; if (typeof cfg.parser !== "function") cfg.parser = parse; cfg.desc = !!cfg.desc ? -1 : 1; return array.sort(function (a, b) { a = getItem.call(cfg, a); b = getItem.call(cfg, b); return cfg.desc * (a < b ? -1 : +(a > b)); }); }; }()); Setting unsorted data var data = [ {date: "2011-11-14T16:30:43Z", quantity: 2, total: 90, tip: 0, type: "tab"}, {date: "2011-11-14T17:22:59Z", quantity: 2, total: 90, tip: 0, type: "Tab"}, {date: "2011-11-14T16:28:54Z", quantity: 1, total: 300, tip: 200, type: "visa"}, {date: "2011-11-14T16:53:41Z", quantity: 2, total: 90, tip: 0, type: "tab"}, {date: "2011-11-14T16:48:46Z", quantity: 2, total: 90, tip: 0, type: "tab"}, {date: "2011-11-14T17:25:45Z", quantity: 2, total: 200, tip: 0, type: "cash"}, {date: "2011-11-31T17:29:52Z", quantity: 1, total: 200, tip: 100, type: "Visa"}, {date: "2011-11-14T16:58:03Z", quantity: 2, total: 90, tip: 0, type: "tab"}, {date: "2011-11-14T16:20:19Z", quantity: 2, total: 190, tip: 100, type: "tab"}, {date: "2011-11-01T16:17:54Z", quantity: 2, total: 190, tip: 100, type: "tab"}, {date: "2011-11-14T17:07:21Z", quantity: 2, total: 90, tip: 0, type: "tab"}, {date: "2011-11-14T16:54:06Z", quantity: 1, total: 100, tip: 0, type: "Cash"} ]; Using it Arrange the array, by "date" as String // sort by @date (ascending) sortBy(data, { prop: "date" }); // expected: first element // { date: "2011-11-01T16:17:54Z", quantity: 2, total: 190, tip: 100, type: "tab" } // expected: last element // { date: "2011-11-31T17:29:52Z", quantity: 1, total: 200, tip: 100, type: "Visa"} If you want to ignore case sensitive, set the parser callback: // sort by @type (ascending) IGNORING case-sensitive sortBy(data, { prop: "type", parser: (t) => t.toUpperCase() }); // expected: first element // { date: "2011-11-14T16:54:06Z", quantity: 1, total: 100, tip: 0, type: "Cash" } // expected: last element // { date: "2011-11-31T17:29:52Z", quantity: 1, total: 200, tip: 100, type: "Visa" } If you want to convert the "date" field as Date type: // sort by @date (descending) AS Date object sortBy(data, { prop: "date", desc: true, parser: (d) => new Date(d) }); // expected: first element // { date: "2011-11-31T17:29:52Z", quantity: 1, total: 200, tip: 100, type: "Visa"} // expected: last element // { date: "2011-11-01T16:17:54Z", quantity: 2, total: 190, tip: 100, type: "tab" } Here you can play with the code: jsbin.com/lesebi Thanks to @Ozesh by his feedback, the issue related to properties with falsy values was fixed. 1 • In case you are sorting through numbers and you encounter a '0' in between the array of objects, you might notice that the above code breaks.. Here is a quick fix for that : var checkNaN = function (value) { return Number.isNaN(Number(value)) ? 0 : value; } followed by: return function (array, o) { .... a = _getItem.call(o, a); a = checkNaN(a); b = _getItem.call(o, b); b = checkNaN(b); return o.desc * (a < b ? -1 : +(a > b)); }); – Ozesh Oct 20, 2016 at 4:29 26 You want to sort it in Javascript, right? What you want is the sort() function. In this case you need to write a comparator function and pass it to sort(), so something like this: function comparator(a, b) { return parseInt(a["price"], 10) - parseInt(b["price"], 10); } var json = { "homes": [ /* your previous data */ ] }; console.log(json["homes"].sort(comparator)); Your comparator takes one of each of the nested hashes inside the array and decides which one is higher by checking the "price" field. 0 22 Use lodash.sortBy, (instructions using commonjs, you can also just put the script include-tag for the cdn at the top of your html) var sortBy = require('lodash.sortby'); // or sortBy = require('lodash').sortBy; Descending order var descendingOrder = sortBy( homes, 'price' ).reverse(); Ascending order var ascendingOrder = sortBy( homes, 'price' ); 2 • 1 Or const sortBy = require('lodash/sortBy'); let calendars = sortBy(calendarListResponse.items, cal => cal.summary); – mpen Oct 17, 2016 at 1:13 • not sure if loadash changed recently by now its named OrderBy import { orderBy } from 'lodash'; ... ... return orderBy ( rows, 'fieldName' ).reverse(); – montelof Dec 8, 2016 at 21:56 19 I'm little late for the party but below is my logic for sorting. function getSortedData(data, prop, isAsc) { return data.sort((a, b) => { return (a[prop] < b[prop] ? -1 : 1) * (isAsc ? 1 : -1) }); } 1 • 2 This answer is the easiest to understand. I simplified it for my use case. function objsort(obj,prop){ return obj.sort( (a, b) => a[prop].toString().localeCompare(b[prop]) ); } – Ken H Sep 3, 2021 at 14:11 15 You can use string1.localeCompare(string2) for string comparison this.myArray.sort((a,b) => { return a.stringProp.localeCompare(b.stringProp); }); Note that localCompare is case insensitive 2 • Note that localCompare now has options that can be used if you want case sensitive (and other options). Near-universal support in up-to-date browsers. Jul 10, 2021 at 21:15 • The best solution for array of string sorting – Hugo Sohm Aug 25, 2022 at 14:16 10 While I am aware that the OP wanted to sort an array of numbers, this question has been marked as the answer for similar questions regarding strings. To that fact, the above answers do not consider sorting an array of text where casing is important. Most answers take the string values and convert them to uppercase/lowercase and then sort one way or another. The requirements that I adhere to are simple: • Sort alphabetically A-Z • Uppercase values of the same word should come before lowercase values • Same letter (A/a, B/b) values should be grouped together What I expect is [ A, a, B, b, C, c ] but the answers above return A, B, C, a, b, c. I actually scratched my head on this for longer than I wanted (which is why I am posting this in hopes that it will help at least one other person). While two users mention the localeCompare function in the comments for the marked answer, I didn't see that until after I stumbled upon the function while searching around. After reading the String.prototype.localeCompare() documentation I was able to come up with this: var values = [ "Delta", "charlie", "delta", "Charlie", "Bravo", "alpha", "Alpha", "bravo" ]; var sorted = values.sort((a, b) => a.localeCompare(b, undefined, { caseFirst: "upper" })); // Result: [ "Alpha", "alpha", "Bravo", "bravo", "Charlie", "charlie", "Delta", "delta" ] This tells the function to sort uppercase values before lowercase values. The second parameter in the localeCompare function is to define the locale but if you leave it as undefined it automatically figures out the locale for you. This works the same for sorting an array of objects as well: var values = [ { id: 6, title: "Delta" }, { id: 2, title: "charlie" }, { id: 3, title: "delta" }, { id: 1, title: "Charlie" }, { id: 8, title: "Bravo" }, { id: 5, title: "alpha" }, { id: 4, title: "Alpha" }, { id: 7, title: "bravo" } ]; var sorted = values .sort((a, b) => a.title.localeCompare(b.title, undefined, { caseFirst: "upper" })); 8 Here is a culmination of all answers above. Fiddle validation: http://jsfiddle.net/bobberino/4qqk3/ var sortOn = function (arr, prop, reverse, numeric) { // Ensure there's a property if (!prop || !arr) { return arr } // Set up sort function var sort_by = function (field, rev, primer) { // Return the required a,b function return function (a, b) { // Reset a, b to the field a = primer(a[field]), b = primer(b[field]); // Do actual sorting, reverse as needed return ((a < b) ? -1 : ((a > b) ? 1 : 0)) * (rev ? -1 : 1); } } // Distinguish between numeric and string to prevent 100's from coming before smaller // e.g. // 1 // 20 // 3 // 4000 // 50 if (numeric) { // Do sort "in place" with sort_by function arr.sort(sort_by(prop, reverse, function (a) { // - Force value to a string. // - Replace any non numeric characters. // - Parse as float to allow 0.02 values. return parseFloat(String(a).replace(/[^0-9.-]+/g, '')); })); } else { // Do sort "in place" with sort_by function arr.sort(sort_by(prop, reverse, function (a) { // - Force value to string. return String(a).toUpperCase(); })); } } 2 • can you please explain what is the significance of having * (rev ? -1 : 1); – TechTurtle Sep 28, 2017 at 16:59 • That's to reverse the order (ascending vs descending) the rev portion just flips normal results when the rev argument is true. Otherwise it'll just multiple by 1 which does nothing, when set, it'll multiply the result by -1, thereby inverting the result. – bob Sep 29, 2017 at 18:16 7 A more LINQ like solution: Array.prototype.orderBy = function (selector, desc = false) { return [...this].sort((a, b) => { a = selector(a); b = selector(b); if (a == b) return 0; return (desc ? a > b : a < b) ? -1 : 1; }); } Advantages: • autocompletion for properties • extends array prototype • does not change array • easy to use in method chaining Usage: Array.prototype.orderBy = function(selector, desc = false) { return [...this].sort((a, b) => { a = selector(a); b = selector(b); if (a == b) return 0; return (desc ? a > b : a < b) ? -1 : 1; }); }; var homes = [{ "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" }]; let sorted_homes = homes.orderBy(h => parseFloat(h.price)); console.log("sorted by price", sorted_homes); let sorted_homes_desc = homes.orderBy(h => h.city, true); console.log("sorted by City descending", sorted_homes_desc); 0 6 You can use the JavaScript sort method with a callback function: function compareASC(homeA, homeB) { return parseFloat(homeA.price) - parseFloat(homeB.price); } function compareDESC(homeA, homeB) { return parseFloat(homeB.price) - parseFloat(homeA.price); } // Sort ASC homes.sort(compareASC); // Sort DESC homes.sort(compareDESC); 6 For sorting a array you must define a comparator function. This function always be different on your desired sorting pattern or order(i.e. ascending or descending). Let create some functions that sort an array ascending or descending and that contains object or string or numeric values. function sorterAscending(a,b) { return a-b; } function sorterDescending(a,b) { return b-a; } function sorterPriceAsc(a,b) { return parseInt(a['price']) - parseInt(b['price']); } function sorterPriceDes(a,b) { return parseInt(b['price']) - parseInt(b['price']); } Sort numbers (alphabetically and ascending): var fruits = ["Banana", "Orange", "Apple", "Mango"]; fruits.sort(); Sort numbers (alphabetically and descending): var fruits = ["Banana", "Orange", "Apple", "Mango"]; fruits.sort(); fruits.reverse(); Sort numbers (numerically and ascending): var points = [40,100,1,5,25,10]; points.sort(sorterAscending()); Sort numbers (numerically and descending): var points = [40,100,1,5,25,10]; points.sort(sorterDescending()); As above use sorterPriceAsc and sorterPriceDes method with your array with desired key. homes.sort(sorterPriceAsc()) or homes.sort(sorterPriceDes()) 0 5 While it is a bit of an overkill for just sorting a single array, this prototype function allows to sort Javascript arrays by any key, in ascending or descending order, including nested keys, using dot syntax. (function(){ var keyPaths = []; var saveKeyPath = function(path) { keyPaths.push({ sign: (path[0] === '+' || path[0] === '-')? parseInt(path.shift()+1) : 1, path: path }); }; var valueOf = function(object, path) { var ptr = object; for (var i=0,l=path.length; i<l; i++) ptr = ptr[path[i]]; return ptr; }; var comparer = function(a, b) { for (var i = 0, l = keyPaths.length; i < l; i++) { aVal = valueOf(a, keyPaths[i].path); bVal = valueOf(b, keyPaths[i].path); if (aVal > bVal) return keyPaths[i].sign; if (aVal < bVal) return -keyPaths[i].sign; } return 0; }; Array.prototype.sortBy = function() { keyPaths = []; for (var i=0,l=arguments.length; i<l; i++) { switch (typeof(arguments[i])) { case "object": saveKeyPath(arguments[i]); break; case "string": saveKeyPath(arguments[i].match(/[+-]|[^.]+/g)); break; } } return this.sort(comparer); }; })(); Usage: var data = [ { name: { first: 'Josh', last: 'Jones' }, age: 30 }, { name: { first: 'Carlos', last: 'Jacques' }, age: 19 }, { name: { first: 'Carlos', last: 'Dante' }, age: 23 }, { name: { first: 'Tim', last: 'Marley' }, age: 9 }, { name: { first: 'Courtney', last: 'Smith' }, age: 27 }, { name: { first: 'Bob', last: 'Smith' }, age: 30 } ] data.sortBy('age'); // "Tim Marley(9)", "Carlos Jacques(19)", "Carlos Dante(23)", "Courtney Smith(27)", "Josh Jones(30)", "Bob Smith(30)" Sorting by nested properties with dot-syntax or array-syntax: data.sortBy('name.first'); // "Bob Smith(30)", "Carlos Dante(23)", "Carlos Jacques(19)", "Courtney Smith(27)", "Josh Jones(30)", "Tim Marley(9)" data.sortBy(['name', 'first']); // "Bob Smith(30)", "Carlos Dante(23)", "Carlos Jacques(19)", "Courtney Smith(27)", "Josh Jones(30)", "Tim Marley(9)" Sorting by multiple keys: data.sortBy('name.first', 'age'); // "Bob Smith(30)", "Carlos Jacques(19)", "Carlos Dante(23)", "Courtney Smith(27)", "Josh Jones(30)", "Tim Marley(9)" data.sortBy('name.first', '-age'); // "Bob Smith(30)", "Carlos Dante(23)", "Carlos Jacques(19)", "Courtney Smith(27)", "Josh Jones(30)", "Tim Marley(9)" You can fork the repo: https://github.com/eneko/Array.sortBy 1 • I like this answer a lot because of sortBy's concise syntax. Simple to use –even with nested fields– while maintaining great code-readability. Thank you! May 31, 2020 at 16:15 4 For a normal array of elements values only: function sortArrayOfElements(arrayToSort) { function compareElements(a, b) { if (a < b) return -1; if (a > b) return 1; return 0; } return arrayToSort.sort(compareElements); } e.g. 1: var array1 = [1,2,545,676,64,2,24] output : [1, 2, 2, 24, 64, 545, 676] var array2 = ["v","a",545,676,64,2,"24"] output: ["a", "v", 2, "24", 64, 545, 676] For an array of objects: function sortArrayOfObjects(arrayToSort, key) { function compareObjects(a, b) { if (a[key] < b[key]) return -1; if (a[key] > b[key]) return 1; return 0; } return arrayToSort.sort(compareObjects); } e.g. 1: var array1= [{"name": "User4", "value": 4},{"name": "User3", "value": 3},{"name": "User2", "value": 2}] output : [{"name": "User2", "value": 2},{"name": "User3", "value": 3},{"name": "User4", "value": 4}] 3 If you use Underscore.js, try sortBy: // price is of an integer type _.sortBy(homes, "price"); // price is of a string type _.sortBy(homes, function(home) {return parseInt(home.price);}); 3 Here is a slightly modified version of elegant implementation from the book "JavaScript: The Good Parts". NOTE: This version of by is stable. It preserves the order of the first sort while performing the next chained sort. I have added isAscending parameter to it. Also converted it to ES6 standards and "newer" good parts as recommended by the author. You can sort ascending as well as descending and chain sort by multiple properties. const by = function (name, minor, isAscending=true) { const reverseMutliplier = isAscending ? 1 : -1; return function (o, p) { let a, b; let result; if (o && p && typeof o === "object" && typeof p === "object") { a = o[name]; b = p[name]; if (a === b) { return typeof minor === 'function' ? minor(o, p) : 0; } if (typeof a === typeof b) { result = a < b ? -1 : 1; } else { result = typeof a < typeof b ? -1 : 1; } return result * reverseMutliplier; } else { throw { name: "Error", message: "Expected an object when sorting by " + name }; } }; }; let s = [ {first: 'Joe', last: 'Besser'}, {first: 'Moe', last: 'Howard'}, {first: 'Joe', last: 'DeRita'}, {first: 'Shemp', last: 'Howard'}, {first: 'Larry', last: 'Fine'}, {first: 'Curly', last: 'Howard'} ]; // Sort by: first ascending, last ascending s.sort(by("first", by("last"))); console.log("Sort by: first ascending, last ascending: ", s); // "[ // {"first":"Curly","last":"Howard"}, // {"first":"Joe","last":"Besser"}, <====== // {"first":"Joe","last":"DeRita"}, <====== // {"first":"Larry","last":"Fine"}, // {"first":"Moe","last":"Howard"}, // {"first":"Shemp","last":"Howard"} // ] // Sort by: first ascending, last descending s.sort(by("first", by("last", 0, false))); console.log("sort by: first ascending, last descending: ", s); // "[ // {"first":"Curly","last":"Howard"}, // {"first":"Joe","last":"DeRita"}, <======== // {"first":"Joe","last":"Besser"}, <======== // {"first":"Larry","last":"Fine"}, // {"first":"Moe","last":"Howard"}, // {"first":"Shemp","last":"Howard"} // ] 2 • could we sort {"first":"Curly","last":"Howard", "property" : {"id" : "1"}} type of array by id? Aug 14, 2017 at 8:29 • yes, the function has to be slightly modified to take in a new parameter, say, nestedName. You then call by with name="property", nestedName="id" Aug 15, 2017 at 21:29 3 Create a function and sort based on the input using below code var homes = [{ "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" }]; function sortList(list,order){ if(order=="ASC"){ return list.sort((a,b)=>{ return parseFloat(a.price) - parseFloat(b.price); }) } else{ return list.sort((a,b)=>{ return parseFloat(b.price) - parseFloat(a.price); }); } } sortList(homes,'DESC'); console.log(homes); 2 For sort on multiple array object field. Enter your field name in arrprop array like ["a","b","c"] then pass in second parameter arrsource actual source we want to sort. function SortArrayobject(arrprop,arrsource){ arrprop.forEach(function(i){ arrsource.sort(function(a,b){ return ((a[i] < b[i]) ? -1 : ((a[i] > b[i]) ? 1 : 0)); }); }); return arrsource; } 2 You will need two function function desc(a, b) { return b < a ? -1 : b > a ? 1 : b >= a ? 0 : NaN; } function asc(a, b) { return a < b ? -1 : a > b ? 1 : a >= b ? 0 : NaN; } Then you can apply this to any object property: data.sort((a, b) => desc(parseFloat(a.price), parseFloat(b.price))); let data = [ {label: "one", value:10}, {label: "two", value:5}, {label: "three", value:1}, ]; // sort functions function desc(a, b) { return b < a ? -1 : b > a ? 1 : b >= a ? 0 : NaN; } function asc(a, b) { return a < b ? -1 : a > b ? 1 : a >= b ? 0 : NaN; } // DESC data.sort((a, b) => desc(a.value, b.value)); document.body.insertAdjacentHTML( 'beforeend', '<strong>DESCending sorted</strong><pre>' + JSON.stringify(data) +'</pre>' ); // ASC data.sort((a, b) => asc(a.value, b.value)); document.body.insertAdjacentHTML( 'beforeend', '<strong>ASCending sorted</strong><pre>' + JSON.stringify(data) +'</pre>' ); 0 Hi after reading this article, I made a sortComparator for my needs, with the functionality to compare more than one json attributes, and i want to share it with you. This solution compares only strings in ascending order, but the solution can be easy extended for each attribute to support: reverse ordering, other data types, to use locale, casting etc var homes = [{ "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" }]; // comp = array of attributes to sort // comp = ['attr1', 'attr2', 'attr3', ...] function sortComparator(a, b, comp) { // Compare the values of the first attribute if (a[comp[0]] === b[comp[0]]) { // if EQ proceed with the next attributes if (comp.length > 1) { return sortComparator(a, b, comp.slice(1)); } else { // if no more attributes then return EQ return 0; } } else { // return less or great return (a[comp[0]] < b[comp[0]] ? -1 : 1) } } // Sort array homes homes.sort(function(a, b) { return sortComparator(a, b, ['state', 'city', 'zip']); }); // display the array homes.forEach(function(home) { console.log(home.h_id, home.city, home.state, home.zip, home.price); }); and the result is $ node sort 4 Bevery Hills CA 90210 319250 5 New York NY 00010 962500 3 Dallas TX 75201 162500 and another sort homes.sort(function(a, b) { return sortComparator(a, b, ['city', 'zip']); }); with result $ node sort 4 Bevery Hills CA 90210 319250 3 Dallas TX 75201 162500 5 New York NY 00010 962500 0 function compareValues(key, order = 'asc') { return function innerSort(a, b) { if (!a.hasOwnProperty(key) || !b.hasOwnProperty(key)) { // property doesn't exist on either object return 0; } const varA = (typeof a[key] === 'string') ? a[key].toUpperCase() : a[key]; const varB = (typeof b[key] === 'string') ? b[key].toUpperCase() : b[key]; let comparison = 0; if (varA > varB) { comparison = 1; } else if (varA < varB) { comparison = -1; } return ( (order === 'desc') ? (comparison * -1) : comparison ); }; } http://yazilimsozluk.com/sort-array-in-javascript-by-asc-or-desc 0 I recently wrote a universal function to manage this for you if you want to use it. /** * Sorts an object into an order * * @require jQuery * * @param object Our JSON object to sort * @param type Only alphabetical at the moment * @param identifier The array or object key to sort by * @param order Ascending or Descending * * @returns Array */ function sortItems(object, type, identifier, order){ var returnedArray = []; var emptiesArray = []; // An array for all of our empty cans // Convert the given object to an array $.each(object, function(key, object){ // Store all of our empty cans in their own array // Store all other objects in our returned array object[identifier] == null ? emptiesArray.push(object) : returnedArray.push(object); }); // Sort the array based on the type given switch(type){ case 'alphabetical': returnedArray.sort(function(a, b){ return (a[identifier] == b[identifier]) ? 0 : ( // Sort ascending or descending based on order given order == 'asc' ? a[identifier] > b[identifier] : a[identifier] < b[identifier] ) ? 1 : -1; }); break; default: } // Return our sorted array along with the empties at the bottom depending on sort order return order == 'asc' ? returnedArray.concat(emptiesArray) : emptiesArray.concat(returnedArray); } -1 Your request can be promptly fulfilled by effortlessly adding your objects to a TrueSet, where each object is represented by the numeric value of its price property. const repr = item => Number.parseInt(item.price), got = TrueSet.of(repr, ORDER.ASCENDING) .letAll(data) Nothing else is required. You can check validity of the proposed solution by: import {TrueSet} from "@ut8pia/classifier/queue/TrueSet.js"; import {ORDER} from "@ut8pia/classifier/global.js"; const data = [{ "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" }, { "h_id": "6", "city": "Dallas", "state": "TX", "zip": "75201", "price": "90000" } ], expected = [{ "h_id": "6", "city": "Dallas", "state": "TX", "zip": "75201", "price": "90000" }, { "h_id": "3", "city": "Dallas", "state": "TX", "zip": "75201", "price": "162500" }, { "h_id": "4", "city": "Bevery Hills", "state": "CA", "zip": "90210", "price": "319250" }, { "h_id": "5", "city": "New York", "state": "NY", "zip": "00010", "price": "962500" } ]; assert.deepEqual(got.toArray(), expected) However, you might want to use all the item properties in the representation. const repr = item => [Number.parseInt(item.price), item.state, item.zip, item.city, item.h_id] The TrueSet internal tree structure - a Classifier, would allow you to both query and modify your dataset based on such properties. To get acquainted with the concepts of TrueSet, Classifier and representation function, consider exploring the @ut8pia/classifier library. -4 You can use the below approach if you don't want to use any sort() method function sortObj(obj) { let numArr = []; //the array which just includes prices as Number let sortedObj = []; obj.map((x) => { numArr.push(Number(x["price"])); }); while (numArr.length > 0) { let minIndex = numArr.indexOf(Math.min(...numArr)); //the index of cheapest home in the obj numArr.splice(minIndex, 1); sortedObj.push(obj.splice(minIndex, 1)); // splicing cheapest home from Homes Array to sortedObj Array. } console.log(sortedObj); } var homes = [ { h_id: "3", city: "Dallas", state: "TX", zip: "75201", price: "162500", }, { h_id: "4", city: "Bevery Hills", state: "CA", zip: "90210", price: "319250", }, { h_id: "5", city: "New York", state: "NY", zip: "00010", price: "962500", }, ]; sortObj(homes);
__label__pos
0.905191
博客> JavaScript强化教程——获取内容和属性 JavaScript强化教程——获取内容和属性 2020-02-27 22:33 评论:0 阅读:825 哟猫Intry js html5 本文为 H5EDU 机构官方 HTML5培训 教程,主要介绍:JavaScript强化教程 —— 获取内容和属性 jQuery - 获取内容和属性 jQuery 拥有可操作 HTML 元素和属性的强大方法。 jQuery DOM 操作 jQuery 中非常重要的部分,就是操作 DOM 的能力。 jQuery 提供一系列与 DOM 相关的方法,这使访问和操作元素和属性变得很容易。 lamp DOM = Document Object Model(文档对象模型) DOM 定义访问 HTML 和 XML 文档的标准: "W3C 文档对象模型独立于平台和语言的界面,允许程序和脚本动态访问和更新文档的内容、结构以及样式。" 获得内容 - text()、html() 以及 val() 三个简单实用的用于 DOM 操作的 jQuery 方法: text() - 设置或返回所选元素的文本内容 html() - 设置或返回所选元素的内容(包括 HTML 标记) val() - 设置或返回表单字段的值 下面的例子演示如何通过 jQuery text() 和 html() 方法来获得内容: 实例 $("#btn1").click(function(){ alert("Text: " + $("#test").text()); }); $("#btn2").click(function(){ alert("HTML: " + $("#test").html()); }); 下面的例子演示如何通过 jQuery val() 方法获得输入字段的值: 实例 $("#btn1").click(function(){ alert("值为: " + $("#test").val()); }); 获取属性 - attr() jQuery attr() 方法用于获取属性值。 下面的例子演示如何获得链接中 href 属性的值: 实例 $("button").click(function(){ alert($("#runoob").attr("href")); }); 收藏 0 sina weixin mail 回到顶部
__label__pos
0.579988
Velo: Writing Code That Only Runs on Mobile Devices Sometimes you want certain code to run only when your site is being viewed on a mobile device. To do so, you'll need to wrap that code in a JavaScript conditional statement that checks the type of device the code is currently running on as described below. Code Editor You might expect that code you write in the code editor when in mobile view only runs when your site is viewed on a mobile device. However, the code you write in the code editor actually runs regardless of the type of device your site is being viewed on.   Properties & Events Panel The Properties & Events panel in the editor is the same for all devices. Therefore, if you add an event handler, such as an onClick for a button, that event handler will run regardless of the type of device your site is being viewed on.   Mobile Code As mentioned above, to write code that only runs on mobile devices, you'll need to wrap that code in a JavaScript conditional statement that checks the type of device the code is currently running on. To check the current device type in code, you need to first import the wix-window-frontend API. Remember, import statements go all the way at the top of your page code.  Copy 1 import wixWindowFrontend from 'wix-window-frontend'; Then you need to wrap your code in a conditional statement that checks the device type.  Copy 1 if(wixWindowFrontend.formFactor === "Mobile"){ 2 // code that will only run on mobile 3 } Example For example, let's say you have an image on your page and some hidden text. When viewed on a desktop device, you want the hidden text to show when visitors hover over the image and disappear when they are no longer hovering over the image. But on mobile devices, since there is no concept of hovering, you want the hidden text to show when visitors click on the image and disappear when they click again. So, in our example, we have the following elements on the page: We begin by setting the hiddenText element to Hidden using the Properties & Events panel. Remember that whatever you do in the Properties & Events panel affects what happens when your site is viewed on both desktop and mobile devices. In our case, we want the text to load as hidden on all devices, so no further action is needed. Next, we use the Properties & Events panel to add three event handlers to the image element. The onMouseIn and onMouseOut events are used for desktop devices. The onClick event is used for mobile devices. As mentioned above, we need to first import the wix-window-frontend API. Copy 1 import wixWindowFrontend from 'wix-window-frontend'; Now, we write the code for the event handlers. In the mouseIn event handler, we write a line of code that shows the hidden text element. In the mouseOut event handler, we write a line of code that hides the text element.  Copy 1 export function image_mouseIn(event) { 2 $w('#hiddenText').show("fade"); 3 } 4 5 export function image_mouseOut(event) { 6 $w('#hiddenText').hide("fade"); 7 } Since the mouseIn and mouseOut events are never fired on mobile devices, we need a different way to show and hide the text on those devices. That's what we use the onClick event handler for. In the onClick event handler, we toggle whether the text element is hidden or shown. Since we only want this to happen on mobile devices, we wrap the toggle code inside a conditional that checks the device type. Because hovering doesn't apply to phones as well as tablets, we use an or (||) to test for both cases. Copy 1 export function image_click(event) { 2 if(wixWindowFrontend.formFactor === "Mobile" || wixWindowFrontend.formFactor === "Tablet"){ 3 if($w('#hiddenText').hidden){ 4 $w('#hiddenText').show("fade"); 5 } 6 else{ 7 $w('#hiddenText').hide("fade"); 8 } 9 } 10 } Testing If you want to test how your code works on mobile devices, you'll need to publish your site and view the published version on a mobile device or in a mobile device emulator.  Warning: If you preview your site, it will always behave as if it is being viewed on a desktop device, even if you preview from the mobile editor. To test your site on a desktop machine as if it is being viewed on a mobile device: 1. Publish your site. 2. View the published site. 3. Open your browser's developer tools. 4. Use your browser's developer tools to emulate a mobile device. (This is usually called something like Toggle device toolbar or Responsive Design Mode and turned on using an icon similar to this ). 5. Refresh the page so your site now loads as if it were on a mobile device. Was this helpful? Yes No
__label__pos
0.794568
Export (0) Print Expand All Collapse the table of content Expand the table of content Expand Minimize Get-SCOMGroup Updated: January 26, 2014 Applies To: System Center 2012 R2 Operations Manager Get-SCOMGroup Gets Operations Manager groups. Syntax Parameter Set: Empty Get-SCOMGroup [-ComputerName <String[]> ] [-Credential <PSCredential> ] [-SCSession <Connection[]> ] [ <CommonParameters>] Parameter Set: FromGroupDisplayName Get-SCOMGroup [-DisplayName] <String[]> [-ComputerName <String[]> ] [-Credential <PSCredential> ] [-SCSession <Connection[]> ] [ <CommonParameters>] Parameter Set: FromGroupGuid Get-SCOMGroup [-Id] <Guid[]> [-ComputerName <String[]> ] [-Credential <PSCredential> ] [-SCSession <Connection[]> ] [ <CommonParameters>] Detailed Description The Get-SCOMGroup cmdlet gets System Center 2012 – Operations Manager groups. You can specify which groups to get by name or ID. Because a group object is a type of class instance object, it can be passed to the Instance parameter of another cmdlet, such as the Enable-SCOMDiscovery cmdlet. By default, this cmdlet uses the active persistent connection to a management group. Use the SCSession parameter to specify a different persistent connection. You can create a temporary connection to a management group by using the ComputerName and Credential parameters. For more information, type Get-Help about_OpsMgr_Connections. Parameters -ComputerName<String[]> Specifies an array of names of computers. The cmdlet establishes temporary connections with management groups for these computers. You can use NetBIOS names, IP addresses, or fully qualified domain names (FQDNs). To specify the local computer, type the computer name, localhost, or a dot (.). The System Center Data Access service must be running on the computer. If you do not specify a computer, the cmdlet uses the computer for the current management group connection. Aliases none Required? false Position? named Default Value localhost Accept Pipeline Input? true (ByValue) Accept Wildcard Characters? false -Credential<PSCredential> Specifies a PSCredential object for the management group connection. To obtain a PSCredential object, use the Get-Credential cmdlet. For more information, type Get-Help Get-Credential. If you specify a computer in the ComputerName parameter, use an account that has access to that computer. The default is the current user. Aliases none Required? false Position? named Default Value none Accept Pipeline Input? true (ByValue) Accept Wildcard Characters? false -DisplayName<String[]> Specifies an array of display names. Values for this parameter depend on which localized management packs you import and the locale of the user that runs Windows PowerShell. Aliases none Required? true Position? 1 Default Value none Accept Pipeline Input? true (ByValue) Accept Wildcard Characters? true -Id<Guid[]> Specifies an array of GUIDs of groups. An SCOMGroup object contains a GUID as its Id property. Aliases none Required? true Position? 1 Default Value none Accept Pipeline Input? true (ByValue) Accept Wildcard Characters? false -SCSession<Connection[]> Specifies an array of Connection objects. To obtain a Connection object, use the Get-SCOMManagementGroupConnection cmdlet. Aliases none Required? false Position? named Default Value none Accept Pipeline Input? true (ByValue) Accept Wildcard Characters? false <CommonParameters> This cmdlet supports the common parameters: -Verbose, -Debug, -ErrorAction, -ErrorVariable, -OutBuffer, and -OutVariable. For more information, see about_CommonParameters. Inputs The input type is the type of the objects that you can pipe to the cmdlet. Outputs The output type is the type of the objects that the cmdlet emits. Examples Example 1: Get groups by using display names This command gets all groups that have a display name that includes Agent and all groups that have a display name that includes Windows. PS C:\> Get-SCOMGroup -DisplayName "*Agent*","*Windows*" Example 2: Get a group by using an ID This command gets the group that has an Id of 7413b06b-a95b-4ae3-98f2-dac9ff76dabd. PS C:\> Get-SCOMGroup -Id 7413b06b-a95b-4ae3-98f2-dac9ff76dabd Related topics   Was this page helpful? (1500 characters remaining) Thank you for your feedback Show: © 2015 Microsoft
__label__pos
0.999667
Citrix 1Y0-A17 Real Exam Questions Implementing Citrix XenDesktop 4 The Implementing Citrix XenDesktop 4 (exam A17) is divided into the following sections: • Defining XenDesktop Architecture • Identifying Pre-Install Considerations • Installing XenDesktop • Building Provisioning Services vDisks • Configuring Desktop Delivery Controller for XenDesktop • Delivering Desktops • Managing XenDesktop • Troubleshooting Desktop Images for Virtualization • Delivering and Managing Applications QUESTION 1 What is the role of the pool management service? A.Clones virtual machines B.Turns virtual machines on and off C.Streams the virtual machine to the users D.Assigns users to the correct virtual machine Answer: B QUESTION 2 Which component of the XenDesktop architecture uses Microsoft Active Directory to find the controllers that constitute a farm? A.Desktop Receiver B.Domain Controller C.Virtual Desktop Agent D.Desktop Delivery Controller Answer: C QUESTION 3 An administrator needs to allow each help desk worker in an environment access to one virtual desktop Which two types of devices should the administrator configure to allow each help desk worker to connect to a virtual desktop? (Choose two) A.Thin clients B.Fat client devices C.Remote computers D.Repurposed computers Answer: AD QUESTION 4 N administrator needs to configure pooled desktops for a large number of users and would like to automate this process To complete this task the administrator will need to use ___________ and ___________ (Choose the two correct options to complete the sentence) A.XenServer B.XenCenter C.Provisioning Services D.XenDesktop SetupWizard E.Delivery Services console Answer: CD QUESTION 5 Scenario: A user is attempting to access a virtual deskop The Web interface sent an .ICA file, but no ICA connection was established Which component of the XenDeskop architecture has failed to communicate with the virtual desktop? A.DataStore B.Desktop Receiver C.Virtual Desktop Agent D.Desktop Delivery Controller Answer: B QUESTION 6 Which issue can the Profile Management feature address in a XenDesktop implementation? A.Inability of users to stch between multiple profiles B.Inability of settings to be saved against mandatory profiles C.Profile bloat because extraneous files are copied to the proNe D.Printing failure because printer properties are not updated at each logon Answer: C QUESTION 7 An administrator needs to configure existing hardware that will be repurposed to support connections to virtual desktops How should the administrator allow users to connect to their desktops in this environment? A.Using domain-joined Windows XP Embedded on a LAN and connecting through a XenDesktop Ser4lces site to a virtual desktop in full-screen-only mode B.Using a Windows XP device on a LAN and connecting through a XenDesktop Web site to a virtual desktop with a Citnx Desktop Receiver window and a toolbar C.Using a non-domain-joined Windows XP Embedded device on a LAN and connecting through a Desktop Appliance Connector site to a virtual desktop in full-screen-only mode D.Using a Windows XP device that connects remotely using Access Gateway through a XenDesktop Web site to a virtual desktop with a Citnx Desktop Receiver window and a toolbar Answer: A QUESTION 8 An administrator changed the settings on the Desktop Delivery Controller so that it uses an assigned static IP address instead of DHCP. Which result, when venfied by the administrator, shows that the change has been made and ensures that the virtual machines can successfully register? A.The firewall ports have been changed B.The Farm GUID has been added to the registry C.The DNS has been updated with the new IP address D.The Desktop Delivery Controller Service has been restarted Answer: C QUESTION 9 An administrator needs to configure the XenDesktop environment as depicted in the screenshot Click the Exhibit’ button to view the screenshot Which two Active Directory trust relationships are required for the environment based on the screenshot? (Choose two) A.One way trust from Miami to Dublin B.One way trust from Miami to New York C.One way trust from Dublin to New York D.Two way trust between Dublin and Miami E.Two way trust between Miami and New York F.Two way trust between New York and Dublin Answer: DE QUESTION 10 Scenario: A Provisioning Services server will only be used to deliver a single Vista or Windows 7 desktop in a XenDesktop enwonment The administrator needs to set storage requirements on the hard drive that will contain the vDisk for Provisioning Services According to best practices, what is the minimum amount of storage required on the hard dnve? A.20GB B.30GB C.40GB D.50GB Answer: A Download  |  Password: certificatexam.com 2 comments
__label__pos
0.674306
inlabru implementation of the rational SPDE approach David Bolin and Alexandre B. Simas 2023-06-25 Introduction In this vignette we will present the inlabru implementation of the covariance-based rational SPDE approach. For further technical details on the covariance-based approach, see the Rational approximation with the rSPDE package vignette and Bolin, Simas, and Xiong (2023). We begin by providing a step-by-step illustration on how to use our implementation. To this end we will consider a real world data set that consists of precipitation measurements from the Paraná region in Brazil. After the initial model fitting, we will show how to change some parameters of the model. In the end, we will also provide an example in which we have replicates. The examples in this vignette are the same as those in the R-INLA implementation of the rational SPDE approach vignette. As in that case, it is important to mention that one can improve the performance by using the PARDISO solver. Please, go to https://www.pardiso-project.org/r-inla/#license to apply for a license. Also, use inla.pardiso() for instructions on how to enable the PARDISO sparse library. An example with real data To illustrate our implementation of rSPDE in inlabru we will consider a dataset available in R-INLA. This data has also been used to illustrate the SPDE approach, see for instance the book Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA and also the vignette Spatial Statistics using R-INLA and Gaussian Markov random fields. See also Lindgren, Rue, and Lindström (2011) for theoretical details on the standard SPDE approach. The data consist of precipitation measurements from the Paraná region in Brazil and were provided by the Brazilian National Water Agency. The data were collected at 616 gauge stations in Paraná state, south of Brazil, for each day in 2011. An rSPDE model for precipitation We will follow the vignette Spatial Statistics using R-INLA and Gaussian Markov random fields. As precipitation data are always positive, we will assume it is Gamma distributed. R-INLA uses the following parameterization of the Gamma distribution, \[\Gamma(\mu, \phi): \pi (y) = \frac{1}{\Gamma(\phi)} \left(\frac{\phi}{\mu}\right)^{\phi} y^{\phi - 1} \exp\left(-\frac{\phi y}{\mu}\right) .\] In this parameterization, the distribution has expected value \(E(x) = \mu\) and variance \(V(x) = \mu^2/\phi\), where \(1/\phi\) is a dispersion parameter. In this example \(\mu\) will be modelled using a stochastic model that includes both covariates and spatial structure, resulting in the latent Gaussian model for the precipitation measurements \[\begin{align} y_i\mid \mu(s_i), \theta &\sim \Gamma(\mu(s_i),c\phi)\\ \log (\mu(s)) &= \eta(s) = \sum_k f_k(c_k(s))+u(s)\\ \theta &\sim \pi(\theta) \end{align},\] where \(y_i\) denotes the measurement taken at location \(s_i\), \(c_k(s)\) are covariates, \(u(s)\) is a mean-zero Gaussian Matérn field, and \(\theta\) is a vector containing all parameters of the model, including smoothness of the field. That is, by using the rSPDE model we will also be able to estimate the smoothness of the latent field. Examining the data We will be using inlabru. The inlabru package is available on CRAN and also on GitHub. We begin by loading some libraries we need to get the data and build the plots. library(ggplot2) library(INLA) library(inlabru) library(splancs) library(viridis) Let us load the data and the border of the region data(PRprec) data(PRborder) The data frame contains daily measurements at 616 stations for the year 2011, as well as coordinates and altitude information for the measurement stations. We will not analyze the full spatio-temporal data set, but instead look at the total precipitation in January, which we calculate as Y <- rowMeans(PRprec[, 3 + 1:31]) In the next snippet of code, we extract the coordinates and altitudes and remove the locations with missing values. ind <- !is.na(Y) Y <- Y[ind] coords <- as.matrix(PRprec[ind, 1:2]) alt <- PRprec$Altitude[ind] Let us build a plot for the precipitations: ggplot() + geom_point(aes( x = coords[, 1], y = coords[, 2], colour = Y ), size = 2, alpha = 1) + geom_path(aes(x = PRborder[, 1], y = PRborder[, 2])) + geom_path(aes(x = PRborder[1034:1078, 1], y = PRborder[ 1034:1078, 2 ]), colour = "red") + scale_color_viridis() The red line in the figure shows the coast line, and we expect the distance to the coast to be a good covariate for precipitation. This covariate is not available, so let us calculate it for each observation location: seaDist <- apply(spDists(coords, PRborder[1034:1078, ], longlat = TRUE ), 1, min) Now, let us plot the precipitation as a function of the possible covariates: par(mfrow = c(2, 2)) plot(coords[, 1], Y, cex = 0.5, xlab = "Longitude") plot(coords[, 2], Y, cex = 0.5, xlab = "Latitude") plot(seaDist, Y, cex = 0.5, xlab = "Distance to sea") plot(alt, Y, cex = 0.5, xlab = "Altitude") par(mfrow = c(1, 1)) Creating the rSPDE model To use the inlabru implementation of the rSPDE model we need to load the functions: library(rSPDE) To create a rSPDE model, one would the rspde.matern() function in a similar fashion as one would use the inla.spde2.matern() function. Mesh We can use R-INLA for creating the mesh. Let us create a mesh which is based on a non-convex hull to avoid adding many small triangles outside the domain of interest: prdomain <- inla.nonconvex.hull(coords, -0.03, -0.05, resolution = c(100, 100)) prmesh <- inla.mesh.2d(boundary = prdomain, max.edge = c(0.45, 1), cutoff = 0.2) plot(prmesh, asp = 1, main = "") lines(PRborder, col = 3) points(coords[, 1], coords[, 2], pch = 19, cex = 0.5, col = "red") Setting up the data frame In place of a inla.stack, we can set up a data.frame() to use inlabru. We refer the reader to vignettes in https://inlabru-org.github.io/inlabru/index.html for further details. prdata <- data.frame(long = coords[,1], lat = coords[,2], seaDist = inla.group(seaDist), y = Y) coordinates(prdata) <- c("long","lat") Setting up the rSPDE model To set up an rSPDEmodel, all we need is the mesh. By default it will assume that we want to estimate the smoothness parameter \(\nu\) and to do a covariance-based rational approximation of order 2. Later in this vignette we will also see other options for setting up rSPDE models such as keeping the smoothness parameter fixed and/or increasing the order of the covariance-based rational approximation. Therefore, to set up a model all we have to do is use the rspde.matern() function: rspde_model <- rspde.matern(mesh = prmesh) Notice that this function is very reminiscent of R-INLA’s inla.spde2.matern() function. We will assume the following linkage between model components and observations \[\eta(s) \sim A x(s) + A \text{ Intercept} + \text{seaDist}.\] \(\eta(s)\) will then be used in the observation-likelihood, \[y_i\mid \eta(s_i),\theta \sim \Gamma(\exp(\eta (s_i)), c\phi).\] Model fitting We will build a model using the distance to the sea \(x_i\) as a covariate through an improper CAR(1) model with \(\beta_{ij}=1(i\sim j)\), which R-INLA calls a random walk of order 1. We will fit it in inlabru’s style: cmp <- y ~ Intercept(1) + distSea(seaDist, model="rw1") + field(coordinates, model = rspde_model) To fit the model we simply use the bru() function: rspde_fit <- bru(cmp, data = prdata, family = "Gamma", options = list( control.inla = list(int.strategy = "eb"), verbose = FALSE) ) inlabru results We can look at some summaries of the posterior distributions for the parameters, for example the fixed effects (i.e. the intercept) and the hyper-parameters (i.e. dispersion in the gamma likelihood, the precision of the RW1, and the parameters of the spatial field): summary(rspde_fit) ## inlabru version: 2.8.0 ## INLA version: 23.06.25 ## Components: ## Intercept: main = linear(1), group = exchangeable(1L), replicate = iid(1L) ## distSea: main = rw1(seaDist), group = exchangeable(1L), replicate = iid(1L) ## field: main = cgeneric(coordinates), group = exchangeable(1L), replicate = iid(1L) ## Likelihoods: ## Family: 'Gamma' ## Data class: 'SpatialPointsDataFrame' ## Predictor: y ~ . ## Time used: ## Pre = 2.24, Running = 154, Post = 0.153, Total = 157 ## Fixed effects: ## mean sd 0.025quant 0.5quant 0.975quant mode kld ## Intercept 1.915 0.956 0.042 1.915 3.788 1.915 0 ## ## Random effects: ## Name Model ## distSea RW1 model ## field CGeneric ## ## Model hyperparameters: ## mean sd 0.025quant ## Precision parameter for the Gamma observations 13.493 9.20e-01 11.754 ## Precision for distSea 29249.469 2.33e+04 3938.534 ## Theta1 for field 0.816 2.51e-01 0.323 ## Theta2 for field -2.180 1.03e+00 -4.204 ## Theta3 for field -2.939 1.27e+00 -5.478 ## 0.5quant 0.975quant mode ## Precision parameter for the Gamma observations 13.469 15.374 13.433 ## Precision for distSea 22931.599 89990.938 11348.818 ## Theta1 for field 0.816 1.311 0.815 ## Theta2 for field -2.184 -0.134 -2.200 ## Theta3 for field -2.925 -0.473 -2.871 ## ## Deviance Information Criterion (DIC) ...............: 2496.32 ## Deviance Information Criterion (DIC, saturated) ....: 703.68 ## Effective number of parameters .....................: 89.74 ## ## Watanabe-Akaike information criterion (WAIC) ...: 2498.14 ## Effective number of parameters .................: 80.45 ## ## Marginal log-Likelihood: -1258.41 ## is computed ## Posterior summaries for the linear predictor and the fitted values are computed ## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)') Let \(\theta_1 = \textrm{Theta1}\), \(\theta_2=\textrm{Theta2}\) and \(\theta_3=\textrm{Theta3}\). In terms of the SPDE \[(\kappa^2 I - \Delta)^{\alpha/2}(\tau u) = \mathcal{W},\] where \(\alpha = \nu + d/2\), we have that \[\tau = \exp(\theta_1),\quad \kappa = \exp(\theta_2), \] and by default \[\nu = 4\Big(\frac{\exp(\theta_3)}{1+\exp(\theta_3)}\Big).\] The number 4 comes from the upper bound for \(\nu\), which is discussed in R-INLA implementation of the rational SPDE approach vignette. In general, we have \[\nu = \nu_{UB}\Big(\frac{\exp(\theta_3)}{1+\exp(\theta_3)}\Big),\] where \(\nu_{UB}\) is the value of the upper bound for the smoothness parameter \(\nu\). Another choice for prior for \(\nu\) is a truncated lognormal distribution and is also discussed in R-INLA implementation of the rational SPDE approach vignette. inlabru results in the original scale We can obtain outputs with respect to parameters in the original scale by using the function rspde.result(): result_fit <- rspde.result(rspde_fit, "field", rspde_model) summary(result_fit) ## mean sd 0.025quant 0.5quant 0.975quant mode ## tau 2.333320 0.589969 1.3876900 2.261240 3.692530 2.1247100 ## kappa 0.191468 0.249204 0.0151749 0.112532 0.859865 0.0388295 ## nu 0.346499 0.399316 0.0169371 0.204083 1.520210 0.0417286 We can also plot the posterior densities. To this end we will use the gg_df() function, which creates ggplot2 user-friendly data frames: posterior_df_fit <- gg_df(result_fit) ggplot(posterior_df_fit) + geom_line(aes(x = x, y = y)) + facet_wrap(~parameter, scales = "free") + labs(y = "Density") We can also obtain the summary on a different parameterization by setting the parameterization argument on the rspde.result() function: result_fit_matern <- rspde.result(rspde_fit, "field", rspde_model, parameterization = "matern") summary(result_fit_matern) ## mean sd 0.025quant 0.5quant 0.975quant mode ## std.dev 213.500000 4710.740000 3.8829300 91.738200 271.56500 -0.0510998 ## range 23.704200 43.528300 -1.5094900 10.977300 125.53600 4.2864400 ## nu 0.346499 0.399316 0.0169371 0.204083 1.52021 0.0417286 In a similar manner, we can obtain posterior plots on the matern parameterization: posterior_df_fit_matern <- gg_df(result_fit_matern) ggplot(posterior_df_fit_matern) + geom_line(aes(x = x, y = y)) + facet_wrap(~parameter, scales = "free") + labs(y = "Density") Predictions Let us now obtain predictions (i.e. do kriging) of the expected precipitation on a dense grid in the region. We begin by creating the grid in which we want to do the predictions. To this end, we can use the inla.mesh.projector() function: nxy <- c(150, 100) projgrid <- inla.mesh.projector(prmesh, xlim = range(PRborder[, 1]), ylim = range(PRborder[, 2]), dims = nxy ) This lattice contains 150 × 100 locations. One can easily change the resolution of the kriging prediction by changing nxy. Let us find the cells that are outside the region of interest so that we do not plot the estimates there. xy.in <- inout(projgrid$lattice$loc, cbind(PRborder[, 1], PRborder[, 2])) Let us plot the locations that we will do prediction: coord.prd <- projgrid$lattice$loc[xy.in, ] plot(coord.prd, type = "p", cex = 0.1) lines(PRborder) points(coords[, 1], coords[, 2], pch = 19, cex = 0.5, col = "red") Let us now create a data.frame() of the coordinates: coord.prd.df <- data.frame(x1 = coord.prd[,1], x2 = coord.prd[,2]) coordinates(coord.prd.df) <- c("x1", "x2") Since we are using distance to the sea as a covariate, we also have to calculate this covariate for the prediction locations. Finally, we add the prediction location to our prediction data.frame(), namely, coord.prd.df: seaDist.prd <- apply(spDists(coord.prd, PRborder[1034:1078, ], longlat = TRUE ), 1, min) coord.prd.df$seaDist <- seaDist.prd pred_obs <- predict(rspde_fit, coord.prd.df, ~exp(Intercept + field + distSea)) Let us now build the data frame with the results: pred_df <- pred_obs@data pred_df <- cbind(pred_df, pred_obs@coords) Finally, we plot the results. First the predicted mean: ggplot(pred_df, aes(x = x1, y = x2, fill = mean)) + geom_raster() + scale_fill_viridis() Then, the std. deviations: ggplot(pred_df, aes(x = x1, y = x2, fill = sd)) + geom_raster() + scale_fill_viridis() An example with replicates For this example we will simulate a data with replicates. We will use the same example considered in the Rational approximation with the rSPDE package vignette (the only difference is the way the data is organized). We also refer the reader to this vignette for a description of the function matern.operators(), along with its methods (for instance, the simulate() method). Simulating the data Let us consider a simple Gaussian linear model with 30 independent replicates of a latent spatial field \(x(\mathbf{s})\), observed at the same \(m\) locations, \(\{\mathbf{s}_1 , \ldots , \mathbf{s}_m \}\), for each replicate. For each \(i = 1,\ldots,m,\) we have \[\begin{align} y_i &= x_1(\mathbf{s}_i)+\varepsilon_i,\\ \vdots &= \vdots\\ y_{i+29m} &= x_{30}(\mathbf{s}_i) + \varepsilon_{i+29m}, \end{align}\] where \(\varepsilon_1,\ldots,\varepsilon_{30m}\) are iid normally distributed with mean 0 and standard deviation 0.1. We use the basis function representation of \(x(\cdot)\) to define the \(A\) matrix linking the point locations to the mesh. We also need to account for the fact that we have 30 replicates at the same locations. To this end, the \(A\) matrix we need can be generated by inla.spde.make.A() function. The reason being that we are sampling \(x(\cdot)\) directly and not the latent vector described in the introduction of the Rational approximation with the rSPDE package vignette. We begin by creating the mesh: m <- 200 loc_2d_mesh <- matrix(runif(m * 2), m, 2) mesh_2d <- inla.mesh.2d( loc = loc_2d_mesh, cutoff = 0.05, offset = c(0.1, 0.4), max.edge = c(0.05, 0.5) ) plot(mesh_2d, main = "") points(loc_2d_mesh[, 1], loc_2d_mesh[, 2]) We then compute the \(A\) matrix, which is needed for simulation, and connects the observation locations to the mesh: n.rep <- 30 A <- inla.spde.make.A( mesh = mesh_2d, loc = loc_2d_mesh, index = rep(1:m, times = n.rep), repl = rep(1:n.rep, each = m) ) Notice that for the simulated data, we should use the \(A\) matrix from inla.spde.make.A() function. We will now simulate a latent process with standard deviation \(\sigma=1\) and range \(0.1\). We will use \(\nu=0.5\) so that the model has an exponential covariance function. To this end we create a model object with the matern.operators() function: nu <- 0.5 sigma <- 1 range <- 0.1 kappa <- sqrt(8 * nu) / range tau <- sqrt(gamma(nu) / (sigma^2 * kappa^(2 * nu) * (4 * pi) * gamma(nu + 1))) d <- 2 operator_information <- matern.operators( mesh = mesh_2d, nu = nu, range = range, sigma = sigma, m = 2, parameterization = "matern" ) More details on this function can be found at the Rational approximation with the rSPDE package vignette. To simulate the latent process all we need to do is to use the simulate() method on the operator_information object. We then obtain the simulated data \(y\) by connecting with the \(A\) matrix and adding the gaussian noise. set.seed(1) u <- simulate(operator_information, nsim = n.rep) y <- as.vector(A %*% as.vector(u)) + rnorm(m * n.rep) * 0.1 The first replicate of the simulated random field as well as the observation locations are shown in the following figure. proj <- inla.mesh.projector(mesh_2d, dims = c(100, 100)) df_field <- data.frame(x = proj$lattice$loc[,1], y = proj$lattice$loc[,2], field = as.vector(inla.mesh.project(proj, field = as.vector(u[, 1]))), type = "field") df_loc <- data.frame(x = loc_2d_mesh[, 1], y = loc_2d_mesh[, 2], field = y[1:m], type = "locations") df_plot <- rbind(df_field, df_loc) ggplot(df_plot) + aes(x = x, y = y, fill = field) + facet_wrap(~type) + xlim(0,1) + ylim(0,1) + geom_raster(data = df_field) + geom_point(data = df_loc, aes(colour = field), show.legend = FALSE) + scale_fill_viridis() + scale_colour_viridis() Fitting the inlabru rSPDE model Let us then use the rational SPDE approach to fit the data. We begin by creating the model object. rspde_model.rep <- rspde.matern(mesh = mesh_2d, parameterization = "spde") Let us now create the data.frame() and the vector with the replicates indexes: rep.df <- data.frame(y = y, x1 = rep(loc_2d_mesh[,1], n.rep), x2 = rep(loc_2d_mesh[,2], n.rep)) coordinates(rep.df) <- c("x1", "x2") repl <- rep(1:n.rep, each=m) Let us create the component and fit. It is extremely important not to forget the replicate when fitting model with the bru() function. It will not produce warning and might fit some meaningless model. cmp.rep <- y ~ -1 + field(coordinates, model = rspde_model.rep, replicate = repl ) rspde_fit.rep <- bru(cmp.rep, data = rep.df, family = "gaussian" ) We can get the summary: summary(rspde_fit.rep) ## inlabru version: 2.8.0 ## INLA version: 23.06.25 ## Components: ## field: main = cgeneric(coordinates), group = exchangeable(1L), replicate = iid(repl) ## Likelihoods: ## Family: 'gaussian' ## Data class: 'SpatialPointsDataFrame' ## Predictor: y ~ . ## Time used: ## Pre = 2.03, Running = 451, Post = 12.1, Total = 465 ## Random effects: ## Name Model ## field CGeneric ## ## Model hyperparameters: ## mean sd 0.025quant 0.5quant ## Precision for the Gaussian observations 99.81 5.214 90.85 99.42 ## Theta1 for field -2.92 0.088 -3.12 -2.91 ## Theta2 for field 3.05 0.036 2.98 3.05 ## Theta3 for field -1.69 0.035 -1.75 -1.69 ## 0.975quant mode ## Precision for the Gaussian observations 111.28 97.88 ## Theta1 for field -2.78 -2.87 ## Theta2 for field 3.12 3.05 ## Theta3 for field -1.62 -1.70 ## ## Deviance Information Criterion (DIC) ...............: -5740.20 ## Deviance Information Criterion (DIC, saturated) ....: 10796.48 ## Effective number of parameters .....................: 4800.15 ## ## Watanabe-Akaike information criterion (WAIC) ...: -6756.51 ## Effective number of parameters .................: 2770.11 ## ## Marginal log-Likelihood: -4549.58 ## is computed ## Posterior summaries for the linear predictor and the fitted values are computed ## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)') and the summary in the user’s scale: result_fit_rep <- rspde.result(rspde_fit.rep, "field", rspde_model.rep) summary(result_fit_rep) ## mean sd 0.025quant 0.5quant 0.975quant mode ## tau 0.0539524 0.00458441 0.0442093 0.0543578 0.0617612 0.0562702 ## kappa 21.2080000 0.75174100 19.7644000 21.1971000 22.7163000 21.1785000 ## nu 0.6232650 0.01824090 0.5916070 0.6216930 0.6627620 0.6163930 result_df <- data.frame( parameter = c("tau", "kappa", "nu"), true = c(tau, kappa, nu), mean = c( result_fit_rep$summary.tau$mean, result_fit_rep$summary.kappa$mean, result_fit_rep$summary.nu$mean ), mode = c( result_fit_rep$summary.tau$mode, result_fit_rep$summary.kappa$mode, result_fit_rep$summary.nu$mode ) ) print(result_df) ## parameter true mean mode ## 1 tau 0.08920621 0.05395238 0.05627016 ## 2 kappa 20.00000000 21.20801526 21.17845745 ## 3 nu 0.50000000 0.62326490 0.61639279 Let us also obtain the summary on the matern parameterization: result_fit_rep_matern <- rspde.result(rspde_fit.rep, "field", rspde_model.rep, parameterization = "matern") summary(result_fit_rep_matern) ## mean sd 0.025quant 0.5quant 0.975quant mode ## std.dev 1.065840 0.01340710 1.0406200 1.065710 1.092460 1.067150 ## range 0.105136 0.00446221 0.0966347 0.105014 0.114198 0.104791 ## nu 0.623265 0.01824090 0.5916070 0.621693 0.662762 0.616393 result_df_matern <- data.frame( parameter = c("std_dev", "range", "nu"), true = c(sigma, range, nu), mean = c( result_fit_rep_matern$summary.std.dev$mean, result_fit_rep_matern$summary.range$mean, result_fit_rep_matern$summary.nu$mean ), mode = c( result_fit_rep$summary.std.dev$mode, result_fit_rep$summary.range$mode, result_fit_rep$summary.nu$mode ) ) print(result_df_matern) ## parameter true mean mode ## 1 std_dev 1.0 1.0658377 0.6163928 ## 2 range 0.1 0.1051357 0.6163928 ## 3 nu 0.5 0.6232649 0.6163928 An example with a non-stationary model Our goal now is to show how one can fit model with non-stationary \(\sigma\) (std. deviation) and non-stationary \(\rho\) (a range parameter). One can also use the parameterization in terms of non-stationary SPDE parameters \(\kappa\) and \(\tau\). For this example we will consider simulated data. Simulating the data Let us consider a simple Gaussian linear model with a latent spatial field \(x(\mathbf{s})\), defined on the rectangle \((0,10) \times (0,5)\), where the std. deviation and range parameter satisfy the following log-linear regressions: \[\begin{align} \log(\sigma(\mathbf{s})) &= \theta_1 + \theta_3 b(\mathbf{s}),\\ \log(\rho(\mathbf{s})) &= \theta_2 + \theta_3 b(\mathbf{s}), \end{align}\] where \(b(\mathbf{s}) = (s_1-5)/10\). We assume the data is observed at \(m\) locations, \(\{\mathbf{s}_1 , \ldots , \mathbf{s}_m \}\). For each \(i = 1,\ldots,m,\) we have \[y_i = x_1(\mathbf{s}_i)+\varepsilon_i,\] where \(\varepsilon_1,\ldots,\varepsilon_{m}\) are iid normally distributed with mean 0 and standard deviation 0.1. We begin by defining the domain and creating the mesh: rec_domain <- cbind(c(0, 1, 1, 0, 0) * 10, c(0, 0, 1, 1, 0) * 5) mesh <- inla.mesh.2d(loc.domain = rec_domain, cutoff = 0.1, max.edge = c(0.5, 1.5), offset = c(0.5, 1.5)) We follow the same structure as INLA. However, INLA only allows one to specify B.tau and B.kappa matrices, and, in INLA, if one wants to parameterize in terms of range and standard deviation one needs to do it manually. Here we provide the option to directly provide the matrices B.sigma and B.range. The usage of the matrices B.tau and B.kappa are identical to the corresponding ones in inla.spde2.matern() function. The matrices B.sigma and B.range work in the same way, but they parameterize the stardard deviation and range, respectively. The columns of the B matrices correspond to the same parameter. The first column does not have any parameter to be estimated, it is a constant column. So, for instance, if one wants to share a parameter with both sigma and range (or with both tau and kappa), one simply let the corresponding column to be nonzero on both B.sigma and B.range (or on B.tau and B.kappa). We will assume \(\nu = 0.8\), \(\theta_1 = 0, \theta_2 = 1\) and \(\theta_3=1\). Let us now build the model to obtain the sample with the spde.matern.operators() function: nu <- 0.8 true_theta <- c(0,1, 1) B.sigma = cbind(0, 1, 0, (mesh$loc[,1] - 5) / 10) B.range = cbind(0, 0, 1, (mesh$loc[,1] - 5) / 10) # SPDE model op_cov_ns <- spde.matern.operators(mesh = mesh, theta = true_theta, nu = nu, B.sigma = B.sigma, B.range = B.range, m = 2, parameterization = "matern") Let us now sample the data with the simulate() method: u <- as.vector(simulate(op_cov_ns, seed = 123)) Let us now obtain 600 random locations on the rectangle and compute the \(A\) matrix: m <- 600 loc_mesh <- cbind(runif(m) * 10, runif(m) * 5) A <- inla.spde.make.A( mesh = mesh, loc = loc_mesh ) We can now generate the response vector y: y <- as.vector(A %*% as.vector(u)) + rnorm(m) * 0.1 Fitting the inlabru rSPDE model Let us then use the rational SPDE approach to fit the data. We begin by creating the model object. We are creating a new one so that we do not start the estimation at the true values. rspde_model_nonstat <- rspde.matern(mesh = mesh, B.sigma = B.sigma, B.range = B.range, parameterization = "matern") Let us now create the data.frame() and the vector with the replicates indexes: nonstat_df <- data.frame(y = y, x1 = loc_mesh[,1], x2 = loc_mesh[,2]) coordinates(nonstat_df) <- c("x1", "x2") Let us create the component and fit. It is extremely important not to forget the replicate when fitting model with the bru() function. It will not produce warning and might fit some meaningless model. cmp_nonstat <- y ~ -1 + field(coordinates, model = rspde_model_nonstat ) rspde_fit_nonstat <- bru(cmp_nonstat, data = nonstat_df, family = "gaussian", options = list(verbose = FALSE) ) We can get the summary: summary(rspde_fit_nonstat) ## inlabru version: 2.8.0 ## INLA version: 23.06.25 ## Components: ## field: main = cgeneric(coordinates), group = exchangeable(1L), replicate = iid(1L) ## Likelihoods: ## Family: 'gaussian' ## Data class: 'SpatialPointsDataFrame' ## Predictor: y ~ . ## Time used: ## Pre = 1.72, Running = 103, Post = 0.679, Total = 106 ## Random effects: ## Name Model ## field CGeneric ## ## Model hyperparameters: ## mean sd 0.025quant 0.5quant ## Precision for the Gaussian observations 94.176 9.908 76.090 93.690 ## Theta1 for field -0.004 0.156 -0.306 -0.007 ## Theta2 for field 0.977 0.207 0.585 0.972 ## Theta3 for field 0.909 0.321 0.318 0.898 ## Theta4 for field -1.227 0.165 -1.563 -1.224 ## 0.975quant mode ## Precision for the Gaussian observations 115.055 92.787 ## Theta1 for field 0.310 -0.016 ## Theta2 for field 1.402 0.948 ## Theta3 for field 1.582 0.836 ## Theta4 for field -0.913 -1.208 ## ## Deviance Information Criterion (DIC) ...............: -691.18 ## Deviance Information Criterion (DIC, saturated) ....: 937.37 ## Effective number of parameters .....................: 333.28 ## ## Watanabe-Akaike information criterion (WAIC) ...: -711.02 ## Effective number of parameters .................: 237.03 ## ## Marginal log-Likelihood: -25.16 ## is computed ## Posterior summaries for the linear predictor and the fitted values are computed ## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)') We can obtain outputs with respect to parameters in the original scale by using the function rspde.result(): result_fit_nonstat <- rspde.result(rspde_fit_nonstat, "field", rspde_model_nonstat) summary(result_fit_nonstat) ## mean sd 0.025quant 0.5quant 0.975quant mode ## Theta1.matern -0.004457 0.156346 -0.306205 -0.00657588 0.310192 -0.0162841 ## Theta2.matern 0.977483 0.207468 0.584839 0.97237100 1.402110 0.9483780 ## Theta3.matern 0.909403 0.321295 0.318317 0.89805500 1.582380 0.8364450 ## nu 0.911897 0.114643 0.694702 0.90995600 1.143480 0.9097780 Let us compare the mean to the true values of the parameters: summ_res_nonstat <- summary(result_fit_nonstat) result_df <- data.frame( parameter = result_fit_nonstat$params, true = c(true_theta, nu), mean = summ_res_nonstat[,1], mode = summ_res_nonstat[,6] ) print(result_df) ## parameter true mean mode ## 1 Theta1.matern 0.0 -0.004457 -0.0162841 ## 2 Theta2.matern 1.0 0.977483 0.9483780 ## 3 Theta3.matern 1.0 0.909403 0.8364450 ## 4 nu 0.8 0.911897 0.9097780 We can also plot the posterior densities. To this end we will use the gg_df() function, which creates ggplot2 user-friendly data frames: posterior_df_fit <- gg_df(result_fit_nonstat) ggplot(posterior_df_fit) + geom_line(aes(x = x, y = y)) + facet_wrap(~parameter, scales = "free") + labs(y = "Density") Further options of the inlabru implementation There are several additional options that are available. For instance, it is possible to change the order of the rational approximation, the upper bound for the smoothness parameter (which may speed up the fit), change the priors, change the type of the rational approximation, among others. These options are described in the “Further options of the rSPDE-INLA implementation” section of the R-INLA implementation of the rational SPDE approach vignette. Observe that all these options are passed to the model through the rspde.matern() function, and therefore the resulting model object can directly be used in the bru() function, in an identical manner to the examples above. References Bolin, David, Alexandre B. Simas, and Zhen Xiong. 2023. “Covariance-Based Rational Approximations of Fractional SPDEs for Computationally Efficient Bayesian Inference.” Journal of Computational and Graphical Statistics. Lindgren, Finn, Håvard Rue, and Johan Lindström. 2011. “An Explicit Link Between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach.” Journal of the Royal Statistical Society. Series B. Statistical Methodology 73 (4): 423–98.
__label__pos
0.985448
Take the 2-minute tour × Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science. It's 100% free, no registration required. So we can prove that the language say $A = \{ \langle M,w \rangle \mid \text{M is TM that accepts } w^R \text{ whenever it accepts } w \}$ is undecidable by assuming it is decidable and use that to construct a $TM$ deciding $A_{TM}$. So by contradiction $A$ is undecidable. But what if the language was $\{ \langle M,w \rangle \mid \text{M accepts } w \text{ but on input } w^R \text{halts and rejects} \}$? I was thinking to prove that it's r.e, we can construct a Turing recognizer, say $K$, which recognizes this language by simulating $M$ on $w$ and do whatever $M$ does. But how does the machine know what's $w$ and $w^R$? Non determinism maybe? Or am I looking at it the wrong way? And to prove that it's undecidable would we use the same approach as that for $A$? share|improve this question 2   Just to clarify, what if $w = w^{R}$? –  Luke Mathieson Dec 10 '12 at 4:57      @LukeMathieson No pairs $\langle M, w \rangle$ where $w = w^{R}$ can be in this language. –  Ben Dec 10 '12 at 23:53 add comment migrated from cstheory.stackexchange.com Dec 10 '12 at 4:41 This question came from our site for theoretical computer scientists and researchers in related fields. 3 Answers up vote 1 down vote accepted Let $ A = \{\langle M, w \rangle \mid M \text { is a TM, } M \text { accepts } w \text { and on input } w^R \text { halts and rejects} \} $. We prove that $A$ is not decidable by showing $\text{HALT}\le_m A$. The reduction works as follows. Let $\langle M, w \rangle $ be an instance of $\text{HALT}$, then we construct a Turing machine $M'$ based on $M$ and $w$. Let $M_{01}$ be some TM that accepts $01$ and rejects all other inputs. The TM $M'$ on input $v$ works now as follows 1. $M'$ simulates $M(w)$ 2. When the simulation is finished simulate $M_{01}(v)$ and return the result of the simulation The reduction maps $\langle M, w \rangle$ to $\langle M', 01 \rangle$. We have now $$ \begin{align} \langle M, w \rangle \not \in \text{HALT} & \Rightarrow M' \text{cycles on every input} \\ & \Rightarrow \langle M', 01 \rangle \not \in A \end{align} $$ and \begin{align} \langle M, w \rangle \in \text{HALT} & \Rightarrow M' \text{acts as } M_{01} \\ & \Rightarrow M' \text{ accepts } 01 \text{ and rejects } 10\\ & \Rightarrow \langle M', 01 \rangle \in A \end{align} share|improve this answer      No, if $w = w^R$, there simply is no M that both accepts w and rejects w. So the language does not contain any strings of the form <M, w> where w is a palindrome. But this means your reduction would always reject such <M, w>, even when M actually halts on w, so it's not a correct reduction from HALT. But why use Mlex and wmin at all? M' should just simulate M on w, then accept 01 and reject everything else. Then <M, w> is in HALT iff <M', 01> is in A. –  Ben Dec 11 '12 at 12:13      Thanks for the pointer. I simplified my answer. –  A.Schulz Dec 11 '12 at 12:27      @A.Schulz instead of using reduction, if I tried to prove it by contradiction and construct a hypothetical decider for A_{TM} using the outline provided by Ben. The decider for A_{TM} will construct M' which will behave as M on w (should accept w and reject w^R). If this simulation accepts then M' accepts. Run the decider for A on <M', w>. If this accepts, then the A_{TM} decider accepts. If it rejects, reject. This runs into trouble if w=w^R I think. –  muddy Dec 11 '12 at 18:42      Thank you for your proof though! Much appreciated. –  muddy Dec 11 '12 at 18:43 add comment You can construct a recognizer that simply simulates M on w and then simulates M on $w^R$. This will halt in finite time for all that are in A (by definition), and then you can accept if the first simulation accepted and the second rejected. That suffices for the language being recognisable (recursively enumerable). I'm not sure what you mean by "how does the machine know what's $w$ and $w^R$ ?" w is part of the input pair, and w^R is easily generated from it. The way you've defined the language A, you don't have to worry about guessing which is the one that M should accept and which is the one it should reject. But if the language was such that either M accepts w and rejects in finite time w^R or M rejects w in finite time and accepts w^R, then it's still easy. You do the same thing and accept if exactly one of the two simulations accepts, without caring which one. I don't actually know a "standard" proof that $A$ is undecidable. But to prove your new language undecidable, I'd make a decider for A_TM that on input produces M' that rejects all other input than w and behaves as M on w. M' definitely rejects $w^R$ in finite time, so will be accepted by the decider for your language iff M accepts w. I imagine to prove $A$ undecidable you do the same thing but make M' accept all other strings, so it definitely accepts $w^R$ and accepts w iff the hypothetical $A$ decider accepts. share|improve this answer      Thank you! That makes so much more sense. lol –  muddy Dec 11 '12 at 4:01      @muddy Hmm, actually my proof sketch is incorrect for your new language (it works for the original A that requires acceptance of $w^R$). If $w = w^R$ and M accepts $w$ then M' doesn't reject $w^R$, but <M, w> should be in A_TM. So you have to do something more complicated. I'll fix that tomorrow. –  Ben Dec 11 '12 at 12:23 add comment This is what I had mind to prove it's undecidable. Let $ A = \{\langle M, w \rangle \mid M \text { is a TM, } M \text { accepts } w \text { and on input } w^R \text { halts and rejects} \} $ Assume $A$ is decidable. Let $H$ be a $TM$ deciding it. Construct $TM$ $K$ deciding $A_{TM}$ as follows $K$: ''On input $ \langle M, w \rangle :$ $ \hspace {20 mm}$ -Run $H$ on input $ \langle M, w \rangle $ $ \hspace {20 mm}$ -If $H$ accepts, then accept $ \hspace {20 mm}$ -If $H$ rejects, then reject.'' Now $K$ decides $A_{TM}$ as it always accepts or rejects based on $H$ which will always accept or reject as it is also a decider. $\Longrightarrow A_{TM}$ is decidable which is a contradiction. $\Longrightarrow A$ is undecidable. Is this right? share|improve this answer 1   Your K is exactly equivalent to H, and just tells you whether for a given <M, w>, M accepts w and rejects w^R in finite time. But M could accept w and go in an infinite loop on w^R, or accept w^R. H would reject this pair, when A_TM requires a positive answer. So although it's a decider (assuming H exists), it doesn't decide A_TM. –  Ben Dec 11 '12 at 0:00 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.998756
Process control block 0 407 Process control block In this tutorial, you will learn about, the Process control block. Process control block (PCB): Process control block used to specify a set of rules through which we can store various information. It stores various information about any file or instructions like name of a process, the status of the process, type of process, Resource allocated for a process, scheduling information, memory specified for the process, install Input-Output devices for a process, process Id. And process size. PCB occurs when any process or task send for Operating System Gi is a type of process management component supported by the Operating System. It is active at 1st introduction with the process. Each process is represented in the Operating System by a Process control block (PCB). It contains much information associated with a specific process. This includes: 1. Process state. 2. Program counter. 3. CPU registers. 4. CPU scheduling information. 5. Memory management information. 6. Accounting information. 7. I/O status information. Process Scheduling: The objectives of multi-programming are to have some process running at all times to maximize CPU utilization. For a Uni-process system, there will never be more than one running process. If these are more processes the rest will have to wait until the CPU is free and can be rescheduled. Comment below if you have queries related to the above topic, Process control block. LEAVE A REPLY Please enter your comment! Please enter your name here
__label__pos
0.990948
Community Groups are officially here! We've released Groups, a new feature that enables us to connect community members of similar industries and interests in a shared, private space. You can check out all of the details here, including information about who can join, how to join, and what Groups are currently offered. Please leave your feedback through this Community Groups Feedback Survey. Dynamically resize creatives ruka ruka IrelandCommunity Member, XMPN Member, Qualtrics Brand Admin Guru ✭✭ Hello, when creating a Creative, I can specify the size of the creative. Is there a way of using a dynamic size that would be a % of the screen size? The use case here is where you push a creative and the website could be consumed in a laptop monitor, external monitor or even a table. All these will have different screen sizes/resolutions, especially if you consider rotation for the assets that support it (e.g. tablets). Fixed-size creatives will make the creative oversizes in some cases, Have you come across this challenge and how have you handled it? thank you Tagged: Answers • ana_velez_voce ana_velez_voce MedellínCommunity Member, Qualtrics Partner, XMPN Member, XMPN Champion, Qualtrics Brand Admin Superuser ✭✭✭✭ Hi!! maybe with the custom HTML creative you can achieve this • ruka ruka IrelandCommunity Member, XMPN Member, Qualtrics Brand Admin Guru ✭✭ Hi Ana, is HTML creative and option or you are just referring to the possibility of modifying the creative with standard HTML and use resizing tags for it? Have you done in the past? I would look to have a pick at how. Thank you so much.
__label__pos
0.711916
2 I want to install a bunch of packages, but I don't want to install the documentation of them. How can I do this? 3 --path-exclude could be used to filter out unwanted files when installing a package: dpkg -i --path-exclude=/usr/share/doc/* ... To make permanent solution, Create a file /etc/dpkg/dpkg.cfg.d/01_nodoc which specifies the desired filters. Example: path-exclude /usr/share/doc/* # we need to keep copyright files for legal reasons path-include /usr/share/doc/*/copyright path-exclude /usr/share/man/* path-exclude /usr/share/groff/* path-exclude /usr/share/info/* # lintian stuff is small, but really unnecessary path-exclude /usr/share/lintian/* path-exclude /usr/share/linda/* Then change /etc/apt/apt.conf.d/99synaptic or create new file containing: APT::Install-Recommends "false"; Reference: • Is there a way to do this with apt? I am using the apt-get command to install my packages – Robert Oct 24 '14 at 15:54 • @Rob3 yes, i've update answer. setup conf files then use the command/tools you want. – user.dz Oct 24 '14 at 16:23 1 Usually, the doc packages are recommended by the main package, but aren't hard dependencies. If they were hard dependencies (for example, texlive-full), I don't think there's a safe or simple way. For recommended packages, the answer is simple: sudo apt-get install --no-install-recommends <package-name> Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.570899
Em_limittext This is a discussion on Em_limittext within the Windows Programming forums, part of the Platform Specific Boards category; I am setting a text limit for one of my edit controls like this: Code: SendMessage(num1,EM_LIMITTEXT,(WPARAM)3,0); When I call: Code: ... 1. #1 C++ Enthusiast jmd15's Avatar Join Date Mar 2005 Location MI Posts 532 Em_limittext I am setting a text limit for one of my edit controls like this: Code: SendMessage(num1,EM_LIMITTEXT,(WPARAM)3,0); When I call: Code: SendMessage(num1,EM_GETLIMITTEXT,0,0); This alwasy returns 0 but I'm setting the limit to 3? I also can type over 3 characters into the edit box. Why isn't this working? Sending that message to my edit box should only allow 3 characters to be typed into the edit box, but this is not the case. Thanks. Trinity: "Neo... nobody has ever done this before." Neo: "That's why it's going to work." c9915ec6c1f3b876ddf38514adbb94f0 2. #2 train spotter Join Date Aug 2001 Location near a computer Posts 3,859 try using the MAKEWPARAM() macro from windows.h "Man alone suffers so excruciatingly in the world that he was compelled to invent laughter." Friedrich Nietzsche "I spent a lot of my money on booze, birds and fast cars......the rest I squandered." George Best "If you are going through hell....keep going." Winston Churchill 3. #3 C++ Enthusiast jmd15's Avatar Join Date Mar 2005 Location MI Posts 532 Ok, I've tried that with different combinations for hi and lo words. I tried: Code: MAKEWPARAM(3,3); //and MAKEWPARAM(0,3); //and MAKEWPARAM(3,0); But this still doesn't work. Any more suggestions? Thanks. Trinity: "Neo... nobody has ever done this before." Neo: "That's why it's going to work." c9915ec6c1f3b876ddf38514adbb94f0 4. #4 Registered User Join Date Jul 2005 Posts 69 First I verified that the following works correctly: Code: LRESULT CALLBACK WindowProcedure (HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam) { static HBRUSH hBrush; static HWND hEdit; static LRESULT lResult; static char buffer[16]; switch (message) /* handle the messages */ { case WM_CREATE: hEdit = CreateWindowEx ( 0, "EDIT", "", WS_CHILD | WS_VISIBLE, 10, 10, 50, 16, hwnd, NULL, NULL, NULL); SendMessage(hEdit, EM_LIMITTEXT, WPARAM(3), 0); lResult = SendMessage(hEdit,EM_GETLIMITTEXT,0,0); wsprintf(buffer, "%i", lResult); MessageBox(hwnd, buffer, "Caption", MB_OK); break; All I can really say about your code is that 'num1' seems a bit of a strange HWND to me. Maybe you can show us more code. Popular pages Recent additions subscribe to a feed 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
__label__pos
0.853261
 Actions Make changes to an article automatically using a plugin From Joomla! Documentation Overview Plugins are the simplest extensions to write. A simple plugin requires only two files. Yet plugins can be extremely handy. One common case for using plugins is doing some customized processing every time an article is saved. For example: • Change the published state of an article. • Add standard text to articles. • Set the Section or Category for an article. Let's see how you can use a simple plugin to do these types of tasks. First Plugin A simple plugin has two files. One is an XML file that tells Joomla! how to install the plugin. The other is the actual plugin PHP code. To make it as easy as possible for people to write plugins, Joomla! includes example plugins in every installation. In our example, we are writing a "content" plugin, since we are working with articles. So the two example files we can use as guidelines are plugins/content/example.xml amd plugins/content/example.php. The XML file for our example is called modifyarticle.xml. We just copied the file plugins/content/example.xml to modifyarticle.xml and put it in a new folder outside of the Joomla! installation. (We will use the Joomla! install program later on to add this to our Joomla! system. So for now we want to keep this separate from Joomla!.) Then we edited it as follows: <?xml version="1.0" encoding="utf-8"?> <install version="1.5" type="plugin" group="content"> <name>Modify Edited Articles</name> <creationDate>January 2009</creationDate> <author>Joomla Doc Team</author> <authorEmail>[email protected]</authorEmail> <authorUrl>http://joomlacode.org</authorUrl> <copyright>Copyright (c) 2009</copyright> <license>GPL</license> <version>1.0.0</version> <description>Example plugin for Joomla! wiki.</description> <files> <filename plugin="modifyarticle">modifyarticle.php</filename> </files> </install> Next, we copy the file plugins/content/example.php to modifyarticle.php and put it in the same folder outside of the Joomla! installation. Then we edit this file so it reads as follows: <?php /** * @copyright Copyright (C) 2005 - 2008 Open Source Matters. All rights reserved. * @license GNU/GPL, see LICENSE.php */ // Check to ensure this file is included in Joomla! defined( '_JEXEC' ) or die( 'Restricted access' ); jimport( 'joomla.plugin.plugin' ); class plgContentModifyArticle extends JPlugin { /** * Constructor * * For php4 compatability we must not use the __constructor as a constructor for plugins * because func_get_args ( void ) returns a copy of all passed arguments NOT references. * This causes problems with cross-referencing necessary for the observer design pattern. * * @param object $subject The object to observe * @param object $params The object that holds the plugin parameters * @since 1.5 */ function plgContentModifyArticle ( &$subject, $params ) { parent::__construct( $subject, $params ); } /** * Example before save content method * * Method is called right before content is saved into the database. * Article object is passed by reference, so any changes will be saved! * NOTE: Returning false will abort the save with an error. * You can set the error by calling $article->setError($message) * * @param object A JTableContent object * @param bool If the content is just about to be created * @return bool If false, abort the save */ function onBeforeContentSave( &$article, $isNew ) { global $mainframe; $user =& JFactory::getUser(); // get the user if ($user->usertype == 'Author') { // unpublish if user is "Author" $article->state = 0; // from dev 11:59 } return true; } } Notice that the example.php file has six methods plus the constructor. These document the six possible events that you can use for plugins. In our example, we want to modify the article just before it is saved to the database. So we call our method (or function) "onBeforeContentSave". This method name must be exactly as shown. The name is what tells Joomla! when to run our plugin. In standard Joomla!, when a user in the "Author" group first submits a new article, it is set to "Unpublished". However, once the article has been published, this user can edit the article and it stays published. In our example, we want to change this so that, each time an article is edited by a user in the "Author" group, it gets changed back to "Unpublished". The code above does that. First, we get the user. Then, if the user is in the "Author" group, we change the article's "state" to zero, meaning "Unpublished". Notice that the method has $article as a parameter, so its fields and methods are all available to us. Install the Plugin Once we have the plugin coded, we need to create a zip archive so we can install it in our Joomla! site. Here are the steps: • Use any Zip or Gzip program to make a Zip or Gzip archive of the two files created above. • In the back end of your Joomla! site, navigate to Extensions → Install / Uninstall. • Press Browse and find your zip archive file. • Press Upload File and Install. You should get a message indicating that the extension was installed successfully. • Finally, navigate to Extensions → Plugin Manager, find the new plugin (called "Modify Edited Articles" based on the name in the XML file), and enable it. At this point, you should be able to test that it is working as expected. Add Standard Text Once we have the basic framework, changing the plugin to do other things is simple. For example, say we want to add standard text to any article submitted by a user in the "Author" group. Here is the code for the "onBeforeContentSave()" method: function onBeforeContentSave( &$article, $isNew ) { global $mainframe; $newText = '<p>This is my new text to add to the article.</p>'; $user =& JFactory::getUser(); // get the user if ($user->usertype == 'Author') { if ($article->fulltext) { if (strpos($article->fulltext, $newText) == 0) { $article->fulltext .= $newText; } } else { if (strpos($article->introtext, $newText) == 0) { $article->introtext .= $newText; } } } return true; } In this code, we have some standard text we put in the $newText variable. Then we deal with a complication of the $article object. Joomla! articles can have an optional "Read more..." break. If they don't have one, the entire text of the article is in the introtext field. If there is a "Read more..." break, then the part of the article after the break is in the fulltext field. So, in the code, we have to handle both cases. If there is something in fulltext, then we add the standard text there. Otherwise, we add it to introtext. There is one other complication. We don't want to add the standard text if it already there. Otherwise, we would keep adding more copies of it each time the article is edited. So we test to see if it already part of the article's text and we only add it if it isn't already there. Notice that the second parameter of this method is $isNew. This is a boolean that is true if this is a new article. So we could test for that if we only wanted to add the text for new article submissions. Set the Section or Category Setting the Section or Category ID of an article is very easy. For example, in the Sample Website, this code $article->sectionid = '4'; $article->catid = '25'; would change the article's Section to "About Joomla!" and Category to "The Project". Note that you need to use the "id" fields for the Section and Category. In this example, you also might want to remove these fields from the front-end form (by using a template override) Other Things We Can Do It is useful to understand that any of the article's fields are available here. Below is a partial list: • title: article's title • alias: article's alias • state: 0 for unpublished, 1 for published, -1 for archived • created: date created • created_by: user id of author • created_by_alias: author alias field • modified: date last modified • modified_by: user id of last author to modify • metakey: Metadata Information / Keywords • metadesc: Metadata Information / Description • access: 0 for public, 1 for registered, 2 for special Also, there are other built-in events we can use for plugins. These are documented in the plugins/content/example.php file. In addition, there are other plugin types. See the plugins section of the Wiki for more information.
__label__pos
0.905599
 SQL Server 2005 database mirroring over a VPN router Can SQL Server 2005 database mirroring work if the mirror server is to be in another building with the networks connected via VPN router? Software/Hardware used: ASKED: February 27, 2009  3:34 PM UPDATED: February 27, 2009  7:19 PM Answer Wiki: This question was posted <a href="http://go.techtarget.com/r/5981650/2258499/9">here</a> as well. The answer is still the same. Yes, database mirroring can be used between buildings and/or over a VPN. Just make sure that the amount of bandwidth that you are using is less than the amount of bandwidth available over the link between the buildings. Last Wiki Answer Submitted:  February 27, 2009  7:19 pm  by  Denny Cherry   64,520 pts. All Answer Wiki Contributors:  Denny Cherry   64,520 pts. To see all answers submitted to the Answer Wiki: View Answer History. Discuss This Question: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
__label__pos
0.612129
Question regarding configure build type and extra-source-files here it is stated that “the package’s extra-source-files are available to the configure script when it is executed”. I interpret this as that the configure script can read from these files. Is this correct? If so, how would the configure script access them? Quick testing revealed that they are not located in the script working directory or its sub-directories (not that it have any). None of the paths passed as parameters to the script is related to the extra source files (in fact I don’t think those directories would be created unless the cabal project is installed). Is the manual misleading? Isn’t the cwd of the configure script the package directory itself? No, i tested that. The working directory is the same directory as the .buildinfo file, which is not the package directory. The script’s working directory is actually empty when it is first called. You’re right, indeed this is misleading in the manual. I think it is true when Setup.hs configure is called directly. But when we run a cabal build in v2, we change the build directory, so it instead looks like dist-newstyle/build/x86_64-osx/ghc-9.4.5/custom-test-0.1.0.0/build. In such a case the extra-source-files are still available, but you have to go up a large number of directories, and should not, in general, depend on how many one has to go up. I think this does not tend to come up for end-users, since configure type builds are typically only used with autoconf scripts, which do not make use of this. A PR to cleanup the manual would be welcome.
__label__pos
0.518892
Distributed Cognition From Cyborg Anthropology Revision as of 23:54, 17 December 2011 by Caseorganic (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Jump to: navigation, search Distributed-cognition-maggie-nichols.jpg Definition All media can be understood as augmenting humanity's basic cognitive structure, both in the concrete sense of literally re-wiring one's brain and in the more figurative sense of displacing certain tasks to other forms of media. The internet combined with search platforms could be considered the collective cognition of the human species, or the "noosphere"[1]. The invention of writing allowed individuals to free their memory and direct the brain to other tasks. Computers allow humans to take this process to an entirely new level by allowing one to store information in a medium that can be accessed anywhere. Wikipedia is an example of distributed cognitive network, as nodes of content and edits can be added from anywhere around the world. Collectively, Wikipedia represents the cognitive makeup of many different kinds of people from different cultures, backgrounds and ideologies. References 1. Krippendorff, Klaus. Noosphere. Web Dictionary of Cybernetics and Systems. Principia Cybernetica Web. Publish date unknown. Accessed April 2011. http://pespmc1.vub.ac.be/ASC/NOOSPHERE.html
__label__pos
0.572548
Angular icon Get $100 off the 7-Courses Angular Master Bundle! See the bundle then add to cart and your discount is applied. days hours mins secs Opinionated AngularJS styleguide for teams Jul 23, 2014 11 mins read AngularJS post After reading Google’s AngularJS guidelines, I felt they were a little too incomplete and also guided towards using the Closure library. They also state “We don’t think this makes sense for all projects that use AngularJS, and we’d love to see our community of developers come up with a more general Style that’s applicable to AngularJS projects large and small”, so here goes. From my experience with Angular, several talks and working in teams, here’s my opinionated styleguide for syntax, building and structuring Angular applications. Official styleguide repo now on GitHub, all future styleguide updates will be here! Module definitions Angular modules can be declared in various ways, either stored in a variable or using the getter syntax. Use the getter syntax at all times (angular recommended). Bad: var app = angular.module('app', []); app.controller(); app.factory(); Good: angular .module('app', []) .controller() .factory(); From these modules we can pass in function references. Module method functions Angular modules have a lot of methods, such as controller, factory, directive, service and more. There are many syntaxes for these modules when it comes to dependency injection and formatting your code. Use a named function definition and pass it into the relevant module method, this aids in stack traces as functions aren’t anonymous (this could be solved by naming the anonymous function but this method is far cleaner). Bad: var app = angular.module('app', []); app.controller('MyCtrl', function () { }); Good: function MainCtrl () { } angular .module('app', []) .controller('MainCtrl', MainCtrl); Define a module once using angular.module('app', []) setter, then use the angular.module('app') getter elsewhere (such as other files). To avoid polluting the global namespace, wrap all your functions during compilation/concatenation inside an IIFE which will produce something like this: Best: (function () { angular.module('app', []); // MainCtrl.js function MainCtrl () { } angular .module('app') .controller('MainCtrl', MainCtrl); // AnotherCtrl.js function AnotherCtrl () { } angular .module('app') .controller('AnotherCtrl', AnotherCtrl); // and so on... })(); Controllers Controllers are classes and can use a controllerAs syntax or generic controller syntax. Use the controllerAs syntax always as it aids in nested scoping and controller instance reference. controllerAs DOM bindings Bad: <div ng-controller="MainCtrl"> {{ someObject }} </div> Good: <div ng-controller="MainCtrl as main"> {{ main.someObject }} </div> Exploring JavaScript Array Methods cover ⚡️ FREE eBook: 🔥 ForEach, Map, Filter, Reduce, Some, Every, Find Todd Motto “This book is straight to the point, syntax exploration, comprehensive guide, real-world examples, tips and tricks - it covers all you need” Todd Motto, author of Exploring JavaScript Array Methods Binding these ng-controller attributes couples the declarations tightly with our DOM, and also means we can only use that controller for that specific view (there are rare cases we might use the same view with different controllers). Use the router to couple the controller declarations with the relevant views by telling each route what controller to instantiate. Best: <!-- main.html --> <div> {{ main.someObject }} </div> <!-- main.html --> <script> // ... function config ($routeProvider) { $routeProvider .when('/', { templateUrl: 'views/main.html', controller: 'MainCtrl', controllerAs: 'main' }); } angular .module('app') .config(config); //... </script> This avoids using $parent to access any parent controllers from a child controller, simple hit the main reference and you’ve got it. This could avoid things such as $parent.$parent calls. controllerAs this keyword The controllerAs syntax uses the this keyword inside controllers instead of $scope. When using controllerAs, the controller is infact bound to $scope, there is a degree of separation. Bad: function MainCtrl ($scope) { $scope.someObject = {}; $scope.doSomething = function () { }; } angular .module('app') .controller('MainCtrl', MainCtrl); You can also use the prototype Object to create controller classes, but this becomes messy very quickly as each dependency injected provider needs a reference bound to the constructor Object. Bad and Good: Good for inheritance, bad (verbose) for general use. function MainCtrl ($scope) { this.someObject = {}; this._$scope = $scope; } MainCtrl.prototype.doSomething = function () { // use this._$scope }; angular .module('app') .controller('MainCtrl', MainCtrl); If you’re using prototype and don’t know why, then it’s bad. If you are using prototype to inherit from other controllers, then that’s good. For general use, the prototype pattern can be verbose. Good: function MainCtrl () { this.someObject = {}; this.doSomething = function () { }; } angular .module('app') .controller('MainCtrl', MainCtrl); These just show examples of Objects/functions inside Controllers, however we don’t want to put logic in controllers… Avoid controller logic Avoid writing logic in Controllers, delegate to Factories/Services. Bad: function MainCtrl () { this.doSomething = function () { }; } angular .module('app') .controller('MainCtrl', MainCtrl); Good: function MainCtrl (SomeService) { this.doSomething = SomeService.doSomething; } angular .module('app') .controller('MainCtrl', MainCtrl); This maximises reusability, encapsulated functionality and makes testing far easier and persistent. Services Services are instantiated and should be class-like also and reference the this keyword, keep function style consistent with everything else. Good: function SomeService () { this.someMethod = function () { }; } angular .module('app') .service('SomeService', SomeService); Factory Factories give us a singleton module for creating service methods (such as communicating with a server over REST endpoints). Creating and returning a bound Object keeps controller bindings up to date and avoids pitfalls of binding primitive values. Important: A “factory” is in fact a pattern/implementation, and shouldn’t be part of the provider’s name. All factories and services should be called “services”. Bad: function AnotherService () { var someValue = ''; var someMethod = function () { }; return { someValue: someValue, someMethod: someMethod }; } angular .module('app') .factory('AnotherService', AnotherService); Good: We create an Object with the same name inside the function. This can aid documentation as well for comment-generated docs. function AnotherService () { var AnotherService = {}; AnotherService.someValue = ''; AnotherService.someMethod = function () { }; return AnotherService; } angular .module('app') .factory('AnotherService', AnotherService); Any bindings to primitives are kept up to date, and it makes internal module namespacing a little easier, we can easily see any private methods and variables. Directives Any DOM manipulation should take place inside a directive, and only directives. Any code reusability should be encapsulated (behavioural and markup related) too. DOM manipulation DOM manipulation should be done inside the link method of a directive. Bad: // do not use a controller function MainCtrl (SomeService) { this.makeActive = function (elem) { elem.addClass('test'); }; } angular .module('app') .controller('MainCtrl', MainCtrl); Good: // use a directive function SomeDirective (SomeService) { return { restrict: 'EA', template: [ '<a href="" class="myawesomebutton" ng-transclude>', '<i class="icon-ok-sign"></i>', '</a>' ].join(''), link: function ($scope, $element, $attrs) { // DOM manipulation/events here! $element.on('click', function () { $(this).addClass('test'); }); } }; } angular .module('app') .directive('SomeDirective', SomeDirective); Any DOM manipulation should take place inside a directive, and only directives. Any code reusability should be encapsulated (behavioural and markup related) too. Naming conventions Custom directives should not be ng-* prefixed to prevent future core overrides if your directive name happens to land in Angular (such as when ng-focus landed, there were many custom directives called this beforehand). It also makes it more confusing to know which are core directives and which are custom. Bad: function ngFocus (SomeService) { return {}; } angular .module('app') .directive('ngFocus', ngFocus); Good: function focusFire (SomeService) { return {}; } angular .module('app') .directive('focusFire', focusFire); Directives are the only providers that we have the first letter as lowercase, this is due to strict naming conventions in the way Angular translates camelCase to hyphenated, so focusFire will become <input focus-fire> when used on an element. Usage restriction If you need to support IE8, you’ll want to avoid using the comments syntax for declaring where a directive will sit. Really, this syntax should be avoided anyway - there are no real benefits of using it - it just adds confusion of what is a comment and what isn’t. Bad: These are terribly confusing. <!-- directive: my-directive --> <div class="my-directive"></div> Good: Declarative custom elements and attributes are clearest. <my-directive></my-directive> <div my-directive></div> You can restrict usage using the restrict property inside each directive’s Object. Use E for element, A for attribute, M for comment (avoid) and C for className (avoid this too as it’s even more confusing, but plays better with IE). You can have multiple restrictions, such as restrict: 'EA'. Resolve promises in router, defer controllers After creating services, we will likely inject them into a controller, call them and bind any new data that comes in. This becomes problematic of keeping controllers tidy and resolving the right data. Thankfully, using angular-route.js (or a third party such as ui-router.js) we can use a resolve property to resolve the next view’s promises before the page is served to us. This means our controllers are instantiated when all data is available, which means zero function calls. Bad: function MainCtrl (SomeService) { var self = this; // unresolved self.something; // resolved asynchronously SomeService.doSomething().then(function (response) { self.something = response; }); } angular .module('app') .controller('MainCtrl', MainCtrl); Good: function config ($routeProvider) { $routeProvider .when('/', { templateUrl: 'views/main.html', resolve: { doSomething: function (SomeService) { return SomeService.doSomething(); } } }); } angular .module('app') .config(config); At this point, our service will internally bind the response of the promise to another Object which we can reference in our “deferred-instantiated” controller: Good: function MainCtrl (SomeService) { // resolved! this.something = SomeService.something; } angular .module('app') .controller('MainCtrl', MainCtrl); We can go one better, however and create a resolve property on our own Controllers to couple the resolves with the Controllers and avoid logic in the router. Best: // config with resolve pointing to relevant controller function config ($routeProvider) { $routeProvider .when('/', { templateUrl: 'views/main.html', controller: 'MainCtrl', controllerAs: 'main', resolve: MainCtrl.resolve }); } // controller as usual function MainCtrl (SomeService) { // resolved! this.something = SomeService.something; } // create the resolved property MainCtrl.resolve = { doSomething: function (SomeService) { return SomeService.doSomething(); } }; angular .module('app') .controller('MainCtrl', MainCtrl) .config(config); Route changes and ajax spinners While the routes are being resolved we want to show the user something to indicate progress. Angular will fire the $routeChangeStart event as we navigate away from the page, which we can show some form of loading and ajax spinner, which can then be removed on the $routeChangeSuccess event (see docs). Avoid $scope.$watch Using $scope.$watch should be avoided unless there are no others options. It’s less performant than binding an expression to something like ng-change, a list of supported events are in the Angular docs. Bad: <input ng-model="myModel"> <script> $scope.$watch('myModel', callback); </script> Good: <input ng-model="myModel" ng-change="callback"> <!-- $scope.callback = function () { // go }; --> Project/file structure One role, one file, rule. Separate all controllers, services/factories, directives into individual files. Don’t add all controllers in one file, you will end up with a huge file that is very difficult to navigate, keeps things encapsulated and bitesize. Bad: |-- app.js |-- controllers.js |-- filters.js |-- services.js |-- directives.js Keep naming conventions for files consistent, don’t invent fancy names for things, you’ll just forget them. Good: |-- app.js |-- controllers/ | |-- MainCtrl.js | |-- AnotherCtrl.js |-- filters/ | |-- MainFilter.js | |-- AnotherFilter.js |-- services/ | |-- MainService.js | |-- AnotherService.js |-- directives/ | |-- MainDirective.js | |-- AnotherDirective.js Depending on the size of your code base, a “feature-driven” approach may be better to split into functionality chunks. Good: |-- app.js |-- dashboard/ | |-- DashboardService.js | |-- DashboardCtrl.js |-- login/ | |-- LoginService.js | |-- LoginCtrl.js |-- inbox/ | |-- InboxService.js | |-- InboxCtrl.js Naming conventions and conflicts Angular provides us many Objects such as $scope and $rootScope that are prefixed with $. This incites they’re public and can be used. We also get shipped with things such as $$listeners, which are available on the Object but are considered private methods. Avoid using $ or $$ when creating your own services/directives/providers/factories. Bad: Here we create $$SomeService as the definition, not the function name. function SomeService () { } angular .module('app') .factory('$$SomeService', SomeService); Good: Here we create SomeService as the definition, and the function name for consistency/stack traces. function SomeService () { } angular .module('app') .factory('SomeService', SomeService); Minification and annotation Annotation order It’s considered good practice to dependency inject Angular’s providers in before our own custom ones. Bad: // randomly ordered dependencies function SomeCtrl (MyService, $scope, AnotherService, $rootScope) { } Good: // ordered Angular -> custom function SomeCtrl ($scope, $rootScope, MyService, AnotherService) { } Minification methods, automate it Use ng-annotate for automated dependency injection annotation, as ng-min is deprecated. You can find ng-annotate here. With our function declarations outside of the module references, we need to use the @ngInject comment to explicitly tell ng-annotate where to inject our dependencies. This method uses $inject which is faster than the Array syntax. Manually specifying the dependency injection arrays costs too much time. Bad: function SomeService ($scope) { } // manually declaring is time wasting SomeService.$inject = ['$scope']; angular .module('app') .factory('SomeService', SomeService); Good: Using the ng-annotate keyword @ngInject to instruct things that need annotating: /** * @ngInject */ function SomeService ($scope) { } angular .module('app') .factory('SomeService', SomeService); Will produce: /** * @ngInject */ function SomeService ($scope) { } // automated SomeService.$inject = ['$scope']; angular .module('app') .factory('SomeService', SomeService); Thanks for reading! You'll also like...
__label__pos
0.847212
Kullanıcı Kılavuzu İptal Reduce noise and restore audio 1. Audition User Guide 2. Introduction 1. What's new in Adobe Audition 2. Audition system requirements 3. Finding and customizing shortcuts 4. Applying effects in the Multitrack Editor 5. Known issues 3. Workspace and setup 1. Control surface support 2. Viewing, zooming, and navigating audio 3. Customizing workspaces 4. Connecting to audio hardware in Audition 5. Customizing and saving application settings 6. Perform Mic Check (Beta) 4. Digital audio fundamentals 1. Understanding sound 2. Digitizing audio 5. Importing, recording, and playing 1. Multichannel audio workflow 2. Create, open, or import files in Adobe Audition 3. Importing with the Files panel 4. Extracting audio from CDs 5. Supported import formats 6. Navigate time and playing audio in Adobe Audition 7. Recording audio 8. Monitoring recording and playback levels 9. Remove silences from your audio recordings 6. Editing audio files 1. Edit, repair, and improve audio using Essential Sound panel 2. Session Markers and Clip Marker for Multitrack 3. Generating text-to-speech 4. Matching loudness across multiple audio files 5. Displaying audio in the Waveform Editor 6. Selecting audio 7. How to copy, cut, paste, and delete audio in Audition 8. Visually fading and changing amplitude 9. Working with markers 10. Inverting, reversing, and silencing audio 11. How to automate common tasks in Audition 12. Analyze phase, frequency, and amplitude with Audition 13. Frequency Band Splitter 14. Undo, redo, and history 15. Converting sample types 16. Creating podcasts using Audition 7. Applying effects 1. Enabling CEP extensions 2. Effects controls 3. Applying effects in the Waveform Editor 4. Applying effects in the Multitrack Editor 5. Adding third party plugins 6. Notch Filter effect 7. Fade and Gain Envelope effects (Waveform Editor only) 8. Manual Pitch Correction effect (Waveform Editor only) 9. Graphic Phase Shifter effect 10. Doppler Shifter effect (Waveform Editor only) 8. Effects reference 1. Apply amplitude and compression effects to audio 2. Delay and echo effects 3. Diagnostics effects (Waveform Editor only) for Audition 4. Filter and equalizer effects 5. Modulation effects 6. Reduce noise and restore audio 7. Reverb effects 8. How to use special effects with Audition 9. Stereo imagery effects 10. Time and pitch manipulation effects 11. Generate tones and noise 9. Mixing multitrack sessions 1. Creating remix 2. Multitrack Editor overview 3. Basic multitrack controls 4. Multitrack routing and EQ controls 5. Arrange and edit multitrack clips with Audition 6. Looping clips 7. How to match, fade, and mix clip volume with Audition 8. Automating mixes with envelopes 9. Multitrack clip stretching 10. Video and surround sound 1. Working with video applications 2. Importing video and working with video clips 3. 5.1 surround sound 11. Keyboard shortcuts 1. Finding and customizing shortcuts 2. Default keyboard shortcuts 12. Saving and exporting 1. Save and export audio files 2. Viewing and editing XMP metadata How to Reduce Noise and Restore Audio in Adobe Audition Watch this video to learn how to reduce unwanted noise and restore audio to produce quality audio content. Techniques for restoring audio You can fix a wide array of audio problems by combining two powerful features. First, use Spectral Display to visually identify and select ranges of noise or individual artifacts. (See Select spectral ranges and Select artifacts and repair them automatically.) Then, use either Diagnostic or Noise Reduction effects to fix problems like the following: Not: The real-time restoration effects above, which are available in both the Waveform and Multitrack editors, quickly address common audio problems. For unusually noisy audio, however, consider using offline, process effects unique to the Waveform Editor, such as Hiss Reduction and Noise Reduction. Watch the Audio restoration techniques video to learn best practices for fixing audio in Audition using the Amplitude Statistics panel, spectral frequency display, adaptive noise reduction, Diagnostics panel, and DeClipper and DeHummer effects. Selecting various types of noise in Spectral Display A. Hiss B. Crackle C. Rumble  Watch the video How to use the Spectral Frequency Display to clean up your audio to learn more about using Spectral Frequency Display.   Noise Reduction effect (Waveform Editor only) The Noise Reduction/Restoration > Noise Reduction effect dramatically reduces background and broadband noise with a minimal reduction in signal quality. This effect can remove a combination of noise, including tape hiss, microphone background noise, power-line hum, or any noise that is constant throughout a waveform. The proper amount of noise reduction depends upon the type of background noise and the acceptable loss in quality for the remaining signal. In general, you can increase the signal‑to‑noise ratio by 5 to 20 dB and retain high audio quality. To achieve the best results with the Noise Reduction effect, apply it to audio with no DC offset. With a DC offset, this effect may introduce clicks in quiet passages. (To remove a DC offset, choose Favorites > Repair DC Offset.) Evaluating and adjusting noise with the Noise Reduction graph: A. Drag control points to vary reduction in different frequency ranges B. Low amplitude noise. C. High amplitude noise D. Threshold below which noise reduction occurs.  Apply the Noise Reduction effect 1. In the Waveform Editor, select a range that contains only noise and is at least half a second long. Not: To select noise in a specific frequency range, use the Marquee Selection tool. (See Select spectral ranges.) 2. Choose Effects > Noise Reduction/Restoration > Capture Noise Print. 3. In the Editor panel, select the range from which you want to remove noise. 4. Choose Effects > Noise Reduction/Restoration > Noise Reduction. 5. Set the desired options. Not: When recording in noisy environments, record a few seconds of representative background noise that can be used as a noise print later on. Noise Reduction options Capture Noise Print Extracts a noise profile from a selected range, indicating only background noise. Adobe Audition gathers statistical information about the background noise so it can remove it from the remainder of the waveform. Tip: If the selected range is too short, Capture Noise Print is disabled. Reduce the FFT Size or select a longer range of noise. If you can’t find a longer range, copy and paste the currently selected range to create one. (You can later remove the pasted noise by using the Edit > Delete command.) Save the Current Noise Print  Saves the noise print as an .fft file, which contains information about sample type, FFT (Fast Fourier Transform) size, and three sets of FFT coefficients: one for the lowest amount of noise found, one for the highest amount, and one for the power average. Load a Noise Print from Disk  Opens any noise print previously saved from Adobe Audition in FFT format. However, you can apply noise prints only to identical sample types. (For example, you can’t apply a 22 kHz mono profile to 44kHz stereo samples.) note: Because noise prints are so specific, a print for one type of noise won’t produce good results with other types. If you regularly remove similar noise, however, a saved profile can greatly increase efficiency. Graph Depicts frequency along the x‑axis (horizontal) and the amount of noise reduction along the y‑axis (vertical). The blue control curve sets the amount of noise reduction in different frequency ranges. For example, if you need noise reduction only in the higher frequencies, adjust the control curve downward to the right of the graph. If you click the Reset button  to flatten the control curve, the amount of noise reduction is based entirely on the noise print. Tip: To better focus on the noise floor, click the menu button  to the upper right of the graph, and deselect Show Control Curve and Show Tooltip Over Graph. Noise Floor High shows the highest amplitude of detected noise at each frequency; Low shows the lowest amplitude. Threshold shows the amplitude below which noise reduction occurs. Tip: The three elements of the noise floor can overlap in the graph. To better distinguish them, click the menu button , and select options from the Show Noise Floor menu. Scale Determines how frequencies are arranged along the horizontal x‑axis: • For finer control over low frequencies, select Logarithmic. A logarithmic scale more closely resembles how people hear sound. • For detailed, high‑frequency work with evenly spaced intervals in frequency, select Linear. Channel Displays the selected channel in the graph. The amount of noise reduction is always the same for all channels. Select Entire File Lets you apply a captured noise print to the entire file. Noise Reduction Controls the percentage of noise reduction in the output signal. Fine-tune this setting while previewing audio to achieve maximum noise reduction with minimum artifacts. (Excessively high noise reduction levels can sometimes cause audio to sound flanged or out-of-phase.) Reduce By Determines the amplitude reduction of detected noise. Values between 6 and 30 dB work well. To reduce bubbly artifacts, enter lower values. Output Noise Only Previews only noise so you determine if the effect is removing any desirable audio. Advanced settings Click the triangle to display the following options: Spectral Decay Rate Specifies the percentage of frequencies processed when audio falls below the noise floor. Fine‑tuning this percentage allows greater noise reduction with fewer artifacts. Values of 40% to 75% work best. Below those values, bubbly‑sounding artifacts are often heard; above those values, excessive noise typically remains. Smoothing Takes into account the variance of the noise signal in each frequency band. Bands that vary greatly when analyzed (such as white noise) will be smoothed differently than constant bands (like 60-Hz hum). In general, increasing the smoothing amount (up to 2 or so) reduces burbly background artifacts at the expense of raising the overall background broadband noise level. Precision Factor Controls changes in amplitude. Values of 5-10 work best, and odd numbers are ideal for symmetrical processing. With values of 3 or less, the Fast Fourier transform is performed in giant blocks, and between them drops or spikes in volume can occur. Values beyond 10 cause no noticeable change in quality, but they increase processing time. Transition Width Determines the amplitude range between noise and desirable audio. For example, a width of zero applies a sharp, noise gate to each frequency band. Audio just above the threshold remains; audio just below is truncated to silence. Alternatively, you can specify a range over which the audio fades to silence based upon the input level. For example, if the transition width is 10 dB, and the noise level for the band is ‑60 dB, audio at ‑60 dB stays the same, audio at ‑62 dB is reduced slightly, and audio at ‑70 dB is removed entirely. FFT Size Determines how many individual frequency bands are analyzed. This option causes the most drastic changes in quality. The noise in each frequency band is treated separately, so with more bands, noise is removed with finer frequency detail. Good settings range from 4096 to 8192. Fast Fourier Transform size determines the tradeoff between frequency- and time-accuracy. Higher FFT sizes might cause swooshing or reverberant artifacts, but they very accurately remove noise frequencies. Lower FFT sizes result in better time response (less swooshing before cymbal hits, for example), but they can produce poorer frequency resolution, creating hollow or flanged sounds. Noise Print Snapshots Determines how many snapshots of noise to include in the captured profile. A value of 4000 is optimal for producing accurate data. Very small values greatly affect the quality of the various noise reduction levels. With more snapshots, a noise reduction level of 100 will likely cut out more noise, but also cut out more original signal. However, a low noise reduction level with more snapshots will also cut out more noise, but likely retain the intended signal. Sound Remover effect   The Sound Remover effect (Effects > Noise Reduction/Restoration) removes unwanted audio sources from a recording. This effect analyzes a selected portion of the recording, and builds a sound model, which is used to find and remove the sound. The generated model can also be modified using parameters that indicate its complexity. A high complexity sound model requires more refinement passes to process the recording, but provides more accurate results. You can also save the sound model for later use. Several common presets are also included to remove some common noise sounds, such as sirens and ringing mobile phones. Learn Sound Model Uses the selected waveform to learn the sound model. Select an area on the waveform that only contains the sound to remove, and then press Learn Sound Model. You can also save and load sound models on disc. Sound Model Complexity Indicates the complexity of the Sound Model. The more complex or mixed the sound is, the better results you'll get with a higher complexity setting, though the longer it will take to calculate. Settings range from 1 to 100. Sound Refinement Passes Defines the number of refinement passes to make to remove the sound patterns indicated in the sound model. Higher number of passes require longer processing time, but offer more accurate results. Content Complexity Indicates the complexity of the signal. The more complex or mixed the sound is, the better results you'll get with a higher complexity setting, though the longer it will take to calculate. Settings range from 5 to 100. Content Refinement Passes Specifies the number of passes to make on the content to remove the sounds that match the sound model. A higher number of passes require more processing time, but generally provide more accurate results. Enhanced Supression This increases the aggressiveness of the sound removal algorithm, and can be modified on the Strength value. A higher value will remove more of the sound model from mixed signals, which can result in greater loss of desired signal, while a lower value will leave more of the overlapping signal and therefore, more of the noise may be audible (though less than the original recording.) Enhance for Speech Specifies that the audio includes speech and is careful in removing audio patterns that closely resemble speech. The end result makes sure that speech is not removed, while removing noise. FFT Size Determines how many individual frequency bands are analyzed. This option causes the most drastic changes in quality. The noise in each frequency band is treated separately, so with more bands, noise is removed with finer frequency detail. Good settings range from 4096 to 8192. Fast Fourier Transform size determines the tradeoff between frequency- and time-accuracy. Higher FFT sizes might cause swooshing or reverberant artifacts, but they very accurately remove noise frequencies. Lower FFT sizes result in better time response (less swooshing before cymbal hits, for example), but they can produce poorer frequency resolution, creating hollow or flanged sounds. Watch the video Sound removal and noise reduction strategies to see how you can reduce noise and remove unwanted sounds from your audio. Adaptive Noise Reduction effect The Noise Reduction/Restoration > Adaptive Noise Reduction effect quickly removes variable broadband noise such as background sounds, rumble, and wind. Because this effect operates in real time, you can combine it with other effects in the Effects Rack and apply it in the Multitrack Editor. By contrast, the standard Noise Reduction effect is available only as an offline process in the Waveform Editor. That effect, however, is sometimes more effective at removing constant noise, such as hiss or hum. For best results, apply Adaptive Noise Reduction to selections that begin with noise followed by desirable audio. The effect identifies noise based on the first few seconds of audio. Not: This effect requires significant processing. If your system performs slowly, lower FFT Size and turn off High Quality Mode. Reduce Noise By Determines the level of noise reduction. Values between 6 and 30 dB work well. To reduce bubbly background effects, enter lower values. Noisiness Indicates the percentage of original audio that contains noise. Fine Tune Noise Floor Manually adjusts the noise floor above or below the automatically calculated floor. Signal Threshold Manually adjusts the threshold of desirable audio above or below the automatically calculated threshold. Spectral Decay Rate Determines how quickly noise processing drops by 60 decibels. Fine‑tuning this setting allows greater noise reduction with fewer artifacts. Values that are too short create bubbly sounds; values that are too long create a reverb effect. Broadband Preservation Retains desirable audio in specified frequency bands between found artifacts. A setting of 100 Hz, for example, ensures that no audio is removed 100 Hz above or below found artifacts. Lower settings remove more noise but may introduce audible processing. FFT Size Determines how many individual frequency bands are analyzed. Choose a high setting to increase frequency resolution; choose a low setting to increase time resolution. High settings work well for artifacts of long duration (like squeaks or power-line hum), while low settings better address transient artifacts (like clicks and pops). Watch the video Remove noise from audio files with Audition to see how you can reduce noise and remove unwanted sounds from your audio. Automatic Click Remover effect To quickly remove crackle and static from vinyl recordings, use the Noise Reduction/Restoration > Automatic Click Remover effect. You can correct a large area of audio or a single click or pop. This effect provides the same options as the DeClicker effect, which lets you choose which detected clicks to address (see DeClicker options). However, because the Automatic Click Remover operates in real time, you can combine it with other effects in the Effects Rack and apply it in the Multitrack Editor. The Automatic Click Remover effect also applies multiple scan and repair passes automatically; to achieve the same level of click reduction with the DeClicker, you must manually apply it multiple times. Threshold Determines sensitivity to noise. Lower settings detect more clicks and pops but may include audio you wish to retain. Settings range from 1 to 100; the default is 30. Complexity Indicates the complexity of noise. Higher settings apply more processing but can degrade audio quality. Settings range from 1 to 100; the default is 16. Automatic Phase Correction effect The Noise Reduction/Restoration > Automatic Phase Correction effect addresses azimuth errors from misaligned tape heads, stereo smearing from incorrect microphone placement, and many other phase-related problems. Global Time Shift Activates the Left and Right Channel Shift sliders, which let you apply a uniform phase shift to all selected audio. Auto Align Channels and Auto Center Panning Align phase and panning for a series of discrete time intervals, which you specify using the following options: Time Resolution Specifies the number of milliseconds in each processed interval. Smaller values increase accuracy; larger ones increase performance. Responsiveness Determines overall processing speed. Slow settings increase accuracy; fast settings increase performance. Channel Specifies the channels phase correction will be applied to. Analysis Size Specifies the number of samples in each analyzed unit of audio. Not: For the most precise, effective phase correction, use the Auto Align Channels option. Enable the Global Time Shift sliders only if you are confident that a uniform adjustment is necessary, or if you want to manually animate phase correction in the Multitrack Editor. Click/Pop Eliminator effect Use the Click/Pop Eliminator effect (Effects > Noise Reduction/Restoration) to remove microphone pops, clicks, light hiss, and crackles. Such noise is common on recordings such as old vinyl records and on-location recordings. The effect dialog box stays open, and you can adjust the selection, and fix multiple clicks without reopening the effect several times. Detection and correction settings are used to find clicks and pops. The detection and rejection ranges are displayed graphically. Detection graph Shows the exact threshold levels to be used at each amplitude, with amplitude along the horizontal ruler (x-axis) and threshold level along the vertical ruler (y-axis). Adobe Audition uses values on the curve to the right (above -20 dB or so) when processing louder audio and values on the left when processing softer audio. Curves are color-coded to indicate detection and rejection. Scan for All Levels Scans the highlighted area for clicks based on the values for Sensitivity and Discrimination, and determines values for Threshold, Detect, and Reject. Five areas of audio are selected, starting at the quietest and moving to the loudest. Sensitivity Determines the level of clicks to detect. Use a lower value, such as 10, to detect lots of subtle clicks, or a value of 20 to detect a few louder clicks. (Detected levels with Scan for All Levels are always higher than with this option.) Discrimination Determines how many clicks to fix. Enter high values to fix very few clicks and leave most of the original audio intact. Enter lower values, such as 20 or 40, if the audio contains a moderate number of clicks. Enter extremely low values, such as 2 or 4, to fix constant clicks. Scan for Threshold Levels Automatically sets the Maximum, Average, and Minimum Threshold levels. Maximum, Average, Minimum Determine the unique detection and rejection thresholds for the maximum, average, and minimum amplitudes of the audio. For example, if audio has a maximum RMS amplitude of -10 dB, you should set Maximum Threshold to -10 dB. If the minimum RMS amplitude is -55 dB, then set Minimum Threshold to -55. Set the threshold levels before you adjust the corresponding Detect and Reject values. (Set the Maximum and Minimum Threshold levels first, because once they’re in place, you shouldn’t need to adjust them much.) Set the Average Threshold level to about three quarters of the way between the Maximum and Minimum Threshold levels. For example, if Maximum Threshold is set to 30 and Minimum Threshold is set to 10, set Average Threshold to 25. After you audition a small piece of repaired audio, you can adjust the settings as needed. For example, if a quiet part still has a lot of clicks, lower the Minimum Threshold level a bit. If a loud piece still has clicks, lower the Average or Maximum Threshold level. In general, less correction is required for louder audio, as the audio itself masks many clicks, so repairing them isn’t necessary. Clicks are very noticeable in very quiet audio, so quiet audio tends to require lower detection and rejection thresholds. Second Level Verification (Reject Clicks) Rejects some of the potential clicks found by the click detection algorithm. In some types of audio, such as trumpets, saxophones, female vocals, and snare drum hits, normal peaks are sometimes detected as clicks. If these peaks are corrected, the resulting audio will sound muffled. Second Level Verification rejects these audio peaks and corrects only true clicks. Detect Determines sensitivity to clicks and pops. Possible values range from 1 to 150, but recommended values range from 6 to 60. Lower values detect more clicks. Start with a threshold of 35 for high-amplitude audio (above -15 dB), 25 for average amplitudes, and 10 for low-amplitude audio (below-50 dB). These settings allow for the most clicks to be found, and usually all of the louder ones. If a constant crackle is in the background of the source audio, try lowering the Min Threshold level or increasing the dB level to which the threshold is assigned. The level can be as low as 6, but a lower setting can cause the filter to remove sound other than clicks. If more clicks are detected, more repair occurs, increasing the possibility of distortion. With too much distortion of this type, audio begins to sound flat and lifeless. If this occurs, set the detection threshold rather low, and select Second Level Verification to reanalyze the detected clicks and disregard percussive transients that aren’t clicks. Reject Determines how many potential clicks (found using the Detection Threshold) are rejected if Second Level Verification box is selected. Values range from 1 to 100; a setting of 30 is a good starting point. Lower settings allow for more clicks to be repaired. Higher settings can prevent clicks from being repaired, as they might not be actual clicks. You want to reject as many detected clicks as possible but still remove all audible clicks. If a trumpet-like sound has clicks in it, and the clicks aren’t removed, try lowering the value to reject fewer potential clicks. If a particular sound becomes distorted, then increase the setting to keep repairs at a minimum. (The fewer repairs that are needed to get good results, the better.) FFT Size Determines the FFT size used to repair clicks, pops, and crackle. In general, select Auto to let Adobe Audition determine the FFT size. For some types of audio, however, you might want to enter a specific FFT size (from 8 to 512). A good starting value is 32, but if clicks are still quite audible, increase the value to 48, and then 64, and so on. The higher the value, the slower the correction will be, but the better the potential results. If the value is too high, rumbly, low frequency distortion can occur. Fill Single Click Corrects a single click in a selected audio range. If Auto is selected next to FFT Size, then an appropriate FFT size is used for the restoration based on the size of the area being restored. Otherwise, settings of 128 to 256 work very well for filling in single clicks. Once a single click is filled, press the F3 key to repeat the action. You can also create a quick key in the Favorites menu for filling in single clicks. Pop Oversamples Width Includes surrounding samples in detected clicks. When a potential click is found, its beginning and end points are marked as closely as possible. The Pop Oversamples value (which can range from 0 to 300) expands that range, so more samples to the left and right of the click are considered part of the click. If corrected clicks become quieter but are still evident, increase the Pop oversamples value. Start with a value of 8, and increase it slowly to as much as 30 or 40. Audio that doesn’t contain a click shouldn’t change very much if it’s corrected, so this buffer area should remain mostly untouched by the replacement algorithm. Increasing the Pop Oversamples value also forces larger FFT sizes to be used if Auto is selected. A larger setting may remove clicks more cleanly, but if it’s too high, audio will start to distort where the clicks are removed. Run Size Specifies the number of samples between separate clicks. Possible values range from 0 to 1000. To independently correct extremely close clicks, enter a low value; clicks that occur within the Run Size range are corrected together. A good starting point is around 25 (or half the FFT size if Auto next to FFT Size isn’t selected). If the Run Size value is too large (over 100 or so), then the corrections may become more noticeable, as very large blocks of data are repaired at once. If you set the Run Size too small, then clicks that are very close together may not be repaired completely on the first pass. Pulse Train Verification Prevents normal waveform peaks from being detected as clicks. It may also reduce detection of valid clicks, requiring more aggressive threshold settings. Select this option only if you’ve already tried to clean up the audio but stubborn clicks remain. Link Channels Processes all channels equally, preserving the stereo or surround balance. For example, if a click is found in one channel, a click will most likely be detected in the other. Detect Big Pops Removes large unwanted events (such as those more than a few hundred samples wide) that might not be detected as clicks. Values can range from 30 to 200. Note that a sharp sound like a loud snare drum hit can have the same characteristic as a very large pop, so select this option only if you know the audio has very large pops (like a vinyl record with a very big scratch in it). If this option causes drum hits to sound softer, slightly increase the threshold to fix only loud, obvious pops. If loud, obvious pops aren’t fixed, select Detect Big Pops, and use settings from about 30 (to find quiet pops) to 70 (to find loud pops). Ignore Light Crackle Smooths out one-sample errors when detected, often removing more background crackle. If the resulting audio sounds thinner, flatter, or more tinny, deselect this option. Passes Performs up to 32 passes automatically to catch clicks that might be too close together to be repaired effectively. Fewer passes occur if no more clicks are found and all detected clicks are repaired. In general, about half as many clicks are repaired on each successive pass. A higher detection threshold might lead to fewer repairs and increase the quality while still removing all clicks. Watch the video Use the Click/Pop Eliminator and DeClicker effects to learn how you can remove microphone pops, clicks, light hiss, and crackles.   DeHummer effect The Noise Reduction/Restoration > DeHummer effect removes narrow frequency bands and their harmonics. The most common application addresses power line hum from lighting and electronics. But the DeHummer can also apply a notch filter that removes an overly resonant frequency from source audio. Not: To quickly address typical audio problems, choose an option from the Presets menu. Frequency Sets the root frequency of the hum. If you’re unsure of the precise frequency, drag this setting back and forth while previewing audio. Not: To visually adjust root frequency and gain, drag directly in the graph. Q Sets the width of the root frequency and harmonics above. Higher values affect a narrower range of frequencies, and lower values affect a wider range. Gain Determines the amount of hum attenuation. Number of Harmonics Specifies how many harmonic frequencies to affect. Harmonic Slope Changes the attenuation ratio for harmonic frequencies. Output Hum Only Lets you preview removed hum to determine if it contains any desirable audio. DeReverb effect The Noise Reduction/Restoration > DeReverb effect estimates the reverberation profile and helps adjust the reverberation amount. The values range from 0% to 100% and control the amount of processing applied to the audio signal. DeReverb effect controls DeReverb effect controls Processing focus There are five processing focus buttons. Each of the Processing focus buttons focuses the noise suppression process on specific parts of the signal's frequency spectrum. All frequency focus    Use this to apply the same processing to the full frequency spectrum of the signal Hi frequency focus    Use this to focus processing on the high-end range of the frequency spectrum   Hi/low frequency focus    Use this focus to process more on the high and low-end range of the frequency spectrum of the signal and less on the mid range Mid frequency focus    Use this option to apply focus on the mid-range of the frequency spectrum of the signal and less to the high and low-end range Low frequency focus    This option focuses processing on the low-end range of the frequency spectrum Applying the dereverberation effect could result in lower levels of output in comparison to the orginal audio due to reduction in dynamic range. The output gain works as a make-up gain and allows you to adjust the level of the output signal. Use the slidrer to adjust gain manually. Alternatively, you can enable automatic adjustment of gain by enablig the Auto Gain checkbox. DeNoise effect The Noise Reduction/Restoration > DeNoise effect reduces or completely removes noise from your audio file. This could be unwanted hum and hiss, fans, air conditioner or any other background noise. You can control the amount of noise reduced using a slider. The values range from 0% to 100% and control the amount of processing applied to the audio signal. DeNoise effect controls DeNoise effect controls Adjust Gain Applying the DeNoise effect could reduce the  level of the output signal and make it lower than the level of original audio. Use the Gain slider to control the amount of output signal.  Enable the Output Noise Only checkbox to listen to the removed noise in isolation. The processing focus for DeNoise effect is similar to DeReverb effect. For more information, see Processing focus. Hiss Reduction effect (Waveform Editor only) The Noise Reduction/Restoration > Hiss Reduction effect reduces hiss from sources such as audio cassettes, vinyl records, or microphone preamps. This effect greatly lowers the amplitude of a frequency range if it falls below an amplitude threshold called the noise floor. Audio in frequency ranges that are louder than the threshold remain untouched. If audio has a consistent level of background hiss, that hiss can be removed completely. Not: To reduce other types of noise that have a wide frequency range, try the Noise Reduction effect. (See Noise Reduction effect (Waveform Editor only).) Using the Hiss Reduction graph to adjust the noise floor Capture Noise Floor Graphs an estimate of the noise floor. The estimate is used by the Hiss Reduction effect to more effectively remove only hiss while leaving regular audio untouched. This option is the most powerful feature of Hiss Reduction. To create a graph that most accurately reflects the noise floor, click Get Noise Floor with a selection of audio that contains only hiss. Or, select an area that has the least amount of desirable audio, in addition to the least amount of high frequency information. (In the spectral display, look for an area without any activity in the top 75% of the display.) After you capture the noise floor, you might need to lower the control points on the left (representing the lower frequencies) to make the graph as flat as possible. If music is present at any frequency, the control points around that frequency will be higher than they should be. Graph Represents the estimated noise floor for each frequency in the source audio, with frequency along the horizontal ruler (x‑axis) and the amplitude of the noise floor along the vertical ruler (y‑axis). This information helps you distinguish hiss from desirable audio data. The actual value used to perform hiss reduction is a combination of the graph and the Noise Floor slider, which shifts the estimated noise floor reading up or down for fine-tuning. Not: To disable tooltips for frequency and amplitude, click the menu button  to the upper right of the graph, and deselect Show Tooltip Over Graph. Scale Determines how frequencies are arranged along the horizontal x‑axis: • For finer control over low frequencies, select Logarithmic. A logarithmic scale more closely resembles how people hear sound. • For detailed, high‑frequency work with evenly spaced intervals in frequency, select Linear. Channel Displays the selected audio channel in the graph. Reset  Resets the estimated noise floor. To reset the floor higher or lower, click the menu button  to the upper right of the graph, and choose an option from the Reset Control Curve menu. Not: For quick, general‑purpose hiss reduction, a complete noise floor graph isn’t always necessary. In many cases, you can simply reset the graph to an even level and manipulate the Noise Floor slider. Noise Floor Fine‑tunes the noise floor until the appropriate level of hiss reduction and quality is achieved. Reduce By Sets the level of hiss reduction for audio below the noise floor. With higher values (especially above 20 dB) dramatic hiss reduction can be achieved, but the remaining audio might become distorted. With lower values, not as much noise is removed, and the original audio signal stays relatively undisturbed. Output Hiss Only Lets you preview only hiss to determine if the effect is removing any desirable audio. Advanced settings Click the triangle to display these options: Spectral Decay Rate When audio is encountered above the estimated noise floor, determines how much audio in surrounding frequencies is assumed to follow. With low values, less audio is assumed to follow, and hiss reduction will cut more closely to the frequencies being kept. Values of 40% to 75% work best. If the value is too high (above 90%), unnaturally long tails and reverbs might be heard. If the value is too low, background bubbly effects might be heard, and music might sound artificial. Precision Factor Determines the time-accuracy of hiss reduction. Typical values range from 7 to 14. Lower values might result in a few milliseconds of hiss before and after louder parts of audio. Larger values generally produce better results and slower processing speeds. Values over 20 don’t ordinarily improve quality any further. Transition Width Produces a slow transition in hiss reduction instead of an abrupt change. Values from 5 to 10 usually achieve good results. If the value is too high, some hiss may remain after processing. If the value is too low, background artifacts might be heard. FFT Size Specifies a Fast Fourier Transform size, which determines the tradeoff between frequency- and time-accuracy. In general, sizes from 2048 to 8192 work best. Lower FFT sizes (2048 and below) result in better time response (less swooshing before cymbal hits, for example), but they can produce poorer frequency resolution, creating hollow or flanged sounds. Higher FFT sizes (8192 and above) might cause swooshing, reverb, and drawn out background tones, but they produce very accurate frequency resolution. Control Points Specifies the number of points added to the graph when you click Capture Noise Floor. Watch the Clean up background noise and reduce hiss to learn how to clean up background noises and apply hiss reduction to audio with Adobe Audition.  Adobe Daha hızlı ve daha kolay yardım alın Yeni kullanıcı mısınız?
__label__pos
0.838171
iOS收到推送后,跳转到某一页面 以前做过推送, 但只是那种最基本的广播推送(向所有安装appde设备通知), 列播组播这种对指定用户推送消息还没做过, 最近刚好有个项目,向指定用户推送物流信息、物品状态等等。于是前几天就和也没做过推送的后台干起来了,详情如下: 我用的是友盟推送, 配置证书这一环节直接跳过了,这个网上有讲的。给大家讲一点常识,友盟推送分生产环境和开发环境。用手机刷上去的就是开发环境, 发布到苹果商店就是生产环境,没发布前怎么模拟呢, 用普通账号打的ad hoc 包, 用企业账号打的ad hoc 包或者enterprise包都可以测试生产环境。 ** 开发环境下, 你把APP删掉,重新调试上来,就会生成一个新的device_token了!下面的几个步骤是自己的理解结合网上的资料写出的想法,大牛们轻的吐槽…… ** 1、传device_token的时机 后台向指定用户做推送,那么必须知道某个用户的device_token,那么怎么获取token呢,APP启动后会在appdelegate的didRegisterForRemoteNotificationsWithDeviceToken方法里返回device_token信息: - (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken { [UMessage registerDeviceToken:deviceToken]; NSString * token = [[[[deviceToken description] stringByReplacingOccurrencesOfString: @"<" withString: @""] stringByReplacingOccurrencesOfString: @">" withString: @""] stringByReplacingOccurrencesOfString: @" " withString: @""]; } 但是后台向指定用户推送, 就必须把uid(用户ID)和token关联起来,我这里的做法时, 用户登录的时候就上传token信息, 退出登录的时候, 就清除用户绑定的token信息, 这样确保后台会向用户最后一次登录的设备号做推送(只是自己的理解,当然了,iOS设备收到推送后,还要判断用户是否登录而且登录的是不是你要推送的用户)。 2、 收到通知时app的状态 收到通知的时候APP的状态可能是未启动、前台活跃(任何界面)、后台等三种。 • 未启动时,点击通知栏启动App, 会在didFinishLaunchingWithOptions方法里收到通知内容。 • 剩下两种会在didReceiveRemoteNotification方法里收到通内容。 - (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo{ _userInfo = userInfo; //关闭友盟对话框 [UMessage setAutoAlert:NO]; [UMessage didReceiveRemoteNotification:userInfo]; NSLog(@"_______________友盟系统方法 userInfo %@",userInfo); if(userInfo)// 调用appdelegate的分类处理业务逻辑 [self dealWithMyMessagePush:userInfo]; } ** 代码里面删减了一些和推送不相关的代码,我的项目架构是tab+nav ** - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ self.window.rootViewController = [[FDIMGBarController alloc] init]; // 分类 [self UMengShareMethodAndCount:launchOptions]; // 类别 [self FD_updateAppVersion]; NSDictionary* userInfo = [launchOptions objectForKey:UIApplicationLaunchOptionsRemoteNotificationKey]; if(userInfo){//推送信息 self.userInfo = userInfo;//[userInfo copy] } return YES; } • 未启动时收到通知,就判断字典userInfo是否为空, 不为空是说明有通知消息。我这里的做法是把她复制给AppDelegate的某个属性, 在首页控制器取到AppDelegate的这个值,判断是否为空,不为空就进行下一步操作。 • 用户在前台或后台收到消息的时候, 我会弹出一个弹出框提醒用户,是否前往我的消息界面。 5、 未启动时首页控制器逻辑处理 在viewDidLoad方法里: AppDelegate * app = (AppDelegate *)[UIApplication sharedApplication].delegate; //pushName 是我给后天约定的通知必传值,所以我可以根据他是否为空来判断是否有通知 NSString * pushName = [[app.userInfo objectForKey:@"aps"] objectForKey:@"alert"]; if(![SYFCustomCLASS SYFIsEmptyOrNull:pushName]) [self getPushInfo:app.userInfo]; 如果有通知的话: -(void)getPushInfo:(NSDictionary *)dict{ if(!IsLogin){// 判断用户是否登录 LoginViewController * loginVC = [[LoginViewController alloc] initWithNibName:@"LoginViewController" bundle:nil]; //通知必返回要通知用户的uid, 判断登录的用户是不是你要通知的用户 loginVC.push_uid = dict[@"uid"]; FDNavigationController * loginNav = [[FDNavigationController alloc] initWithRootViewController:loginVC]; [self presentViewController:loginNav animated:YES completion:^{}]; }else {// 这就文章标题说的某一界面 MyUserMessageVC * messageVC = [[MyUserMessageVC alloc] initWithNibName:@"MyUserMessageVC" bundle:nil]; [self.navigationController pushViewController:messageVC animated:YES]; } } 当用户的登录成功的时候,在上传device_token的接口方法里, 需要判断登录用户的uid是不是你要通知的用户。如果不是你要通知的用户,dismiss就好了,结束;如果是就要跳转到我的消息界面。 if(![self.push_uid isEqualToString:currentuid]) [self dismissViewControllerAnimated:YES completion:NULL]; else{ [self dismissViewControllerAnimated:YES completion:^{ AppDelegate * app = (AppDelegate *)[UIApplication sharedApplication].delegate; //AppDelegate的分类 [app testLoginerUidCorret]; }]; 方法如下: - (void)testLoginerUidCorret{ // 取到tabbarcontroller FDIMGBarController *tabBarController = ( FDIMGBarController*)self.window.rootViewController; // 取到navigationcontroller FDNavigationController * nav = (FDNavigationController *)tabBarController.selectedViewController; //取到nav控制器当前显示的控制器 UIViewController * baseVC = (UIViewController *)nav.visibleViewController; //如果是当前控制器是我的消息控制器的话,刷新数据即可 if([baseVC isKindOfClass:[MyUserMessageVC class]]) { MyUserMessageVC * vc = (MyUserMessageVC *)baseVC; [vc reloadMessageData]; return; } // 否则,跳转到我的消息 MyUserMessageVC * messageVC = [[MyUserMessageVC alloc] initWithNibName:@"MyUserMessageVC" bundle:nil]; [nav pushViewController:messageVC animated:YES]; } 6、应用在前台或后台的逻辑处理 逻辑和未启动时很多情况都是类似,弹出提醒框, 点击立即前往时,判断用户是否登录,如果用户登录,直接跳转到我的消息界面;没有登录请参考上面逻辑。 ** 由于第一次做推送某一页面,写的不好处,请大家多多交流指点,有疑问或更好的想法的可以和我说。 ** 推荐阅读更多精彩内容 • 概述 在多数移动应用中任何时候都只能有一个应用程序处于活跃状态,如果其他应用此刻发生了一些用户感兴趣的那么通过通知... 莫离_焱阅读 2,896评论 1 8 • 极光推送: 1.JPush当前版本是1.8.2,其SDK的开发除了正常的功能完善和扩展外也紧随苹果官方的步伐,SD... Isspace阅读 3,455评论 12 16 • 来源:崔江涛的博客 概述在多数移动应用中任何时候都只能有一个应用程序处于活跃状态,如果其他应用此刻发生了一些用户感... 阿楠有猫也有米阅读 264评论 0 0 • 一、简介 目前说的是基于ios10之前,所以有些类是过期的. 推送分为本地推送和远程推送2种。可以在应用没有打开甚... JingYa_Lu阅读 346评论 0 0 • 关于这个话题,我很早就想写一篇文章了,来说说我一步一步获得更多的知识和想法的来源。最早的时候,那当然就是看书了。这... 质子陈阅读 57评论 0 2
__label__pos
0.837432
PHP String Parsing Searching a substring with strpos Example strpos can be understood as the number of bytes in the haystack before the first occurrence of the needle. var_dump(strpos("haystack", "hay")); // int(0) var_dump(strpos("haystack", "stack")); // int(3) var_dump(strpos("haystack", "stackoverflow"); // bool(false) Checking if a substring exists Be careful with checking against TRUE or FALSE because if a index of 0 is returned an if statement will see this as FALSE. $pos = strpos("abcd", "a"); // $pos = 0; $pos2 = strpos("abcd", "e"); // $pos2 = FALSE; // Bad example of checking if a needle is found. if($pos) { // 0 does not match with TRUE. echo "1. I found your string\n"; } else { echo "1. I did not found your string\n"; } // Working example of checking if needle is found. if($pos !== FALSE) { echo "2. I found your string\n"; } else { echo "2. I did not found your string\n"; } // Checking if a needle is not found if($pos2 === FALSE) { echo "3. I did not found your string\n"; } else { echo "3. I found your string\n"; } Output of the whole example: 1. I did not found your string 2. I found your string 3. I did not found your string Search starting from an offset // With offset we can search ignoring anything before the offset $needle = "Hello"; $haystack = "Hello world! Hello World"; $pos = strpos($haystack, $needle, 1); // $pos = 13, not 0 Get all occurrences of a substring $haystack = "a baby, a cat, a donkey, a fish"; $needle = "a "; $offsets = []; // start searching from the beginning of the string for($offset = 0; // If our offset is beyond the range of the // string, don't search anymore. // If this condition is not set, a warning will // be triggered if $haystack ends with $needle // and $needle is only one byte long. $offset < strlen($haystack); ){ $pos = strpos($haystack, $needle, $offset); // we don't have anymore substrings if($pos === false) break; $offsets[] = $pos; // You may want to add strlen($needle) instead, // depending on whether you want to count "aaa" // as 1 or 2 "aa"s. $offset = $pos + 1; } echo json_encode($offsets); // [0,8,15,25]
__label__pos
0.994556
Source From Crypto++ Wiki Jump to navigation Jump to search Source.png In the Pipelining paradigm, Sources are the origin of data. They serve the opposite role of a Sink. Crypto++ provides the following stock Sources: Sources exist for different types of objects (as shown above). A StringSources requires no additional information to originate data. Other objects, such as FileSources require additional information such as a filename. Still others, such as a RandomNumberSource require a RandomNumberGenerator and byte count. RandomNumberSource.png Between Sources and Sinks are Filters which perform processing. Examples The following example demonstrates creation of a FileSource. FileSource file( filename ); The following example demonstrates reading a file, and placing the contents of the file in a string. This is known as pipelining. string s; FileSource file( filename, new StringSink( s ) ); cout << s << endl; The following example performs the same operation as above, but without the variable file. string s; FileSource( filename, true, new StringSink( s ) ); cout << s << endl; A slightly more complicated example of pipelining is below. Before the FileSource is placed in the string, it is hex decoded. string s; FileSource( filename, new HexDecoder( new StringSink( s ) ) ); cout << s << endl; Note that the HexDecoder and StringSink created with new do not require explicit destruction - the FileSource will call delete on the HexDecoder, which in turns calls delete on the StringSink when it (the FileSource) is destroyed. Finally, the example below places 4 random bytes of data into a StringSink after hex encoding using a random number source. As the chaining gets longer, nesting the chaining structure as with if statements offers readability. AutoSeededRandomPool rng; RandomNumberSource( rng, 4, true, new HexEncoder( new ArraySink( s ) ) // HexEncoder ); // RandomNumberSource With the type formatting in place, data flow through through the construct is readily apparent. TransformationFlow.png Sinks See Sinks for the corresponding Sinks. Downloads No downloads.
__label__pos
0.916287
How To Create Directories/Folders In Linux? Directories or Folders are used to store files and folders. Directories can be created from GUI by using a desktop environment and application or command-line interface commands. In this tutorial, we will learn how to create directories in Linux. This tutorial can be used for all major Linux distributions like Ubuntu, Debian, Mint, Kali, CentOS, Fedora, RHEL, SUSE, etc. mkdir Command Syntax The mkdir command the standard command used to create directories. The mkdir command has the following syntax. mkdir OPTIONS DIRECTORY • OPTIONS is used to set different attributes about the directory creation. This options is optional. • DIRECTORY is single or multiple directories which will be created. This option is required. List Existing Directories Before starting to create new directories and folders we can list existing directories and folders to get their names. Also after creating the directories and folders created directories and folders can be listed too. The ls command is used to list directories and folders. ls Create Diretory Even the mkdir provides different features we will start with a simple example where we will create a single directory with the specified name. The new directory name will be “data” for the following example. mkdir data We can list the created directory with the ls command like below. ls The absolute path or full path can be used to create a directory that is a more reliable and flexible way. By default, the given directory will be created in the current working path but by using the relative path directories can be created in other paths. In the following example, we will create a directory named data in the “/var/log“. mkdir /var/log/data Relative paths can be also used to create directories. Relative path is set according to the current working directory or given path specifier. In the following example we will create the directory named “data” in the parent directory. mkdir ../data Create Multiple Directories The mkdir command can be also used to create multiple directories in a single command. Just provides the directory names by separating them with spaces. mkdir data test backup Also, the mkdir command can be used to create multiple directories with their absolute or full path. This is a more reliable and flexible way to create directories. mkdir /etc/test /var/log/data Create Non-Existin Multi-Level Directories Multi-level directories can be created with the mkdir command even some intermediate directories do not exist. These non-existing directories and child directories will be also created by using the -p option. Just provide the child and parent directories you want to create. In the following example, the directory year and 2020 do not exist but will be created too. mkdir -p ./year/2020/December Set Permissions During Directory Creation By default when a directory is created the umask value will be set as permission. Also, the chmod command can be used to change given single or multiple directories permissions but the mkdir command also supports settings permission during creation. The -m option can be used to set different than default permissions by providing the numerical representation of the permissions. In the following example, we will create a directory named “newdir” which has 777 permission. mkdir -m 777 newdir Alternatively multiple created directories permissions can be set with the -m option like below. mkdir -m 777 newdir data backup “cannot create directory: File exists” Error The “cannot create directory: File exists” error occurs when the given directory is already exist. So the new directory can not be created and overwritten. READ  How To Page Up and Page Down In Vim/Vi? “cannot create directory: No such a file or directory” Error The “cannot create directory: File exists” error can occur in different situations where any of the parent directories in the given path do not exist and so the given directory can not be created because of the parent directory. This error can be solved by creating parent directories too by using the -p option which is described previosuly. mkdir -p /home/ismail/Music/Rammstein/2000 Create Directory with GUI The GUI tools like File Manager etc. can be used to create directories. But it has limited functionalities according to the mkdir command. For example multiple directories can not be created with GUI. Below we will create a new directory by right-clicking and selecting the “New Folder“. Create New Folder/Directory In this step, we will specify the directory name we want to create in the current path. The last step is clicking on the “Create” button. Leave a Comment
__label__pos
0.92985
Unit: Mathematics Program: Mathematics (BA) Degree: Bachelor's Date: Tue Nov 17, 2020 - 2:25:07 pm 1) Program Student Learning Outcomes (SLOs) and Institutional Learning Objectives (ILOs) 1. Recipients of an undergraduate degree in mathematics are expected to learn, understand, and be able to apply: calculus in one and several variables. (1b. Specialized study in an academic field) 2. Recipients of an undergraduate degree in mathematics are expected to learn, understand, and be able to apply: linear algebra and the theory of vector spaces. (1b. Specialized study in an academic field) 3. Recipients of an undergraduate degree in mathematics are expected to learn, understand, and be able to apply: several mathematical topics at the junior and senior level. (1b. Specialized study in an academic field) 4. Recipients of an undergraduate degree in mathematics are expected to learn, understand, and be able to apply: in depth at least one advanced topic of mathematics. (1b. Specialized study in an academic field) 5. Students are expected to acquire the ability and skills to: develop and write direct proofs, proofs by contradiction, and proofs by induction. (1b. Specialized study in an academic field, 2a. Think critically and creatively, 2c. Communicate and report) 6. Students are expected to acquire the ability and skills to: formulate definitions and give examples and counterexamples. (1b. Specialized study in an academic field, 2a. Think critically and creatively, 2c. Communicate and report) 7. Students are expected to acquire the ability and skills to: read mathematics without supervision. (1b. Specialized study in an academic field, 2b. Conduct research, 3a. Continuous learning and personal growth) 8. Students are expected to acquire the ability and skills to: follow and explain algorithms. (1b. Specialized study in an academic field, 2a. Think critically and creatively, 2c. Communicate and report) 9. Students are expected to acquire the ability and skills to: apply mathematics to other fields. (1a. General education, 1b. Specialized study in an academic field, 2a. Think critically and creatively, 2c. Communicate and report) 10. Recipients of an undergraduate degree in mathematics are expected to have learned about research in mathematics. (1b. Specialized study in an academic field, 2b. Conduct research, 3a. Continuous learning and personal growth) 2) Your program's SLOs are published as follows. Please update as needed. Department Website URL: http://math.hawaii.edu/wordpress/program-goals/ Student Handbook. URL, if available online: Information Sheet, Flyer, or Brochure URL, if available online: UHM Catalog. Page Number: Course Syllabi. URL, if available online: http://math.hawaii.edu/wordpress/syllabi/ Other: 3) Please review, add, replace, or delete the existing curriculum map. Curriculum Map File(s) from 2020: 4) For your program, the percentage of courses that have course SLOs explicitly stated on the syllabus, a website, or other publicly available document is as follows. Please update as needed. 0% 1-50% 51-80% 81-99% 100% 5) Does the program have learning achievement results for its program SLOs? (Example of achievement results: "80% of students met expectations on SLO 1.")(check one): No Yes, on some(1-50%) of the program SLOs Yes, on most(51-99%) of the program SLOs Yes, on all(100%) of the program SLOs 6) Did your program engage in any program learning assessment activities between November 1, 2018 and October 31, 2020? Yes No (skip to question 17) 7) What best describes the program-level learning assessment activities that took place for the period November 1, 2018 and October 31, 2020? (Check all that apply.) Create/modify/discuss program learning assessment procedures (e.g., SLOs, curriculum map, mechanism to collect student work, rubric, survey) Collect/evaluate student work/performance to determine SLO achievement Collect/analyze student self-reports of SLO achievement via surveys, interviews, or focus groups Use assessment results to make programmatic decisions (e.g., change course content or pedagogy, design new course, hiring) Investigate other pressing issue related to student learning achievement for the program (explain in question 8) Other: 8) Briefly explain the assessment activities that took place since November 2018. All mathematics undergraduate majors are required to take a capstone seminar (Math 480), and as part of the seminar they take the assessment exam. The exam is written and it has three parts; Part I (Calculus and Linear Algebra), Part II (Differential Equations, Basic Proofs, and Examples) and Part III (Problems and Theorems from Senior Courses). Students do not receive credit for the course if they do not take the exam. In addition, the students prepare a research project, submit a 3-5 page paper written in LaTeX, and give oral presentations.   9) What types of evidence did the program use as part of the assessment activities checked in question 7? (Check all that apply.) Artistic exhibition/performance Assignment/exam/paper completed as part of regular coursework and used for program-level assessment Capstone work product (e.g., written project or non-thesis paper) Exam created by an external organization (e.g., professional association for licensure) Exit exam created by the program IRB approval of research Oral performance (oral defense, oral presentation, conference presentation) Portfolio of student work Publication or grant proposal Qualifying exam or comprehensive exam for program-level assessment in addition to individual student evaluation (graduate level only) Supervisor or employer evaluation of student performance outside the classroom (internship, clinical, practicum) Thesis or dissertation used for program-level assessment in addition to individual student evaluation Alumni survey that contains self-reports of SLO achievement Employer meetings/discussions/survey/interview of student SLO achievement Interviews or focus groups that contain self-reports of SLO achievement Student reflective writing assignment (essay, journal entry, self-assessment) on their SLO achievement. Student surveys that contain self-reports of SLO achievement Assessment-related such as assessment plan, SLOs, curriculum map, etc. Program or course materials (syllabi, assignments, requirements, etc.) Other 1: Other 2: 10) State the number of students (or persons) who submitted evidence that was evaluated. If applicable, please include the sampling technique used. 24 students in 2019-2020 (both BA and BS) 20 students in 2018-2019 (both BA and BS); because of time conflict 2 students took it as a reading course.   11) Who interpreted or analyzed the evidence that was collected? (Check all that apply.) Course instructor(s) Faculty committee Ad hoc faculty group Department chairperson Persons or organization outside the university Faculty advisor Advisors (in student support services) Students (graduate or undergraduate) Dean/Director Other: 12) How did they evaluate, analyze, or interpret the evidence? (Check all that apply.) Used a rubric or scoring guide Scored exams/tests/quizzes Used professional judgment (no rubric or scoring guide used) Compiled survey results Used qualitative methods on interview, focus group, open-ended response data External organization/person analyzed data (e.g., external organization administered and scored the nursing licensing exam) Other: 13) Summarize the results from the evaluation, analysis, interpretation of evidence (checked in question 12). For example, report the percentage of students who achieved each SLO. The results of assessment activities as they relate to the SLOs are summarized below: (1)  Learn, understand and be able to apply: Calculus in one and several variables. In Part I of the exam students demonstrated reasonable skill set developed in calculus classes. Most students made some progress on most problems.  As is fairly typical, the weakest problem was one based on Taylor series. That is the only problem in Part I which requires a certain level of understanding beyond the skillset, and low scores on this problem have persisted for many years.  In Spring 2019 in addition to the assessment exam, class time was spent discussing several advanced calculus topics such as error estimates for numerical integration, and the non-existence of nowhere zero vector fields on the sphere. Students actively participated in these discussions and were generally able to offer substantive and considered comments, which was very encouraging.  (2) Learn, understand and be able to apply:  Linear algebra and the theory of vector spaces. Linear algebra questions from Parts I and II of the exam were answered well by the majority of students.  Approximately half of students took more advanced linear algebra Math 411.  In Spring 2020 some of those students gave reasonable answers to related questions from Part III of the exam.  However, in Spring 2019 only two students made substantive attempts at questions based on the advanced 400-level linear algebra course.  In addition, a large portion of class time was spent discussing problems from the textbook on linear algebra used in the Math 480 course. The topics covered were typically applications of linear algebra to combinatorics, geometry, and some disciplines outside mathematics. The level of these presentations was variable (both in terms of communicating the ideas, and in terms of understanding the ideas), but was generally quite good. Students participated in each others’ talks, asking sensible questions, and seem to follow in the most part.  (3) Learn, understand and be able to apply: Several mathematical topics at the junior and senior level. At the junior level, student responses to the questions about Math 311, 321, and 331 material were patchy. There were some very good responses, but a large number with fairly poor responses. Especially the performance on the questions related to Math 331 went down compared to previous years. Our assessment activities are not designed to judge student understanding in junior-level courses outside this core.  The number of 400 courses taken is in general far above our minimum requirements (two courses at the 400-level for both the BA and BS), which is encouraging. In Spring 2019 in terms of the assessment exam, the most popular areas for response were M 471/2 (10 students attempted questions), M 420 (10 students), M 412/3 (7 students), M 407 (5 students), and M 431 (4 students).  However, there were no substantive responses to questions from some important areas of mathematics: notably, complex analysis, or any part of foundations, which is worrying.  Having said that, one student did present on a topic from M 444, so it seems at least some students are taking and enjoying topics from that course.  In Spring 2020 it was also noted that the questions related to Math 431 in Part III of the exam got no responses at all.  (4) Learn, understand and be able to apply: In depth at least one advanced topic of    mathematics.  Although 400 level courses are more popular than required, which is certainly a good sign, the students have stopped taking two-course sequences since that requirement was removed a few years ago.  Among the students taking the capstone course in Spring 2020 only one student took two-course sequences; this student actually took two of them: M 412 − 413 and M 471 − 472. Hence it is difficult to assess this SLO as we do not consistently offer as many two-course sequences as we used to.             (5)  Students are expected to acquire the ability and skills to: Develop and write direct   proofs, proofs by contradiction, and proofs by induction.  This SLO was primarily checked with Part II of the exam. The majority of students demonstrated their ability to create and reasonably neatly write down a simple mathematical argument.  This was also confirmed in Math 480 class: during their presentations, many students demonstrated their ability to devise and explain a simple mathematical argument. However, the inclination towards taking mostly computational courses manifested itself: some students encounter conceptual difficulties with more sophisticated “non-constructive proof of existence” arguments.  This is concerning, but consistent with results in the last 5 years.  (6)  Students are expected to acquire the ability and skills to: Formulate definitions and give examples and counterexamples  As in (5) this was assessed using students answers to Part II of assessment exam as well as their presentations in Math 480. Both demonstrate satisfactory achievements towards this goal.  (7)  Students are expected to acquire the ability and skills to: Read mathematics without supervision. This was assessed in Math 480 class: every student had to read a topic from the textbook on their own and present the material in class. This exercise has worked out quite reasonably by the students demonstrating their satisfactory ability to read and understand mathematics without supervision.  In Spring 2019, all students also had to read mathematical texts by themselves for their final presentations based on a 400-level course. Some needed some help from the faculty teaching the course (in some cases because they chose quite ambitious topics), but in general the level of ability to read mathematics, and in some cases to search the relevant literature, seemed quite good.  (8)  Students are expected to acquire the ability and skills to: Follow and explain algorithms.  This was not assessed uniformly for all students and was not assessed on the exam. However, many topics from the text covered algorithms of one sort or another, and the students were able to successfully present these.  In addition, some students choose for their presentations topics in linear algebra which yield specific algorithms. Their explanations of the algorithms were very reasonable. In fact, some of them were double majoring in computer science; such students naturally demonstrated pretty good understanding of both algorithms and the ways to assess their efficiency. While the information is incomplete, there seems no cause for concern here.  (9)  Students are expected to acquire the ability and skills to: Apply mathematics to other fields.  Again, this was not assessed uniformly for all students, and was not assessed on the exam. However, several topics from the text covered applications to other fields of one sort or another, and the students were able to successfully present these. In addition, several students talked about interesting applications in the topics they chose personally, including psychology, computer science, and game theory. This was generally done very well.             (10)  Recipients of an undergraduate degree in mathematics are expected to have learned about research in Mathematics.  With regards to learning about research in mathematics, the senior seminar was successful in exposing students to the different areas of mathematical research particularly with the wide range of undergraduate seminars in Spring 2019. However, due to the pandemic in Spring 2020 there were no undergraduate seminars.   14) What best describes how the program used the results? (Check all that apply.) Assessment procedure changes (SLOs, curriculum map, rubrics, evidence collected, sampling, communications with faculty, etc.) Course changes (course content, pedagogy, courses offered, new course, pre-requisites, requirements) Personnel or resource allocation changes Program policy changes (e.g., admissions requirements, student probation policies, common course evaluation form) Students' out-of-course experience changes (advising, co-curricular experiences, program website, program handbook, brown-bag lunches, workshops) Celebration of student success! Results indicated no action needed because students met expectations Use is pending (typical reasons: insufficient number of students in population, evidence not evaluated or interpreted yet, faculty discussions continue) Other: 15) Please briefly describe how the program used its findings/results. The results were discussed at the faculty meeting, and the Curriculum Committee is considering changes that may be warranted. For example, we are evaluating the rotation of our 400 level courses and revising our abstract algebra courses. In Spring 2020 we introduced a new geometry course, Math 353, which combines topics from Math 351 and 352.      16) Beyond the results, were there additional conclusions or discoveries? This can include insights about assessment procedures, teaching and learning, and great achievements regarding program assessment in this reporting period. Overall, the students’ performance, both on the assessment exam and in Math 480 class is satisfactory, and the program SLOs have been reached. We are satisfied that more students are taking more than required number of 400 level classes and will continue our advising efforts in that direction. The main concern is that students seem to have difficulty writing correct logical arguments. This is a central skill in all of mathematics and seemed very uneven. Our curriculum is designed for students to pick up these skills in the required junior-level courses: 311, 321, and 331. It may be worth revisiting the effectiveness of these courses. It seemed students were better able to speak about, discuss, and read mathematics than they are able to write it. It might be particularly worth revisiting the effectiveness of the above courses (particularly 321 and 331, which are ‘Writing Intensive’) in teaching students how to write mathematics well.  A possible explanation for the disparity between students’ other abilities and their writing is simply that some of them do not take the assessment exams very seriously (they are not incentivized to do so in any way). It might be worth altering the course to better incentivize this.  In addition, for Spring 2020 assessment, very few students took more theoretical 400 level classes (Math 413, 421, 431, 444) which is of concern.  The students’ knowledge of Analysis is especially worrisome: not only there were no attempts to solve problems related to Math 431 and 444, but also the responses to questions related to Math 331 looked weaker than they were in the previous years. This may be a result of low enrollment in Math 431: the knowledge acquired in Math 331 becomes isolated and gets lost by those students who discontinue their studies in Analysis.      17) If the program did not engage in assessment activities, please justify.
__label__pos
0.762235
This is a live mirror of the Perl 5 development currently hosted at https://github.com/perl/perl5 Update documentation for hash_seed() [perl5.git] / perlvars.h 1 /*    perlvars.h 2  * 3  *    Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 4  *    by Larry Wall and others 5  * 6  *    You may distribute under the terms of either the GNU General Public 7  *    License or the Artistic License, as specified in the README file. 8  * 9  */ 10 11 /* 12 =head1 Global Variables 13 These variables are global to an entire process.  They are shared between 14 all interpreters and all threads in a process.  Any variables not documented 15 here may be changed or removed without notice, so don't use them! 16 If you feel you really do need to use an unlisted variable, first send email to 17 L<[email protected]|mailto:[email protected]>.  It may be that 18 someone there will point out a way to accomplish what you need without using an 19 internal variable.  But if not, you should get a go-ahead to document and then 20 use the variable. 21 22 =cut 23 */ 24 25 /* Don't forget to re-run regen/embed.pl to propagate changes! */ 26 27 /* This file describes the "global" variables used by perl 28  * This used to be in perl.h directly but we want to abstract out into 29  * distinct files which are per-thread, per-interpreter or really global, 30  * and how they're initialized. 31  * 32  * The 'G' prefix is only needed for vars that need appropriate #defines 33  * generated in embed*.h.  Such symbols are also used to generate 34  * the appropriate export list for win32. */ 35 36 /* global state */ 37 #if defined(USE_ITHREADS) 38 PERLVAR(G, op_mutex,    perl_mutex)     /* Mutex for op refcounting */ 39 #endif 40 PERLVARI(G, curinterp,  PerlInterpreter *, NULL) 41                                         /* currently running interpreter 42                                          * (initial parent interpreter under 43                                          * useithreads) */ 44 #if defined(USE_ITHREADS) 45 PERLVAR(G, thr_key,     perl_key)       /* key to retrieve per-thread struct */ 46 #endif 47 48 /* XXX does anyone even use this? */ 49 PERLVARI(G, do_undump,  bool,   FALSE)  /* -u or dump seen? */ 50 51 #ifndef PERL_USE_SAFE_PUTENV 52 PERLVARI(G, use_safe_putenv, bool, TRUE) 53 #endif 54 55 #if defined(FAKE_PERSISTENT_SIGNAL_HANDLERS)||defined(FAKE_DEFAULT_SIGNAL_HANDLERS) 56 PERLVARI(G, sig_handlers_initted, int, 0) 57 #endif 58 #ifdef FAKE_PERSISTENT_SIGNAL_HANDLERS 59 PERLVARA(G, sig_ignoring, SIG_SIZE, int) 60                                         /* which signals we are ignoring */ 61 #endif 62 #ifdef FAKE_DEFAULT_SIGNAL_HANDLERS 63 PERLVARA(G, sig_defaulting, SIG_SIZE, int) 64 #endif 65 66 /* XXX signals are process-wide anyway, so we 67  * ignore the implications of this for threading */ 68 #ifndef HAS_SIGACTION 69 PERLVARI(G, sig_trapped, int,   0) 70 #endif 71 72 #ifndef PERL_MICRO 73 /* If Perl has to ignore SIGPFE, this is its saved state. 74  * See perl.h macros PERL_FPU_INIT and PERL_FPU_{PRE,POST}_EXEC. */ 75 PERLVAR(G, sigfpe_saved, Sighandler_t) 76 77 /* these ptrs to functions are to avoid linkage problems; see 78  * perl-5.8.0-2193-g5c1546dc48 79  */ 80 PERLVARI(G, csighandlerp,  Sighandler_t,  Perl_csighandler) 81 PERLVARI(G, csighandler1p, Sighandler1_t, Perl_csighandler1) 82 PERLVARI(G, csighandler3p, Sighandler3_t, Perl_csighandler3) 83 #endif 84 85 /* This is constant on most architectures, a global on OS/2 */ 86 #ifdef OS2 87 PERLVARI(G, sh_path,    char *, SH_PATH) /* full path of shell */ 88 #endif 89 90 #ifdef USE_PERLIO 91 92 #  if defined(USE_ITHREADS) 93 PERLVAR(G, perlio_mutex, perl_mutex)    /* Mutex for perlio fd refcounts */ 94 #  endif 95 96 PERLVARI(G, perlio_fd_refcnt, int *, 0) /* Pointer to array of fd refcounts.  */ 97 PERLVARI(G, perlio_fd_refcnt_size, int, 0) /* Size of the array */ 98 PERLVARI(G, perlio_debug_fd, int, 0)    /* the fd to write perlio debug into, 0 means not set yet */ 99 #endif 100 101 #ifdef HAS_MMAP 102 PERLVARI(G, mmap_page_size, IV, 0) 103 #endif 104 105 #if defined(USE_ITHREADS) 106 PERLVAR(G, hints_mutex, perl_mutex)    /* Mutex for refcounted he refcounting */ 107 #  if ! defined(USE_THREAD_SAFE_LOCALE) || defined(TS_W32_BROKEN_LOCALECONV) 108 PERLVAR(G, locale_mutex, perl_mutex)   /* Mutex for setlocale() changing */ 109 #  endif 110 #  ifndef USE_THREAD_SAFE_LOCALE 111 PERLVAR(G, lc_numeric_mutex, perl_mutex)   /* Mutex for switching LC_NUMERIC */ 112 #  endif 113 #endif 114 115 #ifdef USE_POSIX_2008_LOCALE 116 PERLVAR(G, C_locale_obj, locale_t) 117 #endif 118 119 PERLVARI(G, watch_pvx,  char *, NULL) 120 121 /* 122 =for apidoc AmnU|Perl_check_t *|PL_check 123 124 Array, indexed by opcode, of functions that will be called for the "check" 125 phase of optree building during compilation of Perl code.  For most (but 126 not all) types of op, once the op has been initially built and populated 127 with child ops it will be filtered through the check function referenced 128 by the appropriate element of this array.  The new op is passed in as the 129 sole argument to the check function, and the check function returns the 130 completed op.  The check function may (as the name suggests) check the op 131 for validity and signal errors.  It may also initialise or modify parts of 132 the ops, or perform more radical surgery such as adding or removing child 133 ops, or even throw the op away and return a different op in its place. 134 135 This array of function pointers is a convenient place to hook into the 136 compilation process.  An XS module can put its own custom check function 137 in place of any of the standard ones, to influence the compilation of a 138 particular type of op.  However, a custom check function must never fully 139 replace a standard check function (or even a custom check function from 140 another module).  A module modifying checking must instead B<wrap> the 141 preexisting check function.  A custom check function must be selective 142 about when to apply its custom behaviour.  In the usual case where 143 it decides not to do anything special with an op, it must chain the 144 preexisting op function.  Check functions are thus linked in a chain, 145 with the core's base checker at the end. 146 147 For thread safety, modules should not write directly to this array. 148 Instead, use the function L</wrap_op_checker>. 149 150 =cut 151 */ 152 153 #if defined(USE_ITHREADS) 154 PERLVAR(G, check_mutex, perl_mutex)     /* Mutex for PL_check */ 155 #endif 156 #ifdef PERL_GLOBAL_STRUCT  157 PERLVAR(G, ppaddr,      Perl_ppaddr_t *) /* or opcode.h */ 158 PERLVAR(G, check,       Perl_check_t *) /* or opcode.h */ 159 PERLVARA(G, fold_locale, 256, unsigned char) /* or perl.h */ 160 #endif 161 162 #ifdef PERL_NEED_APPCTX 163 PERLVAR(G, appctx,      void*)          /* the application context */ 164 #endif 165 166 #if defined(HAS_TIMES) && defined(PERL_NEED_TIMESBASE) 167 PERLVAR(G, timesbase,   struct tms) 168 #endif 169 170 /* allocate a unique index to every module that calls MY_CXT_INIT */ 171 172 #ifdef PERL_IMPLICIT_CONTEXT 173 # ifdef USE_ITHREADS 174 PERLVAR(G, my_ctx_mutex, perl_mutex) 175 # endif 176 PERLVARI(G, my_cxt_index, int,  0) 177 #endif 178 179 /* this is currently set without MUTEX protection, so keep it a type which 180  * can be set atomically (ie not a bit field) */ 181 PERLVARI(G, veto_cleanup, int, FALSE)   /* exit without cleanup */ 182 183 /* 184 =for apidoc AmnUx|Perl_keyword_plugin_t|PL_keyword_plugin 185 186 Function pointer, pointing at a function used to handle extended keywords. 187 The function should be declared as 188 189         int keyword_plugin_function(pTHX_ 190                 char *keyword_ptr, STRLEN keyword_len, 191                 OP **op_ptr) 192 193 The function is called from the tokeniser, whenever a possible keyword 194 is seen.  C<keyword_ptr> points at the word in the parser's input 195 buffer, and C<keyword_len> gives its length; it is not null-terminated. 196 The function is expected to examine the word, and possibly other state 197 such as L<%^H|perlvar/%^H>, to decide whether it wants to handle it 198 as an extended keyword.  If it does not, the function should return 199 C<KEYWORD_PLUGIN_DECLINE>, and the normal parser process will continue. 200 201 If the function wants to handle the keyword, it first must 202 parse anything following the keyword that is part of the syntax 203 introduced by the keyword.  See L</Lexer interface> for details. 204 205 When a keyword is being handled, the plugin function must build 206 a tree of C<OP> structures, representing the code that was parsed. 207 The root of the tree must be stored in C<*op_ptr>.  The function then 208 returns a constant indicating the syntactic role of the construct that 209 it has parsed: C<KEYWORD_PLUGIN_STMT> if it is a complete statement, or 210 C<KEYWORD_PLUGIN_EXPR> if it is an expression.  Note that a statement 211 construct cannot be used inside an expression (except via C<do BLOCK> 212 and similar), and an expression is not a complete statement (it requires 213 at least a terminating semicolon). 214 215 When a keyword is handled, the plugin function may also have 216 (compile-time) side effects.  It may modify C<%^H>, define functions, and 217 so on.  Typically, if side effects are the main purpose of a handler, 218 it does not wish to generate any ops to be included in the normal 219 compilation.  In this case it is still required to supply an op tree, 220 but it suffices to generate a single null op. 221 222 That's how the C<*PL_keyword_plugin> function needs to behave overall. 223 Conventionally, however, one does not completely replace the existing 224 handler function.  Instead, take a copy of C<PL_keyword_plugin> before 225 assigning your own function pointer to it.  Your handler function should 226 look for keywords that it is interested in and handle those.  Where it 227 is not interested, it should call the saved plugin function, passing on 228 the arguments it received.  Thus C<PL_keyword_plugin> actually points 229 at a chain of handler functions, all of which have an opportunity to 230 handle keywords, and only the last function in the chain (built into 231 the Perl core) will normally return C<KEYWORD_PLUGIN_DECLINE>. 232 233 For thread safety, modules should not set this variable directly. 234 Instead, use the function L</wrap_keyword_plugin>. 235 236 =cut 237 */ 238 239 #if defined(USE_ITHREADS) 240 PERLVAR(G, keyword_plugin_mutex, perl_mutex)   /* Mutex for PL_keyword_plugin */ 241 #endif 242 PERLVARI(G, keyword_plugin, Perl_keyword_plugin_t, Perl_keyword_plugin_standard) 243 244 PERLVARI(G, op_sequence, HV *, NULL)    /* dump.c */ 245 PERLVARI(G, op_seq,     UV,     0)      /* dump.c */ 246 247 #ifdef USE_ITHREADS 248 PERLVAR(G, dollarzero_mutex, perl_mutex) /* Modifying $0 */ 249 #endif 250 251 /* Restricted hashes placeholder value. 252    In theory, the contents are never used, only the address. 253    In practice, &PL_sv_placeholder is returned by some APIs, and the calling 254    code is checking SvOK().  */ 255 256 PERLVAR(G, sv_placeholder, SV) 257 258 #if defined(MYMALLOC) && defined(USE_ITHREADS) 259 PERLVAR(G, malloc_mutex, perl_mutex)    /* Mutex for malloc */ 260 #endif 261 262 PERLVARI(G, hash_seed_set, bool, FALSE) /* perl.c */ 263 PERLVARA(G, hash_seed, PERL_HASH_SEED_BYTES, unsigned char) /* perl.c and hv.h */ 264 #if defined(PERL_HASH_STATE_BYTES) 265 PERLVARA(G, hash_state, PERL_HASH_STATE_BYTES, unsigned char) /* perl.c and hv.h */ 266 #endif 267 #if defined(PERL_USE_SINGLE_CHAR_HASH_CACHE) 268 PERLVARA(G, hash_chars, (1+256) * sizeof(U32), unsigned char) /* perl.c and hv.h */ 269 #endif 270 271 /* The path separator can vary depending on whether we're running under DCL or 272  * a Unix shell. 273  */ 274 #ifdef __VMS 275 PERLVAR(G, perllib_sep, char) 276 #endif 277 278 /* Definitions of user-defined \p{} properties, as the subs that define them 279  * are only called once */ 280 PERLVARI(G, user_def_props,     HV *, NULL) 281 282 #if defined(USE_ITHREADS) 283 PERLVAR(G, user_def_props_aTHX, PerlInterpreter *)  /* aTHX that user_def_props 284                                                        was defined in */ 285 PERLVAR(G, user_prop_mutex, perl_mutex)    /* Mutex for manipulating 286                                               PL_user_defined_properties */ 287 #endif 288 289 /* Everything that folds to a given character, for case insensitivity regex 290  * matching */ 291 PERLVAR(G, utf8_foldclosures, SV *) 292 293 /* these record the best way to to perform certain IO operations while 294  * atomically setting FD_CLOEXEC. On the first call, a probe is done 295  * and the result recorded for use by subsequent calls. 296  * In theory these variables aren't thread-safe, but the worst that can 297  * happen is that two treads will both do an initial probe 298  */ 299 PERLVARI(G, strategy_dup,        int, 0)        /* doio.c */ 300 PERLVARI(G, strategy_dup2,       int, 0)        /* doio.c */ 301 PERLVARI(G, strategy_open,       int, 0)        /* doio.c */ 302 PERLVARI(G, strategy_open3,      int, 0)        /* doio.c */ 303 PERLVARI(G, strategy_mkstemp,    int, 0)        /* doio.c */ 304 PERLVARI(G, strategy_socket,     int, 0)        /* doio.c */ 305 PERLVARI(G, strategy_accept,     int, 0)        /* doio.c */ 306 PERLVARI(G, strategy_pipe,       int, 0)        /* doio.c */ 307 PERLVARI(G, strategy_socketpair, int, 0)        /* doio.c */ 308 309 #ifdef PERL_IMPLICIT_CONTEXT 310 #  ifdef PERL_GLOBAL_STRUCT_PRIVATE 311 /* per-module array of pointers to MY_CXT_KEY constants. 312  * It simulates each module having a static my_cxt_index var on builds 313  * which don't allow static vars */ 314 PERLVARI(G, my_cxt_keys, const char **, NULL) 315 PERLVARI(G, my_cxt_keys_size, int,      0)      /* size of PL_my_cxt_keys */ 316 #  endif 317 #endif
__label__pos
0.716223
website security The Role of Quantum Computing in Website Security Quantum computing is poised to revolutionize various aspects of technology, including cybersecurity. In the realm of website security, quantum computing presents both opportunities and challenges that demand careful consideration. In this article, US Logo and Web delves into the transformative role of quantum computing in website security, exploring topics such as encryption breakthroughs, post-quantum cryptography, secure communication protocols, random number generation, threats, and challenges. By understanding these key aspects, businesses and cybersecurity professionals can better navigate the evolving landscape of quantum security and adopt strategies to safeguard sensitive data and communication channels effectively. Encryption Breakthroughs Traditional encryption methods, such as RSA (Rivest-Shamir-Adleman) and ECC (Elliptic Curve Cryptography), rely on the difficulty of certain mathematical problems for security. For instance, RSA encryption is based on the challenge of factoring large composite numbers into their prime factors, a task believed to be computationally infeasible for classical computers beyond a certain size of numbers. Similarly, ECC relies on the difficulty of solving the elliptic curve discrete logarithm problem. However, quantum computers have the potential to break these encryption schemes efficiently. Quantum algorithms like Shor’s algorithm can factor large numbers and solve discrete logarithm problems exponentially faster than classical algorithms. This breakthrough in encryption-breaking capabilities poses a significant threat to the security of data transmitted over the web using traditional encryption methods. Post-Quantum Cryptography (PQC) To address the vulnerabilities exposed by quantum computing, researchers and cryptographic experts are actively developing post-quantum cryptographic algorithms. These algorithms are designed to resist attacks from both classical and quantum computers, ensuring long-term security for sensitive data. Post-quantum cryptographic algorithms explore mathematical problems that are believed to be hard even for quantum computers. Examples include lattice-based cryptography, code-based cryptography, multivariate polynomial cryptography, and hash-based cryptography. These algorithms offer a diverse range of approaches to encryption that are resilient against quantum attacks. The adoption of post-quantum cryptographic standards is crucial for maintaining secure communication channels on websites and other digital platforms. It represents a proactive strategy to mitigate the risks posed by the emergence of quantum computing in the realm of cybersecurity. Secure Communication Protocols Quantum computing not only challenges traditional encryption methods but also paves the way for innovative secure communication protocols. One notable example is Quantum Key Distribution (QKD), a protocol that leverages the principles of quantum mechanics to achieve secure key exchange between communicating parties. QKD relies on the properties of quantum entanglement and the Heisenberg uncertainty principle to ensure that any eavesdropping attempts are detectable, thereby guaranteeing the security of the exchanged cryptographic keys. This quantum-resistant key exchange mechanism offers a higher level of security compared to classical key exchange protocols vulnerable to quantum attacks. Integrating secure communication protocols like QKD into website security frameworks enhances the confidentiality and integrity of data transmission, particularly in environments where sensitive information is exchanged over the internet. Random Number Generation Random numbers play a crucial role in cryptographic operations, such as generating encryption keys and ensuring the unpredictability of cryptographic protocols. Quantum computing offers a unique advantage in generating true random numbers, which are inherently unpredictable and unbiased. Traditional pseudo-random number generators (PRNGs) are deterministic and rely on algorithms to produce seemingly random sequences of numbers. However, these sequences can exhibit patterns or vulnerabilities that quantum computers may exploit. In contrast, quantum random number generators (QRNGs) harness quantum phenomena, such as photon polarization or electron tunneling, to generate truly random numbers. This randomness is fundamental to cryptographic security, providing a solid foundation for key generation and cryptographic operations in website security protocols. Threats and Challenges While quantum computing brings significant advancements to website security, it also introduces new threats and challenges that organizations must address. One major concern is the potential misuse of quantum computing power for malicious purposes, such as breaking into encrypted systems or conducting cyberattacks with unprecedented speed and efficiency. Additionally, the transition to post-quantum cryptographic standards requires careful planning and coordination across industries to ensure compatibility, interoperability, and robustness. Organizations need to invest in research and development efforts to implement quantum-resistant security solutions effectively. Moreover, the field of quantum computing itself is rapidly evolving, with ongoing research and discoveries that may impact the security landscape. Staying informed about the latest developments in quantum technology and cybersecurity in web development is essential for mitigating risks and adapting to future challenges. In conclusion, quantum computing presents both opportunities and challenges for website security. By embracing post-quantum cryptographic standards, adopting secure communication protocols, leveraging quantum random number generation, and addressing web development service can enhance their cybersecurity posture in the quantum era. Quantum-Safe Cryptography Standards As quantum computing advances, the need for quantum-safe cryptography becomes increasingly urgent. Quantum-safe cryptography, also known as quantum-resistant or quantum-proof cryptography, involves designing cryptographic algorithms that can withstand attacks from both classical and quantum computers. The development of quantum-safe standards is a collaborative effort involving academia, industry, and standardization bodies. Organizations like NIST (National Institute of Standards and Technology) are actively evaluating and standardizing post-quantum cryptographic algorithms to establish a right web development framework for quantum-resistant security practices. By adopting quantum-safe cryptography standards, website operators can future-proof their security infrastructure against potential quantum threats, ensuring the continued confidentiality, integrity, and authenticity of sensitive data. Quantum Cryptanalysis and Vulnerability Assessment Quantum computing not only poses challenges for encryption but also enables new techniques for cryptanalysis. Quantum cryptanalysis explores methods for attacking cryptographic systems using quantum algorithms and principles, such as quantum Fourier transforms and quantum walks. Vulnerability assessment in the quantum era involves evaluating the resilience of existing cryptographic protocols and systems against quantum attacks. Techniques like quantum key extraction, quantum side-channel attacks, and quantum-inspired algorithms are studied to identify weaknesses and develop countermeasures. Understanding the capabilities and limitations of quantum cryptanalysis is essential for designing robust security architectures that can withstand sophisticated attacks in the quantum computing era. Quantum-Resilient Infrastructure Building quantum-resilient infrastructure entails integrating quantum-safe security measures into the fabric of digital ecosystems, including websites, cloud services, and network communications. This involves deploying quantum-resistant cryptographic algorithms, implementing secure communication protocols, and ensuring hardware and software components are resilient to quantum threats. Quantum-resilient infrastructure also encompasses secure key management practices, quantum-resistant authentication mechanisms, and continuous monitoring and updates to adapt to evolving threats. Collaboration with cybersecurity experts and leveraging quantum security solutions from trusted vendors are key strategies for enhancing quantum resilience. Quantum-Enhanced Security Solutions Beyond mitigating quantum threats, quantum computing can also be leveraged to enhance security in novel ways. Quantum-enhanced security solutions leverage quantum principles, such as quantum entanglement, quantum teleportation, and quantum key distribution, to achieve unprecedented levels of security and privacy. Examples of quantum-enhanced security solutions include quantum-secure communication networks, quantum-resistant blockchain technologies, and quantum-inspired anomaly detection systems. These innovations offer a glimpse into the future of cybersecurity, where quantum technologies play a central role in safeguarding digital assets and infrastructure. Quantum Awareness and Education Lastly, promoting quantum awareness and education is essential for empowering individuals and organizations to navigate the complexities of quantum computing and cybersecurity. Training programs, workshops, and educational resources on quantum-safe practices, quantum cryptography, and quantum computing fundamentals can bridge the knowledge gap and foster a quantum-aware culture. By fostering a community of quantum-literate professionals, researchers, and decision-makers, we can collectively address the challenges and opportunities presented by quantum computing in website security and beyond. In summary, quantum computing’s impact on website security extends beyond encryption to encompass quantum-safe standards, cryptanalysis, resilient infrastructure, innovative security solutions, and quantum awareness. Embracing these aspects enables organizations to navigate the quantum era with confidence and resilience in the face of evolving cyber threats. Conclusion: In conclusion, the emergence of quantum computing has ushered in a new era in website security, bringing both unprecedented capabilities and complex challenges. Encryption breakthroughs and the development of post-quantum cryptographic algorithms are reshaping how data is protected against quantum threats. Secure communication protocols like Quantum Key Distribution offer enhanced security for key exchange, while quantum random number generation ensures the unpredictability crucial for cryptographic operations. However, alongside these advancements come new threats and challenges. The potential misuse of quantum computing power and the need for a seamless transition to post-quantum standards require careful attention and proactive measures from organizations. Staying informed about quantum technology’s developments and investing in robust quantum-resistant security solutions will be pivotal in ensuring a secure digital future amidst the quantum revolution. Leave a Comment Your email address will not be published. Required fields are marked *
__label__pos
0.999816
Mergify| Docs Assign Assign a pull request to a user. The assign action allows you to assign one or more users to a pull request when the conditions you specify are met. This can help automate the process of designating the right people to handle specific pull requests, based on your conditions. This can significantly streamline your pull request management process and ensure that the right people are always aware of and working on the pull requests that require their attention. Parameters Key nameValue typeDefault add_userslist of template The users to assign to the pull request. remove_userslist of template The users to remove from assignees. As the list of users in add_users or remove_users is based on templates, you can use, e.g., {{author}} to assign the pull request to its author. Examples Assign Defined User Below is an example of how to use the assign action: pull_request_rules: - name: assign PR to a user conditions: - "#files=1" actions: assign: add_users: - "username1" - "username2" Assign Pull Request Author User Below is an example of how to use the assign action to assign a pull request to its author: pull_request_rules: - name: assign PR to its author conditions: - "#files=1" actions: assign: add_users: - "{{author}}" Edit on GitHub Company • About Us • Careers • Customers • Media Kit Products Community Help
__label__pos
0.9844
Source code for pyretis.analysis.analysis # -*- coding: utf-8 -*- # Copyright (c) 2015, PyRETIS Development Team. # Distributed under the LGPLv2.1+ License. See LICENSE for more info. """Module defining functions useful in analysis of simulation data. Important methods defined here ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ running_average (:py:func:`.running_average`) Method to calculate a running average. block_error (:py:func:`.block_error`) Perform block error analysis. block_error_corr (:py:func:`.block_error_corr`) Method to run a block error analysis and calculate relative errors and correlation length. """ import numpy as np from pyretis.analysis.histogram import histogram_and_avg __all__ = ['running_average', 'block_error', 'block_error_corr'] [docs]def running_average(data): """Create a running average of the given data. The running average will be calculated over the rows. Parameters ---------- data : numpy.array This is the data we will average. Returns ------- out : numpy.array The running average. """ one = np.ones(np.shape(data)) return data.cumsum(axis=0) / one.cumsum(axis=0) def _chunks(itera, size): """Yield successive same-sized chunks from an iterable. Parameters ---------- itera : iterable This is the iterable we will return chunks of. size : int The size of the chunks. Yields ------ out : object like `itera` The successive same-sized chunks from `itera`- Notes ----- The code is based on one question at Stackoverflow [chunks]_. References ---------- .. [chunks] Stackoverflow, "How do you split ...", http://stackoverflow.com/a/312464 """ for i in range(0, len(itera), size): yield itera[i:i+size] [docs]def block_error(data, maxblock=None, blockskip=1): """Perform block error analysis. This function will estimate the standard deviation in the input data by performing a block analysis. The number of blocks to consider can be specified or it will be taken as the half of the length of the input data. Averages and variance are calculated using a on-the-fly algorithm [1]_. Parameters ---------- data : numpy.array (or iterable with data points) The data to analyse. maxblock : int, optional Can be used to set the maximum length of the blocks to consider. Note that the `maxbloc` will never be set longer than half the length in data. blockskip : int, optional This can be used to skip certain block lengths, i.e. `blockskip = 1` will consider all blocks up to `maxblock`, while `blockskip = n` will consider every n'th block up to `maxblock`, i.e. it will use block lengths equal to `1`, `1 + n`, `1 + 2*n`, and so on. Returns ------- blocklen : numpy.array These contains the block lengths considered. block_avg : numpy.array The averages as function of block length. block_err : numpy.array Estimate of errors as function of block length. block_err_avg : float Average of the error estimate using blocks where ``length > maxblock//2``. References ---------- .. [1] Wikipedia, "Algorithms for calculating variance", http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance """ if maxblock is None or maxblock < 1: maxblock = len(data) // 2 else: maxblock = min(maxblock, len(data) // 2) # define helper variables: blocklen = np.arange(0, maxblock, blockskip, dtype=np.int_) # blocklen contains the lengths of the blocks blocklen += 1 # +1 to make blocklen[i] = length of block no i where numbering # starts at 0 -> blocklen[0] = 1 and so on. Note that arange does # create [0, ..., maxblock). block = np.zeros(len(blocklen)) # to accumulate values for a block nblock = np.zeros(block.shape) # to count number of whole blocks block_avg = np.zeros(block.shape) # to store averages in block block_var = np.zeros(block.shape) # estimator of variance for i, datai in enumerate(data): block += datai # accumulate the value to all blocks # next pick out blocks which are "full": k = np.where((i + 1) % blocklen == 0)[0] # update estimate of average and variance block[k] = block[k] / blocklen[k] nblock[k] += 1 deltas = block[k] - block_avg[k] block_avg[k] = block_avg[k] + deltas / nblock[k] block_var[k] = block_var[k] + deltas * (block[k] - block_avg[k]) # reset these blocks block[k] = 0.0 block_var = block_var / (nblock - 1) block_err = np.sqrt(block_var / nblock) # estimate of error k = np.where(blocklen > maxblock // 2)[0] block_err_avg = np.average(block_err[k]) return blocklen, block_avg, block_err, block_err_avg [docs]def block_error_corr(data, maxblock=None, blockskip=1): """Run block error analysis on the given data. This will run the block error analysis and return relative errors and the correlation length. Parameters ---------- data : numpy.array Data to analyse. maxblock : int, optional The maximum block length to consider. blockskip : int, optional This can be used to skip certain block lengths, i.e. `blockskip = 1` will consider all blocks up to `maxblock`, while `blockskip = n` will consider every n'th block up to `maxblock`, i.e. it will use block lengths equal to `1`, `1 + n`, `1 + 2*n`, and so on. Returns ------- out[0] : numpy.array These contains the block lengths considered (`blen`). out[1] : numpy.array Estimate of errors as function of block length (`berr`). out[2] : float Average of the error estimate for blocks (`berr_avg`) with ``length > maxblock // 2``. out[3] : numpy.array Estimate of relative errors normalised by the overall average as a function of block length (`rel_err`). out[4] : float The average relative error (`avg_rel_err`), for blocks with ``length > maxblock // 2``. out[5] : numpy.array The estimated correlation length as a function of block length (`ncor`). out[6] : float The average (for blocks with length > maxblock // 2) estimated correlation length (`avg_ncor`). """ blen, bavg, berr, berr_avg = block_error(data, maxblock=maxblock, blockskip=blockskip) # also calculate some relative errors: rel_err = np.divide(berr, abs(bavg[0])) avg_rel_err = np.divide(berr_avg, abs(bavg[0])) ncor = np.divide(berr**2, berr[0]**2) avg_ncor = np.divide(berr_avg**2, berr[0]**2) return blen, berr, berr_avg, rel_err, avg_rel_err, ncor, avg_ncor def mean_square_displacement(data, ndt=None): """Calculate the mean square displacement for the given data. Parameters ---------- data : numpy.array, 1D This numpy.array contain the data as a function of time. ndt : int, optional This parameter is the number of time origins. I.e. points up to ndt will be used as time origins. If not specified the value of the input ``data.size // 5`` will be used. Returns ------- msd : numpy.array, 2D First column is the mean squared displacement and the second column is the corresponding standard deviation. """ length = data.size if ndt is None or ndt < 1: ndt = length // 5 msd = [] for i in range(1, ndt): delta = (data[i:] - data[:-i])**2 msd.append((delta.mean(), delta.std())) return np.array(msd) def analyse_data(data, settings): """Analyse the given data and run some common analysis procedures. Specifically it will: 1) Calculate a running average. 2) Obtain a histogram. 3) Run a block error analysis. Parameters ---------- data : numpy.array, 1D This numpy.array contain the data as a function of time. settings : dict This dictionary contains settings for the analysis. Returns ------- result : dict This dict contains the results. """ result = {} asett = settings['analysis'] # 1) Do the running average result['running'] = running_average(data) # 2) Obtain distributions: result['distribution'] = histogram_and_avg(data, asett['bins'], density=True) # 3) Do the block error analysis: result['blockerror'] = block_error_corr(data, maxblock=asett['maxblock'], blockskip=asett['blockskip']) return result
__label__pos
0.999767
0 Имеется вот такое дерево JSON: { "username":"Named", "workers": { "Named.w1":{"alive":"1","hashrate":"60570","username":"Named.w1"}, "Named.w2":{"alive":"1","hashrate":"69105","username":"Named.w2"}, "Named.w3":{"alive":"1","hashrate":"68004","username":"Named.w3"}, "Named.w4":{"alive":"1","hashrate":"54238","username":"Named.w4"}, "Named.w5":{"alive":"1","hashrate":"52310","username":"Named.w5"}, "Named.w6":{"alive":"1","hashrate":"63323","username":"Named.w6"}, "Named.w7":{"alive":"1","hashrate":"63048","username":"Named.w7"} } } Пробовал создавать классы при помощи JSON-генератора, но классы создаются не правильно. Думаю из-за того что в username присутствует точка, и среда ругается на отсутствие класса или пространства имен. Если при создании классов точку удалить - все правильно, но не удается десериализовать в объекты. Да и идея не совсем универсальная, т.к. она будет работать только на определенного пользователя. Пробовал создать другой вариант: JObject search = JObject.Parse(text); IList<JToken> results = search["workers"].Children().ToList(); IList<User> Users = new List<User>(); foreach (JToken token in results) { Console.WriteLine(token); User user = JsonConvert.DeserializeObject<User>(token.ToString()); Users.Add(user); } Ругается: Error converting value "Named.w1" to type 'CACoinotronSendMail.User Сам класс описан вот таким образом: public class User { public string alive { get; set; } public string hashrate { get; set; } public string username { get; set; } } Как можно десериализовать такой JSON, где username не является уникальным? 2 using Newtonsoft.Json; using Newtonsoft.Json.Linq; public class User { public string alive { get; set; } public string hashrate { get; set; } public string username { get; set; } } var j = JToken.Parse(System.IO.File.ReadAllText(@"C:\Temp\json.txt")); foreach (var t in j["workers"].Children()) { var user = JsonConvert.DeserializeObject<User>(t.First.ToString()); Console.WriteLine(user.hashrate); } 1 Ваш ответ Нажимая на кнопку «Отправить ответ», вы соглашаетесь с нашими пользовательским соглашением, политикой конфиденциальности и политикой о куки Всё ещё ищете ответ? Посмотрите другие вопросы с метками или задайте свой вопрос.
__label__pos
0.996679
Alvin Alvin - 1 year ago 69 SQL Question Storing long hex value What is the best way to store long hex string that has more then 1500 chars? What is the best type? 1. Byte 2. Text 3. LongText 4. VarChar Answer Source • BYTE have fixed padding and a maximum length of 255 byte, so it's no good for you. • VARCHAR and TEXT have both a maximum length of 65535 bytes (64kB) and uses 2 more extra bytes to store the length of the data. • LONGTEXT have a maximum length of 4294967295 bytes (4GB) and uses 4 more extra bytes to store the length of the data. If you know that you will always have less than 64kB of data in your HEX string, I would choose TEXT, otherwise go with LONGTEXT. Recommended from our users: Dynamic Network Monitoring from WhatsUp Gold from IPSwitch. Free Download
__label__pos
0.999802
Routes pattern for multilanguage site Hello everyone, I’m trying to set up a route but I can’t make work the same pattern for multisite urls. Here is the code: $kirby->set('route',[ 'pattern' => '(:any)/download/([a-f0-9]{32})', 'action' => function($page_uri, $hash) { if ($file = page($page_uri)->files()->findBy('hash', $hash)) { $filename = $file->url(); return go($filename); } return go('/'); } ]); This works fine for a default language url like www.website.com/some-page/download/[hash] but it doesn’t work for a url like www.website.com/es/some-page/download/[hash]. I’ve tried adding another placeholder for the language but since my default language url doesn’t contains /en for instance it doesn’t work. Any ideas on how to solve this? It’s probably easiest to use two routes, one with language code, one without. Or you have to check for the language code. Use search to find some examples here on the forum. I will follow your advice and use two different routes. Thanks! Something like this should work too. Looks ugly but gets the job done :wink: $kirby->set('route', [ 'pattern' => '(?:(^[A-Za-z]{2})//?)?(:any)/download/([a-f0-9]{32})', 'action' => function($lang, $page_uri, $hash) { $lang = $lang ?: 'en'; // the default language // do stuff } ]); 2 Likes Thank you I will try it
__label__pos
0.648555
Online education portals like Udacity and Coursera are really changing the world of remote learning in significant ways. By making free and high quality education accessible to a global audience, these platforms are opening up undreamt of possibilities for communities around the world to improve, grow, and prosper in the digital economy of the 21st century. Education at top tier colleges and universities has traditionally been a social and economic privilege, but now anyone can join in the learning revolution by sitting in virtual classrooms with the world’s best and brightest educators. Whether this involves learning how to code and build smart phone apps, or starting up a new business, or learning about public health literacy, the sky is the limit of what’s now possible. Everything about Web and Network Monitoring The Chronology of a Click, Part XII This series has been following the chronological progress of a click on a link. The click was converted to a request, sent to the server, and processed by the server. The server created a response and sent it back to the client, where the client-side script executed. Today’s episode tells how the output from the client-side script becomes a fully-rendered web page. Part XIII tells how event-driven scripts, which execute after the page is fully loaded, affect performance.The client-side script seems to send its output to the browser’s viewport, but in reality the output is sent to the browser engine. There are several engines in common usage: WebKit is used in Chrome and Safari, Gecko is used in Firefox, Trident is used in Internet Explorer, and Presto is used in Opera.The browser engine is sometimes referred to as a rendering engine or a layout engine, but these are incorrect terms. The rendering engine and the layout engine are subcomponents of the browser engine. The layout engine is described in Build the Render Tree below. The rendering engine is described in Render the Document below.   1) Build the Content Tree The content tree, usually called the DOM tree, is a hierarchical data structure. It represents the content and structure of the document with no formatting information. Upon receipt of HTML from the client-side script, the browser engine parses the HTML, inserts it into nodes, then inserts the nodes into the appropriate place in the content tree.The content tree’s structure represents the structure of the document. The relationship between HTML tags and the HTML block in which they appear is represented by a child/parent relationship within the content tree. For example, the html node has two children, the head node and the body node. Another example: If a <div> section of the HTML contains five paragraphs, then the corresponding div node in the content tree has five p nodes dangling from it. Performance Consideration:  The HTML parser is very forgiving. When we violate HTML rules, it does its best to figure out what we meant and to present some output that might be what we’re looking for. Unfortunately, there is a performance penalty for that extra effort. For best performance, use well-formed XHTML instead of HTML. Performance Consideration:  Reduce the size of the content tree, especially its depth. 2) Build the Style Structure This section may follow Build the Content Tree, but it does not follow it chronologically. Building the content tree and building the style structure are contemporaneous – as the parser works its way through the document, it builds the content tree as it encounters HTML and it builds the style structure as it encounters formatting information. The content tree and the style structure together represent a separation of content and formatting.I expect the style structure is not cascading, but rather something like a list of already-cascaded styles that apply to nodes in the content tree. It therefore contains fully-realized formatting rules (i.e., effective styles as opposed to cascading styles). The style structure differs from browser engine to browser engine. There is no explicit or de facto standard, but that’s okay because there is no public API into it. Only the parser and the rendering engine need to understand it. Performance Consideration:  Minimize the number of CSS rules. Eliminate duplicated and unused rules.   3) Build the Render Tree The render tree is a hierarchical structure that represents the positioning and formatting of the document. Each node in the render tree is called a frame. Each frame represents a visible rectangle on the end-user’s screen. The structure of this tree reflects the structure of the visuals seen by the end-user. [It does not reflect the structure of the document. That’s what the content tree does.] The render tree links the content and its formatting together, making it ready for rendering.After the content tree and style structure are built, the layout engine uses that information to build the render tree. Now, that’s a bit of a lie, isn’t it? The content tree and the style structure need not be fully built. The layout engine can start determining positions and sizes as soon as the first little bit of information becomes available. True, it may need to duplicate some of its effort (e.g., an element that is relatively positioned above elements that have already been laid out will require those other elements to be laid out a second time), but delaying startup until the parser and layout engine are completely finished can take even longer. Performance Consideration:  JavaScript should never use the DOM during page load. Create static style sheets in the <head> instead. If the client-side script cannot produce a static style sheet for the element, perhaps the server-side script can. In any case, format the element statically and stay away from the DOM tree while the page is loading. After the page is completely loaded, it’s a different story. See the Reflow section in Part XIII for tips we can use after the page loads (i.e., in response to events). Performance Consideration:  Avoid HTML tables. Never use them for formatting; use <div>s instead. If you must use them, keep them as small as possible. Layout can almost always be calculated in a single pass, but HTML tables may need two passes. There is no correspondence between the content tree’s structure and the render tree’s. The content tree’s structure reflects the document’s HTML structure. The render tree’s structure reflects the positioning of the frames seen by the end-user, left-to-right and top-to-bottom. These two structures can be quite different. Some nodes found in the content tree are not found in the render tree. For example, the head node will not appear in the render tree because it requires no rendering. Another example: Content tree nodes that are not visible (e.g., display:none in CSS) are not included in the render tree. Some nodes found in the render tree are not found in the content tree. For example, each line of text requires a separate node in the render tree, but they are lumped together into one text node in the content tree.   4) Render the Document After the browser engine starts to build the render tree, the rendering engine kicks in and paints (or draws, if you prefer) the visuals that the user will see. In fact, if it renders slowly enough, the end-user can watch the document appear, be formatted, and move into the correct position. Gyuque uploaded these three visualizations of the rendering process to YouTube: [youtube https://www.youtube.com/watch?v=dndeRnzkJDU?rel=0&w=140&h=105] [youtube https://www.youtube.com/watch?v=ZTnIxIA5KGw?rel=0&w=140&h=105] [youtube https://www.youtube.com/watch?v=AKZ2fj8155I?rel=0&w=140&h=105] Did you notice how much repetition there is? Most frames are built and rebuilt several times. This is an example of the repetition mentioned in Build the Render Tree above. Each change can trigger any number of other changes. Rendering does not really happen after the previous step. It can begin as soon as the layout engine puts something into the render tree. Then it runs concurrently with the parser and layout engine. If the rendering engine responds too slowly to changes in the rendering tree, that is an obvious performance issue. However, responding too quickly can also become a performance issue. If a script or stylesheet changes previously-rendered nodes, the rendering engine can find itself rendering, re-rendering, re-re-rendering, and so on. [This is actually much more common than one might expect.] Performance Consideration:  Specify all formatting in the <head> and specify all content in the <body>. If a document provides formatting information after the content to which it applies, the rendering engine will use default formatting rules when it encounters the content. Later, when the layout engine sees the new formatting information, it will relayout the frame and all affected frames, then the rendering engine will have to do all its time-consuming work again. If the layout engine were to wait a few milliseconds for the content tree and style structure to settle down into their final state, it can do the layout once rather than multiple times. This will trigger one change to the render tree instead of multiple changes, which will make the rendering engine do its job once instead of multiple times. Performance Consideration:  In fact, some browsers do improve performance by purposely delaying rendering, but in one way they are at the mercy of the programmer. If the script asks for current layout information, the layout engine has to immediately process all the queued frames before it can answer the question. By clearing the queue, answering the question eliminates the performance improvements that could have been. Programmers should avoid code that queries the size (height or width) or position (top or left) of any element. 5) When onLoad Fires When the layout engine and rendering engine are finished, the <body>‘s onLoad event fires. This indicates that the document is ready for user interaction. It is fully visible and all the elements are functional. We’re good to go.JavaScript code can respond to the onLoad event. Performance Consideration:  In the grand scheme of things, it’s not really onLoad that determines when a document is interactive, it’s the end-user. Some elements must be present and fully functional before the end-user will consider the page ready to go. Other elements aren’t quite so urgent. They can wait a few milliseconds, or perhaps even a few hundred milliseconds. The developer must be aware of the end-user’s thought processes. He must know what the user needs immediately vs. what he needs in a second or two. Example: A typical user needs to see the top of the document sooner than the bottom of the document. The top is therefore needed-now and the bottom is therefore needed-soon. Other elements that may be needed-soon or needed-later: footers, advertising, dynamic content, images, etc. Code that creates needed-later elements can be executed after onLoad fires. We note that this is a perceived performance improvement, not a real one. However, from the end-user’s perspective, it is very real. He can get on with what he wants to do sooner. He may not even notice that some elements have not been fully created yet. If, however, most users do notice the delay in an element, then we have incorrectly classified the element as needed-later when it is really needed-now. Performance must always relate back to the end-user’s wait time. Conclusion The parser builds the content tree and the style structure from the HTML. The layout engine builds the render tree from the content tree and style structure. The rendering engine uses the render tree to draw the text and images that are seen by the end-user.Since the layout engine and the rendering engine execute concurrently, the layout engine can make changes to frames that have already been drawn. When this happens, the frames need to be drawn again. If this were only an occasional occurrence, it wouldn’t be a problem, but in real life it happens far too often. Part XIII tells how post-loading, event-driven scripts affect performance. Watch for it on the Monitor.Us Blog. References Garsiel, Tali. How Browsers Work. Published 2010.02.13 and last updated 2011.06.04 at https://taligarsiel.com/Projects/howbrowserswork1.htm. This classic article details the workings of modern web browsers. It’s a little bit behind the times, but not enough to matter to a casual reader. The explanations are well-written and easy to understand (assuming some technical background on the reader’s part). It’s not short, but shortening it would eliminate some of the details that it sets out to explain. Post Tagged with About Warren Gaebel Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!
__label__pos
0.921989
Questions tagged [cryptography] Questions about the construction and analysis of protocols and algorithms for secure computation and communication (including authentication, integrity, and privacy aspects). Filter by Sorted by Tagged with -2 votes 0answers 27 views Solving subset-sum encryption (Princeton Creative Assignment) The question: https://www.cs.princeton.edu/courses/archive/spring03/cs226/assignments/password.html Input files: ftp://ftp.cs.princeton.edu/pub/cs226/password/ The question asks to use a symbol table ... -2 votes 1answer 31 views Manually Performing ECB & CBC I am learning about block ciphers and while I understand the concept of Electronic Codeblock mode and cipher block chaining, I could not find any relevant practical examples online. Can someone ... 0 votes 2answers 63 views Undetectable error correcting codes I have a 256 bit string (indistinguishable from random) which I wish to encode into a greater length string using an error correction code. The result must also be indistinguishable from random. It ... 0 votes 0answers 15 views Is this Correct, the existence of cryptography requires $UP \cap Co-UP \not\subseteq BPP$ Is this Correct, the existence of cryptography requires $UP \cap Co-UP \not\subseteq BPP$? Or does it require $UP \not\subseteq BPP$? 0 votes 0answers 31 views Books for learning Cryptocurrency Please suggest a good for mathematical section of studying cryptocurrency. -1 votes 0answers 21 views Are there any quantum resistant equivalents to RSA? Are there any quantum resistant equivalents to RSA? Basically are there any methods of cyrtopgraphy that hame the same user as RSA but are resitant to quantum computers. 0 votes 0answers 34 views Asking for your help with LFSR, linear automaton I am a software developer new to the site. I am currently learning about LFSR (linear feedback shift register). Every day I solve a question which is given to me, but today I am lost. I can not solve ... 1 vote 1answer 26 views One-way function is not injective when it is in NP Let us $\Sigma = \{0,1\}$ and $f: \Sigma^* \rightarrow \Sigma^* \in FP$ for which is valid that $\exists k: \forall x \in \Sigma^* : \lvert x \rvert ^ {1/k} \leq \lvert f(x) \rvert \leq \lvert x \... 0 votes 1answer 22 views How to perform matrix multiplication in Mixing Columns step of AES? I am studying AES and trying to implement it. But I am having difficulty understanding the Mixing Column step. In that step we have to perform matrix multiplication between the state matrix and ... 1 vote 0answers 27 views Accessible CS Math Job [closed] I am a CS undergrad and a huge enthusiast of pure math. I have been doing competitive programming and proofs for a while. My ambition is to become an academic in theoretical computer science. The ... 1 vote 1answer 35 views Underlying codes for Niederreiter cryptosystems Niederreiter cryptosystem is usually described by a parity check matrix $H$ over $\mathbb{F}_{2^n}$. The minimum distance $d$ is given by $d := min\lbrace k \text{ such that there are $k$ linearly ... 1 vote 2answers 59 views Decryption of RSA How can we decrypt an RSA message if we only have the public key? For example, Message: 21556870,12228498 Public Key: ... 1 vote 0answers 17 views Converting Smart Contracts into Arithmetic Circuits [closed] Are there any existing programs available to convert smart contracts into Arithmetic Circuits with which I can implement and experiment with Secure Multiparty Computation (MPC) protocols? 1 vote 0answers 56 views Absolute limit in Huffman encoding I have computed the average code length using the Huffman encoding. Now, I am asked to compare the average code length with Absolute limit but I ain't sure about the absolute limit here is :( Could ... 0 votes 0answers 16 views Extracting text from a hidden image Say i have cropped an image and have hidden some part of it, like shown below Is it possible for anyone to download this image and able to decipher what the hidden text is? 0 votes 0answers 29 views Digitally signing a digital signature I have the following question in a list of security exercises. What it can be the sense of digitally signing a digital signature? Sincerely I can't think of anything. Someone could answer? 0 votes 0answers 17 views Abusing ElGamal in order to attack a known encrypted text I saw a very interesting question regarding Elgamal cryptosystem that I don't know its answer. It is really interesting and I would be very happy if you could elaborate on it and explain the tricky ... 0 votes 0answers 22 views learning the private key by reusing same random variable k in ELgamal i wonder: if for some reason, someone, say alice, sends unencrypted messages to bob and signs it using elgamal signature, can oscar,the adversary, gain knowledge of the private key if alice reused the ... 1 vote 0answers 13 views equivalent sub-keys in DES encryption i am trying to understand the DES cryptosystem and was wondering: what would've happened if all of the sub-keys were equal? does it reduce the security? can we actually find the keys if we know all ... 2 votes 1answer 30 views Period of sum of two periodic sequences I was wondering, what is to be the shortest possible key using Vigenere encryption, if a text is ciphered one time with a key of length $i$ using Vigenere and second time with a key of length $j$ ... 2 votes 1answer 35 views Determining key from plaintext and ciphertext (Vigenère) I was wondering why mathematically we can know the key in the Vigenère cipher if we know in advance a message and its encryption. 27 votes 5answers 50k views Can a public key be used to decrypt a message encrypted by the corresponding private key? From what I have seen about usage of a pair of public and private keys, the public key is used for encrypting a message, and the private key is used for decrypting the encrypted message. If a message ... 0 votes 0answers 34 views Is 'the bombe' technically a computer?what is technically meant to be a computer? Some say that 'the bombe' created by Alan Turing is technically not a computer despite decrypting the codes. Why is it so? Is 'the bombe' technically a computer? First of all what is technically ... 3 votes 1answer 24 views Solve SUBSET SUM for Reciprocals of Primes Let $p_1, ..., p_n$ distinct prime numbers with $P = \prod_{i=1}^{n}{p_i}$ and $A=(a_1, ..., a_n)$ with $a_i = P/p_i$. Problem Show the SUBSET SUM problem $(A, \alpha)$ can be solved in polynomial (... 2 votes 2answers 135 views If current time in milliseconds is considered good enough random seed for a pseudorandom number generator, why not just use that time directly? I was reading about pseudorandom number generators and how they need a seed and how that seed is usually current system time in milliseconds. One of the most common algorithms is the linear ... 0 votes 1answer 17 views Cryptography: Play Fair cipher In our university, we were taught that since all 26 letters cant be placed together in a $5*5$ square matrix we must put $i/j$ together like this represented in the below diagram. Here our key is $... 0 votes 1answer 46 views Diffie-Hellman and its disadvantage with large primes I was reading our university slide on the Diffie-Hellman where it is mentioned that one of the disadvantages of D-H is that For large prime, $p-1$ is an even number so, $\mathbb{Z}^*_p$ will have a ... 116 votes 6answers 11k views Why hasn't there been an encryption algorithm that is based on the known NP-Hard problems? Most of today's encryption, such as the RSA, relies on the integer factorization, which is not believed to be a NP-hard problem, but it belongs to BQP, which makes it vulnerable to quantum computers. ... 0 votes 2answers 45 views What are the elements of the modular ring mod 7? [closed] Are the elements of a modular ring simply the set of all the numbers from 1 to p−1? in this case p−1 = 6 ? I asked this on the math stack exchange https://math.stackexchange.com/q/3375667 and was ... 1 vote 1answer 26 views Weaknesses arising from using same key in both channel directions I came across the following question: Which of the following risk may arise, when same key is used to encrypt both directions of a communication channel, that are not present if using different keys ... 3 votes 1answer 137 views Understanding Incentive Compatibility of pooled Bitcoin Mining paper I'm trying to understand the paper Incentive Compatibility of Bitcoin Mining Pool Reward Functions (Schrijvers, Bonneau, Doneh and Roughgarden, in Financial Cryptography and Data Security – ... 5 votes 1answer 69 views Perfect Probabilistic Encryption still requires key length about as long as message Let $(E,D)$ be a probabilistic encryption scheme with $n$-length keys (given a key $k$, we denote the corresponding encryption function by $E_k$) and $n+10$-length messages. Then, show that there ... 1 vote 2answers 51 views RSA cryptosystem: Encrypting a signed message I've started to read the section about the RSA cryptosystem in CLRS (page 958) and I don't understand the way it describes how to encrypt a signed message. If Bob wants to send a message $M$ to Alice:... 4 votes 3answers 2k views Is time complexity more important than space complexity? I've noticed quite a few cryptographic algorithms speak mainly of the time complexity of an algorithm. For example, with a hashing function h, find x given y = h(x). We normally speak on how long it ... 1 vote 0answers 70 views A tree-like data structure with rights delegation for distributed computing Every actor can create a root node and delegate a right to add a child node. Every node contains name of its’ creator or who added it, and value S. Sum of all values S at the same level of the tree ... 3 votes 1answer 77 views Is F(F(s,x), x) necessarily a PRF if F is? Given a PRF - F, is $G_s(x)=F_{F_s(x)}(x)$ necessarily a PRF? First I thought how to tackle this problem. First I tried using a hybrid argument: $|\mathbb P[F_{F_s(x)}(x)] -\mathbb P[f(x)]|\leq |\... 1 vote 0answers 63 views Inverting a function [closed] I posted this question on mathoverflow two years ago but got no answer: Let $w = a_0 \cdot a_1 \cdots a_{n-1} $ be a word from $ \{0,1\}^n $, $|w| = n$ Let $m = \sum_{i=0}^{n-1}{ a_i \cdot 2 ^ {n-1-... 2 votes 2answers 71 views Why don't most websites I visit seem to use TLS with Ephemeral Diffie-Hellman? When I click "view certificate," under Public Key Algorithm it usually says "RSA Encryption" or "Elliptic Curve Public Key." I assume this is the algorithm used to agree on a premaster secret (... 1 vote 1answer 25 views Why does Feistel encryption algorithm encode half block every time? Why does Feistel encryption algorithm encode half block every time? What would happen if the entire block is encrypted in each step? 14 votes 4answers 643 views Is it possible to create a “Time Capsule” using encryption? I want to create a digital time capsule which will remain unreadable for some period of time and then become readable. I do not want to rely on any outside service to, for instance, keep the key ... 3 votes 1answer 59 views Anonymous Visibility Check in P2P networks I'm working on a problem for a P2P network for games. The problem is the following: Consider two opponents on a grid, each stores it's own position. Player 1 wants to know if it sees player 2. In ... 10 votes 2answers 825 views Can you prevent a man in the middle from reading the message? I have heard about all these Man-In-The-Middle Attack preventions and I am wondering, how this can possibly work if the man in the middle only listens to your stream and does not want to change the ... 5 votes 2answers 508 views What is the complete version of the paper: “How to Generate and Exchange Secrets (extended abstract)” by Andrew Yao? I've found numerous places that claim that the paper "How to Generate and Exchange Secrets" by Andrew Yao introduces garbled circuits as a solution to the secure multiparty computation problem. ... -1 votes 2answers 52 views Finding small-length RSA private key I was wondering if there is an algorithm to derive $p$ and $q$ or is it simply trial and error? Consider the following RSA crypto-system with public key $(437,13)$. Since the numbers are so small, it ... 1 vote 1answer 95 views fully homomorphic encryption with information-theoretic security? An encryption algorithm with information-theoretic security is one which even with infinite amount of computation cannot be broken. That is, given only the ciphertext, no amount of computation can ... 1 vote 0answers 46 views Is it possible to build a secure PRG from two functions one of them being a PRG? Having two deterministic functions $G_1, G_2 : \left\{0,1\right\}^\lambda \rightarrow \left\{0,1\right\}^{\lambda+l}$, at least one of which is a secure PRG. Being $\alpha$ a constant, is it possible ... 1 vote 1answer 203 views Is the bitwise-xor of a Uniform bit string and a non_uniform bit string Uniform? Having two bit strings $x,y \in \left\{0,1\right\}^n$, where $x$ is selected following a uniform distribution but $y$ is not. Is $z = x \oplus y$ uniform? 1 vote 1answer 41 views DES encryption - what happens if all sub-keys are the same? What will happen if we change the DES encryrption algorithm to use the same key in each round ? How will it affect the encryption - will it only make the encryption same as decryption or are the more ... 1 vote 1answer 112 views Prove hash family is 3-wise independent Let $q$ be a prime number and let $\mathbb{Z}_q = \left\{1,\dots,q-1\right\}$; I need to prove that the family $\mathcal{H} = \left\{h_s \colon \mathbb{Z}_q \rightarrow \mathbb{Z}_q\right\}_{s \in \... 1 vote 1answer 48 views Can I decrypt an RSA algorithm using only the message I am wanting to decrypt, an “e” value and a “n” value? I have a ciphertext $c$ encrypted with RSA algorithm that needs to decrypt. I have the public key $(n,e)$. Is it possible to decrypt with this amount of information? 1 2 3 4 5
__label__pos
0.98295
Orkut Application Platform On-Site Application Developer Guide: Messages Contents Sending Messages Your application can send a message to any of the user's friend using the API. To do that, you must call the requestSendMessage function of the opensocial object: var friendId = /* friend's OpenSocial ID here */; var params = {}; params[opensocial.Message.Field.TITLE] = 'Message subject here'; params[opensocial.Message.Field.BODY] = 'Message body here.'; opensocial.requestSendMessage(friendId, opensocial.newMessage(params), callback); The message will be delivered by email to the indicated friend. You can use HTML on the message body but, as is the case with activities, your code is subject to sanitization and rewriting. Some important points to be aware of are: • Confirmation dialog: As was the case with posting activities, sending messages will also trigger a confirmation dialog. See the figure below for an example of how it looks. • HTML rewriting: As is the case with activities, the HTML that gets sent in messages is also subject to sanitization and rewriting, so not all tags and attributes will be honored. Always test your messages to see that they are being correctly delivered. • Limits on messages: In order to prevent spam, Orkut imposes a limit of messages per day that a particular application can send on behalf of a particular user. As of the time of this writing, the current limit is 10 messages. Note: Orkut does not currently support sending messages to groups, such as for example all the viewer's friends. You have to send messages one by one. Message Confirmation Screen. The user can see the message proposed by the application and can add a text of his own. A Complete Example As you can see in the code above, you need the friend's OpenSocialID to send a message, so that ID needs to come from a previous request. The following example shows the complete process: it requests and displays a list of friends for the user to choose from, and sends a message to the friend chosen by the user: Select the friend to send the message to: <div id='friendlist'> </div> <span style='color: grey' id='status'></span> <script type='text/javascript'> function setStatus(s) { debug.say(s); document.getElementById('status').innerHTML = s; } function init() { // load friends var req = opensocial.newDataRequest(); var opt_params = {}; opt_params[opensocial.DataRequest.PeopleRequestFields.MAX] = 20; var idspec = opensocial.newIdSpec({ "userId":"VIEWER", "groupId":"FRIENDS" }); req.add(req.newFetchPeopleRequest(idspec, opt_params), 'friends'); req.send(onLoadFriends); setStatus("Loading friends... Please wait."); } function onLoadFriends(data) { if (data.hadError()) { setStatus("*** error: " + data.getErrorMessage()); return; } var friends = data.get('friends').getData().asArray(); var html = [ "<ul>" ]; for (var i in friends) { html.push("<li><a href='javascript:void(0)' "); html.push("onclick='window.sendToFriend(\""); html.push(friends[i].getId()); html.push("\")'>Send to "); html.push(friends[i].getDisplayName()); html.push("</a></li>"); } html.push("</ul>"); document.getElementById('friendlist').innerHTML = html.join(''); setStatus("Choose a friend to send a message to."); } // we have to put this function in the window because we are using it as a literal // event handler on the link (sure, there are more elegant ways to do this, but // since this is just an example, we're not focusing on Javascript beauty) window.sendToFriend = function(opensocialID) { sendTheMessage(opensocialID, "Test message", "Hello! This is a <b>test</b>! " + "Isn't <a href='http://www.orkut.com'>Orkut</a> <em>cool?</em>"); }; function sendTheMessage(to, title, body) { var params = {}; params[opensocial.Message.Field.TITLE] = title; params[opensocial.Message.Field.BODY] = body; opensocial.requestSendMessage(to, opensocial.newMessage(params), onMessageSent); setStatus("Message request sent. Waiting for confirmation..."); } function onMessageSent(data) { setStatus(data.hadError() ? "*** Error: " + data.getErrorMessage() : "Success."); } gadgets.util.registerOnLoadHandler(init); </script> Authentication required You need to be signed in with Google+ to do that. Signing you in... Google Developers needs your permission to do that.
__label__pos
0.836299
50% Off Our Lifetime Plan! Ends Soon.Learn more What is Pi? Updated on Dec 5, 2012 3.2 based on 5 ratings In mathematics, pi is more than just a delicious treat! It's the world's most famous irrational number, used to describe the ratio of a circle's circumference to its diameter. Read about this constant, and then take a look at how long the number is. Fourth Grade Geometry Worksheets: What is Pi? Comments
__label__pos
1
Login | Register    LinkedIn Google+ Twitter RSS Feed Download our iPhone app TODAY'S HEADLINES  |   ARTICLE ARCHIVE  |   FORUMS  |   TIP BANK Browse DevX Sign up for e-mail newsletters from DevX advertisement   Generating XML from ADO Recordsets The process of communicating between XML and databases is not ideal but there has been progress on that front with both Oracle and Microsoft providing updates that address key XML functionality needs. advertisement n an ideal world, the process of communicating between XML and the database world would be completely seamless and transparent—you point an XML DOM at a database, say Update or Request, and you're done. In this all-too-messy real world where the "standards" change from day to day and no two database companies would dare agree on a common format for fear that it may give their competitor a marginal advantage, this utopian vision is not quite here yet. However, that's not to say that there hasn't been progress in the XML to RDBMS (relational database management system) front. At the XML One Fall '99 conference in Santa Clara, Oracle debuted their superb Oracle 8i update, which provided a number of XML features. Microsoft's SQL Server 7 also incorporated a number of basic XML features into their Active Data Objects (ADO) 2.5 engine, which also provides some fixes for basic limitations in the ADO 2.1 provider. The XML format that the Microsoft ADO 2.5 (and ADO 2.1) engine produces is written in Microsoft XML-Data Schema format, which is also known as the XML Reduced Data Schema, or simply Reduced Data. The XML Reduced Data Schema works by specifying the datatypes and similar characteristics of the schema (default values, primary key information, and so forth) from the database and placing this information into the first half of the document. A similar process then extracts the data to place in XML row nodes. By separating these, you can cut down on the amount of spurious information that the XML document contains while still being able to get relevant datatype information. For one recent project, I needed to get information about role-playing characters for an online Web game. I stored the characters table in an Access database (characters.mdb), then used ADO 2.5 to retrieve a general recordset query (see Listing 1). The conversion from database to XML is done with the function rs.save(). Save basically takes the recordset and converts it into a stream format. If the first argument given is a string containing a URL, then the function save() outputs the data into its intrinsic binary format. However, by passing the adPersistXML flag, the stream is converted into an XML stream instead. In ADO 2.1, you were forced to output the stream to a file, which proved a significant performance hit. The stream had to be converted into a Unicode formatted text string, spooled out to the hard drive through a standard file interface, then if you needed the resultant XML, the file had to be reloaded and reparsed back into XML stream. ADO 2.5 lets you write the result directly out to an XML DOM Document, bypassing both conversion and parsing, and making for much smoother code. You can see the output for the data provided in the characters.mdb file in Listing 2. A Trip Through Namespace There are four explicit namespaces defined in the Microsoft XML Reduced Data Schema: • The XML Reduced Data Schema namespace "s", which contains the specific information about the structure of the internal XML format used by each rowset. (xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" ) • The Data Type Specification namespace "dt", which points to Microsoft's internal data typing mechanism.(xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" ) • The Rowset namespace "rs", which contains information about the database implementation for each row. (xmlns:rs="urn:schemas-microsoft-com:rowset" ) • The RowsetSchema namespace "z", which contains the actual data from the database. This utilizes the schema defined by the "s" namespace. (xmlns:z="#RowsetSchema") This set of namespaces actually provides some interesting usage information. Both the Microsoft XML Reduced Data Schema namespace and the Data Type Specification namespace actually point to GUIDs (globally unique identifiers), which uniquely identify them to the system. However, these GUIDs (actually UUIDs—universally unique IDs—which is the preferred term) don't explicitly point to any COM objects, but instead are used as identifying tags by the XML parser to map internally to the proper objects. By doing this, the parser treats the namespace UUID as a handle, or a pointer to a pointer, making it easier to upgrade the respective parsers in the future. The Rowset namespace, on the other hand, points to a more traditional flavor of namespace: a URN (Uniform Resource Name) that the XML parser identifies with the Reduced Data rowset. The URN uses notation that may be less familiar if you are used to working with the URL notation of the Web, but urn:schemas-microsoft-com:rowset is more or less the same as the URL http://schemas.microsoft.com/rowset. The final namespace, RowsetSchema or "z" namespace, retrieves the schema identified by the ID "RowsetSchema" earlier in the document. This schema is local, created on the fly by the ADO package. Because a typical database recordset will change based upon the SQL source string, this schema will change as well. Thus, you could control the way that the schema is generated by being more specific with your query. For example, this code will output just these four elements into the schema, rather than the entire table: Rs.open "SELECT characterName, vocation, species, gender FROM Characters" The Microsoft XML Reduced Data Schema format intrinsically associates datatypes to each of the attributes. Thus, if you make a request using the getAttribute method or nodeTypedValue property of the XML DOM, the value returned will be of the correct datatype. For example, to retrieve the strength attribute of the Elf Sheinara, you'd use these XML calls: Set strengthNode=rdataXML.selectSingleNode(_ "//z:row[@characterName='Sheinara']/@Strength") Strength=strengthNode.nodeTypedValue ' or Set characterNode=rdataXML.selectSingleNode(_ "//z:row[@characterName='Sheinara']") Strength=characterNode.getAttribute("Strength") In either case, the value is returned as 0.96, which is the four digit approximation (with trailing zeroes not explicitly displayed) of the internal format "0.95999999999999996". Put another way, these two elements return a floating point number, not a string. Converting to an Element Schema Still, the z:row format has some serious limitations to it. In general, while attributes process a little faster than elements (they have their own specialized interface, and as such can be optimized for performance, whereas the more general elements cannot), they may not necessarily be in the most preferred format for your own use. For example, suppose that you wanted to turn the attributes into a more traditional element-based XML structure that looks like Listing 3. You can perform the conversion in a specific case (for instance, with known container and object names, such as characters and character) with an XSL script as shown in Listing 4. One of the primary problems with such a script is that you would need to explicitly change the source code for the script every time you wanted to run it to take into account different object or group names. While you can parameterize the output, this undertaking is not trivial. However, this is one of those places where the DOM is actually the more efficient way of handling the problem. The convertToElementTree function actually emulates the functionality of the XSL script, but because you can pass in the relevant group and object names as parameters into a function, it is both more concise and easier to manage (see Listing 5). It takes as an argument a Reduced Data XML structure, the name of the group, and the name of each object in the group, then returns a DOM object in the element-only format. To generate the sample output shown in Listing 3, you would need to call the convertToElementTree function as follows: Dim charactersDoc as DOMDocument Set charactersDoc=ConvertToElementTree(_ rdataXML,"characters","character") Towards a Schema Standard? I should note that the beta version of Windows SQL Server 7.5 actually generates such a format by default (although you may need to change the containing elements), at least when called from a Web address. Currently, only Microsoft and its partners support the XML Reduced Data Schema format, and in all likelihood it will end up being principally an internal data format, especially when the W3C XML Schema becomes a formal recommendation. Microsoft originally designed the XML Reduced Data Schema as a proposal for identifying and handling data in the XML space, and it was an instrumental document for creating the W3C XML Schema. However, because the latter schema is an open standard, it has been tweaked and changed to better reflect the needs and requirements of other database vendors such as Oracle or IBM. The Schema standard proposal itself is divided up into two "Working Draft" documents: The W3C XML Schema Structures document (www.w3.org/TR/xmlschema-1/) and the W3C XML Schema Data-type document (www.w3.org/TR/xmlschema-2/). The first document specifies the Schema architecture, and is structurally similar to the Microsoft XML Reduced Data Schema discussed in this article—the primary differences have to do with the addition of archetypes to allow for the creation of custom datatypes. The second document, on the other hand, focuses on the datatypes themselves, including ways of creating and inheriting datatype specifications. While many of these are similar to the original Microsoft datatypes (such as datatypes for integers, real numbers, dates, and similar "primitive" types), the names or scoped ranges may have changed since the Microsoft XML Reduced Data Schema was first proposed. The W3C Schema specifications are currently considered Working Drafts (and should be entering Last Call shortly), although they will likely become Proposed Recommendations to the XML canon by early in 2000. For those unfamiliar with the terminology involved, a Working Draft is a document that is currently in development, much like a legislative bill in the United States Congress. Last Call indicates that the participating members of the W3C can include minor amendments or clarifications to the document, but cannot substantively change the document. If it passes when the W3C members vote on the draft, it is considered a Proposed Recommendation and is immutable at that point; otherwise, it goes back to being a Working Draft. Finally, the Proposed Recommendation becomes a formal Recommendation after a final vote, pending evidence that would prove that the standard is unworkable. A Recommendation is the closest thing to a law in the W3C. The document is considered stable and can be coded against, and any changes that need to be made to it would instead be considered part of the next version. Recently, the XPath and XSL-Transform documents became formal Recommendations; the XHTML and XML DOM 2 specifications are now Proposed Recommendations; and MathML, XLink, and SVG are all in Last Call. Once the Schema specifications become Recommendations, then all of the critical pieces for XML development will be stable, which should have a profound impact on companies wishing to deploy XML solutions. As a final note, the W3C also includes the Note designation for a document. This is a document that may present one member's suggestions for implementing a certain technology, or may simply be a useful document for clarifying the needs and requirements for an XML technology. The Microsoft XML Reduced Data Schema proposal is just such a Note, and can be retrieved from the W3 site as well (www.w3.org/TR/1998/NOTE-XML-data-0105/). Check www.w3.org/TR for the status of all such Notes, Working Drafts, and Recommendations. The XML Reduced Data Schema notation will probably be with Microsoft for a while, even after the W3C XML Schema documents become formal Recommendations. They form the underpinnings of much of the current data handling strategy for inter-Microsoft applications, and will probably have a big part to play in the upcoming Biztalk Server. With XSL or DOM code you can easily transform the data into other formats, and can even (with more effort than I can show in this article) modify the internal schema data representations into a variety of different schema configurations. This ability to transform both the data and the underlying schema of that data will figure large in your XML work in the future. Click here to download the code and the sample Access database.     Kurt Cagle is a writer and developer specializing in XML, XSLT and distributed Internet application technologies, and is author of Sybex's XML Developers' Handbook and coauthor of the upcoming Wrox Professional XSLT book. He lives in Olympia, Washington with his wife and two daughters, and welcomes any comments, questions or suggestions for articles. He can be reached at [email protected] Comment and Contribute           (Maximum characters: 1200). You have 1200 characters left.     Sitemap Thanks for your registration, follow us on our social networks to keep up-to-date
__label__pos
0.578657
Marble GeoDataLineStyle.cpp 1 // SPDX-License-Identifier: LGPL-2.1-or-later 2 // 3 // SPDX-FileCopyrightText: 2008 Patrick Spendrin <[email protected]> 4 // 5  6  7 #include "GeoDataLineStyle.h" 8  9 #include "GeoDataTypes.h" 10  11 #include <QDataStream> 12  13 namespace Marble 14 { 15  16 class GeoDataLineStylePrivate 17 { 18  public: 19  GeoDataLineStylePrivate() 20  : m_width( 1.0 ), m_physicalWidth( 0.0 ), 21  m_capStyle( Qt::FlatCap ), m_penStyle( Qt::SolidLine ), 22  m_cosmeticOutline( false ), m_background( false ) 23  { 24  } 25  26  /// The current width of the line 27  float m_width; 28  /// The current real width of the line 29  float m_physicalWidth; 30  Qt::PenCapStyle m_capStyle; 31  Qt::PenStyle m_penStyle; 32  bool m_cosmeticOutline; 33  bool m_background; 34  QVector< qreal > m_pattern; 35 }; 36  38  : d (new GeoDataLineStylePrivate ) 39 { 40 } 41  43  : GeoDataColorStyle( other ), d (new GeoDataLineStylePrivate( *other.d ) ) 44 { 45 } 46  48  : d ( new GeoDataLineStylePrivate ) 49 { 50  setColor( color ); 51 } 52  53 GeoDataLineStyle::~GeoDataLineStyle() 54 { 55  delete d; 56 } 57  59 { 61  *d = *other.d; 62  return *this; 63 } 64  65 bool GeoDataLineStyle::operator==( const GeoDataLineStyle &other ) const 66 { 67  if ( GeoDataColorStyle::operator!=( other ) ) { 68  return false; 69  } 70  71  return d->m_width == other.d->m_width && 72  d->m_physicalWidth == other.d->m_physicalWidth && 73  d->m_capStyle == other.d->m_capStyle && 74  d->m_penStyle == other.d->m_penStyle && 75  d->m_background == other.d->m_background && 76  d->m_pattern == other.d->m_pattern; 77 } 78  79 bool GeoDataLineStyle::operator!=( const GeoDataLineStyle &other ) const 80 { 81  return !this->operator==( other ); 82 } 83  84 const char* GeoDataLineStyle::nodeType() const 85 { 86  return GeoDataTypes::GeoDataLineStyleType; 87 } 88  90 { 91  d->m_width = width; 92 } 93  95 { 96  return d->m_width; 97 } 98  100 { 101  return d->m_physicalWidth; 102 } 103  105 { 106  d->m_physicalWidth = realWidth; 107 } 108  110 { 111  return d->m_cosmeticOutline; 112 } 113  115 { 116  d->m_cosmeticOutline = enabled; 117 } 118  120 { 121  return d->m_capStyle; 122 } 123  125 { 126  d->m_capStyle = style; 127 } 128  130 { 131  return d->m_penStyle; 132 } 133  135 { 136  d->m_penStyle = style; 137 } 138  140 { 141  return d->m_background; 142 } 143  145 { 146  d->m_background = background; 147 } 148  150 { 151  return d->m_pattern; 152 } 153  155 { 156  d->m_pattern = pattern; 157 } 158  159 void GeoDataLineStyle::pack( QDataStream& stream ) const 160 { 161  GeoDataColorStyle::pack( stream ); 162  163  stream << d->m_width; 164  stream << d->m_physicalWidth; 165  stream << (int)d->m_penStyle; 166  stream << (int)d->m_capStyle; 167  stream << d->m_background; 168 } 169  171 { 172  GeoDataColorStyle::unpack( stream ); 173  174  stream >> d->m_width; 175  stream >> d->m_physicalWidth; 176  int style; 177  stream >> style; 178  d->m_penStyle = ( Qt::PenStyle ) style; 179  stream >> style; 180  d->m_capStyle = ( Qt::PenCapStyle ) style; 181  stream >> d->m_background; 182 } 183  184 } Qt::PenCapStyle capStyle() const Return the current pen cap style. specifies the style how lines are drawn QVector< qreal > dashPattern() const Return the current dash pattern. SolidLine void setPhysicalWidth(float realWidth) Set the physical width of the line (in meters) Qt::PenStyle penStyle() const Return the current pen cap style. void setBackground(bool background) Set whether to draw the solid background. Binds a QML item to a specific geodetic location in screen coordinates. void setColor(const QColor &value) Set a new color. void pack(QDataStream &stream) const override Serialize the style to a stream. GeoDataLineStyle & operator=(const GeoDataLineStyle &other) assignment operator float width() const Return the current width of the line. const char * nodeType() const override Provides type information for downcasting a GeoData. an abstract base class for various style classes void setPenStyle(Qt::PenStyle style) Set pen cap style. void setCapStyle(Qt::PenCapStyle style) Set pen cap style. bool cosmeticOutline() const Return whether the line has a cosmetic 1 pixel outline. QColor color() const Return the color component. void pack(QDataStream &stream) const override Serialize the style to a stream. void setDashPattern(const QVector< qreal > &pattern) Sets the dash pattern. void setWidth(float width) Set the width of the line. void unpack(QDataStream &stream) override Unserialize the style from a stream. float physicalWidth() const Return the current physical width of the line. bool background() const Return true if background get drawn. GeoDataColorStyle & operator=(const GeoDataColorStyle &other) assignment operator void unpack(QDataStream &stream) override Unserialize the style from a stream. GeoDataLineStyle() Construct a new GeoDataLineStyle. void setCosmeticOutline(bool enabled) Set whether the line has a cosmetic 1 pixel outline. This file is part of the KDE documentation. Documentation copyright © 1996-2022 The KDE developers. Generated on Tue Jan 25 2022 23:11:36 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
__label__pos
0.932087
Edit on GitHub The Node.js Event Loop, Timers, and process.nextTick() What is the Event Loop? The event loop is what allows Node.js to perform non-blocking I/O operations — despite the fact that JavaScript is single-threaded — by offloading operations to the system kernel whenever possible. Since most modern kernels are multi-threaded, they can handle multiple operations executing in the background. When one of these operations completes, the kernel tells Node.js so that the appropriate callback may be added to the poll queue to eventually be executed. We'll explain this in further detail later in this topic. Event Loop Explained When Node.js starts, it initializes the event loop, processes the provided input script (or drops into the REPL, which is not covered in this document) which may make async API calls, schedule timers, or call process.nextTick(), then begins processing the event loop. The following diagram shows a simplified overview of the event loop's order of operations. ┌───────────────────────────┐ ┌─>│ timers │ │ └─────────────┬─────────────┘ │ ┌─────────────┴─────────────┐ │ │ pending callbacks │ │ └─────────────┬─────────────┘ │ ┌─────────────┴─────────────┐ │ │ idle, prepare │ │ └─────────────┬─────────────┘ ┌───────────────┐ │ ┌─────────────┴─────────────┐ │ incoming: │ │ │ poll │<─────┤ connections, │ │ └─────────────┬─────────────┘ │ data, etc. │ │ ┌─────────────┴─────────────┐ └───────────────┘ │ │ check │ │ └─────────────┬─────────────┘ │ ┌─────────────┴─────────────┐ └──┤ close callbacks │ └───────────────────────────┘ note: each box will be referred to as a "phase" of the event loop. Each phase has a FIFO queue of callbacks to execute. While each phase is special in its own way, generally, when the event loop enters a given phase, it will perform any operations specific to that phase, then execute callbacks in that phase's queue until the queue has been exhausted or the maximum number of callbacks has executed. When the queue has been exhausted or the callback limit is reached, the event loop will move to the next phase, and so on. Since any of these operations may schedule more operations and new events processed in the poll phase are queued by the kernel, poll events can be queued while polling events are being processed. As a result, long running callbacks can allow the poll phase to run much longer than a timer's threshold. See the timers and poll sections for more details. NOTE: There is a slight discrepancy between the Windows and the Unix/Linux implementation, but that's not important for this demonstration. The most important parts are here. There are actually seven or eight steps, but the ones we care about — ones that Node.js actually uses - are those above. Phases Overview • timers: this phase executes callbacks scheduled by setTimeout() and setInterval(). • pending callbacks: executes I/O callbacks deferred to the next loop iteration. • idle, prepare: only used internally. • poll: retrieve new I/O events; execute I/O related callbacks (almost all with the exception of close callbacks, the ones scheduled by timers, and setImmediate()); node will block here when appropriate. • check: setImmediate() callbacks are invoked here. • close callbacks: some close callbacks, e.g. socket.on('close', ...). Between each run of the event loop, Node.js checks if it is waiting for any asynchronous I/O or timers and shuts down cleanly if there are not any. Phases in Detail timers A timer specifies the threshold after which a provided callback may be executed rather than the exact time a person wants it to be executed. Timers callbacks will run as early as they can be scheduled after the specified amount of time has passed; however, Operating System scheduling or the running of other callbacks may delay them. Note: Technically, the poll phase controls when timers are executed. For example, say you schedule a timeout to execute after a 100 ms threshold, then your script starts asynchronously reading a file which takes 95 ms: const fs = require('fs'); function someAsyncOperation(callback) { // Assume this takes 95ms to complete fs.readFile('/path/to/file', callback); } const timeoutScheduled = Date.now(); setTimeout(() => { const delay = Date.now() - timeoutScheduled; console.log(`${delay}ms have passed since I was scheduled`); }, 100); // do someAsyncOperation which takes 95 ms to complete someAsyncOperation(() => { const startCallback = Date.now(); // do something that will take 10ms... while (Date.now() - startCallback < 10) { // do nothing } }); When the event loop enters the poll phase, it has an empty queue (fs.readFile() has not completed), so it will wait for the number of ms remaining until the soonest timer's threshold is reached. While it is waiting 95 ms pass, fs.readFile() finishes reading the file and its callback which takes 10 ms to complete is added to the poll queue and executed. When the callback finishes, there are no more callbacks in the queue, so the event loop will see that the threshold of the soonest timer has been reached then wrap back to the timers phase to execute the timer's callback. In this example, you will see that the total delay between the timer being scheduled and its callback being executed will be 105ms. Note: To prevent the poll phase from starving the event loop, libuv (the C library that implements the Node.js event loop and all of the asynchronous behaviors of the platform) also has a hard maximum (system dependent) before it stops polling for more events. pending callbacks This phase executes callbacks for some system operations such as types of TCP errors. For example if a TCP socket receives ECONNREFUSED when attempting to connect, some *nix systems want to wait to report the error. This will be queued to execute in the pending callbacks phase. poll The poll phase has two main functions: 1. Calculating how long it should block and poll for I/O, then 2. Processing events in the poll queue. When the event loop enters the poll phase and there are no timers scheduled, one of two things will happen: • If the poll queue is not empty, the event loop will iterate through its queue of callbacks executing them synchronously until either the queue has been exhausted, or the system-dependent hard limit is reached. • If the poll queue is empty, one of two more things will happen: • If scripts have been scheduled by setImmediate(), the event loop will end the poll phase and continue to the check phase to execute those scheduled scripts. • If scripts have not been scheduled by setImmediate(), the event loop will wait for callbacks to be added to the queue, then execute them immediately. Once the poll queue is empty the event loop will check for timers whose time thresholds have been reached. If one or more timers are ready, the event loop will wrap back to the timers phase to execute those timers' callbacks. check This phase allows a person to execute callbacks immediately after the poll phase has completed. If the poll phase becomes idle and scripts have been queued with setImmediate(), the event loop may continue to the check phase rather than waiting. setImmediate() is actually a special timer that runs in a separate phase of the event loop. It uses a libuv API that schedules callbacks to execute after the poll phase has completed. Generally, as the code is executed, the event loop will eventually hit the poll phase where it will wait for an incoming connection, request, etc. However, if a callback has been scheduled with setImmediate() and the poll phase becomes idle, it will end and continue to the check phase rather than waiting for poll events. close callbacks If a socket or handle is closed abruptly (e.g. socket.destroy()), the 'close' event will be emitted in this phase. Otherwise it will be emitted via process.nextTick(). setImmediate() vs setTimeout() setImmediate() and setTimeout() are similar, but behave in different ways depending on when they are called. • setImmediate() is designed to execute a script once the current poll phase completes. • setTimeout() schedules a script to be run after a minimum threshold in ms has elapsed. The order in which the timers are executed will vary depending on the context in which they are called. If both are called from within the main module, then timing will be bound by the performance of the process (which can be impacted by other applications running on the machine). For example, if we run the following script which is not within an I/O cycle (i.e. the main module), the order in which the two timers are executed is non-deterministic, as it is bound by the performance of the process: // timeout_vs_immediate.js setTimeout(() => { console.log('timeout'); }, 0); setImmediate(() => { console.log('immediate'); }); $ node timeout_vs_immediate.js timeout immediate $ node timeout_vs_immediate.js immediate timeout However, if you move the two calls within an I/O cycle, the immediate callback is always executed first: // timeout_vs_immediate.js const fs = require('fs'); fs.readFile(__filename, () => { setTimeout(() => { console.log('timeout'); }, 0); setImmediate(() => { console.log('immediate'); }); }); $ node timeout_vs_immediate.js immediate timeout $ node timeout_vs_immediate.js immediate timeout The main advantage to using setImmediate() over setTimeout() is setImmediate() will always be executed before any timers if scheduled within an I/O cycle, independently of how many timers are present. process.nextTick() Understanding process.nextTick() You may have noticed that process.nextTick() was not displayed in the diagram, even though it's a part of the asynchronous API. This is because process.nextTick() is not technically part of the event loop. Instead, the nextTickQueue will be processed after the current operation is completed, regardless of the current phase of the event loop. Here, an operation is defined as a transition from the underlying C/C++ handler, and handling the JavaScript that needs to be executed. Looking back at our diagram, any time you call process.nextTick() in a given phase, all callbacks passed to process.nextTick() will be resolved before the event loop continues. This can create some bad situations because it allows you to "starve" your I/O by making recursive process.nextTick() calls, which prevents the event loop from reaching the poll phase. Why would that be allowed? Why would something like this be included in Node.js? Part of it is a design philosophy where an API should always be asynchronous even where it doesn't have to be. Take this code snippet for example: function apiCall(arg, callback) { if (typeof arg !== 'string') return process.nextTick(callback, new TypeError('argument should be string')); } The snippet does an argument check and if it's not correct, it will pass the error to the callback. The API updated fairly recently to allow passing arguments to process.nextTick() allowing it to take any arguments passed after the callback to be propagated as the arguments to the callback so you don't have to nest functions. What we're doing is passing an error back to the user but only after we have allowed the rest of the user's code to execute. By using process.nextTick() we guarantee that apiCall() always runs its callback after the rest of the user's code and before the event loop is allowed to proceed. To achieve this, the JS call stack is allowed to unwind then immediately execute the provided callback which allows a person to make recursive calls to process.nextTick() without reaching a RangeError: Maximum call stack size exceeded from v8. This philosophy can lead to some potentially problematic situations. Take this snippet for example: let bar; // this has an asynchronous signature, but calls callback synchronously function someAsyncApiCall(callback) { callback(); } // the callback is called before `someAsyncApiCall` completes. someAsyncApiCall(() => { // since someAsyncApiCall has completed, bar hasn't been assigned any value console.log('bar', bar); // undefined }); bar = 1; The user defines someAsyncApiCall() to have an asynchronous signature, but it actually operates synchronously. When it is called, the callback provided to someAsyncApiCall() is called in the same phase of the event loop because someAsyncApiCall() doesn't actually do anything asynchronously. As a result, the callback tries to reference bar even though it may not have that variable in scope yet, because the script has not been able to run to completion. By placing the callback in a process.nextTick(), the script still has the ability to run to completion, allowing all the variables, functions, etc., to be initialized prior to the callback being called. It also has the advantage of not allowing the event loop to continue. It may be useful for the user to be alerted to an error before the event loop is allowed to continue. Here is the previous example using process.nextTick(): let bar; function someAsyncApiCall(callback) { process.nextTick(callback); } someAsyncApiCall(() => { console.log('bar', bar); // 1 }); bar = 1; Here's another real world example: const server = net.createServer(() => {}).listen(8080); server.on('listening', () => {}); When only a port is passed, the port is bound immediately. So, the 'listening' callback could be called immediately. The problem is that the .on('listening') callback will not have been set by that time. To get around this, the 'listening' event is queued in a nextTick() to allow the script to run to completion. This allows the user to set any event handlers they want. process.nextTick() vs setImmediate() We have two calls that are similar as far as users are concerned, but their names are confusing. • process.nextTick() fires immediately on the same phase • setImmediate() fires on the following iteration or 'tick' of the event loop In essence, the names should be swapped. process.nextTick() fires more immediately than setImmediate(), but this is an artifact of the past which is unlikely to change. Making this switch would break a large percentage of the packages on npm. Every day more new modules are being added, which means every day we wait, more potential breakages occur. While they are confusing, the names themselves won't change. We recommend developers use setImmediate() in all cases because it's easier to reason about. Why use process.nextTick()? There are two main reasons: 1. Allow users to handle errors, cleanup any then unneeded resources, or perhaps try the request again before the event loop continues. 2. At times it's necessary to allow a callback to run after the call stack has unwound but before the event loop continues. One example is to match the user's expectations. Simple example: const server = net.createServer(); server.on('connection', (conn) => { }); server.listen(8080); server.on('listening', () => { }); Say that listen() is run at the beginning of the event loop, but the listening callback is placed in a setImmediate(). Unless a hostname is passed, binding to the port will happen immediately. For the event loop to proceed, it must hit the poll phase, which means there is a non-zero chance that a connection could have been received allowing the connection event to be fired before the listening event. Another example is running a function constructor that was to, say, inherit from EventEmitter and it wanted to call an event within the constructor: const EventEmitter = require('events'); const util = require('util'); function MyEmitter() { EventEmitter.call(this); this.emit('event'); } util.inherits(MyEmitter, EventEmitter); const myEmitter = new MyEmitter(); myEmitter.on('event', () => { console.log('an event occurred!'); }); You can't emit an event from the constructor immediately because the script will not have processed to the point where the user assigns a callback to that event. So, within the constructor itself, you can use process.nextTick() to set a callback to emit the event after the constructor has finished, which provides the expected results: const EventEmitter = require('events'); const util = require('util'); function MyEmitter() { EventEmitter.call(this); // use nextTick to emit the event once a handler is assigned process.nextTick(() => { this.emit('event'); }); } util.inherits(MyEmitter, EventEmitter); const myEmitter = new MyEmitter(); myEmitter.on('event', () => { console.log('an event occurred!'); }); Zum Seitenanfang
__label__pos
0.851305
Swiftで行こう…第13回「トランプ5」(もちゃち) 2014-11-22 目次ページへ ーーーMavericks Xcode6.1ーーー 前回の問題の答えの一つです。 013-01 013-02 以下ソースコードです。 import UIKit class ViewController: UIViewController { @IBOutlet weak var lblCard: UILabel! @IBOutlet weak var lblCount: UILabel! override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. for i in 0 ..< cardCount { check += [false] var lbl = UILabel(frame: CGRectMake(0, 0, 50, 21)) lbl.center = CGPointMake(160 + 50 * CGFloat(i / 13), 50 + CGFloat(i % 13) * 30) lbl.textAlignment = NSTextAlignment.Center lbl.text = "⬛️" eachCard += [lbl] self.view.addSubview(lbl) } count = cardCount } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } let mark: [String] = ["♣️","♦️","♥️","♠️"] let number: [String] = ["A","2","3","4","5","6","7","8","9","T","J","Q","K"] let cardCount = 52 var check: [Bool] = [] var count = 0 var eachCard: [UILabel] = [] @IBAction func btnGoTouch(sender: AnyObject) { if count == 0 { for i in 0 ..< cardCount { check[i] = false eachCard[i].text = "⬛️" } var alert = UIAlertView() alert.title = "初期化" alert.message = "カードを配り直します" alert.addButtonWithTitle("OK") alert.show() count = cardCount } var card: Int = 0 var randInt = Int(arc4random_uniform(UInt32(count))) for i in 0 ... randInt { while check[card] { card++ } if i < randInt { card++ } } check[card] = true count-- lblCount.text = count.description if card == 52 { lblCard.text = "JK" } else { lblCard.text = mark[card / 13] + number[card % 13] } eachCard[card].text = lblCard.text } } このように大量のラベルを貼る時は、コードで生成してしまった方が楽です。 さて次回からポーカーに入ります。 問題です。5枚のカードを重複無く配って下さい。 目次ページへ Since1991 © Shimayugu All Rights Reserved. アップルおよびアップルのロゴは、米国および他の国々で登録されたApple Inc.の商標です。 湘南マッキントッシュユーザーズグループは、独立したユーザグループで、Apple Japan合同会社が権限を与えた団体、支援する団体、またはその他に承認する団体ではありません。
__label__pos
0.915055
Can iMessage messages be tracked? Can iMessage messages be tracked? Apple emphasized that because iMessages are encrypted, the company is not able to give police access to the content of conversations. How can you track someones iMessage? How to See Someone’s iMessages 1. Step 1: Find out the Apple ID of the iPhone you want to monitor. 2. Step 2: Get a subscription to mSpy. 3. Step 3: Check your email for instructions. 4. Step 4: Log in to your mSpy Control Panel. 5. Step 5: Enter the iCloud credentials. 6. Step 6: Wait a few hours. 7. Step 7: Log in whenever you like. Does Apple keep a record of iMessages? Apple may record and store some information related to your use of the Messages app and the iMessage service to operate and improve Apple’s products and services: Apple may store information about your use of iMessage in a way that doesn’t identify you. How can I see my daughters iMessages? Use iCloud Sync You can use the iCloud sync feature in iOS to receive text messages from your child’s iPhone. This means that you’ll have to know your child’s Apple ID and password. You would also have access to the device to allow messages to be synced to iCloud. Can iPhone messages be tracked? If someone installs spyware onto your device, then they can remotely access any of the data on your iPhone – including all of your text messages. How can I see my boyfriend’s text messages on my iPhone? If you want to see your boyfriend’s text messages with Minspy, you just have to follow these steps: Step 1: Create a Minspy account and get a subscription plan for iOS devices. Step 2: Verify the iCloud credentials of your boyfriend’s iPhone with Minspy. Step 3: Click on ‘Start’ and you are ready to monitor his iPhone. How can I see my son’s text messages iPhone? If you use iOS 12 or a more recent version, you can use Apple’s cloud message sync feature. By enabling iCloud syncing, you can access all data from your child’s device. Make sure to enable message synching so you can read messages from your child’s phone. Can I see iMessages on iCloud? Any messages currently stored in iCloud are now accessible on your device, and any new messages you receive on your device will be stored in iCloud. To see messages stored in iCloud, open the Messages app. How can I monitor my child’s iPhone without them knowing? How to Track the Phone Without Them Knowing? 1. Google Maps. Google Maps allows you to sneakily see the other mobile’s location. 2. Secretly Track Your Kids’ Phones Using “Find My Friends” Though# Find My friends app is not for spying purposes, it can be used in that sense. 3. Track Your Daughter’s Phone Using SecureTeen. Can my husband read my iMessages? Yes, possibly. Many certainly try. Remember, iMessage lets you receive text messages from any email address that is registered with your Apple ID. Your spouse’s iPhone could very well be one of those devices. How can I read my iPhone messages without them knowing? Tap and long press on the message thread, holding the tap until a message preview pops up on the screen. Scan the message preview to review and read the message, as long as you don’t tap it again you will keep the message in preview mode and be able to read the message without sending a read receipt. Can someone spy on my text messages? Yes, if someone has hacked your phone then he or she can surely read your text messages. And, to do the same one needs to install a tracking or spying app on your smartphone. Can someone spy through a phone camera? Yes, you can be spied upon through the camera of your smartphone. Can someone read your text messages from another device? Reading Text Messages Secretly You can read text messages on any phone, be it Android or iOS, without the knowledge of the target user. All you need is a phone spy service for it. Such services are not rare nowadays. There are so many apps that advertise phone spying solutions with top-notch services. Can you see someone’s texts with their Apple ID? Yes. If they can login to your iCloud then they have the exact same access as you do. iCloud has no way of knowing who is using the AppleID and signing in. How to track Old Text messages on iPhone? How to Track Old Text Messages/iMessages on iPhone. Look up Old Text Messages on iPhone. You can easily find old messages on iPhone 11/X/8/7/6 without scrolling with the search bar on iMessages. Tap Message app. While viewing the Messages list, swipe down with your finger to expose the search box. How do I access text messages on my iPhone? You can also access text messages on your iPhone with Spotlight. Just tap and swipe to the right to bring up Spotlight Search from the Home screen. Then, tap the search bar and enter the information you are searching for. How to find out what someone is texting on iPhone? Keep taping the upper bar on the very top of your iPhone screen, the bar that displays the current time, your carrier’s name, battery level, etc. As you tap, it will quickly scroll to the beginning of the message conversation. The trick can also be applied to lookup messages in other messaging apps like WhatsApp. How do I find old iMessages on my iPhone? Use Spotlight to Find Old iMessages/Text Messages. You can also access to text messages on iPhone with Spotlight. Just tap and swipe to the right to bring up Spotlight Search from the Home screen. Then, tap the search bar and enter the information you are searching for.
__label__pos
0.999931
4 I recently upgraded to OS X 10.9 Mavericks and am now seeing a "Choose Application" dialog pop up every minute or two asking: "Where is the GrowlHelperApp.app?". I don't particularly want to buy the new version of Growl, so how can I determine which application is looking for it so that I can change it's preferences or remove it? Choose Application dialog: Where is GrowlHelperApp.app? Update: It turns out that it was an old Dashboard widget that was looking for Growl. I found this out by deleting all of my Dashboard widgets since I haven't used it in a while, but I'd still be interested in finding out how one would discover what is launching this dialog. 1 • What apps are running when this pops up? – sdmeyers Nov 4, 2013 at 22:13 2 Answers 2 1 You could have used sudo opensnoop to see what files are being opened or fseventer to see what files are being modified. Those dialogs are often shown by AppleScripts that have reference a missing application. You could also have tried running mdfind GrowlHelperApp, but it doesn't search the contents of compiled scripts or scripts saved as applications. 1 Helped me solve this problem I am having an awful lot of time. The mdfind GrowlHelperApp with combo of deleteting the resource files found helped me to remove the annoying popup window. In my case, Firefox was the rooting problem app. You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged .
__label__pos
0.66682
Large family I have as many brothers as sisters and each my brother has twice as many sisters as brothers. How many children do parents have? Result n =  7 Solution: n = m+z m -1 = z 2(z - 1) = m m-n+z = 0 m-z = 1 m-2z = -2 m = 4 n = 7 z = 3 Calculated by our linear equations calculator. Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Showing 0 comments: 1st comment Be the first to comment! avatar Tips to related online calculators Do you have a linear equation or system of equations and looking for its solution? Or do you have quadratic equation? Following knowledge from mathematics are needed to solve this word math problem: Next similar math problems: 1. School marks dostatocny Boris has a total of 22 marks. Ones have 3 times less than threes. There are two twos. How many has ones and threes? 2. CZK coins CZK-coins Thaddeus and Jolana together have 15 CZK. Jolana has half of Thaddeus money. Nevertheless Jolana has 3 coins and Thaddeus 2 coins. Which coin has Thaddeus and Jolana (Help: CZK coins has values 1,2,5,10,20,50 CZK)? 3. Every day 7 pages books_9 Adelka reads the book every day 7 pages. When she reads one more page a day she will read it three days earlier. How long will Adelka read a book? How much does a book of pages have? 4. The store carot The store received the same number of cans of peas and corn. The first day sold 10 cans of peas and 166 cans ofcorn so that left 5 times more peas than corn cans. How many cans of each kind were in the store? 5. Circus cirkus On the circus performance was 150 people. Men were 10 less than women and children 50 more than adults. How many children were in the circus? 6. Men, women and children regiojet On the trip went men, women and children in the ratio 2:3:5 by bus. Children pay 60 crowns and adults 150. How many women were on the bus when a bus was paid 4,200 crowns? 7. Students skola After the fifth-grade class left 20% of students. In the seventh grade were added 2 pupils, in the eighth 1 pupil, in the ninth, the number has not changed, but it is now tenth students less than it was in the fifth grade. How many pupils are in the 9th. 8. School year boys_girls At the end of the school year has awarded 20% of the 250 children who attend school. Awat got 18% boys and 23% of girls. Determine how many boys and how many girls attend school. 9. Ball game lopta_3 Richard, Denis and Denise together scored 932 goals. Denis scored 4 goals over Denise but Denis scored 24 goals less than Richard. Determine the number of goals for each player. 10. Cows and calves cow_5 There are 168 cows and calves in the cowshed. Cows are in nine stalls and calves in four stalls. Same count cows are in every cow stall and three more in each calf stall than in a cow stall. What is the capacity of the stalls for cows and what for calves? 11. I think numbers_49 I think a number. When I multiply it by five, and after that I subtract 477, I get the same number as if I multiplied it twice. What number do I think? 12. Football match 4 futball_ball In a football match with the Italy lost 3 goals with Germans. Totally fell 5 goals in the match. Determine the number of goals of Italy and Germany. 13. Number unknown plusminus_3 Adela thought the two-digit number, she added it to its ten times and got 407. What number does she think? 14. Unknown numbers eq222_3 The sum of two consecutive natural numbers and their triple is 92. Find these numbers. 15. Isosceles trapezoid lichobeznik_2 Perimeter of the isosceles trapezoid is 48 cm. One side is two times greater than the second side. Determine the dimensions of the trapezoid. 16. Last page books_44 Two consecutive sheets dropped out of the book. The sum of the numbers on the sides of the dropped sheets is 154. What is the number of the last page of the dropped sheets? 17. Tippler flaska_mlieka Bottle with cork cost 8.8 Eur. The bottle is 0.8 euros more expensive than cork. How much is a bottle and the cork?
__label__pos
0.898548
\( \newcommand{\E}{\mathrm{E}} \) \( \newcommand{\A}{\mathrm{A}} \) \( \newcommand{\R}{\mathrm{R}} \) \( \newcommand{\N}{\mathrm{N}} \) \( \newcommand{\Q}{\mathrm{Q}} \) \( \newcommand{\Z}{\mathrm{Z}} \) \( \def\ccSum #1#2#3{ \sum_{#1}^{#2}{#3} } \def\ccProd #1#2#3{ \sum_{#1}^{#2}{#3} }\) CGAL 5.0 - 2D and 3D Linear Geometry Kernel Kernel::ComputeSquaredRadius_2 Concept Reference Definition Operations A model of this concept must provide: Kernel::FT operator() (const Kernel::Circle_2 &c)  returns the squared radius of c.   Kernel::FT operator() (const Kernel::Point_2 &p, const Kernel::Point_2 &q, const Kernel::Point_2 &r)  returns the squared radius of the circle passing through p, q and r. More...   Kernel::FT operator() (const Kernel::Point_2 &p, const Kernel::Point_2 &q)  returns the squared radius of the smallest circle passing through p, and q, i.e. one fourth of the squared distance between p and q.   Kernel::FT operator() (const Kernel::Point_2 &p)  returns the squared radius of the smallest circle passing through p, i.e. \( 0\).   Member Function Documentation ◆ operator()() Kernel::FT Kernel::ComputeSquaredRadius_2::operator() ( const Kernel::Point_2 p, const Kernel::Point_2 q, const Kernel::Point_2 r  ) returns the squared radius of the circle passing through p, q and r. Precondition p, q and r are not collinear.
__label__pos
0.992409
hey guys, i was wondering if you could help me with my problem! i made a program that uses 3 arguments as inline commands: argv[0] is the name of the program) argv [1] is the name of the first file argv[2] is the name of the second file The purpose of the program is to write an inline command with these 3 arguments in the form <program_name.exe file1.txt file2.txt> and the program copies the content of file1.txt in file2.txt and then deletes file1.txt (similar to cut command). The thing is that everytime a different name is used and i cant use remove command that takes the destination of the file as an argument. what do you suggest?thanks for your time! i hope i was clear. #include <stdio.h> int main ( int argc, char *argv[] ) { char c; if (argc!=3) { printf("Wrong number of arguments"); } else { FILE *file1; FILE *file2; file1=fopen(argv[1],"r"); file2 = fopen( argv[2], "w" ); while ((c=fgetc(file1)) != EOF) { fputc(c,file2); } fclose( file1); fclose( file2); } } Edited 3 Years Ago by mike_2000_17: Fixed formatting For Removing a file you can use remove macro defined in stdio.h int remove (const char *filename); on success it returns 0, but before removing a file be sure that file is closed. you want to remove argv[1] after copied to argv[2]. if ( remove(argv[1]) == 0) printf("Removed!"); else printf("Error Removing File!"); I think it could help you.. Edited 6 Years Ago by vinitmittal2008: n/a This question has already been answered. Start a new discussion instead.
__label__pos
0.872023
Work PowerShell Size-Based Log Splitter A mash-up of the following links: Blog post “Split file csv by size” Replace Text in a String Stack Overflow Regex Remove-Comma-Between-Double-Quotes param ($path, $size) $src = $path $SplitPath = $src.replace(".csv", "_clean_{0}.csv") # Read in source file and grab header row. $inData = New-Object -TypeName System.IO.StreamReader -ArgumentList $src $header = $inData.ReadLine() # Create initial output object $outData = New-Object -TypeName System.Text.StringBuilder [void]$outData.Append($header) $i = 0 while( $line = $inData.ReadLine() ){ # If the object is longer than $size then output the content of the object and create a new one. if( $outData.Length -gt $size ){ Write-Output "Splitting to filename " ($SplitPath -f $i) $outData.ToString() | Out-File -FilePath ( $SplitPath -f $i ) -Encoding ascii $outData = New-Object -TypeName System.Text.StringBuilder [void]$outData.Append($header) $i++ } # Escape commas within quotes $line = $line.Replace(',(?!(([^"]*"){2})*[^"]*$)','\,') # Remove double-double quotes $line = $line.Replace('""','') Write-Verbose "$currentFile, $line" [void]$outData.Append("`r`n$($line)") } # Write contents of final object Write-Output "Splitting to filename " ($SplitPath -f $i) $outData.ToString() | Out-File -FilePath ( $SplitPath -f $i ) -Encoding ascii #Close StreamReader $inData.Close()
__label__pos
0.995038
iT邦幫忙 2019 iT 邦幫忙鐵人賽 2 Modern Web 勇者鬥Vue龍系列 第 32 Vue.js Core 30天屠龍記(第32天): 依賴注入 前面介紹到可以使用 $parent 來訪問父組件的實體,這樣的方式限制了存取的實體一定要與被引用的組件是直接的關係,這樣的存取方式會使組件間變得有強大的耦合,限縮了組件的彈性,本文要介紹的依賴注入方式可以減少兩個組件間的關聯性,增加各組件的彈性。 使用 $parent 的限制 以一個樹的組件來解釋使用 $parent 的問題,假設有兩個組件: rootnode : Vue.component('root', { name: 'root', template: ` <div> {{nodeName}} <slot></slot> </div> `, props: ['nodeName'] }); Vue.component('node', { name: 'node', template: ` <div @click.stop="showRoot"> {{nodeName}} <slot></slot> </div> `, props: ['nodeName'], methods: { showRoot: function() { alert('Root Name: ' + this.$parent.nodeName); } } }); var vm = new Vue({ el: '#app' }); 點擊 node 組件時會觸發事件,顯示出樹根的名字,這裡使用 this.$parent.nodeName 取得樹根的名字,在下面這樣單層的結構中是沒有問題的: <div id="app"> <root node-name="Root"> <node node-name="Node"> </node> </root> </div> 但如果是三層以上的結構會變為下面這樣: <div id="app"> <root node-name="Root"> <node node-name="Node"> <node node-name="NodeOfNode"> </node> </node> </root> </div> 當結構是如此時, this.$parent.nodeName 取得的就不再是 Root 了,而會取到 Node ,因此使用 $parent 限制了整個樹組件的可用性,接下來我們使用依賴注入改寫這個例子。 使用依賴注入 使用依賴注入有兩個要設定的地方: • 要注入的物件需要由 inject 設置。 • 被注入的物件需要由 provide 設置。 上面的例子中注入的物件為 root 組件的 nodeName ,所以要在 root 組件中使用 provide 屬性來提供 rootName 的值為 nodeName 的設定: Vue.component('root', { ... provide: function() { return { rootName: this.nodeName }; } }); 接著我們在注入給 node 組件,使用 inject 屬性設置要注入的 rootName 物件: Vue.component('node', { ... inject: ['rootName'], methods: { showRoot: function() { alert('Root Name: ' + this.rootName); } } }); 如此一來就可以直接在 node 實體中使用 rootName 了。 依賴注入不用每層都做 inject 設定 在上面的例子中由於除了 root 組件外下層都是使用 node 組件做設定的,這樣可能會形成每層都需要設置 inject 的錯誤觀念,實際上 Vue.js 的依賴注入不管中間的組件有沒有 inject 都可以在下層的組件中做注入的動作。 下面我們修改一下上面的例子,使用在 rootnode 外多加一個 leaf 組件,再來將 node 中的注入拔除: Vue.component('node', { name: 'node', template: ` <div @click.stop="showRoot"> {{nodeName}} <slot></slot> </div> `, props: ['nodeName'], // inject: ['rootName'], // methods: { // showRoot: function() { // alert('Root Name: ' + this.rootName); // } // } }); Vue.component('leaf', { name: 'leaf', template: ` <div @click.stop="showRoot"> {{nodeName}} <slot></slot> </div> `, props: ['nodeName'], inject: ['rootName'], methods: { showRoot: function() { alert('Root Name: ' + this.rootName); } } }); <div id="app"> <root node-name="Root"> <node node-name="Node"> <node node-name="NodeOfNode"> <leaf node-name="Leaf"> </leaf> </node> </node> </root> </div> 在上面的結構下, rootleaf 沒有直接的關係,但是 inject 依然有效,說明了 inject 不用是直接的上下層關係。 provideinject 屬性使用方式 provide 將要被注入到其他組件的物件定義在 provide 屬性,其它的組件才可以注入這些物件, provide 有兩個設定方式: 物件及函式: // Object var Provider = { provide: { foo: 'bar' } } // () => Object var Provider = { provide: function() { return { foo: 'bar' } } } 設定的變數為固定的時候(例如: stringnumber ),可以直接使用物件設定就好,但像是上面的例子( this.nodeName )那樣的變數就要使用函式才能設置。 inject inject 的基本設置就是陣列,陣列中的字串是要注入的物件名稱: inject: ['foo'] 這樣就可以把 foo 給注入到組件中做使用。 另一個設定方式為物件,物件的 Key 為注入的物件名稱,而 Value 是一個物件,這個物件有兩個可選的屬性: fromdefault : • from : 目標物件名稱,在注入的組件中想要改變被注入物件的名稱時使用此屬性設置。 • default : 當上層沒有相符的被注入物件時使用此數值當作預設值。 下面繼續使用之前的樹組件當作例子: Vue.component('leaf', { ... inject: { rootAlias: { from: 'rootName' } }, methods: { showRoot: function() { alert('Root Name: ' + this.rootAlias); } } }); 這裡將注入進來的 rootName 改名為 rootAlias ,因此在 leaf 組件中就可以使用 rootAlias 取得 rootName 的資料了。 接著我們將樹組件的結構改為下面這樣: <node node-name="Node"> <node node-name="NodeOfNode"> <leaf node-name="Leaf"> </leaf> </node> </node> root 組件從結構中拔除,這意味著 leaf 已經拿不到 rootName 了,所以點擊後 Root 的名稱會變為 undefined ,這時可以在設定上加上 default : Vue.component('leaf', { ... inject: { rootAlias: { from: 'rootName', default: 'Default Root' } }, methods: { showRoot: function() { alert('Root Name: ' + this.rootAlias); // alert('Root Name: ' + this.rootName); } } }); 這樣就可以避免上層沒有注入而導致錯誤的情形。 DEMO 結語 依賴注入在一般的開發上使用到的機會比較少,但如果是在開發或是使用一些底層或是通用的工具時就可能要了解依賴注入的使用方式,像是 VeeValidate 中如果要在不同的組件使用相同的驗證器的話就需要使用依賴注入。 參考資料 上一篇 Vue.js Core 30天屠龍記(第31天): 尋訪其他組件 系列文 勇者鬥Vue龍32 尚未有邦友留言 立即登入留言
__label__pos
0.989536
Shodor interactivate perimeter explorer download The goals of this site are the creation, collection, evaluation, and dissemination of javabased courseware for middle school mathematics explorations. Collect the worksheet to accompany the perimeter explorer applet now and complete it. A shape will be automatically generated with the perimeter. The independent institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Students use the areas and perimeters they compute for. Using a teacher designed worksheet, students will use the interactivate software tool plop it, accessed at. The word problems also include graph paper for students to draw and answer reallife area and perimeter problems. Winner of the standing ovation award for best powerpoint templates from presentations magazine. In this lesson, students will explore the area of irregular shapes to find multiple different methods for calculating area. Shapes is a simple utility for mac os x which allows you to quickly work out quantities such as area and perimeter on a range of simple shapes. The design shapes usually reoccur in different sizes. As many students now have mobile phones which include a calculator this resource attempts to emulate such a calculator. Open your browser to perimeter explorer in order to demonstrate this activity to the. This is an interactive resource about calculating the area of a shape presented on a. So when the picture folds on itself, we have to scan the columns and the rows to see how many extra perimeter lines there are. Students decipher which type of twodimensional would create the most space for designing buildings. Project an irregular shape on the board using the shape explorer applet. Common core math practice standards 17 are also addressed. Common core high school geometry resources for expressing. Explorations with area and perimeter announce to students that a new city ordinance requires that schools must cover a minimum area, regardless of how tall the buildings are, in order to be. Math mammoth south african version has been customised to south africa in the following manners. Shodor interactivate lessons pythagorean theorem abstract. Parallelograms discussion introduces students to parallelograms and rhombbi and defines the characteristics necessary to determine each shape. Math mammoth grade 5a worktext comprises a complete maths curriculum for the first half of fifth grade mathematics. Perimeter explorer interactivate maths zone cool learning. Shodor interactivate discussions perimeter explorer student. This 3dimensional model allows users to explore functions and their revolution around an axis. Perimeter explorer o find perimeter of irregular straightedged shapes on a grid. Shodor interactivate lessons area explorations abstract. Shodor interactivate discussions perimeter explorer. Online games and resources for shapes, quadrilaterals. Area explorer interactivate maths zone cool learning games. Perimeter elementary shodor interactivate lessons perimeter elementary abstract. Demo version of perimeter, an strategy game, sp, for pcs and laptops with windows systems. Working with a partner and using the interactive area explorer at. Various videos including area and perimeter that shows how various arrangements can have the same area, but a different perimeter. Worlds best powerpoint templates crystalgraphics offers more powerpoint templates than anyone else in the world, with over 4 million to choose from. This method can be used instead of counting all the squares. A maths dictionary for kids a maths dictionary for kids 2007 by jenny eather interactive dictionary. Perimeter explorer is one of the interactivate assessment explorers. So the perimeter of the shape is eighteen plus two, which is twenty. Students compute the area and perimeter of twodimensional figures to demonstrate their understanding of using mathematical formulas. Practice perimeter, area, and circumference skills herefor free. Geometry explorer download software free download geometry. Next, open your browser to the pythagorean explorer in order to demonstrate this activity to the students. When you want put up a new fence around your home, how will you know how much fencing you need. Home online resources geometry the pythagorean theorem the pythagorean theorem. Middle school students ability to function well with this mode of instruction has not been established in the literature, although the circumstances of generation z growing up with technology suggest potential success. Team 62 letter to parents team 62 welcome back letter every child deserves a champion an adult who will never give up on them, who understands the power of connection and insists that they become the best that they can possibly be. Easier a fractal is a shape, often drawn by a computer, that repeats itself in a pattern. This is an annotated and handpicked list of online games, worksheets, tutorials and activities for area and perimeter, suiting best grades 36. This is similar to the game show lets make a deal, where you choose one of n doors in hopes of finding a grand prize behind one of the doors. You can also click below or copy into your browser. This is a lesson using the geometric interpretations of the various conic sections to explain their equations. Grade 4 perimeter and area involving whole numbers edugains. Introduction to area and perimeter using project interactivate. The school system retains control over what links will be placed on systemrelated websites. A shape will be automatically generated with the perimeter that you choose. Word problems with leon interactive math word problems. So the perimeter of the object is two more than the perimeter of the box. There are also videos on finding the area of a triangle, trapezoid, various trapezoids, and circles. The first step is think about how we find perimeter. Theyll give your presentations a professional, memorable appearance the kind of sophisticated look that todays audiences expect. Introduction to area and perimeter using project interactivate wilson burgos. Geometry explorer download software geometry master v. Using the perimeter explorer applet, create a figure with area of. And imagine the outside edges of the figure are tight ropes. This week we will focus on place value and rounding 3. Run a simulation to generate results from running the monty hall for multiple trials. Perimeter explorer is one of the interactivate assessment. Find the perimeter and area of odd shapes on rectangular grid. Students also access project succeed, deaf stem lessons, and blue waters internships. Area and perimeter interactive math notebook goalbook. The lesson involves using two java applets on the project interactivate websitesee screensh ots that follow. Comparison of facetoface and online mathematics learning of. Is there a way to calculate perimeter without having to count everything. A free powerpoint ppt presentation displayed as a flash slide show on id. Online education is increasing in popularity at the college and high school levels with several studies showing the comparability of elearning and more traditional methods. Move the adjust slider first then calculate area and perimeter of shape drawn yr 6 perimeter explorer interactivate students are shown shapes on a grid after setting the area and asked to calculate. Please click on this link or go to our website where we have streamlined information for easier access. Move the adjust slider first then calculate area and perimeter of shape drawn yr 6 area explorer interactivate set the perimeter and calculate the area of the shape drawn on the grid. Interactive web sites will provide students with practice in calculating both area and perimeter, and real world problems will encourage students to discover that math is fun and something we. Students should be able to understand and recognize the ones, tens, hundreds, and thousands place. A shape will be automatically generated with the area that you choose. Area explorer is one of the interactivate assessment explorers. Chart and diagram slides for powerpoint beautifully designed chart and diagram s for powerpoint with visually stunning graphics and animation effects. This is a handpicked list of online activities, tutorials, and worksheets concerning the the pythagorean theorem. Solve problems involving the addition, subtraction, multiplication, and division of singleand multidigit whole numbers, and. Geometry explorer updates gustavus adolphus college. The journal prompts are scaffolded for different ability levels. Once we have calculated the perimeter we will put our answer in the perimeter input box and click the check answer button. Ppt finding area and perimeter of polygons powerpoint. Students hypothesize the most important aspects of designing a building for a reasonable cost. Open your browser to perimeter explorer in order to demonstrate this activity to the students. This is an annotated and handpicked list of games and other online resources related to shapes, quadrilaterals, triangles, and polygons, suitable for grades 15. An interactive whiteboard resource to encourage students to apply their maths skills to the real world. Common core high school geometry resources for geometric. Area and perimeter games, great for primary school. Perimeter institute is the worlds largest research hub devoted to theoretical physics. Now lets play with the perimeter explorer applet and see how well our method works. Keep, access, and share your bookmarks and favorite links on the web at. Estimate, measure, and record length, perimeter, area, mass, capacity, volume, and elapsed. For this example the length is 4 and the width is 5. Once one or two functions have been rotated around an axis, the applet calculates. Designing and constructing different shaped buildings sas. On a mission to transform learning through computational thinking, shodor is dedicated to the reform and improvement of mathematics and science education through student enrichment, faculty enhancement, and interactive curriculum development at all levels. Students will learn about perimeter and the units used to measure perimeter using a variety of materials including their hands, feet, rulers, and computer applets. Tips4math grade 4 perimeter and area involving whole numbers overall expectations students will. Download and load the shockwave plugin available at. Our new crystalgraphics chart and diagram slides for powerpoint is a collection of over impressively designed datadriven chart and editable diagram s guaranteed to impress any audience. Conic flyer is a virtual manipulative for learners to manipulate different types of conic section equations on a coordinate plane using slider bars. This lesson is designed to examine the mathematical concept of perimeter. Includes both a math vocabulary journal and journal prompts. Online games and resources for shapes, quadrilaterals, triangles, and polygons. Ppt spice up your math class with technology powerpoint. 1109 1210 176 109 1568 48 258 549 367 1321 92 1044 1097 1638 771 263 1315 989 1238 288 1245 483 1223 920 184 527 1206 545 305 1208 948 1042 1176 18 354 432 270 553 309
__label__pos
0.891577
Skip to main content \(\newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) Mathematics LibreTexts 5.5: Decimals and Fractions (Part 1) • Page ID 5000 • \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)\(\newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \(\newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) Learning Objectives • Convert fractions to decimals • Order decimals and fractions • Simplify expressions using the order of operations • Find the circumference and area of circles be prepared! Before you get started, take this readiness quiz. 1. Divide: 0.24 ÷ 8. If you missed this problem, review Example 5.4.9. 2. Order 0.64__0.6 using < or >. If you missed this problem, review Example 5.2.7. 3. Order −0.2__−0.1 using < or >. If you missed this problem, review Example 5.2.8. Convert Fractions to Decimals In Decimals, we learned to convert decimals to fractions. Now we will do the reverse—convert fractions to decimals. Remember that the fraction bar indicates division. So \(\dfrac{4}{5}\) can be written 4 ÷ 5 or \(5 \overline{)4}\). This means that we can convert a fraction to a decimal by treating it as a division problem. Note: Convert a Fraction to a Decimal To convert a fraction to a decimal, divide the numerator of the fraction by the denominator of the fraction. Example \(\PageIndex{1}\): Write the fraction \(\dfrac{3}{4}\) as a decimal. Solution A fraction bar means division, so we can write the fraction 3 4 using division. A division problem is shown. 3 is on the inside of the division sign and 4 is on the outside. Divide. A division problem is shown. 3.00 is on the inside of the division sign and 4 is on the outside. Below the 3.00 is a 28 with a line below it. Below the line is a 20. Below the 20 is another 20 with a line below it. Below the line is a 0. Above the division sign is 0.75. So the fraction \(\dfrac{3}{4}\) is equal to 0.75. Exercise \(\PageIndex{1}\): Write the fraction as a decimal: \(\dfrac{1}{4}\). Answer \(0.25\) Exercise \(\PageIndex{2}\): Write the fraction as a decimal: \(\dfrac{3}{8}\). Answer \(0.375\) Example \(\PageIndex{2}\): Write the fraction \(− \dfrac{7}{2}\) as a decimal. Solution The value of this fraction is negative. After dividing, the value of the decimal will be negative. We do the division ignoring the sign, and then write the negative sign in the answer. $$- \dfrac{7}{2}$$ Divide 7 by 2. A division problem is shown. 7.0 is on the inside of the division sign and 2 is on the outside. Below the 7 is a 6 with a line below it. Below the line is a 10. Below the 10 is another 10 with a line below it. Below the line is a 0. 3.5 is written above the division sign. So, \(− \dfrac{7}{2}\) = −3.5. Exercise \(\PageIndex{3}\): Write the fraction as a decimal: \(− \dfrac{9}{4}\). Answer \(-2.25\) Exercise \(\PageIndex{4}\): Write the fraction as a decimal: \(− \dfrac{11}{2}\). Answer \(-5.5\) Repeating Decimals So far, in all the examples converting fractions to decimals the division resulted in a remainder of zero. This is not always the case. Let’s see what happens when we convert the fraction \(\dfrac{4}{3}\) to a decimal. First, notice that \(\dfrac{4}{3}\) is an improper fraction. Its value is greater than 1. The equivalent decimal will also be greater than 1. We divide 4 by 3. A division problem is shown. 4.000 is on the inside of the division sign and 3 is on the outside. Below the 4 is a 3 with a line below it. Below the line is a 10. Below the 10 is a 9 with a line below it. Below the line is another 10, followed by another 9 with a line, followed by another 10, followed by another 9 with a line, followed by a 1. Above the division sign is 1.333... No matter how many more zeros we write, there will always be a remainder of 1, and the threes in the quotient will go on forever. The number 1.333… is called a repeating decimal. Remember that the “…” means that the pattern repeats. Definition: Repeating Decimal A repeating decimal is a decimal in which the last digit or group of digits repeats endlessly. How do you know how many ‘repeats’ to write? Instead of writing 1.333 … we use a shorthand notation by placing a line over the digits that repeat. The repeating decimal 1.333 … is written 1.\(\overline{3}\). The line above the 3 tells you that the 3 repeats endlessly. So 1.333… = 1.\(\overline{3}\). For other decimals, two or more digits might repeat. Table \(\PageIndex{1}\) shows some more examples of repeating decimals. Table \(\PageIndex{1}\) 1.333… = 1.\(\overline{3}\) 3 is the repeating digit 4.1666… = 4.1\(\overline{6}\) 6 is the repeating digit 4.161616… = 4.\(\overline{16}\) 16 is the repeating block 0.271271271… = 0.\(\overline{271}\) 271 is the repeating block Example \(\PageIndex{3}\): Write \(\dfrac{43}{22}\) as a decimal. Solution Divide 43 by 22. A division problem is shown. 43.00000 is on the inside of the division sign and 22 is on the outside. Below the 43 is a 22 with a line below it. Below the line is a 210 with a 198 with a line below it. Below the line is a 120 with 110 and a line below it. Below the line is 100 with 88 and a line below it. Below the line is 120 with 110 and a line below it. Below the line is 100 with 88 and a line below it. Below the line is an ellipses. There are arrows pointing to the 120s saying 120 repeats. There are arrows pointing to the 100s saying 100 repeats. There are arrows pointing to the 88s saying, in red, “The pattern repeats, so the numbers in the quotient will repeat as well.” The quotient is shown above the division sign. It is 1.95454. Notice that the differences of 120 and 100 repeat, so there is a repeat in the digits of the quotient; 54 will repeat endlessly. The first decimal place in the quotient, 9, is not part of the pattern. So, \[\dfrac{43}{22} = 1.9 \overline{54}\] Exercise \(\PageIndex{5}\): Write as a decimal: \(\dfrac{27}{11}\). Answer \(2. \overline{45}\) Exercise \(\PageIndex{6}\): Write as a decimal: \(\dfrac{51}{22}\). Answer \(2.3 \overline{18}\) It is useful to convert between fractions and decimals when we need to add or subtract numbers in different forms. To add a fraction and a decimal, for example, we would need to either convert the fraction to a decimal or the decimal to a fraction. Example \(\PageIndex{4}\): Simplify: \(\dfrac{7}{8}\) + 6.4. Solution Change \(\dfrac{7}{8}\) to a decimal. CNX_BMath_Figure_05_03_011_img-01.png 0.875 + 6.4 Add.   7.275 Exercise \(\PageIndex{7}\): Simplify: \(\dfrac{3}{8}\) + 4.9. Answer \(5.275\) Exercise \(\PageIndex{8}\): Simplify: 5.7 + \(\dfrac{13}{20}\). Answer \(6.35\) Order Decimals and Fractions In Decimals, we compared two decimals and determined which was larger. To compare a decimal to a fraction, we will first convert the fraction to a decimal and then compare the decimals. Example \(\PageIndex{5}\): Order \(\dfrac{3}{8}\)__0.4 using < or >. Solution Convert \(\dfrac{3}{8}\) to a decimal. 0.375__0.4 Compare 0.375 to 0.4 0.375 < 0.4 Rewrite with the original fraction. \(\dfrac{3}{8}\) < 0.4 Exercise \(\PageIndex{9}\): Order each of the following pairs of numbers, using < or >. \[\dfrac{17}{20} \_ \_ \; 0.82\] Answer \(>\) Exercise \(\PageIndex{10}\): Order each of the following pairs of numbers, using < or >. \[\dfrac{3}{4} \_ \_ \; 0.785\] Answer \(<\) When ordering negative numbers, remember that larger numbers are to the right on the number line and any positive number is greater than any negative number. Example \(\PageIndex{6}\): Order −0.5___\(− \dfrac{3}{4}\) using < or >. Solution Convert \(− \dfrac{3}{4}\) to a decimal. −0.5___−0.75 Compare −0.5 to −0.75. −0.5 > −0.75 Rewrite the inequality with the original fraction. −0.5 > \(− \dfrac{3}{4}\) Exercise \(\PageIndex{11}\): Order each of the following pairs of numbers, using < or >: \[− \dfrac{5}{8} \_ \_ −0.58\] Answer \(<\) Exercise \(\PageIndex{12}\): Order each of the following pairs of numbers, using < or >: \[−0.53 \_ \_ − \dfrac{11}{20}\] Answer \(>\) Example \(\PageIndex{7}\): Write the numbers \(\dfrac{13}{20}\), 0.61, \(\dfrac{11}{16}\) in order from smallest to largest. Solution Convert the fractions to decimals. 0.65, 0.61, 0.6875 Write the smallest decimal number first. 0.61, ____, _____ Write the next larger decimal number in the middle place. 0.61, 0.65, _____ Write the last decimal number (the larger) in the third place. 0.61, 0.65, 0.6875 Rewrite the list with the original fractions. 0.61, \(\dfrac{13}{20}, \dfrac{11}{16}\) Exercise \(\PageIndex{13}\): Write each set of numbers in order from smallest to largest: \(\dfrac{7}{8}, \dfrac{4}{5}\), 0.82. Answer \(\frac{4}{5}\), \(0.82\), \(\frac{7}{8}\) Exercise \(\PageIndex{14}\): Write each set of numbers in order from smallest to largest: 0.835, \(\dfrac{13}{16}, \dfrac{3}{4}\). Answer \(\frac{3}{4}\), \(\frac{13}{16}\), \(0.835\) Simplify Expressions Using the Order of Operations The order of operations introduced in Use the Language of Algebra also applies to decimals. Do you remember what the phrase “Please excuse my dear Aunt Sally” stands for? Example \(\PageIndex{8}\): Simplify the expressions: (a) 7(18.3 − 21.7) (b) \(\dfrac{2}{3}\) (8.3 − 3.8) Solution (a) 7(18.3 − 21.7) Simplify inside parentheses. 7(−3.4) Multiply. −23.8 (b) \(\dfrac{2}{3}\) (8.3 − 3.8) Simplify inside parentheses. $$\dfrac{2}{3} (4.5)$$ Write 4.5 as a fraction. $$\dfrac{2}{3} \left(\dfrac{4.5}{1}\right)$$ Multiply. $$\dfrac{9}{3}$$ Simplify. $$3$$ Exercise \(\PageIndex{15}\): Simplify: (a) 8(14.6 − 37.5) (b) \(\dfrac{3}{5}\) (9.6 − 2.1). Answer a \(-183.2\) Answer b \(4.5\) Exercise \(\PageIndex{16}\): Simplify: (a) 25(25.69 − 56.74) (b) \(\dfrac{2}{7}\) (11.9 − 4.2). Answer a \(-776.25\) Answer b \(2.2\) Example \(\PageIndex{9}\): Simplify each expression: (a) 6 ÷ 0.6 + (0.2)4 − (0.1)2 (b) \(\left(\dfrac{1}{10}\right)^{2}\) + (3.5)(0.9) Solution (a) 6 ÷ 0.6 + (0.2)4 − (0.1)2 Simplify exponents. 6 ÷ 0.6 + (0.2)4 − 0.01 Divide. 10 + (0.2)4 − 0.01 Multiply. 10 + 0.8 − 0.01 Add. 10.8 − 0.01 Subtract. 10.79 (b) \(\left(\dfrac{1}{10}\right)^{2}\) + (3.5)(0.9) Simplify exponents. \(\dfrac{1}{100}\) + (3.5)(0.9) Multiply. \(\dfrac{1}{100}\) + 3.15 Convert \(\dfrac{1}{100}\) to a decimal. 0.01 + 3.15 Add. 3.16 Exercise \(\PageIndex{17}\): Simplify: 9 ÷ 0.9 + (0.4)3 − (0.2)2. Answer \(11.16\) Exercise \(\PageIndex{18}\): Simplify: \(\left(\dfrac{1}{2}\right)^{2}\) + (0.3)(4.2). Answer \(1.51\) Contributors and Attributions 5.5: Decimals and Fractions (Part 1) is shared under a not declared license and was authored, remixed, and/or curated by OpenStax. • Was this article helpful?
__label__pos
0.999835
Link suggestions creating duplicate pages, neither links to existing page Steps to reproduce 1. Open page in Obsidian 2. Try generating link to existing page. Expected result 1 page is suggested, and the resulting link points to the page in your database of this title. Actual result Some pages have 2 page titles suggested, both appearing identical (but one with a space at the end). Neither link goes to the desired page. Both are seen as blank. Video: https://1drv.ms/u/s!Ah7iCG_uriY8m6pDYVQHYpW5E6X8zQ?e=HgbEnU Environment • Operating system: Windows 10 Pro • Obsidian version: Obsidian 6.7 • Using custom CSS: yes Additional information I am storing my obsidian folder on OneDrive. Perhaps it is doing something funny. I have the local storage feature on (no files are just 'links to files online'). 1. Does the note’s title The Visual System contain non breakable whitespaces? 2. Can you try to cancel the title of that note and rewrite it and see if/how it works? 1. I haven’t explicitly used any HTML in my vault. Attempting to detect these spaces with the sample method as the example described here - https://en.wikipedia.org/wiki/Non-breaking_space - Does not reveal them. 2. Elaborate on what you mean here. After checking for non-breaking spaces but squeezing the window and expanding it, something happened that fixed the issue. Suddenly I can’t get the 2 suggestions any longer and it is a single suggestion linking to the correct page. I’ll update this thread if it comes back. It has nothing to do with HTML. We have seen issues where note titles (the filenames) contain these special characters. I asked you to rename the note so that you can get rid of these characters. I didn’t rename it, but it seems the issue has gone away. I thought it had something to with HTML since the page discussing non-breaking white space appeared to focus on it as being HTML. fwiw, I’ve never used anything besides normal characters and spaces in obsidian. I’ll keep an eye out for the issue again. If you have any quick ways to check for non-breaking white spaces, please let me know. 1 Like
__label__pos
0.850246
.NET Interview Questions - Part 1 Posted by VIJI Saturday, June 27, 2009 1. What is the difference between ASP.Net and VB.NET? ASP.Net is an “environment”, and VB.Net is a programming language. You can write ASP.Net pages (called “Web Forms” by Microsoft) using VB.Net (or C#, or J# or Managed C++ or any one of a number of .Net compatible languages). 2. What base class do all Forms inherit from? System.Windows.Forms.Form 3. What is the difference between Debug.Write and Trace.Write? When should each be used? The Debug.Write call won’t be compiled when the DEBUG symbol is not defined (when doing a release build). Trace.Write calls will be compiled. Debug.Write is for information you want only in debug builds, Trace.Write is for final realease 4. Can you give an example of when it would be appropriate to use a web service as opposed to non-serviced .NET component? Web service is one of main component in Service Oriented Architecture. You could use web services when your clients and servers are running on different networks and also different platforms. This provides a loosely coupled architecture. 5. Is it possible to restrict the scope/field method of a class to the classes in the same name space? There is no way to restrict to a namespace. Namespaces are never units of protection. But if you’re using assemblies, you can use the ‘internal’ access modifier to restrict access to only within the assembly. 6. What is the equivalent of exit () for quitting in .NET application? You can use System.Environment.Exit(int exitCode) to exit the application or Application.Exit() if it’s a Windows Forms app. 7. Is there regular expression (regex) support available to c# developers? Yes. The .NET class libraries provide support for regular expressions. Look at the System.Text.RegularExpressions namespace. 8. How many types of validation controls are supported in ASP.NET? RequiredField Validator Control,Range Validator Control, RegularExpression Validator Control,Custom Validator Control and Validation Summary Control are provided by ASP.NET. 9. What is Tracing in ASP.NET? ASP.NET introduces new functionality that allows you to write debug statements, directly in your code, without having to remove them from your application when it is deployed to production servers. Called tracing, this feature allows you to write variables or structures in a page, assert whether a condition is met, or simply trace through the code. 10. What exactly happens when aspx page is requested from Browser? At its core, the ASP.NET execution engine compiles the page into a class, which derives from the code behind class (which in turn derives directly or indirectly from the Page class). Then it injects the newly created class into the execution environment, instantiates it, and executes it. 11. Methods of Deployment: You can deploy an ASP.NET Web application using any one of the following three deployment options 1.XCOPY Deployment2.Using the Copy Project option in VS .NET3.Deployment using VS.NET installer 12. What can be stored in Web.config file? There are number of important settings that can be stored in the configuration file. Here are some of the most frequently used configurations, stored conveniently inside Web.config file.. 1. Database connections2. Session States3. Error Handling4. Security 13. Is String a value type of Reference Type? string is actually ref Type but some difference with other ref objectValue type - bool, byte, chat, decimal, double, enum , float, int, long, sbyte, short,strut, uint, ulong, ushortValue types are stored in the StackReference type - class, delegate, interface, object, stringReference types are stored in the Heap 14. What is the purpose of garbage Collection? The purpose of Garge collection is to identify and discard objects that are no longer needed by a program so that their resources may be reclaimed and reused. 15. What does AspCompat=”true” mean and when should I use it? AspCompat is an aid in migrating ASP pages to ASPX pages. It defaults to false but should be set to true in any ASPX file that creates apartment-threaded COM objects–that is, COM objects registered ThreadingModel=Apartment. That includes all COM objects written with Visual Basic 6.0. 0 Responses to .NET Interview Questions - Part 1 Post a Comment
__label__pos
0.900152
Alessander Fran&#231;a Alessander Fran&#231;a - 1 year ago 109 React JSX Question sidebar with Reactjs I am trying to do a sidebar with Reactjs but I am not getting it. I want to know how to pass the props from App.js to Middle.js correctly. I have the structure index.js > routes.js > App.js > Header.js, Middle.js > Sidebar.js, (DashboardPage.js, AccountPage.js - pages to be rendered dinamically) Header.js -> Has a IndexLink at image Sidebar.js -> Has a IndexLink and a Link to render AccountPage.js The route is set for App.js, but the component is supposed to load inside the Middle.js component. Here are my codes: index.js import 'babel-polyfill'; import React from 'react'; import { render } from 'react-dom'; import { Router, browserHistory } from 'react-router'; import routes from './routes'; render( <Router history={browserHistory} routes={routes} />, document.getElementById('app') ); routes.js import React from 'react'; import { Route, IndexRoute } from 'react-router'; import App from './components/App'; import DashboardPage from './components/middle/dashboard/DashboardPage'; import AccountPage from './components/middle/accounts/AccountPage'; export default ( <Route path="/" component={App}> <IndexRoute component={DashboardPage} /> <Route path="accounts" component={AccountPage} /> </Route> ); App.js import 'babel-polyfill'; import React, {PropTypes} from 'react'; import ReactDOM from 'react-dom'; import Header from './common/Header'; import Middle from './middle/Middle' import '../../css/style.css'; class App extends React.Component{ var { app } = this.props; render() { return ( <div> <Header /> <Middle /> </div> ); } } export default App; Header.js // IndexLink at img 'use strict'; import React from 'react'; import User from './User'; import {IndexLink, Link} from 'react-router'; import '../../../css/header.css'; class Header extends React.Component{ render() { return ( <div className="contactto-header"> <div className="contactto-header-content"> <IndexLink to="/"><img className="contactto-header-content-logo" src="static/img/logo.png" alt="contactto logo" /></IndexLink> <div className="contactto-header-content-alarm"></div> <div className="contactto-header-content-user"> <User /> </div> </div> </div> ); } } export default Header; Middle.js 'use strict'; import React, {PropTypes} from 'react'; import '../../../css/middle.css'; import SideBar from './SideBar' class Middle extends React.Component{ render() { return ( <div className="contactto-middle"> <div className="contactto-middle-content"> <SideBar /> {app.children} // should render here </div> </div> ); } } export default Middle; Sidebar.js 'use strict'; import React from 'react'; import {IndexLink, Link} from 'react-router'; import '../../../css/sidebar.css'; class SideBar extends React.Component{ render() { return ( <div className="sidebar-container"> <ul className="sidebar-container-ul"> <li className="sidebar-container-ul-li"> <IndexLink className="sidebar-container-ul-li-a" to="/">Dashboard</IndexLink> </li> <li className="sidebar-container-ul-li"> <Link className="sidebar-container-ul-li-a" to="accounts">Contas</Link> </li> </ul> </div> ); } } export default SideBar; What is wrong? I am new with React, if another code is not done correctly please tell me :) Thanks in advance. Answer Source First of all, your route components will be available under App through this.props.children. If you want to wrap them with you can try like this: class App extends React.Component{ render() { return ( <div> <Header /> <Middle> {this.props.children} </Middle> </div> ); } } An then in the Middle.js class Middle extends React.Component{ render() { return ( <div className="contactto-middle"> <div className="contactto-middle-content"> <SideBar /> {this.props.children} </div> </div> ); } }
__label__pos
0.983201
Commit 89c1f1cb authored by qinsoon's avatar qinsoon reformat code byy rustfmt parent d1df3753 Pipeline #749 failed with stages in 3 minutes and 12 seconds // Copyright 2017 The Australian National University // // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // // http://www.apache.org/licenses/LICENSE-2.0 // // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ......@@ -18,10 +18,12 @@ extern crate gcc; #[cfg(target_arch = "x86_64")] fn main() { gcc::compile_library("libruntime_c.a", &["src/runtime/runtime_c_x64_sysv.c"]); gcc::Config::new().flag("-O3").flag("-c") .file("src/runtime/runtime_asm_x64_sysv.S") .compile("libruntime_asm.a"); gcc::Config::new() .flag("-O3") .flag("-c") .file("src/runtime/runtime_asm_x64_sysv.S") .compile("libruntime_asm.a"); } #[cfg(target_os = "linux")] ......@@ -29,7 +31,9 @@ fn main() { fn main() { gcc::compile_library("libruntime_c.a", &["src/runtime/runtime_c_aarch64_sysv.c"]); gcc::Config::new().flag("-O3").flag("-c") gcc::Config::new() .flag("-O3") .flag("-c") .file("src/runtime/runtime_asm_aarch64_sysv.S") .compile("libruntime_asm.a"); } \ No newline at end of file } This diff is collapsed. This diff is collapsed. ......@@ -36,7 +36,7 @@ pub enum BinOp { FSub, FMul, FDiv, FRem FRem, } #[derive(Copy, Clone, Debug, PartialEq)] ......@@ -69,7 +69,7 @@ pub enum CmpOp { FULT, FULE, FUNE, FUNO FUNO, } impl CmpOp { ......@@ -154,7 +154,7 @@ impl CmpOp { SLT => ULT, SGT => UGT, SLE => ULE, _ => self, _ => self, } } ......@@ -162,32 +162,23 @@ impl CmpOp { use op::CmpOp::*; match self { SGE | SLT | SGT | SLE => true, _ => false _ => false, } } pub fn is_int_cmp(self) -> bool { use op::CmpOp::*; match self { EQ | NE | SGE | SGT | SLE | SLT | UGE | UGT | ULE | ULT => true, _ => false EQ | NE | SGE | SGT | SLE | SLT | UGE | UGT | ULE | ULT => true, _ => false, } } pub fn is_symmetric(self) -> bool { use op::CmpOp::*; match self { EQ | NE | FORD| FUNO| FUNE | FUEQ | FONE | FOEQ => true, _ => false EQ | NE | FORD | FUNO | FUNE | FUEQ | FONE | FOEQ => true, _ => false, } } } ......@@ -205,7 +196,7 @@ pub enum ConvOp { SITOFP, BITCAST, REFCAST, PTRCAST PTRCAST, } #[derive(Copy, Clone, Debug, PartialEq)] ......@@ -220,5 +211,5 @@ pub enum AtomicRMWOp { MAX, MIN, UMAX, UMIN } \ No newline at end of file UMIN, } ......@@ -23,4 +23,4 @@ pub type P<T> = Arc<T>; /// Construct a `P<T>` from a `T` value. pub fn P<T>(value: T) -> P<T> { Arc::new(value) } \ No newline at end of file } This diff is collapsed. This diff is collapsed. This diff is collapsed. ......@@ -47,11 +47,12 @@ use utils::LinkedHashMap; use std::collections::HashMap; // number of normal callee saved registers (excluding RSP and RBP) pub const CALLEE_SAVED_COUNT : usize = 5; pub const CALLEE_SAVED_COUNT: usize = 5; /// a macro to declare a set of general purpose registers that are aliased to the first one macro_rules! GPR_ALIAS { ($alias: ident: ($id64: expr, $r64: ident) -> $r32: ident, $r16: ident, $r8l: ident, $r8h: ident) => { ($alias: ident: ($id64: expr, $r64: ident) -> $r32: ident, $r16: ident, $r8l: ident, $r8h: ident) => { lazy_static!{ pub static ref $r64 : P<Value> = GPR!($id64, stringify!($r64), UINT64_TYPE); pub static ref $r32 : P<Value> = GPR!($id64 +1, stringify!($r32), UINT32_TYPE); ......@@ -59,7 +60,8 @@ macro_rules! GPR_ALIAS { pub static ref $r8l : P<Value> = GPR!($id64 +3, stringify!($r8l), UINT8_TYPE); pub static ref $r8h : P<Value> = GPR!($id64 +4, stringify!($r8h), UINT8_TYPE); pub static ref $alias : [P<Value>; 5] = [$r64.clone(), $r32.clone(), $r16.clone(), $r8l.clone(), $r8h.clone()]; pub static ref $alias : [P<Value>; 5] = [$r64.clone(), $r32.clone(), $r16.clone(), $r8l.clone(), $r8h.clone()]; } }; ......@@ -70,15 +72,17 @@ macro_rules! GPR_ALIAS { pub static ref $r16 : P<Value> = GPR!($id64 +2, stringify!($r16), UINT16_TYPE); pub static ref $r8 : P<Value> = GPR!($id64 +3, stringify!($r8) , UINT8_TYPE ); pub static ref $alias : [P<Value>; 4] = [$r64.clone(), $r32.clone(), $r16.clone(), $r8.clone()]; pub static ref $alias : [P<Value>; 4] = [$r64.clone(), $r32.clone(), $r16.clone(), $r8.clone()]; } }; ($alias: ident: ($id64: expr, $r64: ident)) => { lazy_static!{ pub static ref $r64 : P<Value> = GPR!($id64, stringify!($r64), UINT64_TYPE); pub static ref $r64 : P<Value> = GPR!($id64, stringify!($r64), UINT64_TYPE); pub static ref $alias : [P<Value>; 4] = [$r64.clone(), $r64.clone(), $r64.clone(), $r64.clone()]; pub static ref $alias : [P<Value>; 4] = [$r64.clone(), $r64.clone(), $r64.clone(), $r64.clone()]; } }; } ......@@ -179,7 +183,7 @@ pub fn get_alias_for_length(id: MuID, length: usize) -> P<Value> { if id < FPR_ID_START { let vec = match GPR_ALIAS_TABLE.get(&id) { Some(vec) => vec, None => panic!("didnt find {} as GPR", id) None => panic!("didnt find {} as GPR", id), }; match length { ......@@ -188,7 +192,7 @@ pub fn get_alias_for_length(id: MuID, length: usize) -> P<Value> { 16 => vec[2].clone(), 8 => vec[3].clone(), 1 => vec[3].clone(), _ => panic!("unexpected length {} for {}", length, vec[0]) _ => panic!("unexpected length {} for {}", length, vec[0]), } } else { for r in ALL_FPRS.iter() { ......@@ -235,7 +239,7 @@ pub fn get_color_for_precolored(id: MuID) -> MuID { if id < FPR_ID_START { match GPR_ALIAS_LOOKUP.get(&id) { Some(val) => val.id(), None => panic!("cannot find GPR {}", id) None => panic!("cannot find GPR {}", id), } } else { // we do not have alias for FPRs ......@@ -250,9 +254,9 @@ pub fn check_op_len(op: &P<Value>) -> usize { Some(64) => 64, Some(32) => 32, Some(16) => 16, Some(8) => 8, Some(1) => 8, _ => panic!("unsupported register length for x64: {}", op.ty) Some(8) => 8, Some(1) => 8, _ => panic!("unsupported register length for x64: {}", op.ty), } } ......@@ -320,7 +324,7 @@ lazy_static! { ]; } pub const FPR_ID_START : usize = 100; pub const FPR_ID_START: usize = 100; lazy_static!{ // floating point registers, we use SSE registers ......@@ -496,7 +500,7 @@ lazy_static! { } /// creates context for each machine register in FunctionContext pub fn init_machine_regs_for_func (func_context: &mut FunctionContext) { pub fn init_machine_regs_for_func(func_context: &mut FunctionContext) { for reg in ALL_MACHINE_REGS.values() { let reg_id = reg.extract_ssa_id().unwrap(); let entry = SSAVarEntry::new(reg.clone()); ......@@ -508,9 +512,9 @@ pub fn init_machine_regs_for_func (func_context: &mut FunctionContext) { /// gets the number of registers in a certain register group pub fn number_of_usable_regs_in_group(group: RegGroup) -> usize { match group { RegGroup::GPR => ALL_USABLE_GPRS.len(), RegGroup::GPR => ALL_USABLE_GPRS.len(), RegGroup::GPREX => ALL_USABLE_GPRS.len(), RegGroup::FPR => ALL_USABLE_FPRS.len() RegGroup::FPR => ALL_USABLE_FPRS.len(), } } ......@@ -574,9 +578,9 @@ pub fn get_callee_saved_offset(reg: MuID) -> isize { let id = if reg == RBX.id() { 0 } else { (reg - R12.id())/4 + 1 (reg - R12.id()) / 4 + 1 }; (id as isize + 1)*(-8) (id as isize + 1) * (-8) } /// is a machine register (by ID) callee saved? ......@@ -597,10 +601,9 @@ pub fn is_valid_x86_imm(op: &P<Value>) -> bool { if op.ty.get_int_length().is_some() && op.ty.get_int_length().unwrap() <= 32 { match op.v { Value_::Constant(Constant::Int(val)) if val as i32 >= i32::MIN && val as i32 <= i32::MAX => { true }, _ => false Value_::Constant(Constant::Int(val)) if val as i32 >= i32::MIN && val as i32 <= i32::MAX => true, _ => false, } } else { false ......@@ -615,49 +618,53 @@ pub fn estimate_insts_for_ir(inst: &Instruction) -> usize { match inst.v { // simple BinOp(_, _, _) => 1, BinOp(_, _, _) => 1, BinOpWithStatus(_, _, _, _) => 2, CmpOp(_, _, _) => 1, ConvOp{..} => 0, CmpOp(_, _, _) => 1, ConvOp { .. } => 0, // control flow Branch1(_) => 1, Branch2{..} => 1, Select{..} => 2, Watchpoint{..} => 1, WPBranch{..} => 2, Switch{..} => 3, Branch1(_) => 1, Branch2 { .. } => 1, Select { .. } => 2, Watchpoint { .. } => 1, WPBranch { .. } => 2, Switch { .. } => 3, // call ExprCall{..} | ExprCCall{..} | Call{..} | CCall{..} => 5, Return(_) => 1, ExprCall { .. } | ExprCCall { .. } | Call { .. } | CCall { .. } => 5, Return(_) => 1, TailCall(_) => 1, // memory access Load{..} | Store{..} => 1, CmpXchg{..} => 1, AtomicRMW{..} => 1, AllocA(_) => 1, AllocAHybrid(_, _) => 1, Fence(_) => 1, Load { .. } | Store { .. } => 1, CmpXchg { .. } => 1, AtomicRMW { .. } => 1, AllocA(_) => 1, AllocAHybrid(_, _) => 1, Fence(_) => 1, // memory addressing GetIRef(_) | GetFieldIRef{..} | GetElementIRef{..} | ShiftIRef{..} | GetVarPartIRef{..} => 0, GetIRef(_) | GetFieldIRef { .. } | GetElementIRef { .. } | ShiftIRef { .. } | GetVarPartIRef { .. } => 0, // runtime call New(_) | NewHybrid(_, _) => 10, NewStack(_) | NewThread(_, _) | NewThreadExn(_, _) | NewFrameCursor(_) => 10, ThreadExit => 10, Throw(_) => 10, SwapStack{..} => 10, ThreadExit => 10, Throw(_) => 10, SwapStack { .. } => 10, CommonInst_GetThreadLocal | CommonInst_SetThreadLocal(_) => 10, CommonInst_Pin(_) | CommonInst_Unpin(_) => 10, // others Move(_) => 0, PrintHex(_) => 10, PrintHex(_) => 10, SetRetval(_) => 10, ExnInstruction{ref inner, ..} => estimate_insts_for_ir(&inner), ExnInstruction { ref inner, .. } => estimate_insts_for_ir(&inner), _ => unimplemented!(), } } // Copyright 2017 The Australian National University // // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // // http://www.apache.org/licenses/LICENSE-2.0 // // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ......@@ -26,18 +26,18 @@ use std::fs::File; use std::collections::HashMap; /// should emit Mu IR dot graph? pub const EMIT_MUIR : bool = true; pub const EMIT_MUIR: bool = true; /// should emit machien code dot graph? pub const EMIT_MC_DOT : bool = true; pub const EMIT_MC_DOT: bool = true; pub struct CodeEmission { name: &'static str name: &'static str, } impl CodeEmission { pub fn new() -> CodeEmission { CodeEmission { name: "Code Emission" name: "Code Emission", } } } ......@@ -66,7 +66,7 @@ impl CompilerPass for CodeEmission { pub fn create_emit_directory(vm: &VM) { use std::fs; match fs::create_dir(&vm.vm_options.flag_aot_emit_dir) { Ok(_) => {}, Ok(_) => {} Err(_) => {} } } ......@@ -78,8 +78,14 @@ fn create_emit_file(name: String, vm: &VM) -> File { file_path.push(name); match File::create(file_path.as_path()) { Err(why) => panic!("couldn't create emit file {}: {}", file_path.to_str().unwrap(), why), Ok(file) => file Err(why) => { panic!( "couldn't create emit file {}: {}", file_path.to_str().unwrap(), why ) } Ok(file) => file, } } ......@@ -92,8 +98,14 @@ pub fn emit_mu_types(suffix: &str, vm: &VM) { file_path.push(&vm.vm_options.flag_aot_emit_dir); file_path.push("___types".to_string() + suffix + ".muty"); let mut file = match File::create(file_path.as_path()) { Err(why) => panic!("couldn't create mu types file {}: {}", file_path.to_str().unwrap(), why), Ok(file) => file Err(why) => { panic!( "couldn't create mu types file {}: {}", file_path.to_str().unwrap(), why ) } Ok(file) => file, }; { ......@@ -107,12 +119,16 @@ pub fn emit_mu_types(suffix: &str, vm: &VM) { if ty.is_struct() { write!(file, "{}", ty).unwrap(); let struct_ty = struct_map.get(&ty.get_struct_hybrid_tag().unwrap()).unwrap(); let struct_ty = struct_map .get(&ty.get_struct_hybrid_tag().unwrap()) .unwrap(); writeln!(file, " -> {}", struct_ty).unwrap(); writeln!(file, " {}", vm.get_backend_type_info(ty.id())).unwrap(); } else if ty.is_hybrid() { write!(file, "{}", ty).unwrap(); let hybrid_ty = hybrid_map.get(&ty.get_struct_hybrid_tag().unwrap()).unwrap(); let hybrid_ty = hybrid_map .get(&ty.get_struct_hybrid_tag().unwrap()) .unwrap(); writeln!(file, " -> {}", hybrid_ty).unwrap(); writeln!(file, " {}", vm.get_backend_type_info(ty.id())).unwrap(); } else { ......@@ -144,7 +160,7 @@ fn emit_mc_dot(func: &MuFunctionVersion, vm: &VM) { let blocks = mc.get_all_blocks(); type DotID = usize; let name_id_map : HashMap<MuName, DotID> = { let name_id_map: HashMap<MuName, DotID> = { let mut ret = HashMap::new(); let mut index = 0; ......@@ -159,7 +175,12 @@ fn emit_mc_dot(func: &MuFunctionVersion, vm: &VM) { for block_name in blocks.iter() { // BB [label = " write!(file, "{} [label = \"{}:\\l\\l", id(block_name.clone()), block_name).unwrap(); write!( file, "{} [label = \"{}:\\l\\l", id(block_name.clone()), block_name ).unwrap(); for inst in mc.get_block_range(&block_name).unwrap() { file.write_all(&mc.emit_inst(inst)).unwrap(); ......@@ -173,8 +194,10 @@ fn emit_mc_dot(func: &MuFunctionVersion, vm: &VM) { for block_name in blocks.iter() { let end_inst = mc.get_block_range(block_name).unwrap().end; for succ in mc.get_succs(mc.get_last_inst(end_inst).unwrap()).into_iter() { match mc.get_block_for_inst(*succ) { for succ in mc.get_succs(mc.get_last_inst(end_inst).unwrap()) .into_iter() { match mc.get_block_for_inst(*succ) { Some(target) => { let source_id = id(block_name.clone()); let target_id = id(target.clone()); ...... // Copyright 2017 The Australian National University // // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // // http://www.apache.org/licenses/LICENSE-2.0 // // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ...... This diff is collapsed. // Copyright 2017 The Australian National University // // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // // http://www.apache.org/licenses/LICENSE-2.0 // // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ......@@ -21,7 +21,7 @@ use compiler::backend; use std::any::Any; pub struct PeepholeOptimization { name: &'static str name: &'static str, } impl CompilerPass for PeepholeOptimization { ......@@ -38,7 +38,8 @@ impl CompilerPass for PeepholeOptimization { let mut cf = compiled_funcs.get(&func.id()).unwrap().write().unwrap(); for i in 0..cf.mc().number_of_insts() { // if two sides of a move instruction are the same, it is redundant, and can be eliminated // if two sides of a move instruction are the same, // it is redundant, and can be eliminated self.remove_redundant_move(i, &mut cf); // if a branch jumps a label that contains another jump, such as ......@@ -66,45 +67,49 @@ impl CompilerPass for PeepholeOptimization { impl PeepholeOptimization { pub fn new() -> PeepholeOptimization { PeepholeOptimization { name: "Peephole Optimization" name: "Peephole Optimization", } } fn remove_redundant_move(&mut self, inst: usize, cf: &mut CompiledFunction) { // if this instruction is a move, and move from register to register (no memory operands) if cf.mc().is_move(inst) && !cf.mc().is_using_mem_op(inst) { cf.mc().trace_inst(inst); // get source reg/temp ID let src : MuID = { let src: MuID = { let uses = cf.mc().get_inst_reg_uses(inst); if uses.len() == 0 { // moving immediate to register, its not redundant return; } } uses[0] }; // get dest reg/temp ID let dst : MuID = cf.mc().get_inst_reg_defines(inst)[0]; let dst: MuID = cf.mc().get_inst_reg_defines(inst)[0]; // turning temp into machine reg let src_machine_reg : MuID = { let src_machine_reg: MuID = { match cf.temps.get(&src) { Some(reg) => *reg, None => src None => src, } }; let dst_machine_reg : MuID = { let dst_machine_reg: MuID = { match cf.temps.get(&dst) { Some(reg) => *reg, None => dst None => dst, } }; // check if two registers are aliased if backend::is_aliased(src_machine_reg, dst_machine_reg) { info!("move between {} and {} is redundant! removed", src_machine_reg, dst_machine_reg); info!( "move between {} and {} is redundant! removed", src_machine_reg, dst_machine_reg ); // redundant, remove this move cf.mc_mut().set_inst_nop(inst); } else { ......@@ -149,7 +154,7 @@ impl PeepholeOptimization { // the instruction that we may rewrite let orig_inst = inst; // the destination we will rewrite the instruction to branch to let final_dest : Option<MuName> = { let final_dest: Option<MuName> = { let mut cur_inst = inst; let mut last_dest = None; loop { ......@@ -158,8 +163,13 @@ impl PeepholeOptimization { Some(ref dest) => { // get the block for destination let first_inst = mc.get_block_range(dest).unwrap().start; debug_assert!(mc.is_label(first_inst).is_none(), "expect start inst {} of \ block {} is a inst instead of label", first_inst, dest); debug_assert!( mc.is_label(first_inst).is_none(), "expect start inst {} of \ block {} is a inst instead of label", first_inst, dest ); trace!("examining first inst {} of block {}", first_inst, dest); ......@@ -170,10 +180,10 @@ impl PeepholeOptimization { cur_inst = first_inst; last_dest = Some(dest2.clone()); } None => break None => break, } } None => break None => break, } } last_dest ......@@ -184,13 +194,23 @@ impl PeepholeOptimization { let start = mc.get_block_range(&dest).unwrap().start; match mc.get_next_inst(start) { Some(i) => i, None => panic!("we are jumping to a block {}\ that does not have instructions?", dest) None => { panic!( "we are jumping to a block {}\ that does not have instructions?", dest ) } } }; info!("inst {} chain jumps to {}, rewrite as branching to {} (successor: {})", orig_inst, dest, dest, first_inst); info!( "inst {} chain jumps to {}, rewrite as branching to {} (successor: {})", orig_inst, dest, dest, first_inst ); mc.replace_branch_dest(inst, &dest, first_inst); } } ......
__label__pos
0.999677
eric6/E5Gui/E5GenericDiffHighlighter.py Sat, 27 Feb 2021 12:08:23 +0100 author Detlev Offenbach <[email protected]> date Sat, 27 Feb 2021 12:08:23 +0100 changeset 8138 169e65a6787c parent 7923 91e843545d9a child 8143 2c730d5fd177 permissions -rw-r--r-- Shell: added functionality to show a prompt when the main client process has exited (e.g. a script ended). # -*- coding: utf-8 -*- # Copyright (c) 2015 - 2021 Detlev Offenbach <[email protected]> # """ Module implementing a syntax highlighter for diff outputs. """ import re from PyQt5.QtGui import QSyntaxHighlighter, QTextCharFormat, QFont import Preferences def TERMINAL(pattern): """ Function to mark a pattern as the final one to search for. @param pattern pattern to be marked (string) @return marked pattern (string) """ return "__TERMINAL__:{0}".format(pattern) # Cache the results of re.compile for performance reasons _REGEX_CACHE = {} class E5GenericDiffHighlighter(QSyntaxHighlighter): """ Class implementing a generic diff highlighter. """ def __init__(self, doc): """ Constructor @param doc reference to the text document (QTextDocument) """ super(E5GenericDiffHighlighter, self).__init__(doc) self.regenerateRules() def __initColours(self): """ Private method to initialize the highlighter colours. """ self.textColor = Preferences.getDiffColour("TextColor") self.addedColor = Preferences.getDiffColour("AddedColor") self.removedColor = Preferences.getDiffColour("RemovedColor") self.replacedColor = Preferences.getDiffColour("ReplacedColor") self.contextColor = Preferences.getDiffColour("ContextColor") self.headerColor = Preferences.getDiffColour("HeaderColor") self.whitespaceColor = Preferences.getDiffColour("BadWhitespaceColor") def createRules(self, *rules): """ Public method to create the highlighting rules. @param rules set of highlighting rules (list of tuples of rule pattern (string) and highlighting format (QTextCharFormat)) """ for _idx, ruleFormat in enumerate(rules): rule, formats = ruleFormat terminal = rule.startswith(TERMINAL('')) if terminal: rule = rule[len(TERMINAL('')):] try: regex = _REGEX_CACHE[rule] except KeyError: regex = _REGEX_CACHE[rule] = re.compile(rule) self._rules.append((regex, formats, terminal)) def formats(self, line): """ Public method to determine the highlighting formats for a line of text. @param line text line to be highlighted (string) @return list of matched highlighting rules (list of tuples of match object and format (QTextCharFormat)) """ matched = [] for rx, formats, terminal in self._rules: match = rx.match(line) if not match: continue matched.append([match, formats]) if terminal: return matched return matched def makeFormat(self, fg=None, bg=None, bold=False): """ Public method to generate a format definition. @param fg foreground color (QColor) @param bg background color (QColor) @param bold flag indicating bold text (boolean) @return format definiton (QTextCharFormat) """ font = Preferences.getEditorOtherFonts("MonospacedFont") charFormat = QTextCharFormat() charFormat.setFontFamily(font.family()) charFormat.setFontPointSize(font.pointSize()) if fg: charFormat.setForeground(fg) if bg: charFormat.setBackground(bg) if bold: charFormat.setFontWeight(QFont.Bold) return charFormat def highlightBlock(self, text): """ Public method to highlight a block of text. @param text text to be highlighted (string) """ formats = self.formats(text) if not formats: # nothing matched self.setFormat(0, len(text), self.normalFormat) return for match, formatStr in formats: start = match.start() groups = match.groups() # No groups in the regex, assume this is a single rule # that spans the entire line if not groups: self.setFormat(0, len(text), formatStr) continue # Groups exist, rule is a tuple corresponding to group for groupIndex, group in enumerate(groups): if not group: # empty match continue # allow None as a no-op format length = len(group) if formatStr[groupIndex]: self.setFormat(start, start + length, formatStr[groupIndex]) start += length def regenerateRules(self): """ Public method to initialize or regenerate the syntax highlighter rules. """ self.normalFormat = self.makeFormat() self.__initColours() self._rules = [] self.generateRules() def generateRules(self): """ Public method to generate the rule set. Note: This method must me implemented by derived syntax highlighters. """ pass eric ide mercurial
__label__pos
0.975001
Making an infinite CSS carousel After searching and struggling for many hours, I still couldn’t find a good guide on how to make an infinite CSS carousel. There’s not reason this can’t be done using html and css, and yet almost every guide is using javascript, or css that doesn’t act… After searching and struggling for many hours, I still couldn't find a good guide on how to make an infinite CSS carousel. There's not reason this can't be done using html and css, and yet almost every guide is using javascript, or css that doesn't actually work when you try it. It's so simple and yet a working example is just not available, so I'm sharing what we've done at Rhosys to make our brand carousel. And of course you want to see the finished product first: rhosys-brand-carousel Diving into the code Here's out simple display: <section class="user-cloud"> <div class="brands-container"> <div class="brands-carousel"> <picture> <source srcset="assets/images/s-customer-cloud1.png" media="(max-width: 766px)" /> <img src="assets/images/customer-cloud1.png" /> </picture> <picture> <source srcset="assets/images/s-customer-cloud2.png" media="(max-width: 766px)" /> <img src="assets/images/customer-cloud2.png" /> </picture> <picture> <source srcset="assets/images/s-customer-cloud3.png" media="(max-width: 766px)" /> <img src="assets/images/customer-cloud3.png" /> </picture> <picture> <source srcset="assets/images/s-customer-cloud4.png" media="(max-width: 766px)" /> <img src="assets/images/s-customer-cloud4.png" /> </picture> </div> </div> </section> We have four images on mobile and only three on desktop. To make this responsive we are using the picture tag with source and img set. Img selectors didn't work, so we use picture. No idea why, but rather than fighting with that it's easier to do this. No messy x2 multiples or figuring out what the size of the elements should be on the screen. Then we'll add some nice padding and setup to our container. *IMPORTANT: the max-width here should always been the same width as all your pictures, so that 100% means the full picture width: .brands-container { max-width: 1050px; margin: auto; padding:0 1em; overflow: hidden; } .brands-carousel { position: relative; padding-left: 0; margin: 0; height: 200px; overflow: hidden; } .brands-carousel > div { width: 100%; } That's the setup, which is relatively simple, and here's the important setup: # Each picture in the carousel is 100% of the parent. .brands-carousel > picture { width: 100%; position: absolute; top: 0; display: flex; justify-content: center; animation: carousel 20s linear infinite; # It also starts off the screen until it is time. transform: translateX(100%); } I'm going to skip talking about the first-picture keyframe, but every picture get's the same setup, it takes N seconds to move onto the screen and stay there until the next picture moves. Calculated on desktop there are three pictures, so each one gets 1/3 of the times on stage. .brands-carousel > picture:nth-child(1) { animation-name: first-picture, carousel; animation-duration: 20s; animation-iteration-count: 1, infinite; animation-delay: 0s, 20s; transform: translateX(0%); } .brands-carousel > picture:nth-child(2) { animation-delay: Calc(20s * .33); } .brands-carousel > picture:nth-child(3) { animation-delay: Calc(20s * .66); } # The keyframes @keyframes first-picture { 0% { transform: translateX(0%); } 7.5%, 33% { transform: translateX(0); } 40.5%, 100% { transform: translateX(-100%); } } @keyframes carousel { 0% { transform: translateX(100%); } 7.5%, 33% { transform: translateX(0); } 40.5%, 100% { transform: translateX(-100%); } } So the main keyframe is carousel. Since each image is on stage for 1/3 of the time, It will slide in taking 7.5% of the 20s time to do that, and stay there until the end of it's 33%, and which time it transfers out. Since each image takes 7.5% to enter, it also has to take 7.5% to leave. 33% + 7.5% = 40.5. And this almost totally works, except for one thing, until the 33% of 20s no image is fully displayed. The fix for this is a hack which shows the first image two times. We'll show it on the screen to start until it leaves and at the same time we'll show it off the screen to the right until it starts. We'll then delay the second animation one full round. Because of this, we need the first-picture keyframe, and this works great. # The keyframes for mobile @keyframes first-picture-responsive { 0% { transform: translateX(0%); } 5.5%, 25% { transform: translateX(0); } 30.5%, 100% { transform: translateX(-100%); } } @keyframes carousel-responsive { 0% { transform: translateX(100%); } 5.5%, 25% { transform: translateX(0); } 30.5%, 100% { transform: translateX(-100%); } } Don't show the forth picture on desktop .brands-carousel > picture:last-child { display: none; } On mobile we'll make some small adjustments, instead of 20s for 3, well have 27s for 4 images, each image gets 1/4 of the time on stage. @media screen and (max-width: 766px) { .brands-carousel > picture { animation: carousel-responsive 27s linear infinite; } .brands-carousel > picture:nth-child(1) { animation-name: first-picture-responsive, carousel-responsive; animation-duration: 27s; animation-iteration-count: 1, infinite; animation-delay: 0s, 27s; } .brands-carousel > picture:nth-child(2) { animation-delay: Calc(27s * .25); } .brands-carousel > picture:nth-child(3) { animation-delay: Calc(27s * .50); } .brands-carousel > picture:nth-child(4) { animation-delay: Calc(27s * .75); display: block; } } Finishing up If you want to change the time of the full animate loop replace the 20s with your new full time. To change how long a transition is on the screen reduce the 7.5% to a smaller value (and reduce the 40.5% by the same amount). To make any other change (i.e. increasing the length of time the image is static, you'll need to compute that based on 33% of the total time and then recalculate the transition percentage. Right now the static image is 7.5% to 33% that's 25.5% of 20s (5.1s) on the screen. If you want that to be 6s on the screen without reducing the transition time (7.5% * 20s = 1.5s). Calculate the total new time 6 / .255 = 23.5s for the full animation and then new transition percentages are 1.5s / 23.5s = 6.4% so the new keyframes would be # Updating all the 20s => 23.5s @keyframes carousel { 0% { transform: translateX(100%); } 6.4%, 33% { transform: translateX(0); } 39.4%, 100% { transform: translateX(-100%); } } And that's it. Here's a link to the code Print Share Comment Cite Upload Translate CITATION GOES HERE CITATION GOES HERE Select a language:
__label__pos
0.980966
0 I'm trying to add a function to my site that will count the number of images attached to a post, and if only one, to add a class to either that <img> or to the containing <a>. I think I'm fairly close, but don't have the PHP syntax skills to finish it up. The code I've hacked together, sourced from various threads on the topic, is: /** * * Single-image posts will receive a separate class name * */ add_filter( 'the_content', 'single_image_content_filter', 20 ); // Count images in post function single_image_content_filter( $content ) { $attachments = get_children(array('post_parent'=>$post->ID)); $imgcount = count($attachments); // If only one attachment, add a new CSS class if ( $imgcount === 1 ) { global $post; $classes = 'single-img'; // separated by spaces, e.g. 'img image-link' // check if there are already classes assigned to the anchor, and/or add one via $classes if ( preg_match('/<a.*? class=".*?">/', $content) ) { $content = preg_replace('/(<a.*? class=".*?)(".*?>)/', '$1 ' . $classes . '$2', $content); } else { $content = preg_replace('/(<a.*?)>/', '$1 class="' . $classes . '" >', $content); } return $content; } } • Can you better not use JavaScript for this? Like: $('.post img:first'); for example – pascalvgemert May 17 '14 at 19:28 • I don't want it to just target the first out of a possible many images. I want it to only add a class if only one image is found. – dmoz May 17 '14 at 19:36 • You can still do that with JS, something like: if($('.post img').length == 1) { $('.post img').addClass('some-class'); } – pascalvgemert May 17 '14 at 19:38 • Any way to get me the full code (answer), and where would be the best place to put it? – dmoz May 17 '14 at 19:42 • Not today, tomorrow I have some time. Maybe another user can work it out? – pascalvgemert May 17 '14 at 19:46 1 Using @pascalvgemert's suggestion as a starting point, I'm using the following script: jQuery(document).ready(function() { var imgCount = jQuery(".single-post #content-left p a").children("img").length; if (imgCount == 1) { jQuery("img").addClass("lone-image"); } }); Works like a charm. | improve this answer | | Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.819654
0 I can do this with StructureMap using Constructor Injection. However I cannot find a way to do this with Simple Injector. Here is some code that illustrates this (sorry for the length) I've looked at the lambda in the Register method, but can't seem to understand how to call a single application wide instance of the container to get the one instance I need. These are the object graphs I wish to construct: var bannerTalker = new LoudMouth( new ConsoleShouter(), // Implements IConsoleVoicer new ObnoxiousBannerGenerator()); var plainTalker = new TimidSpeaker( new ConsoleWhisperer()); // Implements IConsoleVoicer Here's the code: ``` c# public interface IConsoleVoicer { void SaySomething(string whatToSay); } public class ConsoleWhisperer : IConsoleVoicer { public void SaySomething(string whatToSay) { Console.WriteLine(whatToSay?.ToLower()); } } public class ConsoleShouter : IConsoleVoicer { public void SaySomething(string whatToSay) { Console.WriteLine(whatToSay?.ToUpper()); } } public interface IBannerGenerator { string GetBanner(); } public class ObnoxiousBannerGenerator : IBannerGenerator { public string GetBanner() { return "OBNOXIOUS"; } } public interface IBannerTalker { void SayWithBanner(string somethingToSay); } public class LoudMouth : IBannerTalker { private IConsoleVoicer Voicer { get; set; } private IBannerGenerator BannerGenerator { get; set; } public LoudMouth( IConsoleVoicer concoleVoicer, IBannerGenerator bannerGenerator) { Voicer = concoleVoicer; BannerGenerator = bannerGenerator; } public void SayWithBanner(string somethingToSay) { Voicer.SaySomething(string.Format("{0}:{1}", BannerGenerator.GetBanner(), somethingToSay)); } } public interface IPlainTalker { void SayIt(string somethingToSay); } public class TimidSpeaker : IPlainTalker { private IConsoleVoicer Voicer { get; set; } public TimidSpeaker(IConsoleVoicer concoleVoicer) { Voicer = concoleVoicer; } public void SayIt(string somethingToSay) { Voicer.SaySomething(somethingToSay); } } And this is what I've tried: static void Main(string[] args) { var container = new Container(); container.Register<IBannerGenerator, ObnoxiousBannerGenerator>(); container.Register<IPlainTalker, TimidSpeaker>(); container.Register<IBannerTalker, LoudMouth>(); //HERE IS THE DILEMMA! How do I assign // to IBannerTalker a A LoudMouth contructed with a ConsoleShouter, // and to IPlainTalkerTalker a A TimidSpeaker contructed with a ConsoleWhisperer //container.Register<IConsoleVoicer, ConsoleShouter>(); container.Register<IConsoleVoicer, ConsoleWhisperer>(); var bannerTalker = container.GetInstance<IBannerTalker>(); var plainTalker = container.GetInstance<IPlainTalker>(); bannerTalker.SayWithBanner("i am a jerk"); plainTalker.SayIt("people like me"); } 0 Ric .Net is right in pointing you at the RegisterConditional methods. The following registrations complete your quest: container.Register<IBannerGenerator, ObnoxiousBannerGenerator>(); container.Register<IPlainTalker, TimidSpeaker>(); container.Register<IBannerTalker, LoudMouth>(); container.RegisterConditional<IConsoleVoicer, ConsoleShouter>( c => c.Consumer.ImplementationType == typeof(LoudMouth)); container.RegisterConditional<IConsoleVoicer, ConsoleWhisperer>( c => c.Consumer.ImplementationType == typeof(TimidSpeaker)); • Disco! This worked. I was unaware of the RegisterConditional method. – Kerry Thomas Jun 10 at 13:23 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.985132
node连接mysql出错 发布于 9 年前 作者 zhangking 9599 次浏览 最后一次编辑是 5 年前 我的代码 var Client = require(‘mysql’).Client; var client = new Client(); client.host = ‘localhost’; client.port = 3306; client.user = ‘root’; client.password = ‘123456’; client.database=‘test1’; query(client); function query(client){ client.query( ‘select * from user’, function(err,res,fields){ console.log(res); client.end(); } ); }; 报错 /home/king/node/mysql.js:2 var client = new Client(); ^ TypeError: undefined is not a function at Object.<anonymous> (/home/king/node/mysql.js:2:14) at Module._compile (module.js:456:26) at Object.Module._extensions…js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) at startup (node.js:119:16) at node.js:901:3 这是mysql 模块没加载么,可是我的目录下已经有啦含mysql的node_moudle啦,求解惑 2 回复 改成这样试试 var Client = require('mysql').createConnection({ host:Db.host, user:Db.user, password:Db.password, database: Db.database }); 可以啦,谢谢,这个方法是直接返回了一个实例化对象么 回到顶部
__label__pos
0.599984
How To Structure A Lotto Database To Enable Complex SQL Queries [ad_1] Most lotto researchers know that large numbers of lottery combinations create a major challenge in developing the right structure of database to cope with complex queries. An added problem is in developing systems to analyse hot numbers from previous results. For example, restricting the value of prime numbers and identifying consecutive balls might require different database structures as it may not be practical to identify prime numbers through the simplicity of SQL code. This is because although lines will have different numbers, identical data definitions will apply. How your database is designed can either enhance or inhibit your ability to develop complex SQL queries on lottery data. The Prime Lotto System As An Example Of Querying A Database The popular “Prime system”, which comprises 2 primes, 1 non prime odd and 3 even numbers provides an excellent case study. In my SQL Server database, the structure of the prime system combinations is like this: Prime,Prime, Non-prime odd,Even,Even,Even Example: 2,5,9,4,8,28 The table holds nearly 600,000 possible combinations and is efficient for identifying and restricting each number type. For example, to set the second prime to either 7 or 29, the SQL query would look like this: n2=7 or n2=29 But what if I wanted to ensure the first number was 6? The problem is that 6 would never be the first ball as it is not a prime number. Making Your Lotto Queries More Flexible By Creating A New Table One solution is to create a second table with the structure you need. In this case the example line “2,5,9,4,8,28” becomes “2,4,5,8,9,28”. This means more complex queries can be defined. This approach is simply looking at the same data from a different angle. • Restricting number groupings such as “1,3,4” • Consecutive prime and non-prime numbers • Spread of numbers across different decile groups I’ve now got two separate data files that structure the numbers in separate ways. The new structure enables a more flexible approach with an option of using one or both tables in SQL queries. It should be possible to write queries that will search all the data for lines that meet more flexible and comprehensive parameters. Summary This article has introduced the concept of separate data tables to enable more flexible SQL queries on lottery data. With a little thought and original thinking complex lotto research studies can be developed and deployed. How To Structure A Lotto Database To Enable Complex SQL Queries Leave a Reply Your email address will not be published. Required fields are marked * Scroll to top Pin It on Pinterest
__label__pos
0.985137
DeFi 10122 September 2021 Zero-Knowledge Proofs In DeFi Explained Zero-Knowledge Proofs Summary • Zero-knowledge proof or ZK protocol is a type of verification procedure • Zero-Knowledge proof is used to prove digital asset ownership • Zero-knowledge proofs are common with modern cryptographers Zero-knowledge proof or ZK protocol is a type of verification procedure between a verifier and a prover. For it to work, the prover is supposed to prove to the verifier that they are the owner of a certain digital asset by providing a piece of information only known between them and the verifier. They do this without revealing the full information, like a few letters or words in a sentence. Zero-knowledge proofs have become common in modern cryptographers to boost the level of security and privacy in DeFi platforms. For a verifications method to be termed a ZK proof, it needs to meet the basic requirements of soundness and completeness. As a scalable blockchain, avalanche has the potential to achieve up to 4500 transactions throughout per second while maintain its decentralized nature. Kibet Elikana Write a comment
__label__pos
0.977915
| How to Make Another Font Default in Apple Notes The Apple Notes app is known for its simplicity and ease of use. One of the often-overlooked features of this simplicity is the default font used in the application. In today’s blog post, we’ll explore what font Apple Notes uses by default and how to set another font as your default. What Font is Used as Default? The default font in Apple Notes varies slightly depending on the device you’re using. For most Mac users, the default font is generally “Helvetica” or “San Francisco,” whereas on iOS devices, the default is usually “San Francisco.” These fonts are chosen for their readability and clean lines, making them excellent choices for note-taking. Why Does the Default Font Matter? You might wonder why you should care about the default font. Here are a few reasons: • Readability: A good font makes your notes easier to read. • Aesthetics: The default font sets the tone for the overall look of your notes. • Consistency: Using the same font across all notes provides a unified, professional appearance. How to Make Another Font Default Unfortunately, Apple Notes doesn’t provide a straightforward way to change the default font within the app itself. However, there are some workarounds: • Create a Template: First, create a new note with the desired font. • Duplicate: Whenever you need a new note, duplicate this template instead of creating a new one. The default font in Apple Notes is usually “San Francisco” on iOS devices and either “San Francisco” or “Helvetica” on Mac. While the app doesn’t offer an easy way to change the default font, workarounds exist, such as using templates on Mac or automations on iOS. Understanding and possibly changing the default font can significantly enhance your note-taking experience. Similar Posts Leave a Reply
__label__pos
0.696385
Методы оптимизации ПО для современных микроконтроллеров - Теория - Shelek Несмотря на то, что современные микропроцессоры и микроконтроллеры имеют все более высокую вычислительную мощность, процесс оптимизации остается таким же необходимым, как и прежде. Те задачи, которые раньше выполнялись за несколько часов, теперь выполняются за несколько микросекунд. Часто функции, написанные на языке высокого уровня, выполняются настолько быстро, что для них трудно определить какие-то временные рамки. Однако в результате выполнения большого количества вызовов той или иной функции незначительные изменения в скорости выполнения каждой из них становятся заметными. Очевидно, что необходима оптимизация программ, особенно разрабатываемых для применения во встроенных системах. Приложения для встроенных систем часто более требовательны к производительности процессора, в плане скорости, потребления и использования памяти, чем какие-либо другие типы приложений. Например, приложения общего назначения, обычно предъявляют требования к скорости выполнения в среднем случае, но, скажем, в системе цифровой обработки сигналов реального времени, требования к производительности предъявляются в каждом отдельном случае. Если программное обеспечение не достаточно быстро, оно считается неисправным. Даже если требования реального времени не критичны, оптимизация программного обеспечения позволит разработчику выбрать более медленный или дешевый процессор, предоставляя конкурентное преимущество на рынке. В последнее время при разработке ПО для встраиваемых систем все чаще используется очень гибкий инструмент - язык С. Эффективность реализации алгоритма на С, влияет на эффективность кода сгенерированного компилятором. Существует большое количество наиболее общих и часто используемых методов оптимизации кода программного обеспечения, которые позволяют немного поправить ваш код и ускорить его выполнение. Применение методов специфичных для данного процессора или алгоритма позволит добиться более существенных результатов. Попытками оптимизации на данном уровне можно добиться определенного прироста в скорости выполнения, однако данные методы менее эффективны, чем выбор наилучшего подходящего алгоритма или удаление необязательных циклов. Память может оказаться узким местом в архитектуре встроенной системы. Данную проблему можно решить, разместив наиболее часто используемые данные во внутренней, более быстрой памяти, а оставшиеся во внешней, более медленной. Самая быстрая память, это регистры, расположенные на кристалле процессора. Их всегда недостаточно, правильное распределение сильно влияет на общую производительность. Следующая по быстроте это кэш-память. В ней размещаются данные или команды, которые процессор предполагает использовать в следующие ближайшие моменты времени, таким образом, повышается к ним скорость доступа. Кэширование данных и команд, оказывает наибольшее воздействие на эффективность выполнения программы. Процесс заполнения внутренней памяти данными из внешней памяти требует большого времени. Удалось оценить, что каждый кэш-промах может привести к потере 10 20 тактов. Промах во внешней кэш-памяти ведет к потере, 20-40 циклов, а ошибка, вызванная отсутствием страницы, приводит к потере миллиона команд. Первое, что необходимо рассмотреть при оптимизации доступа к памяти это локальность обращений. На нее можно воздействовать изменением порядка, в котором осуществляется доступ, и размещаются объекты в памяти, или путем группировки всех часто используемых структур данных. Предположим, у нас имеется массив целых значений размерности 1024Х1024, и мы собираемся подсчитать сумму значений в столбце, превышающих максимальную величину: Код: sum = 0; for (i=0; i1024; i++) { if (array[i][K] MAX_VAL) sum += array[i][K]; } В случае, если кэширование отсутствует, будет сгенерированна последовательность из 1024 циклов чтения перемешанных с выборкой команд (подразумевается отсутствие кэш-памяти команд). В том случае, если производится кэширование, каждое новое чтение будет приводить к кэш-промаху, запуская цикл ее заполнения из памяти. В большинстве случаев общее время выполнения суммирования велико. Подсчет суммы значений в строке, превышающих максимальную величину: Код: sum = 0; for (i=0; i1024; i++) { if (array[K][i] MAX_VAL) sum += array[K][i]; } Этот кусок кода будет выполняться более быстро за счет эффекта кэширования. В любом случае, если кэш-память достаточно велика, чтобы разместить в ней весь массив данных, мы получим сходное увеличение производительности и при доступе к строкам и при доступе к столбцам массива. Если кэш слишком мала, то она может привести к уменьшению производительности. Поэтому очень важно использовать структуры данных с высокой локальностью обращений. Например, динамические связанные списки, могут уменьшить производительность программы, когда вы производите поиск данных в нем, или перемещаетесь к концу списка. Реализация списка на основе массива, может быть лучше и намного быстрее из-за лучшего кэширования, даже если учесть трудность изменения его размеров. Не следует полагать, что память под массив, выделяемая функцией malloc, будет размещена вплотную друг к другу при каждом последовательном вызове. Размещение одной сплошной большой области, должно быть осуществлено самостоятельно. Еще одним фактором, влияющим на эффективность работы с памятью, является то, каким способом производится обращения к каждому элементу массива. Наиболее часто работа с массивами, производится внутри цикла. При этом в качестве управления циклом используют подсчет индекса до определенной величины. Этот же индекс используется для доступа к данным в одном или сразу нескольких массивах: Код: typedef struct { unsigned int a; unsigned int b; unsigned char c; }str_t; str_t sum[20]; str_t array1[20]; str_t array2[20]; for (ind = 0; ind 20; ind++) { sum[ind].a = array1[ind].a + array2[ind].a; } Время от времени и обычно в целях отличных от прохода через массив данных при помощи циклов, применяют указатели вместо оператора доступа к элементу массива. К элементам любого массива можно обратится через указатели. В итоге не трудно преобразовать исходный код, построенный на операторах доступа к элементу массива на тот, что базируется на работе с указателями: Код: for (ind = 0; ind 20; ++ind) { sum-a = array1-a + array2-a; ++array1; ++array2; ++sum; } Различия в этих двух фрагментах исходного кода не привлекают должного внимания и часто остаются не замеченными. Однако их эффективность значительно различается. Работа через указатели эффективнее. Многие компиляторы сгенерируют код, примерно, похожий на следующий: Код: //for (ind = 0; ind 20; ind++) MOV.W #0x0, R9 //{ // sum[ind].a=array1[ind].a+array2[ind].a; ??main_0: MOV.W R9, R12 MOV.W #0x6, R14 CALL #?Mul16 MOV.W array1(R12), R15 ADD.W array2(R12), R15 MOV.W R15, sum(R12) //} ADD.W #0x1, R9 CMP.W #0x14, R9 JNC ??main_0 Код: //for (t = 0; t count; t++) MOV.B #0x14, R14 //{ // sum-a = array1-a + array2-a; ??main_1: MOV.W @R10, R15 ADD.W @R11, R15 MOV.W R15, 0(R8) // ++array1; ADD.W #0x6, R10 // ++array2; ADD.W #0x6, R11 // ++sum; ADD.W #0x6, R8 //} ADD.B #0xff, R14 JNE ??main_1 При каждом обращении к элементу массива, индекс цикла переводится в смещение относительно его начала и после этого добавляется к начальному адресу массива. при каждом обращении к его элементу. В случае, если размер элемента массива равен степени двойки, преобразование индекс-смещение простое. Чрезмерные накладные расходы будут появляться в том случае, если размер элемента массива не равен степени двойки. В этом случае при преобразовании будет вызываться команда умножения, либо специальная функция как в случае с некоторыми микроконтроллерами MSP430. Доступ к массиву через указатели лишен данного недостатка. В данном случае указатель просто увеличивается постоянно, на величину 6 байт. При этом не имеет значения размер элемента массива, и тот факт, поддерживает ли микроконтроллер операцию умножения на аппаратном уровне. Благодарю за внимание, продолжение следует!.. • Posted on 31.01.2010 22:06 • Просмотры: 1203
__label__pos
0.617184
Markdown 语法 基础语法 Halo 是用的 Markdown 解析器为 flexmark-java,基于 CommonMark (spec 0.28) 标准开发,语法参考:https://spec.commonmark.org/0.28/ 代码块 ```language 代码块 ``` 其中,language 为必填,如果不填写,很可能主题的代码高亮插件无法识别代码的语言,导致样式异常。举几个例子: ```java public static void main(String[] args){ System.out.println("Hello World!"); } ``` ```javascript console.log("Hello World!") ``` 自动链接 支持自动将一个链接解析为可点击的格式,如下: https://halo.run 将被解析为: <a href="https://halo.run">https://halo.run</a> Emoji 支持将 Emoji 的文字形式转化为图片形式,如下: :100: 将被解析为: 💯 更多 Emoji 表情可访问:https://emoji.svend.cc 数学公式 行内公式: $a \ne 0$ 段落公式: $$ x = {-b \pm \sqrt{b^2-4ac} \over 2a}. $$ Q&A: Q:编辑器可以显示公式,为啥发布之后前台看不了?你这不是瞎写吗? A:不是!你需要知道的是,并不是所有主题都支持显示公式。这时候,你就需要自己添加解析插件了。 Q:那是要我改代码吗?改 Halo 还是主题?这我也不会啊?那怎么办?你能帮帮我吗? A:别老想去改代码了。添加的方法很简单: 首先,登陆到后台,进入 系统 -> 博客设置 -> 其他设置。将下面的代码复制到 自定义内容页面 head <script src="//cdn.jsdelivr.net/npm/[email protected]/unpacked/MathJax.js?config=TeX-MML-AM_CHTML" defer></script> <script> document.addEventListener('DOMContentLoaded', function () { MathJax.Hub.Config({ 'HTML-CSS': { matchFontHeight: false }, SVG: { matchFontHeight: false }, CommonHTML: { matchFontHeight: false }, tex2jax: { inlineMath: [ ['$','$'], ['\\(','\\)'] ], displayMath: [["$$", "$$"], ["\\[", "\\]"]] } }); }); </script> 图表 饼图: ```mermaid pie title NETFLIX "Time spent looking for movie" : 90 "Time spent watching it" : 10 ``` 更多用法查看:https://mermaidjs.github.io/#/ Q&A: Q:同上,这也是编辑器可显示,前台显示不了啊? A:添加插件的方法和上面的一样: 首先,登陆到后台,进入 系统 -> 博客设置 -> 其他设置。将下面的代码复制到 自定义内容页面 head <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/mermaid.min.js"></script> 短连接 Halo 内置一些短连接以更好地支持一些 HTML 语法,但是编辑器目前并不能解析,只能发布之后才可以看到效果,如下: 网易云音乐 [music:id] 示例: [music:32507038] 将被解析为: <iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width=330 height=86 src="//music.163.com/outchain/player?type=2&id=32507038&auto=1&height=66"></iframe> 哔哩哔哩动画 [bilibili:aid,width,height] 示例: [bilibili:65898131,256,256] 将被解析为: <iframe height="256" width="256" src="//player.bilibili.com/player.html?aid=65898131" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
__label__pos
0.979674
19.2.303. MPI_Request_get_status MPI_Request_get_status - Access information associated with a request without freeing the request. 19.2.303.1. SYNTAX 19.2.303.1.1. C Syntax #include <mpi.h> int MPI_Request_get_status(MPI_Request request, int *flag, MPI_Status *status) 19.2.303.1.2. Fortran Syntax USE MPI ! or the older form: INCLUDE 'mpif.h' MPI_REQUEST_GET_STATUS(REQUEST, FLAG, STATUS, IERROR) INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR LOGICAL FLAG 19.2.303.1.3. Fortran 2008 Syntax USE mpi_f08 MPI_Request_get_status(request, flag, status, ierror) TYPE(MPI_Request), INTENT(IN) :: request LOGICAL, INTENT(OUT) :: flag TYPE(MPI_Status) :: status INTEGER, OPTIONAL, INTENT(OUT) :: ierror 19.2.303.2. INPUT PARAMETER • request: Communication request (handle). 19.2.303.3. OUTPUT PARAMETERS • flag: Boolean flag, same as from MPI_Test (logical). • status: MPI_Status object if flag is true (status). 19.2.303.4. DESCRIPTION MPI_Request_get_status sets flag = true if the operation is complete or sets flag = false if it is not complete. If the operation is complete, it returns in status the request status. It does not deallocate or deactivate the request; a subsequent call to test, wait, or free should be executed with that request. If your application does not need to examine the status field, you can save resources by using the predefined constant MPI_STATUS_IGNORE as a special value for the status argument. 19.2.303.5. ERRORS Almost all MPI routines return an error value; C routines as the return result of the function and Fortran routines in the last argument. Before the error value is returned, the current MPI error handler associated with the communication object (e.g., communicator, window, file) is called. If no communication object is associated with the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error handler. When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread, after MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial error handler. The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions. Open MPI includes three predefined error handlers that can be used: • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes. • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if MPI_Abort was called on that communicator. If called on a window or file, acts as if MPI_Abort was called on a communicator containing the group of processes in the corresponding window or file. If called on a session, aborts only the local process. • MPI_ERRORS_RETURN Returns an error code to the application. MPI applications can also implement their own error handlers by calling: Note that MPI does not guarantee that an MPI program can continue past an error. See the MPI man page for a full list of MPI error codes. See the Error Handling section of the MPI-3 standard for more information.
__label__pos
0.775693
can you play xbox on an imac? [Answer] 2022 can you play xbox on an imac? – If you have difficulty or question the problem. You are on the right page. On this page yoosklondonsummit.com will provide information and answers taken from various sources regarding answers to can you play xbox on an imac? : Can I play Xbox on my imac? Yes, you can play Xbox on your Mac using an emulator. There are a few different emulators available, but the best one to use is called Boot Camp. Boot Camp is a program that comes pre-installed on Macs that allows you to run Windows on your Mac. Once you have installed Windows, you can then install the Xbox emulator and start playing! Can you use an old imac as a monitor for Xbox? Yes, you can use an old imac as a monitor for Xbox. You will need to connect the imac to the Xbox using an HDMI cable. Can you use an iMac as a gaming monitor? Yes, you can use an iMac as a gaming monitor. The iMac has a powerful graphics card that can handle most games. However, if you want to get the best gaming experience, you may want to consider using a dedicated gaming monitor. How do I play Xbox on my Mac with HDMI? First, you’ll need an Xbox One and a Mac with an HDMI port. Then, connect the Xbox One to the Mac using an HDMI cable. Finally, open the Xbox app on the Mac and sign in with your Microsoft account. Can you connect Xbox to 2011 iMac? Yes, you can connect an Xbox to a 2011 iMac. To do so, you’ll need to use an HDMI cable to connect the two devices. Can I use my iMac screen as a monitor? Yes, you can use your iMac screen as a monitor. To do so, go to System Preferences and click on Display. Then, click on the tab that says “Arrangement.” You’ll see a list of all of the displays that are connected to your computer. Click on the box next to the name of the display that you want to use as your main display. Can iMac 2021 be used as a monitor? Yes, you can use your iMac as a monitor. To do so, go to System Preferences and click on Displays. Under the Arrangement tab, you’ll see an option that says “Mirror displays.” Check this box to make your iMac show the same thing as your other monitor. Can I use my 27-inch iMac as a monitor? Yes, you can use your 27-inch iMac as a monitor. To do so, connect the computer to an external display using the appropriate cables. Once the display is connected, press and hold the Option key on your keyboard and click the System Preferences icon in the Dock. Select Displays from the menu, then choose the Arrangement tab. Drag the iMac’s display to the left or right of the external display. How much Hz does a iMac have? The frequency of an iMac’s display is 60 Hz. Can you use a 24-inch iMac as a monitor? Yes, you can use a 24-inch iMac as a monitor. To do so, you’ll need to connect the computer to another device, such as a desktop or laptop computer, using a cable. Once the connection is established, you can open the System Preferences app and select “Displays” to adjust the settings for your new monitor. Can I use my 2015 iMac as a monitor? Yes, you can use your 2015 iMac as a monitor. To do so, you’ll need to connect your iMac to another computer and use the “Duplicate Display” option in the System Preferences menu. Can you use a 2017 iMac as a monitor? Yes, you can use a 2017 iMac as a monitor. The only thing you need is an adapter to convert the lightning connector to a standard HDMI cable. Can I use my iMac 21.5 2017 as a monitor? Yes, you can use your iMac 21.5 2017 as a monitor. To do so, you’ll need to connect the computer to another device, such as a laptop, using a cable. Once the devices are connected, you can open the System Preferences app and click on “Display.” From there, you’ll be able to select your iMac as the main display. Does my iMac have Target display Mode? Yes, your iMac has target display mode. To use it, connect your iMac to another display using an adapter like the Apple Mini DisplayPort to DVI Adapter. Hold down the Option key and click the Displays button in the System Preferences window. Select your external display from the list of displays. Can I use my old iMac as a second screen? Yes, you can use your old iMac as a second screen. To do this, you’ll need to connect your iMac to your computer using a Thunderbolt cable. Once you’ve done this, open the System Preferences app and select “Displays.” From here, you should be able to select your iMac as a secondary display. How about our explanation above regarding can you play xbox on an imac?? Hopefully the answers above can help you solve the problem. Thank You About yoosklondonsummit Check Also how do i permanently delete my facebook account on jio mobile? [Answer] 2022 how do i permanently delete my facebook account on jio mobile? – If you have …
__label__pos
0.996282
/* * COPYRIGHT: See COPYING in the top level directory * PROJECT: ReactOS NT Library * FILE: dll/ntdll/dispatch/dispatch.c * PURPOSE: User-Mode NT Dispatchers * PROGRAMERS: Alex Ionescu ([email protected]) * David Welch */ /* INCLUDES *****************************************************************/ #include #define NDEBUG #include typedef NTSTATUS (NTAPI *USER_CALL)(PVOID Argument, ULONG ArgumentLength); /* FUNCTIONS ****************************************************************/ /* * @implemented */ VOID NTAPI KiUserExceptionDispatcher(PEXCEPTION_RECORD ExceptionRecord, PCONTEXT Context) { EXCEPTION_RECORD NestedExceptionRecord; NTSTATUS Status; /* Dispatch the exception and check the result */ if (RtlDispatchException(ExceptionRecord, Context)) { /* Continue executing */ Status = NtContinue(Context, FALSE); } else { /* Raise an exception */ Status = NtRaiseException(ExceptionRecord, Context, FALSE); } /* Setup the Exception record */ NestedExceptionRecord.ExceptionCode = Status; NestedExceptionRecord.ExceptionFlags = EXCEPTION_NONCONTINUABLE; NestedExceptionRecord.ExceptionRecord = ExceptionRecord; NestedExceptionRecord.NumberParameters = Status; /* Raise the exception */ RtlRaiseException(&NestedExceptionRecord); } /* * @implemented */ VOID NTAPI KiRaiseUserExceptionDispatcher(VOID) { EXCEPTION_RECORD ExceptionRecord; /* Setup the exception record */ ExceptionRecord.ExceptionCode = ((PTEB)NtCurrentTeb())->ExceptionCode; ExceptionRecord.ExceptionFlags = 0; ExceptionRecord.ExceptionRecord = NULL; ExceptionRecord.NumberParameters = 0; /* Raise the exception */ RtlRaiseException(&ExceptionRecord); } /* * @implemented */ VOID NTAPI KiUserCallbackDispatcher(ULONG Index, PVOID Argument, ULONG ArgumentLength) { /* Return with the result of the callback function */ USER_CALL *KernelCallbackTable = NtCurrentPeb()->KernelCallbackTable; ZwCallbackReturn(NULL, 0, KernelCallbackTable[Index](Argument, ArgumentLength)); }
__label__pos
0.998005
  Author: Lawrence Albert Pardo-Ilao - Naga City, Philippines Unlock For Us Quick Tip: A Windows Vista Power Button that actually Turns off your PC A very simple concept that makes us ask a question, Why Microsoft set the default to turn to sleep instead of shutting down? powercfg Here's how: 1. Open the Run Window. If you can't see the option in your start menu, just press Windows Key+R. 2. Type cmd and press enter > enter powercfg.cpl,1 without spaces, Scroll-down and find the option "Startmenu power button" and Finally, change the options to Shut down as shown in the picture above. Easy? Enjoy! Quick Tips: How to Open/Load .ISO Files in Windows Vista/XP Secure logon for Windows XP and Vista Hide your computer in the Network Partitioning your Drive in Windows XP without destroying your files 0 Comments:   © UnlockForUs 2007-2021| Blogger| Google.com | License Agreement
__label__pos
0.561551
Ejemplos Ejemplo #1 Crear un fichero Zip <?php $zip  = new ZipArchive(); $filename "./test112.zip"; if ( $zip->open($filenameZipArchive::CREATE)!==TRUE) {     exit( "cannot open <$filename>\n"); } $zip->addFromString("testfilephp.txt" time(), "#1 Esto es una cadena de prueba añadida como  testfilephp.txt.\n"); $zip->addFromString("testfilephp2.txt" time(), "#2 Esto es una cadena de prueba añadida como testfilephp2.txt.\n"); $zip->addFile($thisdir "/too.php","/testfromfile.php"); echo  "numficheros: " $zip->numFiles "\n"; echo  "estado:" $zip->status "\n"; $zip->close(); ?> Ejemplo #2 Volcar la información del archivo y listarlos <?php $za  = new ZipArchive(); $za->open('test_with_comment.zip'); print_r($za); var_dump($za); echo  "numFicheros: " $za->numFiles "\n"; echo  "estado: " $za->status  "\n"; echo  "estadosSis: " $za->statusSys "\n"; echo  "nombreFichero: " $za->filename "\n"; echo  "comentario: " $za->comment "\n"; for ( $i=0$i<$za->numFiles;$i++) {     echo  "index: $i\n";      print_r($za->statIndex($i)); } echo  "numFichero:" $za->numFiles "\n"; ?> Ejemplo #3 Usando contenedor Zip, leer meta info de OpenOffice <?php $reader  = new XMLReader(); $reader->open('zip://' dirname(__FILE__) . '/test.odt#meta.xml'); $odt_meta = array(); while ( $reader->read()) {     if ( $reader->nodeType == XMLREADER::ELEMENT) {          $elm $reader->name;     } else {         if ( $reader->nodeType == XMLREADER::END_ELEMENT && $reader->name == 'office:meta') {             break;         }         if (! trim($reader->value)) {             continue;         }          $odt_meta[$elm] = $reader->value;     } } print_r($odt_meta); ?> Este ejemplo utiliza la antigua API (PHP 4), abre un fichero ZIP, lee cada fichero del archivo y imprime su contenido. El archivotest2.zip usado en este ejmplo es uno de los archivos de prueba la fuente de distribución de ZZIPlib. Ejemplo #4 Ejemplo del uso de Zip <?php $zip  zip_open("/tmp/test2.zip"); if ( $zip) {     while ( $zip_entry zip_read($zip)) {         echo  "Nombre:               " zip_entry_name($zip_entry) . "\n";         echo  "Tamaño actual del fichero:    " zip_entry_filesize($zip_entry) . "\n";         echo  "Tamaño comprimido:    " zip_entry_compressedsize($zip_entry) . "\n";         echo  "Método de compresión: " zip_entry_compressionmethod($zip_entry) . "\n";         if ( zip_entry_open($zip$zip_entry"r")) {             echo  "Contenido del fichero:\n";              $buf zip_entry_read($zip_entryzip_entry_filesize($zip_entry));             echo  "$buf\n";              zip_entry_close($zip_entry);         }         echo  "\n";     }      zip_close($zip); } ?> add a note add a note User Contributed Notes 5 notes up 5 info at peterhofer dot ch 6 years ago All these examples will not work if the php script has no write access within the folder. Although you may say this is obvious, I found that in this case, $zip->open("name", ZIPARCHIVE::CREATE) doesn't return an error as it might not try to access the file system but rather allocates memory. It is only $zip->close() that returns the error. This might cause you seeking at the wrong end. up 3 webmaster @ anonymous 1 year ago you can use this code for reading JAR files (java archives) JAR files use the same ZIP format, so can be easily read $zf = zip_open(realpath('D:/lucene/allinone/lucene-core.jar')); $i=1; while($zf && $ze = zip_read($zf)) {     $zi[$i]['zip entry name']= zip_entry_name($ze);     $zi[$i]['zip entry filesize']= zip_entry_filesize($ze);     $zi[$i]['zip entry compressed size']= zip_entry_compressedsize($ze);     $zi[$i]['zip entry compression method']= zip_entry_compressionmethod($ze);     $zi[$i]['zip entry open status'] = zip_entry_open($zf,$ze);     //$zi[$i]['zip entry file contents'] = zip_entry_read($ze,100);     $i++; } print_r($zi); zip_close($zf); up 4 geandjay at gmail dot com 6 years ago <?php         $zip = new ZipArchive;     $zip->open('teste.zip');     $zip->extractTo('./');     $zip->close();         echo "Ok!"; ?> up 0 Stefano Di Paolo 6 years ago 1) If you want to add files to a ZIP archive but you don't know if the ZiP file exists or not, you MUST check: this changes the way you open it !. 2) you can not append multiple flags, can use only one (or none). If the zip does not exists, open it with: $ziph->open($archiveFile, ZIPARCHIVE::CM_PKWARE_IMPLODE) (or a different compression method) If the zip already exists, open it with: $ziph->open($archiveFile) or $ziph->open($archiveFile, ZIPARCHIVE::CHECKCONS) Example: make backup files every hour and ZIP them all in a daily ZIP archive, so you want to get only one ZIP per day, each ZIP containing 24 files: <?php   function archivebackup($archiveFile, $file, &$errMsg)   {     $ziph = new ZipArchive();     if( file_exists($archiveFile))     {       if( $ziph->open($archiveFile, ZIPARCHIVE::CHECKCONS) !== TRUE)       {         $errMsg = "Unable to Open $archiveFile";         return 1;       }     }     else     {       if( $ziph->open($archiveFile, ZIPARCHIVE::CM_PKWARE_IMPLODE) !== TRUE)       {         $errMsg = "Could not Create $archiveFile";         return 1;       }     }     if(! $ziph->addFile($file))     {       $errMsg = "error archiving $file in $archiveFile";       return 2;     }     $ziph->close();         return 0;   } ?> up -8 Jonathan 5 years ago If you find your zip file not being created make sure every file you are adding to the zip is valid.  If even one file is not available when zip->close is called then the archive will fail and your zip file won't be created. To Top
__label__pos
0.753105
Commit f00c155d authored by Stéphane Albert's avatar Stéphane Albert ENH: Moved generated quicklook image-file into dataset directory; Added... ENH: Moved generated quicklook image-file into dataset directory; Added DatasetModel::GetDirectory() accessor; Added QuicklookModel::GetImageModel<>() accessors; Added VectorImageModel::GetDatasetModel() accessor; Remove unnecessary typedefs in QuicklookModel.h. parent 8d123df9 ......@@ -97,7 +97,7 @@ DatasetModel ::ImportImage( const QString& filename , int w, int h ) { // 1. Instanciate local image model. VectorImageModel* vectorImageModel = new VectorImageModel(); VectorImageModel* vectorImageModel = new VectorImageModel( this ); vectorImageModel->setObjectName( filename ); ...... ......@@ -130,6 +130,10 @@ public: /** */ inline AbstractImageModel* GetSelectedImageModel(); /** */ inline const QDir& GetDirectory() const; /*-[ SIGNALS SECTION ]-----------------------------------------------------*/ // ......@@ -209,6 +213,15 @@ private slots: namespace mvd { /*****************************************************************************/ inline const QDir& DatasetModel ::GetDirectory() const { return m_Directory; } /*****************************************************************************/ inline bool ...... ......@@ -276,7 +276,7 @@ I18nApplication QDir cacheDir( homeDir ); if( !cacheDir.cd( I18nApplication::CACHE_DIR ) ) if( !cacheDir.cd( I18nApplication::CACHE_DIR_NAME ) ) throw SystemError( ToStdString( QString( "('%1')" ).arg( homeDir.filePath( I18nApplication::CACHE_DIR_NAME ) ) ......@@ -335,8 +335,6 @@ I18nApplication return true; } /*******************************************************************************/ /* SLOTS */ /*******************************************************************************/ ...... ......@@ -134,6 +134,22 @@ private slots: /*****************************************************************************/ /* INLINE SECTION */ // // Qt includes (sorted by alphabetic order) //// Must be included before system/custom includes. // // System includes (sorted by alphabetic order) // // ITK includes (sorted by alphabetic order) // // OTB includes (sorted by alphabetic order) // // Monteverdi includes (sorted by alphabetic order) namespace mvd { } // end namespace 'mvd' ...... ......@@ -52,6 +52,15 @@ namespace mvd */ /*****************************************************************************/ /* CONSTANTS */ const char* QuicklookModel::IMAGE_FILE_EXT = ".ql"; /*****************************************************************************/ /* STATIC IMPLEMENTATION SECTION */ /*****************************************************************************/ /* CLASS IMPLEMENTATION SECTION */ ......@@ -75,37 +84,44 @@ QuicklookModel { // // get the parent vector image model VectorImageModel * viModel = qobject_cast< VectorImageModel* >( parent() ); const VectorImageModel* viModel = GetImageModel< VectorImageModel >(); assert( viModel!=NULL ); // get the filename and use it to compose the quicklook filename const char* filename = viModel->GetFilename().toAscii().constData(); std::string fnameNoExt = itksys::SystemTools::GetFilenameWithoutExtension( filename ); std::string path = itksys::SystemTools::GetFilenamePath( filename ); std::string ext = itksys::SystemTools::GetFilenameExtension( filename ); const DatasetModel* datasetModel = viModel->GetDatasetModel(); assert( datasetModel!=NULL ); std::ostringstream qlfname; // Source image file information. QFileInfo imageFileInfo( viModel->GetFilename() ); if(path!="") { qlfname << path<<"/"; } // Quicklook file information. QFileInfo quicklookFileInfo( datasetModel->GetDirectory().path(), imageFileInfo.completeBaseName() + QuicklookModel::IMAGE_FILE_EXT + "." + imageFileInfo.suffix() ); qlfname<<fnameNoExt<<"_quicklook.tif"; // Quicklook filename. QString quicklookFilename( quicklookFileInfo.filePath() ); // check if the file exists if (!itksys::SystemTools::FileExists(qlfname.str().c_str())) // First time? if( !quicklookFileInfo.exists() ) { // write the file on the disk VectorImageFileWriterType::Pointer writer = VectorImageFileWriterType::New(); writer->SetFileName(qlfname.str()); writer->SetInput(viModel->ToImage()); writer->Update(); // Instanciate a quicklook file writer. VectorImageFileWriterType::Pointer fileWriter( VectorImageFileWriterType::New() ); // Write quicklook file on the disk. fileWriter->SetFileName( ToStdString( quicklookFilename ) + ".toto" ); fileWriter->SetInput( viModel->ToImage() ); fileWriter->Update(); } // reload the quicklook QString qlname(qlfname.str().c_str()); SetFilename(qlname, 512, 512); // Source stored quicklook image-file. // TODO: Remove hard-coded 512x512 px size. SetFilename( quicklookFilename, 512, 512 ); // Initialize RgbaImageModel. InitializeRgbaPipeline(); ...... ......@@ -89,11 +89,45 @@ public: public: /** Constructor */ QuicklookModel( QObject * parent =NULL ); QuicklookModel( QObject* parent =NULL ); /** Destructor */ virtual ~QuicklookModel(); /** * \brief Get the parent image-model of this quicklook image as an * AbstractImageModel. * * \return The parent image-model of this quicklook image. */ inline const AbstractImageModel* GetImageModel() const; /** * \brief Get the parent image-model of this quicklook image as an * AbstractImageModel. * * \return The parent image-model of this quicklook image. */ inline AbstractImageModel* GetImageModel(); /** * \brief Get the parent image-model of this quicklook image as a * TImageModel. * * \return The parent image-model of this quicklook image. */ template< typename TImageModel > inline const TImageModel* GetImageModel() const; /** * \brief Get the parent image-model of this quicklook image as a * TImageModel. * * \return The parent image-model of this quicklook image. */ template< typename TImageModel > inline TImageModel* GetImageModel(); /*-[ PUBLIC SLOTS SECTION ]------------------------------------------------*/ // ......@@ -127,17 +161,6 @@ protected: // // Private types. private: /** * Display type of source image (to OpenGL). */ typedef RGBAImageType DisplayImageType; // /** // * Extract filter. // */ // typedef // itk::ExtractImageFilter< SourceImageType, SourceImageType > // ExtractFilterType; // // Private methods. ......@@ -146,17 +169,60 @@ private: // // Private attributes. private: /** * \brief The quicklook image-file extension * (e.g. '/tmp/my_source_image.tif.ql'.) */ static const char* IMAGE_FILE_EXT; /*-[ PRIVATE SLOTS SECTION ]-----------------------------------------------*/ // // Slots. private slots: /** */ }; } // end namespace 'mvd' /*****************************************************************************/ /* INLINE SECTION */ namespace mvd { /*****************************************************************************/ const AbstractImageModel* QuicklookModel ::GetImageModel() const { return GetImageModel< AbstractImageModel >(); } /*****************************************************************************/ AbstractImageModel* QuicklookModel ::GetImageModel() { return GetImageModel< AbstractImageModel >(); } /*****************************************************************************/ template< typename TImageModel > const TImageModel* QuicklookModel ::GetImageModel() const { return qobject_cast< const TImageModel* >( parent() ); } /*****************************************************************************/ template< typename TImageModel > TImageModel* QuicklookModel ::GetImageModel() { return qobject_cast< TImageModel* >( parent() ); } } // end namespace 'mvd' ...... ......@@ -29,12 +29,6 @@ /*****************************************************************************/ /* INCLUDE SECTION */ // // Monteverdi includes (sorted by alphabetic order) #include "mvdColorSetupWidget.h" #include "mvdAbstractImageModel.h" #include "mvdTypes.h" // // Qt includes (sorted by alphabetic order) //// Must be included before system/custom includes. ......@@ -51,6 +45,12 @@ #include "otbRenderingImageFilter.h" #include "otbGenericRSTransform.h" // // Monteverdi includes (sorted by alphabetic order) #include "mvdColorSetupWidget.h" #include "mvdAbstractImageModel.h" #include "mvdTypes.h" /*****************************************************************************/ /* PRE-DECLARATION SECTION */ ......@@ -64,6 +64,7 @@ namespace mvd { // // Internal classes pre-declaration. class DatasetModel; /*****************************************************************************/ ......@@ -256,6 +257,20 @@ public: /** Destructor */ virtual ~VectorImageModel(); /** * \brief Get the parent DatasetModel. * * \return The parent DatasetModel. */ inline const DatasetModel* GetDatasetModel() const; /** * \brief Get the parent DatasetModel. * * \return The parent DatasetModel. */ inline DatasetModel* GetDatasetModel(); /** */ // TODO: Move into template wrapper base-class. SourceImageType::ConstPointer ToImage() const; ......@@ -526,9 +541,29 @@ private slots: /*****************************************************************************/ /* INLINE SECTION */ // // Monteverdi includes (sorted by alphabetic order) #include "mvdDatasetModel.h" namespace mvd { /*****************************************************************************/ const DatasetModel* VectorImageModel ::GetDatasetModel() const { return qobject_cast< const DatasetModel* >( parent() ); } /*****************************************************************************/ DatasetModel* VectorImageModel ::GetDatasetModel() { return qobject_cast< DatasetModel* >( parent() ); } /*****************************************************************************/ inline VectorImageModel::SourceImageType::ConstPointer ...... Markdown is supported 0% or You are about to add 0 people to the discussion. Proceed with caution. Finish editing this message first! Please register or to comment
__label__pos
0.62105
ScotlandPHP 2016 The DOMNamedNodeMap class (PHP 5, PHP 7) Class synopsis DOMNamedNodeMap implements Traversable { /* Properties */ readonly public int $length ; /* Methods */ DOMNode getNamedItem ( string $name ) DOMNode getNamedItemNS ( string $namespaceURI , string $localName ) DOMNode item ( int $index ) } Properties length The number of nodes in the map. The range of valid child node indices is 0 to length - 1 inclusive. Table of Contents add a note add a note User Contributed Notes 2 notes up 4 kendsnyder at gmail dot com 6 years ago To add to xafford's comment. When iterating a named node map collection using ->item() or using foreach, removing a attribute with DOMNode->removeAttribute() or DOMNode->removeAttributeNode() alters the collection as if it were a stack. To illustrate, the code below tries to remove all attributes from each element but only removes the first. One work around is to copy the named node map into an array before removing attributes. Using PHP 5.2.9 on Windows XP. <?php error_reporting (E_ALL); $html  = '<h1 id="h1test" class="h1test">Heading</h1>'; $html .= '<p align="left" class="ptest">Hello World</p>'; $doc = new DOMDocument(); $doc->loadHTML($html); // remove attributes from  the h1 element $h1 = $doc->getElementsByTagName('h1')->item(0); $length = $h1->attributes->length; for ( $i = 0; $i < $length; ++$i) {     $name = $h1->attributes->item($i)->name;     $h1->removeAttribute($name);     echo "h1: removed attribute `$name`<br>"; } // remove attributes from the p element $p = $doc->getElementsByTagName('p')->item(0); foreach ( $p->attributes as $name => $attrNode) {     $p->removeAttribute($name);     echo "p: removed attribute `$name`<br>"; } ?> OUTPUT: ------- h1: removed attribute `id` Notice: Trying to get property of non-object in nodemap.php on line 13 h1: removed attribute `` p: removed attribute `align` up 2 xafford 6 years ago I stumbled upon a problem with DOMNamedNodeMap. If you iterate over a DOMNamedNodeMap, representing the attributes of a DOMElement with foreach and you use DOMElement::removeAttributeNode only the first attribute will be handled. Example (not complete): <?php /* * Imagine you got a node like this: * <a onclick="alert('evil')" href="http://example.com">evil</a> * and onclick should be removed, href would not be tested. */ foreach ( $node->attributes as $attribute ) {     echo 'checking attribute ', $attribute->name, '<br />';     if ( ! in_array ( $attribute->name, $allowed_attributes ) )     {         $node->removeAttributeNode ( $attribute );     } } ?> The output would be: checking attribute onclick To Top
__label__pos
0.939756
2 I had an LG G2, running LineageOS 7.1 with enabled encryption. I took regular backups using TWRP 3 (which can deal with encryption). The phone broke. I now want to restore the backup using TWRP on a new LG G2. How can I restore the backup from the old phone using encryption onto the new phone (no encryption yet)? I tried to restore the backup as-is and it doesn't boot (black screen). Anything special that I need to do before restoring? Should I first encrypt the new phone and then restore the backup? Or do I need to strip the encryption from the backup somehow before restoring it? 2 As it turns out, TWRP/nandroid backups aren't encrypted. The encryption layer sits lower at the level of the device. You can therefore restore the backup on a new device like any other backup simply by using the restore function in TWRP. You have to re-enable encryption afterwards, of course. This also means that backup data is not encrypted and therefore must be protected in storage. I've never read this anywhere; be aware of the security implications. Edit: The reason for the above mentioned boot problem was unrelated to the backup. I had the wrong bootstack on the new phone and first needed to upgrade to the most recent stock rom. 1 There's no reason as to why the device won't turn on at all if it's of the same exact model as the backed up one. Unless if the twrp backup got corrupt and you restored ignoring the message digest check (MD5). Remember before accessing TWRP if device is encrypted, TWRP asks for decryption password to decrypt data hence you copied files after it's decryption. There's no need to even think about encryption. Just if the backup doesn't work, and I'm sure you have TWRP, grab a freshly downloaded lineage and flash it to device. It's better to have a working phone with your data lost than having a bricked one Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.569182
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I need to draw some charts to describe area-effect patterns (gaming related stuff) that come in two varieties: (1) Circular "splash" with 3-4 intensity radius values separated by color (say 1, 2, 3 in red, orange and yellow). (2) Wedge with starting angle, ending angle and radius (say -30, +30 and 5). I will pull the data from a bunch of existing data, examples below. But can I do this with Highcharts polar chart? Is there a good way to define areas like that? Can you make a range series to go around the polar axis for example? And how to make a proper wedge? area_effect: { | distance: { | | short: 0f; | | medium: 0.5f; | | long: 1.2f; | | distant: 1.2f; | }; | area_info: { | | angle_left: 0f; | | angle_right: 0f; | | radius: 1.2f; | | area_type: "Circle"; | }; | hp_damage: { | | short: 1f; | | medium: 0.4f; | | long: 0.2f; | | distant: 0.2f; | }; }; area_info: { | angle_left: -70f; | angle_right: 70f; | radius: 5f; | area_type: "Pie"; | line_length: 0f; | radius_inner: 0f; }; share|improve this question Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Browse other questions tagged or ask your own question.
__label__pos
0.999814
Skip to content Trang chủ » Uninstall Mongodb From Ubuntu: A Step-By-Step Guide Uninstall Mongodb From Ubuntu: A Step-By-Step Guide How to Uninstall MongoDB on Ubuntu 18.04 LTS Remove Mongodb From Ubuntu How to Remove MongoDB from Ubuntu: A Comprehensive Guide MongoDB is a popular document database used by many developers and organizations. However, there might come a time when you need to remove MongoDB from your Ubuntu system. Whether it’s to free up disk space or switch to a different database solution, this article will guide you through the process of completely removing MongoDB from your Ubuntu machine. We will cover the purpose of MongoDB removal, determining the MongoDB installation, stopping MongoDB services, removing the MongoDB package, deleting MongoDB data directories, uninstalling MongoDB dependencies, removing configuration files, cleaning up the system, and verifying MongoDB removal. The Purpose of MongoDB Removal: There can be various reasons why you might want to remove MongoDB from your Ubuntu system. Some of the common purposes include: 1. Switching to a different database: If you have decided to use a different database solution or no longer require a document database, you may want to remove MongoDB from your system. 2. Freeing up disk space: MongoDB can consume a significant amount of disk space due to its data storage mechanism. Removing MongoDB can help reclaim disk space. 3. Clean uninstallation: If you want to perform a clean uninstallation of MongoDB, removing all traces of the software from your system becomes necessary. Determining the MongoDB Installation: Before proceeding with the removal process, it is important to determine the installation method for MongoDB on your Ubuntu system. MongoDB can be installed using various methods, such as package managers or manual installations. If you installed MongoDB using the package manager, you can use the following command to determine the exact package name: “` dpkg –list | grep mongodb “` Stopping MongoDB Services: Before removing MongoDB, it is necessary to stop all MongoDB services running on your Ubuntu machine. This ensures that any active processes associated with MongoDB are terminated gracefully. You can use the following command to stop the MongoDB service: “` sudo systemctl stop mongodb “` Removing the MongoDB Package: Once you have stopped the MongoDB service, you can proceed with removing the MongoDB package from your system. If you installed MongoDB using a package manager like apt, you can use the following command: “` sudo apt-get remove –purge “` Replace `` with the actual package name determined in the previous step. Deleting MongoDB Data Directories: MongoDB stores its data in directories specified in the configuration files. To completely remove MongoDB, it is essential to delete these data directories. Typically, the default data directory for MongoDB is `/var/lib/mongodb`. You can use the following command to delete the data directory: “` sudo rm -r /var/lib/mongodb “` Uninstalling MongoDB Dependencies: During the installation process, MongoDB may have installed additional dependencies. To ensure a clean removal, it is recommended to uninstall these dependencies. Execute the following command to uninstall MongoDB dependencies: “` sudo apt-get autoremove “` Removing Configuration Files: MongoDB configuration files are stored in the `/etc` directory. To remove MongoDB configuration files, use the following command: “` sudo rm /etc/mongodb.conf “` Cleaning up the System: To perform a thorough removal of MongoDB, you can clean up any remaining files and directories associated with MongoDB. Execute the following commands to perform the cleanup: “` sudo rm -r /var/log/mongodb sudo rm -r /var/lib/mongodb “` Verifying MongoDB Removal: To ensure that MongoDB has been successfully removed from your Ubuntu system, you can check if the MongoDB processes and directories have been completely deleted. Use the following command to verify MongoDB removal: “` ps aux | grep -v grep | grep mongod “` If no results are displayed, MongoDB has been successfully removed. FAQs: 1. How to install MongoDB on Ubuntu? To install MongoDB on Ubuntu, you can use the package manager, apt, with the command ‘sudo apt-get install mongodb’. 2. How to install MongoDB on Linux? The installation process for MongoDB on Linux is similar to installing it on Ubuntu. You can use the package manager of your Linux distribution to install MongoDB. 3. How to install mongo shell on Ubuntu? To install the MongoDB shell on Ubuntu, you can use the command ‘sudo apt-get install -y mongodb-clients’. 4. How to remove MongoDB from CentOS 7? The process to remove MongoDB from CentOS 7 is similar to removing it from Ubuntu. You can follow the steps mentioned in this article. 5. How to install MongoDB 4.4 on Ubuntu? To install MongoDB 4.4 on Ubuntu, you can follow the official MongoDB documentation or use a package manager like apt to install the specific version. 6. How to connect to MongoDB on Ubuntu? You can connect to MongoDB on Ubuntu using the command-line interface or GUI tools like MongoDB Compass. Use the appropriate connection string or GUI tool settings to establish a connection. 7. How to check the version of MongoDB on Ubuntu? To check the version of MongoDB installed on Ubuntu, you can use the command ‘mongod –version’ in the terminal. 8. Is there a Vietnamese version of this article? Trợ lý ảo không hỗ trợ tiếng Việt. Tuy nhiên, bạn có thể sử dụng một công cụ dịch trực tuyến để dịch bài viết này sang tiếng Việt. Conclusion: Removing MongoDB from Ubuntu requires a series of steps to ensure a clean and thorough uninstallation. By following the steps outlined in this article, you can successfully remove MongoDB from your Ubuntu system, freeing up disk space and preparing for a fresh start or a switch to a different database solution. How To Uninstall Mongodb On Ubuntu 18.04 Lts How To Uninstall Mongodb Compass In Ubuntu 20.04 Using Terminal? How to Uninstall MongoDB Compass in Ubuntu 20.04 using Terminal MongoDB Compass is a powerful GUI tool that allows developers to visually explore and analyze MongoDB databases. However, there may come a time when you need to uninstall MongoDB Compass from your Ubuntu 20.04 system. In this article, we will guide you through the process of completely removing MongoDB Compass using the terminal. Uninstalling MongoDB Compass Before proceeding with the uninstallation process, it is important to ensure that you have administrative privileges on your Ubuntu system. Step 1: Access the Terminal To open the Terminal in Ubuntu 20.04, you can press “Ctrl+Alt+T” on your keyboard. Alternatively, you can open it by searching for “Terminal” in the Activities search bar. Step 2: Uninstall MongoDB Compass To completely uninstall MongoDB Compass, you first need to locate the installation directory. You can find it by running the following command in the terminal: “`bash which mongod “` This command will provide you with the path to the “mongod” executable, which is typically located in the “/usr/bin” directory. Next, you can remove MongoDB Compass by running the following command: “`bash sudo apt-get purge mongodb-compass “` This command will remove the MongoDB Compass package from your Ubuntu 20.04 system. You may be prompted to enter your password, as this command requires root access. Step 3: Remove MongoDB Data Directory By default, MongoDB Compass stores its data in the “/var/lib/mongodb” directory. To remove this data directory, you can run the following command: “`bash sudo rm -r /var/lib/mongodb “` This command will permanently delete MongoDB data, so make sure to back up any important data before executing it. Step 4: Clean Up Remaining MongoDB Files To ensure a complete uninstallation, you should also clean up any remaining MongoDB files. These files include configuration files, log files, and the MongoDB Compass configuration directory. Run the following commands to remove them: “`bash sudo rm /etc/mongod.conf sudo rm -r /var/log/mongodb sudo rm -r ~/.mongodb-compass “` These commands will remove the MongoDB configuration file, the log files, and the MongoDB Compass configuration directory, respectively. Frequently Asked Questions (FAQs) Q1: Can I reinstall MongoDB Compass after uninstallation? A1: Yes, you can reinstall MongoDB Compass after uninstalling it by following the installation instructions provided by the official MongoDB documentation. Q2: Will uninstalling MongoDB Compass affect my existing MongoDB databases? A2: No, uninstalling MongoDB Compass does not affect your existing MongoDB databases. It only removes the GUI tool, and your data will remain intact. Q3: Are there any alternative GUI tools available for MongoDB? A3: Yes, there are several alternative GUI tools available for MongoDB, such as Robo 3T, Studio 3T, and Navicat for MongoDB. You can choose the one that best suits your needs and preferences. Q4: What if the “which mongod” command does not provide a path? A4: If the “which mongod” command does not provide a path, it means that MongoDB is not installed on your system. In this case, you don’t need to uninstall MongoDB Compass, as it won’t have any functionality without MongoDB. Q5: Can I reinstall MongoDB Compass without uninstalling it completely? A5: Yes, you can reinstall MongoDB Compass without uninstalling it completely by following the installation instructions provided by the official MongoDB documentation. However, it is recommended to perform a complete uninstallation if you encounter any issues or conflicts. Conclusion Uninstalling MongoDB Compass from your Ubuntu 20.04 system using the terminal is a straightforward process. By following the steps provided in this article, you can properly remove MongoDB Compass and its associated files. Remember to use caution when executing commands and always have a backup of your data to avoid any unintentional loss. How To Remove Mongodb Completely From Centos? How to Remove MongoDB Completely from CentOS MongoDB is a popular open-source NoSQL database system that offers high performance and scalability. However, there may arise situations where you need to remove MongoDB from your CentOS server entirely. In this article, we will guide you through the step-by-step process to remove MongoDB completely from CentOS, ensuring a clean uninstallation. Additionally, we will address some frequently asked questions related to MongoDB removal. Before proceeding with the MongoDB removal process, it is important to take a backup of your databases to prevent any potential data loss. Once you have done so, carefully follow the steps outlined below to completely remove MongoDB from your CentOS system: Step 1: Stop MongoDB Services The first step is to stop any running MongoDB service on your CentOS server. Open a terminal and enter the following command: “` sudo systemctl stop mongod “` Step 2: Remove MongoDB Packages Once the MongoDB service is stopped, you can proceed to remove the MongoDB packages from your system. To do this, use the following command: “` sudo yum erase $(rpm -qa | grep mongodb-org) “` This command will remove all MongoDB packages currently installed on your CentOS system. Step 3: Remove Configuration Files To ensure a clean uninstallation, remove any remaining MongoDB configuration files. Execute the following commands to remove the configuration files: “` sudo rm -r /var/log/mongodb sudo rm -r /var/lib/mongo “` These commands will delete the log files and database files associated with MongoDB. Step 4: Remove User and Group To completely remove MongoDB, remove the MongoDB user and group from your system. Issue the following commands to remove the user and associated group: “` sudo userdel mongod sudo groupdel mongod “` Step 5: Perform Remaining Cleanup Perform a final cleanup to remove any residual MongoDB files and directories. Execute the following commands: “` sudo rm -r /etc/mongod.conf sudo rm -r /usr/lib/systemd/system/mongod.service “` This will remove the MongoDB configuration file and service file from your CentOS system. Congratulations! You have successfully removed MongoDB from your CentOS server. However, if you encounter any issues or face difficulties during the removal process, refer to the FAQs section below for further assistance. FAQs Q1: Will uninstalling MongoDB delete my database files? No, uninstalling MongoDB will not delete your database files. However, it is always recommended to take a backup of your databases before proceeding with the uninstallation process to prevent any accidental data loss. Q2: How can I verify if MongoDB has been successfully removed from my CentOS server? After following the steps mentioned above, you can verify the removal of MongoDB by executing the command `mongo`. If you receive an error stating that the command is not found, it indicates that MongoDB has been successfully removed. Q3: Can I reinstall MongoDB after removing it? Yes, you can reinstall MongoDB after a complete removal. Simply follow the installation instructions provided by MongoDB, ensuring a clean installation. Q4: Are there any more cleanup steps necessary after uninstalling MongoDB? The steps mentioned in this article cover the necessary cleanup for a complete removal of MongoDB. However, in certain cases, there might be additional files or directories related to MongoDB that need to be manually removed. If you encounter any such files or directories, it is safe to remove them. Q5: Does removing MongoDB impact other applications or services on CentOS? Removing MongoDB should not impact other applications or services on CentOS, as long as they do not have any dependencies on MongoDB. However, it is always a good practice to review the dependencies of any applications or services before removing MongoDB. In conclusion, removing MongoDB completely from CentOS requires a few simple steps to ensure a clean uninstallation. By following the steps outlined in this article, you can safely remove MongoDB from your CentOS server and resolve any related dependencies. Remember to take a backup of your databases before proceeding and refer to the provided FAQs for any additional assistance. Keywords searched by users: remove mongodb from ubuntu Install MongoDB Ubuntu, Install MongoDB-Linux, Install mongo shell Ubuntu, Remove MongoDB CentOS 7, Install mongodb 4.4 on ubuntu, Connect to MongoDB Ubuntu, Check version mongodb ubuntu, Tại MongoDB Categories: Top 80 Remove Mongodb From Ubuntu See more here: nhanvietluanvan.com Install Mongodb Ubuntu Install MongoDB on Ubuntu MongoDB is an open-source NoSQL database management system that provides high performance, scalability, and flexibility for storing and managing massive amounts of data. It is widely used by organizations to build modern applications and handle big data processing. This article provides a comprehensive guide on how to install MongoDB on Ubuntu, one of the most popular Linux distributions. Prerequisites Before proceeding with the installation, make sure your Ubuntu system meets the following prerequisites: 1. Ubuntu OS: You should have a machine running Ubuntu as your operating system. 2. Root access: Ensure you have root access or are logged in as a user with sudo privileges. 3. Update packages: It is recommended to update your system’s package list with the latest version information. Use the following command to update: “` sudo apt update “` Step 1: Import the MongoDB Repository To install the latest version of MongoDB on Ubuntu, you need to import the MongoDB repository. Follow the steps below: 1. Import the MongoDB public key: “` wget -qO – https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add – “` 2. Create a list file for MongoDB: “` echo “deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu $(lsb_release -cs)/mongodb-org/5.0 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list “` Step 2: Update Packages and Install MongoDB After importing the MongoDB repository, update the package list once again to include the newly added repository information. Use the following command: “` sudo apt update “` Next, you can install MongoDB by executing the following command: “` sudo apt install -y mongodb-org “` This command installs the MongoDB package and its related tools. The “-y” flag automatically answers “yes” to all prompts during the installation. Step 3: Start and Enable MongoDB Once the installation is complete, start the MongoDB service using the following command: “` sudo systemctl start mongod “` To enable MongoDB to start automatically on system boot, run: “` sudo systemctl enable mongod “` To check the status of the MongoDB service, type: “` sudo systemctl status mongod “` If the service is running without any issues, you should see an active (running) status. Step 4: Connect to MongoDB MongoDB connection can be established using the MongoDB shell, also known as the mongo shell. To launch the MongoDB shell, simply type the following command in your terminal: “` mongo “` If the connection is successful, you will be presented with the MongoDB shell prompt, indicating that you are now connected to the MongoDB server. Frequently Asked Questions (FAQs) Q1. How can I verify the installation of MongoDB? To verify the successful installation of MongoDB, use the following command to check the version: “` mongod –version “` Q2. What is the location of MongoDB configuration files? MongoDB configuration files are usually located in the “/etc/mongod.conf” directory. Q3. How can I stop the MongoDB service? To stop the MongoDB service, use the following command: “` sudo systemctl stop mongod “` Q4. How can I uninstall MongoDB from Ubuntu? To completely remove MongoDB from your Ubuntu system, follow these steps: 1. Stop the MongoDB service: “` sudo systemctl stop mongod “` 2. Remove MongoDB packages: “` sudo apt purge mongodb-org* “` 3. Remove MongoDB data directory: “` sudo rm -r /var/log/mongodb sudo rm -r /var/lib/mongodb “` Q5. Can I run multiple instances of MongoDB on the same Ubuntu machine? Yes, you can run multiple instances of MongoDB on the same Ubuntu machine by configuring each instance with a unique port and data directory. In conclusion, installing MongoDB on Ubuntu is a straightforward process that involves importing the MongoDB repository, updating packages, and installing the MongoDB package. By following the steps outlined in this article, you can quickly set up MongoDB on your Ubuntu system and start leveraging its powerful features for your application development and data management needs. Install Mongodb-Linux Install MongoDB-Linux MongoDB is a popular and widely used NoSQL database that offers high performance, flexibility, and scalability. It is designed to handle large amounts of data and its document-oriented model allows for easy integration with various programming languages and frameworks. Installing MongoDB on Linux is a straightforward process, and this article will guide you through the steps required to get started with MongoDB on your Linux machine. Step 1: Update System Packages Before installing MongoDB, it is recommended to update your system packages to ensure you have the latest software versions. Use the following command to update your system packages: “` sudo apt update “` Step 2: Import MongoDB Repository MongoDB is not available in the default repositories of most Linux distributions. Therefore, you need to import the MongoDB repository to ensure you can easily install and update MongoDB using the package manager. The repository can be imported by executing the following commands: “` wget -qO – https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add – “` This command will import the MongoDB repository public key. “` echo “deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list “` This command adds the MongoDB repository to the apt sources list. Step 3: Install MongoDB Now that the MongoDB repository is imported, you can proceed with installing MongoDB by running the following command: “` sudo apt install mongodb-org “` The installation process might take a few minutes, and it will also install the necessary dependencies for MongoDB. Step 4: Start MongoDB Service After the installation is complete, you need to start the MongoDB service for it to be accessible. Use the following command to start the service: “` sudo systemctl start mongod “` You can verify if MongoDB has started successfully by checking the status of the service: “` sudo systemctl status mongod “` If the output displays that the service is active and running, MongoDB is successfully installed and running on your Linux machine. Step 5: Enable MongoDB Service on Startup To ensure that MongoDB starts automatically whenever your Linux machine boots, use the following command: “` sudo systemctl enable mongod “` This will enable the MongoDB service to start on system boot. Frequently Asked Questions (FAQs): Q1: How can I access the MongoDB shell? To access the MongoDB shell, open a new terminal window and run the following command: “` mongo “` This will connect you to the MongoDB instance running on your local machine. Q2: How can I create a new MongoDB database? To create a new database in MongoDB, you need to use the `use` command followed by the desired database name. For example, to create a database named “mydatabase”, run the following command in the MongoDB shell: “` use mydatabase “` Q3: How can I create a new document in a MongoDB collection? To create a new document in a MongoDB collection, you need to use the `insertOne` or `insertMany` method. Here’s an example of how to use the `insertOne` method to insert a new document into a collection: “` db.collectionName.insertOne({ key: value, key2: value2 }) “` Replace `collectionName` with the name of your collection, and `key` and `value` with the fields and values of your document. Q4: How can I query data from a MongoDB collection? To query data from a MongoDB collection, you need to use the `find` method. Here’s an example of how to retrieve all documents from a collection named “mycollection”: “` db.mycollection.find({}) “` This will return all documents stored in the “mycollection” collection. Q5: How can I update documents in a MongoDB collection? To update documents in a MongoDB collection, you need to use the `updateOne` or `updateMany` method. Here’s an example of how to use the `updateOne` method to update a document that matches a specific condition: “` db.collectionName.updateOne({ field: value }, { $set: { field2: newValue } }) “` Replace `collectionName` with the name of your collection, `field` with the field on which you want to apply the condition, `value` with the value that matches the condition, `field2` with the field you want to update, and `newValue` with the new value. In conclusion, installing MongoDB on Linux is a relatively simple process that involves updating system packages, importing the MongoDB repository, installing MongoDB, starting the MongoDB service, and enabling the service on startup. MongoDB provides a powerful and flexible database solution for handling large amounts of data in a document-oriented way. The FAQs section covers some common questions related to using MongoDB, such as accessing the MongoDB shell, creating databases and documents, querying data, and updating documents. With these instructions and knowledge, you can start using MongoDB on your Linux machine and leverage its capabilities to build robust and scalable applications. Images related to the topic remove mongodb from ubuntu How to Uninstall MongoDB on Ubuntu 18.04 LTS How to Uninstall MongoDB on Ubuntu 18.04 LTS Found 20 images related to remove mongodb from ubuntu theme How To Install And Configure Mongodb In Ubuntu? - Geeksforgeeks How To Install And Configure Mongodb In Ubuntu? – Geeksforgeeks Remove Ppa Repository Of Mongodb Ubuntu Linux - Youtube Remove Ppa Repository Of Mongodb Ubuntu Linux – Youtube Mongodb Drop Collection Mongodb Drop Collection Mongodb - Uninstalling Mongo - Stack Overflow Mongodb – Uninstalling Mongo – Stack Overflow Install Mongodb Community Edition On Ubuntu 20.04-22.04 - Techsbucket Install Mongodb Community Edition On Ubuntu 20.04-22.04 – Techsbucket Mongodb - Uninstalling Mongo - Stack Overflow Mongodb – Uninstalling Mongo – Stack Overflow How To Install Mongodb 6+ On Ubuntu 22.04 Lts | Libssl1.1 Error Fixed | Latest 2023 #Mongodb #Ubuntu - Youtube How To Install Mongodb 6+ On Ubuntu 22.04 Lts | Libssl1.1 Error Fixed | Latest 2023 #Mongodb #Ubuntu – Youtube Cài Đặt Mongodb Trên Ubuntu 20.04 | Hotline: 0972710812 Cài Đặt Mongodb Trên Ubuntu 20.04 | Hotline: 0972710812 Ubuntu: How Can I Uninstall Mongodb And Reinstall The Latest Version? - Youtube Ubuntu: How Can I Uninstall Mongodb And Reinstall The Latest Version? – Youtube Mongodb - Delete Multiple Documents Using Mongoshell - Geeksforgeeks Mongodb – Delete Multiple Documents Using Mongoshell – Geeksforgeeks How To Install And Configure Mongodb In Ubuntu? - Geeksforgeeks How To Install And Configure Mongodb In Ubuntu? – Geeksforgeeks How To Run Mongodb As A Docker Container – Bmc Software | Blogs How To Run Mongodb As A Docker Container – Bmc Software | Blogs Mongodb - Delete Multiple Documents Using Mongoshell - Geeksforgeeks Mongodb – Delete Multiple Documents Using Mongoshell – Geeksforgeeks How To Install Mongodb On Ubuntu/Debian? - Linuxfordevices How To Install Mongodb On Ubuntu/Debian? – Linuxfordevices Unable To Install Mongodb Properly On Ubuntu 18.04 Lts - Stack Overflow Unable To Install Mongodb Properly On Ubuntu 18.04 Lts – Stack Overflow How To Install Mongodb On Ubuntu 20.04/18.04 Lts - H2S Media How To Install Mongodb On Ubuntu 20.04/18.04 Lts – H2S Media Install Mongodb Community Edition On Ubuntu — Mongodb Manual Install Mongodb Community Edition On Ubuntu — Mongodb Manual Mongodb Replication: A Complete Introduction – Bmc Software | Blogs Mongodb Replication: A Complete Introduction – Bmc Software | Blogs Uninstall Python In Ubuntu | Delft Stack Uninstall Python In Ubuntu | Delft Stack Hướng Dẫn Cài Đặt Mongodb Trên Ubuntu 20.04 Hướng Dẫn Cài Đặt Mongodb Trên Ubuntu 20.04 Mongodb] Gỡ Bỏ Dịch Vụ Mongodb Hoàn Toàn Trên Ubuntu - Technology Diver Mongodb] Gỡ Bỏ Dịch Vụ Mongodb Hoàn Toàn Trên Ubuntu – Technology Diver How To Install Mongodb 5.0 On Ubuntu 21.04 | Linuxhelp Tutorials How To Install Mongodb 5.0 On Ubuntu 21.04 | Linuxhelp Tutorials How To Remove Password In Ubuntu - Howtech How To Remove Password In Ubuntu – Howtech Install Mongodb On Ubuntu: 5 Easy Steps Install Mongodb On Ubuntu: 5 Easy Steps Mongodb Delete | How Delete Command Works In Mongodb | Examples Mongodb Delete | How Delete Command Works In Mongodb | Examples Mongodb - Environment Mongodb – Environment Ubuntu: How Can I Uninstall Mongodb And Reinstall The Latest Version? - Youtube Ubuntu: How Can I Uninstall Mongodb And Reinstall The Latest Version? – Youtube Apt-Get Remove Ttf Ubuntu Font Family | How To Uninstall Ttf Ubuntu Font Family In Linux With Apt-Get? Apt-Get Remove Ttf Ubuntu Font Family | How To Uninstall Ttf Ubuntu Font Family In Linux With Apt-Get? Delete A User Account In Ubuntu With The Deluser Command Delete A User Account In Ubuntu With The Deluser Command How To Delete And Remove Files On Ubuntu Linux Terminal - Tuts Make How To Delete And Remove Files On Ubuntu Linux Terminal – Tuts Make Mongodb] Gỡ Bỏ Dịch Vụ Mongodb Hoàn Toàn Trên Ubuntu - Technology Diver Mongodb] Gỡ Bỏ Dịch Vụ Mongodb Hoàn Toàn Trên Ubuntu – Technology Diver How To Install Mongodb On Raspberry Pi? - Donskytech.Com How To Install Mongodb On Raspberry Pi? – Donskytech.Com How To Run Mongodb As A Docker Container – Bmc Software | Blogs How To Run Mongodb As A Docker Container – Bmc Software | Blogs Cách Xóa Dữ Liệu Cũ Và Điều Chỉnh Kích Thước Mongo Database Trên Unifi Cách Xóa Dữ Liệu Cũ Và Điều Chỉnh Kích Thước Mongo Database Trên Unifi Article link: remove mongodb from ubuntu. Learn more about the topic remove mongodb from ubuntu. See more: https://nhanvietluanvan.com/luat-hoc Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.950746
Skip to Content 0 Former Member Apr 20, 2016 at 04:21 PM Migrated to PB 12.6 from 11. Weird errors with number fields. 215 Views Hi, I recently migrated to 12.6 from 11.2. In the datawindows numeric fields of length 10 or more are having weird behavior. I have a column of type "Number", editmask as "##########". When I enter 1111111111 it changes to 1111111168. 90000025 changes to 90000024. 90000028 changes to 90000032. This is the same behavior with all the number fields in the all the datawindows. What am I missing here ? Did something change about number fields between 11.2 and 12.6 ? Please help me. Thanks, Chan. Attachments
__label__pos
0.513098
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I am using oracleclient provider. I was wondering how do I use a parameter in the query. select * from table A where A.a in ( parameter). The parameter should be a multivalue parameter. how do I create a data set? share|improve this question 5 Answers 5 Simple. Add the parameter to the report and make sure to check it off as multi-valued. Then in the data tab and go in and edit the query click the "..." button to edit the dataset. Under the parameters tab create a mapping parameter so it looks something like this (obviously you will have different names for your parameters): @ids | =Parameters!ContractorIDS.Value Then in the query tab use the coorelated sub-query like your example above. I have done this many times with SQL server and there is no reason it should not work with Oracle since SSRS is going to build an ANSI compliant SQL statement which it will pass to Oracle. where A.myfield in (@ids) share|improve this answer      This is fine for SQL Server but not for Oracle. You should have tested this against Oracle before you provided it as an answer. I agree there's no reason it shouldn't work, but I can tell you from experience that it doesn't. –  Davos Jan 16 '13 at 23:31 You can't have a variable in list in oracle directly. You can however, break apart a comma seperated list into rows that can be used in your subquery. The string txt can be replaced by any number of values seperated by a comma. select * from a where a.a in ( SELECT regexp_substr(txt,'[^,]+',1,level) FROM (SELECT 'hello,world,hi,there' txt -- replace with parameter FROM DUAL) CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (txt, '[^,]'))+1 ) The query works by first counting the number of commas that are in the text string. It does this by using a reqular expression to remove all non commas and then counts the length of the remainder. It then uses an Oracle "trick" to return that number + 1 number of rows from the dual table. It then uses the regexp_substr function to pull out each occurence. share|improve this answer      I don't believe the variable is not in the list by the time it hits Oracle. SSRS takes this value and breaks it up into static inline query text, so it should still work as long as Oracle supports the "IN" query operator. –  James Feb 13 '09 at 20:59      Edit: I don't believe the variable is in the list by the time it hits Oracle. SSRS takes this value and breaks it up into static inline query text, so it should still work as long as Oracle supports the "IN" query operator. –  James Feb 13 '09 at 20:59 Firstly in SSRS with an Oracle OLEDB connection you need to use the colon, not the @ symbol e.g. :parameter not @parameter but then you aren't able to do this as a multi-valued parameter, it only accepts single values. Worse, if you are using an ODBC connection you have to use the question mark by itself e.g. ? not @parameter and then the ordering of parameters becomes important, and they also cannot be multi-valued. The only ways you are left with is using an expression to construct a query (join() function for the param) or calling a stored proc. The stored proc option is best because the SSRS can handle the parameters for stored procs to both SQL Server and Oracle very cleanly, but if that is not an option you can use this expression: ="select column1, column2, a from table A where A.a in (" + Join(Parameters!parameter.Value,", ") + ")" Or if the parameter values are strings which need apostrophes around them: ="select column1, column2, a from table A where A.a in ('" + Join(Parameters!parameter.Value,"', '") + "')" When you right-click on the dataset, you can select "dataset properties" and then use the fx button to edit the query as an expression, rather than using the query designer which won't let you edit it as an expression. This expression method is limited to a maximum limit of about 1000 values but if you have that many this is the wrong way to do it anyway, you'd rather join to a table. share|improve this answer      I wish I could upvote this 3 more times! –  Jeremy Thompson Jan 9 at 4:27 I don't think you can use a parameter in such a situation. (Unless oracle and the language you're using supports array-type parameters ? ) share|improve this answer The parameters in oracle are defined as ":parametername", so in your query you should use something like: select * from table A where value in (:parametername) Add the parameter to the paramaters folders in the report and mark the checkbox "Allow multiple values". share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service.
__label__pos
0.645424
Dynamic Geometry Dynamic Geometry Software I have access to Geometer’s Sketchpad at school; its installed on a bunch of laptops and in a couple of computer labs.  Geogebra is a similar program but I do not have much experience with it. For the past 4 years I’ve done various “labs” with the Geometry students where they create a set of objects in sketchpad and then measure the different characteristics.  From these measurements they are supposed to write a corresponding theorem to fit the data.  For instance they would create the following object: They would then measure the angle CAO and the arc ADB to find out that the measure of the angle is exactly half that of the arc.  The top quarter of students would have this written down after putting some words in their pencil: “The measure of the angle formed by a tangent and a chord is half the measure of the arc that is inside“.  My “putting words in their pencil” is the following leading question: (the full document and its 49 steps are here: http://drop.io/circleAnglesLab ) 49 steps? Are you kidding? Am I giving the students too much help? (or as Dan Meyer would say: “be less helpful”) The reason there are 49 steps to the document is that this is the first time the students are seeing geometer’s sketchpad and it’s less than intuitive on how to measure an angle or an arc with this software.  The document was created when I thought that it’d be best to give the kids as precise of a path as possible to the truth. Overall I think this type of project is a good idea for how to teach some sections of geometry, but I’d like to be less focused on the steps and more focused on the material.  Many students would carefully go through the steps and get to the end with the theorems written correctly, but still mess up the proper theorems on the assessments. Where should I go from here? Maybe I should have an intro “lesson” on how to use sketchpad to measure circles (demo how to measure an arc) and then set them free with a goal (Find the measure of an inscribed angle in relationship to an arc).  High expectations are an excellent thing in a classroom, but am I asking the all the Oilers to be The Great One? Last Question. For those Sketchpad and Geogebra experts out there, would you recommend Geogebra over Sketchpad?  I haven’t had enough time yet this summer to start playing with Geogebra and so I only have a very basic understanding of it.  Thanks! 6 thoughts on “Dynamic Geometry 1. I’m no expert in either, but one HUGE advantage of GeoGebra is that it’s free. Kids can go to GeoGebra Download page and run the program at home by installing using web start or run them online by using applet start. You can also easily export the GeoGebra files to applets and post it online so kids can access them at home and play with them. I’ve only started learning GeoGebra this summer and so far I’m impressed with what’s possible. Here are some things that I’ve tried so far. 2. You know that’s an excellent point. Free beats Paid every time, especially when you want the kids to buy (haha) into the software. Thats a major plus in the geogebra column. 3. To answer the last question first: GeoGebra. Hands down. I’m no expert with GSP, but I’ve found it very difficult while GGB is much more intuitive for myself as well as my students. As Sheng pointed out, free helps too. There is a very supportive community of GGB folks as well who are very willing to share resources and help with any questions you may have. I like the lab idea but would strongly recommend a lab or two where stident’s get a chance to poke around to learn the difference between a drawing and a construction as well as familiarize themselves with all of the tools. After that, your labs can be much more hands off for you and hands on for them. 4. I agree that for kids, Geogebra is free–a big plus. The other thing I like is that what you create in Geogebra can go into a webpage and look exactly like what you created. Javasketchpad does NOT do this! The down side of Geogebra is that you have to rely on people to answer your pleas for help, and although they are very responsive on their forum, you don’t learn how to do it yourself! I don’t know LATEX so whenever I have to do something a little strange I have to get someone to tell me how to write the LATEX command. There is some amazing stuff out there in Geogebra on the web, but you have to really search for it. It is also not as user friendly in general as Sketchpad. Hope this helps. 5. Well you’ve all convinced me to give geogebra a solid effort. A disadvantage is that I’ve gotten a bunch of the (somewhat stodgy) department to learn sketchpad so I’ll be leaving them behind. Then again those who did learn sketchpad just use it to make pretty diagrams for quizzes/tests. p.s. How the heck do you pronounce geogebra? ge-o-ge-BRAH? Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.860089
R/config.R Defines functions pmx_warnings print.pmxConfig load_config_files load_config print.configs pmx_get_configs pmx_config Documented in load_config pmx_config pmx_get_configs print.configs print.pmxConfig #' This function can be used to define the pmx configuration used in plots. e.g. Monolox/Nonmem #' #' @param sys \code{charcarter} system used , monolix,nonmem,... #' @param inputs \code{charcater} path to the inputs settings file (yaml format) #' @param plots \code{charcater} path to the inputs settings file (yaml format) #' @param ... extra arguments not used #' #' @return \code{pmxConfig} object #' @export #' @example inst/examples/pmx_config.R #' @details #' To create a controller user can create a pmxConfig object using \cr #' - either an input template file \cr #' - or a plot template file \cr #' - or both. \cr #' By default the 'standing' configuration will be used. pmx_config <- function(sys = "mlx", inputs, plots, ...) { if (missing(inputs)) { inputs <- system.file(package = "ggPMX", "init", "mlx","standing.ipmx") } if (missing(plots)) { plots <- system.file(package = "ggPMX", "init","standing.ppmx") } if (!file.exists(inputs)) stop("inputs template file does not exist") if (!file.exists(plots)) stop("plots template file does not exist") load_config_files(inputs, plots, sys) } #' Get List of built-in configurations #' @param sys can be mlx, by default all configurations will be listed #' @return names of the config #' @export #' #' @examples #' pmx_get_configs() pmx_get_configs <- function(sys = "mlx") { sys <- tolower(sys) template_dir <- file.path(system.file(package = "ggPMX"), "templates", sys) res <- if (dir.exists(template_dir)) { template_path <- list.files( template_dir, full.names = TRUE, recursive = FALSE ) if (length(template_path) == 0) { return(NULL) } template_name <- gsub("[.].*", "", basename(template_path)) dx <- data.frame( sys = sys, name = template_name, path = template_path, stringsAsFactors = FALSE ) class(dx) <- c("configs", "data.frame") dx } res } #' This function can be used to print configuration of the defined object using S3 method. #' @param x object of class configs #' @param ... pass additional options (not used presently) #' @return print result #' @export print.configs <- function(x, ...) { assert_that(is_configs(x)) cat(sprintf( "There are %i configs for %s system \n", nrow(x), unique(x$sys) )) for (i in seq_len(nrow(x))) { cat(sprintf("config %i : name %s \n", i, x[i, "name"])) } } #' Obtain the data source config #' #' @param x the config name. #' @param sys can be mlx,nm,... #' @return a list :data configuration object #' @importFrom yaml yaml.load_file #' @export load_config <- function(x, sys = c("mlx", "nm", "mlx18")) { assert_that(is_string(x)) sys <- match.arg(sys) input_dir <- file.path(system.file(package = "ggPMX"), "templates", sys) plot_dir <- file.path(system.file(package = "ggPMX"), "init") ifile <- file.path(input_dir, sprintf("%s.ipmx", x)) pfile <- file.path(plot_dir, sprintf("%s.ppmx", x)) if (length(ifile) == 0) { stop(sprintf("No configuration found for: %s", x)) } if (length(ifile) == 0) { stop(sprintf("No configuration found for: %s", x)) } load_config_files(ifile, pfile, sys) } load_config_files <- function(ifile, pfile, sys) { if (!file.exists(ifile)) { return(NULL) } if (!file.exists(pfile)) { return(NULL) } iconfig <- yaml.load_file(ifile) pconfig <- yaml.load_file(pfile) config <- list(data = iconfig, plots = pconfig) config$sys <- sys class(config) <- "pmxConfig" config } #' S3 method print pmxConfig object #' #' @param x pmxConfig object #' @param ... addtional arguments to pass to print (unused currently) #' #' @return invisible object #' @importFrom knitr kable #' @export print.pmxConfig <- function(x, ...) { data_name <- plot_name <- NULL assert_that(is_pmxconfig(x)) if (exists("data", x)) { datas_table <- data.table( data_name = names(x$data), data_file = sapply(x$data, "[[", "file"), data_label = sapply(x$data, "[[", "label") ) ctr <- list(...)$ctr if (!is.null(ctr)) { datas_table <- rbind( datas_table, data.table( data_name = "input", data_file = if (!is.null(ctr$input_file)) basename(ctr$input_file) else "", data_label = "modelling input" ) ) } datas_table <- datas_table[ data_name %in% c("input", names(ctr$data))] print(kable(datas_table), format = "latex") } if (exists("plots", x)) { plots_table <- data.table( plot_name = tolower(names(x$plots)), plot_type = sapply(x$plots, "[[", "ptype") ) plot_names <- list(...)$plot_names if (!is.null(plot_names)) { plots_table <- plots_table[ plot_name %in% plot_names] } print(kable(plots_table), format = "latex") } invisible(x) } pmx_warnings <- function(x, warn) { assert_that(is_pmxclass(x)) if (warn %in% names(x$warnings)) { message(x$warnings[[warn]]) } } Try the ggPMX package in your browser Any scripts or data that you put into this service are public. ggPMX documentation built on Sept. 20, 2021, 5:09 p.m.
__label__pos
0.995415
[Gtk-sharp-list] Gtk.ListStore IEnumerable Interface Norbert Berzen norbert at spice.gia.rwth-aachen.de Thu Jan 26 06:18:44 EST 2006 Hello people, I'am doing a little spare time project with mono/gtk#. When using 'foreach' on a 'Gtk.ListStore' my program crashes. Digging into the gtk#-implementation I get the following assumption about the reasons for that crash: The Method 'IEnumerable.GetEnumerator' simply constructs a 'TreeIterator' which connects 4 event handlers to the 'model' the iterator operates on. My guess is as follows: What will happen, when the 'TreeEnumerator' ceases to exist after completion of the 'foreach'-loop and then the 'TreeModel' gets changed? As of my understanding the 'TreeModel' will call the handler(s) 'row_changed', 'row_inserted', (or whatever) on the no longer existing 'TreeEnumerator' and that leads to a crash. One may argue that alas the 'TreeModel' exists the 'TreeEnumerator' will not get reaped (by GC) since the 'TreeEnumerator' is reachable through the 'TreeModel's' event. But that may not be true since the 'TreeModel's' event is no field-like event! Instead of this it is declared by an 'event accessor' declaration (see .../gtk/generated/ListStore.cs). So there may be no GC-visible reference to the 'TreeEnumerator'. Can any of you prove my guesses? If my assumptions are correct, could it be fixed by implementing a finalizer '~TreeEnumerator' as follows? ~TreeEnumerator () { model.RowChanged -= new RowChangedHandler (row_changed); model.RowDeleted -= new RowDeletedHandler (row_deleted); model.RowInserted -= new RowInsertedHandler (row_inserted); model.RowsReordered -= new RowsReorderedHandler (rows_reordered); } Thanks in advance, -- Dr. Norbert Berzen Tel.: +49 241 80-95292 Geodaet. Institut, RWTH Aachen Fax.: +49 241 80-92142 Templergraben 55, 52062 Aachen E-Mail: norbert at spice.gia.rwth-aachen.de More information about the Gtk-sharp-list mailing list
__label__pos
0.667523
Electron Documentation1.7.9 Electron 1.7.9 / Docs / Development / Upgrading Chrome Checklist Upgrading Chrome Checklist This document is meant to serve as an overview of what steps are needed on each Chrome upgrade in Electron. These are things to do in addition to updating the Electron code for any Chrome/Node API changes. Verify ffmpeg Support Electron ships with a version of ffmpeg that includes proprietary codecs by default. A version without these codecs is built and distributed with each release as well. Each Chrome upgrade should verify that switching this version is still supported. You can verify Electron's support for multiple ffmpeg builds by loading the following page. It should work with the default ffmpeg library distributed with Electron and not work with the ffmpeg library built without proprietary codecs. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Proprietary Codec Check</title> </head> <body> <p>Checking if Electron is using proprietary codecs by loading video from http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4</p> <p id="outcome"></p> <video style="display:none" src="http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4" autoplay></video> <script> const video = document.querySelector('video') video.addEventListener('error', ({target}) => { if (target.error.code === target.error.MEDIA_ERR_SRC_NOT_SUPPORTED) { document.querySelector('#outcome').textContent = 'Not using proprietary codecs, video emitted source not supported error event.' } else { document.querySelector('#outcome').textContent = `Unexpected error: ${target.error.code}` } }) video.addEventListener('playing', () => { document.querySelector('#outcome').textContent = 'Using proprietary codecs, video started playing.' }) </script> </body> </html> See something that needs fixing? Propose a change to the source file. Need a different version of the docs? See available versions. Want something searchable? View all docs on one page
__label__pos
0.543613
320 Matching Annotations 1. Jul 2021 2. bafybeic2mwwizbcleyp2to22xtg7gutywzhoiwxcbuumkhpxugdnbaswoq.ipfs.localhost:8080 bafybeic2mwwizbcleyp2to22xtg7gutywzhoiwxcbuumkhpxugdnbaswoq.ipfs.localhost:8080 1. linear combination of theksolutions 解的线性组合 2. omogeneous linearrecurrence relation 齐次线性递归 3. non-homogeneous linearrecurrence relation 非齐次线性递归 4. derivative 形式幂级数的导数 5. quotient formal powerseries 形式幂级数的商不一定存在 6. formal power series 形式幂级数 7. multiplication rule 形式幂级数的乘法规则 8. generating function 生成函数 9. characteristic polynomial 特征多项式 1. FCFS same as SJF 2. response time t0 t1 P cpu queue 0 1 P2 busy 1 2 - cs [P1] 2 7 P1 busy 7 8 - cs [P4,P5,P3] 8 10 P4 busy 10 11 - cs [P5,P3] 11 14 P5 busy 14 15 - cs [P3] 15 19 P3 busy total response time: (1-0) + (7-2) + (10-4) + (14-7) + (19-8) = 30 3. response time t0 t1 P cpu queue 0 1 - cs [P2] 1 2 P2 busy 2 3 - cs [P1] 3 6 P1 busy 6 7 - cs [P4,P1,P5] 7 9 P4 busy 9 10 - cs [P1,P5,P3] 10 12 P1 busy 12 13 - cs [P5, P3] 13 16 P5 busy 16 17 - cs [P3] 17 20 P3 busy 20 21 - cs [P3] 21 22 P3 busy total response time: (2-0) + (12-2) + (9-4) + (16-7) + (22-8) = 40 4. finish time - the arrival time turnaround time 3. May 2021 4. bafybeidej4tasx6dk67t6srmq5fhdy32z26kyrsweo2nskxx7eds6y2bgm.ipfs.localhost:8080 bafybeidej4tasx6dk67t6srmq5fhdy32z26kyrsweo2nskxx7eds6y2bgm.ipfs.localhost:8080 1. Use forward difference approximation. Cannot use central difference approximation Neumann 边值条件处理方式 2. Dirichlet boundary condition Dirichlet 边值条件 $$ y(a) = \beta $$ 3. Neumann boundary condition Neumann 边值条件 $$ y'(a) = \alpha $$ 5. bafybeidej4tasx6dk67t6srmq5fhdy32z26kyrsweo2nskxx7eds6y2bgm.ipfs.localhost:8080 bafybeidej4tasx6dk67t6srmq5fhdy32z26kyrsweo2nskxx7eds6y2bgm.ipfs.localhost:8080 1. second order central difference $$ \frac{\partial \phi}{\partial x} = \frac{\phi{i+1} - \phi{i-1}}{2\Delta x} $$ $$ \frac{\partial^2\phi}{\partial x^2} = \frac{\phi_{i+1} - 2\phi + \phi_{i-1}}{\Delta x^2} $$ 6. bafybeicq7ssvd6yyrdzgs4bpby23imx7vdxhb55w7hmt7zu7m2n6ybmnly.ipfs.localhost:8080 bafybeicq7ssvd6yyrdzgs4bpby23imx7vdxhb55w7hmt7zu7m2n6ybmnly.ipfs.localhost:8080 1. STABILITY OF CRANK-NICOLSON 稳定性 2. So the local truncation error for Crank-Nicolsonformula is and the global truncation error local truncation error $$ O(\Delta t^3) $$ global truncation error $$ O(\Delta t^2) $$ 3. ACCURACY OF CRANK-NICOLSON 精度 4. Crank-Nicolson • Explicit Euler • Implicit Euler • Crank-Nicolson 5. Crank-Nicolson formula $$ x_{n+1} = x_n + \frac{\Delta t}{2}\left[f(t_n,x_n) + f(t_{n+1}, x_{n+1}) \right] $$ 7. bafybeifdxtggnhokzsb6hg4ve3sijpewdbwc4fxo56mhc5e6ockfih6rvy.ipfs.localhost:8080 bafybeifdxtggnhokzsb6hg4ve3sijpewdbwc4fxo56mhc5e6ockfih6rvy.ipfs.localhost:8080 1. 4th order Runge Kutta 4th order Runge-Kutta $$ \begin{aligned} k_1 & = f(t_n, x_n) \ k_2 & = f\left(t_n + \frac{\Delta t}{2} , x_n + \frac{\Delta t}{2} k_1\right)\ k_3 & = f\left(t_n + \frac{\Delta t}{2} , x_n + \frac{\Delta t}{2} k_2\right)\ k_4 & = f\left(t_n + \Delta t, x_n + \Delta t k3\right)\ x{n+1} & = x_n + \left(\frac{1}{6}k_1 + \frac{1}{3}(k_2+k_3) + \frac{1}{6}k_4\right)\Delta t \end{aligned} $$ 2. 2nd order Runge Kutta 2nd order Runge-Kutta $$ \begin{aligned} k_1 & = f(t_n, x_n) \ k_2 & = f(t_n + \Delta t , x_n + \Delta t k1)\ x{n+1} & = x_n + \left(\frac{1}{2}k_1 + \frac{1}{2}k_2\right)\Delta t \end{aligned} $$ 3. 2nd order Runge-Kutta $$ \begin{aligned} k_1 & = f(t_n, x_n) \ k_2 & = f(t_n + \Delta t , x_n + \Delta t k1)\ x{n+1} & = x_n + \left(\frac{1}{2}k_1 + \frac{1}{2}k_2\right)\Delta t \end{aligned} $$ 1. Derivedoperators $$ [a]\varphi \equiv \neg \langle a\rangle\neg \varphi $$ $$ x\models \langle a\rangle\top \Leftrightarrow \exist_{x'} x\overset{a}{\to} x' \land x'\models \top $$ $$ \begin{aligned} x\models [a]\perp & \Leftrightarrow \forall_{x'} x\overset{a}{\to} x' \Rightarrow x'\models\perp \\ & \Leftrightarrow x\overset{a}{\nrightarrow} \end{aligned} $$ 2. $$ [a]\varphi \equiv \neg \langle a\rangle\neg \varphi $$ 3. satisfaction $$ \models $$ 8. bafybeicq7ssvd6yyrdzgs4bpby23imx7vdxhb55w7hmt7zu7m2n6ybmnly.ipfs.localhost:8080 bafybeicq7ssvd6yyrdzgs4bpby23imx7vdxhb55w7hmt7zu7m2n6ybmnly.ipfs.localhost:8080 1. approximate the first derivatives 中心差分二阶导数 2. source $$ f(x+h) = f(x) + f'(x)h + \frac1{2}f''(x)h^2 +\frac1{6}f'''(\xi_3)h^3,\f(x-h) = f(x) - f'(x)h + \frac1{2}f''(x)h^2 -\frac1{6}f'''(\xi'_3)h^3\ f'(x) \approx \frac{f(x+h)-f(x-h)}{2h} $$ 3. Numerical Methods for Engineers 9. bafybeiakog6znbrgrydmhpruc5nzmoreo2gg7dwtpfvgxnelgbgcxuit3a.ipfs.localhost:8080 bafybeiakog6znbrgrydmhpruc5nzmoreo2gg7dwtpfvgxnelgbgcxuit3a.ipfs.localhost:8080 1. A (2, 5) $$A(2,5) = A(1,A(2, 4)) = 2^{2^{(2^{16})}}$$ $$\Longleftarrow A(2,4) = A(1, A(2,3)) =2^{(2^{16})}$$ $$\begin{aligned} \Longleftarrow A(2,3) = A(1, A(2,2)) = 2^{16} \end{aligned} $$ $$\begin{aligned} \Longleftarrow A(2,2) &= A(1, A(2,1)) \\ & = A(1, A(1, 2)) \\ & = A(1, 4) = 16 \end{aligned} $$ 10. bafybeiax3vr6i7wslj3n5rckyvavow3p2myxquup4cismrern5p6nxjapi.ipfs.localhost:8080 bafybeiax3vr6i7wslj3n5rckyvavow3p2myxquup4cismrern5p6nxjapi.ipfs.localhost:8080 1. $$ \begin{align} \hat{\bar{x}}_k = {\color{red} F} \bar{x}_{k-1} \\ \hat{P}_k = {\color{red} F}P_{k-1}{\color{red} F^\intercal} + {\color{red} Q} \\ K = \hat{P}_k{\color{red} H^\intercal}({\color{red} H}\hat{P}_k{\color{red} H^\intercal} + {\color{red} R})^{-1}\\ \bar{x}_k = \hat{\bar{x}}_k + K(\bar{z}_k - {\color{red} H}\hat{\bar{x}}_k) \\ P_k = (I-K{\color{red} H})\hat{P}_k \end{align} $$ 2. Further, especially when getting into nonlinear filtering, if the state information is not perfect,but the uncertainty is set to 0 (or just too small a value) it might cause the filter to be unstable. unstable 3. xk $$\hat{\bar{x}}_k$$ 1. incident to some vertex vertex cover Annotators 1. Markov chain with the same boundary values. harmonic is unique 2. This current is $$ \begin{aligned} i_a &= \sum_y(v_a-v_y)C_{ay} \\ & = v_a\sum_yC_{ay} - \sum_y v_yC_{ay} \\ & = C_a v_a - \sum_y v_y P_{ay} C_a \\ & = C_a (1 - \sum_y P_{ay} v_y) \end{aligned} $$ 3. flowing into the current is from outside source 11. Apr 2021 1. which stores the last operationfrom thesametransaction prevLSN 2. status: 事务的三种状态: 运行中, 提交中, 终止中 3. a sequence of log records Log record 类型 • UPDATE • COMMIT • ABORT • END • CLR 4. how a database can abort a transaction 终止事务的原因: • 死锁 • 用户主动终止 • 操作违反约束 • 系统崩溃 5. what inequality must hold 1. 所有被写入磁盘的日志的序列号都 <= flushedLSN 2. 日志被数据先写入磁盘 6. prevLSN • flushedLSN 保存在内存 • prevLSN 保存在 log record • pageLSN 保存在 data page 7. crease by 10 each time 为什么不是连续的? 8. two rules 1. 日志要在数据之前写入磁盘 2. 当事务提交时所有日志必须写入磁盘 9. Durability 持久性 12. Jun 2019 1. The Strategy Pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable. Strategy lets the algorithm vary independently from clients that use it. a behavioral pattern 2. Favor composition over inheritance. composition vs inheritance • Document title: MCUXpresso SDK USB Stack User’s Guide • Document number: MCUXSDKUSBSUG • Document version: Rev. 9, December 2018 Tags Annotators 1. Full Speed USB connector (USB0) J3 • title: LPCXpresso546x8/540xx/54S0xx Board User Manual • version: Rev. 2.1—7th January 2019 Annotators 1. splinters 碎片 2. block size include the header and padding Annotators 13. May 2019 1. The stator, or stationary part of the stepping motorholds multiple windings. The arrangement of thesewindings is the primary factor that distinguishesdifferent types of stepping motors from an electricalpoint of view. 定子上的线圈排列是步进电机电气上的主要区分因素 2. variable reluctance 可变磁阻 3. stator 定子 4. rotor 转子 5. Brushless 无刷 Annotators 1. The ADC converts an analog input voltage to a 10-bit digital value through successive approximation. successive approximation 2. The ADC contains a Sample and Hold circuit which ensures that the input voltage to the ADC is held at a constant level during conversion. Sample and Hold circuit. 3. 0x08 TWSR: a START has been trasmitted. Annotators 14. Mar 2019 15. Feb 2019 1. any word beginning with an upper-case letter is a Prolog variable 2. This is the way logical conjunction is expressed in Prolog (that is, the comma means and ) 3. the semicolon ; is the Prolog symbol for or how to express disjunction 16. Oct 2018 1. the potency of a substance in inhibiting a specific biological or biochemical function 物质抑制某特定生物/生化功能的效力 2. inhibitory concentration 抑制浓度 1. In certain databases the key values may be points in some multi-dimensional space what is the best data structure for such search problems? 17. Sep 2018 1. it is usual to use a suffix / followed by a number to indicate the predicate’s arity 后缀 / 18. people.inf.ethz.ch people.inf.ethz.ch 1. 2.2.1 Pinhole Camera Model $$s\tilde{\mathbf{m}} = \mathbf{A}[\mathbf{R}\ \mathbf{t}]\tilde{M}$$ 2. 2.2.2 Absolute Conic 1. the :- means implication, the , means conjunction, and the ; means disjunction • :-: means implication • ; : means conjunction • , : means disjunction 1. This will introduce us to the three basic constructs in Prolog: facts, rules, and queries. facts, rules, queries 19. Aug 2018 1. network *net = make_network(sections->size - 1); 网络层数由配置文件中的 section 数决定: net->n = n; 2. get_network_output_layer 3. net->outputs = out.outputs; 网络的输出 1. Enter a directory where you’d like to store your code and run: scrapy startproject tutorial 创建一个Scrapy project 2. we want the attribute href ::attr(href) 20. Jul 2018 1. How big is your cache block Cache block size: 2 (in words) = 8 (in bytes) 2. How much data fits in the WHOLE cache Capacity: 8 words = 32 Bytes 1. Your processor must provide the following outputs 九个输出 2. Your processor has 2 inputs 两个输入: 指令 和 时钟 1. The __thiscall calling convention is used on member functions and is the default calling convention used by C++ member functions that do not use variable arguments. 成员函数使用 __thiscall 调用约定 1. Qt add-ons • for specific purposes 2. Qt essentials • general and useful • available on all supported platforms 1. There are a few different options to run GUI applications inside a Docker container like using SSH with X11 forwarding, or VNC but the simplest one that I figured out was to share my X11 socket with the container and use it directly. docker run -ti --rm \ -e DISPLAY=$DISPLAY \ -v /tmp/.X11-unix:/tmp/.X11-unix \ firefox 1. 2/3" (~ 8.8 x 6.6 mm) 海康 DS-2CD7A86F-IZ(S) 800万 2/3" CMOS ICR日夜型筒型网络摄像机<br> 镜头参数: 2.8~16mm @F1.4,水平视场角:23.8°~98.5° math.atan(8.8 / 2 / 2.8) * 180 / math.pi * 2 21. Jun 2018 1. Floating Point Calling Conventions 浮点参数传递方式及返回值 2. CPU Registers CPU寄存器 3. The supported data types are as follows 数据类型 4. Comments 注释语句 5. Parameter Passing 参数传递 6. Floating Point Instructions 浮点指令运算 1. The idea of the numeric_limits<T> trait class is due to John Barton and Lee Nackman numeric_limits<T> 22. May 2018 1. 计算机图形显示和图象处理的算法<br> 英文原版: Algorithms for Graphics and Image Processing <br> see here <br> 作者: Theo Pavlidis 1. 中文翻译版为: <br> 计算机图形显示和图象处理的算法, 1987, 科学出版社 2. 这本书涉及了许多图像处理中经典的算法, 包括: • 细化 Thinning • 边界追踪 Contour Tracing • 边界填充 Contour Filling? 3. Algorithms for Graphics and Image Processing 1. 机器人机构学的数学基础 - 丁靖军, 刘辛军, 丁希仑, 戴建生 编著 (1-50) 2. 第1章 绪论 1.1 机构学与机器人学的发展历史概述 机构学广义上成为机构与机器科学 (Mechanism and Machine Science) • 第一阶段(古世纪--18世纪中叶): Aristotle, Problems of Machines 1. 机器人机构学的数学基础 - 丁靖军, 刘辛军, 丁希仑, 戴建生 编著 (版权页, 前言页, 目录页) 1. The chapter 2 (Multilayer Networks) gives a overview of the most prominent <s>convolutional network architectures</s> multilayer architectures. 1. 49-50<br> George Polya<br>Most famous for his classical problem book (known as Polya-Szego), he is also revered as the godfather of combinatorics. L. J. Mordell and Gabor Szego<br>Speaking of Szego, here he is (right). Mordell was a great number theorist, known for, among other things, the deep conjecture (related to Fermat's last "theorem") that was proved by Falting (#579) in 1983. PT42 1. 577-578<br> J. E. Taylor<br>Jean, another Council shot, works on minimal surfaces; she has been known to talk about soap bubbles, of course, and crystals too. R. A. Askey<br>I caught Dick during a Bloomington visit in March 1984. Hist specialty is special functions, such as ultraspherical polynomials and Jocobi polynomials; a part of his work was involved in de Branges' solution of the Bieberbach conjecture. PT315 1.  Borodovsky  &  Ekisheva  (2006), Problems and Solutions in Biological Sequence Analysis 2. What  is  the  probability  P(S)  that  this  sequence  S  was  generated  by  the  HMM  model? 计算序列由模型生成的概率 3. Viterbi  algorithm the probability of the most probable path ending in state \(k\) with observation \(i\) is $$ p_l(i,x) = e_l(i)\max_k(p_k(j, x-1)\cdot p_{kl}) $$ or using the log of the probabilities $$ \ln p_l(i,x) = \ln e_l(i) + \max_k(\ln pk(j, x-1) + \ln p{kl}) $$ 4. HMM  :  Viterbi  algorithm  -­a  toy  example 23. Apr 2018 1. two cate-gories. 基于深度学习的自然场景文字检测大致可分为两类: 1. 将文字当为一般对象采用一般对象的检测范式, 对多方向文字处理有问题 2. 分割出文字区域再进行复杂的后期处理 24. Mar 2018 1. a revolutionary technique(expression tem-plates,invented independently by Todd Veldhuizen-'' andDavid Vandevoorde") could be used to improve dramaticallythe runtime efficiency of high-level C++ code like sumlt. expression templates invented independently by Todd Veldhuizen and David Vandevoorde 1. Beating the Abstraction Penalty in C++ Using Expression Templates, 2. DisambiguatedGlommableExpressionTemplates Disambiguated Glommable Expression Templates, Computers in Physics 11, 263 (1997) 3. Todd Veldhuizen - Techniques for Scientific C++ (RT542.pdf) 4. Scienti candEngineeringC++ Barton and Nackman, Scientific and Engineering C++ 25. eigen.tuxfamily.org eigen.tuxfamily.org 1. MSVC (Visual Studio), 2010 and newer. vs2010 及以上版本 2. Eigen is a pure template library defined in the headers 纯模板库 1. expres-sion templates (CIP 10:6, 1996 p. 552; CJP I 1 :3, 1997, p. 263) 1. Beating the Abstraction Penalty in C++ Using Expression Templates, Computers in Physics 10, 552 (1996) 2. Disambiguated Glommable Expression Templates, Computers in Physics 11, 263 (1997) 2. Todd Veldhuizen, the co-discoverer of expres-sion templates (CIP 10:6, 1996 p. 552; CJP I 1 :3, 1997, p. 263), i now a graduate student at the Univer-sity of Waterloo. Todd Veldhuizen 创建Blitz++ 时是滑铁卢大学的一名研究生 1. the checker board checker board classification example
__label__pos
0.999239
Beefy Boxes and Bandwidth Generously Provided by pair Networks Think about Loose Coupling   PerlMonks   XSLT vs Templating, Part 2 by Masem (Monsignor) on Feb 07, 2002 at 04:37 UTC ( #143791=perlmeditation: print w/replies, xml ) Need Help?? In XSLT vs Templating?, I asked fellow monks for their opinion on using either a templating solution, like Template Toolkit 2, for the processing of output from a DB or similar prior to page display, or using XML output along with XSLT to work up the page display. Both have benefits and disadvantages, and no real forerunner came out of that discussion per se, beyond giving everyone else food for thought. Now, I've been playing with XSLT over the last few days, and I thought that I could use a quick test to bookmark which methods might be better than others. Thus, I took 6 different ways one could work up output from a database that typically would be displayed as a web page, and timed the results to see which is the most efficient. The database is Postgres, and I am using the monk locations as compiled by jcwren to fill it. The application that I do is to call the following query on the db: SELECT id, name, xp, lat, long FROM monks ORDER BY lat LIMIT 25 The output then must be compiled into a table, as demonstrated below: 16309j.a.p.h.616-47-45 22313puck179-41.3174.783333 65703rob_au3136-37.816667144.95 116014pjf827- 37.6725144.843333 ...and so forth. The 6 methods that I tried were: • Strict use of DBI and the builtin print command • Use of DBI and CGI's generation functions, after massaging the data into a form usable for these calls • Use of DBI and TT2 with no extra modules to process the data (Do note that with all the TT2-based methods, I used the non-XS variable cache, partially because the XS one appears to be broken in the current version) • Use of XML::Generator::DBI, TT2, and TT2's XML::Simple plugin to parse the XML. Note that because of the way XML::Simple read in the data, the order of the data was not explicitly kept, and a sort statement was required in the template file (and even then, it sorted on text, not numbers) • Use of XML::Generator::DBI, TT2, and TT2's XML::XPath plugin to parse the XML • Use of XML::Generator::DBI and XML::LibXSLT to transform the XML. I do realize that there are other XML/XSLT engines out there, and strictly speaking, this method employs non-pure Perl parts (as it piggybacks on GNOME's libxslt library), but this is still not an unreasonable test to perform. Note in the code below that I tried to prep each method as much as possible before running the tests, including prepping the TT2 variable cache, creating the DBI handle, creating the XML parser, etc. The only thing that each subroutine should do is execute the SQL statement, process the data, and print it out to a target (here being /dev/null since I didn't want to flood my fs with a hugh file of the same text over and over). In addition to the code, I've included the various templates that I used for the TT2 and XSLT methods. The results from timing 50 runs a piece of each method are below (Box is Linux, perl 5.6.1, 200Mhz with 128Megs Ram): Benchmark: timing 50 iterations of DBI and CGI, DBI and Print, DBI and + TT2, XML and TT2/Simple, XML and TT2/XPath, XML and XSLT... DBI and CGI: 5 wallclock secs ( 2.02 usr + 0.02 sys = 2.04 CPU) @ 2 +4.51/s (n=50) DBI and Print: 2 wallclock secs ( 0.23 usr + 0.02 sys = 0.25 CPU) @ + 200.00/s (n=50) (warning: too few iterations for a reliable count) DBI and TT2: 13 wallclock secs (10.91 usr + 0.07 sys = 10.98 CPU) @ +4.55/s (n=50) XML and TT2/Simple: 56 wallclock secs (43.87 usr + 0.23 sys = 44.10 C +PU) @ 1.13/s (n=50) XML and TT2/XPath: 154 wallclock secs (144.17 usr + 0.73 sys = 144.90 + CPU) @ 0.35/s (n=50) XML and XSLT: 27 wallclock secs (24.23 usr + 0.11 sys = 24.34 CPU) @ + 2.05/s (n=50) XML and TT2/XPath 0.345/s -- -70% -83% -92% -99% -100% XML and TT2/Simple 1.13/s 229% -- -45% -75% -95% -99% XML and XSLT 2.05/s 495% 81% -- -55% -92% -99% DBI and TT2 4.55/s 1220% 302% 122% -- -81% -98% DBI and CGI 24.5/s 7003% 2062% 1093% 438% -- -88% DBI and Print 200/s 57860% 17540% 9636% 4292% 716% -- To no surpise, straightforward solutions overwhelm those that do extra things (like generating XML). Of course, when you look at the code, and think of separation of presentation and content, this is only a CPU efficiency, and not a development efficiency. Typically, the XML solutions fared worse than the straight DBI solutions. Of course, there is probably overhead in the XML::Generator::DBI methods, however, given the fair distance between the two XML/TT2 solutions and XML/XSLT, compared with the DBI solutions, this isn't probably very large. Instead, probably much of the overhead for the XML/TT2 solutions is the fact that they are using pure perl libraries to tackle the XML, and that means a lot of inching forward regexs. With the XML/XSLT solution, we have a precompiled library, and while there's probaby inch-by-inch regexes there as well, the lack of time needed to recompile the code is very significant. What I do find surprising and yet satisfactory is that performace-wise, the XML/XSLT solution isn't terribly off from the best solution in terms of bother CPU and development efficiency, the DBI/TT2 solution. Yes, we are looking at what appears to be between 200-250% increase in time per cycle for the XSLT solution, but not insurmountable. For the purposes that I would be considering XML for (non-commerical, low traffic site), that would certainly acceptable. Certainly, this is a simple test, but I think it does show that any solution of content generation that tried to separate content from the logic is going to run into CPU bottlenecks, obviously. And here, in the decision to go with a template solution vs XML, I think both are still up in the air, though templating has the edge in CPU usage. However, I do read about many benefits of XSLT. For example, the process that I transform the data here is as follows: XSLT Transform DBI --> XML -----------------> XHTML which is about as easy as you can get. But because transforms can go to any space, not just XHTML, it's possible to insert special steps along the way to get where you need. For example, if I was going to allow to customization in which data was to be present (a 'fast' view vs a 'full' view) I could simply add a few more steps to this process to get to that: CGI-->userinfo -----| | Data Pruning XSLT DBI--> DataXML -----+--> XML ------------------> XML2 | XHTML Transform | | V XHTML whereby the data pruning transformation would take the various <monk> elements, and using info from the userinfo, convert those into a new <tablerow> tag; at this point, regardless of the customization of the user, we have several <tablerow>s in the XML. Then we simply plug those into the XHTML transformer, taking the <tablerow>s into alternating colored <tr>s. Other customizations can be done here as well, and furthermore, other levels of transforms can be easily chained to make a complex but easily servicable web page. While you can coerce TT2 to do a similar functionality, it's just not as simple as chaining XSLT transformations. So the end result of this study is that both Templating and XSLT have their place. If CPU efficiency is absolutely critical, it's much better to go with TT2 (or any other template solution). On the other hand, if you want to have a lot more flexibility in the transformations, using XML/XSLT may be the right way, but as pointed out in recent threads, this is still new to a lot of people and may take a lot of getting used to. And now for the code: test.pl - The actual program #!/usr/bin/perl -w use strict; use DBI; use CGI qw/-no_xhtml :standard/; use XML::Generator::DBI; use XML::Handler::YAWriter; use XML::LibXML; use XML::LibXSLT; use Template; use Data::Dumper; use Benchmark qw( cmpthese ); $Template::Config::STASH = 'Template::Stash'; my $dbh = DBI->connect( "dbi:Pg:dbname=monksdb", "", "" ) or die $DBI::errstr; my $query = "SELECT id, name, xp, lat, long FROM monks ORDER BY lat LI +MIT 25"; my $sth = $dbh->prepare( $query ) or die $DBI::errstr; my $ya = XML::Handler::YAWriter->new( AsString => 1 ); my $generator = XML::Generator::DBI->new( Handler => $ya, dbh => $dbh, RowElement => "monk" ); my $tt2 = Template->new; my $tt2_nonXML = "template1.tt2"; my $tt2_XML = "template2.tt2"; my $tt2_XPath = "template3.tt2"; my $parser = new XML::LibXML; my $xslt = new XML::LibXSLT; my $sheet = "xslt_sheet.xsl"; my $slt = $parser->parse_file( $sheet ); my $stylesheet = $xslt->parse_stylesheet( $slt ); open FILE, ">/dev/null" or die "Cannot write out: $!"; my $target = \*FILE; cmpthese( 50, { "DBI and Print" => \&generate_from_straight_dbi_and_print, "DBI and CGI" => \&generate_from_straight_dbi_and_cgi, "DBI and TT2" => \&generate_from_straight_dbi_and_tt2, "XML and TT2/Simple" => \&generate_from_xml_and_tt2_and_xmlsimp +le, "XML and TT2/XPath" => \&generate_from_xml_and_tt2_and_xpath, "XML and XSLT" => \&generate_from_xml_and_xslt } ); close FILE; # Here, we use straight DBI calls and print calls to mark up # the table sub generate_from_straight_dbi_and_print { # my $target = shift; $sth->execute() or die $DBI::errstr; print $target "Content-Type: text/html\n\n"; print $target "<html><body><table>\n"; my $colorrow = 0; while ( my ( $id, $name, $xp, $lat, $long ) = $sth->fetchrow_array() + ) { $colorrow = !$colorrow; my $color = ( $colorrow ) ? "#FFFFFF" : "#D0D0FF"; print $target <<ROW; <tr> <td bgcolor="$color">$id</td> <td bgcolor="$color">$name</td> <td bgcolor="$color">$xp</td> <td bgcolor="$color">$lat</td> <td bgcolor="$color">$long</td> </tr> ROW ; } print $target "</table></body></html>"; } # Here, we group the results as to make it easier for CGI # to print out (avoiding large HERE docs...) sub generate_from_straight_dbi_and_cgi { # my $target = shift; $sth->execute() or die $DBI::errstr; my @data; while ( my @row = $sth->fetchrow_array() ) { push @data, \@row; } my $colorrow = 0; print $target header('text/html'), start_html, table( map { $colorrow = !$colorrow; my $color = ( $colorrow ) ? "#FFFFFF" : "#D0D0FF"; Tr( td( {-bgcolor=>$color}, $_ ) ) } @data ), end_html; } # Here, we pass the results to Template Toolkit for printing sub generate_from_straight_dbi_and_tt2 { # my $target = shift; $sth->execute() or die $DBI::errstr; my @data; while ( my @row = $sth->fetchrow_array() ) { push @data, \@row; } print $target header; $tt2->process( $tt2_nonXML, { monks => \@data }, $target ) or die $tt2->error(),"\n"; } # Use TT2 again, but now pass it XML and use the XPath module # for parsing sub generate_from_xml_and_tt2_and_xmlsimple { # my $target = shift; my $xml = $generator->execute( $query ); print $target header; $tt2->process( $tt2_XML, { results => $xml }, $target ) or die $tt2->error(), "\n"; } # Use TT2 again, but now pass it XML and use the XPath module # for parsing sub generate_from_xml_and_tt2_and_xpath { # my $target = shift; my $xml = $generator->execute( $query ); print $target header; $tt2->process( $tt2_XPath, { results => $xml }, $target ) or die $tt2->error(), "\n"; } # Use LibXML/LibXSLT to parse the results sub generate_from_xml_and_xslt { # my $target = shift; my $xml = $generator->execute( $query ); print $target header; my $source = $parser->parse_string( $xml ); my $results = $stylesheet->transform( $source ); print $target $stylesheet->output_string( $results ); } template1.tt2 - The straight-forward TT2 Template [% colorrow = 0 %] <html> <body> <table> [% FOREACH monkinfo = monks %] <tr> [% colorrow = !colorrow %] [% IF colorrow %] [% color = "#FFFFFF" %] [% ELSE %] [% color = "#D0D0FF" %] [% END %] [% FOREACH item = monkinfo %] <td bgcolor="[% color %]"> [% item %] </td> [% END %] </tr> [% END %] </table> </body> </html> template2.tt2 - The TT2/XML::Simple Template [% USE xml = XML.Simple( results ) %] [% xml %] [% colorrow = 0 %] <html> <body> <table> [% orderedmonks = xml.select.monk.sort(keys.lat) %] [% FOREACH monkinfo = orderedmonks %] <tr> [% colorrow = !colorrow %] [% IF colorrow %] [% color = "#FFFFFF" %] [% ELSE %] [% color = "#D0D0FF" %] [% END %] <td bgcolor="[% color %]"> [% xml.select.monk.$monkinfo.id %] </td> <td bgcolor="[% color %]"> [% monkinfo %] </td> <td bgcolor="[% color %]"> [% xml.select.monk.$monkinfo.xp %] </td> <td bgcolor="[% color %]"> [% xml.select.monk.$monkinfo.lat %] </td> <td bgcolor="[% color %]"> [% xml.select.monk.$monkinfo.long %] </td> </tr> [% END %] </table> </body> </html> template3.tt2 - The TT2/XPath Template [% USE xpath= XML.XPath( results ) %] [% colorrow = 0 %] <html> <body> <table> [% FOREACH monk = xpath.findnodes('/database/select/monk') %] <tr> [% colorrow = !colorrow %] [% IF colorrow %] [% color = "#FFFFFF" %] [% ELSE %] [% color = "#D0D0FF" %] [% END %] <td bgcolor="[% color %]"> [% xpath.find('id',monk) %] </td> <td bgcolor="[% color %]"> [% xpath.find('name',monk) %] <td> <td bgcolor="[% color %]"> [% xpath.find('xp',monk) %] </td> <td bgcolor="[% color %]"> [% xpath.find('lat',monk) %] </td> <td bgcolor="[% color %]"> [% xpath.find('long',monk) %] </td> </tr> [% END %] </table> </body> </html> xslt_sheet.xsl - The XSLT Tranform <xsl:stylesheet version = '1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'> <xsl:template match="/database/select"> <table> <xsl:for-each select="//monk"> <tr> <td> <xsl:if test="position() mod 2 = 0"> <xsl:attribute name="bgcolor">#D0D0FF</xsl:attribute> </xsl:if> <xsl:if test="position() mod 2 = 1"> <xsl:attribute name="bgcolor">#FFFFFF</xsl:attribute> </xsl:if> <xsl:value-of select="id"/> </td> <td> <xsl:if test="position() mod 2 = 0"> <xsl:attribute name="bgcolor">#D0D0FF</xsl:attribute> </xsl:if> <xsl:if test="position() mod 2 = 1"> <xsl:attribute name="bgcolor">#FFFFFF</xsl:attribute> </xsl:if> <xsl:value-of select="name"/> </td> <td> <xsl:if test="position() mod 2 = 0"> <xsl:attribute name="bgcolor">#D0D0FF</xsl:attribute> </xsl:if> <xsl:if test="position() mod 2 = 1"> <xsl:attribute name="bgcolor">#FFFFFF</xsl:attribute> </xsl:if> <xsl:value-of select="xp"/> </td> <td> <xsl:if test="position() mod 2 = 0"> <xsl:attribute name="bgcolor">#D0D0FF</xsl:attribute> </xsl:if> <xsl:if test="position() mod 2 = 1"> <xsl:attribute name="bgcolor">#FFFFFF</xsl:attribute> </xsl:if> <xsl:value-of select="lat"/> </td> <td> <xsl:if test="position() mod 2 = 0"> <xsl:attribute name="bgcolor">#D0D0FF</xsl:attribute> </xsl:if> <xsl:if test="position() mod 2 = 1"> <xsl:attribute name="bgcolor">#FFFFFF</xsl:attribute> </xsl:if> <xsl:value-of select="long"/> </td> </tr> </xsl:for-each> </table> </xsl:template> </xsl:stylesheet> ----------------------------------------------------- Dr. Michael K. Neylon - [email protected] || "You've left the lens cap of your mind on again, Pinky" - The Brain "I can see my house from here!" It's not what you know, but knowing how to find it if you don't know that's important Replies are listed 'Best First'. Re: XSLT vs Templating, Part 2 by Matts (Deacon) on Feb 07, 2002 at 10:37 UTC Wow, nice set of tests! It's a little unfair on the XSLT stuff, since you're going from DBI to XML to String to XML to XSLT. I need to update XML::Generator::DBI to output SAX2 (which I'm doing right now), after which you'll be able to go direct from XML::Generator::DBI into XML::LibXML::SAX::Builder, and skip the string phase completely. I may even run the tests to see how it fares for me. I'll reply here after I've made those updates! Alternatively you can cause XML::Generator::DBI to use the XML::Handler::BuildDOM as a handler in order to generate a DOM object that can then be fed to XML::XSLT again cutting out the intermediate reparse of the XML. I am toying with the idea of providing a utility to create modules that encapsulate preparsed DOM objects from XSLT so that step can be taken out as well if anyone is interested /J\ OK, I hacked on this a bit. It seems that almost without a doubt the bottleneck is in XML::Generator::DBI (I added a tiny benchmark to the single sub, and generate took 11u, xslt took 1u and output took 1u. So I'm going to update it to SAX2, and make sure it goes faster. There's a lot of cruft in there, and I've learnt some about how to make DBI calls faster since I wrote it. You can make things faster by simply changing your generator to: my $generator = XML::Generator::DBI->new( Handler => XML::LibXML::SAX::Builder->new(), dbh => $dbh, RowElement => "monk" ); Which seems to work (even without updating XML::Generator::DBI). Then you have to add ->toString to your calls for the TT level stuff. It makes all three faster - by about 40%. Ah, it makes a big difference if you also turn off indenting on XML::Generator::DBI (pass NoIndent => 1 to new()). I think that should be the default! Re: XSLT vs Templating, Part 2 by gellyfish (Monsignor) on Feb 07, 2002 at 14:41 UTC I do realize that there are other XML/XSLT engines out there, and strictly speaking, this method employs non-pure Perl parts I think that XML::XSLT would have come out very badly in speed terms - I have been favouring ease of maintenance and consistency in my recent refactoring with little concern for speed and efficiency :) Also of course it does ultimately depend on XML::Parser which has a non-perl dependency /J\ I didn't realize this existed as well; however, I would suspect that in the DBI->XML->XSLT path, the choice of which parser or XSLT library that you use is going to have some, but not a significant impact on the overall speed, assuming that, as with LibXSLT and XML::Parser, there's a non-perl component. As demonstrated by the two pure-perl routes, any significant processing of XML is going to need a boost by having pre-compiled code available for at least parsing the system. However, I think I'll add the XML::XSLT case as well as skipping the DOM->string->DOM conversion that I do as gellyfish mentioned in reply to Matts response above, as additional tests, just for completeness. (I could also probably improve the xslt sheet itself, for the row coloring code doesn't seem to be overly efficient). I also understand that GNOME's LibXML (which XML::LibXML uses) is not fully complient with recent W3C specs, so that may be a notch against it, as I'd expect a fully complient library to be a bit more rigorous and thus more CPU demanding. ----------------------------------------------------- Dr. Michael K. Neylon - [email protected] || "You've left the lens cap of your mind on again, Pinky" - The Brain "I can see my house from here!" It's not what you know, but knowing how to find it if you don't know that's important There are no known areas where LibXML/LibXSLT are not fully compliant with the relevant W3C specs (they're even authored by the W3C!). If you do know of areas where they fall down on this please do let me know and I'll forward the info on to Daniel. Re: XSLT vs Templating, Part 2 by perrin (Chancellor) on Feb 07, 2002 at 17:16 UTC Two general DBI performance tips: 1) Use prepare_cached() when you can. In this case, you can. 2) Use bind_columns(). It's faster and saves some memory. I agree, in general you'd use these, but to be fair to XML::Generator::DBI, which to the best of my knowledge you cannot take advantage of these features, I kept them off. Given the baseline performance of the DBI/print system, which did fifty iters without blinking, I would suspect these tricks would only slighty improve the non-XML systems (which all were faster to start with). ----------------------------------------------------- Dr. Michael K. Neylon - [email protected] || "You've left the lens cap of your mind on again, Pinky" - The Brain "I can see my house from here!" It's not what you know, but knowing how to find it if you don't know that's important Actually I just uploaded 0.02 of XML::Generator::DBI to CPAN, which can take a prepared statement instead of as string as a parameter to execute(). And it already did the bind_columns trick. It also now makes Indent off the default, so it should be about as fast as I can make it. The other thing I did was change your '//monks' in your XSLT to just 'monks', which is much faster, since it doesn't examine every single child node. Technically, you could do the XML side of things without XML::Generator::DBI, and it would be faster since you could code if for your specific case rather than making a general DBI-->XML tool. Then you could take full advantage of all DBI speed tweaks. However, you would probably end up writing a lot more code. When comparing performance, I do think that being unable to fully use DBI's performance features matters. Personally though, I would be more interested in knowing how easy it is to code the different approaches, since they all seem to perform well enough. (Updates) Re: XSLT vs Templating, Part 2 by Masem (Monsignor) on Feb 08, 2002 at 01:32 UTC I've added the various suggestions in this thread: using bind_columns and prepare_cached from perrin, the use of XML::LibXML::SAX::Builder to avoid creating a temporary string, and using matts' new XML::Generator::DBI version to take advantage of the sth handler. I've also added code that uses XML::XSLT, which surprising is VERY bad at least as I've used it (did I use it wrong? I couldn't get a pure XML DOM document to work with it...) The results are: XML and XSLT, String Intermediate, XML::XSLT 0.159/s + -- -62% -88% -94% -95% -96% -99% + -100% XML and TT2/XPath 0.422/s + 166% -- -67% -84% -87% -90% -98% - +100% XML and TT2/Simple 1.29/s + 713% 205% -- -51% -60% -69% -95% - +99% XML and XSLT, String Intermediate, LibXSLT 2.66/s + 1573% 529% 106% -- -17% -36% -90% - +98% XML and XSLT, XML Intermediate, LibXSLT 3.20/s + 1915% 657% 148% 20% -- -23% -87% - +98% DBI and TT2 4.13/s + 2505% 879% 221% 56% 29% -- -84% - +98% DBI and CGI 25.4/s + 15931% 5924% 1873% 858% 696% 516% -- - +85% DBI and Print 172/s + 108526% 40717% 13269% 6391% 5291% 4071% 578% - +- Nothing surprising about the order, but as you can see, with the various improvements, the behavior of LibXSLT getting close to that of DBI and TT2. There may be other improves that one could do, but as it stands, I think some of the benefits of using XSLT can outweight the small performance penalty compared to TT2 usage. Except for changing the XSLT template as Matts indicated, the rest of the files remain unchanged. Here's the updated test code, however... #!/usr/bin/perl -w use strict; use DBI; use CGI qw/-no_xhtml :standard/; use XML::Generator::DBI; use XML::Handler::YAWriter; use XML::LibXML::SAX::Builder; use XML::LibXML; use XML::LibXSLT; use XML::XSLT; use Template; use Data::Dumper; use Benchmark qw( cmpthese ); $Template::Config::STASH = 'Template::Stash'; my $dbh = DBI->connect( "dbi:Pg:dbname=monksdb", "", "" ) or die $DBI::errstr; my $query = "SELECT id, name, xp, lat, long FROM monks ORDER BY lat LI +MIT 25"; my $sth = $dbh->prepare_cached( $query ) or die $DBI::errstr; my $ya = XML::Handler::YAWriter->new( AsString => 1 ); my $generator = XML::Generator::DBI->new( Handler => $ya, dbh => $dbh, RowElement => "monk" ); my $generator2 = XML::Generator::DBI->new( Handler => XML::LibXML::SAX::Builder->new(), dbh => $dbh, RowElement => "monk" ); my $tt2 = Template->new; my $tt2_nonXML = "template1.tt2"; my $tt2_XML = "template2.tt2"; my $tt2_XPath = "template3.tt2"; my $parser = new XML::LibXML; my $xslt = new XML::LibXSLT; my $sheet = "xslt_sheet.xsl"; my $slt = $parser->parse_file( $sheet ); my $stylesheet = $xslt->parse_stylesheet( $slt ); my $stylesheet2 = XML::XSLT->new( $sheet, warnings => 1 ); open FILE, ">/dev/null" or die "Cannot write out: $!"; my $target = \*FILE; cmpthese( 100, { "DBI and Print" => \&generate_from_straight_dbi_and_print, "DBI and CGI" => \&generate_from_straight_dbi_and_cgi, "DBI and TT2" => \&generate_from_straight_dbi_and_tt2, "XML and TT2/Simple" => \&generate_from_xml_and_tt2_and_xmlsimp +le, "XML and TT2/XPath" => \&generate_from_xml_and_tt2_and_xpath, "XML and XSLT, String Intermediate, LibXSLT " => \&generate_fro +m_xml_and_xslt_string, "XML and XSLT, XML Intermediate, LibXSLT" => \&generate_from_xm +l_and_xslt_xml, "XML and XSLT, String Intermediate, XML::XSLT" => \&generate_fr +om_xmlxslt_xml } ); close FILE; # Here, we use straight DBI calls and print calls to mark up # the table sub generate_from_straight_dbi_and_print { # my $target = shift; $sth->execute() or die $DBI::errstr; my ( $id, $name, $xp, $lat, $long ); $sth->bind_columns( \$id, \$name, \$xp, \$lat, \$long ); print $target "Content-Type: text/html\n\n"; print $target "<html><body><table>\n"; my $colorrow = 0; while ( $sth->fetch() ) { $colorrow = !$colorrow; my $color = ( $colorrow ) ? "#FFFFFF" : "#D0D0FF"; print $target <<ROW; <tr> <td bgcolor="$color">$id</td> <td bgcolor="$color">$name</td> <td bgcolor="$color">$xp</td> <td bgcolor="$color">$lat</td> <td bgcolor="$color">$long</td> </tr> ROW ; } print $target "</table></body></html>"; } # Here, we group the results as to make it easier for CGI # to print out (avoiding large HERE docs...) sub generate_from_straight_dbi_and_cgi { # my $target = shift; $sth->execute() or die $DBI::errstr; my ( $id, $name, $xp, $lat, $long ); $sth->bind_columns( \$id, \$name, \$xp, \$lat, \$long ); my @data; while ( $sth->fetch ) { push @data, [$id, $name, $xp, $lat, $long]; +} my $colorrow = 0; print $target header('text/html'), start_html, table( map { $colorrow = !$colorrow; my $color = ( $colorrow ) ? "#FFFFFF" : "#D0D0FF"; Tr( td( {-bgcolor=>$color}, $_ ) ) } @data ), end_html; } # Here, we pass the results to Template Toolkit for printing sub generate_from_straight_dbi_and_tt2 { # my $target = shift; $sth->execute() or die $DBI::errstr; my ( $id, $name, $xp, $lat, $long ); $sth->bind_columns( \$id, \$name, \$xp, \$lat, \$long ); my @data; while ( $sth->fetch ) { push @data, [$id, $name, $xp, $lat, $long]; +} print $target header; $tt2->process( $tt2_nonXML, { monks => \@data }, $target ) or die $tt2->error(),"\n"; } # Use TT2 again, but now pass it XML and use the XPath module # for parsing sub generate_from_xml_and_tt2_and_xmlsimple { # my $target = shift; my $xml = $generator->execute( $query ); print $target header; $tt2->process( $tt2_XML, { results => $xml }, $target ) or die $tt2->error(), "\n"; } # Use TT2 again, but now pass it XML and use the XPath module # for parsing sub generate_from_xml_and_tt2_and_xpath { # my $target = shift; my $xml = $generator->execute( $query ); print $target header; $tt2->process( $tt2_XPath, { results => $xml }, $target ) or die $tt2->error(), "\n"; } # Use LibXML/LibXSLT to parse the results sub generate_from_xml_and_xslt_string { # my $target = shift; my $xml = $generator->execute( $sth ); print $target header; my $source = $parser->parse_string( $xml ); my $results = $stylesheet->transform( $source ); print $target $stylesheet->output_string( $results ); } sub generate_from_xml_and_xslt_xml { # my $target = shift; my $xml = $generator2->execute( $sth ); print $target header; my $results = $stylesheet->transform( $xml ); print $target $stylesheet->output_string( $results ); } sub generate_from_xmlxslt_xml { # my $target = shift; my $xml = $generator->execute( $sth ); print $target header; print $target $stylesheet2->serve( $xml ); } ----------------------------------------------------- Dr. Michael K. Neylon - [email protected] || "You've left the lens cap of your mind on again, Pinky" - The Brain "I can see my house from here!" It's not what you know, but knowing how to find it if you don't know that's important I've also added code that uses XML::XSLT , which surprising is VERY bad at least as I've used it (did I use it wrong? I couldn't get a pure XML DOM document to work with it...) Heh, I discovered that there was an evil interaction between XML::Generator::DBI and XML::Handler::BuildDOM - I have sent patches to both Matt and Tim ;-} I'll update this later with an example of how you might do this when I get a minute Update: Matt has applied a fix to XML::Generator::DBI and this code now works as billed : #!/usr/bin/perl -w use strict; use XML::XSLT; use XML::Generator::DBI; use XML::Handler::BuildDOM; use XML::DOM; use DBI; my $ya = XML::Handler::BuildDOM->new(); my $dbh = DBI->connect("dbi:Informix:tdcusers",'','',{ChopBlanks => 1} +); my $generator = XML::Generator::DBI->new( Handler => $ya, dbh => $dbh ); my $style =<<EOFOO; <?xml version="1.0"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" versi +on="1.0"> <xsl:output encoding = "iso-8859-1"/> <xsl:template match = "select"> <html> <head> <title> Test </title> </head> <body> <table> <xsl:for-each select="row"> <tr> <td> <xsl:value-of select="prefix" /> </td> <td> <xsl:value-of select="code" /> </td> <td> <xsl:value-of select="ctext" /> </td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet> EOFOO my $stylesheet = XML::XSLT->new($style); my $sql = 'select * from text_codes'; my $dom = $generator->execute($sql); $stylesheet->transform(Source => $dom); $stylesheet->toString(); You will obviously want to use your own database and an appropriate stylesheet :) /J\ Log In? Username: Password: What's my password? Create A New User Node Status? node history Node Type: perlmeditation [id://143791] Approved by root help Chatterbox? and all is quiet... How do I use this? | Other CB clients Other Users? Others chanting in the Monastery: (4) As of 2017-03-29 05:40 GMT Sections? Information? Find Nodes? Leftovers? Voting Booth? Should Pluto Get Its Planethood Back? Results (343 votes). Check out past polls.
__label__pos
0.538676
Home> using system; using system.collections.generic; using system.linq; using system.text; using system.collections; using system.text.regularexpressions; using system.data.sqlclient; namespace consoleapplication1 { class program { //Note:Go is not allowed in the cmd.executenonquery () statement, and there is an error. static string connectionstring="server=20111011-2204 \\ sqlserver2008;uid=ecuser;pwd=1234;database=stu;"; static void main (string [] args) { string sql = @ "alter table student add datebak varchar (16) go update student set datebak=convert (char, getdate (), 101) go update student set memo=datebak go alter table student drop column datebak go "; console.writeline ("1. No transaction:"); executesqlwithgo (sql); console.writeline ("2. Use transaction:"); executesqlwithgousetran (sql); console.readline (); } public static void executesqlwithgo (string sql) { int effectedrows=0; using (sqlconnection conn=new sqlconnection (connectionstring)) { conn.open (); sqlcommand cmd=new sqlcommand (); cmd.connection=conn; try { //Note:Here we use newline _ followed by 0 or more spaces _ followed by go to split the string string [] sqlarr=regex.split (sql.trim (), "\ r \ n \\ s * go", regexoptions.ignorecase); foreach (string strsql in sqlarr) { if (strsql.trim (). length>1&&strsql.trim ()!="\ r \ n") { cmd.commandtext=strsql; effectedrows=cmd.executenonquery (); } } } catch (system.data.sqlclient.sqlexception e) { throw new exception (e.message); } finally { conn.close (); } } } public static void executesqlwithgousetran (string sql) { using (sqlconnection conn=new sqlconnection (connectionstring)) { conn.open (); sqlcommand cmd=new sqlcommand (); cmd.connection=conn; sqltransaction tx=conn.begintransaction (); cmd.transaction=tx; try { //Note:Here we use newline _ followed by 0 or more spaces _ followed by go to split the string string [] sqlarr=regex.split (sql.trim (), "\ r \ n \\ s * go", regexoptions.ignorecase); foreach (string strsql in sqlarr) { if (strsql.trim (). length>1&&strsql.trim ()!="\ r \ n") { cmd.commandtext=strsql; cmd.executenonquery (); } } tx.commit (); } catch (system.data.sqlclient.sqlexception e) { tx.rollback (); throw new exception (e.message); } finally { conn.close (); } } } } } c • Previous Adding RadioButton to ASPNET GridView cannot be selected • Next Knowledge of Java's use of this keyword and method overloading
__label__pos
0.999949
ChannelAdvisor - Authorization Failed Options I'm attempting to establish a connection to ChannelAdvisor. I've got my dev-key and password configured in Domo and it authenticates properly. I've even followed their documentation (https://developer.channeladvisor.com/authorization/soap-api-credentials-flow/rest-request-access-endpoint) to make sure the profile / account IDs are approved for my developer key.   The odd part is that when I go to create a new connection using that accout it will authenticate itself correctly when pulling a list of profiles / account ids from the list for Account Name, however when I attempt to actually run the connector it always fails with "Authorization Failed" and no additional helpful information.   I'm able to call the ChannelAdvisor API just fine outside of Domo and retrieve results but was wondering if anyone else had issues pulling data with the ChannelAdvisor connector or if there's an actual issue with the connector itself. **Was this post helpful? Click Agree or Like below** **Did this solve your problem? Accept it as a solution!** Best Answer • GrantSmith GrantSmith Coach Answer ✓ Options This ended up being an issue with ChannelAdvisor - They needed to approve the API connection on their end. **Was this post helpful? Click Agree or Like below** **Did this solve your problem? Accept it as a solution!** Answers • GrantSmith GrantSmith Coach Answer ✓ Options This ended up being an issue with ChannelAdvisor - They needed to approve the API connection on their end. **Was this post helpful? Click Agree or Like below** **Did this solve your problem? Accept it as a solution!**
__label__pos
0.684886
Skip to main content Version: 4.xx.xx Theme Ant Design allows you to customize design tokens to satisfy UI diversity from business or brand requirements, including primary color, border radius, border color, etc. Design Tokens are the smallest element that affects the theme. By modifying the Design Token, we can present various themes or components. Refer to the Ant Design documentation for more information about customizing Ant Design theme. Theme customization <ConfigProvider/> component can be used to change theme. It is not required if you decide to use the default theme. Overriding the themes You can override or extend the default themes. You can also create your own theme. Let's see how to do this. import { Refine } from "@refinedev/core"; import { Layout, ConfigProvider } from "@refinedev/antd"; const API_URL = "https://api.fake-rest.refine.dev"; const App: React.FC = () => { return ( <ConfigProvider theme={{ components: { Button: { borderRadius: 0, }, Typography: { colorTextHeading: "#1890ff", }, }, token: { colorPrimary: "#f0f", }, }} > <Refine /* ... */ > <Layout> {/* ... */} </Layout> </Refine> </ConfigProvider> ); }; Use Preset Algorithms Themes with different styles can be quickly generated by modifying algorithm. Ant Design 5.0 provides three sets of preset algorithms by default, which are default algorithm theme.defaultAlgorithm, dark algorithm theme.darkAlgorithm and compact algorithm theme.compactAlgorithm. You can switch algorithms by modifying the algorithm property of theme in <ConfigProvider/>. Refer to the Ant Design documentation for more information about customizing Ant Design theme. Switching to dark theme Let's start with adding a switch to the Header component. import { Space, Button } from "antd"; interface HeaderProps { theme: "light" | "dark"; setTheme: (theme: "light" | "dark") => void; } const Header: FC<HeaderProps> = (props) => { return ( <Space direction="vertical" align="end" style={{ padding: "1rem", }} > <Button onClick={() => { props.setTheme(props.theme === "light" ? "dark" : "light"); }} icon={props.theme === "light" ? <IconMoonStars /> : <IconSun />} /> </Space> ); }; Then, we can use the theme property of the ConfigProvider component to switch between light and dark themes. import { Refine } from "@refinedev/core"; import { ConfigProvider, theme } from "antd"; import { Header } from "./Header"; const App: React.FC = () => { const [currentTheme, setCurrentTheme] = useState<"light" | "dark">("dark"); return ( <ConfigProvider theme={{ algorithm: currentTheme === "light" ? theme.defaultAlgorithm : theme.darkAlgorithm, }} > <Refine /* ... */ > <Layout Header={Header}> {/* ... */} </Layout> </Refine> </ConfigProvider> ); }; tip If you want to customize the default layout elements provided with @refinedev/antd package, check out the Custom Layout tutorial. Example RUN IN YOUR LOCAL npm create refine-app@latest -- --example customization-theme-antd
__label__pos
0.990755
Problem 2.3 from SCHAUM’S OUTLINE OF Principles of COMPUTER SCIENCE Count the primitive operations in your algorithm to find the mean. What is the order of growth of your mean algorithm? setup 4 loop 5 * length (length = n, the count of numbers to be averaged) return 2 Order of growth : Θ( n ) related source : SCHAUM’S OUTLINE OF Principles of COMPUTER SCIENCE related resource : http://www.cs.nott.ac.uk/~nza/G52ADS/lecture2.pdf
__label__pos
0.999958
Html submission by ValidateInput and AllowHtml attribute in MVC4 Shailendra Chauhan  Print   2 min read   13 Mar 2013 26 Sep 2016 Intermediate 164K Sometimes, your required to save Html data in the database. By default Asp.Net MVC doesn't allow a user to submit html for avoiding Cross Site Scripting attack to your application. Suppose you have below form and you can submit the Html in description textarea. If you do this and try to submit it you will get the error as shown in fig. However, if you want to do this, you can achieve it by using ValidateInput attribute and AllowHtml attribute. ValidateInput Attribute This is the simple way to allow the submission of HTML. This attribute can enable or disable input validation at the controller level or at any action method. ValidateInput at Controller Level [ValidateInput(false)] public class HomeController : Controller { public ActionResult AddArticle() { return View(); } [HttpPost] public ActionResult AddArticle(BlogModel blog) { if (ModelState.IsValid) { } return View(); } } Now, the user can submit Html for this Controller successfully. ValidateInput at Action Method Level public class HomeController : Controller { public ActionResult AddArticle() { return View(); } [ValidateInput(false)] [HttpPost] public ActionResult AddArticle(BlogModel blog) { if (ModelState.IsValid) { } return View(); } } Now, the user can submit Html for this action method successfully. Limitation of ValidateInput attribute This attribute also has the issue since this allow the Html input for all the properties and that is unsafe. Since you have enable Html input for only one-two properties then how to do this. To allow Html input for a single property, you should use AllowHtml attribute. AllowHtml Attribute This is the best way to allow the submission of HTML for a particular property. This attribute will be added to the property of a model to bypass input validation for that property only. This explicit declaration is more secure than the ValidateInput attribute. using System.ComponentModel.DataAnnotations; using System.Web.Mvc; public class BlogModel { [Required] [Display(Name = "Title")] public string Title { get; set; } [AllowHtml] [Required] [Display(Name = "Description")] public string Description{ get; set; } } Make sure, you have removed the ValidateInput attribute from Conroller or Action method. Now, the user can submit Html only for the Description property successfully. What do you think? I hope you will enjoy the tips and tricks while programming with MVC Razor. I would like to have feedback from my blog readers. Your valuable feedback, question, or comments about this article are always welcome. Take our free skill tests to evaluate your skill! In less than 5 minutes, with our skill test, you can identify your knowledge gaps and strengths. Learn to Crack Your Technical Interview + + Accept cookies and close this message
__label__pos
0.960863
Now Hiring: Are you a driven and motivated software consultant? Blog What does ‘one true source’ of information mean? What is one true source of information.00_00_00_00.Still001 Videos What does ‘one true source’ of information mean? With over 25 years experience in delivering and implementing Construction software solutions to the industry, Greg Joyce has developed a series of videos to provide more insight into how you can get the most out of your construction technology software. Having all of your data syncronised in one location is crucial. The term one true source is how we determine your sucess. %d bloggers like this:
__label__pos
1
Jump to content [TOPIC: topicViewTemplate] [GLOBAL: userSmallPhoto] Photo Different results with identical build settings and config file Started by thomas6 Oct 06 2017 06:14 AM 3 replies to this topic fullscreen build config settings [TOPIC CONTROLS] This topic has been archived. This means that you cannot reply to this topic. [/TOPIC CONTROLS] [modOptionsDropdown] [/modOptionsDropdown] [reputationFilter] [TOPIC: post.html] #1 thomas6 [GLOBAL: userInfoPane.html] thomas6 • Contributor • 980 posts • Corona SDK Hi!   Strangeness here: I have two projects. The first and oldest runs fullscreen nicely from the start. I was happy with this so I just copied the build settings and config file over to the new project. However, this new one runs in a floating window with title bar, strangely enough.   Not only that, but I also have the impression that some settings don't behave as they should. For the record, my build settings are: settings = { win32 = { preferenceStorage = "sqlite", singleInstance = true, }, window = { defaultMode = "normal", defaultViewWidth = 1920, defaultViewHeight = 1080, resizable = false, enableCloseButton = true, }, orientation = { default = "landscape", supported = { "landscape", "landscapeLeft", "landscapeRight" }, } } And my config.lua is: application = { content = { fps = 60, audioPlayFrequency = 44100, width = 1080, height = 1920, scale = "letterBox", antialias = true, }, } As stated above, these are identical for both applications. Is anyone aware of bugs with this? [TOPIC: post.html] #2 thomas6 [GLOBAL: userInfoPane.html] thomas6 • Contributor • 980 posts • Corona SDK Here's some visual reference. I have even copied the full project completely into the two folders. One starts fullscreen, the other windowed.   Is it possible the older app maybe wrote some setting somewhere in the sqlite or registry file, and even after changing it, it kept the old style of windowing for itself, but the new app just uses the real new settings?   By the way, my goal is just to have my app run fullscreen on my 1920 x 1080 monitor.   Attached File  IMG_8945.JPG   83.53KB   0 downloads Attached File  IMG_8946.JPG   66.6KB   0 downloads [TOPIC: post.html] #3 thomas6 [GLOBAL: userInfoPane.html] thomas6 • Contributor • 980 posts • Corona SDK For some reason my pictures are upside down, but you get the picture, right? [TOPIC: post.html] #4 Rob Miracle [GLOBAL: userInfoPane.html] Rob Miracle • Moderator • 26,072 posts • Enterprise You probably should use:   defaultMode = "fullscreen",   in your build.settings if you want full screen. The config.lua is about setting your content scale. It has little to do with windows modes on desktop. All those windows size controls are in build.settings.   Rob [topic_controls] [/topic_controls]  
__label__pos
0.975971
What is 557000 percent of 330000000 - step by step solution Simple and best practice solution for 557000% of 330000000. Check how easy it is, and learn it for the future. Our solution is simple, and easy to understand, so don`t hesitate to use it as a solution of your homework. If it's not what You are looking for type in the calculator fields your own values, and You will get the solution. To get the solution, we are looking for, we need to point out what we know. 1. We assume, that the number 330000000 is 100% - because it's the output value of the task. 2. We assume, that x is the value we are looking for. 3. If 330000000 is 100%, so we can write it down as 330000000=100%. 4. We know, that x is 557000% of the output value, so we can write it down as x=557000%. 5. Now we have two simple equations: 1) 330000000=100% 2) x=557000% where left sides of both of them have the same units, and both right sides have the same units, so we can do something like that: 330000000/x=100%/557000% 6. Now we just have to solve the simple equation, and we will get the solution we are looking for. 7. Solution for what is 557000% of 330000000 330000000/x=100/557000 (330000000/x)*x=(100/557000)*x       - we multiply both sides of the equation by x 330000000=0.00017953321364452*x       - we divide both sides of the equation by (0.00017953321364452) to get x 330000000/0.00017953321364452=x 1838100000000=x x=1838100000000 now we have: 557000% of 330000000=1838100000000 You can always share this solution See similar equations: | What is 0.667 percent of 1600 - step by step solution | | What is 0.25 percent of 2.36 - step by step solution | | 41 is what percent of 190108 - step by step solution | | What is 41 percent of 190108 - step by step solution | | 25000 is what percent of 78000000000 - step by step solution | | What is 2 percent of 1171.62 - step by step solution | | 1500 is what percent of 3549 - step by step solution | | 27600 is what percent of 18.8 - step by step solution | | What is 3.165 percent of 18.99 - step by step solution | | What is 18.8 percent of 27600 - step by step solution | | What is 2 percent of 1148.65 - step by step solution | | What is 2 percent of 1126.15 - step by step solution | | What is 7 percent of 215.98 - step by step solution | | What is 2 percent of 1104.07 - step by step solution | | 2251 is what percent of 2300 - step by step solution | | 2300 is what percent of 2251 - step by step solution | | 700 is what percent of 2698 - step by step solution | | What is 2 percent of 1061208 - step by step solution | | 71.5 is what percent of 1700 - step by step solution | | 577000 is what percent of 32500000 - step by step solution | | 412.11 is what percent of 1076 - step by step solution | | 177 is what percent of 2492 - step by step solution | | 33090 is what percent of 260000 - step by step solution | | 328000 is what percent of 5000 - step by step solution | | 953 is what percent of 1960 - step by step solution | | What is 70 percent of 24050 - step by step solution | | 627.26 is what percent of 202.157 - step by step solution | | 453 is what percent of 1350 - step by step solution | | What is 453 percent of 1350 - step by step solution | | What is 150 percent of 4.26 - step by step solution | | What is 6 percent of 48.25 - step by step solution | | 28800 is what percent of 450000 - step by step solution | Equations solver categories
__label__pos
0.983327
blob: 2bb9e18d9ee134a078c664c966c2de6d2d4342f5 [file] [log] [blame] /* * Copyright (C) 2009 Eric Benard - [email protected] * * Based on pcm038.c which is : * Copyright 2007 Robert Schwebel <[email protected]>, Pengutronix * Copyright (C) 2008 Juergen Beisert ([email protected]) * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, * MA 02110-1301, USA. */ #include <linux/i2c.h> #include <linux/io.h> #include <linux/mtd/plat-ram.h> #include <linux/mtd/physmap.h> #include <linux/platform_device.h> #include <linux/serial_8250.h> #include <linux/usb/otg.h> #include <linux/usb/ulpi.h> #include <asm/mach-types.h> #include <asm/mach/arch.h> #include <asm/mach/time.h> #include <asm/mach/map.h> #include <mach/eukrea-baseboards.h> #include <mach/common.h> #include <mach/hardware.h> #include <mach/iomux-mx27.h> #include <mach/ulpi.h> #include "devices-imx27.h" static const int eukrea_cpuimx27_pins[] __initconst = { /* UART1 */ PE12_PF_UART1_TXD, PE13_PF_UART1_RXD, PE14_PF_UART1_CTS, PE15_PF_UART1_RTS, /* UART4 */ #if defined(MACH_EUKREA_CPUIMX27_USEUART4) PB26_AF_UART4_RTS, PB28_AF_UART4_TXD, PB29_AF_UART4_CTS, PB31_AF_UART4_RXD, #endif /* FEC */ PD0_AIN_FEC_TXD0, PD1_AIN_FEC_TXD1, PD2_AIN_FEC_TXD2, PD3_AIN_FEC_TXD3, PD4_AOUT_FEC_RX_ER, PD5_AOUT_FEC_RXD1, PD6_AOUT_FEC_RXD2, PD7_AOUT_FEC_RXD3, PD8_AF_FEC_MDIO, PD9_AIN_FEC_MDC, PD10_AOUT_FEC_CRS, PD11_AOUT_FEC_TX_CLK, PD12_AOUT_FEC_RXD0, PD13_AOUT_FEC_RX_DV, PD14_AOUT_FEC_RX_CLK, PD15_AOUT_FEC_COL, PD16_AIN_FEC_TX_ER, PF23_AIN_FEC_TX_EN, /* I2C1 */ PD17_PF_I2C_DATA, PD18_PF_I2C_CLK, /* SDHC2 */ #if defined(CONFIG_MACH_EUKREA_CPUIMX27_USESDHC2) PB4_PF_SD2_D0, PB5_PF_SD2_D1, PB6_PF_SD2_D2, PB7_PF_SD2_D3, PB8_PF_SD2_CMD, PB9_PF_SD2_CLK, #endif #if defined(CONFIG_SERIAL_8250) || defined(CONFIG_SERIAL_8250_MODULE) /* Quad UART's IRQ */ GPIO_PORTB | 22 | GPIO_GPIO | GPIO_IN, GPIO_PORTB | 23 | GPIO_GPIO | GPIO_IN, GPIO_PORTB | 27 | GPIO_GPIO | GPIO_IN, GPIO_PORTB | 30 | GPIO_GPIO | GPIO_IN, #endif /* OTG */ PC7_PF_USBOTG_DATA5, PC8_PF_USBOTG_DATA6, PC9_PF_USBOTG_DATA0, PC10_PF_USBOTG_DATA2, PC11_PF_USBOTG_DATA1, PC12_PF_USBOTG_DATA4, PC13_PF_USBOTG_DATA3, PE0_PF_USBOTG_NXT, PE1_PF_USBOTG_STP, PE2_PF_USBOTG_DIR, PE24_PF_USBOTG_CLK, PE25_PF_USBOTG_DATA7, /* USBH2 */ PA0_PF_USBH2_CLK, PA1_PF_USBH2_DIR, PA2_PF_USBH2_DATA7, PA3_PF_USBH2_NXT, PA4_PF_USBH2_STP, PD19_AF_USBH2_DATA4, PD20_AF_USBH2_DATA3, PD21_AF_USBH2_DATA6, PD22_AF_USBH2_DATA0, PD23_AF_USBH2_DATA2, PD24_AF_USBH2_DATA1, PD26_AF_USBH2_DATA5, }; static struct physmap_flash_data eukrea_cpuimx27_flash_data = { .width = 2, }; static struct resource eukrea_cpuimx27_flash_resource = { .start = 0xc0000000, .end = 0xc3ffffff, .flags = IORESOURCE_MEM, }; static struct platform_device eukrea_cpuimx27_nor_mtd_device = { .name = "physmap-flash", .id = 0, .dev = { .platform_data = &eukrea_cpuimx27_flash_data, }, .num_resources = 1, .resource = &eukrea_cpuimx27_flash_resource, }; static const struct imxuart_platform_data uart_pdata __initconst = { .flags = IMXUART_HAVE_RTSCTS, }; static const struct mxc_nand_platform_data cpuimx27_nand_board_info __initconst = { .width = 1, .hw_ecc = 1, }; static struct platform_device *platform_devices[] __initdata = { &eukrea_cpuimx27_nor_mtd_device, }; static const struct imxi2c_platform_data cpuimx27_i2c1_data __initconst = { .bitrate = 100000, }; static struct i2c_board_info eukrea_cpuimx27_i2c_devices[] = { { I2C_BOARD_INFO("pcf8563", 0x51), }, }; #if defined(CONFIG_SERIAL_8250) || defined(CONFIG_SERIAL_8250_MODULE) static struct plat_serial8250_port serial_platform_data[] = { { .mapbase = (unsigned long)(MX27_CS3_BASE_ADDR + 0x200000), /* irq number is run-time assigned */ .uartclk = 14745600, .regshift = 1, .iotype = UPIO_MEM, .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_IOREMAP, }, { .mapbase = (unsigned long)(MX27_CS3_BASE_ADDR + 0x400000), /* irq number is run-time assigned */ .uartclk = 14745600, .regshift = 1, .iotype = UPIO_MEM, .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_IOREMAP, }, { .mapbase = (unsigned long)(MX27_CS3_BASE_ADDR + 0x800000), /* irq number is run-time assigned */ .uartclk = 14745600, .regshift = 1, .iotype = UPIO_MEM, .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_IOREMAP, }, { .mapbase = (unsigned long)(MX27_CS3_BASE_ADDR + 0x1000000), /* irq number is run-time assigned */ .uartclk = 14745600, .regshift = 1, .iotype = UPIO_MEM, .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_IOREMAP, }, { } }; static struct platform_device serial_device = { .name = "serial8250", .id = 0, .dev = { .platform_data = serial_platform_data, }, }; #endif static int eukrea_cpuimx27_otg_init(struct platform_device *pdev) { return mx27_initialize_usb_hw(pdev->id, MXC_EHCI_INTERFACE_DIFF_UNI); } static struct mxc_usbh_platform_data otg_pdata __initdata = { .init = eukrea_cpuimx27_otg_init, .portsc = MXC_EHCI_MODE_ULPI, }; static int eukrea_cpuimx27_usbh2_init(struct platform_device *pdev) { return mx27_initialize_usb_hw(pdev->id, MXC_EHCI_INTERFACE_DIFF_UNI); } static struct mxc_usbh_platform_data usbh2_pdata __initdata = { .init = eukrea_cpuimx27_usbh2_init, .portsc = MXC_EHCI_MODE_ULPI, }; static const struct fsl_usb2_platform_data otg_device_pdata __initconst = { .operating_mode = FSL_USB2_DR_DEVICE, .phy_mode = FSL_USB2_PHY_ULPI, }; static bool otg_mode_host __initdata; static int __init eukrea_cpuimx27_otg_mode(char *options) { if (!strcmp(options, "host")) otg_mode_host = true; else if (!strcmp(options, "device")) otg_mode_host = false; else pr_info("otg_mode neither \"host\" nor \"device\". " "Defaulting to device\n"); return 1; } __setup("otg_mode=", eukrea_cpuimx27_otg_mode); static void __init eukrea_cpuimx27_init(void) { imx27_soc_init(); mxc_gpio_setup_multiple_pins(eukrea_cpuimx27_pins, ARRAY_SIZE(eukrea_cpuimx27_pins), "CPUIMX27"); imx27_add_imx_uart0(&uart_pdata); imx27_add_mxc_nand(&cpuimx27_nand_board_info); i2c_register_board_info(0, eukrea_cpuimx27_i2c_devices, ARRAY_SIZE(eukrea_cpuimx27_i2c_devices)); imx27_add_imx_i2c(0, &cpuimx27_i2c1_data); imx27_add_fec(NULL); platform_add_devices(platform_devices, ARRAY_SIZE(platform_devices)); imx27_add_imx2_wdt(); imx27_add_mxc_w1(); #if defined(CONFIG_MACH_EUKREA_CPUIMX27_USESDHC2) /* SDHC2 can be used for Wifi */ imx27_add_mxc_mmc(1, NULL); #endif #if defined(MACH_EUKREA_CPUIMX27_USEUART4) /* in which case UART4 is also used for Bluetooth */ imx27_add_imx_uart3(&uart_pdata); #endif #if defined(CONFIG_SERIAL_8250) || defined(CONFIG_SERIAL_8250_MODULE) serial_platform_data[0].irq = IMX_GPIO_NR(2, 23); serial_platform_data[1].irq = IMX_GPIO_NR(2, 22); serial_platform_data[2].irq = IMX_GPIO_NR(2, 27); serial_platform_data[3].irq = IMX_GPIO_NR(2, 30); platform_device_register(&serial_device); #endif if (otg_mode_host) { otg_pdata.otg = imx_otg_ulpi_create(ULPI_OTG_DRVVBUS | ULPI_OTG_DRVVBUS_EXT); if (otg_pdata.otg) imx27_add_mxc_ehci_otg(&otg_pdata); } else { imx27_add_fsl_usb2_udc(&otg_device_pdata); } usbh2_pdata.otg = imx_otg_ulpi_create(ULPI_OTG_DRVVBUS | ULPI_OTG_DRVVBUS_EXT); if (usbh2_pdata.otg) imx27_add_mxc_ehci_hs(2, &usbh2_pdata); #ifdef CONFIG_MACH_EUKREA_MBIMX27_BASEBOARD eukrea_mbimx27_baseboard_init(); #endif } static void __init eukrea_cpuimx27_timer_init(void) { mx27_clocks_init(26000000); } static struct sys_timer eukrea_cpuimx27_timer = { .init = eukrea_cpuimx27_timer_init, }; MACHINE_START(EUKREA_CPUIMX27, "EUKREA CPUIMX27") .atag_offset = 0x100, .map_io = mx27_map_io, .init_early = imx27_init_early, .init_irq = mx27_init_irq, .handle_irq = imx27_handle_irq, .timer = &eukrea_cpuimx27_timer, .init_machine = eukrea_cpuimx27_init, .restart = mxc_restart, MACHINE_END
__label__pos
0.999503
How do you display categories? Solved7.56K viewsThemesaskbug category 0 I have the categories page displayed at  http://trader.help/categories/ But the links do not work. How do I correct this? Thank you. Question is closed for new answers. selected answer 1 Hello Emmett, Just removing page argument from shortcode did it. As I said many times to remove page argument from shortcode. Passing a page argument force shortcode to render a specific page. commented on answer OH!!! I just needed [anspress] shortcode. I didnt understand what you meant by “argument”. Thanks Kumar. It sounds like I need that short code on all of the assigned pages. You are viewing 1 out of 4 answers, click here to view all answers.
__label__pos
0.571109
Java Question Beginners problems building a library DonManfred Expert Licensed User Hello java-masters, i´m actually building a library and i have kind of problems. I just want to know whether i´m doing that right or whether i´m doing it wrong because...... B4X: @SuppressWarnings("rawtypes") public List ListDatastores() throws DbxException{ Set<DbxDatastoreInfo> ds = mDatastoreManager.listDatastores(); List l = new List(); l.Initialize(); for (DbxDatastoreInfo dsi : ds) { HashMap m = new HashMap(); m.put("id", dsi.id); m.put("mtime", dsi.mtime); m.put("title", dsi.title); m.put("hashcode", dsi.hashCode()); l.Add(m); } //l.getObject().addAll(ds); return l; } This code works in principle... But it seems that i just get ONE Datastore back to b4a. B4X: Dim dslist As List = manager.ListDatastores Log("dslistsize="&dslist.Size) For i = 0 To dslist.Size -1 Log(dslist.Get(i)) Next {mtime=null, id=default, title=null, hashcode=-1502101888} It seems that this is just the default-datastore from the device but not one of the Datastores i can see in dropboxs developerconsole. But i was expecting exactly this datastores... dbxdatastores_002.png Could that be? Maybe i´m doing the initialization of the datastore wrong with the manager java-code B4X: /** * Links the application with the user's Dropbox account. *If the account was not linked before then the user will be shown an authentication form. *The AccountReady event will be raised when the account is ready. */ public void LinkAccount(final BA ba) { if (manager.hasLinkedAccount()) { try { // Use Dropbox datastores mDatastoreManager = DbxDatastoreManager.forAccount(manager.getLinkedAccount()); } catch (DbxException.Unauthorized e) { System.out.println("Account was unlinked remotely"); } if (mDatastoreManager == null) { // Account isn't linked yet, use local datastores mDatastoreManager = DbxDatastoreManager.localManager(manager); } //try { // datastore = mDatastoreManager.openDefaultDatastore(); //} catch (DbxException e) { // System.out.println("Account was unlinked remotely"); //} waitForFileSystem(ba); return; } ion = new IOnActivityResult() { @Override public void ResultArrived(int resultCode, Intent intent) { if (resultCode != Activity.RESULT_OK) ba.raiseEvent(DbxAccountManagerWrapper.this, eventName + "_accountready", false); else waitForFileSystem(ba); } }; try { ba.startActivityForResult(ion, null); } catch (NullPointerException npe) { //required... } BA.SharedProcessBA sba = ba.sharedProcessBA; try { Field f = BA.SharedProcessBA.class.getDeclaredField("onActivityResultCode"); f.setAccessible(true); int requestCode = f.getInt(sba) - 1; manager.startLink((Activity)ba.sharedProcessBA.activityBA.get().activity, requestCode); } catch (Exception e) { throw new RuntimeException(e); } } I´m not sure whether i did that right or not. It´s my attempt of integrating datastores into the program-flow (using the LinkAccount-command in the dropboxsync-library (which i am extending here) Probably it´s totally wrong If someone have any kind of comment, suggestions, help, whatever; i would really appreciate any kind of help/tips/answers Thank you!   Last edited: DonManfred Expert Licensed User Note that you should use anywheresoftware.b4a.objects.collections.Map instead of HashMap though it is not related to this issue. will have a look at this. thanx! The ListDatastores code is correct which means that Dropbox returned a single item. Ok, then i believe i have an local-datastore only. Not an linked one to my dropbox-account B4X: if (mDatastoreManager == null) { // Account isn't linked yet, use local datastores mDatastoreManager = DbxDatastoreManager.localManager(manager); ' This seems to be the active datastoremanager. } Sounds for me that my attempt B4X: try { // Use Dropbox datastores mDatastoreManager = DbxDatastoreManager.forAccount(manager.getLinkedAccount()); } catch (DbxException.Unauthorized e) { System.out.println("Account was unlinked remotely"); } does not results in an linked DatastoreManager (mDatastoreManager = NULL)   Top
__label__pos
0.905358
0 Quiero capturar el contenido de una celda en especifico de mi tabla, en jtable lo hacia con: miTabla.getValueAt(int fila,int columna); De que manera puedo hacerlo en javaFX? Mi codigo: //Archivo controlador de la app. public class FXMLDocumentController implements Initializable { @FXML private TableView <Proceso> tablaInfo; @FXML private TableColumn columnaId; @FXML private TableColumn columnaNombre; @FXML private TableColumn columnaQuantum; @FXML private TableColumn columnaRecursos; @FXML private TableColumn columnaEstado; @FXML private TableColumn columnaTiempo; ObservableList<Proceso> lista; @FXML private void botonIniciarClicked(ActionEvent event) { for (int i = 1 ; i < 13 ; i++){ int Q = (int)((Math.random()*29)+1); Proceso proceso = new Proceso(); proceso.id.set(i); proceso.nombre.set("Proceso "+i); proceso.quantum.set(Q); proceso.recursos.set(1); proceso.estado.set("Ejecutando"); proceso.tiempo.set(1); lista.add(proceso); } public void Procesar(int cont){ int idProcesar = tablaInfo.getValueAt(cont,0); //Aqui surge el error, "cannot find symbol method getValueAt()" } } //-----------------------Clase Proceso---------------------- public class Proceso{ public SimpleIntegerProperty id = new SimpleIntegerProperty(); public SimpleStringProperty nombre = new SimpleStringProperty(); public SimpleIntegerProperty quantum = new SimpleIntegerProperty(); public SimpleDoubleProperty recursos = new SimpleDoubleProperty(); public SimpleStringProperty estado = new SimpleStringProperty(); public SimpleIntegerProperty tiempo = new SimpleIntegerProperty(); public Integer getId() { return id.get(); } public String getNombre() { return nombre.get(); } public Integer getQuantum() { return quantum.get(); } public Double getRecursos() { return recursos.get(); } public String getEstado() { return estado.get(); } public Integer getTiempo() { return tiempo.get(); } } 2 respuestas 2 3 Realmente en JavaFX cambia completamente la filosofia de las tablas, no se parece en nada un JTable con un TableView. //para obtener el nombre por ejemplo, yo lo hago recorriento fila por fila de la tabla y despues coginedo el valor de la columna que deseo for (int i = 0; i < tbView.getItems().size(); i++) System.out.println(tbView.getItems().get(i).getNombre().toString()); Tambien olvide que lo puedes hacer de esta forma, con un for anidado, uno para recorrer las filas y el otro para las columnas que estan visibles for (int i = 0; i < tbView.getItems().size(); i++) for (TableColumn column : tbView.getVisibleLeafColumns()) System.out.println(column.getCellData(i)); 0 0 De acuerdo a tu ejemplo, tendrías que modificar el método 'Procesar' de la siguiente forma: public void Procesar(int cont){ Proceso proc = tablaInfo.getItems().get(cont); int idProcesar = proc.getId(); . . . } Cada fila de 'tablaInfo' es un objeto del tipo 'Proceso'. Primero obtienes el objeto ubicado en la fila 'cont' y luego obtienes el campo que te interesa, en este caso el campo 'id', utilizando el método (propiedad) getId() definido en la clase (Proceso). 2 • Ya lo solucione, gracias :) – Gabo Reyes Commented el 27 abr. 2017 a las 15:16 • @GaboReyes Si ya solucionaste tu pregunta publica tu solucion o si la respuesta de alguno te funciono marca la respuesta como aceptada. – JGarnica Commented el 10 may. 2017 a las 21:13 Tu Respuesta By clicking “Publica tu respuesta”, you agree to our terms of service and acknowledge you have read our privacy policy. ¿No es la respuesta que buscas? Examina otras preguntas con la etiqueta o formula tu propia pregunta.
__label__pos
0.816022
You copied the Doc URL to your clipboard. STXRH Store Exclusive Register Halfword stores a halfword from a register to memory if the PE has exclusive access to the memory address, and returns a status value of 0 if the store was successful, or of 1 if no store was performed. See Synchronization and semaphores. The memory access is atomic. For information about memory accesses see Load/Store addressing modes. 313029282726252423222120191817161514131211109876543210 01001000000Rs0(1)(1)(1)(1)(1)RnRt sizeLo0Rt2 No offset STXRH <Ws>, <Wt>, [<Xn|SP>{,#0}] integer n = UInt(Rn); integer t = UInt(Rt); integer s = UInt(Rs); // ignored by all loads and store-release boolean tag_checked = n != 31; Assembler Symbols <Ws> Is the 32-bit name of the general-purpose register into which the status result of the store exclusive is written, encoded in the "Rs" field. The value returned is: 0 If the operation updates memory. 1 If the operation fails to update memory. <Wt> Is the 32-bit name of the general-purpose register to be transferred, encoded in the "Rt" field. <Xn|SP> Is the 64-bit name of the general-purpose base register or stack pointer, encoded in the "Rn" field. Aborts and alignment If a synchronous Data Abort exception is generated by the execution of this instruction: • Memory is not updated. • <Ws> is not updated. A non halfword-aligned memory address causes an Alignment fault Data Abort exception to be generated, subject to the following rules: • If AArch64.ExclusiveMonitorsPass() returns TRUE, the exception is generated. • Otherwise, it is implementation defined whether the exception is generated. If AArch64.ExclusiveMonitorsPass() returns FALSE and the memory address, if accessed, would generate a synchronous Data Abort exception, it is implementation defined whether the exception is generated. Operation bits(64) address; bits(16) data; boolean rt_unknown = FALSE; boolean rn_unknown = FALSE; if HaveMTEExt() then SetNotTagCheckedInstruction(!tag_checked); if s == t then Constraint c = ConstrainUnpredictable(Unpredictable_DATAOVERLAP); assert c IN {Constraint_UNKNOWN, Constraint_NONE, Constraint_UNDEF, Constraint_NOP}; case c of when Constraint_UNKNOWN rt_unknown = TRUE; // store UNKNOWN value when Constraint_NONE rt_unknown = FALSE; // store original value when Constraint_UNDEF UNDEFINED; when Constraint_NOP EndOfInstruction(); if s == n && n != 31 then Constraint c = ConstrainUnpredictable(Unpredictable_BASEOVERLAP); assert c IN {Constraint_UNKNOWN, Constraint_NONE, Constraint_UNDEF, Constraint_NOP}; case c of when Constraint_UNKNOWN rn_unknown = TRUE; // address is UNKNOWN when Constraint_NONE rn_unknown = FALSE; // address is original base when Constraint_UNDEF UNDEFINED; when Constraint_NOP EndOfInstruction(); if n == 31 then CheckSPAlignment(); address = SP[]; elsif rn_unknown then address = bits(64) UNKNOWN; else address = X[n]; if rt_unknown then data = bits(16) UNKNOWN; else data = X[t]; bit status = '1'; // Check whether the Exclusives monitors are set to include the // physical memory locations corresponding to virtual address // range [address, address+dbytes-1]. if AArch64.ExclusiveMonitorsPass(address, 2) then // This atomic write will be rejected if it does not refer // to the same physical locations after address translation. Mem[address, 2, AccType_ATOMIC] = data; status = ExclusiveMonitorsStatus(); X[s] = ZeroExtend(status, 32); Operational information If PSTATE.DIT is 1, the timing of this instruction is insensitive to the value of the data being loaded or stored. Was this page helpful? Yes No
__label__pos
0.748615
system-tools 1. Description "In computing, booting (or booting up) is the initialization of a computerized system." system-tools contains the components to be used in non-standard Debian based Linux systems, e.g. when booting from a read-only medium. 2. Download 3. Installation 3.1 Source 1. sudo apt install asciidoc git docbook-xml docbook-xsl libxml2-utils make xsltproc 2. git clone https://sources.open-infrastructure.net/software/system-tools 3. cd system-tools && sudo make install 3.2 Debian 9 (stretch) and newer 4. Development Bug reports, feature requests, and patches are welcome via the Debian Bug Tracking System. Please base them against the 'next' Git branch using common sense. 5. Authors
__label__pos
0.999681
math posted by wa Anna starts the race 10m ahead of ping.ping runs at 20%faster tan anna`s speed.they will be level in the race after 9seconds.find average speed of anna 1. Steve if anna's speed is s, then since distance = speed*time, 9s = 9(1.2s)-10 9s = 10.8s - 10 1.8s = 10 s = 5.55 m/s Respond to this Question First Name Your Answer Similar Questions 1. spanish I'm trying to say: at the party, Anna and her father argued about Annas behavior.. so far I have: A la fiesta, Anna y su padre rineron acerca de la conducta de Anna. but A, the first word, is wrong and so is the verb ending..? 2. Math/Probability Anna and bob play a game in which Anna begins by rolling a fair dice, after which bob tosses a fair coin. They take turns until one of them wins. Anna wins when she rolls a 6. Bob wins when the coin lands on heads. What is the probability … 3. Math Anna and bob play a game in which Anna begins by rolling a fair dice, after which bob tosses a fair coin. They take turns until one of them wins. Anna wins when she rolls a 6. Bob wins when the coin lands on heads. What is the probability … 4. math Anna and Jamie were at track practive. The track is 2/5 kilometers around. Anna ran 1 lap in 2 minutes. How many minutes does it take Anna to run one kilometer? 5. Physics Bob and Bob Jr. stand at open doorways at opposite ends of an airplane hangar 25 m long. Anna owns a spaceship, 40 m long as it sits on the runway. Anna takes off in her spaceship, then swoops through the hangar at constant velocity. … 6. reading comprehension Anna has two aunts, Gertie and Samantha. She has one Uncle, jimbo. Jimbo has a nephew on Anna's side of the family, Timothy, who has two children, John and Aubrey. Which of the following indicates the relationship between Timothy's … 7. Math Laurie and Anna are running a 3 mile race. After 5 minutes, Laurie has run 3/5 miles and Anna has run 7/10 mile. If both ladies can keep their same pace for the remainder of the race, how many minutes ahead of Laurie will Anna cross … 8. ENGLISH Anna has two aunts, Gertie and Samantha, and one uncle, Jimbo. Jimbo has a nephew on Anna's side of the family, Timothy, who has two children, John and Aubrey. Which of the following indicates the relationship between Timothy's daughter … 9. Math In a 5 km race, Anna swims 450 m, then runs 800 m. She then cycles the rest of the race. How many kilometres does she cycle? 10. math algebra Gray Glaze requires 15 hours to put a new roof on a house. His apprentice, Anna Gandy, can record the house by herself in 20 hours. After working alone on a roof for 6 hours, Gary leaves for another job. Anna takes over and completes … More Similar Questions
__label__pos
0.510388
Handler 中的 handleMessage 所在线程是由什么决定的? lqpgjv 大多数情况下,handleMessage所在线程和 handler 初始化所在的线程相同,但 handler 初始化的时候可以传入一个 Looper 对象,此时handleMessage所在线程和参数looper所在线程相同。 1. 含参构造public Handler(Looper looper) class MainActivity : AppCompatActivity() { var handler: Handler? = null var looper: Looper? = null override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) looper = Looper.getMainLooper() val thread = object : Thread() { override fun run() { super.run() Log.e("abc", "--- Runnable:threadName ---" + Thread.currentThread().name) handler = object : Handler(looper) { override fun handleMessage(msg: Message?) { super.handleMessage(msg) Log.e("abc","--- handleMessage:threadName ---" + Thread.currentThread().name ) } } } } thread.start() myBtn.setOnClickListener { val msg = Message() handler!!.sendMessage(msg) } } } // log 打印情况 --- Runnable:threadName ---Thread-2 --- handleMessage:threadName ---main 从 log 中可以看到 handler 初始化所在线程在 Thread-2,而handleMessage所在的线程是主线程main. 2. 无参构造 如果使用无参的 Handler 初始化构造,需要手动调用Looper.prepare()Looper.loop() val thread = object : Thread() { override fun run() { super.run() Log.e("abc", "--- Runnable:threadName ---" + Thread.currentThread().name) Looper.prepare() handler = object : Handler() { override fun handleMessage(msg: Message?) { super.handleMessage(msg) Log.e( "abc", "--- handleMessage:threadName ---" + Thread.currentThread().name ) } } Looper.loop() } } // log 打印情况 --- Runnable:threadName ---Thread-2 --- handleMessage:threadName ---Thread-2 不手动调用Looper.prepare()会抛出异常: java.lang.RuntimeException: Can't create handler inside thread Thread[Thread-2,5,main] that has not called Looper.prepare() 主线程中使用 Handler 大多数时候我们不会在子线程中初始化和使用 handler,而是在主线程中使用,此时不需要prepare()loop(),因为主线程中自带一个 Looper(通过Looper.getMainLooper()可以获取) 3. 一个线程可以有多少个 Looper?Handler 和 Looper 之间如何关联? 3.1 一个线程可以有多少个 Looper 查看Looper.prepare()源码: private static void prepare(boolean quitAllowed) { if (sThreadLocal.get() != null) { throw new RuntimeException("Only one Looper may be created per thread"); } sThreadLocal.set(new Looper(quitAllowed)); } 继续查看sThreadLocal.set(new Looper(quitAllowed)) static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>(); Threadlocal 是一个线程内部的存储类,可以在指定线程内存储数据,数据存储以后,只有指定线程可以得到存储数据。在这里 ThreadLocal 的作用是保证了每个线程都有各自的 Looper,就是说一个线程只能有一个 Looper,关于 Threadlocal,可以看看这篇文章 Threadlocal 接下来看看创建 Looper 实例的方法new Looper(quitAllowed) final MessageQueue mQueue; final Thread mThread; private Looper(boolean quitAllowed) { mQueue = new MessageQueue(quitAllowed); mThread = Thread.currentThread(); 在构造方法里,初始化 了MessageQueue 和代表当前线程的属性 mThread. 调用Looper.prepare()其实就是利用 ThreadLocal 为当前的线程创建了一个独立的 Looper,这其中包含了一个消息队列 3.2 Handler 和 Looper 之间如何关联 一个线程只能有一个 Looper,但一个线程中可以创建多个 Handler,那么一个 Looper 怎么和多个 Handler 对应呢?查看源码可知,post(Runnable r)postDelayed(Runnable r, long delayMillis)postAtTime(Runnable r, long uptimeMillis)sendMessage最终调用的都是enqueueMessage方法: private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) { msg.target = this; if (mAsynchronous) { msg.setAsynchronous(true); } return queue.enqueueMessage(msg, uptimeMillis); } msg.target = this这里就是将当前的 Handler 赋值给 Message 对象的 target 属性,这样在处理消息的时候通过msg.target就可以区分开不同的 Handler 了。 阅读 340 1 声望 0 粉丝 0 条评论 你知道吗? 1 声望 0 粉丝 宣传栏
__label__pos
0.999132
Pass r.Form to another function This works: func index(w http.ResponseWriter, r *http.Request) { r.ParseForm() for key, values := range r.Form { // range over map for _, value := range values { // range over []string fmt.Println(key, value) } } My question is how do I r.Form this to another function? Like this mail.Send(r.Form) How should the receiving function look like? 1 Like The field Form from http.Request is of the type url.Values, which is just map[string][]string. 1 Like Thank you. But how should the receiving function look like? This seems to give an error: func Send(form) map[string][]string { 1 Like That’s not valid Go code. It should look something like this: func Send(form url.Values) error { //Send an email, report an error if something went wrong. } 3 Likes Thank you! It seems to be on the right track…, but func Send(form url.Values) { ParseForm(form) for key, values := range form { // range over map for _, value := range values { // range over []string fmt.Println(key, value) } } The error: “undefined: ParseForm”. How can I use ParseForm on the form variable? 1 Like You don’t parse the form in the Send function, your parse it in your handler so that the Form field gets populated and the you pass that Form (which is of type url.Values) to your Send function. So something like this: func index(w http.ResponseWriter, r *http.Request) { // The r.Form field is only available after ParseForm is called so you have to parse first. r.ParseForm() //Do some form checking or whatever work you need here, then call Send with the values. err := mail.Send(r.Form) if err != nil { //Handle your error. } //Continue with whatever work you need done. } 2 Likes Right on spot! Thank you for your support. 1 Like
__label__pos
0.999936
Soal dan Pembahasan – Persamaan Diferensial Linear Orde Dua (Non-Homogen) dengan Koefisien Konstan Berikut ini disajikan beberapa soal beserta penyelesaiannya mengenai persamaan diferensial linear orde dua (non-homogen) dengan koefisien konstan. Metode yang digunakan melibatkan penyelesaian PD homogennya, sehingga Anda diharuskan sudah menguasai teknik penyelesaiannya.  Baca: Soal dan Pembahasan – Persamaan Diferensial Linear Orde Dua (Homogen) dengan Koefisien Konstan Gunakan bantuan tabel FUC di bawah untuk mengerjakan soal-soal berikut ini. $$\begin{array}{|c|c|c|} \hline \text{No.} & \text{Fung}\text{si Undetermined Coefficient (FUC)} & \text{Himp}\text{unan Undetermined Coefficient (HUC)} \\ \hline 1 & x^n & \{x^n, x^{n-1}, \cdots, x, 1\} \\ \hline 2 & e^{ax} & \{e^{ax}\}, a \neq 0 \\ \hline 3 & \sin (bx+c)~\text{atau}~\cos (bx+c) & \{\sin (bx+c, \cos (bx+c)\}, b \neq 0 \\ \hline 4 & x^ne^{ax} & \{x^ne^{ax}, x^{n-1}e^{ax}, \cdots, xe^{ax}, e^{ax}\}, a \neq 0 \\ \hline 5 & x^n \sin (bx + c)~\text{atau}~x^n \cos (bx + c) & \{x^n \sin (bx+c), x^n \cos (bx+c), x^{n-1} \sin (bx+c), x^{n-1} \cos (bx+c), \cdots, \sin (bx+c), \cos (bx+c)\}, b \neq 0 \\ \hline 6 & e^{ax} \sin (bx+c)~\text{atau}~e^{ax} \cos (bx+c) & \{e^{ax} \sin (bx+c), e^{ax} \cos (bx+c)\}, a \neq 0, b \neq 0 \\ \hline 7 & x^n e^{ax} \sin (bx+c)~\text{atau}~x^n e^{ax} \cos (bx+c) &  \{x^n e^{ax} \sin (bx+c), x^n e^{ax} \cos (bx+c), x^{n-1} e^{ax} \sin (bx+c), x^{n-1} e^{ax} \cos (bx+c),  \cdots, e^{ax} \sin (bx+c), e^{ax} \cos (bx+c)\}, a \neq 0, b \neq 0 \\ \hline \end{array}$$ Baca: Soal dan Pembahasan – Persamaan Diferensial (Tingkat Dasar) Soal Nomor 1 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 2 \dfrac{\text{d}y}{\text{d}x}- 3y = 5$. Pembahasan PD di atas bukan PD homogen sebab ruas kanannya mengandung konstanta tak nol. Gunakan cara yang sama seperti mencari penyelesaian umum PD homogen. Persamaan karakteristiknya adalah $m^2- 2m- 3 = (m- 3)(m + 1) = 0$. Dengan demikian, akar karakteristiknya adalah $m = 3 \lor m =-1$. Berarti, penyelesaian umum PD homogen terkait adalah $y_c = C_1e^{3x} + C_2e^{-x}$. Dengan memperhatikan koefisien $y$ pada PD, kita dapatkan bahwa perlu adanya konstanta baru yang bila dikalikan dengan $-3$, hasilnya $5$.  Konstanta itu adalah $-\dfrac{5}{3}$. Jadi, solusi umum PD tersebut adalah $\boxed{y = y_c- \dfrac{5}{3} = C_1e^{3x} + C_2e^{-x}- \dfrac{5}{3}}$. [collapse] Soal Nomor 2 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 2 \dfrac{\text{d}y}{\text{d}x}- 3y = 2e^{4x}$. Pembahasan Langkah pertama adalah menentukan solusi komplementer (umum) untuk PD homogen terkait, yaitu $\dfrac{\text{d}^2y}{\text{d}x^2}- 2 \dfrac{\text{d}y}{\text{d}x}- 3y = 0$ Persamaan karakteristiknya adalah $m^2- 2m- 3 = 0$, dengan akar karakteristik $m = 3$ dan $m =-1$. Jadi, solusi umumnya adalah $y_c = C_1e^{3x} + C_2e^{-x}$ Langkah selanjutnya adalah menentukan solusi partikulir (solusi khusus) PD non-homogen tersebut. Misalkan $y_p = Ae^{4x}$ merupakan solusi khususnya, sehingga $y’ = 4Ae^{4x}$ dan $y^{\prime \prime} = 16Ae^{4x}$. Substitusikan ke PD, diperoleh $16Ae^{4x}- 2(4Ae^{4x})- 3Ae^{4x} = 2e^{4x}$ $\Leftrightarrow 5Ae^{4x} = 2e^{4x}$ $\Leftrightarrow A = \dfrac{2}{5}$ Berarti, solusi khususnya adalah $y_p = \dfrac{2}{5}e^{4x}$ Solusi umum PD itu adalah $\boxed{y = y_c + y_p = C_1e^{3x} + C_2e^{-x} + \dfrac{2}{5}e^{4x}}$ [collapse] Baca: Soal dan Pembahasan – Penyelesaian Persamaan Diferensial Linear Orde Satu Soal Nomor 3 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 2 \dfrac{\text{d}y}{\text{d}x}- 3y = 2e^{3x}$. Pembahasan Solusi umum PD non-homogen terkait adalah $y_c = C_1e^{3x} + C_2e^{-x}$. Sekarang, kita akan menentukan solusi khusus PD homogennya dengan cara yang sama seperti sebelumnya. Misalkan $y_p = Ae^{3x}$ merupakan solusi khususnya, sehingga $y’ = 3Ae^{3x}$ dan $y^{\prime \prime} = 9Ae^{3x}$. Substitusikan ke PD, diperoleh $9Ae^{3x}- 2(3Ae^{3x})- 3Ae^{3x} = 2e^{3x}$ $0 = 2e^{3x}$ Dalam hal ini, kita menemukan bahwa nilai $A$ menjadi sembarang konstanta, sebab pada solusi umum $y_c$ sudah terkandung suku dengan ekspresi $e^{3x}$. Ulangi step dengan memisalkan $y_p = Axe^{3x}$ sebagai solusi khususnya, sehingga $y_p’ = 3Axe^{3x} + Ae^{3x}$ dan $y_p” = 9Axe^{3x} + 6Ae^{3x}$. Substitusikan ke PD hingga diperoleh $\begin{aligned} (9Axe^{3x} + 6Ae^{3x}) &- 2(3Axe^{3x} + Ae^{3x}) \\ &- 3Axe^{3x} = 2e^{3x} \end{aligned}$ $\Leftrightarrow 4Ae^{3x} = 2e^{3x}$ $\Leftrightarrow A = \dfrac{1}{2}$ Jadi, $y_p = \dfrac{1}{2}xe^{3x}$ Dengan demikian, solusi umum PD homogen tersebut adalah $\boxed{y = y_c + y_p = C_1e^{3x} + C_2e^{-x} + \dfrac{1}{2}xe^{3x}}$ [collapse] Soal Nomor 4 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 3\dfrac{\text{d}y}{\text{d}x} + 2y = 4x^2$. Pembahasan Solusi umum PD homogen yang terkait adalah $y_c = C_1e^{2x} + C_2e^{x}$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{x^2, x, 1\}$. Misalkan $y_p = Ax^2 + Bx + C$ adalah solusi khusus PD, dan diperoleh $y_p’ =2Ax + B$ dan $y_p^{\prime \prime} = 2A$ Substitusikan ke PD: $$\begin{aligned} \dfrac{\text{d}^2y}{\text{d}x^2}- 3\dfrac{\text{d}y}{\text{d}x} + 2y & = 4x^2 \\ 2A-3(2Ax + B) + 2(Ax^2 + Bx + C) & = 4x^2 \\ 2Ax^2 + (-6A + 2B)x + (2A- 3B + 2C) & = 4x^2 \end{aligned}$$Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases} 2A & = 4 \\-6A+ 2B & = 0 & \\ 2A- 3B + 2C & = 0 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A = 2 & \\ B = 6 & \\ C = 7 \end{cases}$ Jadi, $y_p = 2x^2 + 6x + 7 $ Solusi umumnya adalah $y = y_c + y_p$, yaitu $\boxed{y = C_1e^{2x} + C_2e^{x} + 2x^2 + 6x + 7 }$ [collapse] Baca: Soal dan Pembahasan – Penyelesaian Persamaan Diferensial dengan Variabel Terpisah Soal Nomor 5 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 2\dfrac{\text{d}y}{\text{d}x}- 8y = 4e^{2x}- 21e^{-3x}$. Pembahasan Solusi umum PD homogen yang terkait adalah $y_c = C_1e^{4x} + C_2e^{-2x}$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{e^{2x}, e^{-3x}\}$. Misalkan $y_p = Ae^{2x}+ Be^{-3x}$ adalah solusi khusus PD, dan diperoleh $y_p’ = 2Ae^{2x}- 3Be^{-3x}$ dan $y_p^{\prime \prime}  = 4Ae^{2x} + 9Be^{-3x}$ Substitusikan ke PD: $$\begin{aligned} \dfrac{d^2y}{\text{d}x^2}- 2\dfrac{\text{d}y}{\text{d}x}- 8y & = 4e^{2x}- 21e^{-3x} \\ (4Ae^{2x} + 9Be^{-3x})- 2(2Ae^{2x}- 3Be^{-3x})- 8(Ae^{2x}+ Be^{-3x}) & = 4e^{2x}- 21e^{-3x} \\ (-8A)e^{2x} + 7Be^{-3x} & = 4e^{2x}- 21e^{-3x} \end{aligned}$$Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases}-8A & = 4  \\ 7B & =-21 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A =-\dfrac{1}{2} & \\ B =-3 \end{cases}$ Jadi, $y_p =-\dfrac{1}{2}e^{2x}- 3e^{-3x}$ Solusi umumnya adalah $y = y_c + y_p$, yaitu $\boxed{y = C_1e^{4x} + C_2e^{-2x}-\dfrac{1}{2}e^{2x}- 3e^{-3x}}$ [collapse] Soal Nomor 6 Tentukan solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 2\dfrac{\text{d}y}{\text{d}x}- 3y = 2e^x- 10 \sin x$. Pembahasan Solusi umum PD homogen yang terkait adalah $y_c = C_1e^{3x} + C_2e^{-x}$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{e^x, \sin x, \cos x\}$. Misalkan $y_p = Ae^x + B \sin x + C \cos x$ adalah solusi khusus PD, dan diperoleh $y_p’ = Ae^x + B \cos x- C \sin x$ $y_p^{\prime \prime} = Ae^x- B \sin x- C \cos x$ Substitusikan ke PD: $\dfrac{d^2y}{\text{d}x^2}- 2\dfrac{\text{d}y}{\text{d}x}- 3y = 2e^x- 10 \sin x$ $\begin{aligned} \Leftrightarrow & (Ae^x -B \sin x- C \cos x) -2(Ae^x + \\ &  B \cos x- C \sin x) -3( Ae^x + B \sin x + \\ &  C \cos x) = 2e^x- 10 \sin x \end{aligned}$ $\begin{aligned} \Leftrightarrow & -4Ae^x + (-4B + 2C) \sin x + \\ &  (-2B- 4C) \cos x = 2e^x- 10 \sin x \end{aligned}$ Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases}-4A = 2 & \\-4B + 2C =-10 & \\-2B- 4C = 0 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A =-\dfrac{1}{2} & \\ B =2 & \\ C =-1 \end{cases}$ Jadi, $y_p =-\dfrac{1}{2}e^x + 2\sin x- \cos x$ Solusi umumnya adalah $y = y_c + y_p$, yaitu $$\boxed{y = C_1e^{3x} + C_2e^{-x}- \dfrac{1}{2}e^x + 2\sin x- \cos x}$$ [collapse] Baca: Soal dan Pembahasan – Persamaan Diferensial Eksak Soal Nomor 7 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2} +2 \dfrac{\text{d}y}{\text{d}x} + 5y = 6 \sin 2x + 7 \cos 2x$. Pembahasan Persamaan karakteristik dari PD homogen terkait adalah $m^2 + 2m + 5 = 0$. Akar karakteristiknya (gunakan rumus kuadrat) adalah $m =-1 \pm 2i$ sehingga solusi umumnya adalah $y_c = e^{-x}(C_1 \sin 2x + C_2 \cos 2x)$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{\sin 2x, \cos 2x\}$. Misalkan $y_p = A \sin 2x + B \cos 2x$ adalah solusi khusus PD, dan diperoleh $y_p’ = 2A \cos 2x- 2B \sin 2x$ $y_p^{\prime \prime} =-4A \sin 2x- 4B \cos 2x$ Substitusikan ke PD: $\dfrac{d^2y}{\text{d}x^2} +2 \dfrac{\text{d}y}{\text{d}x} + 5y = 6 \sin 2x + 7 \cos 2x$ $\begin{aligned} \Leftrightarrow & (-4A \sin 2x- 4B \cos 2x)  + 2(2A \cos 2x \\ & -2B \sin 2x) + 5(A \sin 2x + B \cos 2x) \\ & = 6 \sin 2x + 7 \cos 2x \end{aligned}$ $\begin{aligned} \Leftrightarrow & (A- 4B)\sin 2x + (4A + B)\cos 2x \\ & = 6 \sin 2x + 7 \cos 2x \end{aligned}$ Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases} A-4B = 6 & \\ 4A+B= 7 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A = 2 & \\ B =-1 \end{cases}$ Jadi, $y_p = 2 \sin 2x- \cos 2x$ Solusi umumnya adalah $y = y_c + y_p$, yaitu $$\boxed{y = e^{-x}(C_1 \sin 2x + C_2 \cos 2x) + 2 \sin 2x- \cos 2x}$$ [collapse] Soal Nomor 8 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2} +2 \dfrac{\text{d}y}{\text{d}x} + 2y = 10 \sin 4x$. Pembahasan Persamaan karakteristik dari PD homogen terkait adalah $m^2 + 2m + 2 = 0$. Akar karakteristiknya (gunakan rumus kuadrat) adalah $m =-1 \pm i$ sehingga solusi umumnya adalah $y_c = e^{-x}(C_1 \sin x + C_2 \cos x)$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{\sin 4x, \cos 4x\}$. Misalkan $y_p = A \sin 4x + B \cos 4x$ adalah solusi khusus PD, dan diperoleh $y_p’ = 4A \cos 4x- 4B \sin 4x$ $y_p^{\prime \prime} =-16A \sin 4x- 16B \cos 4x$ Substitusikan ke PD: $\dfrac{d^2y}{\text{d}x^2} +2 \dfrac{\text{d}y}{\text{d}x} + 2y = 10 \sin 4x$ $\begin{aligned} \Leftrightarrow & (-16A \sin 4x-16B \cos 4x) + \\ &  2(4A \cos 4x- 4B \sin 4x) + 2(A \sin 4x + \\ & B \cos 4x) = 10 \sin 4x \end{aligned}$ $\begin{aligned} \Leftrightarrow & (-14A- 8B)\sin 4x + (8A- 14B) \\ & \cos 4x = 10 \sin 4x \end{aligned}$ Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases}-14A-8B & = 10 \\ 8A-14B & = 0 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A =-\dfrac{7}{13}& \\ B =-\dfrac{4}{13} \end{cases}$ Jadi, $y_p =-\dfrac{7}{13} \sin 4x-\dfrac{4}{13} \cos 4x$ Solusi umumnya adalah $y = y_c + y_p$, yaitu $$\boxed{y = e^{-x}(C_1 \sin x + C_2 \cos x)-\dfrac{7}{13} \sin 4x-\dfrac{4}{13} \cos 4x}$$  [collapse] Soal Nomor 9 Cari solusi umum dari $\dfrac{\text{d}^2y}{\text{d}x^2}- 3\dfrac{\text{d}y}{\text{d}x}- 4y = 16x- 12e^{2x}$. Pembahasan Solusi umum PD homogen yang terkait adalah $y_c = C_1e^{4x} + C_2e^{-x}$ Diketahui himpunan UC dari ekspresi di ruas kanan PD adalah $\{x, 1, e^{2x}\}$. Misalkan $y_p = Ax + B + Ce^{2x}$ adalah solusi khusus PD, dan diperoleh $y_p’ = A + 2Ce^{2x}$ dan $y_p^{\prime \prime} = 4Ce^{2x}$ Substitusikan ke PD: $\dfrac{d^2y}{\text{d}x^2}- 3\dfrac{\text{d}y}{\text{d}x}- 4y = 16x- 12e^{2x}$ $\begin{aligned} \Leftrightarrow & (4Ce^{2x})- 3(A + 2Ce^{2x}) \\ & -4(Ax + B + Ce^{2x}) = 16x- 12e^{2x} \end{aligned}$ $\begin{aligned} \Leftrightarrow & (-6C)e^{2x} + (-4A)x + (-3A- 4B) \\ & = 16x- 12e^{2x} \end{aligned}$ Dari persamaan di atas, diperoleh sistem persamaan linear $\begin{cases}-6C & =-12 & \\-4A & = 16 & \\-3A- 4B & = 0 \end{cases}$ Selesaikan sehingga diperoleh $\begin{cases} A =-4 & \\ B = 3 & \\ C = 2 \end{cases}$ Jadi, $y_p =-4x + 3 + 2e^{2x}$ Solusi umumnya adalah $y = y_c + y_p$, yaitu $\boxed{y = C_1e^{4x} + C_2e^{-x}- 4x + 3 + 2e^{2x}}$ [collapse] Baca: Soal dan Pembahasan – Penyelesaian Persamaan Diferensial Homogen (Reduksi dan Pemisahan Variabel)
__label__pos
1
CSS Group and Nested Selector Group selector There are many elements with the same style in the style sheet. h1  {     color:green; } h2  {     color:green; } p  {     color:green; } In order to minimize the code, you can use group selectors. Each selector is separated by a comma. In the following example, we use the group selector for the above code: Example h1,h2,p {     color:green; }  Try It!  Nested selector It may apply to the style of the selector inside the selector. Four styles are set in the following examples: • p{  } : Specify a style for all p elements. • .marked{  } : Specify a style for all elements with class="marked" . • .marked p{  } : Specify a style for all p elements in class="marked" elements . • p.marked{  } : for all class = "marked" the p element specifies a pattern. Example p {     color:blue;         text-align:center; } .marked {     background-color:red; } .marked p {     color:white; } p.marked {     text-decoration:underline; }  Try It! 
__label__pos
1