Dynamically Add And Remove Form Input Fields With Angular 6

Making a field or fieldset dynamically addable is a great, user-friendly feature: input fields are added when the user clicks an Add button, and each field gets a Remove button that deletes it again. The requirement comes up in every stack — with jQuery in a Laravel 5 app, with plain JavaScript (this tutorial also shows how to get, add and remove options from a select/dropdown that way), or, as here, with Angular. The same idea appears everywhere, from creating textboxes and labels at runtime in VB.NET to pasting a script into a form builder's Settings > Customize HTML box. I used the Angular CLI to generate the folder structure and created a form folder for the component files. Angular gives you full control to add or remove a component inside another component programmatically in a few simple steps, and its directives — the interpolation directive ({{ }}), ng-repeat and ng-if — make rendering a dynamic list of fields straightforward. The running example is a simple registration form with pretty standard fields for first name, last name, email and password. Many developers have only ever used template-driven forms (as opposed to reactive forms) and never hit a limitation they couldn't overcome with the template-driven syntax; dynamic field lists are where reactive forms earn their keep. Yes, it requires learning the right buttons to push.
autofocus: this Boolean attribute lets you specify that a form control should have input focus when the page loads. Whether you're building highly interactive web applications or you just need to add a date picker to a form control, jQuery UI is a solid choice; the picker's position option defaults to 'bottom-right' (the other supported value is 'bottom-left'), and if the user blurs the field with an invalid value, the input is emptied and gets focus again. On the Angular side, a component is normally loaded using its selector in another component's template, which is identified at Angular compile time — dynamically adding and removing components therefore needs a different mechanism. The user registration form used here has the usual basic fields. A common jQuery pitfall: adding a new text field on an "add" button click works fine, but a select box whose options come from a MySQL database via PHP doesn't, because the options must be re-rendered for each cloned row. ASP.NET MVC developers ask the same question ("how to dynamically add UI fields"), and the classic jQuery tutorial — dynamically add and remove a TextBox and read its value — walks through the relevant selectors and events step by step. Finally, for Angular Material users, the selector of MatRadioButton is mat-radio-button.
Example 3: dynamic pie chart. This example is more advanced than the previous two: it contains a simple list of number fields with a button at the bottom to add a new line item, while generating a pie chart on the right. A common real-world variant is a "quantity" input on an eCommerce site. With reactive forms, each field is a FormControl, which encapsulates the field's value and states such as valid, dirty (changed) or has-errors; you can attach validation rules to it, and even make a particular field's validation message dynamic — for example, based on a checkbox elsewhere in the form. On the server side, ASP.NET MVC3 can model-bind dynamically added form fields as long as the input names follow its indexing convention. Where a "list" field depends on a Data from Entries dropdown, replace 'dfe' in the example with that dropdown field's key. We will also add a TextBox to validate, as well as a button to submit the form with. (16 May 2018 — tutorial built with Angular 6.) Can you suggest a workaround in the meantime?
It would be great to use your module for our app, but we absolutely need to add form fields dynamically. Below is the "Little Tour of Heroes" component, written to demonstrate the simplicity and power of Angular for addressing programming challenges quickly and effectively. (In Materialize, adding the length attribute to an input shows a character counter — below, length is set to 150.) Hopefully this gives you an idea of how to do dynamic forms in Angular 2+. An "Add row" button duplicates form fields so the user can easily repeat or remove fields as they need to add more records — as the Angular docs put it, "sometimes you need to present an arbitrary number of controls or groups." So: you have a form and would like to add form fields dynamically in response to a user event?
It's easy to do with Reactive Forms and FormArray. A few supporting pieces first: AngularJS can set focus on an input field in several ways; a data-dynamic-form-template attribute can mark a div as the template to clone, its value being the id of that template; and the same add/remove pattern works with jQuery and Bootstrap (add dynamic input fields to a form, add and remove rows). If you already know how to add or remove records in a MySQL database table with jQuery AJAX, doing it with AngularJS is a lot easier. To make validation conditional, we need to listen to optionB's value changes and, based on that, add or remove the validators we require. One caveat with focus-based animations: if clicking "search" removes focus from the input, it shrinks back to 45% width, which also interferes with the click event.
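The FormArray pattern boils down to: keep an array of controls, push to add a row, splice to remove one, and derive validity from the rows. A minimal framework-free sketch in plain JavaScript — in Angular the equivalents are FormArray.push(new FormControl(...)) and FormArray.removeAt(i):

```javascript
// Minimal model of a dynamic field list. In Angular this would be a
// FormArray; here each "control" is just { value, valid } so the
// add/remove/validate flow can be shown without the framework.
function createFieldList() {
  const controls = [];
  return {
    controls,
    // Add a new field (Angular: formArray.push(new FormControl('')))
    addField(value = '') {
      controls.push({ value, valid: value.trim().length > 0 });
    },
    // Remove the field at index i (Angular: formArray.removeAt(i))
    removeField(i) {
      controls.splice(i, 1);
    },
    // The list is valid only when every field is valid
    get valid() {
      return controls.every(c => c.valid);
    },
    get values() {
      return controls.map(c => c.value);
    },
  };
}

const urls = createFieldList();
urls.addField('https://example.com/a');
urls.addField('');      // empty -> invalid row
urls.removeField(1);    // user clicks that row's Remove button
console.log(urls.values, urls.valid);
```

The component template then just loops over the controls (ng-repeat / *ngFor) and wires each row's Remove button to removeField with that row's index.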
A common ASP.NET pitfall: with extra buttons like Save and Print on the form, clicking them causes a postback, after which the conditionally shown TextBoxDiv1 becomes visible again and the jQuery add/remove handlers stop working — the handlers must be re-bound (or delegated) after each postback. With template-driven forms, the default way to work with forms in Angular, template directives build an internal representation of the form; for dynamic controls, reactive forms are the better fit. Two patterns worth knowing: dynamically displaying controls in a FormArray (Angular 5 reactive forms) while enabling/disabling their validators based on a selection, and writing your own custom validator for an input field (Angular 4+). Rich Angular directives also make features like a show/hide password toggle easy. For a sign-in flow, create signin.html (the view) and signin.js (the controller), and add a link in the package.json file as needed; a simple form can start with just two fields, such as an input for the name. In parent/child component scenarios you can likewise add and remove a component dynamically and pass values from parent to child.
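When one control's validators depend on another control's value (the optionB case mentioned a moment ago), the toggle is a small function. A hedged, framework-free sketch — in Angular you would call setValidators()/clearValidators() followed by updateValueAndValidity() inside a valueChanges subscription; the 'custom' trigger value here is illustrative:

```javascript
// Each field keeps a list of validator functions; a validator returns
// an error key string or null. Loosely mirrors Angular's ValidatorFn shape.
const required = v => (v && v.trim() ? null : 'required');

function makeField() {
  return { value: '', validators: [] };
}

function errorsOf(field) {
  return field.validators
    .map(fn => fn(field.value))
    .filter(e => e !== null);
}

// Called whenever optionB changes
// (Angular: optionB.valueChanges.subscribe(v => ...))
function onOptionBChange(optionBValue, dependentField) {
  if (optionBValue === 'custom') {
    dependentField.validators = [required]; // setValidators([Validators.required])
  } else {
    dependentField.validators = [];         // clearValidators()
  }
}

const details = makeField();
onOptionBChange('custom', details);
console.log(errorsOf(details)); // ['required'] while the field is empty
```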
To create an HTML form using NgForm with NgModel is called a template-driven form; you just need to decide between template-driven and reactive forms and you are ready to start with bindings and validation. Each form field should have a unique, descriptive name — you'll use this name when you collect and analyze the data, but it does not appear on the form the user sees. The placeholder attribute shows text in a field until the field is focused upon, then hides it (see the input docs on MDN for more, including the autocomplete attribute). In the demonstration, a form is used for entering new records and the list of records is shown in a table; the same reactive form can drive add and update operations in a grid. In this simplistic example, three text fields and a button are wrapped inside a form. For removal, jQuery's .remove() method takes dynamically added elements back out of the page. You can also use the online tool to run and edit the code.
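The "three text fields and a button wrapped inside a form" setup can be generated instead of hand-written, which is the core of adding input fields dynamically with plain JavaScript. A small sketch that builds the markup as a string — in the browser you would insert it with insertAdjacentHTML or build the elements with document.createElement; the field name "url" is illustrative:

```javascript
// Render one input row with a unique, descriptive name and a Remove button.
// Unique indexed names matter: they are what the server-side model binder
// (e.g. ASP.NET MVC3) keys on when binding dynamically added fields.
function renderInputRow(baseName, index) {
  return `<div class="row">` +
         `<input type="text" name="${baseName}[${index}]" placeholder="Enter value">` +
         `<button type="button" class="remove">Remove</button>` +
         `</div>`;
}

function renderForm(baseName, count) {
  let rows = '';
  for (let i = 0; i < count; i++) rows += renderInputRow(baseName, i);
  return `<form>${rows}<button type="button" class="add">Add</button></form>`;
}

const html = renderForm('url', 3);
console.log(html.includes('name="url[2]"')); // true: indexes stay sequential
```

Whenever a row is removed, re-render (or re-index) the remaining rows so the name indexes stay sequential — the model-binding caveat discussed elsewhere in this article.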
For basic applications, the addElement() and removeElement() functions written here should be able to handle the job; for a live demo of the final result, see the StackBlitz example. (This material originated in "Getting Started with Angular 2 Step by Step: 5 - Forms and Validation" and has since been updated to later Angular versions, with the code moved to StackBlitz.) AngularJS filters come in handy when formatting data for display and are most often used along with AngularJS expressions. Next, take a closer look at the input control and add a simple required attribute to make it a required input; in addition, conditional CSS classes can highlight invalid fields to the user. If you need to add input fields dynamically in AngularJS backed by PHP and a MySQL database, the same approach applies — and building a fully functional contact form this way takes only a few minutes. It's a nifty introduction to Angular with examples of data-binding, one of the key Angular concepts (routers, modules and other goodies come in later posts).
While marking up the form itself is easy, checking whether each field has a valid and coherent value is more difficult, and informing the user about the problem may become a headache. The common concepts of Angular reactive forms — FormGroup, FormControl, FormBuilder and FormArray — address exactly this, and validity propagates: the outer form is valid only when all of the child forms are valid as well. Both reactive-forms validation and template-driven-forms validation examples are available for Angular 6. You can also send form input via an Angular 2 component to ASP.NET, and this article further explains how to dynamically add and remove rows in an HTML table using Angular and Bootstrap 4: designing a master form, inserting a record from a text box into the table, and using an HTML grid in Angular. One reported issue: clicking "Add Another URL" is supposed to append a new field but does nothing — usually a sign the click handler isn't attached to the dynamically created element. For grids at scale, ag-Grid is a feature-rich datagrid designed for the major JavaScript frameworks.
We then send a network request to the server. To enable template-driven forms, add this to app.module.ts: import { FormsModule } from '@angular/forms';. One subtlety when removing rows: the indexes that were generated for the form are no longer accurate unless there is logic to recalculate and update the elements on the client side. Adding a search box to filter the result is also quite simple if plain text search is enough. Add the appropriate validation attributes to the input fields, as shown in Listing 1 — for example, span elements for minLength and maxLength validation on the full-name form control — and remember ASP.NET's guidance on preventing SQL injection attacks by constraining form field input received through server controls. In-place editing (with Twitter Bootstrap, jQuery UI or pure jQuery) works with text inputs, selects, checkboxes and radio buttons. In the first div, interpolation shows a property called titleAlert; a default value can also be set so text appears in the input field as long as the user hasn't typed anything. The end goal: the user clicks an 'add row' button just below the current last row and another row of text input fields appears, so end users can directly add or remove additional fields and enter data without a programmer's help.
Add Form Fields Dynamically using AngularJS | Angular 6 dynamic form fields. Recently, I was working with Angular 5 reactive forms to create a registration form, and the same ideas extend to a dynamic list of fields. Supporting notes: instead of adding the class input-field, use the input-field directive; adding a new item to a list inline is a very nice feature to provide your users; and Angular's dynamic component loader lets you instantiate components that are not referenced by selector in any template. Remember that the Page_Load event rebuilds all dynamically created controls every time the page is loaded, so re-create them on every request. This project uses webpack 4.8 to transpile the TypeScript code and bundle the Angular 6 modules together, with the webpack dev server as the local web server. If you are new to model-driven forms, please refer to "How to Build Model-driven Forms in Angular 2" for a basic rundown; an earlier installment created an ASP.NET Core web application that retrieves the weather for London and displays it via an Angular 2 component.
For dependent dropdowns, we define two select boxes: the first is filled with country names fetched from a MySQL table using AngularJS, and the second is filled with the states of the selected country, also fetched from MySQL via PHP. Form validations can be done in two ways in AngularJS, and AngularJS offers client-side form validation out of the box. In this tutorial we create Add Employee and Remove Employee buttons to add and remove records from a table: the page shows a Bootstrap alert message, a form with dynamically addable/removable input fields, and below it a table displaying the inserted data (click the link to download the code and sample project). You can also set a selected option dynamically in Angular 6, and in the heroes micro-app the user adds a hero by typing the hero's name in the input box and clicking Add. Components are the centerpiece of Angular, so the first step is to create a form component; ng-template can then promote an HTML table into a data grid with data binding for CRUD operations.
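The country/state logic reduces to a lookup keyed by the first selection. A framework-free sketch — the inline data stands in for the rows the article fetches from MySQL via PHP, and in AngularJS/Angular you would bind the returned array to the second select's options:

```javascript
// Map of country -> states; stands in for rows fetched from the database.
// The country and state names below are illustrative sample data.
const statesByCountry = {
  India: ['Gujarat', 'Kerala'],
  USA:   ['California', 'Texas'],
};

// Called when the first dropdown changes; returns options for the second.
function getStates(country) {
  return statesByCountry[country] || []; // unknown country -> empty list
}

console.log(getStates('USA'));     // ['California', 'Texas']
console.log(getStates('France'));  // []
```

In a real app, getStates would issue the AJAX/HTTP request to the PHP endpoint and resolve with the rows; the binding on the second select stays the same.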
Sometimes, you want to allow the users to add certain parts of the form as they need them. For focus handling, create a directive for autofocus and use it wherever you need in the module. The Angular Material form field is the wrapper for the other form control elements such as input, textarea, select, radio button and checkbox; in-place editing likewise works with text inputs, selects, checkboxes and radio buttons. Here, we create and initialise the form control objects in our component class — for debugging, JsonPipe will output the form value as an object literal (for example with a counter field), so let's start off by adding the input. In this second part of the Angular 2 forms series the focus is on another important aspect of form creation: input validation. (Note: JSFiddle and its authors are not responsible or liable for any loss or damage during use of the provided code.)
Generally a checkbox is a square, but it may have rounded corners. Let's say you need to create a web form with a dynamic number of input fields — a required validator on the full-name control plus an arbitrary list of extra fields the user can add, edit and remove using an in-line form ("how to add dynamic form fields in Angular 2 & 5", as Alien Coder puts it; so far we are able to create one field dynamically, and true multi-value fields are the goal). With AngularJS it is easy to dynamically add or remove list items in an array, with PHP handling requests and returning responses. For validation, AngularJS email validation can be done using ng-pattern and a regular expression (RegEx); this helps do customized, more accurate validations on your input fields, and the same pattern attribute drives pattern validation generally. One constraint to remember: if you couldn't define which group a radio button belongs to, you could only have one group of radio buttons on each page. (If you have used HTML forms before this is review; otherwise the Tizag HTML Forms Tutorial is a good primer.)
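The ng-pattern approach is just a regular expression applied to the whole field value. A minimal sketch — the regex below is a deliberately simple illustration, not a full RFC 5322 email matcher:

```javascript
// Same shape as an Angular/AngularJS pattern check: the control is valid
// only when the entire value matches the expression.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(value) {
  return EMAIL_PATTERN.test(value) ? null : 'pattern'; // null means valid
}

console.log(validateEmail('user@example.com')); // null
console.log(validateEmail('not-an-email'));     // 'pattern'
```

In a template you would write the same expression in ng-pattern (AngularJS) or pattern / Validators.pattern (Angular) and let the framework surface the 'pattern' error key.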
Today, we want to share an Angular 6 form validation example tutorial. An Angular form has two parts: an HTML-based template and a component class to handle data and user interactions programmatically. With Angular's forms module you can build complex forms with intuitive syntax, and dynamic frameworks like Angular call for equally dynamic user experiences: web applications and CRMs with multi-data-entry screens often need extra fields for additional information, which is exactly where adding form fields dynamically pays off — so it's time to talk about model-driven and reactive forms properly. (In the date-picker example there are fifteen options controlling the behavior and format of the form input field; simply type a word into the field input and generate your element.)
By default, when text is present the floating label floats above the form field control; when the control contains no text and shows no option text, the label sits inside the field. The example code adds and removes multiple input fields dynamically using jQuery and reads the multiple input values with PHP. A fuller Angular sample application includes selecting, adding, updating and deleting data with the HttpClient service, reactive forms for object and array types, in-line data-list editing, and custom input validations (updated to Angular 8 CLI and Bootstrap 4); I will use Angular, MongoDB, Node.js and Express (the MEAN stack) in that tutorial. Creating a table using ng-repeat is quite simple, and Angular's property binding and event binding make the interaction wiring straightforward. On the HTML side, the DATALIST element is a step in the right direction for input suggestions, though its OPTION elements can't be styled and there's no built-in method to hook a DATALIST up to a service endpoint for more dynamic suggestions. And if a menu is dynamic, you can add, remove or change menu items in a single place — the database table in which the items are stored — to update all instances of the same menu on the site.
Angular — extending ui-select to add options dynamically: this article explains how to dynamically add and remove HTML input (form) fields created using the ng-repeat directive in AngularJS. CSS state classes such as ng-untouched track whether the user has interacted with a control. Using dynamic forms, you give users the ability to create form fields themselves; as the Angular docs say, "sometimes you need to present an arbitrary number of controls or groups." Our component consists of a form with one input as its form element, and input elements of type checkbox are rendered by default as boxes that are checked (ticked) when activated, like you might see in an official government paper form. Voila — with a small script and form you can create your own element this way.
As I research and learn more about SharePoint programming, I occasionally come across blog posts that start with titles like “the best way…” or “the correct way…” to accomplish a certain task. If I click on "Add Another URL" then a new form field should come up, with input options for the lecture title and the video lecture URL. Angular already has many built-in validations for form fields which could be used for basic use-cases. Ngx-formly is the Angular (2+) counterpart for doing. View Quickstart Developer Guide. The technology skills platform that provides web development, IT certification and on-demand training that helps your career and your business move forward with the right technology and the right skills. Dynamically Adding Items. FDF is a text file format specifically for data exported from PDF form fields. We want a minimum of 2 characters, but not exceeding a maximum of 10 characters. Dynamic forms can be very useful and more economical for creating forms based on…. We remain focused on adding new features to Web Forms to improve the development experience and keep the technology up-to-date with web practices. Note the following: The element carries the HTML validation attributes: required, minlength, and maxlength.
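The required/minlength/maxlength combination described above (at least 2 and at most 10 characters) can be mirrored in plain code. A hedged TypeScript sketch: `validateLength` and its return strings are illustrative, not Angular's `Validators` API.

```typescript
// Mirrors the HTML attributes required, minlength="2", maxlength="10".
// Returns the name of the first failing rule, or null when the value is valid.
function validateLength(value: string): string | null {
  if (value.length === 0) return "required";   // required
  if (value.length < 2) return "minlength";    // fewer than 2 characters
  if (value.length > 10) return "maxlength";   // more than 10 characters
  return null;                                 // passes all three checks
}
```

Checking the rules in this order (presence first, then length bounds) matches how the browser reports constraint-validation failures for the same attributes.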
Browse files wip • Loading branch information... 1 parent 4f8a436 commit 311f235e1aa2335f1f79111f58d081d3845452a6 @Montellese committed Nov 1, 2012 View 2 xbmc/dialogs/GUIDialogSmartPlaylistRule.cpp @@ -412,7 +412,7 @@ void CGUIDialogSmartPlaylistRule::UpdateButtons() type = CGUIEditControl::INPUT_TYPE_TEXT; break; case CSmartPlaylistRule::DATE_FIELD: - if (m_rule.m_operator.operation == FilterOperationInTheLast) + if (m_rule.m_operator.GetOperation() == FilterOperationInTheLast) type = CGUIEditControl::INPUT_TYPE_TEXT; else type = CGUIEditControl::INPUT_TYPE_DATE; View 30 xbmc/filter/Filter.cpp @@ -132,9 +132,7 @@ CFilter::CFilter() void CFilter::Reset() { - m_ruleCombination.m_combinations.clear(); - m_ruleCombination.m_rules.clear(); - m_ruleCombination.SetType(CFilterRuleCombination::CombinationAnd); + m_ruleCombination.Reset(); m_type = MediaTypeSong; } @@ -171,6 +169,18 @@ bool CFilter::Save(CVariant &obj) const return true; } +bool CFilter::Filter(const CDatabase &db, std::string &query) const +{ + set<string> referencedPlaylists; + return filter(db, referencedPlaylists, query); +} + +bool CFilter::Filter(CFileItemList& items) const +{ + set<string> referencedPlaylists; + return filter(referencedPlaylists, items); +} + void CFilter::GetAvailableFields(std::vector<std::string> &fieldList) { for (unsigned int index = 0; index < NUM_FIELDS; index++) @@ -242,3 +252,17 @@ std::string CFilter::GetLocalizedOperator(const CFilterOperator &op) } return ""; } + +bool CFilter::filter(const CDatabase &db, std::set<std::string> &referencedPlaylists, std::string &query) const +{ + // TODO + + return false; +} + +bool CFilter::filter(std::set<std::string> &referencedPlaylists, CFileItemList& items) const +{ + // TODO + + return false; +} View 16 xbmc/filter/Filter.h @@ -19,31 +19,40 @@ * */ +#include <set> #include <string> #include <vector> #include "FilterRule.h" #include "FilterRuleCombination.h" #include "utils/DatabaseUtils.h" +class CDatabase; +class CFileItemList; + 
class CFilter { public: CFilter(); virtual ~CFilter() { } virtual void Reset(); + virtual bool IsEmpty() const { return m_ruleCombination.GetRules().empty() && m_ruleCombination.GetCombinations().empty(); } virtual bool Load(const CVariant &obj); virtual bool Save(CVariant &obj) const; MediaType GetType() const { return m_type; } virtual void SetType(MediaType type) { m_type = type; } + CFilterRuleCombination& GetRuleCombination() { return m_ruleCombination; } + const CFilterRuleCombination& GetRuleCombination() const { return m_ruleCombination; } + void SetMatchAllRules(bool matchAll) { m_ruleCombination.SetType(matchAll ? CFilterRuleCombination::CombinationAnd : CFilterRuleCombination::CombinationOr); } bool GetMatchAllRules() const { return m_ruleCombination.GetType() == CFilterRuleCombination::CombinationAnd; } - - virtual bool IsEmpty() const { return m_ruleCombination.m_rules.empty() && m_ruleCombination.m_combinations.empty(); } + + virtual bool Filter(const CDatabase &db, std::string &query) const; + virtual bool Filter(CFileItemList& items) const; static void GetAvailableFields(std::vector<std::string> &fieldList); static const std::string& TranslateField(Field field); @@ -56,6 +65,9 @@ class CFilter static std::string GetLocalizedOperator(const CFilterOperator &op); protected: + virtual bool filter(const CDatabase &db, std::set<std::string> &referencedPlaylists, std::string &query) const; + virtual bool filter(std::set<std::string> &referencedPlaylists, CFileItemList& items) const; + CFilterRuleCombination m_ruleCombination; MediaType m_type; }; View 44 xbmc/filter/FilterRule.cpp @@ -25,21 +25,45 @@ using namespace std; +bool CFilterOperator::Load(const CVariant &obj) +{ + if (!obj.isObject() || + !obj.isMember("operation") || !obj["operation"].isString()) + return false; + + *this = CFilter::TranslateOperator(obj["operation"].asString()); + if (obj.isMember("negated") && obj["negated"].isBoolean()) + m_negated = obj["negated"].asBoolean(); + + return 
true; +} + +bool CFilterOperator::Save(CVariant &obj) const +{ + if (obj.isNull() || m_operation == FilterOperationNone) + return false; + + obj["operation"] = CFilter::TranslateOperator(*this); + if (m_negated) + obj["negated"] = true; + + return true; +} + CFilterRule::CFilterRule() : m_field(FieldNone), m_type(FilterFieldTypeNone), m_browseable(false) { } bool CFilterRule::Load(const CVariant &obj) { if (!obj.isObject() || - !obj.isMember("field") || !obj["field"].isString() || - !obj.isMember("operator") || !obj["operator"].isString()) + !obj.isMember("field") || !obj["field"].isString() || + !obj.isMember("operator") || !obj["operator"].isObject() || + !m_operator.Load(obj["operator"])) return false; - m_field = CFilter::TranslateField(obj["field"].asString().c_str()); - m_operator = CFilter::TranslateOperator(obj["operator"].asString().c_str()); - - if (m_operator.operation == FilterOperationTrue) + m_field = CFilter::TranslateField(obj["field"].asString().c_str()); + if (m_operator.m_operation == FilterOperationTrue) return true; if (!obj.isMember("value") || (!obj["value"].isString() && !obj["value"].isArray())) @@ -64,12 +88,12 @@ bool CFilterRule::Load(const CVariant &obj) bool CFilterRule::Save(CVariant &obj) const { - if (obj.isNull() || (m_value.empty() && m_operator.operation != FilterOperationTrue)) + if (obj.isNull() || + (m_value.empty() && m_operator.m_operation != FilterOperationTrue) || + !m_operator.Save(obj["operator"])) return false; - obj["field"] = CFilter::TranslateField(m_field); - obj["operator"] = CFilter::TranslateOperator(m_operator); - + obj["field"] = CFilter::TranslateField(m_field); obj["value"] = CVariant(CVariant::VariantTypeArray); for (vector<std::string>::const_iterator it = m_value.begin(); it != m_value.end(); it++) obj["value"].push_back(*it); View 37 xbmc/filter/FilterRule.h @@ -54,21 +54,32 @@ class CFilterOperator { public: CFilterOperator(FilterOperation op = FilterOperationNone, bool neg = false) - : operation(op), 
negated(neg) + : m_operation(op), m_negated(neg) { } - bool operator==(const CFilterOperator &other) const { return operation == other.operation && negated == other.negated; } + bool operator==(const CFilterOperator &other) const { return m_operation == other.m_operation && m_negated == other.m_negated; } bool operator!=(const CFilterOperator &other) const { return !(*this == other); } + + virtual bool Load(const CVariant &obj); + virtual bool Save(CVariant &obj) const; + + FilterOperation GetOperation() const { return m_operation; } + void SetOperation(FilterOperation op) { m_operation = op; } + bool IsNegated() const { return m_negated; } + void Negate(bool negate) { m_negated = negate; } - int ToInt() const { return (int)operation | (int)negated << 31; } + int ToInt() const { return (int)m_operation | (int)m_negated << 31; } void FromInt(int op) { - operation = (FilterOperation)(op & 0x7FFFFFFF); - negated = (bool)(op >> 31); + m_operation = (FilterOperation)(op & 0x7FFFFFFF); + m_negated = (bool)(op >> 31); } - FilterOperation operation; - bool negated; +private: + friend class CFilterRule; + + FilterOperation m_operation; + bool m_negated; }; class CFilterRule @@ -80,10 +91,22 @@ class CFilterRule virtual bool Load(const CVariant &obj); virtual bool Save(CVariant &obj) const; + Field GetField() const { return m_field; } + void SetField(Field field) { m_field = field; } + FilterFieldType GetType() const { return m_type; } + void SetType(FilterFieldType type) { m_type = type; } + const CFilterOperator& GetOperator() const { return m_operator; } + CFilterOperator& GetOperator() { return m_operator; } + void SetOperator(const CFilterOperator& op) { m_operator = op; } + bool IsBrowseable() const { return m_browseable; } + void SetBrowseable(bool browseable) { m_browseable = browseable; } + std::string GetValue() const; + const std::vector<std::string>& GetValues() const { return m_value; } void SetValue(const std::string &value); void SetValue(const 
std::vector<std::string> &values); +private: Field m_field; FilterFieldType m_type; CFilterOperator m_operator; View 7 xbmc/filter/FilterRuleCombination.cpp @@ -28,6 +28,13 @@ CFilterRuleCombination::CFilterRuleCombination() : m_type(CombinationAnd) { } +void CFilterRuleCombination::Reset() +{ + m_type = CombinationAnd; + m_rules.clear(); + m_combinations.clear(); +} + bool CFilterRuleCombination::Load(const CVariant &obj) { if (!obj.isObject() && !obj.isArray()) View 8 xbmc/filter/FilterRuleCombination.h @@ -34,12 +34,15 @@ class CFilterRuleCombination { public: CFilterRuleCombination(); + virtual ~CFilterRuleCombination() { } typedef enum { CombinationOr = 0, CombinationAnd } Combination; + virtual void Reset(); + virtual bool Load(const CVariant &obj); virtual bool Save(CVariant &obj) const; @@ -48,9 +51,14 @@ class CFilterRuleCombination Combination GetType() const { return m_type; } void SetType(Combination combination) { m_type = combination; } + const CFilterRules& GetRules() const { return m_rules; } + CFilterRules& GetRules() { return m_rules; } void AddRule(const CFilterRule &rule); + const CFilterRuleCombinations& GetCombinations() const { return m_combinations; } + CFilterRuleCombinations& GetCombinations() { return m_combinations; } void AddCombination(const CFilterRuleCombination &rule); +private: Combination m_type; CFilterRuleCombinations m_combinations; CFilterRules m_rules; View 2 xbmc/filter/dialogs/GUIDialogMediaFilter.cpp @@ -284,7 +284,7 @@ void CGUIDialogMediaFilter::CreateSettings() if (filter.rule == NULL) filter.data = new int(CHECK_ALL); else - filter.data = new int(filter.rule->m_operator.operation == FilterOperationTrue ? CHECK_YES : CHECK_NO); + filter.data = new int(filter.rule->m_operator.GetOperation() == FilterOperationTrue ? 
CHECK_YES : CHECK_NO); vector<pair<int, int> > entries; entries.push_back(pair<int, int>(CHECK_ALL, CHECK_LABEL_ALL)); View 66 xbmc/playlists/SmartPlayList.cpp @@ -115,7 +115,7 @@ static const translateField fields[] = { CSmartPlaylistRule::CSmartPlaylistRule() { m_field = FieldNone; - m_operator.operation = FilterOperationContains; + m_operator.SetOperation(FilterOperationContains); m_parameter.clear(); } @@ -136,7 +136,7 @@ bool CSmartPlaylistRule::Load(TiXmlElement *element, const CStdString &encoding m_field = TranslateField(field); m_operator = TranslateOperator(oper); - if (m_operator.operation == FilterOperationTrue) + if (m_operator.GetOperation() == FilterOperationTrue) return true; TiXmlNode *parameter = element->FirstChild(); @@ -191,7 +191,7 @@ bool CSmartPlaylistRule::Load(const CVariant &obj) m_field = TranslateField(obj["field"].asString().c_str()); m_operator = TranslateOperator(obj["operator"].asString().c_str()); - if (m_operator.operation == FilterOperationTrue) + if (m_operator.GetOperation() == FilterOperationTrue) return true; if (!obj.isMember("value") || (!obj["value"].isString() && !obj["value"].isArray())) @@ -216,7 +216,7 @@ bool CSmartPlaylistRule::Load(const CVariant &obj) bool CSmartPlaylistRule::Save(TiXmlNode *parent) const { - if (parent == NULL || (m_parameter.empty() && m_operator.operation != FilterOperationTrue)) + if (parent == NULL || (m_parameter.empty() && m_operator.GetOperation() != FilterOperationTrue)) return false; TiXmlElement rule("rule"); @@ -238,7 +238,7 @@ bool CSmartPlaylistRule::Save(TiXmlNode *parent) const bool CSmartPlaylistRule::Save(CVariant &obj) const { - if (obj.isNull() || (m_parameter.empty() && m_operator.operation != FilterOperationTrue)) + if (obj.isNull() || (m_parameter.empty() && m_operator.GetOperation() != FilterOperationTrue)) return false; obj["field"] = TranslateField(m_field); @@ -278,8 +278,8 @@ CStdString CSmartPlaylistRule::TranslateOrder(SortBy order) CFilterOperator 
CSmartPlaylistRule::TranslateOperator(const char *oper) { CFilterOperator op = CFilter::TranslateOperator(oper); - if (op.operation == FilterOperationNone) - op.operation = FilterOperationContains; + if (op.GetOperation() == FilterOperationNone) + op.SetOperation(FilterOperationContains); return op; } @@ -629,10 +629,10 @@ CStdString CSmartPlaylistRule::GetVideoResolutionQuery(const CStdString &paramet else if (iRes >= 540) { min = 721; max = 960; } else { min = 0; max = 720; } - switch (m_operator.operation) + switch (m_operator.GetOperation()) { case FilterOperationEquals: - if (!m_operator.negated) + if (!m_operator.IsNegated()) retVal.AppendFormat(">= %i and iVideoWidth <= %i)", min, max); else retVal.AppendFormat("< %i or iVideoWidth > %i)", min, max); @@ -656,30 +656,30 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr if ((strType == "tvshows" || strType == "episodes") && m_field == FieldYear) { // special case for premiered which is a date rather than a year // TODO: SMARTPLAYLISTS do we really need this, or should we just make this field the premiered date and request a date? 
- if (op.operation == FilterOperationEquals) - op.operation = FilterOperationContains; + if (op.GetOperation() == FilterOperationEquals) + op.SetOperation(FilterOperationContains); } CStdString operatorString, negate; if (GetFieldType(m_field) == TEXTIN_FIELD) { - if (op.operation == FilterOperationEquals && op.negated) + if (op.GetOperation() == FilterOperationEquals && op.IsNegated()) negate = " NOT"; } else { // the comparison piece - switch (op.operation) + switch (op.GetOperation()) { case FilterOperationContains: operatorString = " LIKE '%%%s%%'"; - if (op.negated) + if (op.IsNegated()) negate = " NOT"; break; case FilterOperationEquals: operatorString = " LIKE '%s'"; - if (op.negated) + if (op.IsNegated()) negate = " NOT"; break; @@ -694,9 +694,9 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr case FilterOperationAfter: case FilterOperationGreaterThan: case FilterOperationInTheLast: - if (!op.negated) + if (!op.IsNegated()) operatorString = " > "; - else if (op.operation == FilterOperationInTheLast) + else if (op.GetOperation() == FilterOperationInTheLast) operatorString = " < "; if (GetFieldType(m_field) == NUMERIC_FIELD || GetFieldType(m_field) == SECONDS_FIELD) @@ -715,7 +715,7 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr break; case FilterOperationTrue: - if (!op.negated) + if (!op.IsNegated()) operatorString = " = 1"; else { @@ -730,7 +730,7 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr } // boolean operators don't have any values in m_parameter, they work on the operator - if (m_operator.operation == FilterOperationTrue) + if (m_operator.GetOperation() == FilterOperationTrue) { if (strType == "movies") { @@ -791,7 +791,7 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr if (GetFieldType(m_field) == DATE_FIELD) { - if (m_operator.operation == FilterOperationInTheLast) + if (m_operator.GetOperation() == 
FilterOperationInTheLast) { // translate time period CDateTime date=CDateTime::GetCurrentDateTime(); CDateTimeSpan span; @@ -819,8 +819,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr else if (m_field == FieldAlbumArtist) query = table + ".idAlbum" + negate + " IN (SELECT idAlbum FROM album_artist, artist WHERE album_artist.idArtist = artist.idArtist AND artist.strArtist" + parameter + ")"; else if (m_field == FieldLastPlayed && - (m_operator.operation == FilterOperationLessThan || m_operator.operation == FilterOperationBefore || - (m_operator.operation == FilterOperationInTheLast && m_operator.negated))) + (m_operator.GetOperation() == FilterOperationLessThan || m_operator.GetOperation() == FilterOperationBefore || + (m_operator.GetOperation() == FilterOperationInTheLast && m_operator.IsNegated()))) query = GetField(m_field, strType) + " is NULL or " + GetField(m_field, strType) + parameter; } else if (strType == "albums") @@ -858,8 +858,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr else if (m_field == FieldCountry) query = GetField(FieldId, strType) + negate + " IN (SELECT idMovie FROM countrylinkmovie JOIN country ON country.idCountry=countrylinkmovie.idCountry WHERE country.strCountry" + parameter + ")"; else if ((m_field == FieldLastPlayed || m_field == FieldDateAdded) && - (m_operator.operation == FilterOperationLessThan || m_operator.operation == FilterOperationBefore || - (m_operator.operation == FilterOperationInTheLast && m_operator.negated))) + (m_operator.GetOperation() == FilterOperationLessThan || m_operator.GetOperation() == FilterOperationBefore || + (m_operator.GetOperation() == FilterOperationInTheLast && m_operator.IsNegated()))) query = GetField(m_field, strType) + " IS NULL OR " + GetField(m_field, strType) + parameter; else if (m_field == FieldSet) query = GetField(FieldId, strType) + negate + " IN (SELECT idMovie FROM setlinkmovie JOIN sets ON 
sets.idSet=setlinkmovie.idSet WHERE sets.strSet" + parameter + ")"; @@ -879,8 +879,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr else if (m_field == FieldDirector) query = GetField(FieldId, strType) + negate + " IN (SELECT idMVideo FROM directorlinkmusicvideo JOIN actors ON actors.idActor=directorlinkmusicvideo.idDirector WHERE actors.strActor" + parameter + ")"; else if ((m_field == FieldLastPlayed || m_field == FieldDateAdded) && - (m_operator.operation == FilterOperationLessThan || m_operator.operation == FilterOperationBefore || - (m_operator.operation == FilterOperationInTheLast && m_operator.negated))) + (m_operator.GetOperation() == FilterOperationLessThan || m_operator.GetOperation() == FilterOperationBefore || + (m_operator.GetOperation() == FilterOperationInTheLast && m_operator.IsNegated()))) query = GetField(m_field, strType) + " IS NULL OR " + GetField(m_field, strType) + parameter; } else if (strType == "tvshows") @@ -898,8 +898,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr else if (m_field == FieldMPAA) query = GetField(FieldId, strType) + negate + " IN (SELECT idShow FROM tvshowview WHERE " + GetField(m_field, strType) + parameter + ")"; else if ((m_field == FieldLastPlayed || m_field == FieldDateAdded) && - (m_operator.operation == FilterOperationLessThan || m_operator.operation == FilterOperationBefore || - (m_operator.operation == FilterOperationInTheLast && m_operator.negated))) + (m_operator.GetOperation() == FilterOperationLessThan || m_operator.GetOperation() == FilterOperationBefore || + (m_operator.GetOperation() == FilterOperationInTheLast && m_operator.IsNegated()))) query = GetField(m_field, strType) + " IS NULL OR " + GetField(m_field, strType) + parameter; } else if (strType == "episodes") @@ -915,8 +915,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr else if (m_field == FieldWriter) query = GetField(FieldId, strType) + 
negate + " IN (SELECT idEpisode FROM writerlinkepisode JOIN actors ON actors.idActor=writerlinkepisode.idWriter WHERE actors.strActor" + parameter + ")"; else if ((m_field == FieldLastPlayed || m_field == FieldDateAdded) && - (m_operator.operation == FilterOperationLessThan || m_operator.operation == FilterOperationBefore || - (m_operator.operation == FilterOperationInTheLast && m_operator.negated))) + (m_operator.GetOperation() == FilterOperationLessThan || m_operator.GetOperation() == FilterOperationBefore || + (m_operator.GetOperation() == FilterOperationInTheLast && m_operator.IsNegated()))) query = GetField(m_field, strType) + " IS NULL OR " + GetField(m_field, strType) + parameter; else if (m_field == FieldStudio) query = GetField(FieldId, strType) + negate + " IN (SELECT idEpisode FROM episodeview WHERE strStudio" + parameter + ")"; @@ -939,8 +939,8 @@ CStdString CSmartPlaylistRule::GetWhereClause(const CDatabase &db, const CStdStr query = table + ".idFile" + negate + " IN (SELECT DISTINCT idFile FROM streamdetails WHERE fVideoAspect " + parameter + ")"; if (m_field == FieldPlaycount && strType != "songs" && strType != "albums") { // playcount IS stored as NULL OR number IN video db - if ((m_operator.operation == FilterOperationEquals && it->Equals("0") == !m_operator.negated) || - (m_operator.operation == FilterOperationLessThan)) + if ((m_operator.GetOperation() == FilterOperationEquals && it->Equals("0") == !m_operator.IsNegated()) || + (m_operator.GetOperation() == FilterOperationLessThan)) { CStdString field = GetField(FieldPlaycount, strType); query = field + " IS NULL OR " + field + parameter; @@ -1018,7 +1018,7 @@ CStdString CSmartPlaylistRuleCombination::GetWhereClause(const CDatabase &db, co } if (playlist.GetType().Equals(strType)) { - if (it->m_operator.operation == FilterOperationEquals && it->m_operator.negated) + if (it->m_operator.GetOperation() == FilterOperationEquals && it->m_operator.IsNegated()) currentRule.Format("NOT (%s)", 
playlistQuery.c_str()); else currentRule = playlistQuery;
Question: How To Set Time On Windows 10? Step 1: Click the bottom-right clock icon on the taskbar, and select Date and time settings. Or you can right-click the clock icon and click Adjust date/time. Step 2: As the Date and time window opens, you can turn off Set time automatically. How can I change the time on my computer? To set the date and time on your computer: • Press the Windows key on your keyboard to display the taskbar if it isn’t visible. • Right-click the Date/Time display on the taskbar and then choose Adjust Date/Time from the shortcut menu. • Click the Change Date and Time button. • Enter a new time in the Time field. Why is my computer showing the wrong time? To fix your time zone in Windows 10, right-click the system clock in your Taskbar and select Adjust date/time. Under the Time Zone header, check whether the information is correct. If not, select the correct time zone from the drop-down menu. Under Date and Time, click Set the time and date, which opens another window. How do I change the time on my HP laptop Windows 10? Click the date and time in the taskbar, then click Date and time settings. To set your computer clock to update automatically, turn on the Set time automatically setting. To change the date and time manually, click the Change button in the Change date and time section. How do I change the system date and time? Click on Change date and time settings to get the Date and Time menu. (Alternatively, use Start > Control Panel > Date and Time.) Click on Change time zone and select your time zone. When you have the correct time zone, check that the time displayed in the Date & Time tab is correct. How do I change the time on my computer Windows 10? 2 ways to change date and time on Windows 10: 1. Way 1: Change them in Control Panel. 2. Step 1: Click the bottom-right clock icon on the desktop, and tap Change date and time settings in the pop-up small window. 3.
Step 2: As the Date and Time window opens, click Change date and time to continue. How do I change my timezone on Windows 10? To let Windows 10 select and set the Time Zone automatically, click on the Start Button to open the Start Menu. Now in the left pane, select Date & Time. The Date & Time settings are quite simple, as the main overview has it all. You can set the time to adjust automatically or change it manually. How do I fix my computer clock error? Solution 1 – Synchronize your PC’s clock with the default Microsoft Time Server • Restart your computer in Safe mode, • Click the time tab in the bottom right corner of your screen, • Click Change date and time settings… • Click on the Internet Time folder. How do you fix a slow-running clock? Quartz 1. Check the batteries in the back of the clock for power. Replace the batteries if they are bad or corroded. 2. Replace the batteries if the clock is running slow or it rings erratically. 3. Set the time using the minute hand if it is running too fast or slow. 4. Open the back of the clock and inspect it for dust or debris. How do I set the clock to 12 hour on Windows 10? Change 24 Hour Clock to 12 Hour Clock in Windows 10 • Click on the Windows 10 Start button and select Settings. • Click on Time and Language. • Next, click on the Change date and time formats link. • On the next screen, click on Short Time and pick h:mm tt from the drop-down choices. How do I change the time on my computer to 24 hours? Click Control Panel, and then click Clock, Language, and Region. Note: If you are using Control Panel in Classic View, double-click Regional and Language Options, and then skip to step 3. On the Time tab, do one of the following: Change Time format to HH:mm:ss for a 24-hour clock. How do I change the time on my HP laptop? Open Date and Time by clicking the Start button, clicking Control Panel, clicking Clock, Language, and Region, and then clicking Date and Time.
Click the Internet Time tab, and then click Change settings. If you’re prompted for an administrator password or confirmation, type the password or provide confirmation. How do I fix the time on Windows 10? Once you open Control Panel, navigate to Clock, Language and Region section and click on Date and Time. Navigate to Internet Time tab and click Change settings button. In the Server section select time.nist.gov instead of time.windows.com and click Update now. Click OK to save changes. How do I change my desktop time to 12 hours? Summary – How to use a 24 hour clock in Windows 7 1. Click the Start button. 2. Click Control Panel. 3. Click Clock, Language and Region. 4. Click the Change date, time or number format link. 5. Click the Short Time dropdown menu, then click the HH:mm option. 6. Click the Long Time dropdown menu, then click the HH:mm:ss option. Why does Windows clock keep changing? The time on your clock keeps changing to the wrong time. First, make sure your clock is set to the correct time zone. If your time zone is correct you may have a bad CMOS battery but you can get around it by having the system sync more often with the internet time. How do I change the time on Windows 10 home? Change Windows 10 Time & Date. Click the clock on the taskbar and then select Date & Time settings under the calendar that pops up. Then turn off the options to set the time and time zone automatically. If these are enabled, the option to change the date, time, and time zone will be grayed out. How do I change the time and date on my computer permanently? Select Change date and time settings in the bottom of the window that appears (shown below). • In the Date and Time window, under the Date and Time tab, click the Change date and time button. • Make your adjustments and click OK. • Click OK on the main Date and Time window to save the changes. How do I change the date format in Windows 10 to mm dd yyyy? 
If you want to format the date and time with something more unique, you'll need to use Control Panel.

1. Open Control Panel.
2. Click on the Clock, Language, and Region link.
3. Click on the Change date, time, or numbers formats link.
4. Under the Formats tab, click on the Additional settings button.
5. Click on the Time tab.

How do I change the timezone in Windows?

Change Windows Time and Timezone
• Step 1: Double-click the clock located in the right-most corner of the taskbar, then click on Date and time settings.
• Step 2: Switch "Set time automatically" to off and click on the Change button.
• Step 3: Change the date and time and click Change.

How do I change the time on Windows?

Windows 10 – Changing the System Date and Time
1. Right-click on the time in the bottom-right of the screen and select Adjust Date/Time.
2. A window will open. On the left side of the window select the Date & time tab. Then, under "Change date and time", click Change.
3. Enter the time and press Change.
4. The system time has been updated.

How do I change the timezone in CMD?

How to adjust the time zone using Command Prompt
• Open Start.
• Search for Command Prompt, right-click the top result, and select the Run as administrator option.
• Type the following command to confirm the current time zone and press Enter:
• Type the following command and note the time zone that you want to use and press Enter:

How do I change my laptop clock to a 12-hour format in Windows?

Note: If you are using Control Panel in Classic View, double-click Regional and Language Options, and then skip to step 3. On the Time tab, do one of the following: change Time format to HH:mm:ss for a 24-hour clock, or change Time format to hh:mm:ss tt for a 12-hour clock.

How do I change the lock screen time in Windows 10?

Change the Windows 10 lock screen time format
1. Open Control Panel.
2. Go to the following path: Control Panel\Clock, Language, and Region. Here, click on the Region icon.
3. The following window will appear: there, adjust the short time format you want to see on the lock screen.
4. Now switch to the Administrative tab and click the "Copy settings" button.

How do I change the time in Outlook 2016?

Change the Time Zone Formats in Outlook
• Click the File tab.
• Click Options.
• Click Calendar.
• Under Time Zones, type a description in the Label box.
• In the Time zone list, choose the format that you want.
• Select the "Adjust for daylight saving time" check box so the computer clock adjusts automatically.

How do I change the language in Windows 10?

Do the following to open Time & Language options in Windows 10:
1. Tap the Windows key, then either type Settings and hit Enter, or locate the Settings link in the Start Menu and click on it.
2. Select Time & Language from the options displayed in the Settings window.

What does the Windows key look like?

The Windows key is a standard key on most keyboards on computers built to use a Windows operating system. It is labeled with a Windows logo, and is usually placed between the Ctrl and Alt keys on the left side of the keyboard; there may be a second identical key on the right side as well.

How do I change the time on my Dell laptop?

To manually change the computer time and time zone, perform the following steps:
• Press the Windows + X keys to open the Power User Tasks Menu, OR swipe in from the right side of the screen to open the Charms menu and select Settings.
• Select Control Panel, then select the Large Icons view.
• Select Change date and time.

Why is my Windows 10 time wrong?

Windows may simply be set to the wrong time zone, and every time you fix the time it resets itself to that time zone when you reboot. To fix your time zone in Windows 10, right-click the system clock in your Taskbar and select Adjust date/time. Under the Time Zone header, check whether the information is correct.

How do I fix the wrong time on Windows 10?

Instructions to fix the Windows 10 time being wrong:
1. Press Windows key + R.
2. Type services.msc.
3. Click Windows Time in the Name column.
4. Right-click it, then click Properties.
5. Change Startup type to Automatic (if it's not already set to Automatic).
6. Click Start if the service isn't started.

Why is my Windows clock wrong?

Sometimes a user's Windows clock can go awry and display the incorrect date or time, usually due to hardware issues, a temporary loss of Internet connectivity, or online synchronization problems. In the Date and Time settings window, click the Internet Time tab and then select Change Settings.

Photo in the article by "Pixabay" https://pixabay.com/vectors/clock-watch-analog-time-hour-42320/

OS Today
WdfWorkItemCreate function (wdfworkitem.h)

[Applies to KMDF and UMDF]

The WdfWorkItemCreate method creates a framework work-item object, which can subsequently be added to the system's work-item queue.

Syntax

NTSTATUS WdfWorkItemCreate(
  [in]  PWDF_WORKITEM_CONFIG   Config,
  [in]  PWDF_OBJECT_ATTRIBUTES Attributes,
  [out] WDFWORKITEM            *WorkItem
);

Parameters

[in] Config: A pointer to a caller-allocated WDF_WORKITEM_CONFIG structure that the driver must have already initialized by calling WDF_WORKITEM_CONFIG_INIT.

[in] Attributes: A pointer to a caller-allocated WDF_OBJECT_ATTRIBUTES structure that specifies attributes for the work-item object.

[out] WorkItem: A pointer to a variable that receives a handle to the new work-item object.

Return value

WdfWorkItemCreate returns STATUS_SUCCESS if the operation succeeds. Otherwise, this method might return one of the following values:

STATUS_INVALID_PARAMETER: An invalid parameter was supplied.
STATUS_INVALID_DEVICE_REQUEST: The work-item object's parent is not a device object or the ancestor of a device object.
STATUS_INSUFFICIENT_RESOURCES: There were insufficient system resources to create a work-item object.
STATUS_WDF_INCOMPATIBLE_EXECUTION_LEVEL: The AutomaticSerialization member in the WDF_WORKITEM_CONFIG structure that the Config parameter points to is TRUE, but the parent object's execution level is not WdfExecutionLevelPassive.
STATUS_WDF_PARENT_NOT_SPECIFIED: The Attributes parameter was NULL, or the ParentObject member of the WDF_OBJECT_ATTRIBUTES structure that Attributes specifies was NULL.

Remarks

After a driver calls WdfWorkItemCreate to create a work item, it typically stores item-specific information in the context memory of the work-item object. The driver's EvtWorkItem callback function, which performs the work item's tasks, can access this information to determine the tasks that it must perform.
(For more information about storing information in the context memory, see Framework Object Context Space.) After storing work-item information, the driver must call WdfWorkItemEnqueue to add the work item to the system's work-item queue. When a system worker thread becomes available, the thread removes the work item from the queue and calls the EvtWorkItem callback function. When the driver creates a work-item object, it must specify a parent object for the work-item object in the ParentObject member of the WDF_OBJECT_ATTRIBUTES structure. The parent object must be a framework device object or any object whose chain of parents leads to a framework device object. The framework will delete the work-item object when it deletes the device object. To delete the work-item object earlier, the driver can call WdfObjectDelete, as described in Using Framework Work Items. The driver can retrieve a work item's parent object by calling WdfWorkItemGetParentObject. If your driver provides EvtCleanupCallback or EvtDestroyCallback callback functions for the work-item object, note that the framework calls these callback functions at IRQL = PASSIVE_LEVEL. For more information about work items, see Using Framework Work Items. Examples The following code example initializes a WDF_OBJECT_ATTRIBUTES structure, initializes a WDF_WORKITEM_CONFIG structure, and calls WdfWorkItemCreate. 
NTSTATUS status = STATUS_SUCCESS;
PWORKER_ITEM_CONTEXT context;
WDF_OBJECT_ATTRIBUTES attributes;
WDF_WORKITEM_CONFIG workitemConfig;
WDFWORKITEM hWorkItem;

WDF_OBJECT_ATTRIBUTES_INIT(&attributes);
WDF_OBJECT_ATTRIBUTES_SET_CONTEXT_TYPE(
    &attributes,
    WORKER_ITEM_CONTEXT
    );
attributes.ParentObject = FdoData->WdfDevice;

WDF_WORKITEM_CONFIG_INIT(
    &workitemConfig,
    CallbackFunction
    );

status = WdfWorkItemCreate(
    &workitemConfig,
    &attributes,
    &hWorkItem
    );
if (!NT_SUCCESS(status)) {
    return status;
}

Requirements

Target Platform: Universal
Minimum KMDF version: 1.0
Minimum UMDF version: 2.0
Header: wdfworkitem.h (include Wdf.h)
Library: Wdf01000.sys (KMDF); WUDFx02000.dll (UMDF)
IRQL: <= DISPATCH_LEVEL
DDI compliance rules: DriverCreate, KmdfIrql, KmdfIrql2

See also

WdfWorkItemEnqueue
January 20, 2013  Posted at 8:54 pm  Not So Stupid Questions

[To celebrate my first year of programming I will ask a 'stupid' question daily on my blog for a year, to make sure I learn at least 365 new things during my second year as a developer]

What is the difference between a simulator and an emulator?

I have to admit, I've been using these two words interchangeably a few times. Not really because I didn't understand there was a difference, but because I wasn't quite sure what the difference was, and whether it mattered. So what is the difference?

A simulator simulates the original, so it looks and behaves like the real thing on the surface, but without actually replicating how the original really works internally. An emulator, on the other hand, mimics the original and also its inner workings, which (in theory, anyway) should make it closer to the real thing. It kind of recreates the original instead of simulating it / faking it.

Just looking at the definitions of the words should indicate the difference:

Definition of "emulate":
1. To strive to equal or excel, especially through imitation.
2. To compete with successfully; approach or attain equality with.
3. Computer Science: To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

Definition of "simulate":
1. a. To have or take on the appearance, form, or sound of; imitate. b. To make in imitation of or as a substitute for.
2. To make a pretense of; feign.
3. To create a representation or model of (a physical system or particular situation, for example).

The recommendation is that you always try on an actual device; the simulator might leave you with more surprises if you don't. Probably not good ones.
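To make the distinction concrete, here is a toy Python sketch (my illustration, not from the original post): the "emulator" actually executes instruction semantics, while the "simulator" only reproduces surface behavior for inputs it already knows about.

```python
# Toy illustration: an "emulator" reproduces the inner workings (it actually
# executes instructions), while a "simulator" only mimics surface behavior.

def emulate(program):
    """Execute each (op, value) instruction the way the real machine would."""
    acc = 0
    for op, value in program:
        if op == "ADD":
            acc += value
        elif op == "MUL":
            acc *= value
    return acc

def simulate(program):
    """Fake the result for known inputs without modeling the machine at all."""
    canned = {(("ADD", 2), ("MUL", 3)): 6}
    return canned.get(tuple(program))  # returns None for anything unseen

program = [("ADD", 2), ("MUL", 3)]
print(emulate(program))   # 6 -- computed by actually running the instructions
print(simulate(program))  # 6 -- looked up, not computed

new_program = [("ADD", 5)]
print(emulate(new_program))   # 5 -- the emulator still works
print(simulate(new_program))  # None -- the simulation breaks down
```

Both look identical from the outside until you feed them something the simulation was never prepared for, which is exactly the kind of surprise the post warns about.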
Here are a few good discussions:
http://stackoverflow.com/questions/1584617/simulator-or-emulator-what-is-the-difference
What's the difference between emulation and simulation?
What's the difference between simulation and emulation
Binomial coefficient (mathematics)

Learn about this topic in these articles:

major reference

(Figure 1: Ferrers' partitioning diagram for 14.)

In combinatorics: Binomial coefficients

An ordered set a1, a2, …, ar of r distinct objects selected from a set of n objects is called a permutation of n things taken r at a time. The number of permutations is given by nPr = n!/(n - r)!.
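As a quick check of these counting formulas (not part of the encyclopedia entry), Python's math module exposes both counts directly:

```python
from math import comb, perm

# Number of permutations of n things taken r at a time: n!/(n-r)!
print(perm(5, 2))  # 20

# Binomial coefficient ("n choose r"): n!/(r!(n-r)!)
print(comb(5, 2))  # 10
```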
Android Animation: how do I wait until it has finished?

I would like to wait until an animation in an Android ImageView has finished before program execution continues. What is the correct way to do this?

(In this context, "finished" means the animation runs through all of its frames exactly once and stops on the last one. I am not sure whether this animation should use android:oneshot="true", because I will be using it several times; it does not run continuously, but intermittently.)

Research / guesses:

A. At heart, my question seems to be a Java threading question, because the Android AnimationDrawable implements java.lang.Runnable. So threads may be the solution. Perhaps the answer involves join?

B. The approach others seem to have taken is an AnimationListener. This seems complicated and needlessly heavy for my simple needs. Besides, I don't know exactly how to do it.

C. The AnimationDrawable class has a (boolean) isRunning method, which could probably be used in a while loop (i.e. while (anim.isRunning()) { wait(100ms) }). But I have a feeling this is the wrong approach, even though something similar seems to be mentioned in the concurrency tutorial.

Code snippet:

this.q_pic_view.setImageResource(0);
this.q_pic_view.setBackgroundResource(R.drawable.animation_test);
AnimationDrawable correct_animation = (AnimationDrawable) this.q_pic_view.getBackground();
correct_animation.start();

//here I tried to implement option C but it didn't work
while (correct_animation.isRunning()) {
    try {
        Thread.sleep(20);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

The animation:

<?xml version="1.0" encoding="utf-8"?>
<animation-list android:id="@+id/AnimTest" android:oneshot="true"
    xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/animtest001" android:duration="33"/>
    <item android:drawable="@drawable/animtest002" android:duration="100"/>
    <item android:drawable="@drawable/animtest003" android:duration="66"/>
    <item android:drawable="@drawable/animtest004" android:duration="66"/>
    <item android:drawable="@drawable/animtest005" android:duration="33"/>
    <item android:drawable="@drawable/animtest006" android:duration="66"/>
    <item android:drawable="@drawable/animtest007" android:duration="33"/>
    <item android:drawable="@drawable/animtest008" android:duration="66"/>
    <item android:drawable="@drawable/animtest009" android:duration="100"/>
    <item android:drawable="@drawable/animtest010" android:duration="100"/>
</animation-list>

One possible way to achieve the desired effect, although it does not answer my question, is to postpone execution of the remaining code by doing something like this:

int duration = 0;
//Add all of the frames together
for (int i = 0; i < my_animation.getNumberOfFrames(); i++) {
    duration = duration + correct_animation.getDuration(i);
}
//delay the execution
Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    public void run() {
        DoYourNextStuff();
    }
}, duration); //delay is here

Edit: There are many ways to solve this. The answer below may well solve the problem, but I have not tested it. In the end I simply waited the required amount of time (at first), then switched to an AsyncTask (which has a completion handler) and started my animation directly from the AsyncTask's progress-update callback. In the latest iteration I use a threaded SurfaceView to run the animation. This last approach is by far the fastest and best (for other reasons that were asked about in another post). Hope this helps someone.

Answers:

Suggestion:
• Create an object that encapsulates the animation timing.
• Inside the object, keep a thread or a timer.
• Provide methods to start() the animation and awaitCompletion().
• Use a private final Object completionMonitor field to track completion: synchronize on it and use wait() and notifyAll() to coordinate awaitCompletion().

Code snippet:

final class Animation {
    final Thread animator;

    public Animation() {
        animator = new Thread(new Runnable() {
            public void run() {
                // logic to make the animation happen
            }
        });
    }

    public void startAnimation() {
        animator.start();
    }

    public void awaitCompletion() throws InterruptedException {
        animator.join();
    }
}

You could also use a ThreadPoolExecutor with a single thread, or a ScheduledThreadPoolExecutor, and capture each frame of the animation as a Callable. Submit the sequence of Callables and use invokeAll() or a CompletionService to block the interested thread until the animation completes.

The simplest way is to post a (delayed) Runnable to the UI thread:

Animation fadeout = new AlphaAnimation(1.f, 0.f);
fadeout.setDuration(500);
view.startAnimation(fadeout);
view.postDelayed(new Runnable() {
    @Override
    public void run() {
        view.setVisibility(View.GONE);
    }
}, 500);

This gets the job done painlessly. And never (never, never, never!) try to block the UI thread in Android. If you do, the phone freezes and you won't see the animation anyway. If you need to wait for a while, use another thread.

Use an animation listener to hook into the animation's lifecycle:

Animation fadeout = new AlphaAnimation(1.f, 0.f);
fadeout.setDuration(500);
final View viewToAnimate = view;
fadeout.setAnimationListener(new AnimationListener() {
    @Override
    public void onAnimationStart(Animation animation) {}

    @Override
    public void onAnimationRepeat(Animation animation) {}

    @Override
    public void onAnimationEnd(Animation animation) {
        viewToAnimate.setVisibility(View.GONE);
    }
});
view.startAnimation(fadeout);

Another alternative is ObjectAnimator. Again, as mentioned above, you will have to use a listener:

//get an image resource
ImageView imgView = (ImageView) findViewById(R.id.some_image);

//create and init an ObjectAnimator
ObjectAnimator animation;
animation = ObjectAnimator.ofFloat(imgView, "rotationY", 0.0f, 360f);

//set the duration of each iteration
animation.setDuration(1500);

//set number of iterations
animation.setRepeatCount(1);

//set the interpolator
animation.setInterpolator(new AccelerateDecelerateInterpolator());

//add a listener to check when the animation ends
animation.addListener(new AnimatorListenerAdapter() {
    @Override
    public void onAnimationEnd(Animator animation) {
        //postFirstAnimation(): this function is called after the animation ends
        postFirstAnimation();
    }
});

//start the animation
animation.start();
INtime SDK Help (INtime SDK v7.1): DeleteRtReferenceObject

Deletes a reference object.

#include <rt.h>

BOOLEAN DeleteRtReferenceObject(
    RTHANDLE hRef
);

Parameters

hRef: The handle for an existing reference object.

Remarks

The reference object refers to a real object ("the target object").

If the target object is a reference-counted object:
If the target object is not a reference-counted object:

Return Values

TRUE: Success.
FALSE: Failure. The function sets GetLastRtError to one of these values:

E_TYPE: The handle does not indicate a reference object.
E_EXIST: The handle does not refer to a valid object, OR the remote object indicated by the reference object does not exist.
E_RESOURCE_LIMIT: Insufficient resources are available to signal the remote node.

Requirements

Versions: INtime 4.0
Defined in: intime/rt/include/rt.h
Include: rt.h
Link to: rt.lib

See Also
This link looks very practical and "hands-on". But in its example Python code for node discovery and connections, it repeatedly warns that the code is for "educational purposes" and not for the real world.

Could you please explain what the caveats of the implementation shown in the link are, and how to correct them for the real world.

Example of such limitations highlighted:

Example of code

Is the limitation the fact that "except Exception" catches nearly everything, which is not good practice? Would putting in a safeguard (like "don't call recursively indefinitely") be a solution?

It's bad practice to use recursion when a while loop would suffice. Recursive code is harder to read, harder to debug, harder for the compiler to optimize, and it may cause a stack overflow. Besides, connections shouldn't be made from the main thread.

EDIT: Maybe it's because there's no limit on the number of attempts after which the code should assume that the user went offline. Otherwise the stack will grow, and even if the stack doesn't overflow, the iterator may go out of bounds.

Using recursion is not bad practice IMO; it can be used to do a lot of interesting things, although I am not the best person to comment on such things. There are a few interesting comments in this thread about recursion vs. while loops: https://www.reddit.com/r/csharp/comments/rnoy7h/difference_between_tail_recursion_and_while_loop/

As far as the code is concerned, I could not find a line that would stop the recursion at some point, so adding an if statement with some condition to terminate could make it usable in production.

Is the limitation about the fact that "except Exception" excepts nearly everything and that is not a good practice?

I don't think "except Exception" is necessarily bad practice. Maybe recursing immediately when it happens, like this snippet does, is.

Would putting a security (like don't call recursively indefinitely) be a solution?
Yes, but such a safeguard would not need to be "don't call recursively indefinitely" if Python supported tail-call optimization. You can keep that in mind if you see a snippet like this in a different programming language.
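A minimal sketch of the bounded-retry idea suggested in the answers above; the function names here are hypothetical and are not taken from the linked tutorial:

```python
import time

def connect_with_retries(connect, max_retries=5, delay_s=0.01):
    """Retry `connect` in a loop instead of recursing on every exception."""
    for attempt in range(1, max_retries + 1):
        try:
            return connect()
        except Exception:  # in real code, catch the specific socket errors
            if attempt == max_retries:
                raise  # give up instead of growing the call stack forever
            time.sleep(delay_s)

# Simulate a peer that fails twice before accepting the connection.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("peer unavailable")
    return "connected"

print(connect_with_retries(flaky_connect))  # connected (on the 3rd attempt)
```

The loop gives the same retry behavior as the recursive version, but with a hard attempt limit and a constant-size call stack.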
How to Calculate 1 Divided by 96 Using Long Division

In this article we are going to work out how to divide 1 by 96 using the long division method. We'll do this step by step and show you the exact process you need to go through every time you do long division. If you practice this enough, you will be able to apply it to any long division problem you might have.

Let's first look at this division visually:

1 / 96

I've labelled the parts we need to be concerned with to show you what each means before we get to the long division:

• The first number, 1, is called the dividend.
• The second number, 96, is called the divisor.

Now that we know these terms, we can dive into the step-by-step process to divide 1 by 96 using long division.

Step 1

To get started, put your divisor on the left of the bar and the dividend on the right:

96)1

Step 2

How many times does the divisor 96 go into the first digit of the dividend, 1? The answer is 0 times, so we put 0 at the top:

  0
96)1

Step 3

Now multiply the divisor by the result of the previous step (96 x 0 = 0) and write that answer below the dividend:

  0
96)1
   0

Step 4

Subtract the result of the previous step from the digit of the dividend (1 - 0 = 1) and write that answer below:

  0
96)1
  -0
   1

What Is 1 Divided By 96 Using Long Division?

We have reached the end of the long division method when there are no more digits to move down from the dividend. The answer is the top number, followed by any remainder that is left over:

0 remainder 1

Link To or Reference This Page

If you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support!

• "How to Calculate 1 Divided by 96 Using Long Division". DivisibleBot.com. Accessed on July 28, 2021. https://divisiblebot.com/long-division/1-divided-by-96/.

• "How to Calculate 1 Divided by 96 Using Long Division". DivisibleBot.com, https://divisiblebot.com/long-division/1-divided-by-96/. Accessed 28 July, 2021.

• How to Calculate 1 Divided by 96 Using Long Division. DivisibleBot.com. Retrieved from https://divisiblebot.com/long-division/1-divided-by-96/.

More Calculations

In addition to solving 1 divided by 96 using long division, there are a couple of other methods you could have used:

• Type 1 divided by 96 into a calculator and you'd get 0.0104.
• 1/96 could also be shown as a mixed fraction: 0 1/96.
• In the mixed fraction, the numerator is the same as the remainder (1), the denominator is our original divisor (96), and the whole number is our final answer (0).

Next Long Division Problem

Here is the next problem in the series for you to try. Grab a pencil and paper and try to solve it first before you read the article.

What is 1 Divided By 97 Using Long Division?
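The worked result can be double-checked in a couple of lines of Python:

```python
# Check the worked example: 1 / 96 as quotient and remainder, then as a decimal.
quotient, remainder = divmod(1, 96)
print(quotient, remainder)  # 0 1  -> "0 remainder 1", i.e. the mixed form 0 1/96
print(round(1 / 96, 4))     # 0.0104
```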
Hongwang Hulian (www.68idc.cn)

Study notes on Wang Shuang's "Assembly Language" (2nd edition): Experiment 13 (2)

Source: Internet   Author: anonymous   Date: 2015-08-13 07:42

Program:

assume cs:code
code segment
start:
    mov ax,0b800h
    mov es,ax
    mov di,160*12+16*2
    mov bx,offset s-offset se
    mov cx,80
s:
    mov byte ptr es:[di],'!'
    add di,2
    int 7ch
se:
    nop
    mov ax,4c00h
    int 21h
code ends
end start

7ch interrupt routine (installer):

assume cs:code
code segment
start:
    mov ax,cs
    mov ds,ax
    mov si,offset do0

    mov ax,0
    mov es,ax
    mov di,200h

    mov cx,offset do0end-offset do0
    cld
    rep movsb

    mov ax,0
    mov ds,ax
    mov word ptr ds:[7ch*4],200H
    mov word ptr ds:[7ch*4+2],0H

    mov ax,4c00h
    int 21h

do0:
    dec cx
    jcxz ok
    push bp
    mov bp,sp
    add [bp+2],bx
    pop bp
ok:
    iret

do0end:
    nop

code ends
end start

(The do0 handler installed at 0:200h implements the function of the loop instruction: it decrements cx and, while cx is not zero, adds the displacement passed in bx (the negative value offset s-offset se) to the return address saved on the stack, so each int 7ch jumps back to label s and the program prints 80 '!' characters on row 12 of the screen.)
Why use Airflow plugins

I had a fundamental question: when we want to pull data from external sources, we use Airflow plugins. My question is, why? A similar thing can be done by executing an external Python script from the Airflow DAGs instead of creating a hook and operator.

Is there a reason to prefer an Airflow plugin over an external script triggered from Airflow, let's say using a BashOperator?

Airflow plugins help you minimize duplicated code. You are welcome to write everything with Python operators, but if you are constantly writing the same set of Python operators, it may be tough to reuse that code. Furthermore, using hooks/operators also gives you a more natural interface to Airflow's built-in connections manager. You can find more information in this doc:
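The reuse argument can be illustrated without Airflow installed. In this sketch, ApiHook and its methods are made-up names standing in for the hook pattern; they are not Airflow's real API:

```python
# Airflow-free sketch of the reuse argument: a "hook" centralizes connection
# logic that would otherwise be copy-pasted into every external script.
# (ApiHook is an illustrative name, not a real Airflow class.)

class ApiHook:
    """One place to hold connection logic that many tasks can share."""

    def __init__(self, conn_id):
        # In real Airflow, conn_id would be resolved through the
        # connections manager instead of being hard-coded here.
        self.conn_id = conn_id

    def fetch(self, endpoint):
        # Stand-in for the actual request code every task would reuse.
        return f"GET {self.conn_id}/{endpoint}"

# Every task reuses the hook instead of re-implementing the request code.
hook = ApiHook("https://example.internal")
print(hook.fetch("orders"))  # GET https://example.internal/orders
print(hook.fetch("users"))   # GET https://example.internal/users
```

With external scripts triggered by a BashOperator, each script tends to carry its own copy of this logic and its own credentials handling, which is exactly what the hook pattern avoids.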
Compose behaviors using Subtrees (BehaviorTree.CPP documentation, version 4.0.2)

We can build large-scale behaviors by inserting smaller and reusable behaviors into larger ones. In other words, we want to create hierarchical behavior trees and make our trees composable. This can be achieved by defining multiple trees in the XML and using the node SubTree to include one tree in the other.

CrossDoor behavior

This example is inspired by a popular article about behavior trees. It is also the first practical example that uses Decorators and Fallback.

(figure: crossdoor_subtree.svg)

<root BTCPP_format="4">
  <BehaviorTree ID="MainTree">
    <Sequence>
      <Fallback>
        <Inverter>
          <IsDoorClosed/>
        </Inverter>
        <SubTree ID="DoorClosed"/>
      </Fallback>
      <PassThroughDoor/>
    </Sequence>
  </BehaviorTree>

  <BehaviorTree ID="DoorClosed">
    <Fallback>
      <OpenDoor/>
      <RetryUntilSuccessful num_attempts="5">
        <PickLock/>
      </RetryUntilSuccessful>
      <SmashDoor/>
    </Fallback>
  </BehaviorTree>
</root>

The desired behavior is:

• If the door is open, PassThroughDoor.
• If the door is closed, try OpenDoor, or try PickLock up to 5 times, or, finally, SmashDoor.
• If at least one of the actions in the DoorClosed subtree succeeded, then PassThroughDoor.

The CPP code

We will not show the detailed implementation of the dummy actions in CrossDoor. The only interesting piece of code is probably registerNodes.
class CrossDoor
{
public:
  void registerNodes(BT::BehaviorTreeFactory& factory);

  // SUCCESS if _door_open != true
  BT::NodeStatus isDoorClosed();

  // SUCCESS if _door_open == true
  BT::NodeStatus passThroughDoor();

  // After 3 attempts, will open a locked door
  BT::NodeStatus pickLock();

  // FAILURE if door locked
  BT::NodeStatus openDoor();

  // WILL always open a door
  BT::NodeStatus smashDoor();

private:
  bool _door_open = false;
  bool _door_locked = true;
  int _pick_attempts = 0;
};

// Helper method to make registering less painful for the user
void CrossDoor::registerNodes(BT::BehaviorTreeFactory& factory)
{
  factory.registerSimpleCondition(
      "IsDoorClosed", std::bind(&CrossDoor::isDoorClosed, this));

  factory.registerSimpleAction(
      "PassThroughDoor", std::bind(&CrossDoor::passThroughDoor, this));

  factory.registerSimpleAction(
      "OpenDoor", std::bind(&CrossDoor::openDoor, this));

  factory.registerSimpleAction(
      "PickLock", std::bind(&CrossDoor::pickLock, this));

  factory.registerSimpleCondition(
      "SmashDoor", std::bind(&CrossDoor::smashDoor, this));
}

int main()
{
  BehaviorTreeFactory factory;

  CrossDoor cross_door;
  cross_door.registerNodes(factory);

  // In this example a single XML contains multiple <BehaviorTree>
  // To determine which one is the "main one", we should first register
  // the XML and then allocate a specific tree, using its ID
  factory.registerBehaviorTreeFromText(xml_text);
  auto tree = factory.createTree("MainTree");

  // helper function to print the tree
  printTreeRecursively(tree.rootNode());

  tree.tickWhileRunning();
  return 0;
}
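For intuition about how the control nodes combine, here is a library-free Python sketch of the tick semantics. This is not the BehaviorTree.CPP API: the Inverter+IsDoorClosed pair from the XML is folded into a plain is_door_open check, and RetryUntilSuccessful is approximated by a Fallback over five attempts.

```python
# Library-free sketch: a Sequence succeeds only if every child succeeds,
# a Fallback succeeds on the first child that succeeds.

def sequence(*children):
    return lambda: all(child() for child in children)

def fallback(*children):
    return lambda: any(child() for child in children)

# Dummy actions mirroring the XML: the door starts closed and locked,
# and the lock gives way after 3 pick attempts.
state = {"open": False, "locked": True, "picks": 0}

def is_door_open():
    return state["open"]

def open_door():
    if not state["locked"]:
        state["open"] = True
    return state["open"]

def pick_lock():
    state["picks"] += 1
    if state["picks"] >= 3:
        state["locked"] = False
        state["open"] = True
    return state["open"]

def smash_door():
    state["open"] = True
    return True

def pass_through_door():
    return state["open"]

# RetryUntilSuccessful(5) approximated by a Fallback over 5 attempts.
door_closed_subtree = fallback(open_door, fallback(*[pick_lock] * 5), smash_door)
main_tree = sequence(fallback(is_door_open, door_closed_subtree), pass_through_door)

print(main_tree())     # True -- the door was opened by picking the lock
print(state["picks"])  # 3
```

Ticking the tree takes the same path the bullet list describes: the door is closed, OpenDoor fails on the locked door, PickLock succeeds on the third attempt, and PassThroughDoor then succeeds.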
W3cubDocs / SVG

<feConvolveMatrix>

The <feConvolveMatrix> SVG filter primitive applies a matrix convolution filter effect. A convolution combines pixels in the input image with neighboring pixels to produce a resulting image. A wide variety of imaging operations can be achieved through convolutions, including blurring, edge detection, sharpening, embossing and beveling.

A matrix convolution is based on an n-by-m matrix (the convolution kernel) which describes how a given pixel value in the input image is combined with its neighboring pixel values to produce a resulting pixel value. Each result pixel is determined by applying the kernel matrix to the corresponding source pixel and its neighboring pixels. The basic convolution formula which is applied to each color value for a given pixel is:

COLOR(X,Y) = (
  SUM I=0 to [orderY-1] {
    SUM J=0 to [orderX-1] {
      SOURCE(X-targetX+J, Y-targetY+I) * kernelMatrix(orderX-J-1, orderY-I-1)
    }
  }
) / divisor + bias * ALPHA(X,Y)

where "orderX" and "orderY" represent the X and Y values for the 'order' attribute, "targetX" represents the value of the 'targetX' attribute, "targetY" represents the value of the 'targetY' attribute, "kernelMatrix" represents the value of the 'kernelMatrix' attribute, "divisor" represents the value of the 'divisor' attribute, and "bias" represents the value of the 'bias' attribute.

Note in the above formulas that the values in the kernel matrix are applied such that the kernel matrix is rotated 180 degrees relative to the source and destination images in order to match convolution theory as described in many computer graphics textbooks.
To illustrate, suppose you have an input image which is 5 pixels by 5 pixels, whose color values for one of the color channels are as follows:

  0  20  40 235 235
100 120 140 235 235
200 220 240 235 235
225 225 255 255 255
225 225 255 255 255

and you define a 3-by-3 convolution kernel as follows:

1 2 3
4 5 6
7 8 9

Let's focus on the color value at the second row and second column of the image (source pixel value is 120). Assuming the simplest case (where the input image's pixel grid aligns perfectly with the kernel's pixel grid) and assuming default values for the attributes 'divisor', 'targetX' and 'targetY', the resulting color value will be:

(9*0 + 8*20 + 7*40 + 6*100 + 5*120 + 4*140 + 3*200 + 2*220 + 1*240) / (9+8+7+6+5+4+3+2+1)

Usage context

Categories: Filter primitive element
Permitted content: Any number of the following elements, in any order: <animate>, <set>

Attributes

Global attributes
Specific attributes

DOM Interface

This element implements the SVGFEConvolveMatrixElement interface.

Example

SVG

<svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <defs>
    <filter id="emboss">
      <feConvolveMatrix kernelMatrix="3 0 0 0 0 0 0 0 -3"/>
    </filter>
  </defs>
  <image xlink:href="/files/12668/MDN.svg" x="0" y="0" height="200" width="200" style="filter:url(#emboss);" />
</svg>

Result

Specifications

Browser compatibility

Desktop (Chrome / Edge / Firefox / Internet Explorer / Opera / Safari):
• Basic support: Yes / Yes / Yes / Yes / Yes / ?
• in: Yes / Yes / Yes / Yes / Yes / ?
• kernelMatrix: Yes / Yes / Yes / Yes / Yes / ?
• bias, divisor, edgeMode, kernelUnitLength, order, preserveAlpha, targetX, targetY: ?

Mobile (Android webview / Chrome for Android / Edge Mobile / Firefox for Android / Opera for Android / iOS Safari / Samsung Internet):
• Basic support: ? / Yes / Yes / Yes / ? / ? / ?
• in: ? / Yes / Yes / Yes / ? / ? / ?
• kernelMatrix: ? / Yes / Yes / Yes / ? / ? / ?
• bias, divisor, edgeMode, kernelUnitLength, order, preserveAlpha, targetX, targetY: ?

See also

© 2005–2018 Mozilla Developer Network and individual contributors. Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later. https://developer.mozilla.org/en-US/docs/Web/SVG/Element/feConvolveMatrix
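The worked example above can be verified numerically with a short, browser-independent Python sketch:

```python
# Reproduce the worked example from the text: apply the rotated 3x3 kernel
# to the source pixel at row 1, column 1 (value 120) of the 5x5 channel.
source = [
    [  0,  20,  40, 235, 235],
    [100, 120, 140, 235, 235],
    [200, 220, 240, 235, 235],
    [225, 225, 255, 255, 255],
    [225, 225, 255, 255, 255],
]
kernel = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

acc = 0
for i in range(3):        # rows of the 3x3 neighborhood
    for j in range(3):    # columns of the 3x3 neighborhood
        # kernel rotated 180 degrees, as the formula requires
        acc += source[i][j] * kernel[2 - i][2 - j]

divisor = sum(sum(row) for row in kernel)  # default divisor = sum of the kernel
print(acc)                       # 3480
print(round(acc / divisor, 2))   # 77.33
```

So the expression (9*0 + 8*20 + ... + 1*240) / 45 evaluates to about 77.33 for that channel.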
Write a test program that creates two rectangle objects Getter and setter for the instance variable radius. The class contains: Two private instance variables: center an instance of MyPoint and radius int. Design a class named rectangle to represent a rectangle python Hints: import java. The default values are 1 for both width and height. There is inconsistency in the design introduced for teaching purpose. Write the Book class which uses the Author class written earlier. Use classes to develop applications. Advanced: throw an IllegalStateException with the message "Year out of range! Notes: The constructors take an array of Author i. The MyCircle class uses an instance of MyPoint class created in the previous exercise as its center. If Override is not used and toString is misspelled as ToStringit will be treated as a new method in the subclass, instead of overriding the superclass. Do not submit executable jar or compressed zip, rar, 7z, etc. However, it can be differentiated via the referencing instance. rectangle java and rectangledemo java The three dots is known as varargs variable number of argumentswhich is a new feature introduced in JDK 1. The application shall prompt the user for two complex numbers, print their values, check for real, imaginary and equality, and carry out all the arithmetic operations. A constructor which accepts x, y of the top-left corner, width and height as argument, and converts them into the internal representation i. write a program that creates a date object sets its elapsed time to 10000 Write the codes for the Account class and a test driver to test all the public methods. The Rectangle class contains: Two instance variables width double and length double. Java chapter 9 exercise 9 An int data field named speed that specifies the speed of the fan default SLOW. Hypothetically, assume that all rectangles have the same color. Note that the y-axis of the Java graphics coordinate system is inverted, i. 
Write a program to test this class to display the current year, month, and day. They also override the toString. The no-arg constructor initializes the radius to 1. The data fields value will represent the current time A constructor that constructs a Time object with a specified elapse time since the middle night, January 1, in milliseconds. Java runtime will search the superclass only if it cannot locate the method in this class. Square has no instance variable, but inherits the instance variables width and length from its superclass Rectangle. Github intro to java programming Write a test program that creates an Account object with an account ID of , a balance of , and an annual interest rate of 4. One common way to model these common behaviors is to define an interface called Movable, with abstract methods moveUp , moveDown , moveLeft and moveRight. You are required to perform input validation. Draw the UML diagram for the class. Write the Book class which uses the Author class written earlier. In other words, the MovableCircle composes a MovablePoint, and its radius. For a Book instance says aBook, aBook. Either ask your questions here and show your code, or you're out of luck. Take note that the nextSecond of is Notes: The constructors take an array of Author i. Since the output is generated from several static methods in the class, you may define a static String variable output for storing the output and display it in a message dialog box. Take note that nextMonth for 31 Oct shall be 30 Nov Rated 6/10 based on 62 review Download Create a New Java Class Rectangle
5 Terrible Metrics to Avoid When Evaluating Developer Performance
Foyer · Jan 13, 2023 · 6 min read

Table of contents
• Introduction
• Worst Metrics
• 1. Lines of Code (LoC)
• 2. Commit Count
• 3. Pull Request Count
• 4. Velocity or Story Points
• 5. Code Churn
• 6. Test Coverage
• 7. Impact
• Why are these metrics so commonly misused?

Introduction

As software development teams continue to grow, engineering leaders are increasingly expected to produce tangible numbers to show how their teams are improving. As software developers, we're constantly trying to improve the way we work. But how do we know if we're actually making progress? One way is by tracking key performance indicators (KPIs), which help us measure and analyze our work. However, not all KPIs are created equal. In fact, some can be downright misleading. This ongoing search for software development metrics is a tale as old as time, and while there are many metrics out there, it's important to understand which ones are flawed and should be avoided. In this blog post, we'll take a look at some of the worst metrics for evaluating developer performance and why you should avoid them.

Worst Metrics

1. Lines of Code (LoC)

First, one of the worst metrics to use when evaluating developer performance is the number of lines of code a developer writes. This metric is often seen as a measure of productivity, but it's actually a terrible indicator of a developer's ability. Just because a developer can write a lot of code doesn't mean that it's good code. A developer could write a lot of code that is inefficient, poorly written or even buggy. This metric also doesn't take into account the amount of time it takes to write the code, which is a much better indicator of a developer's productivity. LoC is probably the most well-known metric for evaluating developers, but it's also one of the worst. The problem with LoC is that it's extremely noisy.
According to an analysis of 1 million open source commits, about 70% of LoC is noise. And that's just the intrinsic noise. When you factor in the fact that about 30% of all commits in open source repos are eventually discarded, the noise level increases to around 80%. But it gets even worse. LoC tends to spike when new features are being implemented, which can incentivize rapid code addition and lead to a codebase bogged down by tech debt. Additionally, the value of a line of code can vary greatly depending on the language it's written in. A line of CSS, for example, might take a fraction of the time to write compared to a line of Java, Python, or Ruby. As a result, the "most valuable" developer as measured by LoC might be the one adding the most CSS, whitespace, and third-party libraries. 2. Commit Count The second terrible metric to use when evaluating developer performance is the number of commits they make. Commit count is relatively easy to track, but it's not very useful. While it does have some advantages over LoC (it's not susceptible to noise from trivial line changes and can be a useful indicator of whether a developer is stuck), it lacks signal. In other words, it doesn't tell us much about the quality or impact of the work being done. Commit frequency is typically used to reward teams with a high frequency and improve teams with a lower one. At face value, it might seem like an okay metric, but it’s easy to game. Just create more commits. Even if it’s not gamed, a rise in commits doesn’t indicate more productivity, output, or value delivered. For example, a developer who makes a lot of small, incremental commits might have a high commit count, but that doesn't necessarily mean they're doing the most valuable work. On the other hand, a developer who makes fewer, larger commits might be doing more impactful work, but they would have a lower commit count. 3. 
Pull Request Count Pull request count can give you a sense of release cadence and continuous delivery. However, it’s a vanity metric. It doesn’t take into account the size or difficulty of pull requests, and it’s easy to game. It encourages developers to create an excessive amount of small pull requests just to inflate their metric, which causes bloat in the code review process and creates unnecessary overhead across the team. 4. Velocity or Story Points Velocity points are a common agile approach and can be a great tool when used to forecast delivery and estimations. Unfortunately, team velocity and story points are often misused as performance metrics. When you turn velocity from an estimation tool to a measure of software productivity or output, you end up rewarding teams based on points. This immediately jeopardizes the accuracy of estimations as developers are incentivized to inflate points. 5. Code Churn Code churn is a measure of how much a codebase changes over time. While it might seem like a useful metric at first glance, it's quite noisy and doesn't provide much useful information. For example, a high code churn rate could be caused by several factors, such as refactoring, bug fixing, or the implementation of new features. It's hard to discern any meaningful signal from the noise. 6. Test Coverage Test coverage is a measure of how much of a codebase is covered by automated tests. While it's certainly important to have good test coverage, it's not a good metric for evaluating developer performance. That's because test coverage is influenced by several factors beyond an individual developer's control, such as the complexity of the code and the number of edge cases that need to be covered. Additionally, test coverage is a lagging indicator. In other words, it tells us about the quality of the code that's been written, but it doesn't predict the quality of code that will be written in the future. 7. 
Impact

This is a new metric used by many engineering 'intelligence' platforms, but it's far from intelligent. 'Impact scores' essentially boil down to lines of code with extra steps: they factor in the number of lines touched, new vs. existing code, and so on, all combined into a single "impact score." A lot of companies attempt to use this metric, and in almost all cases developers hate it. Not only does it suffer from the same flaws as lines of code, but it's even more difficult to understand. The biggest flaw in this metric is its name, which suggests to executives and managers how it should be used.

Why are these metrics so commonly misused?

The desire for key performance indicators can lead us to measure the wrong things or use metrics in the wrong ways, even when we see the flaws. Being a manager and not being able to measure is deeply frustrating, and that frustration is powerful. Leaders in charge of thousands of engineers have no idea what's going on or whether their software development process is a healthy one. Open source maintainers of the largest projects in the world have no insight into whether their communities are healthy and growing, or what the impact of their projects even is. These are areas where software metrics would be useful, but the metrics used today aren't great ways of measuring them.

At Foyer, we create tailored dashboards for engineering teams that provide insightful analysis of essential metrics, ensuring the right metrics are measured correctly. By using the right metrics, you can more effectively measure and improve the performance of your team. If you want to learn about healthy patterns and the types of metrics to use, you can book a demo with the Foyer team; we've put together a smart analysis dashboard that can help you manage big engineering teams.
Computing the components of the curvature tensor is tedious, are there other methods?

May 28, 2012 #1
I've been trying to calculate the Riemann curvature tensor for a certain manifold in 3-dimensional Euclidean space using Christoffel symbols of the second kind, and so far everything has gone well. However, it is extremely tedious and takes a very long time; there is also a high probability of making silly mistakes (like misplacing a variable). Are there any faster methods (not necessarily simpler), or is there no alternative?

May 28, 2012 #2
lavinia (Science Advisor, Gold Member)
Re: Computing the components of the curvature tensor is tedious, are there other methods?
The equations simplify with respect to an orthonormal basis. For a surface, if dx and dy are a local orthonormal basis for the 1-forms, then

dx = ω₁₂ ∧ dy and dy = −ω₁₂ ∧ dx
dω₁₂ = −K dV

where K is the Gauss curvature and dV is the volume element of the metric.
Last edited: May 28, 2012
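For reference, the formulas in the reply are the Cartan structure equations for an orthonormal coframe on a surface. Writing ω¹ = dx and ω² = dy for the orthonormal 1-forms (and keeping in mind that the sign convention for ω₁₂ differs between authors), they can be written as:

```latex
d\omega^{1} = \omega_{12} \wedge \omega^{2}, \qquad
d\omega^{2} = -\,\omega_{12} \wedge \omega^{1}, \qquad
d\omega_{12} = -K\,\omega^{1} \wedge \omega^{2} = -K\,dV
```

where K is the Gauss curvature and dV = ω¹ ∧ ω² is the area element. Solving the first two equations algebraically for ω₁₂ and then differentiating once is typically far less work than computing all the Christoffel symbols, which is the point of the reply.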
Intermediate Code Tutorial: Building TodoMVC app Before you start In this tutorial we’re making TodoMVC, a standard introductory app. You can play with example implementations across various frameworks/languages at todomvc.com. This covers building a “complex,” interactive app. If you are using Plasmic to build web pages or as a CMS, this won’t be relevant for you. This tutorial is focused on just the code, and is an intermediate-level intro to the Plasmic code integration. We’ll provide you with the finished design in Plasmic. From there you will be generating the presentational code and adding the required props, event handlers, etc. to bring the designs to life. For an end-to-end tutorial that covers how to design new components and builds an even simpler app than TodoMVC, see the Minitwitter tutorial. This tutorial covers the codegen mode of integrating Plasmic into a codebase, not PlasmicLoader. Codegen is recommended for when you’ll need a lot of interactivity or otherwise plan to hook up your components to a lot of code, which is the case for a task management app. For websites with mostly content and only light interactivity, we recommend using PlasmicLoader. Learn more about the distinction. This tutorial targets also React with Typescript, but you can also follow along with Javascript (which Plasmic also supports). As you learn Plasmic, please let us know all your questions and feedback! Reach us on our Slack community or at [email protected]. Plasmic TodoMVC project Before starting, make sure you have a Plasmic account already, and take a look at the TodoMVC Plasmic project to get a sense of what we’re making. You should make a copy of this project so you can play with it yourself. We have a quick tour of the project that walks through the components that comprise the UI and (more importantly) how they’re modeled in terms of Plasmic’s core concepts. Please have a read to familiarize yourself with the structure of the project and these core concepts. 
You can play with a completed codebase implementation on CodeSandbox: All of the code for this completed implementation is also available on Github: https://github.com/plasmicapp/todomvc Setup Follow these steps to set up your project. First, create a brand-new React codebase named “todomvc”. In this tutorial we’ll be showing Typescript, but Plasmic also works with plain JS projects. npx create-react-app todomvc --typescript cd todomvc/ Start the app, which should open up in your browser: yarn start The above gets you a brand-new generic React codebase. The next instructions are Plasmic-specific. You won’t need to remember these; if you just open up the Plasmic project and click on the Codegen toolbar button, you can also see the instructions laid out there: Now, in a separate terminal, install the plasmic command-line tool: yarn global add @plasmicapp/cli # or npm install -g @plasmicapp/cli Let’s create the initial ~/.plasmic.auth and plasmic.json file with: # from todomvc folder plasmic init If this is your first run, you’ll be prompted to create a Personal Access Token in Plasmic. You’ll then be asked a short series of questions to help guide the initialization. Let’s go with the default choices. At the end, plasmic init will confirm with you to install our small runtime library. $ plasmic init 15:31:23:info Using existing Plasmic credentials at /Users/yang/.plasmic.auth ? What directory should React component files (that you edit) be put into? > ./src/components ? What directory should Plasmic-managed files be put into? (This is relative to ./src/components) > ./plasmic ? What target language should Plasmic generate code in? (tsconfig.json detected, guessing Typescript) > ts 15:31:25:info Successfully created plasmic.json. ? @plasmicapp/react-web is a small runtime required by Plasmic-generated code. Do you want to add it now? Yes 15:31:29:info yarn add -W @plasmicapp/react-web yarn add v1.22.4 [1/4] 🔍 Resolving packages... [2/4] 🚚 Fetching packages... 
[3/4] 🔗 Linking dependencies... warning " > @testing-library/[email protected]" has unmet peer dependency "@testing-library/dom@>=5". [4/4] 🔨 Building fresh packages... success Saved lockfile. success Saved 2 new dependencies. info Direct dependencies └─ @plasmicapp/[email protected] info All dependencies ├─ @plasmicapp/[email protected] └─ [email protected] ✨ Done in 5.89s. 15:31:35:info Successfully added @plasmicapp/react-web dependency. If you decline to install the dependency, you can also always manually add it: yarn add @plasmicapp/react-web # or npm install @plasmicapp/react-web Now ~/.plasmic.auth should have your credentials. Check out the plasmic.json (in the current directory) file if you’re curious what’s inside—it gives some clues as to what’s coming next! Now you’re ready to generate some components. If you haven’t already, you should take a tour of the Plasmic project to understand how it’s structured. First, make a copy of the TodoMVC Plasmic project - open it up and click “Make a copy” in the popup. This is so that you can later make edits to it and sync down the code. In the newly created project, the URL has the following form, ending in the project’s unique ID: https://studio.plasmic.app/projects/XXASAhKsCsKUuxeJL6gacV Copy that project ID, and now sync down all components into /src/components/: plasmic sync --projects {PROJECTID} Next, edit your App.tsx to just render the generated TodoApp component and also fetch the fonts that the designs use: import React from 'react'; import TodoApp from './components/TodoApp'; import ThemeContext from './components/plasmic/PlasmicGlobalVariant__Theme'; function App() { return ( <ThemeContext.Provider value={undefined}> <TodoApp /> </ThemeContext.Provider> ); } export default App; If you glance back in your browser window, you should now see TodoMVC on screen! If you have a tall enough window, you may notice that the TodoApp component is not expanding to the full height of the screen. 
The TodoApp component is indeed set to stretch to the full height of its parent, but its parent is the unstyled #root div. Add the following to your app’s index.css to make our screen elements span at least the full viewport height: #root { min-height: 100vh; display: flex; } #root > * { height: auto; } /* Explanation for the curious: We want the component's root div to cover at least the full height of the page, but *its* children may have height:100%. For that to work, we can't just set min-height; we have to use align-items: stretch, which height:100% will work with. This is generally the snippet you'll want to use if mounting a Plasmic component that's supposed to be the full page. */ At this point we have just the static design in the browser, but next we’ll bring it to life. Plasmic codegen primer You should now have two new directories, src/components/ and src/components/plasmic/. The plasmic subdirectory is owned and maintained by Plasmic; you should treat it as any library code, and not have to maintain or worry about its contents (similar to any component library you pull in via npm). It contains purely presentational components generated from the design, into which you can pass arbitrary props. The other React tsx files placed directly in src/components/, on the other hand, are just empty scaffolding files for corresponding React components; you own these files, and should edit them as you see fit. See Codegen Guide for more details on the difference. Hence, if you look at TodoApp.tsx, you’ll see that it’s just barebones scaffolding that simply renders the presentational library component: // This is a skeleton starter React component generated by Plasmic. // This file is owned by you, feel free to edit as you see fit. 
import * as React from 'react'; import { PlasmicTodoApp, DefaultTodoAppProps } from './plasmic/todo_mvc/PlasmicTodoApp'; // Your component props start with props for variants and slots you defined // in Plasmic, but you can add more here, like event handlers that you can // attach to named nodes in your component. // // If you don't want to expose certain variants or slots as a prop, you can use // Omit to hide them: // // interface TodoAppProps extends Omit<DefaultTodoAppProps, "hideProps1"|"hideProp2"> { // // etc. // } // // You can also stop extending from DefaultTodoAppProps altogether and have // total control over the props for your component. interface TodoAppProps extends DefaultTodoAppProps { children?: never; } function TodoApp(props: TodoAppProps) { // Use PlasmicTodoApp to render this component as it was // designed in Plasmic, by activating the appropriate variants, // attaching the appropriate event handlers, etc. You // can also install whatever React hooks you need here to manage state or // fetch data. // // Props you can pass into PlasmicTodoApp are: // 1. Variants you want to activate, // 2. Contents for slots you want to fill, // 3. Overrides for any named node in the component to attach behavior and data, // 4. Props to set on the root node. // // By default, we are just piping all TodoAppProps here, but feel free // to do whatever works for you. return <PlasmicTodoApp {...props} />; } export default TodoApp; There’s a bunch of comments—removing this reveals the simple one-line functional component: import * as React from 'react'; import { PlasmicTodoApp, DefaultTodoAppProps } from './plasmic/todo_mvc/PlasmicTodoApp'; interface TodoAppProps extends DefaultTodoAppProps { children?: never; } function TodoApp(props: TodoAppProps) { return <PlasmicTodoApp {...props} />; } export default TodoApp; Before continuing, we’ll need to introduce some Plasmic concepts. Variants A Plasmic component can have variants defined on it. 
Each variant specifies a different way to render a component. For example, a Button component might have a “primary” variant that displays a blue background instead of the usual gray, or a “small” variant that uses smaller font size and tighter spacing. Variants are organized into groups. In the Button component, we may have a group role that includes primary and secondary variants, or a size group that includes small and large variants. Often the groups are single-choice—you only allow one variant per group to be active (can’t have a button be both small and large). Lastly, variant groups are optional, so the Button can have size be either small , large, or undefined (normal size). Above, the TodoApp has a state variant group with just a single option, empty, to show the app in the empty state. Note: all names, including variant name strings, are transformed to camelCase, in order to adhere to Javascript convention. Overrides This is the place where you pass props to individual elements that make up the component. For instance, in TodoApp, the top title logo is a text element called app title. If you wanted to, you can rename the app and make the logo respond to a click like so: function TodoApp(props: TodoAppProps) { return ( <PlasmicTodoApp {...props} appTitle={{ children: 'toodoo', onClick: () => alert() }} /> ); } Beyond passing props, you can also wrap the element in something else or even completely replace or remove the element. We give the developer complete flexibility in changing what they need to in the rendered output. Args A Plasmic component renderer can also take in “Args”, which are ways designers have created to customize the component. Most often these are slots; think children prop in React. An example is a Card component, with slots for header and children; the Card component renders the styling and elements of the shell, but the actual header and body content of the Card is specified by whoever is instantiating the Card component. 
In TodoMVC, notice how the Task component exposes a slot children. So passing in a children arg is how consumers pass it content to render as the text of the Task. If you wanted to, you could hard-code the text like so: interface TaskProps extends DefaultTaskProps {} function Task(props: TaskProps) { return <PlasmicTask {...props}>My task name</PlasmicTask>; } You could also achieve the same effect by directly using overrides to the replace children prop on the specific element in the Task where the slot is rendered, but specifying the component-designated slot is a bit higher-level and can shield you from if for instance the slot gets moved to a different element in the component. Attaching real logic Now let’s start wiring up our real event handlers and data to the elements in our components. To determine what are the elements in each component and what names to reference them by, you’ll want to keep the project open in Plasmic Studio. Rendering a list Let’s begin by powering the list with a real data model. We’ll use plain JS objects and built-in React context for state management. Create src/model.ts to define your data model: export interface Entry { id: number; done: boolean; descrip: string; } let nextId = 0; export function createEntry(descrip: string): Entry { return { id: ++nextId, done: false, descrip }; } export type ShowFilter = 'all' | 'completed' | 'active'; Now update TodoApp to show a list of two Entries that live in React state. From looking inside of the Plasmic project, notice that the list is inside an element tasksContainer. So let’s override its children to reflect this list using the Task component by passing down Entries. 
import * as React from 'react'; import { PlasmicTodoApp, DefaultTodoAppProps } from './plasmic/todo_mvc/PlasmicTodoApp'; import { useState } from 'react'; import { createEntry, Entry } from '../model'; import Task from './Task'; interface TodoAppProps extends DefaultTodoAppProps { children?: never; } function TodoApp(props: TodoAppProps) { const [entries, setEntries] = useState<Entry[]>([createEntry('Hello world'), createEntry('Goodbye world')]); return ( <PlasmicTodoApp {...props} tasksContainer={{ children: entries.map((entry) => <Task entry={entry} />) }} /> ); } export default TodoApp; You should see a type error because Task doesn’t yet take an entry prop. Let’s update Task to take this and actually reflect the Entry passed in: ... import * as React from "react"; import { PlasmicTask, DefaultTaskProps } from "./plasmic/todo_mvc/PlasmicTask"; import { Entry } from "../model"; interface TaskProps extends DefaultTaskProps { entry: Entry; } function Task({ entry, ...rest }: TaskProps) { return ( <PlasmicTask {...rest} state={entry.done ? "checked" : undefined}> {entry.descrip} </PlasmicTask> ); } export default Task; Notice in particular that: • We are specifying the state variant. • We are passing in the description to the children slot. • We are making sure to exclude the entry prop from the rest of the DefaultTaskProps that get forwarded to PlasmicTask. You should now see your two real (useState-managed) tasks in the browser! Cleaning up the props interface Notice we just added an Entry prop to Task, but we still have some unused props left over, state and children. These control how a Task is rendered, but they are not what you’d want in your real Task React component’s interface — instead, you’d rather have your Task React component take in an Entry data object, and derive the state and children values from that Entry object. 
This is often the case in general — there are props exposed by the Plasmic presentational component that allow you to control how it’s rendered, but they don’t make sense as props in your actual React component. Instead, you want your actual React component to take in higher-level data objects, and derive the values you want to pass into these renderer knobs inside your React component, as you did above for the Task component. So, let’s go ahead and just delete these knobs from TaskProps. Note that you always need to preserve and forward the className prop, since that’s how Plasmic communicates instance-specific layout styles to the component. interface TaskProps { // className prop is required for positioning instances of // this Component className?: string; entry: Entry; } Finishing the Task component Finish the rest of the behavior of the Task component. It’s the most interactive component in the app. We’ll want to respond to clicks on the checkbox. However, there’s no way to override that element currently, since it’s an unnamed element, and we only generate override props for named elements—in Plasmic, you’ll see it’s just called box, which is the default name for the element. Give it a name in Plasmic by double-clicking it in the left sidebar and entering checkbox. In general, you’ll want to name all the elements that you want to make interactive, add real data to, or otherwise override in any way. While you’re here, let’s also rename the box displaying the name of the task to label, since we’ll want to handle double clicks to introduce edit mode. And lastly, there’s a delete button on the right of the Task, but this is only visible in the Hover state, so switch to it first by bringing up the component panel and clicking the Hover variant. Call it deleteBtn. Here’s a video showing us performing all these renames: Re-run plasmic sync and edit the Task component. Note: from here on out, we will skip showing import lines. 
interface TaskProps {
  // className prop is required for positioning instances of this component
  className?: string;
  entry: Entry;
  onChange: (entry: Entry) => void;
  onDelete: () => void;
}

function Task({ entry, onChange, onDelete, ...rest }: TaskProps) {
  const textbox = useRef<HTMLInputElement>(null);
  const [editing, setEditing] = useState(false);
  function finish() {
    onChange({ ...entry, descrip: textbox.current!.value });
    setEditing(false);
  }
  return (
    <PlasmicTask
      {...rest}
      state={editing ? 'editing' : entry.done ? 'checked' : undefined}
      checkbox={{ onClick: () => onChange({ ...entry, done: !entry.done }) }}
      label={{ onDoubleClick: () => setEditing(true) }}
      deleteBtn={{ onClick: () => { onDelete(); } }}
      textbox={{
        ref: textbox,
        autoFocus: true,
        onBlur: () => { finish(); },
        defaultValue: entry.descrip,
        onKeyDown: (e) => {
          if (e.key === 'Enter') { finish(); }
        }
      }}
    >
      {entry.descrip}
    </PlasmicTask>
  );
}

Now wire this up back in the top-level TodoApp:

function TodoApp(props: TodoAppProps) {
  const [entries, setEntries] = useState<Entry[]>([
    createEntry('Hello world'),
    createEntry('Goodbye world'),
  ]);
  return (
    <PlasmicTodoApp
      {...props}
      tasksContainer={{
        children: entries.map((entry) => (
          <Task
            entry={entry}
            onChange={(entry) =>
              setEntries(entries.map((e) => (e.id === entry.id ? entry : e)))
            }
            onDelete={() =>
              setEntries(entries.filter((e) => e.id !== entry.id))
            }
          />
        )),
      }}
    />
  );
}

You should now be able to rename, check off, and delete tasks. Let's continue in this fashion and finish implementing the rest of the components, starting with the footer. First, try out watch mode, which will actually live-stream changes made in Plasmic Studio as emitted code!

plasmic watch

We'll want to support toggling between viewing all/completed/active tasks.
Name the parts of Footer that need interactivity, namely the toggle buttons and the clear button:
• allToggle
• completedToggle
• activeToggle
• clearBtn

This video shows renaming all these elements:

Edit the ToggleButton component to support clicks and take a selected boolean.

interface ToggleButtonProps extends DefaultToggleButtonProps {
  onClick: () => void;
  selected: boolean;
}

function ToggleButton({ onClick, selected, state, ...rest }: ToggleButtonProps) {
  return (
    <PlasmicToggleButton
      {...rest}
      state={selected ? 'selected' : undefined}
      onClick={onClick}
    />
  );
}

And implement the behavior in Footer:

interface FooterProps extends DefaultFooterProps {
  children?: never;
  showFilter: ShowFilter;
  setShowFilter: (showFilter: ShowFilter) => void;
  onClear: () => void;
}

function Footer(props: FooterProps) {
  const { showFilter, setShowFilter, onClear, ...rest } = props;
  return (
    <PlasmicFooter
      {...rest}
      allToggle={{
        selected: showFilter === 'all',
        onClick: () => { setShowFilter('all'); }
      }}
      completedToggle={{
        selected: showFilter === 'completed',
        onClick: () => { setShowFilter('completed'); }
      }}
      activeToggle={{
        selected: showFilter === 'active',
        onClick: () => { setShowFilter('active'); }
      }}
      clearBtn={{ onClick: onClear }}
    />
  );
}

Finally, wire things up in the top-level TodoApp:

interface TodoAppProps extends DefaultTodoAppProps {
  children?: never;
}

function TodoApp(props: TodoAppProps) {
  const [entries, setEntries] = useState<Entry[]>([createEntry('Hello world'), createEntry('Goodbye world')]);
  const [showFilter, setShowFilter] = useState<ShowFilter>('all');
  const shownEntries = entries.filter((e) => (showFilter === 'active' ? !e.done : showFilter === 'completed' ? e.done : true));
  return (
    <PlasmicTodoApp
      {...props}
      tasksContainer={{
        children: shownEntries.map((entry) => (
          <Task
            entry={entry}
            onChange={(entry) => setEntries(entries.map((e) => (e.id === entry.id ?
entry : e)))} onDelete={() => setEntries(entries.filter((e) => e.id !== entry.id))} /> )) }} footer={{ showFilter, setShowFilter, onClear: () => { setEntries(entries.filter((e) => !e.done)); }, count: entries.filter((e) => !e.done).length }} /> ); } Lastly, make it possible to add new Entries in the Header component. interface HeaderProps extends DefaultHeaderProps { children?: never; onAdd: (entry: Entry) => void; } function Header({ onAdd, ...rest }: HeaderProps) { const [text, setText] = useState(''); return ( <PlasmicHeader {...rest} textbox={{ value: text, onChange: (e) => setText(e.target.value), onKeyDown: (e) => { if (e.key === 'Enter') { onAdd(createEntry(text)); setText(''); } } }} /> ); } Finally, wire things up in TodoApp. return ( <PlasmicTodoApp {...props} header={{ state: entries.length === 0 ? "empty" : entries.every((e) => e.done) ? "allChecked" : undefined, onAdd: (entry) => setEntries([...entries, entry]), }} tasksContainer={{ children: shownEntries.map((entry) => ( <Task ... And with that, you now have a complete and working todo app, where every bit of the UI was created in Plasmic, without needing to write a single line of presentational code! Making design changes Now let’s try making some tweaks in the UI (beyond renaming elements) and regenerating the latest code. Let’s iterate on the design of the TodoApp component in Plasmic Studio. You can do as much or as little as you’d like, but for this exercise, just don’t change up the variants or slots. Have fun with this! Here’s one variation, where we’ve changed the layout to use a grid of cards rather than a vertical list. If you want to reproduce this look, here were the changes made in Plasmic: • Set tasks-container to wrap its contents and have row and column gaps. • Edit the Task component’s root element: fixed width and height, top-aligned children. 
• Set the box containing the checkbox and the box containing the delete button to have free-floating positioning (command-drag to move them to the bottom-right of the card). • Adjust various paddings. We’ve changed the entire layout of the app, without needing to touch anything in the code. Extra credit: cleaning up Plasmic props When you were removing props from (say) the Task component interface (such as the state variant prop), you may have wondered why the presentational component PlasmicTodoApp didn’t throw Typescript errors trying to render instances of Task with that state prop specified. We in fact make the presentational component ignore this particular common case of stripping out certain Plasmic-generated props. However, if you want to keep that prop name but change its type, or you later reintroduce a different prop of the same name, then at that point you’ll start to see type errors. To handle this, you’ll need to tell Plasmic to stop trying to pass in that particular prop in the generated presentational code and just ignore it. Back in Plasmic Studio, first select an artboard for Task. From the Component Panel’s “Props” tab, mark the prop for this variant group as “Internal,” meaning that we want to control the variant using logic internal to the component, rather than directly exposing it to consumers of the component as a prop. This means Plasmic-generated code will no longer try to instantiate your Task component with those props, and that the Task component will take care of specifying the variants/args itself via <PlasmicTask variants={...} args={...} />. Re-run plasmic sync (or let plasmic watch sync things), and the type errors should be gone. Where to from here? Congrats on making your first complete app using Plasmic! Check out the Codegen Guide to continue learning about working with Plasmic from code, or see the Plasmic Studio Guide to learn how to use Plasmic Studio itself and what makes it different from other design tools. 
Beyond TodoMVC, you can also explore more Plasmic Example Projects. Most importantly, please continue playing with Plasmic, and tell us all your questions and thoughts! Reach us on our Slack community or at [email protected]. We can’t wait to see what you create with Plasmic.
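As a final aside: one helper the snippets in this tutorial call but never show is createEntry. Its exact definition lives elsewhere in the tutorial; the Entry shape and the id scheme below are assumptions inferred from how the snippets use entry.id, entry.text, and entry.done, not Plasmic's actual code.

```typescript
// Hypothetical Entry shape, matching the fields the snippets above read.
interface Entry {
  id: number;
  text: string;
  done: boolean;
}

// Simple incrementing id; the real tutorial may generate ids differently.
let nextId = 0;

// Sketch of the createEntry helper used by the Header component's onAdd.
function createEntry(text: string): Entry {
  return { id: nextId++, text, done: false };
}

// Usage mirroring Header: pressing Enter calls onAdd(createEntry(text)).
const entries: Entry[] = [];
entries.push(createEntry("buy milk"));
```

Any unique-id scheme works here, since the app only uses id to update and filter entries.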
Spring Series Part 32: Spring AOP core source code and principles in depth

This article is organized into five parts:
1. An introduction to how AOP works
2. The classes involved in AOP
3. The proxy creation process, explained through the source code
4. The proxy invocation process, explained through the source code
5. Usage examples of some AOP proxy features

How Spring AOP works

The principle is fairly simple: Spring uses JDK dynamic proxies and CGLIB to create proxy objects. The target object is accessed through the proxy, and the proxy weaves the enhancement code in, so the net effect is that the target object is enhanced.

AOP-related classes
1. Join point (JoinPoint) classes
2. Advice classes
3. Pointcut classes
4. Advisor classes

Join point (JoinPoint) classes

The Joinpoint interface

This interface represents a generic runtime join point (in AOP terminology).

package org.aopalliance.intercept;

public interface Joinpoint {

    /**
     * Proceed to the next interceptor in the chain
     */
    Object proceed() throws Throwable;

    /**
     * Returns the object holding the static part of this join point; usually the proxied target object
     */
    Object getThis();

    /**
     * Returns the static part of this join point, generally the current Method
     * (the only current implementation is MethodInvocation, so the static part is the method itself)
     */
    AccessibleObject getStaticPart();
}

Several important sub-interfaces and implementations are shown below. (class diagram)

The Invocation interface

This interface represents an invocation in the program; an invocation is a join point and can be intercepted by an interceptor.

package org.aopalliance.intercept;

/**
 * This interface represents an invocation in the program.
 * An invocation is a join point and can be intercepted by an interceptor.
 */
public interface Invocation extends Joinpoint {

    /**
     * Gets the arguments as an array object; elements of this array can be changed to alter the arguments.
     * Typically used to obtain the arguments of the target method call.
     */
    Object[] getArguments();
}

The MethodInvocation interface

Represents the invocation of a method at a join point; the target method can be obtained during the invocation.

package org.aopalliance.intercept;

import java.lang.reflect.Method;

/**
 * A description of a method invocation, given to an interceptor upon method call.
 * A method invocation is a join point and can be intercepted by a method interceptor.
 */
public interface MethodInvocation extends Invocation {

    /**
     * Returns the method being invoked, as the current Method object.
     * Equivalent to the parent's AccessibleObject getStaticPart() method.
     */
    Method getMethod();
}

The ProxyMethodInvocation interface

Represents the invocation of a proxied method.

public interface ProxyMethodInvocation extends MethodInvocation {

    /**
     * Gets the proxy object being invoked
     */
    Object getProxy();

    /**
     * Clones this MethodInvocation
     */
    MethodInvocation invocableClone();

    /**
     * Clones this MethodInvocation, with the given arguments for the cloned invocation
     */
    MethodInvocation invocableClone(Object... arguments);

    /**
     * Sets the arguments to be used on subsequent invocations in any advice in this chain.
     */
    void setArguments(Object... arguments);

    /**
     * Adds extended user attributes that are not used by the AOP framework itself;
     * they are kept as part of the invocation object, for use by special interceptors.
     */
    void setUserAttribute(String key, @Nullable Object value);

    /**
     * Gets the user attribute for the given key
     */
    @Nullable
    Object getUserAttribute(String key);
}

In plain terms: a join point represents the process of a method call. It holds all the information about that call, such as the method being invoked, the target, the proxy object, and the interceptor chain to execute.

The above are all interfaces; in the end there are 2 implementations.

ReflectiveMethodInvocation

When the proxy object was created with a JDK dynamic proxy, calls to the target object's methods through the proxy are ultimately handled by ReflectiveMethodInvocation, which recursively invokes the method interceptor chain and finally the target method.

CglibMethodInvocation

Similar to the above: when the proxy object was created with CGLIB, calls to the target object's methods through the proxy are ultimately handled by CglibMethodInvocation, which also recursively invokes the method interceptor chain and finally the target method.

The source of these 2 classes is covered in detail later.

Advice-related classes

Advice defines the logic to be woven in. (class diagram)

The Advice interface

The root interface for advice:

package org.aopalliance.aop;

public interface Advice {
}

The BeforeAdvice interface

Before advice for methods; the interface body is empty:

package org.springframework.aop;

public interface BeforeAdvice extends Advice {
}

The Interceptor interface

This interface represents a generic interceptor:

package org.aopalliance.intercept;

public interface Interceptor extends Advice {
}

The MethodInterceptor interface

Method interceptor. All advice is eventually converted to MethodInterceptor; multiple MethodInterceptors then form a method interceptor chain.

package org.aopalliance.intercept;

@FunctionalInterface
public interface MethodInterceptor extends Interceptor {

    /**
     * Intercepts the execution of the target method; the enhancement logic can be implemented here,
     * along with explicitly invoking the target method.
     */
    Object invoke(MethodInvocation invocation) throws Throwable;
}

The AfterAdvice interface

Common marker interface for after advice:

package org.springframework.aop;

public interface AfterAdvice extends Advice {
}

The MethodBeforeAdvice interface

Advice that runs before a method executes. If you need to run some logic before the target method runs, implement this interface. In other words: to weave in enhancement logic before the target method executes, use this interface. The before method is called back before the given method is invoked.

package org.springframework.aop;

public interface MethodBeforeAdvice extends BeforeAdvice {

    /**
     * Called before the target method is invoked.
     * method: the target method to execute
     * args: the arguments of the target method
     * target: the target object
     */
    void before(Method method, Object[] args, @Nullable Object target) throws Throwable;
}

Conceptually equivalent to:

public Object invoke() {
    // call MethodBeforeAdvice#before
    // then call the target method and return its result
}

The AfterReturningAdvice interface

Advice that runs after a method executes. If you need to run some enhancement logic after the target method has executed, implement this interface.

Note one thing: this callback only fires after the target method returns normally; if the target method throws, this advice is skipped.

package org.springframework.aop;

public interface AfterReturningAdvice extends AfterAdvice {

    /**
     * Called back after the target method has executed.
     * method: the target method to execute
     *
     * args: the arguments of the target method
     * target: the target object
     */
    void afterReturning(@Nullable Object returnValue, Method method, Object[] args, @Nullable Object target) throws Throwable;
}

Conceptually equivalent to:

public Object invoke() {
    Object retVal = // call the target method
    // call AfterReturningAdvice#afterReturning
    return retVal;
}

The ThrowsAdvice interface

package org.springframework.aop;

public interface ThrowsAdvice extends AfterAdvice {
}

There are no methods on this interface, because its methods are invoked reflectively. Implementing classes must implement methods of the following form, where the first 3 parameters are optional and the last parameter is the exception type to match:

void afterThrowing([Method, args, target], ThrowableSubclass);

Some examples of valid methods:

public void afterThrowing(Exception ex)
public void afterThrowing(RemoteException)
public void afterThrowing(Method method, Object[] args, Object target, Exception ex)
public void afterThrowing(Method method, Object[] args, Object target, ServletException ex)

Advice wrappers

These are responsible for wrapping the various non-MethodInterceptor advice types as MethodInterceptor.

As mentioned earlier: all Advice in AOP is eventually converted to MethodInterceptor, assembled into a method invocation chain, and then executed.

The 3 wrapper classes:
• MethodBeforeAdviceInterceptor
• AfterReturningAdviceInterceptor
• ThrowsAdviceInterceptor

The MethodBeforeAdviceInterceptor class

This class implements MethodInterceptor and wraps a MethodBeforeAdvice (before advice) as a MethodInterceptor. Creating an instance requires a MethodBeforeAdvice parameter; the key part is the invoke method.

package org.springframework.aop.framework.adapter;

@SuppressWarnings("serial")
public class MethodBeforeAdviceInterceptor implements MethodInterceptor, BeforeAdvice, Serializable {

    private final MethodBeforeAdvice advice;

    public MethodBeforeAdviceInterceptor(MethodBeforeAdvice advice) {
        Assert.notNull(advice, "Advice must not be null");
        this.advice = advice;
    }

    @Override
    public Object invoke(MethodInvocation mi) throws Throwable {
        // invoke the before advice
        this.advice.before(mi.getMethod(), mi.getArguments(), mi.getThis());
        // continue down the invocation chain
        return mi.proceed();
    }
}

The AfterReturningAdviceInterceptor class

This class implements MethodInterceptor and wraps an AfterReturningAdvice (after-returning advice) as a MethodInterceptor. Creating an instance requires an AfterReturningAdvice parameter; the key part is the invoke method.

public class AfterReturningAdviceInterceptor implements MethodInterceptor, AfterAdvice, Serializable {

    private final AfterReturningAdvice advice;

    public AfterReturningAdviceInterceptor(AfterReturningAdvice advice) {
        Assert.notNull(advice, "Advice must not be null");
        this.advice = advice;
    }

    @Override
    public Object invoke(MethodInvocation mi) throws Throwable {
        // run the invocation chain first, obtaining the target method's result
        Object retVal = mi.proceed();
        // run the after-returning advice
        this.advice.afterReturning(retVal, mi.getMethod(), mi.getArguments(), mi.getThis());
        // return the result
        return retVal;
    }
}

The ThrowsAdviceInterceptor class

This class implements MethodInterceptor and wraps a ThrowsAdvice (throws advice) as a MethodInterceptor. Creating an instance requires an Object parameter, which is usually of type ThrowsAdvice; the key part is the invoke method.

package org.springframework.aop.framework.adapter;

public class ThrowsAdviceInterceptor implements MethodInterceptor, AfterAdvice {

    private static final String AFTER_THROWING = "afterThrowing";

    private final Object throwsAdvice;

    // cache of handler methods (exception type -> handler method); declaration omitted in the original excerpt
    private final Map<Class<?>, Method> exceptionHandlerMap = new HashMap<>();

    // create a ThrowsAdviceInterceptor
    public ThrowsAdviceInterceptor(Object throwsAdvice) {
        Assert.notNull(throwsAdvice, "Advice must not be null");
        this.throwsAdvice = throwsAdvice;
        // get all public methods declared on the throws advice
        Method[] methods = throwsAdvice.getClass().getMethods();
        // loop over the methods
        for (Method method : methods) {
            // method name is afterThrowing && parameter count is 1 or 4
            if (method.getName().equals(AFTER_THROWING) &&
                    (method.getParameterCount() == 1 || method.getParameterCount() == 4)) {
                // get the type of the method's last parameter
                Class<?> throwableParam = method.getParameterTypes()[method.getParameterCount() - 1];
                // check whether the parameter type is a Throwable
                if (Throwable.class.isAssignableFrom(throwableParam)) {
                    // cache the handler method (exception type -> handler method)
                    this.exceptionHandlerMap.put(throwableParam, method);
                }
            }
        }
        // if exceptionHandlerMap is empty, throw an exception: at least one handler method is required
        if (this.exceptionHandlerMap.isEmpty()) {
            throw new IllegalArgumentException(
                    "At least one handler method must be found in class [" + throwsAdvice.getClass() + "]");
        }
    }

    /**
     * Number of custom exception-handler methods defined on the throws advice
     */
    public int getHandlerMethodCount() {
        return this.exceptionHandlerMap.size();
    }

    @Override
    public Object invoke(MethodInvocation mi) throws Throwable {
        try {
            // run the invocation chain
            return mi.proceed();
        }
        catch (Throwable ex) {
            // find the custom handler method in the throws advice for this exception
            Method handlerMethod = getExceptionHandler(ex);
            // if a handler method was found
            if (handlerMethod != null) {
                // invoke the handler method
                invokeHandlerMethod(mi, ex, handlerMethod);
            }
            // continue re-throwing the exception
            throw ex; //@1
        }
    }

    /**
     * Gets the method in throwsAdvice that handles the exception given by the exception parameter
     */
    @Nullable
    private Method getExceptionHandler(Throwable exception) {
        // get the exception type
        Class<?> exceptionClass = exception.getClass();
        // look up the handler method for this exception type in the cache
        Method handler = this.exceptionHandlerMap.get(exceptionClass);
        // loop to find a handler method; loop condition: handler is null && exception type != Throwable
        while (handler == null && exceptionClass != Throwable.class) {
            // get the parent exception type
            exceptionClass = exceptionClass.getSuperclass();
            // look up the handler for it in the cache
            handler = this.exceptionHandlerMap.get(exceptionClass);
        }
        // return the result
        return handler;
    }

    // invoke the handler method on the throws advice reflectively
    private void invokeHandlerMethod(MethodInvocation mi, Throwable ex, Method method) throws Throwable {
        // build the method arguments
        Object[] handlerArgs;
        // with 1 parameter, the argument is the exception object
        if (method.getParameterCount() == 1) {
            handlerArgs = new Object[] {ex};
        }
        else {
            // 4 parameters (method, method arguments, target object, exception object)
            handlerArgs = new Object[] {mi.getMethod(), mi.getArguments(), mi.getThis(), ex};
        }
        try {
            // invoke the handler method on the throws advice reflectively
            method.invoke(this.throwsAdvice, handlerArgs);
        }
        catch (InvocationTargetException targetEx) {
            throw targetEx.getTargetException();
        }
    }
}

As you can see, a custom exception-handler method on a throws advice has a few characteristics:
1. The method name must be afterThrowing
2. The method must take 1 or 4 parameters, with the last parameter being Throwable or a subclass of it
3.
You can log some exception information in the handler, which is quite useful; note, however, that the exception thrown by the target method is still re-thrown afterwards (@1 above).

Source code alone is dry reading, so let's look at a couple of examples.

First, a class simulating account operations: recharge, withdrawal, and balance query; withdrawing more than the balance throws an exception.

package com.javacode2018.aop.demo4;

// simulates account operations
public class FundsService {

    // account balance
    private double balance = 1000;

    // simulate a recharge
    double recharge(String userName, double price) {
        System.out.println(String.format("%s提现%s", userName, price));
        balance += price;
        return balance;
    }

    // simulate a withdrawal
    double cashOut(String userName, double price) {
        if (balance < price) {
            throw new RuntimeException("余额不足!");
        }
        System.out.println(String.format("%s提现%s", userName, price));
        balance -= price;
        return balance;
    }

    // get the balance
    double getBalance(String userName) {
        return balance;
    }
}

Example 1: blocking illegal access with before advice

Every account operation must validate the user name; when the user name is not "路人", an illegal-access exception is thrown directly.

package com.javacode2018.aop.demo4;

import org.junit.Test;
import org.springframework.aop.MethodBeforeAdvice;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.lang.Nullable;

import java.lang.reflect.Method;

public class AopTest4 {

    @Test
    public void test1() {
        // proxy factory
        ProxyFactory proxyFactory = new ProxyFactory(new FundsService());
        // add a before advice that throws an illegal-access exception when the user name is not "路人"
        proxyFactory.addAdvice(new MethodBeforeAdvice() {
            @Override
            public void before(Method method, Object[] args, @Nullable Object target) throws Throwable {
                String userName = (String) args[0];
                // when the caller is not 路人, throw an illegal-access exception
                if (!"路人".equals(userName)) {
                    throw new RuntimeException(String.format("[%s]非法访问!", userName));
                }
            }
        });
        // create the proxy through the proxy factory
        FundsService proxy = (FundsService) proxyFactory.getProxy();
        // call methods on the proxy
        proxy.recharge("路人", 100);
        proxy.recharge("张学友", 100);
    }
}

Running this prints:

路人提现100.0
java.lang.RuntimeException: [张学友]非法访问!
	at com.javacode2018.aop.demo4.AopTest4$1.before(AopTest4.java:25)
	at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:55)

Example 2: logging exceptions with throws advice

Use a throws advice to watch the execution of all methods; when an exception is detected, notify the developers to fix the bug.

public static class SendMsgThrowsAdvice implements ThrowsAdvice {
    // note: the method name must be afterThrowing
    public void afterThrowing(Method method, Object[] args, Object target, RuntimeException e) {
        // on an exception, send a notification to the developers
        System.out.println("异常警报:");
        System.out.println(String.format("method:[%s],args:[%s]", method.toGenericString(), Arrays.stream(args).collect(Collectors.toList())));
        System.out.println(e.getMessage());
        System.out.println("请尽快修复bug!");
    }
}

@Test
public void test2() {
    // proxy factory
    ProxyFactory proxyFactory = new ProxyFactory(new FundsService());
    // add a throws advice that notifies the developers to fix the bug when an exception occurs
    proxyFactory.addAdvice(new SendMsgThrowsAdvice());
    // create the proxy through the proxy factory
    FundsService proxy = (FundsService) proxyFactory.getProxy();
    // call a method on the proxy
    proxy.cashOut("路人", 2000);
}

Running this prints:

异常警报:
method:[double com.javacode2018.aop.demo4.FundsService.cashOut(java.lang.String,double)],args:[[路人, 2000.0]]
余额不足!
请尽快修复bug!
java.lang.RuntimeException: 余额不足!
	at com.javacode2018.aop.demo4.FundsService.cashOut(FundsService.java:18)

Pointcut-related classes

Advice specifies the logic to be woven in, but which methods of which classes should receive it? That is configured through pointcuts. (class diagram)

The Pointcut interface

package org.springframework.aop;

public interface Pointcut {

    /**
     * Class filter; determines which classes should be intercepted
     */
    ClassFilter getClassFilter();

    /**
     * Method matcher; determines which methods should be intercepted
     */
    MethodMatcher getMethodMatcher();

    /**
     * A Pointcut matching all objects; its 2 filters always return true
     */
    Pointcut TRUE = TruePointcut.INSTANCE;
}

The ClassFilter interface

Class filter.

@FunctionalInterface
public interface ClassFilter {

    /**
     * Determines whether the given target class matches
     */
    boolean matches(Class<?> clazz);
}

The MethodMatcher interface

Method filter.

public interface MethodMatcher {

    /**
     * Performs a static check of whether the given method matches
     * @param method the target method
     * @param targetClass the target class
     */
    boolean matches(Method method, Class<?> targetClass);

    /**
     * Whether matching is dynamic, i.e. whether the match must be re-checked on every invocation of the target method
     */
    boolean isRuntime();

    /**
     * The dynamic matching method; compared to the first matches method it takes an extra parameter, args,
     * the arguments passed to the target method call
     */
    boolean matches(Method method, Class<?> targetClass, Object... args);

    /**
     * Matches all methods; both of its matches methods always return true
     */
    MethodMatcher TRUE = TrueMethodMatcher.INSTANCE;
}

MethodMatcher may still look confusing: why are there 2 matches methods, and what is dynamic matching?

Consider the following class:

public class UserService {
    public void work(String userName) {
        System.out.print(userName + ",开始工作了!");
    }
}

The work method represents the current user's work method, and can hold some work logic internally.

Suppose we want to enhance this class with AOP: when the method is called and the user name passed in is a fan of 路人, greet them first; for any other user, skip the greeting. The greeting code can live in a MethodBeforeAdvice. In other words, the advice only applies when the arguments meet a certain condition; when they don't, the advice has no effect. This is exactly what dynamic matching is for: the 3-parameter matches method of MethodMatcher can validate the arguments of the target method.

The overall MethodMatcher filtering process:

1. Call matches(Method method, Class<?> targetClass) to verify that the method matches.
2. Check whether isRuntime returns true; if false, the result from step 1 is final, otherwise continue.
3. Call matches(Method method, Class<?> targetClass, Object... args) to verify further; this overload takes an extra parameter and can validate the arguments passed to the target method.

From this process you can see that when isRuntime is false, only the method itself needs to be checked, and since the result of step 1 is the same for every invocation of the same method, it can be cached to improve efficiency. That is exactly what Spring does internally. When isRuntime is true, the check must run on every invocation, which is a bit slower, though the performance impact is usually negligible.

Advisors

Advice defines what to do, and a pointcut defines in which methods of which classes to do it; the two must be combined to be useful. That is the job of the advisor.

In Spring AOP, you can think of an advisor as an aspect; an aspect typically carries 2 key pieces of information:
1.
the list of target methods to be enhanced, specified by the pointcut (Pointcut)
2. the enhancement logic to apply in those methods, specified by the advice (Advice)

(class diagram)

The Advisor interface

package org.springframework.aop;

import org.aopalliance.aop.Advice;

/**
 * Base interface holding AOP advice (the action to take at a joinpoint) and a filter determining
 * the applicability of the advice (such as a pointcut). This interface is not for use by Spring users,
 * but to allow for commonality in support for different types of advice.
 */
public interface Advisor {

    /**
     * Returns the advice referenced
     */
    Advice getAdvice();
}

This interface is usually not used directly; it has 2 sub-interfaces that we normally deal with, described next.

The PointcutAdvisor interface

As the name suggests, this is related to Pointcut: it has a method for obtaining the Pointcut, and most Advisors used in AOP are of this type.

Implementing the various enhancements in target methods is basically always done through a PointcutAdvisor.

package org.springframework.aop;

/**
 * Pointcut-driven Advisor
 */
public interface PointcutAdvisor extends Advisor {

    /**
     * Gets the pointcut used by this advisor
     */
    Pointcut getPointcut();
}

The DefaultPointcutAdvisor class

The default PointcutAdvisor implementation and the most commonly used one; it works with any Pointcut and Advice type. The code is quite simple: it defines 2 properties, pointcut and advice, supplied by the caller.

The IntroductionAdvisor interface

This one is probably less familiar. What is it for?

Suppose a Java class does not implement interface A; without modifying the class, an IntroductionAdvisor can make it expose the functionality of A. In other words, it can introduce the functionality of additional interfaces into a target class, which is a rather powerful feature.

Next come the 2 main topics:

• The proxy creation process in AOP, explained through the source code
• The execution process of proxied method calls, explained through the source code

Proxy creation: source code walkthrough

Start with this snippet:

// proxy factory
ProxyFactory proxyFactory = new ProxyFactory(new FundsService());
// add a before advice that throws an illegal-access exception when the user name is not "路人"
proxyFactory.addAdvice(new MethodBeforeAdvice() {
    @Override
    public void before(Method method, Object[] args, @Nullable Object target) throws Throwable {
        String userName = (String) args[0];
        // when the caller is not 路人, throw an illegal-access exception
        if (!"路人".equals(userName)) {
            throw new RuntimeException(String.format("[%s]非法访问!", userName));
        }
    }
});
// create the proxy through the proxy factory
FundsService proxy = (FundsService) proxyFactory.getProxy();

Break the code above apart, and it becomes:

// 1. build the configuration the proxy needs (e.g. which proxying strategy, the advice list, etc.)
AdvisedSupport advisedSupport = new AdvisedSupport();
// e.g. add a before advice
advisedSupport.addAdvice(new MethodBeforeAdvice() {
    @Override
    public void before(Method method, Object[] args, @Nullable Object target) throws Throwable {
        String userName = (String) args[0];
        // when the caller is not 路人, throw an illegal-access exception
        if (!"路人".equals(userName)) {
            throw new RuntimeException(String.format("[%s]非法访问!", userName));
        }
    }
});
// set the target object to be proxied
FundsService target = new FundsService();
advisedSupport.setTarget(target);

// 2. obtain an AopProxy object from the configuration; AopProxy is responsible for creating the final proxy object.
// The AopProxy interface has 2 implementations (JDK dynamic proxy, CGLIB proxy).
// Which one is ultimately used depends on the parameters specified in AdvisedSupport.
// AopProxy creation uses the simple factory pattern.
AopProxyFactory aopProxyFactory = new DefaultAopProxyFactory();
// obtain the AopProxy object from the AopProxy factory
AopProxy aopProxy = aopProxyFactory.createAopProxy(advisedSupport);

// 3. create the proxy object through AopProxy
Object proxy = aopProxy.getProxy();

From the above you can see that creating a proxy takes 3 steps.

The 3 steps of proxy creation

1. Build the configuration the proxy needs
2. Obtain an AopProxy object from that configuration
3. Create the proxy object through the AopProxy

Building the proxy configuration

Building the configuration is mainly done through the AdvisedSupport class. Here is the class diagram; the classes are introduced one by one below. (class diagram)

The TargetClassAware interface

A fairly simple interface defining one method that returns the type of the target object.

The target object is the object being proxied, e.g. the fundsService object above.

package org.springframework.aop;

public interface TargetClassAware {

    @Nullable
    Class<?> getTargetClass();
}

The ProxyConfig class

This class is important: the proxy configuration class, holding the various parameters needed when creating a proxy.

package org.springframework.aop.framework;

/**
 * Convenience superclass for configuration used in creating proxies,
 * to ensure that all proxy creators have consistent properties.
 */
public class ProxyConfig implements Serializable {

    // whether to proxy the target class directly, instead of producing the proxy through its interfaces
    private boolean proxyTargetClass = false;

    // whether to optimize the proxy. Enabling optimization usually means changes to the advice will not take
    // effect after the proxy object has been created, so the default is false.
    // If exposeProxy is set to true, optimize is ignored even when true.
    private boolean optimize = false;

    // whether proxies created from this configuration should be prevented from being cast to Advised;
    // default is false, meaning the proxy object can be cast to Advised.
    boolean opaque = false;

    // whether the proxy should be exposed by the AOP framework as a ThreadLocal through AopContext.
    // Useful when a proxied method needs to call another proxied method on the same object.
    // Default is false, to avoid unnecessary interception.
    boolean exposeProxy = false;

    // whether this configuration is frozen; once frozen, the advice configuration can no longer be changed.
    // Useful when we don't want callers to be able to manipulate the proxy after it has been cast to Advised.
    private boolean frozen = false;

    // property getters/setters omitted
}

The Advised interface

This interface defines the various methods for manipulating an AOP proxy configuration (such as setting the target object to proxy, adding advice, adding advisors, and so on).

All proxy objects created by Spring AOP implement this interface by default.

public interface Advised extends TargetClassAware {

    /**
     * Whether the configuration is frozen; once frozen, the advice in the created proxy cannot be changed
     */
    boolean isFrozen();

    /**
     * Whether the proxy is created for the target class directly instead of for its interfaces;
     * in plain terms: true if the proxy is created with CGLIB, false otherwise
     */
    boolean isProxyTargetClass();

    /**
     * Gets the list of interfaces to be proxied from the configuration
     */
    Class<?>[] getProxiedInterfaces();

    /**
     * Whether the given interface is proxied
     */
    boolean isInterfaceProxied(Class<?> intf);

    /**
     *
设置被代理的目标源,创建代理的时候,通常需要传入被代理的对象,最终被代理的对象会被包装为TargetSource类型的 */ void setTargetSource(TargetSource targetSource); /** * 返回被代理的目标源 */ TargetSource getTargetSource(); /** * 设置是否需要将代理暴露在ThreadLocal中,这样可以在线程中获取到被代理对象,这个配置挺有用的,稍后会举例说明使用场景 */ void setExposeProxy(boolean exposeProxy); /** * 返回exposeProxy */ boolean isExposeProxy(); /** * 设置此代理配置是否经过预筛选,以便它只包含适用的顾问(匹配此代理的目标类)。 * 默认设置是“假”。如果已经对advisor进行了预先筛选,则将其设置为“true” * 这意味着在为代理调用构建实际的advisor链时可以跳过ClassFilter检查。 */ void setPreFiltered(boolean preFiltered); /** * 返回preFiltered */ boolean isPreFiltered(); /** * 返回代理配置中干掉所有Advisor列表 */ Advisor[] getAdvisors(); /** * 添加一个Advisor */ void addAdvisor(Advisor advisor) throws AopConfigException; /** * 指定的位置添加一个Advisor */ void addAdvisor(int pos, Advisor advisor) throws AopConfigException; /** * 移除一个Advisor */ boolean removeAdvisor(Advisor advisor); /** * 移除指定位置的Advisor */ void removeAdvisor(int index) throws AopConfigException; /** * 查找某个Advisor的位置 */ int indexOf(Advisor advisor); /** * 对advisor列表中的a替换为b */ boolean replaceAdvisor(Advisor a, Advisor b) throws AopConfigException; /** * 添加一个通知 */ void addAdvice(Advice advice) throws AopConfigException; /** * 向指定的位置添加一个通知 */ void addAdvice(int pos, Advice advice) throws AopConfigException; /** * 移除一个通知 */ boolean removeAdvice(Advice advice); /** * 获取通知的位置 */ int indexOf(Advice advice); /** * 将代理配置转换为字符串,这个方便排错和调试使用的 */ String toProxyConfigString(); } AdvisedSupport类 这个类是个重点,AOP代理配置管理器的基类,继承ProxyConfig并且实现了Advised接口,创建aop代理之前,所有需要配置的信息都是通过这个类来操作的。 比如:设置是否为目标类创建代理、设置目标对象、配置通知列表等等。 package org.springframework.aop.framework; public class AdvisedSupport extends ProxyConfig implements Advised { public static final TargetSource EMPTY_TARGET_SOURCE = EmptyTargetSource.INSTANCE; TargetSource targetSource = EMPTY_TARGET_SOURCE; /** 建议器是否已经针对特定的目标类进行筛选 */ private boolean preFiltered = false; /** 调用链工厂,用来获取目标方法的调用链 */ AdvisorChainFactory advisorChainFactory = new DefaultAdvisorChainFactory(); /** 方法调用链缓存:以方法为键,以顾问链表为值的缓存。 */ private transient 
Map<MethodCacheKey, List<Object>> methodCache; //代理对象需要实现的接口列表。保存在列表中以保持注册的顺序,以创建具有指定接口顺序的JDK代理。 private List<Class<?>> interfaces = new ArrayList<>(); //配置的顾问列表。所有添加的Advise对象都会被包装为Advisor对象 private List<Advisor> advisors = new ArrayList<>(); //数组更新了对advisor列表的更改,这更容易在内部操作。 private Advisor[] advisorArray = new Advisor[0]; //无参构造方法 public AdvisedSupport() { this.methodCache = new ConcurrentHashMap<>(32); } //有参构造方法,参数为:代理需要实现的接口列表 public AdvisedSupport(Class<?>... interfaces) { this(); setInterfaces(interfaces); } //设置需要被代理的目标对象,目标对象会被包装为TargetSource格式的对象 public void setTarget(Object target) { setTargetSource(new SingletonTargetSource(target)); } //设置被代理的目标源 @Override public void setTargetSource(@Nullable TargetSource targetSource) { this.targetSource = (targetSource != null ? targetSource : EMPTY_TARGET_SOURCE); } //获取被代理的目标源 @Override public TargetSource getTargetSource() { return this.targetSource; } //设置被代理的目标类 public void setTargetClass(@Nullable Class<?> targetClass) { this.targetSource = EmptyTargetSource.forClass(targetClass); } //获取被代理的目标类型 @Override @Nullable public Class<?> getTargetClass() { return this.targetSource.getTargetClass(); } /** * 设置此代理配置是否经过预筛选,这个什么意思呢:通过目标方法调用代理的时候, * 需要通过匹配的方式获取这个方法上的调用链列表,查找过程需要2个步骤: * 第一步:类是否匹配,第二步:方法是否匹配,当这个属性为true的时候,会直接跳过第一步,这个懂了不 */ @Override public void setPreFiltered(boolean preFiltered) { this.preFiltered = preFiltered; } // 返回preFiltered @Override public boolean isPreFiltered() { return this.preFiltered; } /** * 设置顾问链工厂,当调用目标方法的时候,需要获取这个方法上匹配的Advisor列表, * 获取目标方法上匹配的Advisor列表的功能就是AdvisorChainFactory来负责的 */ public void setAdvisorChainFactory(AdvisorChainFactory advisorChainFactory) { Assert.notNull(advisorChainFactory, "AdvisorChainFactory must not be null"); this.advisorChainFactory = advisorChainFactory; } // 返回顾问链工厂对象 public AdvisorChainFactory getAdvisorChainFactory() { return this.advisorChainFactory; } //设置代理对象需要实现的接口 public void setInterfaces(Class<?>... 
interfaces) { Assert.notNull(interfaces, "Interfaces must not be null"); this.interfaces.clear(); for (Class<?> ifc : interfaces) { addInterface(ifc); } } //为代理对象添加需要实现的接口 public void addInterface(Class<?> intf) { Assert.notNull(intf, "Interface must not be null"); if (!intf.isInterface()) { throw new IllegalArgumentException("[" + intf.getName() + "] is not an interface"); } if (!this.interfaces.contains(intf)) { this.interfaces.add(intf); adviceChanged(); } } //移除代理对象需要实现的接口 public boolean removeInterface(Class<?> intf) { return this.interfaces.remove(intf); } //获取代理对象需要实现的接口列表 @Override public Class<?>[] getProxiedInterfaces() { return ClassUtils.toClassArray(this.interfaces); } //判断代理对象是否需要实现某个接口 @Override public boolean isInterfaceProxied(Class<?> intf) { for (Class<?> proxyIntf : this.interfaces) { if (intf.isAssignableFrom(proxyIntf)) { return true; } } return false; } //获取配置的所有顾问列表 @Override public final Advisor[] getAdvisors() { return this.advisorArray; } //添加顾问 @Override public void addAdvisor(Advisor advisor) { int pos = this.advisors.size(); addAdvisor(pos, advisor); } //指定的位置添加顾问 @Override public void addAdvisor(int pos, Advisor advisor) throws AopConfigException { //这块先忽略,以后讲解 if (advisor instanceof IntroductionAdvisor) { validateIntroductionAdvisor((IntroductionAdvisor) advisor); } addAdvisorInternal(pos, advisor); } //移除指定的顾问 @Override public boolean removeAdvisor(Advisor advisor) { int index = indexOf(advisor); if (index == -1) { return false; } else { removeAdvisor(index); return true; } } //移除指定位置的顾问 @Override public void removeAdvisor(int index) throws AopConfigException { //当配置如果是冻结状态,是不允许对顾问进行修改的,否则会抛出异常 if (isFrozen()) { throw new AopConfigException("Cannot remove Advisor: Configuration is frozen."); } if (index < 0 || index > this.advisors.size() - 1) { throw new AopConfigException("Advisor index " + index + " is out of bounds: " + "This configuration only has " + this.advisors.size() + " advisors."); } //移除advisors中的顾问 Advisor advisor = 
this.advisors.remove(index); if (advisor instanceof IntroductionAdvisor) { IntroductionAdvisor ia = (IntroductionAdvisor) advisor; // We need to remove introduction interfaces. for (Class<?> ifc : ia.getInterfaces()) { removeInterface(ifc); } } //更新advisorArray updateAdvisorArray(); //通知已改变,内部会清除方法调用链缓存信息。 adviceChanged(); } @Override public int indexOf(Advisor advisor) { Assert.notNull(advisor, "Advisor must not be null"); return this.advisors.indexOf(advisor); } @Override public boolean replaceAdvisor(Advisor a, Advisor b) throws AopConfigException { Assert.notNull(a, "Advisor a must not be null"); Assert.notNull(b, "Advisor b must not be null"); int index = indexOf(a); if (index == -1) { return false; } removeAdvisor(index); addAdvisor(index, b); return true; } //批量添加顾问 public void addAdvisors(Advisor... advisors) { addAdvisors(Arrays.asList(advisors)); } //批量添加顾问 public void addAdvisors(Collection<Advisor> advisors) { //配置如果是冻结状态,会抛出异常 if (isFrozen()) { throw new AopConfigException("Cannot add advisor: Configuration is frozen."); } if (!CollectionUtils.isEmpty(advisors)) { for (Advisor advisor : advisors) { if (advisor instanceof IntroductionAdvisor) { validateIntroductionAdvisor((IntroductionAdvisor) advisor); } Assert.notNull(advisor, "Advisor must not be null"); this.advisors.add(advisor); } updateAdvisorArray(); adviceChanged(); } } //此方法先忽略,用来为目标类引入接口的 private void validateIntroductionAdvisor(IntroductionAdvisor advisor) { advisor.validateInterfaces(); // If the advisor passed validation, we can make the change. 
Class<?>[] ifcs = advisor.getInterfaces(); for (Class<?> ifc : ifcs) { addInterface(ifc); } } //指定的位置添加顾问 private void addAdvisorInternal(int pos, Advisor advisor) throws AopConfigException { Assert.notNull(advisor, "Advisor must not be null"); if (isFrozen()) { throw new AopConfigException("Cannot add advisor: Configuration is frozen."); } if (pos > this.advisors.size()) { throw new IllegalArgumentException( "Illegal position " + pos + " in advisor list with size " + this.advisors.size()); } this.advisors.add(pos, advisor); updateAdvisorArray(); adviceChanged(); } //将advisorArray和advisors保持一致 protected final void updateAdvisorArray() { this.advisorArray = this.advisors.toArray(new Advisor[0]); } //获取顾问列表 protected final List<Advisor> getAdvisorsInternal() { return this.advisors; } //添加通知 @Override public void addAdvice(Advice advice) throws AopConfigException { int pos = this.advisors.size(); addAdvice(pos, advice); } //指定的位置添加通知 @Override public void addAdvice(int pos, Advice advice) throws AopConfigException { //此处会将advice通知包装为DefaultPointcutAdvisor类型的Advisor addAdvisor(pos, new DefaultPointcutAdvisor(advice)); } //移除通知 @Override public boolean removeAdvice(Advice advice) throws AopConfigException { int index = indexOf(advice); if (index == -1) { return false; } else { removeAdvisor(index); return true; } } //获取通知的位置 @Override public int indexOf(Advice advice) { Assert.notNull(advice, "Advice must not be null"); for (int i = 0; i < this.advisors.size(); i++) { Advisor advisor = this.advisors.get(i); if (advisor.getAdvice() == advice) { return i; } } return -1; } //是否包含某个通知 public boolean adviceIncluded(@Nullable Advice advice) { if (advice != null) { for (Advisor advisor : this.advisors) { if (advisor.getAdvice() == advice) { return true; } } } return false; } //获取当前配置中某种类型通知的数量 public int countAdvicesOfType(@Nullable Class<?> adviceClass) { int count = 0; if (adviceClass != null) { for (Advisor advisor : this.advisors) { if 
(adviceClass.isInstance(advisor.getAdvice())) {
                    count++;
                }
            }
        }
        return count;
    }

    // based on the current configuration, gets the method invocation chain for the given method
    // (a list of org.aopalliance.intercept.MethodInterceptor objects)
    public List<Object> getInterceptorsAndDynamicInterceptionAdvice(Method method, @Nullable Class<?> targetClass) {
        MethodCacheKey cacheKey = new MethodCacheKey(method);
        // try the cache first
        List<Object> cached = this.methodCache.get(cacheKey);
        // on a cache miss, obtain the chain from advisorChainFactory
        if (cached == null) {
            cached = this.advisorChainFactory.getInterceptorsAndDynamicInterceptionAdvice(
                    this, method, targetClass);
            this.methodCache.put(cacheKey, cached);
        }
        return cached;
    }

    // called when the advice changes; clears the current method invocation chain cache
    protected void adviceChanged() {
        this.methodCache.clear();
    }

    // copies the configuration from other into this object
    protected void copyConfigurationFrom(AdvisedSupport other) {
        copyConfigurationFrom(other, other.targetSource, new ArrayList<>(other.advisors));
    }

    // copies the configuration from other into this object
    protected void copyConfigurationFrom(AdvisedSupport other, TargetSource targetSource, List<Advisor> advisors) {
        copyFrom(other);
        this.targetSource = targetSource;
        this.advisorChainFactory = other.advisorChainFactory;
        this.interfaces = new ArrayList<>(other.interfaces);
        for (Advisor advisor : advisors) {
            if (advisor instanceof IntroductionAdvisor) {
                validateIntroductionAdvisor((IntroductionAdvisor) advisor);
            }
            Assert.notNull(advisor, "Advisor must not be null");
            this.advisors.add(advisor);
        }
        updateAdvisorArray();
        adviceChanged();
    }

    // builds a configuration-only copy of this AdvisedSupport, replacing the TargetSource
    AdvisedSupport getConfigurationOnlyCopy() {
        AdvisedSupport copy = new AdvisedSupport();
        copy.copyFrom(this);
        copy.targetSource = EmptyTargetSource.forClass(getTargetClass(), getTargetSource().isStatic());
        copy.advisorChainFactory = this.advisorChainFactory;
        copy.interfaces = this.interfaces;
        copy.advisors = this.advisors;
        copy.updateAdvisorArray();
        return copy;
    }
}

A few conclusions about the classes above:

1. Advice objects added to the configuration are ultimately converted to DefaultPointcutAdvisor objects. In that case no pointcut is specified for the DefaultPointcutAdvisor; as you can see in its source, the pointcut has a default value that matches any method of any class.
2. When the configuration is frozen, i.e. frozen is true, the Advisor list in the configuration must not be modified.
3. Regarding the getInterceptorsAndDynamicInterceptionAdvice method above: when the target method is invoked through the proxy, the matching method interceptor list is obtained from the current configuration by method and target class, and obtaining it is the job of AdvisorChainFactory. getInterceptorsAndDynamicInterceptionAdvice runs when a proxied method is called, and is covered in detail later in the execution phase.
4. A target method and its associated method interceptor list are cached in methodCache; when the advisor list changes, the methodCache cache is cleared.

With the configuration phase done, next comes the phase of obtaining the AopProxy.

Obtaining the AopProxy from the configuration

This phase corresponds to the code:

// AopProxy creation uses the simple factory pattern
AopProxyFactory aopProxyFactory = new DefaultAopProxyFactory();
// obtain the AopProxy object from the AopProxy factory
AopProxy aopProxy = aopProxyFactory.createAopProxy(advisedSupport);

Based on the configuration in AdvisedSupport, this phase decides whether the proxy object is obtained via CGLIB or via a JDK dynamic proxy. First, a look at the classes involved. (class diagram)

The AopProxy interface

This interface defines a method for creating the final proxy object and has 2 implementations:
• CglibAopProxy: creates the proxy object with CGLIB
• JdkDynamicAopProxy: creates the proxy object with a JDK dynamic proxy

package org.springframework.aop.framework;

public interface AopProxy {

    /**
     * Creates a new proxy object
     */
    Object getProxy();

    /**
     * Creates a new proxy object
     */
    Object getProxy(@Nullable ClassLoader classLoader);
}

The 2 implementations of AopProxy implement the 2 methods defined above and are covered in detail in the proxy creation phase below.

The AopProxyFactory interface

As the name says: a factory responsible for creating AopProxy, using the simple factory pattern.

It defines one method that obtains the AopProxy object from the AOP configuration (AdvisedSupport), mainly deciding between CGLIB and a JDK dynamic proxy.

package org.springframework.aop.framework;

public interface AopProxyFactory {

    /**
     * Obtains an AopProxy object from the AOP configuration
     */
    AopProxy createAopProxy(AdvisedSupport config) throws AopConfigException;
}

The DefaultAopProxyFactory class

The default implementation of AopProxyFactory; the code is fairly simple, so let's look at it closely.

package org.springframework.aop.framework;

/**
 * Default AopProxyFactory implementation, creating either a CGLIB proxy or a JDK dynamic proxy.
 * A CGLIB proxy is created for a given AdvisedSupport instance if any of the following is true:
 *   optimize = true
 *   proxyTargetClass = true
 *   no proxy interfaces have been specified
 * In general, specify proxyTargetClass to enforce a CGLIB proxy, or specify one or more interfaces to use a JDK dynamic proxy.
 */
public class DefaultAopProxyFactory implements AopProxyFactory, Serializable {

    @Override
    public AopProxy createAopProxy(AdvisedSupport config) throws AopConfigException {
        // optimize == true || proxyTargetClass == true || no interfaces to proxy in the configuration
        if (config.isOptimize() || config.isProxyTargetClass() || hasNoUserSuppliedProxyInterfaces(config)) {
            // get the class to be proxied
            Class<?> targetClass =
config.getTargetClass();
            if (targetClass == null) {
                throw new AopConfigException("TargetSource cannot determine target class: " +
                        "Either an interface or a target is required for proxy creation.");
            }
            // if the class to proxy is an interface, or is itself a class created by a JDK dynamic proxy,
            // use JdkDynamicAopProxy; otherwise use a CGLIB proxy
            if (targetClass.isInterface() || Proxy.isProxyClass(targetClass)) {
                // use a JDK dynamic proxy
                return new JdkDynamicAopProxy(config);
            }
            // use a CGLIB proxy
            return new ObjenesisCglibAopProxy(config);
        }
        else {
            // use a JDK dynamic proxy
            return new JdkDynamicAopProxy(config);
        }
    }

    /**
     * Determines whether the supplied AdvisedSupport has only specified the SpringProxy interface
     * (or no proxy interfaces at all)
     */
    private boolean hasNoUserSuppliedProxyInterfaces(AdvisedSupport config) {
        Class<?>[] ifcs = config.getProxiedInterfaces();
        return (ifcs.length == 0 || (ifcs.length == 1 && SpringProxy.class.isAssignableFrom(ifcs[0])));
    }
}

The proxy creation phase

At this point we have obtained the AopProxy object from the AOP configuration, so we can call AopProxy.getProxy to get the proxy object.

AopProxyFactory.createAopProxy returns one of 2 results:
• JdkDynamicAopProxy: creates the proxy with a JDK dynamic proxy
• ObjenesisCglibAopProxy: creates the proxy with CGLIB

Next, the source of these 2 classes in detail.

The JdkDynamicAopProxy class

Purpose: creates the proxy object with a JDK dynamic proxy and handles every method call on the proxy object.

final class JdkDynamicAopProxy implements AopProxy, InvocationHandler, Serializable {

    // the proxy configuration
    private final AdvisedSupport advised;

    // whether the interfaces to proxy define an equals method
    private boolean equalsDefined;

    // whether the interfaces to proxy define a hashCode method
    private boolean hashCodeDefined;

    // creates an instance from an AdvisedSupport
    public JdkDynamicAopProxy(AdvisedSupport config) throws AopConfigException {
        Assert.notNull(config, "AdvisedSupport must not be null");
        if (config.getAdvisors().length == 0 && config.getTargetSource() == AdvisedSupport.EMPTY_TARGET_SOURCE) {
            throw new AopConfigException("No advisors and no TargetSource specified");
        }
        this.advised = config;
    }

    // creates a proxy object
    @Override
    public Object getProxy() {
        return getProxy(ClassUtils.getDefaultClassLoader());
    }

    // creates a proxy object
    @Override
    public Object getProxy(@Nullable ClassLoader classLoader) {
        if (logger.isTraceEnabled()) {
            logger.trace("Creating JDK dynamic proxy: " + this.advised.getTargetSource());
        }
        // @0: from the advised configuration, determine the full list of interfaces the proxy must implement
        Class<?>[] proxiedInterfaces = AopProxyUtils.completeProxiedInterfaces(this.advised, true);
        // check whether the interfaces to proxy define equals or hashCode
        findDefinedEqualsAndHashCodeMethods(proxiedInterfaces);
        /**
         * This should look familiar: create the proxy object with a JDK dynamic proxy.
         * Note that the last argument is this, the current class, which is an InvocationHandler;
         * any method call on the proxy object is handled by this class's invoke method.
         */
        return Proxy.newProxyInstance(classLoader, proxiedInterfaces, this);
    }

    // checks whether the interfaces to proxy define these methods (equals, hashCode)
    private void findDefinedEqualsAndHashCodeMethods(Class<?>[] proxiedInterfaces) {
        for (Class<?> proxiedInterface : proxiedInterfaces) {
            // get the methods declared on the interface
            Method[] methods = proxiedInterface.getDeclaredMethods();
            for (Method method : methods) {
                // is it the equals method?
                if (AopUtils.isEqualsMethod(method)) {
                    this.equalsDefined = true;
                }
                // is it the hashCode method?
                if (AopUtils.isHashCodeMethod(method)) {
                    this.hashCodeDefined = true;
                }
                // if both have been found, stop searching
                if (this.equalsDefined && this.hashCodeDefined) {
                    return;
                }
            }
        }
    }

    // this method is the key one: any method call on the proxy object ends up handled by this invoke method
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // the old proxy object
        Object oldProxy = null;
        // marks whether the proxy object needs to be exposed in a ThreadLocal
        boolean setProxyContext = false;
        // get the target source
        TargetSource targetSource = this.advised.targetSource;
        // the target object
        Object target = null;
        // now enter the handling of the proxied method call
        try {
            // handle the equals method: the proxied interfaces define no equals method && the current call is equals
            if (!this.equalsDefined && AopUtils.isEqualsMethod(method)) {
                // call the equals method of this class directly
                return equals(args[0]);
            }
            // handle the hashCode method: the proxied interfaces define no hashCode method && the current call is hashCode
            else if (!this.hashCodeDefined && AopUtils.isHashCodeMethod(method)) {
                // call the hashCode method of this class directly
                return hashCode();
            }
            /**
             * The method comes from the DecoratingProxy interface, which defines a method
             * for obtaining the original proxied target class; mainly used with nested proxies
             * (a nested proxy is a proxy object that is itself proxied again as a target).
             */
            else if (method.getDeclaringClass() == DecoratingProxy.class) {
                // call the AopProxyUtils utility method, which loops to find the original proxied target class
                return AopProxyUtils.ultimateTargetClass(this.advised);
            }
            // the method comes from the Advised interface; proxy objects implement Advised by default,
            // so advice can be added to the proxy dynamically through the proxy object itself
            else if (!this.advised.opaque &&
method.getDeclaringClass().isInterface() && method.getDeclaringClass().isAssignableFrom(Advised.class)) { // this.advised是AdvisedSupport类型的,AdvisedSupport实现了Advised接口中的所有方法 // 所以最终通过通过反射方式交给this.advised来响应当前调用 return AopUtils.invokeJoinpointUsingReflection(this.advised, method, args); } // 用来记录方法返回值 Object retVal; //是否需要在threadLocal中暴露代理对象 if (this.advised.exposeProxy) { // 将代理对象暴露在上线文中,即暴露在threadLocal中,那么在当前线程中可以通过静态方法 // AopContext#currentProxy获取当前被暴露的代理对象,这个是非常有用的,稍后用案例来讲解,瞬间就会明白 oldProxy = AopContext.setCurrentProxy(proxy); // 将setProxyContext标记为true setProxyContext = true; } // 通过目标源获取目标对象 target = targetSource.getTarget(); // 获取目标对象类型 Class<?> targetClass = (target != null ? target.getClass() : null); // @1:获取当前方法的拦截器链 List<Object> chain = this.advised.getInterceptorsAndDynamicInterceptionAdvice(method, targetClass); // 拦截器链为空的情况下,表示这个方法上面没有找到任何增强的通知,那么会直接通过反射直接调用目标对象 if (chain.isEmpty()) { // 获取方法请求的参数(有时候方法中有可变参数,所谓可变参数就是带有省略号(...)这种格式的参数,传入的参数类型和这种类型不一样的时候,会通过下面的adaptArgumentsIfNecessary方法进行转换) Object[] argsToUse = AopProxyUtils.adaptArgumentsIfNecessary(method, args); //通过反射直接调用目标方法 retVal = AopUtils.invokeJoinpointUsingReflection(target, method, argsToUse); } else { // 创建一个方法调用器(包含了代理对象、目标对象、调用的方法、参数、目标类型、方法拦截器链) MethodInvocation invocation = new ReflectiveMethodInvocation(proxy, target, method, args, targetClass, chain); // @3:通过拦截器链一个个调用最终到目标方法的调用 retVal = invocation.proceed(); } // 下面会根据方法返回值的类型,做一些处理,比如方法返回的类型为自己,则最后需要将返回值置为代理对象 Class<?> returnType = method.getReturnType(); if (retVal != null && retVal == target && returnType != Object.class && returnType.isInstance(proxy) && !RawTargetAccess.class.isAssignableFrom(method.getDeclaringClass())) { // 将返回值设置为代理对象 retVal = proxy; } // 方法的返回值类型returnType为原始类型(即int、byte、double等这种类型的) && retVal为null, // 此时如果将null转换为原始类型会报错,所以此处直接抛出异常 else if (retVal == null && returnType != Void.TYPE && returnType.isPrimitive()) { throw new AopInvocationException( "Null return value from advice does not match primitive 
return type for: " + method); } // 返回方法调用结果 return retVal; } finally { // 目标对象不为null && 目标源不是静态的 //所谓静态的,你可以理解为是否是单例的 // isStatic为true,表示目标对象是单例的,同一个代理对象中所有方法共享一个目标对象 // isStatic为false的时候,通常每次调用代理的方法,target对象是不一样的,所以方法调用完之后需要进行释放,可能有些资源清理,连接的关闭等操作 if (target != null && !targetSource.isStatic()) { // 必须释放来自TargetSource中的目标对象 targetSource.releaseTarget(target); } // setProxyContext为true if (setProxyContext) { // 需要将旧的代理再放回到上下文中 AopContext.setCurrentProxy(oldProxy); } } } } 关于上面代码,有几点细说一下 @0:completeProxiedInterfaces方法 @0处的代码如下,根据代理配置信息,获取需要被代理的所有接口 Class<?>[] proxiedInterfaces = AopProxyUtils.completeProxiedInterfaces(this.advised, true); AopProxyUtils.completeProxiedInterfaces方法源码如下 static Class<?>[] completeProxiedInterfaces(AdvisedSupport advised, boolean decoratingProxy) { //获取代理配置中需要被代理的接口 Class<?>[] specifiedInterfaces = advised.getProxiedInterfaces(); // 需要被代理的接口数量为0 if (specifiedInterfaces.length == 0) { // 获取需要被代理的目标类型 Class<?> targetClass = advised.getTargetClass(); //目标类型不为空 if (targetClass != null) { //目标类型为接口 if (targetClass.isInterface()) { //将其添加到需要代理的接口中 advised.setInterfaces(targetClass); } // 目标类型为jdk动态代理创建的代理对象 else if (Proxy.isProxyClass(targetClass)) { // 获取目标类型上的所有接口,将其添加到需要被代理的接口中 advised.setInterfaces(targetClass.getInterfaces()); } //再次获取代理配置中需要被代理的接口 specifiedInterfaces = advised.getProxiedInterfaces(); } } //判断SpringProxy接口是否已经在被代理的接口中 boolean addSpringProxy = !advised.isInterfaceProxied(SpringProxy.class); //判断Advised接口是否已经在被代理的接口中 boolean addAdvised = !advised.isOpaque() && !advised.isInterfaceProxied(Advised.class); //判断DecoratingProxy接口是否已经在被代理的接口中 boolean addDecoratingProxy = (decoratingProxy && !advised.isInterfaceProxied(DecoratingProxy.class)); //一个计数器,会根据上面三个boolean值做递增 int nonUserIfcCount = 0; if (addSpringProxy) { nonUserIfcCount++; } if (addAdvised) { nonUserIfcCount++; } if (addDecoratingProxy) { nonUserIfcCount++; } // 下面就是构建所有需要被代理的接口 Class<?>[] proxiedInterfaces = new Class<?>[specifiedInterfaces.length + nonUserIfcCount];
System.arraycopy(specifiedInterfaces, 0, proxiedInterfaces, 0, specifiedInterfaces.length); int index = specifiedInterfaces.length; if (addSpringProxy) { proxiedInterfaces[index] = SpringProxy.class; index++; } if (addAdvised) { proxiedInterfaces[index] = Advised.class; index++; } if (addDecoratingProxy) { proxiedInterfaces[index] = DecoratingProxy.class; } return proxiedInterfaces; } 上面的方法执行完毕之后,会得到一个被代理的接口列表,默认情况下会得到下面的一个列表 [开发者硬编码指定的需要被代理的接口列表,SpringProxy,Advised,DecoratingProxy] 最终创建出来的代理对象,默认会实现上面列的所有接口,后面3个接口是aop中自动给我们加上的。 @1:getInterceptorsAndDynamicInterceptionAdvice 这个方法位于AdvisedSupport中,根据方法和目标类型获取方法上面匹配的拦截器链 public List<Object> getInterceptorsAndDynamicInterceptionAdvice(Method method, @Nullable Class<?> targetClass) { //会先尝试从缓存中获取,如果获取不到,会从advisorChainFactory中获取,然后将其丢到缓存中 MethodCacheKey cacheKey = new MethodCacheKey(method); List<Object> cached = this.methodCache.get(cacheKey); if (cached == null) { cached = this.advisorChainFactory.getInterceptorsAndDynamicInterceptionAdvice( this, method, targetClass); this.methodCache.put(cacheKey, cached); } return cached; } 从advisorChainFactory中获取拦截器链稍后细说,我们把这个阶段叫做拦截器链的获取阶段。 @3:ReflectiveMethodInvocation.proceed() 这个方法会依次调用拦截器链,最终会调用到目标方法,获得目标方法的返回值,里面的细节见后面的代理方法调用处理阶段 JdkDynamicAopProxy小结 1. 被创建的代理对象默认会实现SpringProxy,Advised,DecoratingProxy 3个接口 2. SpringProxy这个接口中没有任何方法,只是起一个标记作用,用来标记代理对象是使用spring aop创建的 3. 代理对象默认都会实现Advised接口,所以可以通过这个接口动态变更代理对象中的通知 4.
DecoratingProxy接口中定义了一个方法getDecoratedClass,用来获取被代理的原始目标对象的类型 下面来看另外一个类:ObjenesisCglibAopProxy,这个类继承了CglibAopProxy,大部分逻辑都在CglibAopProxy中,所以我们主要看CglibAopProxy中代码。 CglibAopProxy类 作用:采用cglib代理的方式创建代理对象,并处理代理对象的所有方法调用。 以getProxy方法为入口,一个方法一个方法来解说。 getProxy方法 public Object getProxy(@Nullable ClassLoader classLoader) { // 获取被代理的类 Class<?> rootClass = this.advised.getTargetClass(); // 代理对象的父类(cglib是采用继承的方式创建代理对象的,所以将被代理的类作为代理对象的父类) Class<?> proxySuperClass = rootClass; // 判断被代理的类是不是cglib创建的类,如果是cglib创建的类,会将其父类作为被代理的类 if (rootClass.getName().contains(ClassUtils.CGLIB_CLASS_SEPARATOR)) { proxySuperClass = rootClass.getSuperclass(); //添加需要被代理的接口 Class<?>[] additionalInterfaces = rootClass.getInterfaces(); for (Class<?> additionalInterface : additionalInterfaces) { this.advised.addInterface(additionalInterface); } } // 开始cglib创建代理,这个大家对cglib比较熟悉的一看就懂 Enhancer enhancer = createEnhancer(); // 设置被代理的父类 enhancer.setSuperclass(proxySuperClass); // 设置被代理的接口[开发者硬编码指定的需要被代理的接口列表,SpringProxy,Advised],这个比jdk动态代理的方式少了一个DecoratingProxy接口 enhancer.setInterfaces(AopProxyUtils.completeProxiedInterfaces(this.advised)); // 设置代理类类名生成策略 enhancer.setNamingPolicy(SpringNamingPolicy.INSTANCE); // 设置字节码的生成策略 enhancer.setStrategy(new ClassLoaderAwareGeneratorStrategy(classLoader)); // @1:获取Callback列表,这个稍后详解 Callback[] callbacks = getCallbacks(rootClass); Class<?>[] types = new Class<?>[callbacks.length]; for (int x = 0; x < types.length; x++) { types[x] = callbacks[x].getClass(); } // @2:设置CallbackFilter,CallbackFilter内部会判断被代理对象中的方法最终会被callbacks列表中的哪个Callback来处理 enhancer.setCallbackFilter(new ProxyCallbackFilter( this.advised.getConfigurationOnlyCopy(), this.fixedInterceptorMap, this.fixedInterceptorOffset)); enhancer.setCallbackTypes(types); // 获取代理对象(内部会先创建代理类,然后会根据代理类生成一个代理对象) return createProxyClassAndInstance(enhancer, callbacks); } 上面方法中有2个点比较难,需要说明,分别是 @1:getCallbacks方法 和 @2:创建ProxyCallbackFilter对象 @1:getCallbacks方法
通过被代理的类来获取Callback列表,Callback是用来处理代理对象的方法调用的,代理对象中可能有很多方法,每个方法可能采用不同的处理方式,所以会有多个Callback private Callback[] getCallbacks(Class<?> rootClass) throws Exception { // 是否需要将代理暴露在threadLocal中 boolean exposeProxy = this.advised.isExposeProxy(); // 配置是否是冻结的 boolean isFrozen = this.advised.isFrozen(); // 被代理的目标对象是否是静态的(即是否是单例的) boolean isStatic = this.advised.getTargetSource().isStatic(); // 当方法上有需要执行的拦截器的时候,会用这个来处理 Callback aopInterceptor = new DynamicAdvisedInterceptor(this.advised); // 当方法上没有需要执行的拦截器的时候,会使用targetInterceptor来处理,内部会通过反射直接调用目标对象的方法 Callback targetInterceptor; /** * 这块根据是否需要暴露代理到threadLocal中以及目标对象是否是静态的,会创建不同的Callback * isStatic为false的时候,同一个代理的不同方法调用,拿到的可能都是新的目标对象,所以当代理方法执行完毕之后,需要对目标对象进行释放 */ if (exposeProxy) { targetInterceptor = (isStatic ? new StaticUnadvisedExposedInterceptor(this.advised.getTargetSource().getTarget()) : new DynamicUnadvisedExposedInterceptor(this.advised.getTargetSource())); } else { targetInterceptor = (isStatic ? new StaticUnadvisedInterceptor(this.advised.getTargetSource().getTarget()) : new DynamicUnadvisedInterceptor(this.advised.getTargetSource())); } // targetDispatcher会直接调用目标方法 Callback targetDispatcher = (isStatic ? new StaticDispatcher(this.advised.getTargetSource().getTarget()) : new SerializableNoOp()); Callback[] mainCallbacks = new Callback[] { aopInterceptor, // 处理匹配到拦截器的方法 targetInterceptor, // 处理未匹配到拦截器的方法 new SerializableNoOp(), targetDispatcher, // 处理未匹配到拦截器的方法,和targetInterceptor有何不同呢?目标方法如果返回值的结果是目标对象类型的,会使用 targetInterceptor 处理,内部会返回代理对象 this.advisedDispatcher, // 处理Advised接口中定义的方法 new EqualsInterceptor(this.advised), // 处理equals方法 new HashCodeInterceptor(this.advised) // 处理hashCode方法 }; Callback[] callbacks; // 如果被代理的对象是单例的 && 配置是冻结的,此时会进行优化,怎么优化呢?
// 配置冻结的情况下,生成好的代理中通知是无法修改的,所以可以提前将每个方法对应的拦截器链找到并缓存起来 // 调用方法的时候,就直接从缓存中可以拿到方法对应的缓存信息,效率会高一些 if (isStatic && isFrozen) { Method[] methods = rootClass.getMethods(); Callback[] fixedCallbacks = new Callback[methods.length]; this.fixedInterceptorMap = new HashMap<>(methods.length); // 获取每个方法的调用链,然后缓存在fixedInterceptorMap中 for (int x = 0; x < methods.length; x++) { Method method = methods[x]; List<Object> chain = this.advised.getInterceptorsAndDynamicInterceptionAdvice(method, rootClass); fixedCallbacks[x] = new FixedChainStaticTargetInterceptor( chain, this.advised.getTargetSource().getTarget(), this.advised.getTargetClass()); this.fixedInterceptorMap.put(method, x); } callbacks = new Callback[mainCallbacks.length + fixedCallbacks.length]; System.arraycopy(mainCallbacks, 0, callbacks, 0, mainCallbacks.length); System.arraycopy(fixedCallbacks, 0, callbacks, mainCallbacks.length, fixedCallbacks.length); this.fixedInterceptorOffset = mainCallbacks.length; } else { callbacks = mainCallbacks; } return callbacks; } @2:创建ProxyCallbackFilter对象 enhancer.setCallbackFilter(new ProxyCallbackFilter( this.advised.getConfigurationOnlyCopy(), this.fixedInterceptorMap, this.fixedInterceptorOffset)); 这块重点在于ProxyCallbackFilter中的accept方法,这个方法会根据被调用的目标方法,确定最终由callbacks列表中的哪个Callback来处理,大家可以看一下源码,比较简单。 上面getCallbacks方法中涉及到了5个类如下 • DynamicAdvisedInterceptor • StaticUnadvisedExposedInterceptor • StaticUnadvisedInterceptor • DynamicUnadvisedInterceptor • StaticDispatcher 后面4个比较简单,大家可以去看一下源码,主要来看第一个类,基本上代理对象中的大部分自定义的方法都会进入到这个类的intercept方法中进行处理,代码如下 DynamicAdvisedInterceptor类 private static class DynamicAdvisedInterceptor implements MethodInterceptor, Serializable { //代理配置信息 private final AdvisedSupport advised; //构造器,需要一个AdvisedSupport public DynamicAdvisedInterceptor(AdvisedSupport advised) { this.advised = advised; } //这个方法是关键,用来处理代理对象中方法的调用 public Object intercept(Object proxy, Method method, Object[] args, MethodProxy methodProxy) throws Throwable { //被暴露在threadLocal中旧的代理对象 Object oldProxy
= null; //用来标记代理对象是否被暴露在threadLocal中 boolean setProxyContext = false; //目标对象 Object target = null; //目标源 TargetSource targetSource = this.advised.getTargetSource(); try { //代理配置中是否需要将代理暴露在threadLocal中 if (this.advised.exposeProxy) { //将代理对象暴露出去 oldProxy = AopContext.setCurrentProxy(proxy); //将setProxyContext置为true setProxyContext = true; } //获取目标对象(即被代理的对象) target = targetSource.getTarget(); Class<?> targetClass = (target != null ? target.getClass() : null); //@1:获取当前方法的拦截器链 List<Object> chain = this.advised.getInterceptorsAndDynamicInterceptionAdvice(method, targetClass); //记录方法返回值 Object retVal; //拦截器链为空 && 方法是public类型的 if (chain.isEmpty() && Modifier.isPublic(method.getModifiers())) { //获取方法调用参数 Object[] argsToUse = AopProxyUtils.adaptArgumentsIfNecessary(method, args); // 直接调用目标对象的方法 retVal = methodProxy.invoke(target, argsToUse); } else { // 创建一个方法调用器(包含了代理对象、目标对象、调用的方法、参数、目标类型、方法拦截器链) // @2:并执行方法调用器的proceed()方法,此方法会依次执行方法调用链,最终会调用目标方法,获取返回结果 retVal = new CglibMethodInvocation(proxy, target, method, args, targetClass, chain, methodProxy).proceed(); } // 处理方法返回结果:会根据方法返回值的类型,做一些处理,比如方法返回的类型为自己,则最后需要将返回值置为代理对象 retVal = processReturnType(proxy, target, method, retVal); return retVal; } finally { // 目标对象不为null && 目标源不是静态的 //所谓静态的,你可以理解为是否是单例的 // isStatic为true,表示目标对象是单例的,同一个代理对象中所有方法共享一个目标对象 // isStatic为false的时候,通常每次调用代理的方法,target对象是不一样的,所以方法调用完之后需要进行释放,可能有些资源清理,连接的关闭等操作 if (target != null && !targetSource.isStatic()) { targetSource.releaseTarget(target); } // setProxyContext为true if (setProxyContext) { // 需要将旧的代理再放回到上下文中 AopContext.setCurrentProxy(oldProxy); } } } } 上面代码中有2个重点:@1和@2 @1:获取当前方法的拦截器链,这个在JdkDynamicAopProxy中也有,稍后说。 @2:调用CglibMethodInvocation.proceed(),内部会依次调用方法拦截器链,最终会调用目标方法,获取目标方法返回值,这个稍后放在代理方法处理阶段详解。 下面来看一下方法拦截器链的获取。 方法拦截器链的获取 我们在创建代理的时候,增强的代码通常都放在Advice通知中,但是最终调用方法的时候,这些通知都会被转换为MethodInterceptor来执行,调用方法的过程中,需要先获取方法上匹配的所有方法拦截器链,然后依次执行,最终会调用到目标方法。 获取方法对应的拦截器链,对应下面这段代码
org.springframework.aop.framework.AdvisedSupport#getInterceptorsAndDynamicInterceptionAdvice AdvisorChainFactory advisorChainFactory = new DefaultAdvisorChainFactory(); public List<Object> getInterceptorsAndDynamicInterceptionAdvice(Method method, @Nullable Class<?> targetClass) { MethodCacheKey cacheKey = new MethodCacheKey(method); List<Object> cached = this.methodCache.get(cacheKey); if (cached == null) { cached = this.advisorChainFactory.getInterceptorsAndDynamicInterceptionAdvice( this, method, targetClass); this.methodCache.put(cacheKey, cached); } return cached; } 会调用DefaultAdvisorChainFactory#getInterceptorsAndDynamicInterceptionAdvice方法获取方法上匹配的拦截器链。 涉及到的类 AdvisorChainFactory接口 拦截器链工厂接口,定义了一个方法,用来获取方法匹配的拦截器链列表 package org.springframework.aop.framework; public interface AdvisorChainFactory { /** * 获取方法匹配的拦截器链列表 * @param config:代理配置信息,里面包含了创建代理的所有信息,如:Advisor列表,此方法会从Advisor列表中找到和method匹配的 * @param targetClass:目标类 */ List<Object> getInterceptorsAndDynamicInterceptionAdvice(Advised config, Method method, @Nullable Class<?> targetClass); } DefaultAdvisorChainFactory类 AdvisorChainFactory接口的默认实现。 public class DefaultAdvisorChainFactory implements AdvisorChainFactory, Serializable { @Override public List<Object> getInterceptorsAndDynamicInterceptionAdvice( Advised config, Method method, @Nullable Class<?> targetClass) { // 获取Advisor适配器注册器,前面我们有提到过一个知识点:所有的Advisor最终都会转换为MethodInterceptor类型的, // 然后注册方法调用链去执行,AdvisorAdapterRegistry就是搞这个事情的, // 其内部会将非MethodInterceptor类型通知通过适配器转换为MethodInterceptor类型 AdvisorAdapterRegistry registry = GlobalAdvisorAdapterRegistry.getInstance(); //获取配置中的Advisor列表 Advisor[] advisors = config.getAdvisors(); List<Object> interceptorList = new ArrayList<>(advisors.length); //获取被调用方法所在类实际的类型 Class<?> actualClass = (targetClass != null ?
targetClass : method.getDeclaringClass()); Boolean hasIntroductions = null; //遍历Advisor列表,找到和actualClass和方法匹配的所有方法拦截器(MethodInterceptor)链列表 for (Advisor advisor : advisors) { //判断是否是PointcutAdvisor类型的,这种类型的匹配分为2个阶段,先看类是否匹配,然后再看方法是否匹配 if (advisor instanceof PointcutAdvisor) { PointcutAdvisor pointcutAdvisor = (PointcutAdvisor) advisor; // 如果isPreFiltered为true,表示类已经匹配过,不需要再看类是否匹配了 if (config.isPreFiltered() || pointcutAdvisor.getPointcut().getClassFilter().matches(actualClass)) { MethodMatcher mm = pointcutAdvisor.getPointcut().getMethodMatcher(); boolean match; if (mm instanceof IntroductionAwareMethodMatcher) { if (hasIntroductions == null) { hasIntroductions = hasMatchingIntroductions(advisors, actualClass); } match = ((IntroductionAwareMethodMatcher) mm).matches(method, actualClass, hasIntroductions); } else { //方法是否匹配 match = mm.matches(method, actualClass); } //方法匹配 if (match) { // 通过AdvisorAdapterRegistry的getInterceptors将advisor转换为MethodInterceptor列表 MethodInterceptor[] interceptors = registry.getInterceptors(advisor); //方法是否动态匹配 if (mm.isRuntime()) { //轮询拦截器,将其包装为InterceptorAndDynamicMethodMatcher对象,后续方法调用的时候可以做动态匹配 for (MethodInterceptor interceptor : interceptors) { interceptorList.add(new InterceptorAndDynamicMethodMatcher(interceptor, mm)); } } else { interceptorList.addAll(Arrays.asList(interceptors)); } } } } else if (advisor instanceof IntroductionAdvisor) { IntroductionAdvisor ia = (IntroductionAdvisor) advisor; if (config.isPreFiltered() || ia.getClassFilter().matches(actualClass)) { Interceptor[] interceptors = registry.getInterceptors(advisor); interceptorList.addAll(Arrays.asList(interceptors)); } } else { Interceptor[] interceptors = registry.getInterceptors(advisor); interceptorList.addAll(Arrays.asList(interceptors)); } } return interceptorList; } } 下面来看AdvisorAdapterRegistry这个接口。 AdvisorAdapterRegistry接口 AdvisorAdapter注册器,AdvisorAdapter可以将Advisor中的Advice适配为MethodInterceptor package org.springframework.aop.framework.adapter; public interface
AdvisorAdapterRegistry { //将一个通知(Advice)包装为Advisor对象 Advisor wrap(Object advice) throws UnknownAdviceTypeException; //根据Advisor获取方法MethodInterceptor列表 MethodInterceptor[] getInterceptors(Advisor advisor) throws UnknownAdviceTypeException; //注册AdvisorAdapter,AdvisorAdapter可以将Advisor中的Advice适配为MethodInterceptor void registerAdvisorAdapter(AdvisorAdapter adapter); } DefaultAdvisorAdapterRegistry类 AdvisorAdapterRegistry的默认实现,目前里面做的事情主要是将负责将前置通知,异常通知,后置通知转换为MethodInterceptor类型的,源码比较简单,大家看一下就懂了。 public class DefaultAdvisorAdapterRegistry implements AdvisorAdapterRegistry, Serializable { //AdvisorAdapter转换器列表,AdvisorAdapter负责将Advisor中的Advice转换为MethodInterceptor类型的 private final List<AdvisorAdapter> adapters = new ArrayList<>(3); //默认会注册3个AdvisorAdapter,这3个负责将前置通知,异常通知,后置通知转换为MethodInterceptor类型的 public DefaultAdvisorAdapterRegistry() { registerAdvisorAdapter(new MethodBeforeAdviceAdapter()); registerAdvisorAdapter(new AfterReturningAdviceAdapter()); registerAdvisorAdapter(new ThrowsAdviceAdapter()); } @Override public Advisor wrap(Object adviceObject) throws UnknownAdviceTypeException { if (adviceObject instanceof Advisor) { return (Advisor) adviceObject; } if (!(adviceObject instanceof Advice)) { throw new UnknownAdviceTypeException(adviceObject); } Advice advice = (Advice) adviceObject; if (advice instanceof MethodInterceptor) { // So well-known it doesn't even need an adapter. 
return new DefaultPointcutAdvisor(advice); } //轮询adapters for (AdvisorAdapter adapter : this.adapters) { //adapter是否支持适配advice这个通知 if (adapter.supportsAdvice(advice)) { return new DefaultPointcutAdvisor(advice); } } throw new UnknownAdviceTypeException(advice); } //将Advisor对象转换为MethodInterceptor列表,不过通常情况下一个advisor会返回一个MethodInterceptor @Override public MethodInterceptor[] getInterceptors(Advisor advisor) throws UnknownAdviceTypeException { List<MethodInterceptor> interceptors = new ArrayList<>(3); Advice advice = advisor.getAdvice(); if (advice instanceof MethodInterceptor) { interceptors.add((MethodInterceptor) advice); } //轮询adapters for (AdvisorAdapter adapter : this.adapters) { //先看一下adapter是否支持适配advice这个通知 if (adapter.supportsAdvice(advice)) { //如果匹配,这调用适配器的getInterceptor方法将advisor转换为MethodInterceptor interceptors.add(adapter.getInterceptor(advisor)); } } if (interceptors.isEmpty()) { throw new UnknownAdviceTypeException(advisor.getAdvice()); } return interceptors.toArray(new MethodInterceptor[0]); } @Override public void registerAdvisorAdapter(AdvisorAdapter adapter) { this.adapters.add(adapter); } } AdvisorAdapter接口 package org.springframework.aop.framework.adapter; public interface AdvisorAdapter { //判断这个适配器支持advice这个通知么 boolean supportsAdvice(Advice advice); //获取advisor对应的MethodInterceptor MethodInterceptor getInterceptor(Advisor advisor); } MethodBeforeAdviceAdapter类 适配MethodBeforeAdvice前置通知,负责将MethodBeforeAdvice类型的通知转换为MethodBeforeAdviceInterceptor类型的 class MethodBeforeAdviceAdapter implements AdvisorAdapter, Serializable { @Override public boolean supportsAdvice(Advice advice) { return (advice instanceof MethodBeforeAdvice); } @Override public MethodInterceptor getInterceptor(Advisor advisor) { MethodBeforeAdvice advice = (MethodBeforeAdvice) advisor.getAdvice(); return new MethodBeforeAdviceInterceptor(advice); } } MethodBeforeAdviceInterceptor类 MethodBeforeAdvice通知适配为MethodInterceptor类型的,代码很简单,大家一看就懂。 package org.springframework.aop.framework.adapter; 
public class MethodBeforeAdviceInterceptor implements MethodInterceptor, BeforeAdvice, Serializable { private final MethodBeforeAdvice advice; public MethodBeforeAdviceInterceptor(MethodBeforeAdvice advice) { Assert.notNull(advice, "Advice must not be null"); this.advice = advice; } @Override public Object invoke(MethodInvocation mi) throws Throwable { //先调用前置通知 this.advice.before(mi.getMethod(), mi.getArguments(), mi.getThis()); //然后继续处理拦截器链,内部会调用目标方法 return mi.proceed(); } } AfterReturningAdviceAdapter类 适配AfterReturningAdvice后置通知,负责将AfterReturningAdvice类型的通知转换为AfterReturningAdviceInterceptor类型的 class AfterReturningAdviceAdapter implements AdvisorAdapter, Serializable { @Override public boolean supportsAdvice(Advice advice) { return (advice instanceof AfterReturningAdvice); } @Override public MethodInterceptor getInterceptor(Advisor advisor) { AfterReturningAdvice advice = (AfterReturningAdvice) advisor.getAdvice(); return new AfterReturningAdviceInterceptor(advice); } } AfterReturningAdviceInterceptor类 AfterReturningAdvice通知适配为MethodInterceptor类型的,代码很简单,大家一看就懂。 public class AfterReturningAdviceInterceptor implements MethodInterceptor, AfterAdvice, Serializable { private final AfterReturningAdvice advice; public AfterReturningAdviceInterceptor(AfterReturningAdvice advice) { Assert.notNull(advice, "Advice must not be null"); this.advice = advice; } @Override public Object invoke(MethodInvocation mi) throws Throwable { //先调用拦截器链,内部会调用目标方法 Object retVal = mi.proceed(); //然后执行后置通知 this.advice.afterReturning(retVal, mi.getMethod(), mi.getArguments(), mi.getThis()); //返回结果 return retVal; } } ThrowsAdviceAdapter类 适配ThrowsAdvice异常通知,负责将ThrowsAdvice类型的通知转换为ThrowsAdviceInterceptor类型的 class ThrowsAdviceAdapter implements AdvisorAdapter, Serializable { @Override public boolean supportsAdvice(Advice advice) { return (advice instanceof ThrowsAdvice); } @Override public MethodInterceptor getInterceptor(Advisor advisor) { return new
ThrowsAdviceInterceptor(advisor.getAdvice()); } } ThrowsAdviceInterceptor类 ThrowsAdvice通知适配为MethodInterceptor类型的,代码很简单,大家一看就懂。 package org.springframework.aop.framework.adapter; public class ThrowsAdviceInterceptor implements MethodInterceptor, AfterAdvice { private static final String AFTER_THROWING = "afterThrowing"; private final Object throwsAdvice; //缓存异常处理方法(异常类型->异常处理方法),原文贴出的代码中省略了该字段的声明,这里补上 private final Map<Class<?>, Method> exceptionHandlerMap = new HashMap<>(); //创建ThrowsAdviceInterceptor public ThrowsAdviceInterceptor(Object throwsAdvice) { Assert.notNull(throwsAdvice, "Advice must not be null"); this.throwsAdvice = throwsAdvice; //获取异常通知中定义的所有public方法 Method[] methods = throwsAdvice.getClass().getMethods(); //轮询methods for (Method method : methods) { //方法名称为afterThrowing && 方法参数为1或者4 if (method.getName().equals(AFTER_THROWING) && (method.getParameterCount() == 1 || method.getParameterCount() == 4)) { //获取方法的最后一个参数类型 Class<?> throwableParam = method.getParameterTypes()[method.getParameterCount() - 1]; //判断方法参数类型是不是Throwable类型的 if (Throwable.class.isAssignableFrom(throwableParam)) { // 缓存异常处理方法到map中(异常类型->异常处理方法) this.exceptionHandlerMap.put(throwableParam, method); } } } //如果exceptionHandlerMap为空,抛出异常,所以最少要有一个异常处理方法 if (this.exceptionHandlerMap.isEmpty()) { throw new IllegalArgumentException( "At least one handler method must be found in class [" + throwsAdvice.getClass() + "]"); } } /** * 获取异常通知中自定义的处理异常方法的数量 */ public int getHandlerMethodCount() { return this.exceptionHandlerMap.size(); } @Override public Object invoke(MethodInvocation mi) throws Throwable { try { //调用通知链 return mi.proceed(); } catch (Throwable ex) { //获取异常通知中自定义的处理异常的方法 Method handlerMethod = getExceptionHandler(ex); //当处理的方法不为空 if (handlerMethod != null) { //调用异常处理方法 invokeHandlerMethod(mi, ex, handlerMethod); } //继续向外抛出异常 throw ex; } } /** * 获取throwsAdvice中处理exception参数指定的异常的方法 */ @Nullable private Method getExceptionHandler(Throwable exception) { //获取异常类型 Class<?> exceptionClass = exception.getClass(); //从缓存中获取异常类型对应的方法 Method handler =
this.exceptionHandlerMap.get(exceptionClass); //来一个循环,查询处理方法,循环条件:方法为空 && 异常类型!=Throwable while (handler == null && exceptionClass != Throwable.class) { //获取异常的父类型 exceptionClass = exceptionClass.getSuperclass(); //从缓存中查找异常对应的处理方法 handler = this.exceptionHandlerMap.get(exceptionClass); } //将查找结果返回 return handler; } //通过反射调用异常通知中的异常方法 private void invokeHandlerMethod(MethodInvocation mi, Throwable ex, Method method) throws Throwable { //构建方法请求参数 Object[] handlerArgs; //若只有1个参数,参数为:异常对象 if (method.getParameterCount() == 1) { handlerArgs = new Object[] {ex}; } else { //4个参数(方法、方法请求参数、目标对象、异常对象) handlerArgs = new Object[] {mi.getMethod(), mi.getArguments(), mi.getThis(), ex}; } try { //通过反射调用异常通知中的方法 method.invoke(this.throwsAdvice, handlerArgs); } catch (InvocationTargetException targetEx) { throw targetEx.getTargetException(); } } } 代理方法的调用过程(拦截器链的执行) 拦截器链执行过程 到目前,已经获取到代理对象,接着会开始使用这个代理对象,在代理对象上执行一些方法调用,此时会依次调用此方法上的所有MethodInterceptor,最终会调用到目标对象上对应的方法。 jdk动态代理方式创建代理最终会调用ReflectiveMethodInvocation#proceed方法。 cglib方式创建的代理最终会调用CglibAopProxy.CglibMethodInvocation#proceed方法。 下面来看一下这两个类的代码。 ReflectiveMethodInvocation类 package org.springframework.aop.framework; public class ReflectiveMethodInvocation implements ProxyMethodInvocation, Cloneable { //生成的代理对象 protected final Object proxy; //被代理的目标对象 protected final Object target; //被调用的方法 protected final Method method; //调用方法传入参数 protected Object[] arguments; //目标对象类型 private final Class<?> targetClass; /** * 当前被调用的方法上匹配的 MethodInterceptor and InterceptorAndDynamicMethodMatcher 列表 * 即方法调用链列表 */ protected final List<?> interceptorsAndDynamicMethodMatchers; //当前正在调用的拦截器索引 private int currentInterceptorIndex = -1; //构造器 protected ReflectiveMethodInvocation( Object proxy, @Nullable Object target, Method method, @Nullable Object[] arguments, @Nullable Class<?> targetClass, List<Object> interceptorsAndDynamicMethodMatchers) { this.proxy = proxy; this.target = target; this.targetClass = targetClass;
//获取桥接方法,关于什么是桥接方法,比较简单,百度一下,这里不做说明 this.method = BridgeMethodResolver.findBridgedMethod(method); this.arguments = AopProxyUtils.adaptArgumentsIfNecessary(method, arguments); this.interceptorsAndDynamicMethodMatchers = interceptorsAndDynamicMethodMatchers; } //这里是重点,用来处理被调用的方法,会递归进行调用,所有的拦截器都执行完毕之后,会通过反射调用目标方法 public Object proceed() throws Throwable { // 拦截器都执行完毕之后,通过反射调用目标对象中的方法 if (this.currentInterceptorIndex == this.interceptorsAndDynamicMethodMatchers.size() - 1) { return invokeJoinpoint(); } //获取++this.currentInterceptorIndex指定的拦截器 Object interceptorOrInterceptionAdvice = this.interceptorsAndDynamicMethodMatchers.get(++this.currentInterceptorIndex); //判断拦截器是否是InterceptorAndDynamicMethodMatcher,这种表示是动态拦截器, // 所谓动态拦截器就是要根据方法的参数的值来判断拦截器是否需要执行 if (interceptorOrInterceptionAdvice instanceof InterceptorAndDynamicMethodMatcher) { InterceptorAndDynamicMethodMatcher dm = (InterceptorAndDynamicMethodMatcher) interceptorOrInterceptionAdvice; Class<?> targetClass = (this.targetClass != null ? 
this.targetClass : this.method.getDeclaringClass()); //判断动态拦截器是否需要执行 if (dm.methodMatcher.matches(this.method, targetClass, this.arguments)) { //执行当前拦截器的调用 return dm.interceptor.invoke(this); } else { //如果不匹配,直接递归进入下一个拦截器的调用 return proceed(); } } else { //执行拦截器的调用 return ((MethodInterceptor) interceptorOrInterceptionAdvice).invoke(this); } } //通过反射调用目标方法 @Nullable protected Object invokeJoinpoint() throws Throwable { return AopUtils.invokeJoinpointUsingReflection(this.target, this.method, this.arguments); } } ProxyFactory简化代理的创建 上面代理的整个创建过程和使用过程还是挺复杂的,spring在AdvisedSupport类的基础上又添加2个子类 • ProxyCreatorSupport • ProxyFactory 通过这2个子类,将步骤稍微简化了一些,这2个类的代码比较简单,上面的如果理解了,看这2个类的代码会非常的轻松,源码这里就不细说了。 ProxyCreatorSupport用来对代理的创建提供支持,内部添加了AopProxyFactory对象的引用,将代理的创建过程给简化了。 AopProxyFactory aopProxyFactory; ProxyFactory类继承了ProxyCreatorSupport,让创建代理的过程更简单了,如果采用硬编码的方式,通常我们会使用ProxyFactory来创建代理对象,代码只需要下面几行了 //通过spring提供的代理创建工厂来创建代理 ProxyFactory proxyFactory = new ProxyFactory(); //ProxyFactory继承了AdvisedSupport类,所以可以直接可以通过ProxyFactory来设置创建代理需要的参数 //为工厂指定目标对象 proxyFactory.setTarget(target); //添加顾问 proxyFactory.addAdvisor(advisor); //调用proxyFactory.getProxy();创建代理 Object proxy = proxyFactory.getProxy(); 案例 下面来一些案例,通过案例理解会更容易一些。 案例1 这个案例主要看一下生成的代理对象的一些信息。 package com.javacode2018.aop.demo5; import com.javacode2018.aop.demo4.FundsService; import org.springframework.aop.MethodBeforeAdvice; import org.springframework.aop.framework.ProxyFactory; import org.springframework.aop.support.DefaultPointcutAdvisor; import org.springframework.lang.Nullable; import java.lang.reflect.Method; public class AopTest5 { public static void main(String[] args) { ProxyFactory proxyFactory = new ProxyFactory(); proxyFactory.setTarget(new FundsService()); proxyFactory.addAdvisor(new DefaultPointcutAdvisor(new MethodBeforeAdvice() { @Override public void before(Method method, Object[] args, @Nullable Object target) throws Throwable { System.out.println(method); } })); //创建代理对象 Object proxy = proxyFactory.getProxy(); 
System.out.println("代理对象的类型:" + proxy.getClass()); System.out.println("代理对象的父类:" + proxy.getClass().getSuperclass()); System.out.println("代理对象实现的接口列表"); for (Class<?> cf : proxy.getClass().getInterfaces()) { System.out.println(cf); } } } 运行输出 代理对象的类型:class com.javacode2018.aop.demo4.FundsService$$EnhancerBySpringCGLIB$$ad14161a 代理对象的父类:class com.javacode2018.aop.demo4.FundsService 代理对象实现的接口列表 interface org.springframework.aop.SpringProxy interface org.springframework.aop.framework.Advised interface org.springframework.cglib.proxy.Factory 输出中可以看出默认帮我们实现了3个接口[SpringProxy,Advised,Factory] 案例2:代理接口 有接口的情况默认会通过jdk动态代理的方式生成代理,下面来看一下。 来个接口 package com.javacode2018.aop.demo6; public interface IService { void say(String name); } 实现类 package com.javacode2018.aop.demo6; public class Service implements IService { @Override public void say(String name) { System.out.println("hello:" + name); } } 测试案例 package com.javacode2018.aop.demo6; import org.junit.Test; import org.springframework.aop.MethodBeforeAdvice; import org.springframework.aop.framework.ProxyFactory; import org.springframework.lang.Nullable; import java.lang.reflect.Method; public class AopTest6 { @Test public void test1() { Service target = new Service(); ProxyFactory proxyFactory = new ProxyFactory(); //设置需要被代理的对象 proxyFactory.setTarget(target); //设置需要代理的接口 proxyFactory.addInterface(IService.class); proxyFactory.addAdvice(new MethodBeforeAdvice() { @Override public void before(Method method, Object[] args, @Nullable Object target) throws Throwable { System.out.println(method); } }); IService proxy = (IService) proxyFactory.getProxy(); System.out.println("代理对象的类型:" + proxy.getClass()); System.out.println("代理对象的父类:" + proxy.getClass().getSuperclass()); System.out.println("代理对象实现的接口列表"); for (Class<?> cf : proxy.getClass().getInterfaces()) { System.out.println(cf); } //调用代理的方法 System.out.println("n调用代理的方法"); proxy.say("spring aop"); } } 运行输出 代理对象的类型:class com.sun.proxy.$Proxy4 代理对象的父类:class java.lang.reflect.Proxy 
Interfaces implemented by the proxy:
interface com.javacode2018.aop.demo6.IService
interface org.springframework.aop.SpringProxy
interface org.springframework.aop.framework.Advised
interface org.springframework.core.DecoratingProxy

Invoking a method on the proxy
public abstract void com.javacode2018.aop.demo6.IService.say(java.lang.String)
hello:spring aop

The first line of output shows that the proxy was created with JDK dynamic proxies. The second line confirms that every proxy object created through JDK dynamic proxies is a subclass of Proxy. The interface list shows that three interfaces are implemented for us by default: [SpringProxy, Advised, DecoratingProxy].

Example 3: forcing a cglib proxy

Add the following code to Example 2; setting proxyTargetClass to true forces a cglib proxy.

// Force a cglib proxy
proxyFactory.setProxyTargetClass(true);

Example 4: exposing the proxy in a ThreadLocal

First look at the code below. The Service class has two methods; m1 calls m2. An AOP proxy is created for this class, and the proxy is used to time every method call.

package com.javacode2018.aop.demo7;

class Service {
    public void m1() {
        System.out.println("m1");
        this.m2();
    }

    public void m2() {
        System.out.println("m2");
    }
}

public class AopTest7 {
    @Test
    public void test1() {
        Service target = new Service();
        ProxyFactory proxyFactory = new ProxyFactory();
        proxyFactory.setTarget(target);
        proxyFactory.addAdvice(new MethodInterceptor() {
            @Override
            public Object invoke(MethodInvocation invocation) throws Throwable {
                long startTime = System.nanoTime();
                Object result = invocation.proceed();
                long endTime = System.nanoTime();
                System.out.println(String.format("method %s took (ns): %s",
                        invocation.getMethod().getName(), endTime - startTime));
                return result;
            }
        });
        Service proxy = (Service) proxyFactory.getProxy();
        proxy.m1();
    }
}

Running it prints:

m1
m2
method m1 took (ns): 11056000

Why is there no timing line for m2? Because m2 is called inside m1 through "this", and "this" actually refers to the target object above, not the proxy. To have m2 advised as well, it must be called through the proxy. You can expose the proxy in a ThreadLocal, then fetch it inside m1 and call m2 through it. Two changes are needed.

Change 1: when configuring the proxy, expose it:

proxyFactory.setExposeProxy(true);

Change 2: in m1, call m2 like this:

((Service) AopContext.currentProxy()).m2();

Run it again and the result is:

m1
m2
method m2 took (ns): 51200
method m1 took (ns): 12794500

This feature is quite useful; I expect you will need it at some point.

Summary

This article covers a lot, so take your time digesting it, and feel free to leave questions in the comments. The next article will explain in detail how Spring makes AOP fully automatic; there is a lot of material there too, so stay tuned.

Example source code: https://gitee.com/javacode2018/spring-series
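The self-invocation pitfall shown in Example 4 is not specific to Spring; it can be reproduced with a plain JDK dynamic proxy from the standard library. The sketch below (illustrative names, java.lang.reflect only) records which calls actually pass through the proxy: the internal this.m2() call never does, because inside m1() "this" is the raw target.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SelfInvocationDemo {
    public interface Api {
        void m1();
        void m2();
    }

    public static class Target implements Api {
        @Override
        public void m1() {
            this.m2(); // "this" is the raw target, so the call bypasses the proxy
        }

        @Override
        public void m2() { }
    }

    /** Method names intercepted by the proxy, in call order. */
    public static final List<String> intercepted = new ArrayList<>();

    public static Api createProxy(Api target) {
        InvocationHandler handler = (proxy, method, args) -> {
            intercepted.add(method.getName()); // stand-in for the timing advice
            return method.invoke(target, args);
        };
        return (Api) Proxy.newProxyInstance(
                Api.class.getClassLoader(), new Class<?>[] { Api.class }, handler);
    }

    public static void main(String[] args) {
        Api proxy = createProxy(new Target());
        proxy.m1();
        // Only m1 was intercepted: the nested this.m2() never went through the proxy.
        System.out.println(intercepted); // [m1]
    }
}
```

Spring's setExposeProxy(true) plus AopContext.currentProxy() works around exactly this: it gives the target a way to route the nested call back through the proxy.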
SP Numeric Edit Control Introduction When programming a GUI, quite often you use edit boxes to enter numeric values. A standard edit box has the ES_NUMBER style that lets you restrict a user's input, allowing only digits to be entered. It's a useful option, but it does not cover all cases met in practice. For example, when you need to input floating-point numbers, you should let the user enter not only digits but also a decimal separator and exponent symbol. Moreover, the floating-point format is more complex than a simple sequence of digits, so entered text has to be parsed to make sure it can be converted to a number and possibly let the user know about detected errors. So, I've made a special ActiveX control based on the standard edit box that extends its functionality and offers additional options for handling numbers. Features First of all, I'd like to note that this control deals directly with numeric data types, but not text strings. Internally, it performs a conversion of number values to their textual representation and back. The conversion is performed according to the certain format defined by a mask and an additional set of parameters (format properties). The mask is a text string that defines a format expression that matches certain syntaxes. You can specify your own mask or use the default one that is automatically generated according to system/locale settings. Format properties are used during formatting, scanning, and generation of the default masks. The control operates in two modes: display and editing. Editing mode is turned on when the control gets keyboard input focus; otherwise, it stays in display mode. Each mode has its own format parameters, such as mask and format properties. Therefore, the user can see two different textual representations of the same numeric value. In display mode, the value is only converted in text, but in editing mode two-way conversion is performed. 
Usually, for editing mode, you should use a simplified format while, for display mode, you can enable a full set of features. Consider that you want to handle currency values. It would be convenient for the user to see a number like $4,499.98, but at the same time, during editing, the user should not be forced to enter the monetary symbol and separate groups with commas; he or she just has to enter the essential data: 4499.98. This is the reason why two modes are provided.

Mask

A mask consists of patterns, separated with semicolons. Every pattern corresponds to a certain value range or state. The number of the mask's patterns and their purpose depend on the data type handled by the control. See Table 1 for more information.

Table 1: Mask patterns corresponding to data types.

  vtInt8 (VT_I1), vtInt16 (VT_I2), vtInt32 (VT_I4), vtInt64 (VT_I8):
      pattern 1 - positive number; pattern 2 - negative number; pattern 3 - zero; pattern 4 - null
  vtUInt8 (VT_UI1), vtUInt16 (VT_UI2), vtUInt32 (VT_UI4), vtUInt64 (VT_UI8):
      pattern 1 - non-zero number; pattern 2 - zero; pattern 3 - null
  vtFloat (VT_R4), vtDouble (VT_R8):
      pattern 1 - positive number; pattern 2 - negative number; pattern 3 - positive zero; pattern 4 - negative zero; pattern 5 - positive infinity; pattern 6 - negative infinity; pattern 7 - quiet NaN; pattern 8 - signaling NaN; pattern 9 - null

Any numeric value is formatted according to a certain pattern. For example, a mask for double values (vtDouble) consists of nine patterns; a negative value will be formatted with pattern 2; positive infinity, with pattern 5, and so on. There are two types of patterns: value and literal.
Value patterns are used to format definite numeric values. These are "number" and "zero" patterns (patterns related to positive number, negative number, non-zero number, positive zero, negative zero, and zero). The rest are literal patterns. Literal patterns are used to represent a special state of the value; for example, when the value is NULL or a floating-point value is negative or positive infinity. In turn, a pattern consists of segments. All literal patterns have only one segment, but value patterns have at least one segment, corresponding to the integer part of a number, and can have two additional ones: prefix and suffix. Floating-point patterns additionally have segments for the fraction and exponent parts. An exponent part exists only in E-format (exponential) patterns. • Literal pattern schema: { literal } • Integer value pattern schema: { prefix } | { integer } | { suffix } • Floating-point value F-format pattern schema: { prefix } | { integer } { . fraction } | { suffix } • Floating-point value E-format pattern schema: { prefix } | { integer } { . fraction } { e exponent } | { suffix } Segments are always placed in the order listed above. Prefixes and suffixes are separated with a "|" symbol from the "value" part. They are optional, but when used, they both must be present; however, you may specify an empty prefix or suffix. An integer segment begins right after the "|" prefix delimiter if one is present, or from the pattern's beginning otherwise. Generally, a fraction starts from a "." symbol and the exponent from the "e" symbol. However, when the segment is completely included in an "optional" block (discussed below), it starts from the token opening that block. The integer, fraction, and exponent segments are mandatory for the corresponding patterns. During formatting and scanning, the digits of a number are handled sequentially in a definite order, depending on what segment is being processed.
Integer and exponent parts are processed from right to left, but the fraction part is processed from left to right. Segments are composed of tokens. Every token specifies a definite instruction for the formatting procedure. There are three types of tokens that can be used inside segments: control tokens, placeholders, and literals.

Table 2: Tokens

Delimiters:
  ;  end of pattern
  |  prefix/suffix delimiter
Control tokens:
  (  open repeatable block
  )  close repeatable block
  [  open optional block
  ]  close optional block
Placeholders:
  0  digit placeholder with default zero value
  _  digit placeholder with default space value
  #  digit placeholder
  -  negative sign
  +  positive sign
  $  currency symbol
  %  percent symbol
  ‰  per mille symbol
  ,  thousand (group) separator
  .  decimal separator
  e  exponent
Reserved:
  {, }, <, >  reserved for future extensions
Literals:
  \  escape symbol
Any character can be used as a literal. Characters corresponding to the reserved symbols should be preceded with a backslash "\". In addition, there are three special escape symbols:
  \r - generates carriage return (CR)
  \n - generates line feed (LF)
  \t - generates tab

Here you can see a few examples of masks: 1. Floating-point number in F-format (non-exponential representation). (###,)##0.00(#);-(###,)##0.00(#);0.00;0.00;\+INF;\-INF;QNaN;SNaN;NULL 2. Floating-point number in E-format (exponential representation). (###,)##0.00(#)e\+(#)0;-(###,)##0.00(#)e\-(#)0;0.00e\+0;0.00e\-0;\+INF;\-INF;QNaN;SNaN;NULL 3. Floating-point number in simplified E-format (exponential representation). (#)0[.0(#)][e\+(#)0];-(#)0[.0(#)][e\-(#)0];0[.0][e\+0];0[.0][e\-0];\+INF;\-INF;QNaN;SNaN 4. Floating-point currency in US-format. $(###,)###.00(#);($|(###,)###.00(#)|);$.00;($|.00|);\+INF;\-INF;QNaN;SNaN By using control tokens, you can define repeatable and optional blocks. The formatting procedure will continue to use a repeatable block until all digits have been handled.
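As an aside, the repeatable group token "(###,)" behaves much like the grouping part of a java.text.DecimalFormat pattern, which reuses the group definition until all integer digits are consumed. This is an analogy only; the control itself is a COM/C++ component and does not use Java:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class GroupingDemo {
    public static String format(double value) {
        // "#,##0.00": the "#,##" group is reused for as many digit groups as needed,
        // similar in spirit to the "(###,)" repeatable block in the masks above.
        DecimalFormat df = new DecimalFormat("#,##0.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(value);
    }

    public static void main(String[] args) {
        System.out.println(format(4499.98));   // 4,499.98
        System.out.println(format(1234567.5)); // 1,234,567.50
    }
}
```

No matter how many digits the value has, the single group definition keeps being applied, which is exactly the behavior a repeatable block gives the mask language.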
An optional block is used once, and only if there are unhandled digits. Blocks defined by control tokens cannot partially overlap each other, but one block can be nested inside another. Any block must be located within only one segment. Each opened block must be closed with a corresponding token. Repeatable blocks are applicable only to value segments such as the integer, fraction, and exponent parts. Placeholders are replaced with the digits or symbols they are bound to. For example, "+" will be replaced with a positive sign, "$" with a monetary symbol, and so on. "0", "_", and "#" are digit placeholders. They are used not only for output but also for input during scanning. There is a difference between them: during formatting, when there are no digits left to handle, "0" and "_" are substituted with zero and space characters respectively, while "#" generates nothing. They can be used as placeholders only inside value segments. Literals are just written as they are.

Format properties

By using format properties, you can specify additional parameters for the formatting and scanning operations and customize generation of the default mask. In fact, fpidNegativeInfinity, fpidPositiveInfinity, fpidQuietNaN, fpidSignalingNaN, fpidNull, fpidLeadingZero, fpidDecimalDigitsNumber, fpidExponentDigitsNumber, fpidGrouping, fpidNegativePattern, and fpidPositivePattern are only used for the generation of the default mask, so they have no effect when a custom mask is used. The other properties are involved in formatting and scanning. See Table 3 for more information.

Table 3: Format properties

fpidWhiteSpace - symbol used as the default substitution for the white-space token ("_").
fpidZero - symbol used as the default substitution for the zero token ("0").
fpidNegativeSign - string value for the negative sign.
fpidPositiveSign - string value for the positive sign. If numbers without any sign should be interpreted as positive, use an empty string for this parameter.
fpidNegativeInfinity - representation of negative infinity.
fpidPositiveInfinity - representation of positive infinity.
fpidQuietNaN - representation of the "quiet not a number" value.
fpidSignalingNaN - representation of the "signaling not a number" value.
fpidNull - representation of a Null value.
fpidCurrency - string used as the monetary symbol.
fpidPercent - string used as the percent symbol.
fpidPermille - string used as the permille symbol.
fpidExponent - string used as the exponent symbol.
fpidDecimalSeparator - character(s) used as the decimal separator.
fpidGroupSeparator - character(s) used to separate groups of digits to the left of the decimal.
fpidLeadingZero - specifier for leading zeros in decimal fields in the mask generated by default. If set to True, leading zeros will be added; otherwise, no leading zeros will precede the decimal separator.
fpidDecimalDigitsNumber - minimal number of fractional digits to be printed.
fpidExponentDigitsNumber - minimal number of exponent digits to be printed.
fpidGrouping - sizes for each group of digits to the left of the decimal. An explicit size is needed for each group, and sizes are separated by semicolons. If the last value is zero, the preceding value is repeated.
fpidNegativePattern - negative number mode; that is, the format for a negative number.
fpidPositivePattern - positive number mode; that is, the format for a positive number.

Control Architecture

The control is implemented with several classes (see Figure 1). Figure 1 The main class of the control is called NumericEditBox. It implements the INumericEditBox interface, which lets you change various properties related to the control's visualization and behavior. By using its Value property, you can access the numeric value handled by the control. NumericEditBox contains another object called Formatter. It can be accessed through the Formatter property at run time, or through the FormatterParams property in the control's designer.
Formatter maintains the value type, format type, masks, and format properties for editing and display modes. It actually manages the formatting process and provides the necessary facilities for its configuration.

How to Use It

First of all, make sure that SpNumericEdit.dll is registered on your PC. If it's not, use the command below to register the COM control. regsvr32 SpNumericEdit.dll If you built the control with Visual Studio, it should be registered automatically. During design time, you can use the FormatterParams property (see Figure 2) to open the special property page (see Figure 3) that makes formatter configuration easy. Figure 2 Figure 3 Also, it is possible to change format parameters at run time. You can do that by using the IFormatter::Configure method: HRESULT Configure([in] ValueTypeConstants enValueType, [in] FormatTypeConstants enFormatType, [in] VARIANT vDisplayFmtProps, [in] VARIANT vEditingFmtProps, [in, defaultvalue(NULL)] BSTR bsDisplatMask, [in, defaultvalue(NULL)] BSTR bsEditingMask); The method takes six parameters, as shown in Table 4:

Table 4: Configuration Parameters

enValueType - data type of the value (vtInt8, vtInt16, and so forth).
enFormatType - format type (ftNumeric, ftCurrency, and so on).
vDisplayFmtProps, vEditingFmtProps - format properties for display and editing modes correspondingly. This parameter is a VARIANT that can hold a SAFEARRAY or IFormatProperties. If a SAFEARRAY is passed, each of its elements corresponds to a property value while the element index corresponds to the property ID. If the element value is NULL, the corresponding property will be set to the system default value. To assign default values to all properties, just pass a VARIANT of the VT_NULL or VT_EMPTY type.
bsDisplatMask, bsEditingMask - mask expressions for display and editing modes correspondingly. If a NULL value is passed, the default mask is generated according to system settings.
The following C++/MFC code snippet demonstrates how to configure the formatter at run-time. // Custom display mask LPCTSTR lpcwszDisplayMask = _T("It is positive number: \\(+(###,)##0.00(#)\\);") \ _T("It is negative number: \\(-(###,)##0.00(#)\\);") \ _T("It is positive zero: \\(0.00\\);") \ _T("It is negative zero: \\(0.00\\);") \ _T("It is positive infinity: \\(\\+INF\\);") \ _T("It is negative infinity: \\(\\-INF\\);") \ _T("It is quiet not-a-number: \\(QNaN\\);") \ _T("It is signaling not-a-number: \\(SNaN\\);") \ _T("This is NULL"); // Allocate safe array with bounds corresponding to the range of // property ID const long lMin = CNumericeditbox::fpidWhiteSpace; const long lMax = CNumericeditbox::fpidPositivePattern; long rgIndices[1]; SAFEARRAYBOUND rgsabound[1]; rgsabound[0].lLbound = lMin; rgsabound[0].cElements = static_cast<ULONG>(lMax - lMin + 1); COleSafeArray arrDisplayFmtProps; arrDisplayFmtProps.Create(VT_BSTR, 1, rgsabound); // Change fpidNull property CComBSTR cbsValue = L"null"; rgIndices[0] = long(CNumericeditbox::fpidNull); arrDisplayFmtProps.PutElement(rgIndices, BSTR(cbsValue)); // Change fpidGrouping property cbsValue = L"3;2;0"; rgIndices[0] = long(CNumericeditbox::fpidGrouping); arrDisplayFmtProps.PutElement(rgIndices, BSTR(cbsValue)); // Change fpidGroupSeparator property cbsValue = L"'"; rgIndices[0] = long(CNumericeditbox::fpidGroupSeparator); arrDisplayFmtProps.PutElement(rgIndices, BSTR(cbsValue)); // Update formatter parameters CFormatter fmt = m_nedit.get_Formatter(); fmt.Configure(CNumericeditbox::vtDouble, CNumericeditbox::ftNumeric, COleVariant(arrDisplayFmtProps), COleVariant(), lpcwszDisplayMask, NULL); Conclusion This component is free, so please try it. I hope you'll find it useful. Please let me know about bugs and other problems if you find any. Enjoy! History • 02/15/2005. Version 1.0 beta release. • 03/24/2005. Version 1.0 release. New formatting library is used with the control. 
Provided range checks for the values during input. • 08/16/2005. Version 1.2 alpha release. Formatting library has been redesigned. Some bugs have been fixed. • 10/31/2005. Version 1.2 release. Some bugs found in the alpha version have been fixed. Source code has been restructured.
Is there a name for matrices where the first non-zero entry of each row is the last non-zero entry of its column? For example this matrix: $$\begin{pmatrix} 0 & \bf 1 & 0 & 1 \\ \bf 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & \bf 1 \\ 0 & 0 & \bf 1 & 0 \\ \end{pmatrix}$$ Like row echelon form, they allow you to quickly determine the rank of a matrix, but I wasn't able to find what they are called (if they have a name).
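To see why the rank can be read off directly: if row $i$ has its first nonzero entry in column $c$, then no other row can also lead in column $c$ (a later row leading there would contradict that entry being the last nonzero one in the column), every later row is zero in column $c$ for the same reason, and every row whose leading column is larger than $c$ is zero in column $c$ because $c$ lies before its first nonzero entry. Sorting the rows by leading column therefore produces a row echelon matrix, so the rank equals the number of nonzero rows. For the example matrix:

```latex
% Reordering the rows as (r_2, r_1, r_4, r_3), by the column of their leading entry:
\begin{pmatrix}
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix}
\;\xrightarrow{\;(r_2,\, r_1,\, r_4,\, r_3)\;}\;
\begin{pmatrix}
1 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad \operatorname{rank} = 4 .
```

So every matrix with this property is a row permutation of a row echelon matrix (though not every row permutation of an echelon matrix has the property).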
contactRelatedListLWC.js

import { LightningElement, track, api } from 'lwc';
import getContactListApex from '@salesforce/apex/ContactController.getContacts';

const columns = [
    { label: 'First Name', fieldName: 'FirstName' }
];

export default class ContactRelatedListLWC extends LightningElement {
    @track data = [{'FirstName' : 'John'}];
    @track columns = columns;

    connectedCallback() {
        this.getContactListJS();
    }

    getContactListJS(){
        getContactListApex().then(result => {
            [].concat(result).forEach((obj) => {
                let tempContactObj = {};
                tempContactObj.FirstName = obj.FirstName + 'aaa';
                this.data.push(tempContactObj);
                console.log(this.data.length); // This prints the actual length as expected, but only one record is displayed in the UI.
            });
        }).catch(error => {
            this.error = error;
        });
    }
}

Apex class:

public class ContactController {
    @AuraEnabled
    public static List<Contact> getContacts(){
        return [SELECT Id, FirstName FROM Contact LIMIT 5];
    }
}

contactRelatedListLWC.html:

<template>
    <div style="height: 300px;">
        <lightning-datatable key-field="FirstName"
                             data={data}
                             columns={columns}>
        </lightning-datatable>
    </div>
</template>

I am querying contacts in the Apex controller and passing the value to the LWC, and in the LWC I am changing the first name and appending some dummy text. Records returned from the server are not displayed; only the record that I hard-coded in JS is displayed. What is wrong with my approach here? NOTE: My question might seem like a duplicate of this (How to change wired data list after get data from imperative apex method) but I have gone through it and this question is not about changing the value of the response returned from the server call.

2 Answers

Currently lightning-datatable has a bug: it does not rerender when an object in the data array changes or when data is modified via push, so you need to explicitly reassign data for the datatable to rerender.
For a better understanding, you can check this playground link. Try below:

getContactListJS() {
    getContactListApex().then(result => {
        this.data = [...result].map(record => {
            record.FirstName = record.FirstName + 'aaa';
            return record;
        });
    }).catch(error => {
        this.error = error;
    });
}

To add to @salesforce-sas's answer, I'm not sure I would classify this as a bug in datatable, per se, but rather how LWC's tracked and api properties work in general. If you have an object or array as a property and you only update the content, then LWC doesn't notice the changes and does not trigger re-rendering (or wire calls etc.). There are two different solutions to this, depending on the use case: 1. Explicitly replace the tracked or api property value, as is the case in salesforce-sas's answer, with the updated content. 2. If you are using an object (not an array, so this doesn't apply in this question's scenario) then you can explicitly initialize the object in its declaration so the nested properties to be tracked (or made "reactive") are enumerated. By doing the latter you inform the LWC compiler that you want changes to the declared nested properties (and only those ones) to be considered as reactive. The first solution, which is applicable here, has been covered in the previous answer. The second solution (relevant in other use cases) looks like the following. In the LWC, declare the tracked/api property with a dummy object containing the reactive sub-properties:

@track data = {
    example1: undefined,
    example2: "some default"
}

In the update code, just assign to the sub-properties:

this.data.example1 = result.data.something;

Use of the this.data property or its content elsewhere, such as in the template or in parameters to wired services etc., will then operate as you expect.
On the other hand, if you omit the initialization with a dummy (or miss out explicit declaration of a sub-property in that dummy in the property declaration) then the component won't react to changes to the relevant sub-properties within this property. Of course, you can always fall back on solution 1. 2 • 1 It is actually a bug only, Salesforce core support has accepted it and working to resolve it. Pls check the playground link I gave in my answer and see that although html is rerendered, datable does not. Go to js and uncomment a line by checking comment and then even datatable starts rerendering Commented Sep 25, 2019 at 10:39 • Thanks for posting the other possible solution. – javanoob Commented Sep 25, 2019 at 12:41 You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged .
__label__pos
0.972268
  Show Menu TOPICS× SRP and UGC Essentials Introduction If unfamiliar with the storage resource provider (SRP) and its relationship to user-generated content (UGC), visit Community Content Storage and Storage Resource Provider Overview . This section of the documentation provides some essential information about SRP and UGC. StorageResourceProvider API The SocialResourceProvider API (SRP API) is an extension of various Sling Resource Provider APIs. It includes support for pagination and atomic increment (useful for tally and scoring). Queries are necessary for SCF components as there is the need to sort by date, helpfulness, number of votes, and so on. All SRP options have flexible query mechanisms which do not rely on bucketing. The SRP storage location incorporates the component path. The SRP API should always be used to access UGC as the root path depends on the SRP option selected, such as ASRP, MSRP, or JSRP. The SRP API is not an abstract class, it is an interface. A custom implementation should not be undertaken lightly, as the benefits of future improvements to internal implementations would be missed when upgrading to a new release. The means for using the SRP API are through provided utilities, such as those found in the SocialResourceUtilities package. When upgrading from AEM 6.0 or earlier, it will be necessary to migrate UGC for all SRPs, for which an Open Source tool is available. See Upgrading to AEM Communities 6.3 . Historically, utilities for accessing UGC were found in the SocialUtils package, which no longer exists. For replacement utilities, see SocialUtils Refactoring . Utility Method to Access UGC To access UGC, use a method from the SocialResourceUtilities package that returns a path suitable for accessing UGC from SRP and replaces the deprecated method found in the SocialUtils package. 
Following is a minimal example of using the resourceToUGCStoragePath() method in a servlet: import com.adobe.cq.social.srp.utilities.api.SocialResourceUtilities; @Reference private SocialResourceUtilities socialResourceUtilities; @Override protected void doGet(final SlingHttpServletRequest request, final SlingHttpServletResponse response) throws ServletException, IOException { String ugcPath = socialResourceUtilities.resourceToUGCStoragePath(request.getResource()); // rest of servlet } For other SocialUtils replacements, see SocialUtils Refactoring . For coding guidelines, visit Accessing UGC with SRP . The path resourceToUGCStoragePath() returns is not suitable for ACL checking . Utility Method to Access ACLs Some SRP implementations, such as ASRP and MSRP, store community content in databases which provide no ACL verification. Shadow nodes provide a location in the local repository to which ACLs can be applied. Using the SRP API, all SRP options perform the same check of the shadow location prior to all CRUD operations. To check ACLs, use a method that returns a path suitable for checking the permissions applied to the resource's UGC. Following is a simple example of using the resourceToACLPath() method in a servlet: import com.adobe.cq.social.srp.utilities.api.SocialResourceUtilities; @Reference private SocialResourceUtilities socialResourceUtilities; @Override protected void doGet(final SlingHttpServletRequest request, final SlingHttpServletResponse response) throws ServletException, IOException { String aclPath = socialResourceUtilities.resourceToACLPath(request.getResource()); // rest of servlet } The path returned by resourceToACLPath() is not suitable for accessing the UGC itself.
__label__pos
0.519047
% Demo to create a movie file from a Gaussian and then optionally save it to disk as an avi video file. %============================================================================================== % Initialization code clear all; clc; workspace; numberOfFrames = 61; x1d = linspace(-3, 3, numberOfFrames); y1d = x1d; t = linspace(0, 5, numberOfFrames); hFigure = figure; %============================================================================================== % Set up the movie structure. % Preallocate movie, which will be an array of structures. % First get a cell array with all the frames. allTheFrames = cell(numberOfFrames,1); vidHeight = 344; vidWidth = 446; allTheFrames(:) = {zeros(vidHeight, vidWidth, 3, 'uint8')}; % Next get a cell array with all the colormaps. allTheColorMaps = cell(numberOfFrames,1); allTheColorMaps(:) = {zeros(256, 3)}; % Now combine these to make the array of structures. myMovie = struct('cdata', allTheFrames, 'colormap', allTheColorMaps); % Create a VideoWriter object to write the video out to a new, different file. % writerObj = VideoWriter('problem_3.avi'); % open(writerObj); % Need to change from the default renderer to zbuffer to get it to work right. % openGL doesn't work and Painters is way too slow. set(gcf, 'renderer', 'zbuffer'); %============================================================================================== % Create the movie. % Get a list of x and y coordinates for every pixel in the x-y plane. [x, y] = meshgrid(x1d, y1d); % After this loop starts, BE SURE NOT TO RESIZE THE WINDOW AS IT'S SHOWING THE FRAMES, or else you won't be able to save it. for frameIndex = 1 : numberOfFrames z = exp(-(x-t(frameIndex)).^2-(y-t(frameIndex)).^2); cla reset; % Enlarge figure to full screen. 
% set(gcf, 'Units', 'Normalized', 'Outerposition', [0, 0, 1, 1]); surf(x,y,z); axis('tight') zlim([0, 1]); caption = sprintf('Frame #%d of %d, t = %.1f', frameIndex, numberOfFrames, t(frameIndex)); title(caption, 'FontSize', 15); drawnow; thisFrame = getframe(gca); % Write this frame out to a new video file. % writeVideo(writerObj, thisFrame); myMovie(frameIndex) = thisFrame; end % close(writerObj); %============================================================================================== % See if they want to replay the movie. message = sprintf('Done creating movie\nDo you want to play it?'); button = questdlg(message, 'Continue?', 'Yes', 'No', 'Yes'); drawnow; % Refresh screen to get rid of dialog box remnants. close(hFigure); if strcmpi(button, 'Yes') hFigure = figure; % Enlarge figure to full screen. % set(gcf, 'Units', 'Normalized', 'Outerposition', [0, 0, 1, 1]); title('Playing the movie we created', 'FontSize', 15); % Get rid of extra set of axes that it makes for some reason. axis off; % Play the movie. movie(myMovie); close(hFigure); end %============================================================================================== % See if they want to save the movie to an avi file on disk. promptMessage = sprintf('Do you want to save this movie to disk?'); titleBarCaption = 'Continue?'; button = questdlg(promptMessage, titleBarCaption, 'Yes', 'No', 'Yes'); if strcmpi(button, 'yes') % Get the name of the file that the user wants to save. % Note, if you're saving an image you can use imsave() instead of uiputfile(). startingFolder = pwd; defaultFileName = {'*.avi';'*.mp4';'*.mj2'}; %fullfile(startingFolder, '*.avi'); [baseFileName, folder] = uiputfile(defaultFileName, 'Specify a file'); if baseFileName == 0 % User clicked the Cancel button. return; end fullFileName = fullfile(folder, baseFileName); % Create a video writer object with that file name. % The VideoWriter object must have a profile input argument, otherwise you get jpg. 
% Determine the format the user specified: [folder, baseFileName, ext] = fileparts(fullFileName); switch lower(ext) case '.jp2' profile = 'Archival'; case '.mp4' profile = 'MPEG-4'; otherwise % Either avi or some other invalid extension. profile = 'Uncompressed AVI'; end writerObj = VideoWriter(fullFileName, profile); open(writerObj); % Write out all the frames. numberOfFrames = length(myMovie); for frameNumber = 1 : numberOfFrames writeVideo(writerObj, myMovie(frameNumber)); end close(writerObj); % Display the current folder panel so they can see their newly created file. cd(folder); filebrowser; message = sprintf('Finished creating movie file\n %s.\n\nDone with demo!', fullFileName); uiwait(helpdlg(message)); else uiwait(helpdlg('Done with demo!')); end
berdario/dummyhttp.py — gist by @berdario, last active Aug 29, 2015

#! /usr/bin/env python
# A minimal HTTP/1.1 file server: GET returns a file's contents (or a
# plain-text directory listing), HEAD returns headers only. Requires the
# third-party python-magic module for content-type detection.
import sys
import os
import locale
from collections import defaultdict
from socket import socket, AF_INET, SOCK_STREAM
from urllib.parse import urlparse
from contextlib import closing
from functools import wraps

import magic

args = dict(enumerate(sys.argv))
encoding_guesser = magic.Magic(mime_encoding=True)
curdir = os.path.abspath('.')
ENCODING = locale.getlocale()[1]
HTTP_METHODS = frozenset(['OPTIONS', 'GET', 'HEAD', 'POST', 'PUT',
                          'DELETE', 'TRACE', 'CONNECT'])
HTTP_VERSION = 'HTTP/1.1'


def parse_path(f):
    # Decorator: normalize the request target to an absolute path confined
    # to the directory the server was started in.
    @wraps(f)
    def handler(resource, headers):
        if not resource.startswith('/'):
            resource = urlparse(resource).path or '/'
        resource = os.path.abspath(resource[1:] or '.')
        if not resource.startswith(curdir):
            resource = curdir
        return f(resource, headers)
    return handler


def guess_type(path):
    typ = magic.from_file(path, mime=True)
    if typ.startswith('text'):
        return typ + "; charset=" + encoding_guesser.from_file(path)
    else:
        return typ


def file_browser(path):
    headers = {'Connection': 'close'}
    if not os.path.exists(path):
        return (None, headers)
    elif os.path.isdir(path):
        content_reader = lambda: "\n".join(
            p for p in os.listdir(path) if not p.startswith('.'))
        headers.update({'Content-Type': 'text/plain; charset=UTF-8'})
        return (content_reader, headers)
    else:
        headers.update({'Content-Type': guess_type(path)})

        def content_reader():
            # latin-1 round-trips arbitrary bytes through str unchanged.
            with open(path, 'rb') as f:
                return f.read().decode('iso-8859-1')
        return (content_reader, headers)


def build_response(resource, headers, handler):
    content, response_headers = file_browser(resource)
    if content is None:
        return HTTP_VERSION + " 404 File Not Found\r\n\r\n"
    else:
        status_line = " ".join([HTTP_VERSION, '202', 'Accepted'])
        headers_bytes = "\r\n".join(map(':'.join, response_headers.items())) + '\r\n'
        return handler(status_line, headers_bytes, content)


@parse_path
def handle_GET(resource, headers):
    def handler(status_line, headers_bytes, content_reader):
        return "\r\n".join([status_line, headers_bytes, content_reader()])
    return build_response(resource, headers, handler)


@parse_path
def handle_HEAD(resource, headers):
    def handler(status_line, headers_bytes, content):
        return "\r\n".join([status_line, headers_bytes, ''])
    return build_response(resource, headers, handler)


def missing_method(resource, headers):
    return HTTP_VERSION + " 405 Method Not Allowed\r\n\r\n"


methodmap = defaultdict(lambda: missing_method)
methodmap.update({'GET': handle_GET, 'HEAD': handle_HEAD})


def handle_client(sock):
    # recv() yields bytes: accumulate until the blank line ending the headers.
    data = sock.recv(1000)
    while not data.endswith(b"\r\n\r\n"):
        newdata = sock.recv(1000)
        if not newdata:
            return
        data += newdata
    headers = data.decode('iso-8859-1').strip().splitlines()
    request, headers = headers[0], headers[1:]
    headers = dict(s.split(':', 1) for s in headers)
    try:
        method, resource, http_version = request.split()
    except ValueError:
        response = HTTP_VERSION + " 400 Bad Request\r\n\r\n"
    else:
        if method not in HTTP_METHODS:
            response = HTTP_VERSION + " 501 Not Implemented\r\n\r\n"
        else:
            response = methodmap[method](resource, headers)
    sock.sendall(response.encode('iso-8859-1'))


if __name__ == '__main__':
    sock = socket(AF_INET, SOCK_STREAM)
    sock.bind(('', int(args.get(1, 8080))))
    sock.listen(1000)
    while True:
        with closing(sock.accept()[0]) as newsock:
            handle_client(newsock)
# # Hints for the Cray XT4 Catamount/Qk system: # cross-compilation host is a SuSE x86_64-linux, # execution at the target with the 'yod' utility, # linux.sh will run this hints file when necessary. # # cc.sh: compiles the code with the cross-compiler, patches main/exit/_exit # (and traps signals) to be wrappers that echo the exit code. # # run.sh: runs the executable with yod and collects the exit status, # and exits with that status. # # You probably should do the compilation in non-Lustre filesystem # because Lustre does not support all the POSIX system calls, which may # cause weird errors during the Perl build: # 1182003549.604836:3-24:(super.c:1516:llu_iop_fcntl()): unsupported fcntl cmd 2 # # As of 2007-Sep (pre-5.10) miniperl, libperl.a, and perl can be successfully # built; no extensions are built. It would be hard since Perl cannot run # anything external (pipes, system(), backticks or fork/exec, or globbing) # (which breaks MakeMaker, and confuses ext/util/make_ext). # # To build: # # sh Configure -des # make perl # # "make install" won't work since it assumes file globbing (see above). # You can try the following manual way: # # mkdir -p /opt/perl-catamount # mkdir -p /opt/perl-catamount/include # mkdir -p /opt/perl-catamount/lib # mkdir -p /opt/perl-catamount/lib/perl5/5.16.1 # mkdir -p /opt/perl-catamount/bin # cp *.h /opt/perl-catamount/include # cp libperl.a /opt/perl-catamount/lib # cp -pr lib/* /opt/perl-catamount/lib/perl5/5.16.1 # cp miniperl perl run.sh cc.sh /opt/perl-catamount/lib # # With the headers and the libperl.a you can embed Perl to your Catamount # application, see pod/perlembed.pod. You can do for example: # # cc -I/opt/perl-catamount/include -L/opt/perl-catamount/lib -o embed embed.c # yod -sz 1 ./embed -le 'print sqrt(2)' # # You might want to have the run.sh execution wrapper around (it gets created # in the Perl build directory) if you want to run the miniperl or perl in # the XT4. 
It collects the exit status (note that yod is run with "-sz 1", # so only one instance is run), and possible crash status (bare yod does # not collect the exit status). For example: # # sh /opt/perl-catamount/bin/run.sh /opt/perl-catamount/bin/perl -le 'print 42' # # or if you are still in the build directory: # # sh run.sh ./perl -le 'print 2*3*7' # # The cc.sh is a wrapper for the Catamount cc used when building Perl # (and before that, when running Configure), it arranges for the main() # exit(), _exit() to be wrapped so that the exit/crash status can be # collected (by run.sh). # case "$prefix" in '') prefix=/opt/perl-catamount ;; esac cat >&4 <<__EOF1__ *** *** You seem to be compiling in Linux for the Catamount/Qk environment. *** I'm therefore not going to install perl as /usr/bin/perl. *** Perl will be installed under $prefix. *** __EOF1__ archname='x86_64-catamount' archobjs='catalib.o' d_mmap='undef' d_setlocale='undef' # There is setlocale() but no locales. d_vprintf='define' hintfile='catamount' i_arpainet='undef' i_db='undef' i_netdb='undef' i_niin='undef' incpth=' ' installusrbinperl='undef' libswanted="m crypt c" libpth=' ' locincpth=' ' nonxs_ext=' ' osname='catamount' procselfexe='undef' static_ext=' ' usedl='undef' useithreads='undef' uselargefiles='define' usenm='undef' usethreads='undef' use64bitall='define' BUILD=$PWD case "`yod -Version 2>&1`" in Red*) ;; # E.g. "Red Storm Protocol Release 2.1.0" *) echo >&4 "Could not find 'yod', aborting." exit 1 ;; esac run=$BUILD/run.sh cat > $run <<'__EOF2__' #!/bin/sh # # $run # yod -sz 1 "$@" 2> .yod$$e > .yod$$o status=`awk '/^cata: exe .* pid [0-9][0-9]* (main|exit|_exit) [0-9][0-9]*$/ {print $NF}' .yod$$o|tail -1` grep -v "sz is 1" .yod$$e grep -v "^cata: exe .* pid [0-9][0-9]* " .yod$$o grep "^cata: exe .* signal " .yod$$o rm -f .yod$$o .yod$$e exit $status __EOF2__ chmod 755 $run case "`cc -V 2>&1`" in *catamount*) ;; # E.g. 
"/opt/xt-pe/1.5.41/bin/snos64/cc: INFO: catamount target is being used" *) echo "Could not find 'cc' for catamount, aborting." exit 1 ;; esac cc=$BUILD/cc.sh cat > $cc <<__EOF3a__ #!/bin/sh # # $0 # # This is essentially a frontend driver for the Catamount cc. # We arrange for # (1) the main(), exit(), _exit() being wrapped (cpp-defined) # catamain(), cataexit(), and _cataexit() # (2) the actual main() etc. are in cata.c, and cata*.o are # linked in when needed # (3) signals being caught # All this mostly for being able to catch the exit status (or crash cause). # argv='' srco='' srct='' exe='' defs='-Dmain=catamain -Dexit=cataexit -D_exit=_cataexit' argv='' BUILD=$BUILD __EOF3a__ cat >> $cc <<'__EOF3b__' case "$1" in --cata_o) ;; *) if test ! -f catalib.o then if test ! -f catalib.c then if test -f ../catalib.c # If compiling in UU during Configure. then cp ../catalib.c catalib.c cp ../catamain.c catamain.c cp ../cata.h cata.h fi fi $0 --cata_o -c catalib.c || exit 1 $0 --cata_o -c catamain.c || exit 1 fi ;; esac while test $# -ne 0 do i=$1 shift case "$i" in --cata_o) ;; *.c) argv="$argv $defs" defs="" if test ! -f $i then echo "$0: $i: No such file or directory" exit 1 fi j=$i$$.c rm -f $j if grep -q -s '#include "cata.h"' $i then : else cat >>$j<<__EOF4__ #include "cata.h" # 1 "$i" __EOF4__ fi cat $i >>$j if grep -q -s 'int main()' $i then argv="$argv -Dmain0" else if grep -q -s 'int main([^,]*,[^,]*)' $i then argv="$argv -Dmain2" else if grep -q -s 'int main([^,]*,[^,]*,[^,]*)' $i then argv="$argv -Dmain3" fi fi fi argv="$argv $j" srct="$j" srco="$i" ;; *.o) if test ! 
-f "$i" then c=$(echo $i|sed 's/\.o$/.c/') $0 -c $c || exit 1 fi argv="$argv $i" ;; -o) exe="$1" argv="$argv -o $exe -Dargv0=$exe" shift ;; *) argv="$argv $i" ;; esac done case "$exe" in '') ;; *) case "$argv" in *catalib.o*|*" perlmain.o "*) ;; *) argv="$argv catalib.o" ;; esac case "$argv" in *catamain.o*) ;; *) argv="$argv catamain.o" ;; esac ;; esac cc -I$BUILD $argv 2> .cc$$e > .cc$$o status=$? egrep -v 'catamount target|'$$'\.c:$' .cc$$e 1>&2 case "`grep "is not implemented" .cc$$e`" in *"will always fail"*) status=1 ;; esac cat .cc$$o rm -f .cc$$o case "$status" in 0) rm -f .cc$$e $srct ;; esac objt=`echo $srct|sed -e 's/\.c$/.o/'` objo=`echo $srco|sed -e 's/\.c$/.o/'` if test -n "$objt" -a -f "$objt" then mv -f $objt $objo fi exit $status __EOF3b__ chmod 755 $cc cat >cata.h<<__EOF6__ #ifndef CATA_H #define CATA_H void cataexit(int status); void _cataexit(int status); void catasigsetup(); void catasighandle(int signum); #ifdef main0 int catamain(); #else #ifdef main2 int main(int argc, char **argv); #else int main(int argc, char **argv, char **env); #endif #endif #endif #ifdef argv0 #define ARGV0 STRINGIFY(argv0) #else #define ARGV0 argv0 #endif __EOF6__ cat >catalib.c<<__EOF7__ #include #include #undef printf #undef main #undef exit #undef _exit #include "cata.h" char* argv0; void cataexit(int status) { printf("cata: exe %s pid %d exit %d\n", ARGV0, getpid(), status); exit(status); } void _cataexit(int status) { printf("cata: exe %s pid %d _exit %d\n", ARGV0, getpid(), status); _exit(status); } void catasighandle(int signum) { int core = 0; printf("cata: exe %s pid %d signal %d\n", ARGV0, getpid(), signum); switch (signum) { case SIGQUIT: case SIGILL: case SIGTRAP: case SIGABRT: case SIGBUS: case SIGSEGV: case SIGXCPU: case SIGXFSZ: core = 0200; break; default: break; } cataexit(core << 8 | signum); } void catasigsetup() { signal(SIGHUP, catasighandle); signal(SIGINT, catasighandle); signal(SIGQUIT, catasighandle); signal(SIGILL, catasighandle); 
signal(SIGTRAP, catasighandle); signal(SIGABRT, catasighandle); signal(SIGIOT, catasighandle); /* KILL */ signal(SIGBUS, catasighandle); signal(SIGFPE, catasighandle); signal(SIGUSR1, catasighandle); signal(SIGUSR2, catasighandle); signal(SIGSEGV, catasighandle); signal(SIGPIPE, catasighandle); signal(SIGALRM, catasighandle); signal(SIGTERM, catasighandle); signal(SIGSTKFLT, catasighandle); signal(SIGCHLD, catasighandle); signal(SIGCONT, catasighandle); /* STOP */ signal(SIGTSTP, catasighandle); signal(SIGTTIN, catasighandle); signal(SIGTTOU, catasighandle); signal(SIGURG, catasighandle); signal(SIGXCPU, catasighandle); signal(SIGXFSZ, catasighandle); signal(SIGVTALRM, catasighandle); signal(SIGPROF, catasighandle); signal(SIGWINCH, catasighandle); signal(SIGIO, catasighandle); signal(SIGPWR, catasighandle); signal(SIGSYS, catasighandle); } void boot_DynaLoader (void* cv) { } __EOF7__ cat >catamain.c<<__EOF8__ #include #undef printf #undef main #undef exit #undef _exit #include "cata.h" extern char* argv0; int main(int argc, char *argv[], char *envv[]) { int status; #ifndef argv0 argv0 = argv[0]; #endif catasigsetup(); status = #ifdef main0 catamain(); #else #ifdef main2 catamain(argc, argv); #else catamain(argc, argv, envv); #endif #endif printf("cata: exe %s pid %d main %d\n", ARGV0, getpid(), status); return status; } __EOF8__ echo "Faking DynaLoader" touch DynaLoader.o # Oh, the agony. # That's it.
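The exit-status plumbing that run.sh performs with its awk/tail pipeline — picking out the last "cata: exe ... pid ... (main|exit|_exit) <status>" line that the wrapped main()/exit() printed — can be mirrored in Python. This is an illustrative sketch; the function name is mine, not part of the hints file:

```python
import re

def yod_exit_status(output):
    """Return the wrapped exit status embedded in yod output.

    run.sh keeps only the last line of the form
        cata: exe <name> pid <pid> (main|exit|_exit) <status>
    and exits with the trailing number; None means no status line was seen.
    """
    status = None
    for line in output.splitlines():
        m = re.fullmatch(r"cata: exe \S+ pid \d+ (?:main|exit|_exit) (\d+)", line)
        if m:
            status = int(m.group(1))  # keep the last match, like `tail -1`
    return status

print(yod_exit_status("sz is 1\ncata: exe ./perl pid 123 main 0\n"))  # → 0
```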
Problem 11, page 57, “Tài liệu dạy – học toán 6” (Teaching and Learning Materials, Mathematics 6), Volume 2

Problem

Find two fractions such that:
a) their sum equals their product;
b) their difference equals their product.

Detailed solution

a) The fractions \({7 \over 3}\) and \({7 \over 4}\) satisfy:

\({7 \over 3} + {7 \over 4} = {{7 \cdot 4 + 7 \cdot 3} \over {12}} = {{49} \over {12}};\quad {7 \over 3} \cdot {7 \over 4} = {{7 \cdot 7} \over {3 \cdot 4}} = {{49} \over {12}}\)

Hence \({7 \over 3} + {7 \over 4} = {7 \over 3} \cdot {7 \over 4}\).

In general: \({a \over b} + {a \over {a - b}} = {a \over b} \cdot {a \over {a - b}}\) (with \(a, b \in Z\), \(a \ne b\), \(b \ne 0\)).

b) The fractions \({9 \over 4}\) and \({9 \over {13}}\) satisfy:

\({9 \over 4} - {9 \over {13}} = {{9 \cdot 13 - 9 \cdot 4} \over {4 \cdot 13}} = {{81} \over {52}};\quad {9 \over 4} \cdot {9 \over {13}} = {{9 \cdot 9} \over {4 \cdot 13}} = {{81} \over {52}}\)

Hence \({9 \over 4} - {9 \over {13}} = {9 \over 4} \cdot {9 \over {13}}\).

In general: \({a \over b} - {a \over {a + b}} = {a \over b} \cdot {a \over {a + b}}\) (with \(a, b \in Z\), \(b \ne 0\), \(a \ne -b\)).
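These identities can be checked mechanically with exact rational arithmetic. A quick sketch using Python's stdlib fractions module (the helper names are mine, for illustration only):

```python
from fractions import Fraction

def sum_equals_product(a, b):
    # Checks the general identity a/b + a/(a-b) == (a/b) * (a/(a-b)),
    # valid for integers with b != 0 and a != b.
    x, y = Fraction(a, b), Fraction(a, a - b)
    return x + y == x * y

def diff_equals_product(a, b):
    # Checks a/b - a/(a+b) == (a/b) * (a/(a+b)), valid for b != 0, a != -b.
    x, y = Fraction(a, b), Fraction(a, a + b)
    return x - y == x * y

# The worked examples from the text:
print(sum_equals_product(7, 3))   # True: 7/3 + 7/4 == 7/3 * 7/4 == 49/12
print(diff_equals_product(9, 4))  # True: 9/4 - 9/13 == 9/4 * 9/13 == 81/52
```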
Network Working Group                                           N. Freed
Request for Comments: 2231                                      Innosoft
Obsoletes: 2184                                                 K. Moore
Updates: 2045, 2047, 2183                       University of Tennessee
Category: Standards Track                                  November 1997

MIME Parameter Value and Encoded Word Extensions: Character Sets, Languages, and Continuations

Status of this Memo

This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the “Internet Official Protocol Standards” (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.

Copyright Notice

Copyright © The Internet Society (1997). All Rights Reserved.

1. Abstract

This memo defines extensions to the RFC 2045 media type and RFC 2183 disposition parameter value mechanisms to provide

1. a means to specify parameter values in character sets other than US-ASCII,
2. to specify the language to be used should the value be displayed, and
3. a continuation mechanism for long parameter values to avoid problems with header line wrapping.

This memo also defines an extension to the encoded words defined in RFC 2047 to allow the specification of the language to be used for display as well as the character set.

2. Introduction

The Multipurpose Internet Mail Extensions, or MIME [RFC-2045, RFC-2046, RFC-2047, RFC-2048, RFC-2049], define a message format that allows for:

1. textual message bodies in character sets other than US-ASCII,
2. non-textual message bodies,
3. multi-part message bodies, and
4. textual header information in character sets other than US-ASCII.

MIME is now widely deployed and is used by a variety of Internet protocols, including, of course, Internet email. However, MIME's success has resulted in the need for additional mechanisms that were not provided in the original protocol specification.
In particular, existing MIME mechanisms provide for named media type (content-type field) parameters as well as named disposition (content-disposition field). A MIME media type may specify any number of parameters associated with all of its subtypes, and any specific subtype may specify additional parameters for its own use. A MIME disposition value may specify any number of associated parameters, the most important of which is probably the attachment disposition's filename parameter. These parameter names and values end up appearing in the content-type and content-disposition header fields in Internet email. This inherently imposes three crucial limitations: 1. Lines in Internet email header fields are folded according to RFC 822 folding rules. This makes long parameter values problematic. 2. MIME headers, like the RFC 822 headers they often appear in, are limited to 7bit US-ASCII, and the encoded-word mechanisms of RFC 2047 are not available to parameter values. This makes it impossible to have parameter values in character sets other than US-ASCII without specifying some sort of private per-parameter encoding. 3. It has recently become clear that character set information is not sufficient to properly display some sorts of information -- language information is also needed [RFC-2130]. For example, support for handicapped users may require reading text string aloud. The language the text is written in is needed for this to be done correctly. Some parameter values may need to be displayed, hence there is a need to allow for the inclusion of language information. The last problem on this list is also an issue for the encoded words defined by RFC 2047, as encoded words are intended primarily for display purposes. This document defines extensions that address all of these limitations. All of these extensions are implemented in a fashion that is completely compatible at a syntactic level with existing MIME implementations. 
In addition, the extensions are designed to have as little impact as possible on existing uses of MIME. IMPORTANT NOTE: These mechanisms end up being somewhat gibbous when they actually are used. As such, these mechanisms should not be used lightly; they should be reserved for situations where a real need for them exists. 2.1. Requirements notation This document occasionally uses terms that appear in capital letters. When the terms "MUST", "SHOULD", "MUST NOT", "SHOULD NOT", and "MAY" appear capitalized, they are being used to indicate particular requirements of this specification. A discussion of the meanings of these terms appears in [RFC-2119]. 3. Parameter Value Continuations Long MIME media type or disposition parameter values do not interact well with header line wrapping conventions. In particular, proper header line wrapping depends on there being places where linear whitespace (LWSP) is allowed, which may or may not be present in a parameter value, and even if present may not be recognizable as such since specific knowledge of parameter value syntax may not be available to the agent doing the line wrapping. The result is that long parameter values may end up getting truncated or otherwise damaged by incorrect line wrapping implementations. A mechanism is therefore needed to break up parameter values into smaller units that are amenable to line wrapping. Any such mechanism MUST be compatible with existing MIME processors. This means that 1. the mechanism MUST NOT change the syntax of MIME media type and disposition lines, and 2. the mechanism MUST NOT depend on parameter ordering since MIME states that parameters are not order sensitive. Note that while MIME does prohibit modification of MIME headers during transport, it is still possible that parameters will be reordered when user agent level processing is done. 
The obvious solution, then, is to use multiple parameters to contain a single parameter value and to use some kind of distinguished name to indicate when this is being done. And this obvious solution is exactly what is specified here: The asterisk character ("*") followed by a decimal count is employed to indicate that multiple parameters are being used to encapsulate a single parameter value. The count starts at 0 and increments by 1 for each subsequent section of the parameter value. Decimal values are used and neither leading zeroes nor gaps in the sequence are allowed. The original parameter value is recovered by concatenating the various sections of the parameter, in order. For example, the content-type field Content-Type: message/external-body; access-type=URL; URL*0="ftp://"; URL*1="cs.utk.edu/pub/moore/bulk-mailer/bulk-mailer.tar" is semantically identical to Content-Type: message/external-body; access-type=URL; URL="ftp://cs.utk.edu/pub/moore/bulk-mailer/bulk-mailer.tar" Note that quotes around parameter values are part of the value syntax; they are NOT part of the value itself. Furthermore, it is explicitly permitted to have a mixture of quoted and unquoted continuation fields. 4. Parameter Value Character Set and Language Information Some parameter values may need to be qualified with character set or language information. It is clear that a distinguished parameter name is needed to identify when this information is present along with a specific syntax for the information in the value itself. In addition, a lightweight encoding mechanism is needed to accommodate 8 bit information in parameter values. Asterisks ("*") are reused to provide the indicator that language and character set information is present and encoding is being used. A single quote ("'") is used to delimit the character set and language information at the beginning of the parameter value. Percent signs ("%") are used as the encoding flag, which agrees with RFC 2047. 
Specifically, an asterisk at the end of a parameter name acts as an indicator that character set and language information may appear at the beginning of the parameter value. A single quote is used to separate the character set, language, and actual value information in the parameter value string, and an percent sign is used to flag octets encoded in hexadecimal. For example: Content-Type: application/x-stuff; title*=us-ascii'en-us'This%20is%20%2A%2A%2Afun%2A%2A%2A Note that it is perfectly permissible to leave either the character set or language field blank. Note also that the single quote delimiters MUST be present even when one of the field values is omitted. This is done when either character set, language, or both are not relevant to the parameter value at hand. This MUST NOT be done in order to indicate a default character set or language -- parameter field definitions MUST NOT assign a default character set or language. 4.1. Combining Character Set, Language, and Parameter Continuations Character set and language information may be combined with the parameter continuation mechanism. For example: Content-Type: application/x-stuff title*0*=us-ascii'en'This%20is%20even%20more%20 title*1*=%2A%2A%2Afun%2A%2A%2A%20 title*2="isn't it!" Note that: 1. Language and character set information only appear at the beginning of a given parameter value. 2. Continuations do not provide a facility for using more than one character set or language in the same parameter value. 3. A value presented using multiple continuations may contain a mixture of encoded and unencoded segments. 4. The first segment of a continuation MUST be encoded if language and character set information are given. 5. If the first segment of a continued parameter value is encoded the language and character set field delimiters MUST be present even when the fields are left blank. 5. 
Language specification in Encoded Words RFC 2047 provides support for non-US-ASCII character sets in RFC 822 message header comments, phrases, and any unstructured text field. This is done by defining an encoded word construct which can appear in any of these places. Given that these are fields intended for display, it is sometimes necessary to associate language information with encoded words as well as just the character set. This specification extends the definition of an encoded word to allow the inclusion of such information. This is simply done by suffixing the character set specification with an asterisk followed by the language tag. For example: From: =?US-ASCII*EN?Q?Keith_Moore?= <[email protected]> 6. IMAP4 Handling of Parameter Values IMAP4 [RFC-2060] servers SHOULD decode parameter value continuations when generating the BODY and BODYSTRUCTURE fetch attributes. 7. Modifications to MIME ABNF The ABNF for MIME parameter values given in RFC 2045 is: parameter := attribute "=" value attribute := token ; Matching of attributes ; is ALWAYS case-insensitive. 
This specification changes this ABNF to: parameter := regular-parameter / extended-parameter regular-parameter := regular-parameter-name "=" value regular-parameter-name := attribute [section] attribute := 1*attribute-char attribute-char := <any (US-ASCII) CHAR except SPACE, CTLs, "*", "'", "%", or tspecials> section := initial-section / other-sections initial-section := "*0" other-sections := "*" ("1" / "2" / "3" / "4" / "5" / "6" / "7" / "8" / "9") *DIGIT) extended-parameter := (extended-initial-name "=" extended-value) / (extended-other-names "=" extended-other-values) extended-initial-name := attribute [initial-section] "*" extended-other-names := attribute other-sections "*" extended-initial-value := [charset] "'" [language] "'" extended-other-values extended-other-values := *(ext-octet / attribute-char) ext-octet := "%" 2(DIGIT / "A" / "B" / "C" / "D" / "E" / "F") charset := <registered character set name> language := <registered language tag [RFC-1766]> The ABNF given in RFC 2047 for encoded-words is: encoded-word := "=?" charset "?" encoding "?" encoded-text "?=" This specification changes this ABNF to: encoded-word := "=?" charset ["*" language] "?" encoded-text "?=" 8. Character sets which allow specification of language In the future it is likely that some character sets will provide facilities for inline language labeling. Such facilities are inherently more flexible than those defined here as they allow for language switching in the middle of a string. If and when such facilities are developed they SHOULD be used in preference to the language labeling facilities specified here. Note that all the mechanisms defined here allow for the omission of language labels so as to be able to accommodate this possible future usage. 9. Security Considerations This RFC does not discuss security issues and is not believed to raise any security issues not already endemic in electronic mail and present in fully conforming implementations of MIME. 10. 
References [RFC-822] Crocker, D., “Standard for the format of ARPA Internet text messages”, STD 11, RFC 822, August 1982. [RFC-1766] Alvestrand, H., “Tags for the Identification of Languages”, RFC 1766, March 1995. [RFC-2045] Freed, N. and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies”, RFC 2045, November 1996. [RFC-2046] Freed, N. and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types”, RFC 2046, November 1996. [RFC-2047] Moore, K., “MIME (Multipurpose Internet Mail Extensions) Part Three: Message Header Extensions for Non-ASCII Text”, RFC 2047, November 1996. [RFC-2048] Freed, N., Klensin, J., and J. Postel, “Multipurpose Internet Mail Extensions (MIME) Part Four: Registration Procedures”, RFC 2048, November 1996. [RFC-2049] Freed, N. and N. Borenstein, “Multipurpose Internet Mail Extensions (MIME) Part Five: Conformance Criteria and Examples”, RFC 2049, November 1996. [RFC-2060] Crispin, M., “Internet Message Access Protocol - Version 4rev1”, RFC 2060, December 1996. [RFC-2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels”, RFC 2119, March 1997. [RFC-2130] Weider, C., Preston, C., Simonsen, K., Alvestrand, H., Atkinson, R., Crispin, M., and P. Svanberg, “The Report of the IAB Character Set Workshop held 29 February - 1 March, 1996”, RFC 2130, April 1997. [RFC-2183] Troost, R., Dorner, S., and K. Moore, “Communicating Presentation Information in Internet Messages: The Content-Disposition Header Field”, RFC 2183, August 1997. Authors' Addresses Ned Freed Innosoft International, Inc. 1050 Lakes Drive West Covina CA 91790 USA Phone: +1 626 919 3600 Fax: +1 626 919 3614 EMail: [email protected] Keith Moore University of Tennessee Computer Science Dept. 107 Ayres Hall Knoxville TN 37996-1301 USA EMail: [email protected] Full Copyright Statement Copyright © The Internet Society (1997). All Rights Reserved. 
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English. The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns. This document and the information contained herein is provided on an “AS IS” basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Intellectual Property The IETF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on the IETF's procedures with respect to rights in standards-track and standards-related documentation can be found in BCP-11. 
Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementors or users of this specification can be obtained from the IETF Secretariat. The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this standard. Please address the information to the IETF Executive Director.
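The parameter continuation and percent-encoding rules of sections 3 and 4 above can be illustrated with a small Python sketch. This is a simplified reassembler, not a full RFC 2231 parser: it assumes the (name, value) pairs all belong to a single parameter and that any surrounding quotes have already been stripped, and the function name is mine:

```python
import re
import urllib.parse

def rfc2231_join(params):
    """Reassemble the sections of one continued RFC 2231 parameter.

    `params` holds (name, value) pairs, e.g. ("title*0*", "..."). A trailing
    "*" on the name marks a percent-encoded section; section 0 of an encoded
    value starts with the charset'language' prefix.
    """
    sections = {}
    charset = 'ascii'
    for name, value in params:
        m = re.fullmatch(r"(.+?)\*(\d+)(\*?)", name)
        if not m:
            continue
        index, encoded = int(m.group(2)), bool(m.group(3))
        if encoded:
            if index == 0:
                charset, _language, value = value.split("'", 2)
            value = urllib.parse.unquote(value, encoding=charset or 'ascii')
        sections[index] = value
    # Concatenate the sections in order, per section 3.
    return ''.join(sections[i] for i in sorted(sections))

# The combined example from section 4.1 (quotes already removed):
parts = [
    ("title*0*", "us-ascii'en'This%20is%20even%20more%20"),
    ("title*1*", "%2A%2A%2Afun%2A%2A%2A%20"),
    ("title*2", "isn't it!"),
]
print(rfc2231_join(parts))  # → This is even more ***fun*** isn't it!
```

The same function handles the unencoded URL*0/URL*1 continuation example from section 3, since sections without the trailing asterisk are concatenated verbatim.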
This page uses Javascript. Your browser either doesn't support Javascript or you have it turned off. To see this page as it is meant to appear please use a Javascript enabled browser. Personalized Web Page Settings You can configure many JAWS settings for specific Web sites. Personalized settings apply to all pages of a given domain, such as CNN dot com or Freedom Scientific dot com. Some examples where using personalized settings could be useful might include: Make the various JAWS settings work for you to enhance your experiences on the Internet. JAWS includes many configurable settings to access information on Web pages as efficiently as possible, and JAWS uses default settings that provide the most information on the majority of Web sites. However, there are many Web sites that are not designed with accessibility in mind or do not use HTML correctly. When using JAWS default settings, some of these Web pages may not provide enough information, while others may provide too much. What Settings can be Personalized? You can personalize many types of Web settings. For example: Personalizing Web Settings Do the following to create or modify personalized settings for a Web page. Note that changes made are saved permanently to JSI files (for example, FreedomScientific.com.jsi) and do not apply to other Web pages. JSI files are stored in the PersonalizedSettings folder, which can be found by choosing the Start button, All Programs, JAWS X (where X is the JAWS version number), Explore JAWS, and then the Explore My Settings folder. 1. Press INSERT+V to open the Quick Settings dialog box. Focus is in the Search edit box. 2. Press DOWN ARROW to move to the Quick Settings tree view. 3. Press P to move to the Personalize Web Settings group. If necessary, press RIGHT ARROW to expand the group. 4. Press DOWN ARROW to move to the different options and groups within Personalize Web Settings, and press SPACEBAR to make changes. 5. 
After making all necessary changes, press TAB to move to the OK button and then press SPACEBAR to save changes and exit Quick Settings.

Sharing Personalized Settings

JAWS stores personalized settings information in JSI files. You can use the JAWS Settings Packager to create and share personalized settings with others. JAWS versions 5.10 and later have this feature, which allows you to easily share files of these types with other JAWS users. For information on this program, see the module later in Surf's Up on using the Settings Packager.

To manually share your personalized settings with other JAWS users:

1. First, open the Start Menu, and choose All Programs.
2. Navigate to the JAWS program group, and open the JAWS program submenu.
3. Choose Explore JAWS, Explore My Settings. A folder opens that contains all of the files you have had JAWS modify or create for you.
4. Select the folder PersonalizedSettings and press ENTER to open it.
5. Locate the JSI file with the domain name of the page that uses the personalized settings you want to share. For example: "Google.com.jsi."
6. Copy this file and distribute it to others.

Those you share your JSI file with need to copy the JSI file into their user folder on their computer following the steps above. They can now use that HTML page with your personalized settings.
Pictory Review: AI-Driven Video Generation Tool

Introduction

Engaging content is essential for maintaining and attracting a customer base in today's digitally fast-paced environment. Pictory is here to help. In this Pictory review, we'll take a deep dive into its features, pros and cons, pricing, and how it stacks up against its competitors. So, let's get started!

What's Pictory? A brief overview

Pictory, an AI-driven video summary tool, allows users to convert long-form content into short, digestible videos. Pictory allows creators to make the most out of their content and repurpose it for social media, as well as other platforms where short, engaging videos gain more traction.

Key features

• AI-powered video summarization
• Customizable captions and branding
• Social media optimization
• Video editing tools
• Analytics and insights
• Multi-platform compatibility

Who is Pictory for?

Pictory is designed for content creators, marketers, businesses, podcasters, webinar hosts, and educators who want to repurpose long-form content into engaging, digestible videos for social media and other platforms. It is especially helpful for people who want to save time and effort with AI-driven video summarization.

What does Pictory do?

Video upload

Pictory users can upload their video content. You can upload a recording of a webinar, podcast, or other long-form video.

AI-driven summarization

Pictory's AI engine analyzes the uploaded video to generate a summarized version with captions, highlights, and other information. This process saves users the time and effort typically required for manual summarization.

Editing and customization

Once the AI has generated the summarized video, users can further edit and customize it using Pictory's suite of tools. You can adjust captions and add branding elements to the video, as well as optimize it for social media.
What Pictory Does Well

• User Experience: Pictory's interface makes it simple for users to navigate the site and create engaging videos.
• Customizable Templates: Users can customize video elements like captions and branding to match their unique style.
• Audio Editing: Pictory offers tools to adjust audio levels and add background music.
• Music Library: Users can access a library of royalty-free music tracks to enhance their videos.
• Video Footage Library: Pictory offers a selection of stock footage for use in videos.
• Transcription: The platform automatically transcribes uploaded videos, which makes it easier to create captions and summaries.
• Subtitles: Users can edit and customize AI-generated captions to ensure accuracy and readability.
• Educational Material: Pictory offers a knowledge base as well as video tutorials to help users learn more about the platform's features.

What Can Be Improved?

• Special Effects: Pictory could use more advanced special effects to improve video customization options.
• AI Technology: Although Pictory's AI-driven summarizations are generally accurate, there is still room for improvement in summaries and captions.

Pros of Pictory

• Innovative AI-powered video summarization
• User-friendly interface
• Customizable video elements
• Social media optimization
• Comprehensive editing tools

Cons of Pictory

• Very limited special effects
• Occasional inaccuracies in AI-generated captions

Pricing plans

Pictory offers a range of pricing plans to suit different needs and budgets, including a free plan with basic features and watermarked videos. Upgrading to a paid plan provides additional features and removes the watermark.

Comparison to competitors

• Pictory vs InVideo

Pictory's AI-driven summarization makes it stand out from InVideo; both platforms offer video editing and customization.
Pictory excels at repurposing long-form content, while InVideo focuses on video creation using templates, stock assets, and a drag-and-drop editor.

• Pictory vs Vidnami

Vidnami, now part of GoDaddy Studio, is a video creation tool that lets users create videos from scratch and convert text into videos. Pictory's strength lies in its AI-powered summarization, which is not a primary feature of Vidnami. Pictory is better suited for users looking to repurpose long-form content into concise videos.

• Pictory vs Wisecut

Both Pictory and Wisecut use AI technology to summarize and edit videos. However, Pictory offers a more comprehensive feature set, including customizable templates, analytics, and multi-platform compatibility. Wisecut focuses mainly on video summarization, making Pictory a more versatile option.

• Pictory vs Lumen5

Lumen5 is a video creation platform that specializes in converting text-based content, such as blog posts, into engaging videos. Pictory and Lumen5 both allow content repurposing. However, Pictory's AI-driven summarization of long-form video content gives it an edge, providing a unique value proposition.

• Pictory vs FlexClip

FlexClip is an online video editor that also offers templates, stock assets, and text animations. Pictory's AI-powered video summarization distinguishes it from FlexClip and makes it a more specialized solution for repurposing long-form content.

Other Features

• Convert Blog Posts Easily into Videos: Pictory is focused on summarizing video content, not converting text-based content such as blog posts into videos. For text-to-video conversion, consider alternatives like Lumen5 or InVideo.
• Auto Caption Videos: Pictory's AI technology automatically generates captions for summarized videos, allowing users to modify and customize them as they please.
• Auto Summarize Lengthy Videos: Pictory's primary strength is its AI-driven summarization, which automatically condenses long-form videos into shorter, digestible versions.
• 3,000,000 Stock Video Clips: At the moment, Pictory doesn't advertise access to a certain number of stock video clips. The service focuses on repurposing existing content, not providing a stock library.
• AI Realistic Voiceover Artists: Pictory doesn't offer AI-generated voices. Pictory's main focus is on editing and video summarization.

Does Pictory Provide a Money-Back Guarantee?

Pictory does not offer a money-back guarantee. However, Pictory does offer a free trial to allow users to try the platform's core features before they commit to a paid subscription.

Customer Support and Resources

Pictory offers customer support via email and live chat. There is also a vast knowledge base and a set of tutorials that will help users make the most out of the platform.

Privacy & security

Pictory is committed to user privacy and security. It has taken measures to ensure that user data is protected and complies with applicable regulations.

Conclusion

Pictory, an innovative video summarization tool, excels at transforming long-form content using AI technology to create engaging and digestible videos. With a user-friendly interface, customizable templates, and a suite of editing tools, Pictory is an ideal solution for content creators, marketers, and businesses looking to repurpose their content for various platforms. Pictory's AI-powered video summarization is a significant advantage over competitors such as InVideo, Vidnami, and Wisecut. While other platforms may offer a wider range of templates or stock assets, Pictory stands out for its efficiency and time-saving capabilities. Pictory is an excellent resource for anyone looking to maximize the potential of their content through innovative AI-driven video summarization.

FAQs

• Is Pictory suitable for beginners?
Pictory's intuitive interface and extensive educational resources make it accessible to users of all levels.

• Does Pictory offer a free trial?

Pictory offers a free plan with basic features, allowing users to test the platform before upgrading to a paid subscription.

• Is Pictory suitable for podcasts and webinars?

Yes, Pictory is an excellent tool for repurposing long-form content like podcasts and webinars into engaging, concise videos.

• How accurate is Pictory's AI-generated summarization?

Pictory's AI-driven summarization is generally accurate, although there may be occasional inaccuracies in captions or summaries. Users have the option to edit and customize the AI-generated content.

• How can I change the look and feel of my Pictory videos?

Pictory allows users to customize their videos with captions, branding elements, and templates.
Inspiration

People with certain physical disabilities often find themselves at an immediate disadvantage in gaming and in using educational software. There are some amazing people and organizations in the gaming and accessibility worlds that have set out to make that statement less true. People like Bryce Johnson, who created the Xbox Adaptive Controller, or everyone from the Special Effect and Able Gamers charities. They use their time and money to create custom controllers that are fit to a specific user with their own unique situation. Here's an example of one of those setups: you can see the custom buttons on the pad and the headrest as well as the custom joysticks. These types of customized controllers using the XAC let the user make the controller work for them. These are absolutely amazing developments in the accessible gaming world, but we can do more. Games and software that are fast paced or just challenging in general still leave an uneven playing field for people with disabilities. For example, I can tap a key or click my mouse drastically faster than the person in the example above can reach for the joystick or hit a button on a pad. I have a required range of motion of 2mm where he has a required range of over 12 inches. I built SuaveKeys to level the playing field, now made even better with the power of Azure and Cognitive Services. Lastly, I'd like to take the opportunity to thank my little brother, Bryan, for always being an inspiration to make software more accessible for all. Having had to learn remotely while being homebound for the last few years, he's found ways to interact with both the tools he uses in learning as well as other students in ways that work for him. His inspiration drives me to help others find better ways to navigate the digital world as well.

Suave Keys in Education

Suave Keys was built to help people use any software, not just games, with more ease.
While more and more students are learning remotely, they have to use more and more software in every activity: Zoom, Google Classroom, Microsoft Office, and more. While many of these tools have their own accessibility features to make navigation and control a bit easier, disabled students are still at a disadvantage. By using other means like voice and expression, students can customize how they interact with the technology they now have to use every day in ways that work best for them, from shortcutting everyday tasks with voice-driven macros to interacting with their tools without having to reach for a keyboard or mouse. By spending less time fighting to make their tools work for them, students can spend more time and energy on their learning subjects and be able to keep pace with everyone else. While remote learning has left students computer-bound, it doesn't need to keep them keyboard-bound.

What it does

SuaveKeys lets you play games and use software with your voice and expression alongside the usual input of keyboard and mouse. It acts as a distributed system to allow users to make use of whatever resources they have to connect. For example, if the user only has an Alexa speaker and their computer, they can play using Alexa, but now they can also use their Android phone or iPhone using the SuaveKeys mobile app.
Here's what it looks like (diagram of the pre-LUIS architecture). The process is essentially:

• User signs into their smart speaker and client app
• User speaks to the smart speaker or app OR user makes expressions to webcam
• NLU and expression detection happens with LUIS and Cognitive Services
• The request goes to Voicify to add context and routing
• Voicify sends the updated request to the SuaveKeys API
• The SuaveKeys API sends the processed input to the connected client apps over websockets
• The client app checks the input phrase against a selected keyboard profile
• The profile matches the phrase to a key or a macro
• The client app then sends the request over a serial data writer to an Arduino Leonardo
• The Arduino then sends USB keyboard commands back to the host computer
• The computer executes the action in game

The app also allows the user to customize their profiles from their phone as well as their desktop client. So if you want to quickly create a new command or macro, you can register it right within the app.

Here's a quick gif of it in action in Fall Guys, where I use my voice, facial expressions, and hand gestures to control the character. If you watch the bottom left, you can see my phone screen where I say "attack", which then triggers the right intent in LUIS, and then sends it to Voicify, to the SuaveKeys API, to my desktop, to Arduino, and actually fires the gun in the game to get a headshot.

Here's an example of a Fall Guys profile of commands - select a key, give a list of commands, and when you speak them, it works!
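The profile-matching step in the middle of that pipeline is essentially a lookup from a spoken phrase to a key press or a macro (an ordered key sequence). A minimal sketch of that idea; the class and member names here are illustrative, not SuaveKeys' actual API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical, simplified model of a keyboard profile: each spoken
// phrase maps to one key or to a macro (an ordered list of keys).
public class KeyboardProfile
{
    private readonly Dictionary<string, string[]> _bindings =
        new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase);

    public void Bind(string phrase, params string[] keys) => _bindings[phrase] = keys;

    // Returns the key sequence to send on to the Arduino, or an empty
    // array when the phrase has no binding in this profile.
    public string[] Resolve(string phrase) =>
        _bindings.TryGetValue(phrase.Trim(), out var keys) ? keys : Array.Empty<string>();
}

public static class Program
{
    public static void Main()
    {
        var fallGuys = new KeyboardProfile();
        fallGuys.Bind("jump", "SPACE");                   // single key
        fallGuys.Bind("grab and jump", "SHIFT", "SPACE"); // macro: two keys in order

        Console.WriteLine(string.Join("+", fallGuys.Resolve("Jump")));          // SPACE
        Console.WriteLine(string.Join("+", fallGuys.Resolve("grab and jump"))); // SHIFT+SPACE
    }
}
```

The case-insensitive comparer matters in practice because speech-to-text engines are inconsistent about capitalization of recognized phrases.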
You can also add macros to a profile.

Supported Input Platforms

Suave Keys is meant to be usable and useful for anyone on any device and thus allows for voice and expression input support from:

• Alexa Skill: Voice
• Google Action: Voice
• Bixby Capsule: Voice
• Android: Voice and Expression
• iOS: Voice and Expression
• Windows (UWP): Voice and Expression
• Twitch chat: Distributed chat
• Snapchat: Expression

You can also use any combination of these at the same time! Want to use your microphone from your desktop, but video from your Android phone? Great! No mic on your PC but a webcam? Use Alexa for voice and your camera for expression.

How I built it

The SuaveKeys mobile and desktop apps are built using C#, .NET, and Xamarin with the help of LUIS, Voicify, Azure App Service, Azure Postgres, Azure Cognitive Services, SignalR, and a whole lot of abstraction and dependency injection.

Infrastructure and Azure Resources

This project uses many different Azure resources and services to run at scale:

• Azure App Services: Hosts the .NET Core Web API and SignalR project
• Azure PostgreSQL: Data storage for user info and keyboard preferences
• LUIS: Used for NLU processing before sending detected inputs to the SuaveKeys API
• Azure Cognitive Services Face API: Used for detecting expression from the live video feed, which is then mapped to key inputs

Software Implementation

While the SuaveKeys API and authentication layers already existed, we were able to build the client apps to act as both ends of the equation. Each page in the app is built using XAML, C#, and MVVM. To handle differences in platforms such as:

• Speech to text providers
• Camera previews and frame ingestion
• UI differences
• Changes in business logic

I built a dependency abstraction that lets us create an interface in the shared code, an implementation of that interface separately in each platform project, then inject it back into shared code.
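That abstraction pattern (shared interface, per-platform implementation, container registration) can be sketched as follows. The tiny hand-rolled container and the fake desktop recognizer below are illustrative stand-ins, not the project's real container or services:

```csharp
using System;
using System.Collections.Generic;

// Shared code declares only the contract...
public interface ISpeechToTextService
{
    event EventHandler<string> OnSpeechRecognized;
    void Start();
}

// ...and each platform project supplies its own implementation.
public class FakeDesktopSpeechToTextService : ISpeechToTextService
{
    public event EventHandler<string> OnSpeechRecognized;
    public void Start() => OnSpeechRecognized?.Invoke(this, "attack"); // stand-in for a real recognizer
}

// A minimal container standing in for the app's real DI container.
public class Container
{
    private readonly Dictionary<Type, Func<object>> _factories = new Dictionary<Type, Func<object>>();
    public void Register<T>(Func<T> factory) where T : class => _factories[typeof(T)] = factory;
    public T Resolve<T>() where T : class => (T)_factories[typeof(T)]();
}

public static class Program
{
    public static void Main()
    {
        var container = new Container();
        // Platform startup code picks the implementation; shared code never sees the concrete type.
        container.Register<ISpeechToTextService>(() => new FakeDesktopSpeechToTextService());

        var stt = container.Resolve<ISpeechToTextService>();
        stt.OnSpeechRecognized += (_, phrase) => Console.WriteLine($"recognized: {phrase}");
        stt.Start(); // prints "recognized: attack"
    }
}
```

The payoff is that a ViewModel in shared code can resolve `ISpeechToTextService` without knowing whether the phrase came from the Android `SpeechRecognizer`, a UWP microphone, or an Alexa skill.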
For example, our ViewModel that handles the speech-to-text flow (the one that lets us actually talk to our app and have it work) looks like this:

    public class MicrophonePageViewModel : BaseViewModel
    {
        private readonly ISpeechToTextService _speechToTextService;
        private readonly IKeyboardService _keyboardService;

        public ICommand StartCommand { get; set; }
        public ICommand StopCommand { get; set; }
        public bool IsListening { get; set; }

        public MicrophonePageViewModel()
        {
            _speechToTextService = App.Current.Container.Resolve<ISpeechToTextService>();
            _keyboardService = App.Current.Container.Resolve<IKeyboardService>();
            _speechToTextService.OnSpeechRecognized += SpeechToTextService_OnSpeechRecognized;

            StartCommand = new Command(async () =>
            {
                await _speechToTextService?.InitializeAsync();
                await _speechToTextService?.StartAsync();
                IsListening = true;
            });

            StopCommand = new Command(() =>
            {
                IsListening = false;
            });
        }

        private async void SpeechToTextService_OnSpeechRecognized(object sender, Models.SpeechRecognizedEventArgs e)
        {
            _keyboardService?.Press(e.Speech);

            if (IsListening)
                await _speechToTextService?.StartAsync();
        }
    }

This means we need to implement and inject our IKeyboardService and our ISpeechToTextService.
So to let Android actually use the built-in SpeechRecognizer activity and pass the result to LUIS and then Voicify, we implement it like this:

    public class AndroidSpeechToTextService : ISpeechToTextService
    {
        private readonly MainActivity _context;
        private readonly ILanguageService _languageService;
        private readonly ICustomAssistantApi _customAssistantApi;
        private readonly IAuthService _authService;
        private string sessionId;

        public event EventHandler<SpeechRecognizedEventArgs> OnSpeechRecognized;

        public AndroidSpeechToTextService(MainActivity context, ILanguageService languageService, ICustomAssistantApi customAssistantApi, IAuthService authService)
        {
            _context = context;
            _languageService = languageService;
            _customAssistantApi = customAssistantApi;
            _authService = authService;
            _context.OnSpeechRecognized += Context_OnSpeechRecognized;
        }

        private async void Context_OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            var languageResult = await _languageService.ProcessLanguage(e.Speech).ConfigureAwait(false);
            var tokenResult = await _authService.GetCurrentAccessToken();
            var voicifyResponse = await _customAssistantApi.HandleRequestAsync(
                VoicifyKeys.ApplicationId,
                VoicifyKeys.ApplicationSecret,
                new CustomAssistantRequestBody(
                    requestId: Guid.NewGuid().ToString(),
                    context: new CustomAssistantRequestContext(
                        sessionId,
                        noTracking: false,
                        requestType: "IntentRequest",
                        requestName: languageResult.Data.Name,
                        slots: languageResult.Data.Slots,
                        originalInput: e.Speech,
                        channel: "Android App",
                        requiresLanguageUnderstanding: false,
                        locale: "en-us"),
                    new CustomAssistantDevice(Guid.NewGuid().ToString(), "Android Device"),
                    new CustomAssistantUser(sessionId, "Android User")));

            OnSpeechRecognized?.Invoke(this, e);
        }

        public Task InitializeAsync()
        {
            sessionId = Guid.NewGuid().ToString();
            // we don't need to init.
            return Task.CompletedTask;
        }

        public Task StartAsync()
        {
            var voiceIntent = new Android.Content.Intent(RecognizerIntent.ActionRecognizeSpeech);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
            voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
            voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
            voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
            _context.StartActivityForResult(voiceIntent, MainActivity.VOICE_RESULT);
            return Task.CompletedTask;
        }
    }

The gist of it is kicking off the speech recognition, then when we process the speech, sending it to our ILanguageService (this is where we implement the LUIS call), then firing that processed data off to the Voicify ICustomAssistantApi. Here's the gist of the LuisLanguageService that is then injected into our Android service:

    public class LuisLanguageUnderstandingService : ILanguageService
    {
        private HttpClient _client;

        public LuisLanguageUnderstandingService(HttpClient client)
        {
            _client = client;
        }

        public async Task<Result<Intent>> ProcessLanguage(string input)
        {
            try
            {
                var result = await _client.GetAsync($"https://suavekeys.cognitiveservices.azure.com/luis/prediction/v3.0/apps/fda4acbe-3c37-410d-a630-c66ec1722b12/slots/production/predict?subscription-key={LuisKeys.PredictionKey}&verbose=true&show-all-intents=true&log=true&query={input}");
                if (!result.IsSuccessStatusCode)
                    return new InvalidResult<Intent>("Unable to handle request/response from LUIS");

                var json = await result.Content.ReadAsStringAsync();
                var luisResponse = JsonConvert.DeserializeObject<LuisPredictionResponse>(json);

                // map to intent
                var model = new Intent
                {
                    Name = luisResponse.Prediction.TopIntent,
                    Slots = luisResponse.Prediction.Entities?.Where(kvp => kvp.Key != "$instance")?.Select(kvp => new Slot
                    {
                        Name = kvp.Key,
                        SlotType = kvp.Key,
                        Value = kvp.Value.FirstOrDefault()?.Value<string>()
                    }).ToArray()
                };
                return new SuccessResult<Intent>(model);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return new UnexpectedResult<Intent>();
            }
        }
    }

Here, we send the request off to our LUIS app, then take the output and map it to a simplified model that we can send to the Voicify app. So all-in-all the flow of data/logic is:

• User signs in
• User goes to microphone page
• User taps "start"
• User speaks
• Android STT service listens and processes text
• Android STT service takes output text and sends to LUIS for alignment
• Android STT takes aligned NL and sends it to Voicify
• Voicify processes the aligned NL against the built app
• Voicify sends request to SuaveKeys API webhook
• SuaveKeys API sends websocket request to any connected client (UWP app)
• UWP app takes request and sends it to Arduino via serial connection
• Arduino sends USB data for keyboard input
• Action happens in the game or other software

Expression Management with Webcams and Face API

On top of using your voice with the mic on the device of your choice, you can also use the webcam on either your desktop or mobile device to map expressions to keyboard commands such as:

• Smiling
• Head position (pitch, yaw, roll)
• Emotion

This is implemented by using another platform-specific abstraction where each platform implements a custom control that:

• Gets the user's camera permissions
• Projects a preview of the camera feed to the screen
• Grabs the current preview frame every 500ms (configurable)
• Encodes the preview frame to jpeg
• Sends the jpeg frame to the Azure Face API to detect the face elements
• Determines the expressions and translates them to Suave Keys commands
• Sends the command to the Suave Keys API to eventually send to the final keyboard execution on the client device

Here's an example of doing so with a custom CameraFragment in the Android
application:

    class CameraFragment : Fragment, TextureView.ISurfaceTextureListener
    {
        // ... private fields removed for readability

        public CameraExpressionDetectionView Element { get; set; }

        public CameraFragment() { }
        public CameraFragment(IntPtr javaReference, JniHandleOwnership transfer) : base(javaReference, transfer) { }

        public override Android.Views.View OnCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) =>
            inflater.Inflate(Resource.Layout.CameraFragment, null);

        public override void OnViewCreated(Android.Views.View view, Bundle savedInstanceState) =>
            texture = view.FindViewById<AutoFitTextureView>(Resource.Id.cameratexture);

        public async Task RetrieveCameraDevice(bool force = false)
        {
            await RequestCameraPermissions();
            if (!cameraPermissionsGranted)
                return;

            if (!captureSessionOpenCloseLock.TryAcquire(2500, TimeUnit.Milliseconds))
                throw new RuntimeException("Timeout waiting to lock camera opening.");

            IsBusy = true;
            cameraId = GetCameraId();

            if (string.IsNullOrEmpty(cameraId))
            {
                IsBusy = false;
                captureSessionOpenCloseLock.Release();
                Console.WriteLine("No camera found");
            }
            else
            {
                try
                {
                    CameraCharacteristics characteristics = Manager.GetCameraCharacteristics(cameraId);
                    StreamConfigurationMap map = (StreamConfigurationMap)characteristics.Get(CameraCharacteristics.ScalerStreamConfigurationMap);
                    previewSize = ChooseOptimalSize(
                        map.GetOutputSizes(Class.FromType(typeof(SurfaceTexture))),
                        texture.Width, texture.Height,
                        GetMaxSize(map.GetOutputSizes((int)ImageFormatType.Jpeg)));
                    sensorOrientation = (int)characteristics.Get(CameraCharacteristics.SensorOrientation);
                    cameraType = (LensFacing)(int)characteristics.Get(CameraCharacteristics.LensFacing);

                    if (Resources.Configuration.Orientation == Android.Content.Res.Orientation.Landscape)
                    {
                        texture.SetAspectRatio(previewSize.Width, previewSize.Height);
                    }
                    else
                    {
                        texture.SetAspectRatio(previewSize.Height, previewSize.Width);
                    }

                    initTaskSource = new TaskCompletionSource<CameraDevice>();
                    Manager.OpenCamera(cameraId, new CameraStateListener
                    {
                        OnOpenedAction = device => initTaskSource?.TrySetResult(device),
                        OnDisconnectedAction = device =>
                        {
                            initTaskSource?.TrySetResult(null);
                            CloseDevice(device);
                        },
                        OnErrorAction = (device, error) =>
                        {
                            initTaskSource?.TrySetResult(device);
                            Console.WriteLine($"Camera device error: {error}");
                            CloseDevice(device);
                        },
                        OnClosedAction = device =>
                        {
                            initTaskSource?.TrySetResult(null);
                            CloseDevice(device);
                        }
                    }, backgroundHandler);

                    captureSessionOpenCloseLock.Release();
                    device = await initTaskSource.Task;
                    initTaskSource = null;

                    if (device != null)
                    {
                        await PrepareSession();
                    }
                }
                catch (Java.Lang.Exception ex)
                {
                    Console.WriteLine("Failed to open camera.", ex);
                    Available = false;
                }
                finally
                {
                    IsBusy = false;
                }
            }
        }

        public void UpdateRepeatingRequest()
        {
            if (session == null || sessionBuilder == null)
                return;

            IsBusy = true;
            try
            {
                if (repeatingIsRunning)
                {
                    session.StopRepeating();
                }
                sessionBuilder.Set(CaptureRequest.ControlMode, (int)ControlMode.Auto);
                sessionBuilder.Set(CaptureRequest.ControlAeMode, (int)ControlAEMode.On);
                session.SetRepeatingRequest(sessionBuilder.Build(), listener: null, backgroundHandler);
                repeatingIsRunning = true;
            }
            catch (Java.Lang.Exception error)
            {
                Console.WriteLine("Update preview exception.", error);
            }
            finally
            {
                IsBusy = false;
            }
        }

        // ... permission methods removed for readability

        Bitmap captureBitmap;
        bool _readyToProcessFrame = false;

        async void TextureView.ISurfaceTextureListener.OnSurfaceTextureAvailable(SurfaceTexture surface, int width, int height)
        {
            View?.SetBackgroundColor(Element.BackgroundColor.ToAndroid());
            cameraTemplate = CameraTemplate.Preview;

            // Get the preview image width and height
            captureBitmap = Bitmap.CreateBitmap(width, height, Bitmap.Config.Argb8888);
            Device.StartTimer(TimeSpan.FromMilliseconds(500), () =>
            {
                _readyToProcessFrame = true;
                return true; // return true to repeat counting, false to stop timer
            });
            await RetrieveCameraDevice();
        }

        bool TextureView.ISurfaceTextureListener.OnSurfaceTextureDestroyed(SurfaceTexture surface)
        {
            CloseDevice();
            return true;
        }

        void TextureView.ISurfaceTextureListener.OnSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) =>
            ConfigureTransform(width, height);

        void TextureView.ISurfaceTextureListener.OnSurfaceTextureUpdated(SurfaceTexture surface)
        {
            if (captureBitmap != null && _readyToProcessFrame)
            {
                _readyToProcessFrame = false;
                using (var stream = new MemoryStream())
                {
                    var bitmap = texture.GetBitmap(captureBitmap);
                    bitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, stream);
                    var bytes = stream.ToArray();
                    Element?.ProcessFrameStream(bytes);
                }
            }
        }
    }

So if you smile at your camera within the 500ms timer, it will detect the smile and send "smile" as a command to the API in the same way that saying the word "smile" to your voice input device would. There are tons of other things we can do going forward with a custom Face API model beyond what is built in with Cognitive Services, such as:

• Detecting blinking
• Detecting eyebrow raising
• Detecting expression change speed

Extendability

Suave Keys is built to reach any user anywhere on any device. Thus, allowing other developers to build tools that can provide new input means where Suave Keys lacks an existing input is an important step.
To build an extension, developers can authenticate the user using the OAuth auth code grant flow with a certified client, then send any input on behalf of the user to any output device. Here are some examples of extensions that allow other third-party platforms to provide input to Suave Keys:

Challenges I ran into

With regard to performance, I'm exploring a couple of things, including:

• Balancing the process timing while speaking
• Running intermittent spoken word against LUIS to see if it is valid ahead of time
• Cognitive Services costs at scale for the Face APIs. If we can reliably and more cost-effectively make requests from the live video feed to Cognitive Services, or operate on it offline, we can allow for faster performance from expression to action.

Accomplishments that I'm proud of

The biggest accomplishment was being able to see it in use! I was able to play games like Call of Duty, Sekiro, and Fall Guys using my voice! I was also able to even write some code in Visual Studio using my voice (pretty meta)! It feels like it's closer to a real option for people with disabilities to play competitive, fast-paced, and difficult games and use software that isn't inherently accessible with as much ease as able-bodied people do and really level the playing field.

What I learned

I learned a lot about camera frame processing in both UWP and Android as well as how to use the Face API. I have had previous experience with other Azure services and even many of the other Cognitive Services, but the Face API was entirely new for me and works amazingly for the goal of this project.

What's next for Suave Keys

As part of this new work on Project Suave Keys, I've surpassed my expectations for what the tech could really do and am excited to announce the next phase with the introduction of my new organization, Enabled Play! More details will follow soon for Enabled Play and the tech that goes along with it.
The goal of the organization is to work with community and technology leaders to bring more accessible hardware and software to the masses, and it does so by creating an extendable and usable platform to enable everyone to work and play together. As part of this, Project Suave Keys will be graduating to a more complete product, the Enabled Keyboard and Enabled Controller, which will continue development on:

• Supporting more platforms and input control
• Extending developer support for extensions and first-party support in games and tools
• Performance of requests at scale
• UX design that works for everyone while also not looking like the ugly mess it is now
• Mass production of custom hardware

Be sure to follow me on Twitter and Twitch as we roll out more announcements and development live!

https://twitter.com/suave_pirate
https://twitch.com/suave_pirate
How To Fix Gmail Queued And Failed Error

Gmail is one of the most widely used email services around the world. This email service is pretty useful for sending business emails, attachments, media, or anything else. However, some Android users face a Gmail queued issue while sending emails with PDF attachments. The users cannot send the emails, as the emails get stuck in the Outbox folder for some reason. Later, the users receive the failed error for the email that has been stuck in the Outbox folder for hours. We understand this can be frustrating when you are trying to send a business mail to your boss or some assignment to your teacher. Therefore, to help you out, we have a small guide that you can follow to fix the Gmail queued and failed error.

What are the reasons for the Gmail queued and failed error?

Gmail queued means that Gmail is unable to send your mail at the moment, and that is why the mail goes straight to the Outbox folder. The mails in the Outbox folder are sent out later. However, when Gmail is unable to send the mail from the Outbox, the users get the failed error. We are mentioning some of the possible reasons behind the Gmail queued and failed error:

1. Gmail exceeding the threshold limit

Every email service platform has a limitation for sending emails at one time. So there are chances that you are exceeding this limit while sending a specific mail on Gmail. Therefore, when you try to send a mail, it goes to your Outbox and is queued to send later.

2. Network-related issue

There are possibilities that Gmail's server may be down for some time, or there is a network-related issue between Gmail and the server.

3. Low storage space on the phone

When you send a mail on Gmail, it occupies storage space in the app. So if you have low storage on your phone, there are possibilities that Gmail cannot adjust the data size because of the limited storage.
Therefore, with less storage space on your phone, Gmail may not be able to send out an email, and your email is queued in the outbox folder.

5 Ways to Fix Gmail Queued and Failed Error

Before discussing the different ways to fix the Gmail queued and failed error, there are a few things that you must check:

• Make sure that the issue occurs only in the Gmail app and not in the web version of Gmail. This way, you can tell whether the Gmail server is down or not. If you face the same issue on the web version of Gmail, then it is probably a server-related issue on Gmail's side.
• Make sure that you are using the latest version of the Gmail app, installed from the Google Play Store and not from an unknown source.
• Ensure that you are not sending mail with attachments exceeding Gmail's 25 MB attachment limit.
• Make sure you have a stable internet connection.

After checking the above, you can try the following methods to fix the Gmail queued and failed error:

Method 1: Clear Gmail's Cache & Data

To fix the queued and failed error on Gmail, you can try clearing the Gmail app's cache and data. Make sure you close the Gmail app before you clear the cache and data.

1. Open Settings on your Android phone.
2. Go to the 'Apps' tab, then tap open 'Manage apps.'
3. Locate and open the Gmail app in the list of applications that you see on the screen.
4. Now tap on 'Clear data' at the bottom of the screen. A window will pop up, where you have to select 'Clear cache.'
5. Finally, this will clear the cache and data for your Gmail app.
Also Read: Fix Email Address Not Found in Gmail

Method 2: Enable & Disable Gmail Sync Temporarily

You can try disabling and re-enabling the Gmail sync option on your phone to check whether it is functioning properly.

1. Open Settings on your Android phone.
2. Scroll down and tap on 'Accounts and sync.'
3. In the Accounts and Sync section, tap on 'Google' to access your Google account.
4. Now, choose the email account that you have linked with Gmail.
5. Uncheck the circle next to 'Gmail.'
6. Finally, restart your phone and enable the 'Gmail' sync option again.

Method 3: Remove and Set Up Your Gmail Account Again

This can be a lengthy process. You can try removing your Google account from your phone and setting up your account again.

1. Open Settings on your phone.
2. Go to 'Accounts and sync.'
3. In the Accounts and Sync section, tap on 'Google' to access your Google account.
4. Select the email account that is linked with your Gmail.
5. Now, tap on 'More' at the bottom of the screen.
6. Tap on 'Remove account' in the list of options.
7. Clear the cache and data for Gmail and restart your phone.
8. Finally, set up your Gmail account on your phone again.

Also Read: Fix Gmail not sending emails on Android

Method 4: Decrease the Days to Sync Option

Your Gmail account usually retrieves mail from the last few days when you configure the phone with Gmail. When you use your Gmail account, it also syncs your old emails, which may increase the cache and storage size for Gmail. So the best option is to decrease the number of days for the sync option.
This way, Gmail will remove from local storage any emails older than the period you set.

1. Open the Gmail app on your Android phone.
2. Tap on the hamburger icon at the top left corner of the screen.
3. Scroll down and open Settings.
4. Choose your email account.
5. Now, scroll down and tap on 'Days of emails to sync.'
6. Finally, decrease the days to 30 days or less. In our case, we are making it 15 days.

After you make the changes, make sure you clear the cache and data for Gmail.

Also Read: Fix Mozilla Firefox Couldn't Load XPCOM Error on Windows 10

Method 5: Keep Background Data Enabled for Gmail

Usually, background data is enabled by default for the Gmail app. However, if you have mistakenly disabled this feature, you can enable it by following the steps below:

1. Open Settings on your Android phone.
2. Go to the 'Connection and Sharing' tab.
3. Open 'Data usage' in the Connection and Sharing tab.
4. Scroll down and locate the Gmail app.
5. Finally, ensure that the toggle for 'Background data' is on.

You must also ensure that you have a stable internet connection and that there are no network issues.

We hope this guide was helpful and you were able to fix the Gmail queued and failed error on your Android phone. If any of the methods worked for you, let us know in the comments below.
CPU Components

CPU Components — Definition, Control Unit, Registers, ALU, Parts, and More. Hello dear readers, on this happy occasion we are going to discuss the definition of CPU components and several examples of those components, accompanied by explanations. So, let us go straight to the discussion below:

Definition of the CPU

The CPU (Central Processing Unit) is a piece of computer hardware that processes and carries out the computer's instructions, and is where data from the computer's software is handled. The CPU also goes by another name: the microprocessor. A microprocessor is a CPU built inside an integrated circuit. Integrated-circuit microprocessors have been in common use since the 1970s and have become an essential element in how CPUs are implemented. Here is the picture:

CPU (Central Processing Unit)

Components of the CPU

Several components make up the CPU; the picture below shows them:

After looking at the picture above, let us go through an explanation of each of the components in the CPU!

Control Unit

The function of the control unit is to manage the running of a program. This component is, of course, present in every CPU. Its task is to control the computer so that the synchronization between components works as they carry out their operations. The control unit's responsibilities also include fetching commands, fetching instructions from main memory, and determining the type of each instruction. If an instruction is arithmetic, or something else such as a logical comparison, the control unit sends the instruction to the ALU. The result of the data processing is carried by the control unit to main memory to be stored, until the time comes for it to be presented to an output device.
Thus, the duties of the control unit are as follows:

1. Controlling and managing the input devices and the output devices.
2. Receiving instructions from main memory.
3. Receiving data from main memory (when required) to be processed.
4. Sending instructions to the ALU when there is an arithmetic calculation or logical comparison, and then supervising the work of the ALU.
5. Storing the results of processing into main memory.

Register

A register is a small storage device with quite high access speed, which can be used to store data and instructions that are currently being worked on. This memory is temporary in nature and is usually used to hold data while it is being processed, or data that is to be processed further. A register can be thought of as the memory in our brain when we do some processing manually, with the CPU itself playing the role of the brain.

ALU Unit

The ALU unit functions to perform arithmetic operations and logic operations based on the given instructions. The ALU is often said to speak machine language, because it consists of two parts — an arithmetic unit and a boolean logic unit — each of which has its own specific job. The ALU's task is to perform all the arithmetic calculations that occur according to the program's instructions. The ALU performs all arithmetic operations on the basis of addition, so the electronic circuit used for this is called an adder. Another task of the ALU is to make decisions for logic operations in accordance with the program's instructions.
These logic operations involve comparing two operands using a particular logic operator, such as: equal to (=), less than (<), not equal to (≠), less than or equal to (≤), greater than (>), and so on.

CPU Interconnections

CPU interconnections are the connection and bus system that links the CPU's internal components — the ALU, the control unit, and the registers — as well as the CPU's external buses that connect it to other systems, such as input/output devices and main memory.

Parts of the CPU

Let us look at the picture below:

After looking at the picture above, let us go through the explanation below!

Casing

The casing is the outermost part of the PC, and is itself often referred to as the CPU. The casing functions as a cover and protector, and as a mounting place for the other components, and it also keeps out dirt that could otherwise get in and stick to the CPU's components.

Processor

The processor is, so to speak, the brain or processing center of the computer. The processing speed of a computer depends on the type of processor.

Motherboard

The motherboard is the component that serves as the mounting place, or circuit board, where the electronic components are placed. Its most important function is to connect each component to the others so that they can communicate and exchange data with one another.

RAM

RAM (Random Access Memory) is the computer's main memory, which serves as the storage place for data that has been processed by the processor before it is passed on to the other parts that need it.
That is why RAM is often referred to as temporary data storage.

Hard Disk

The hard disk functions as the conventional data storage in general use. Nowadays hard disks generally have a very large storage capacity, ranging from hundreds of GB up to the TB level. Examples of data stored on a hard disk include songs, photos, videos, images, and documents, as well as applications.

VGA Card

The VGA (Video Graphics Array) card is a component whose function is to process graphical data to be displayed on the monitor. The VGA card is one of the important components for running applications and displaying graphics on the monitor, for example videos, games, and so on.

Sound Card

The sound card functions to process audio or sound data going to or coming from the related hardware, such as a mic and speakers.

Optical Disk Drive

The optical disk drive, more commonly known as the CD/DVD-ROM drive, is a component that functions as the reader and writer for CD/DVD discs.

Power Supply

The power supply component functions to deliver electric current to each computer component so that it can operate.

Functions of the CPU

The CPU functions to process data through arithmetic and logic operations on data obtained from memory, or from information entered via hardware such as a mic, keyboard, scanner, mouse, or control stick. The CPU can be given instructions through the computer's software.
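The division of labor described above — a control unit fetching instructions while the ALU handles arithmetic and logic — can be sketched in code. The following is a minimal, hypothetical simulation written in JavaScript; the function names (alu, run), the operation mnemonics, and the instruction format are all invented for illustration and are not from the original article.

```javascript
// The ALU: performs arithmetic (built on addition) and logic comparisons.
function alu(op, a, b) {
  switch (op) {
    case "ADD": return a + b;
    case "SUB": return a + (-b); // subtraction expressed via addition
    case "EQ":  return a === b;  // =
    case "NEQ": return a !== b;  // ≠
    case "LT":  return a < b;    // <
    case "LTE": return a <= b;   // ≤
    case "GT":  return a > b;    // >
    default: throw new Error("unknown operation: " + op);
  }
}

// The control unit: walks program storage with a program counter,
// fetches each instruction, hands it to the ALU together with an
// operand from working storage, and keeps the result in an accumulator.
function run(programStorage, workingStorage) {
  let accumulator = 0;
  for (let pc = 0; pc < programStorage.length; pc++) {
    const [op, addr] = programStorage[pc];                    // fetch + decode
    accumulator = alu(op, accumulator, workingStorage[addr]); // execute
  }
  return accumulator; // final result, ready to be sent to output
}

// Example: compute (0 + 5) + 7, then compare the result with 12.
const working = [5, 7, 12];
const program = [["ADD", 0], ["ADD", 1], ["EQ", 2]];
console.log(run(program, working)); // true
```

Real CPUs are of course far more involved, but the shape — fetch, decode, execute, store — matches the roles of the control unit, ALU, and registers described above.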
How the CPU Works

When data or an instruction is entered into the processing device, it is first placed in main memory via input storage. If it is an instruction, it is stored by the control unit in program storage, while data is stored in working storage. Then, when a register is ready to accept work, the control unit fetches an instruction from program storage and places it in the instruction register, while the memory address that holds the instruction is kept in the program counter. The data itself is fetched by the control unit from working storage and placed in a general-purpose register (in this case, the operand register). If the work to be carried out by the instruction is arithmetic or logical, the ALU takes over the operation to be performed based on the instruction set, and the result is stored in an accumulator. Once the processing is finished, the control unit takes the result from the accumulator and places it back into working storage. Then, when the whole job is complete, the control unit takes the processing result from working storage and places it into output storage. From output storage, the result of the processing is finally displayed on an output device.

That concludes this discussion of CPU components. Hopefully it has been useful....
Using T4 Templates to Generate C# Enums from SQL Server Database Tables When building applications with C# and SQL Server, it is often necessary to define codes in the database that correspond with enums in the application. However, it can be burdensome to maintain the enums manually, as codes are added, modified, and deleted in the database. In this blog post, I’ll share a T4 template that I wrote which does this automatically. I looked online first, and did find a few different solutions for this, but none that worked for me as-is. So, I built this generic T4 template to do the job, and you can use it too. Let’s say you’ve got a Color table and ErrorType table in the database, populated as follows: Now you’d like enums in your C# application that correspond to these rows. The T4 template will generate them as follows: Before showing the code that generates this, let’s point out a few things. First, notice that the Color table has only an ID and a name column, while the ErrorType table has an ID, name, and description columns. Thus, the ErrorType enums include XML comments based on the descriptions in the database, which appear in IntelliSense when referencing the enums in Visual Studio. Also notice that one enum was generated for the Color table, while two enums were generated for the ErrorType table. This is because we instructed the T4 template to create different enums for different value ranges (0 – 999 for SystemErrorType, and 1000 – 1999 for CustomerErrorType) in the same table. Finally, the Color enum includes an additional member Undefined with a value of 0 that does not appear in the database, while the other enums don’t include the Undefined member. This is because, in the case of Color, we’d like an enum to represent Undefined, but we don’t want it in the database because we don’t want to allow 0 as a foreign key into the Color table. This gives you an idea of the flexibility that this T4 template gives you for generating enums from the database. 
The enums above were generated with the following T4 template:

<#@include file="EnumsDb.ttinclude" #>
<#
	var configFilePath = "app.config";

	var enums = new [] {
		new EnumEntry("Supported colors", "DemoDatabase", "dbo", "Color", "ColorId", "ColorName", "ColorName")
			{ GenerateUndefinedMember = true },
		new EnumEntry("System error types", "DemoDatabase", "dbo", "ErrorType", "ErrorTypeId", "ErrorTypeName", "Description")
			{ EnumName = "SystemErrorType", ValueRange = new Tuple<long, long>(0, 999) },
		new EnumEntry("Customer error types", "DemoDatabase", "dbo", "ErrorType", "ErrorTypeId", "ErrorTypeName", "Description")
			{ EnumName = "CustomerErrorType", ValueRange = new Tuple<long, long>(1000, 1999) },
	};

	var code = this.GenerateEnums(configFilePath, enums);
	return code;
#>

You can see that there is very little code that you need to write in the template, and that's because all the real work is contained inside the EnumsDb.ttinclude file (referenced at the top with @include), which contains the actual code to generate the enums and can be shared throughout your application. A complete listing of the EnumsDb.ttinclude file appears at the end of this blog post.

All you need to do is call GenerateEnums with two parameters. The first is a path to your application's configuration file, which holds the database connection string(s). The second is an array of EnumEntry objects, which drives the creation of enums in C#. Specifically, in this case, there are three EnumEntry objects, which results in the three enums that were generated. Each enum entry contains the following properties:

1. Description (e.g., "Supported colors"). This description appears as XML comments for the enum.
2. Database connection string key (e.g., "DemoDatabase"). It is expected that a connection string named by this key is defined in the application's configuration file. Since this property exists for each enum entry, it is entirely possible to generate different enums from different databases.
3. Schema name (e.g., "dbo"). This is the schema that the table is defined in.
4. Table name (e.g., "Color"). This is the table that contains rows of data for each enum member.
5. ID column name (e.g., "ColorId"). This is the column that contains the numeric value for each enum member. It must be an integer type, but it can be an integer of any size. The generator automatically maps tinyint to byte, int to int, smallint to short, and bigint to long.
6. Member column name (e.g., "ColorName"). This is the column that contains the actual name of the enum member.
7. Description column name. This should be the same as the Member column name (e.g., "ColorName") if there is no description, or it can reference another column containing the description if one exists. If there is a description, then it is generated as XML comments above each enum member.
8. Optional:
   1. EnumName. If not specified, the enum is named after the table. Otherwise, you can choose another name if you don't want the enum named after the table name.
   2. GenerateUndefinedMember. If specified, then the template will generate an Undefined member with a value of 0.
   3. ValueRange. If specified, then enum members are generated only for rows in the database that fall within the specified range of values. The ErrorType table uses this technique to generate two enums from the table; one named SystemErrorType with values 0 – 999, and another named CustomerErrorType with values 1000 – 1999.

This design offers a great deal of flexibility, while constantly ensuring that the enums defined in your C# application are always in sync with the values defined in the database. And of course, you can modify the template as desired to suit any additional needs that are unique to your application. The full template code is listed below. I hope you enjoy using it as much as I enjoyed writing it. Happy coding!
😊

<#@ template hostspecific="true" language="C#" debug="false" #>
<#@ assembly name="System.Configuration" #>
<#@ assembly name="System.Data" #>
<#@ import namespace="System.Configuration" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ import namespace="System.Text" #>
<#+
private string GenerateEnums(string configFilePath, EnumEntry[] entries)
{
	if (entries == null)
	{
		return string.Empty;
	}
	var ns = System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint");
	var sb = new StringBuilder();
	sb.AppendLine(this.GenerateHeaderComments());
	sb.AppendLine($"namespace {ns}");
	sb.AppendLine("{");
	foreach (var entry in entries)
	{
		try
		{
			sb.Append(this.GenerateEnumMembers(configFilePath, entry));
		}
		catch (Exception ex)
		{
			sb.AppendLine($"#warning Error generating enums for {entry.EnumDescription}");
			sb.AppendLine($"	// Message: {ex.Message}");
		}
	}
	sb.AppendLine();
	sb.AppendLine("}");
	return sb.ToString();
}

private string GenerateHeaderComments()
{
	var comments = $@"
// ------------------------------------------------------------------------------------------------
// <auto-generated>
//  This code was generated by a C# code generator
//  Generated at {DateTime.Now}
//
//  Warning: Do not make changes directly to this file; they will get overwritten on the next
//  code generation.
// </auto-generated>
// ------------------------------------------------------------------------------------------------
";
	return comments;
}

private string GenerateEnumMembers(string configFilePath, EnumEntry entry)
{
	var code = new StringBuilder();
	var connStr = this.GetConnectionString(configFilePath, entry.ConnectionStringKey);
	var enumDataType = default(string);
	using (var conn = new SqlConnection(connStr))
	{
		conn.Open();
		using (var cmd = conn.CreateCommand())
		{
			cmd.CommandText = @"
				SELECT DATA_TYPE
				FROM INFORMATION_SCHEMA.COLUMNS
				WHERE TABLE_SCHEMA = @SchemaName AND TABLE_NAME = @TableName AND COLUMN_NAME = @IdColumnName
			";
			cmd.Parameters.AddWithValue("@SchemaName", entry.SchemaName);
			cmd.Parameters.AddWithValue("@TableName", entry.TableName);
			cmd.Parameters.AddWithValue("@IdColumnName", entry.IdColumnName);
			var sqlDataType = cmd.ExecuteScalar();
			if (sqlDataType == null)
			{
				throw new Exception($"Could not discover ID column [{entry.IdColumnName}] data type for enum table [{entry.SchemaName}].[{entry.TableName}]");
			}
			enumDataType = this.GetEnumDataType(sqlDataType.ToString());
			var whereClause = string.Empty;
			if (entry.ValueRange != null)
			{
				whereClause = $"WHERE [{entry.IdColumnName}] BETWEEN {entry.ValueRange.Item1} AND {entry.ValueRange.Item2}";
			}
			cmd.CommandText = $@"
				SELECT
					Id = [{entry.IdColumnName}],
					Name = [{entry.NameColumnName}],
					Description = [{entry.DescriptionColumnName}]
				FROM [{entry.SchemaName}].[{entry.TableName}]
				{whereClause}
				ORDER BY [{entry.IdColumnName}]
			";
			cmd.Parameters.Clear();
			var innerCode = new StringBuilder();
			var hasUndefinedMember = false;
			using (var rdr = cmd.ExecuteReader())
			{
				while (rdr.Read())
				{
					if (rdr["Name"].ToString() == "Undefined" || rdr["Id"].ToString() == "0")
					{
						hasUndefinedMember = true;
					}
					if (entry.NameColumnName != entry.DescriptionColumnName)
					{
						innerCode.AppendLine("\t\t/// <summary>");
						innerCode.AppendLine($"\t\t/// {rdr["Description"]}");
						innerCode.AppendLine("\t\t/// </summary>");
					}
					innerCode.AppendLine($"\t\t{rdr["Name"].ToString().Replace("-", "_")} = {rdr["Id"]},");
				}
				rdr.Close();
			}
			if ((entry.GenerateUndefinedMember) && (!hasUndefinedMember))
			{
				var undefined = new StringBuilder();
				undefined.AppendLine("\t\t/// <summary>");
				undefined.AppendLine("\t\t/// Undefined (not mapped in database)");
				undefined.AppendLine("\t\t/// </summary>");
				undefined.AppendLine("\t\tUndefined = 0,");
				innerCode.Insert(0, undefined);
			}
			code.Append(innerCode.ToString());
		}
		conn.Close();
	}
	var final = $@"
	/// <summary>
	/// {entry.EnumDescription}
	/// </summary>
	public enum {entry.EnumName} : {enumDataType}
	{{
		// Database information:
		//  {entry.DbInfo}
		//  ConnectionString: {connStr}
{code}
	}}
";
	return final;
}

private string GetConnectionString(string configFilePath, string key)
{
	var map = new ExeConfigurationFileMap();
	map.ExeConfigFilename = this.Host.ResolvePath(configFilePath);
	var config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
	var connectionString = config.ConnectionStrings.ConnectionStrings[key].ConnectionString;
	return connectionString;
}

private string GetEnumDataType(string sqlDataType)
{
	switch (sqlDataType.ToString())
	{
		case "tinyint"  : return "byte";
		case "smallint" : return "short";
		case "int"      : return "int";
		case "bigint"   : return "long";
		default         : throw new Exception($"SQL data type {sqlDataType} is not valid as an enum ID column");
	}
}

private class EnumEntry
{
	public string EnumDescription { get; }
	public string ConnectionStringKey { get; }
	public string SchemaName { get; }
	public string TableName { get; }
	public string IdColumnName { get; }
	public string NameColumnName { get; }
	public string DescriptionColumnName { get; }
	public string EnumName { get; set; }
	public Tuple<long, long> ValueRange { get; set; }
	public bool GenerateUndefinedMember { get; set; }

	public string DbInfo
	{
		get
		{
			var info = $"ConnectionStringKey: {this.ConnectionStringKey}; Table: [{this.SchemaName}].[{this.TableName}]; IdColumn: [{this.IdColumnName}]; NameColumn: [{this.NameColumnName}]; DescriptionColumn: [{this.DescriptionColumnName}]";
			if (this.ValueRange != null)
			{
				info = $"{info}; Range: {this.ValueRange.Item1} to {this.ValueRange.Item2}";
			}
			return info;
		}
	}

	public EnumEntry(
		string enumDescription,
		string connectionStringKey,
		string schemaName,
		string tableName,
		string idColumnName = null,
		string nameColumnName = null,
		string descriptionColumnName = null)
	{
		this.EnumDescription = enumDescription;
		this.ConnectionStringKey = connectionStringKey;
		this.SchemaName = schemaName;
		this.TableName = tableName;
		this.IdColumnName = idColumnName ?? tableName + "Id";
		this.NameColumnName = nameColumnName ?? "Name";
		this.DescriptionColumnName = descriptionColumnName ?? "Description";
		this.EnumName = tableName;
	}
}
#>
Subject: CVS commit: pkgsrc/audio/SDL2_mixer
From: Adam Ciarcinski
Date: 2018-11-01 22:12:40
Message id: [email protected]

Log Message:
SDL2_mixer: updated to 2.0.4

2.0.4:
* Removed smpeg support for mp3 music, now that it's replaced by libmpg123
* Fixed mp3 mad decoder to skip tags, which otherwise would lead to crashes
* Added support for Opus music playback using opusfile library

2.0.3:
* Fixed regression where Mix_Init() would return 0 for available music formats

Files:
Revision  Action  File
1.1       add     pkgsrc/audio/SDL2_mixer/patches/patch-configure
1.3       modify  pkgsrc/audio/SDL2_mixer/options.mk
1.6       modify  pkgsrc/audio/SDL2_mixer/distinfo
1.8       modify  pkgsrc/audio/SDL2_mixer/Makefile
Imported WordPress Blog Posts Not Found

1. I have recently imported a file to my WordPress blog (I had used Movable Type and TypePad) because somehow my XML import earlier didn't amount to anything. So anyway, with this new format, I went to my 'Posts' and saw that I had 800 posts imported (under the 'All' tab), but yet no posts were showing up. Has anyone encountered this issue as well?
The blog I need help with is icyabstract.wordpress.com.

2. Alternatively, I was looking at the option of emptying the site content and was directed to support again. Can anyone out there help me with this? Just wipe my posts from the backend; I don't really want to keep any of it. Please, and thank you!

3. Spamming my thread, but apparently I found out all imported posts are filed under 'Pending Review' and I don't have access to that tab. Alternatively, can someone help me swap them all to drafts on the backend, or is wiping all the posts quicker? Thank you!

4. Okay. No... the pending review thing doesn't help :( Sorry for the spam; if someone could be kind enough to wipe out the posts somehow, backend, that would be fab. Thank you

5. You can go to the dashboard Posts page and use the bulk editor to publish them. Select All, Bulk Action: Publish. Or Delete, whichever you prefer.

6. No, actually that's the funny thing. The posts don't show up under there. So I have nothing to 'check'. But yet the number still says I have 800-plus posts. It's almost like I want to delete, but there is nothing to delete from the dashboard.

7. Hi there! Just to be clear -- is this the blog you wanted to empty? -- http://icyabstract.wordpress.com/ Let me know so I can help you with your request. Thanks!

8. Hi Druesome, yes that's the site! It'll be great if you could wipe only the posts portion. Is that possible?

9. I do see the strangeness that occurred in your last import. Would you like me to give the import another try? I'd be glad to help you out. Let me know!

10. Hi Druesome! Sure thing, I would certainly love to give the import one more try, because it's best if I manage to salvage some of the imported content. Thank you so, so much! If it's of any help, I think the problem lies with the import not having a 'status', probably because my earlier platform doesn't have that. So now these posts aren't showing up because they are neither drafts nor published. Hope this helps some!

11. Hey there! Ok, before I attempt another import, I would have to start from a clean slate. Would it be ok to empty your blog of its contents?

12. Hi Druesome! By contents would you mean posts only? Or the whole site? And approximately how long would that take? Sorry for the questions! Just want to know so I can respond to getting the site back up once it's all done! Thank you again! :)

13. Hey, I meant emptying the entire site. Would that be alright?

14. Hi Druesome, okay yeap sure... Go ahead! I've backed up what I need to, so go do your thing! Just let me know once it's done! Thank you again! :)

15. p.s. sorry for this stupid question, but emptying the site wouldn't affect the following I have established already, yeah?

16. No, it shouldn't. Let me know if you'd still like to proceed with it. :)

17. Hi Druesome, yes please proceed! I've backed up what I need, so you can go ahead and do what you have to do. Thank you!

18. Hi there! I've gone ahead and processed the import for you, and it looks like it was a success this time. :) The last post imported is dated 2014/07/29, but I believe you've made a few more posts after that date. Feel free to publish them again from the backup that you have. Can you kindly visit your admin panel to check if everything's accounted for? Thanks!

19. Hello Druesome, wow, thank you!! Everything seems to be fine and the last posted date (from import) is right. Thank you so, so much for helping with this! :)
Cheers, Sara

20. Nice, you're welcome! :)

Topic Closed
This topic has been closed to new replies.
JavaScript Programming - Data Types

This article is included in the book "Javascript dengan mudah". Published: 13 Aug 2022. #javascript #data types #web developer

In this tutorial, let's go through several data types in JavaScript, and which browsers / JavaScript runtimes support them. Note that JavaScript is dynamically typed: it automatically converts values between string, integer, and float as needed.

1. let

Strictly speaking, let is a declaration keyword rather than a data type: it declares a mutable variable that cannot be redeclared in the same scope. Example:

let a = 20;
a += 20;
console.log(a); // the result is 40

The value can change as your needs require. Browser support: see caniuse.com.

2. const

const declares an immutable binding: the variable cannot be reassigned.

const a = 20;
a += 20; // error -> Cannot assign to "a" because it is a constant
console.log(a);

Browser support: see caniuse.com.

3. var

var is similar to let: the variable it declares is mutable. Unlike let, however, the same name can be declared again even after it has already been declared (it can be redeclared). Example:

var a = 10;
var a = 20; // does not cause an error
console.log(a); // the output is 20

4. Integer (number)

Integers are detected automatically by JavaScript. An integer is a whole number (no decimal point). Example:

const a = 20 + 10;
console.log(a); // the result is 30

Supported in all browsers / runtimes.

5. Float (number)

Floats are detected automatically by JavaScript. A float is a number with a fractional part. Example:

const a = 10 / 20;
console.log(a); // the result is 0.5

Supported in all browsers / runtimes.

6. String

Strings are detected automatically by JavaScript. A string is a sequence of letters, other punctuation, and/or digits. Example:

const a = "Gilang Pratama";
console.log(a);

Supported in all browsers / runtimes.

7. Function

Functions are detected automatically by JavaScript. A function holds logic that can be executed, and is commonly used as a callback. Let's look at a slightly more involved example.

A regular function:

const penjumlahan = (a, b) => {
  return a + b;
}
const jumlah = penjumlahan(2, 3);
console.log(jumlah); // the output is 5

A function taking a callback:

const penjumlahan = (a, b) => {
  return a + b(); // b() is how the function argument is invoked
}
const jumlah = penjumlahan(2, // here we pass the callback function
  () => {
    return 40; // here we return the number 40
  });
console.log(jumlah); // the output is 42

Supported in all browsers / runtimes.

8. Promise

A Promise is detected automatically by JavaScript as an asynchronous object. In JavaScript, asynchronous means executing work without blocking other processing. Example:

const main = async () => { // must be async so we can await the data
  const luas = await linkaran(7);
  console.log(luas);
}
const linkaran = (jarijari) => new Promise((resolve, reject) => {
  resolve(Math.round(jarijari * jarijari * 3.14)); // return the data
});
main();

Browser support: see caniuse.com.

9. Array (stacked data)

Arrays are also detected automatically by JavaScript. An array is used to hold a collection of values. Example:

const cars = ['mitsubisi', 'honda'];
console.log(cars); // (2) ["mitsubisi", "honda"]

Supported in all browsers / runtimes.

10. Object

An object is data that declares keys dynamically. Example:

const car = {
  body: 'red',
  wheel: 'black'
};
console.log(car); // {body: "red", wheel: "black"}

Supported in all browsers / runtimes.
Figure 2 Commonly Used Shim APIs

- CorBindToRuntimeEx: The main entry point to the shim. Use it to load a particular version of the runtime into the process.
- GetCorVersion: Returns the version of the runtime loaded into the current process.
- GetCorSystemDirectory: Returns the install directory for the version of the runtime that is loaded in the current process.

Figure 3 Calling CorBindToRuntimeEx

LPWSTR pszVer = L"v1.0.2121";
LPWSTR pszFlavor = L"wks";
ICorRuntimeHost *pHost = NULL;
hr = CorBindToRuntimeEx(
    pszVer,    // version
    pszFlavor, // svr or wks
    // domain-neutral"ness" and gc settings
    STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN_HOST | STARTUP_CONCURRENT_GC,
    CLSID_CorRuntimeHost,
    IID_ICorRuntimeHost,
    (void **)&pHost);
if (SUCCEEDED(hr)) {
}

Figure 4 Loader Optimization Settings

- No assemblies are loaded domain-neutral (except mscorlib, which is always loaded domain-neutral): This setting is termed "Single Domain" because it is commonly used when the host is running a single application in the process. Pass STARTUP_LOADER_OPTIMIZATION_SINGLE_DOMAIN as the dwFlags parameter to CorBindToRuntimeEx.
- All assemblies are loaded domain-neutral: This setting is typically used when there are multiple domains in the process, all running the same code. For example, a host may choose to run the same application in numerous domains isolated by user. Pass STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN as the dwFlags parameter to CorBindToRuntimeEx.
- Shared assemblies (those with strong names) are loaded domain-neutral: Use this setting when running multiple different applications in the same process. Pass STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN_HOST as the dwFlags parameter to CorBindToRuntimeEx.
Strong naming is the technique used to ensure that shared assemblies (those that are used by more than one application) have globally unique names and that the assembly hasn't been altered, either accidentally or maliciously, from the time it was built to the time it was deployed.

Figure 5 ICorConfiguration Methods

- SetAppDomainLoadEvent: Registers a callback that notifies the host when a new application domain has been created in the process.
- SetGCThreadControl: Registers a callback that allows the host to schedule threads for non-runtime tasks when they would otherwise be blocked for a GC.
- SetGCHostControl: Allows the host to customize the size of the GC heap used by the CLR.
- SetDebuggerThreadControl: Registers a callback that allows the host to be notified when threads are started and stopped by the debugger.
- AddDebuggerSpecialThread: Allows the host to indicate to the debugging services that a particular thread should be allowed to continue executing while the debugger has an application stopped.

Figure 7 Loading Hosting Code

// import the type library that defines the COM interfaces used to access
// the managed class in mscorlib. Specifically, we need to call
// through _AppDomain to an instance of the managed class
// System.AppDomain. This typelib ships in the SDK
#import <mscorlib.tlb> raw_interfaces_only high_property_prefixes("_get","_put","_putref")
using namespace ComRuntimeLibrary;

// import the type library for our managed hosting code. This typelib
// was built using the tlbexp SDK tool
#import <MyManagedHost.tlb> raw_interfaces_only high_property_prefixes("_get","_put","_putref")
using namespace MyManagedHost;

// Declare a variable to hold the hosting interface
ICorRuntimeHost *pCLR = NULL;

// Load the CLR by calling CorBindToRuntimeEx and get back an interface
// pointer to ICorRuntimeHost as shown in the previous code sample
// (code is omitted here)

// Get the default domain
_AppDomain *pDefaultDomain = NULL;
IUnknown *pAppDomainPunk = NULL;
HRESULT hr = pCLR->GetDefaultDomain(&pAppDomainPunk);
_ASSERT(pAppDomainPunk);
hr = pAppDomainPunk->QueryInterface(__uuidof(_AppDomain), (void**) &pDefaultDomain);
_ASSERT(pDefaultDomain);

// Create an instance of HostProcessRequest in MyManagedHost.dll. The
// interface pointer to the instance of HostProcessRequest is used from
// now on to direct user requests into managed code.
_ObjectHandle *pObjHandle = NULL;
hr = pDefaultDomain->CreateInstance(
    _bstr_t("MyManagedHost"),      // assembly name
    _bstr_t("HostProcessRequest"), // type name
    &pObjHandle);                  // returned handle to object
_ASSERT(pObjHandle);

VARIANT v;
VariantInit(&v);
hr = pObjHandle->Unwrap(&v);
_ASSERT(v.pdispVal);

_HostProcessRequest *pPR = NULL;
hr = v.pdispVal->QueryInterface(__uuidof(_HostProcessRequest), (void**) &pPR);
_ASSERT(pPR);
// Remember to call Release() on everything!

Figure 8 Registering to Receive the TypeResolveEvent

public class Host {
    private Assembly TLoadHandler(System.Object sender, EventArgs e) {
        // find the assembly or create one and return it.
    }

    public static void Main(string[] args) {
        Host host = new Host();
        AppDomain appDomain = AppDomain.CreateDomain("MyDomain", null, null, null);
        // hook up the handler
        ResolveEventHandler trh = new ResolveEventHandler(host.TLoadHandler);
        appDomain.AddOnTypeResolve(trh);
        // run the app, etc ...
    }
}

Figure 9 Creating the Domain-Level Policy

using System.Security;
using System.Security.Policy;
using System.Security.Permissions;

// Create the domain-level policy
PolicyLevel pl = PolicyLevel.CreateAppDomainLevel();

// include a code group that gives permissions only to assemblies
// that are strong named with s_somePublicKey
UnionCodeGroup snCG = new UnionCodeGroup(
    new StrongNameMembershipCondition(
        new StrongNamePublicKeyBlob(s_somePublicKey), null, null),
    new PolicyStatement(new PermissionSet(PermissionState.Unrestricted)));
pl.RootCodeGroup.AddChild(snCG);

AppDomain appDomain = AppDomain.CreateDomain("MyDomain", null, null, null);
// set our new policy as the domain-level policy
appDomain.SetAppDomainPolicy(pl);
By Michael / Last Updated April 27, 2023

With the continuing development of computer technology, more and more users run into disk problems every day. Hard drive formatting software can play an important role in resolving many of these troubles.

Why Do We Need Software to Format a Hard Drive?

Several situations call for powerful hard drive formatting software. As we all know, formatting a hard drive clears its data quickly. The situations listed below call for formatting tools:

1. If you buy a new disk to replace the old one, don't forget to format the old hard drive afterward to avoid divulging personal information.
2. If a partition is infected by a serious virus, partition formatting software can eliminate the virus easily.
3. If a gamer wants to convert a file system from NTFS to FAT32, hard drive formatting software can help, because some game consoles, such as the Xbox and PS3, only work with the FAT32 file system.

Formatting a Disk with Disk Management

As a Windows snap-in tool, the Disk Management utility can format a hard drive, and it completes the task well. During formatting, the partition label, file system, and cluster size can be set according to the user's needs. However, there is one drawback: if a partition is larger than 32 GB, Disk Management can only format it to NTFS. So if your partition is larger than 32 GB and you want to use it with an Xbox or PS3, you will run into trouble. Is there software that makes up for this limitation?

Best Software to Format a Hard Drive

As an all-in-one free partition manager, AOMEI Partition Assistant Standard Edition handles the task of formatting a hard drive perfectly. Its most notable benefit is that it can format a partition of up to 2048 GB from NTFS to FAT32. The picture below shows its main interface.

AOMEI Partition Assistant Standard Edition

What's more, the prompts during each operation are clear, so users can complete tasks easily and securely. Just download the free partition software and resolve your disk problems as soon as possible.
Datepicker behaves differently with moment-timezone vs luxon-tz

The API data timezone is UTC, the local timezone is UTC+1.

Using moment-timezone (with default options):

[timezonePlugin]="momentTimezone" [controls]="['time']"

Select a time, say 6PM, and the time saved is '2022-12-21T17:00:00.000Z', as expected.

If I switch the timezone plugin to luxon, then a time offset starts happening in the picker (selecting 6PM auto-selects 7PM), which seems really confusing and unpredictable for a user, especially if the timezone difference is more than an hour. I tried playing with different [dataTimezone] & [displayTimezone] configs but it still wasn't as intuitive out of the box as with moment-tz, which I'm trying to get rid of. What am I missing here?

Hello @Geom :wave: Could you please share your relevant code example so I can take a look at it?

With luxon:

<input nz-input mbsc-datepicker [(ngModel)]="newTimestamp" [controls]="['time']" [touchUi]="false" theme="ios" [timezonePlugin]="luxonTimezone">

With moment:

<input nz-input mbsc-datepicker [(ngModel)]="newTimestamp" [controls]="['time']" [touchUi]="false" theme="ios" [timezonePlugin]="momentTimezone">

The plugins are imported as expected in the docs:

import * as luxon from 'luxon';
import { luxonTimezone } from '@mobiscroll/angular';
luxonTimezone.luxon = luxon;

Thanks @Hunor

Hi @Geom, I tried to reproduce the problem you described but failed. Can you tell us which versions of moment, moment-timezone, and luxon you were using in your example? Also, did you experience any JavaScript errors in the console in either case? Thanks, Zoli

Thanks for the quick reply @Zoli. That's weird; was the local timezone also offset? I'm using "luxon": "^1.28.0", "moment-timezone": "^0.5.40", "@mobiscroll/angular": "^5.21.2", "@angular/core": "^14.0.2". No console errors. Any idea what might be causing this? Also, is support for the dayjs timezone plugin planned by any chance? Thanks again.

Thanks for the details @Geom. Are you switching the timezone plugin dynamically on the same component? Or are these totally different pages/components?

Actually I'm only switching the plugin to test the behavior (i.e. just commenting/uncommenting the relevant lines). Ideally I just need one plugin in my bundle (luxon). Does that answer your question? @Zoli
Hibernate Caching Explained

Shared by lmb55 on 2016-07-22

Why use Hibernate caching?

Hibernate is a persistence-layer framework that accesses the physical database frequently. To reduce the number of times the application hits the physical data source, and thereby improve runtime performance, Hibernate provides a caching mechanism. The data in the cache is a copy of the data in the physical data source; at runtime the application reads and writes data in the cache, and at specific moments or events the cache is synchronized with the physical data source.

How Hibernate caching works

The main purpose of the cache is to speed up queries. Hibernate caching falls into three categories: the first-level cache, the second-level cache, and the query cache.

First-level cache

The first-level cache is built into Hibernate and cannot be disabled by the user.

In Hibernate, each thread corresponds to one Session, and a thread can be thought of as one user. The Session-level cache can therefore only be used by a single thread; the first-level cache is bound to that thread. Its lifecycle matches the Session's: once the current Session is closed, the first-level cache disappears. For this reason the first-level cache is also called the Session cache or transaction-level cache. It stores only entity objects, not ordinary object properties. When an object is fetched, it is cached; if the same Session fetches the object again, Hibernate first checks whether the cache contains the object's ID. If it does, the object is taken directly from the cache; otherwise the database is queried and the object is cached afterwards.

The built-in Session cache cannot be unloaded; it is a transaction-scoped cache (a Session object's lifecycle usually corresponds to one database transaction or one application transaction). In the first-level cache, every persistent-class instance has a unique OID.

The difference between a cache and a connection pool: both live in memory and both exist to improve performance, but there is a subtle difference. A pool holds identical items - for example, a pool of 100 Connection objects, where all 100 are interchangeable - whereas a cache holds distinct items: 100 database records read into the cache are all different.

Let's study the first-level cache through concrete examples:

import java.io.Serializable;
import java.util.Iterator;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class CacheDemo {

    private static SessionFactory sessionFactory;
    static {
        sessionFactory = new Configuration().configure().buildSessionFactory();
    }

    /**
     * The same Session issues two load() queries for the same data.
     * The second load() reads from the cache; no SQL is sent to the database.
     */
    public void cacheTest1() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        // first load query
        Student student = session.load(Student.class, 1);
        System.out.println("student.name : " + student.getName());
        System.out.println("*************************************");
        // second load query (uses the cache)
        student = session.load(Student.class, 1);
        System.out.println("student.name : " + student.getName());
    }

    /**
     * The same Session issues two get() queries for the same data.
     * As with load(), the second get() reads from the cache instead of the database.
     */
    public void cacheTest2() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        // first get query
        Student student = session.get(Student.class, 1);
        System.out.println("student.name : " + student.getName());
        System.out.println("*************************************");
        // second get query (uses the cache)
        student = session.get(Student.class, 1);
        System.out.println("student.name : " + student.getName());
    }

    /**
     * The same Session issues two iterate() queries for entity objects.
     *
     * Without a cache, iterate() has the N+1 problem: the first query issues
     * one SQL statement for all the IDs, then one statement per ID to fetch
     * each entity. The second iterate() issues only the ID query and then
     * pulls the entities from the cache, sending no further SQL.
     */
    public void cacheTest3() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        // first iterate query
        Iterator iter = session.createQuery("from Student s where s.id < 5").iterate();
        while (iter.hasNext()) {
            Student student = (Student) iter.next();
            System.out.println("student.name : " + student.getName());
        }
        System.out.println("*************************************");
        // second iterate query (uses the cache)
        iter = session.createQuery("from Student s where s.id < 5").iterate();
        while (iter.hasNext()) {
            Student student = (Student) iter.next();
            System.out.println("student.name : " + student.getName());
        }
    }

    /**
     * The same Session issues two iterate() queries for an ordinary property.
     *
     * Both queries issue N+1 SQL statements, because the first-level cache
     * stores only entity objects, not object properties, so the second
     * iterate() cannot be served from the cache.
     */
    public void cacheTest4() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        // first iterate query
        Iterator iter = session.createQuery("select s.name from Student s where s.id < 5").iterate();
        while (iter.hasNext()) {
            String name = (String) iter.next();
            System.out.println("student.name : " + name);
        }
        System.out.println("*************************************");
        // second iterate query (still hits the database)
        iter = session.createQuery("select s.name from Student s where s.id < 5").iterate();
        while (iter.hasNext()) {
            String name = (String) iter.next();
            System.out.println("student.name : " + name);
        }
    }

    /**
     * Two Sessions, each issuing one load() for the same entity.
     *
     * Both load() calls send SQL to the database. Sessions cannot share the
     * first-level cache: the first Session's cached data is gone once that
     * Session closes, so the second Session must query the database again.
     */
    public void cacheTest5() {
        Session session = null;
        // first Session's load query
        try {
            session = sessionFactory.openSession();
            session.beginTransaction();
            Student student = session.load(Student.class, 1);
            System.out.println("student.name : " + student.getName());
            session.getTransaction().commit();
        } catch (Exception e) {
            session.getTransaction().rollback();
        } finally {
            session.close(); // close the Session
        }
        System.out.println("***********************************");
        // second Session's load query
        try {
            session = sessionFactory.openSession();
            session.beginTransaction();
            Student student = session.load(Student.class, 1);
            System.out.println("student.name : " + student.getName());
            session.getTransaction().commit();
        } catch (Exception e) {
            session.getTransaction().rollback();
        } finally {
            session.close();
        }
    }

    /**
     * The same Session calls save() and then load() for the saved entity.
     *
     * The load() sends no SQL: save() also populates the cache, so the
     * just-saved object is read straight from it (within the same Session).
     */
    public void cacheTest6() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        Student student = new Student().setName("Tom");
        // save() returns the entity's ID
        Serializable id = session.save(student);
        // load the entity we just saved
        student = session.load(Student.class, id);
        System.out.println("student.name : " + student.getName());
    }

    /**
     * Bulk inserts.
     *
     * Large batch inserts can run out of memory, because save() puts every
     * object into the cache. The usual fix is to count the saved objects and
     * flush and clear the cache periodically (here, every 40 objects), so
     * memory never overflows.
     *
     * Note: call flush() to synchronize with the database before clearing
     * the cache; otherwise the saved objects are never written.
     *
     * Tip: bulk inserts are a weak spot of Hibernate. Prefer JDBC (somewhat
     * faster) or a dedicated tool such as Oracle SQL*Loader, which imports
     * data very quickly.
     */
    public void cacheTest7() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        for (int i = 0; i < 100; i++) {
            Student student = new Student().setName("Tom" + i);
            session.save(student);
            // flush every 40 records
            if (i % 40 == 0) {
                session.flush();
                session.clear(); // clear the cached objects
            }
        }
    }
}

Second-level cache

The second-level cache is also called the process-level or SessionFactory-level cache. It can be shared by all Sessions, and its lifecycle matches that of the SessionFactory. Like the first-level cache, it stores only entity objects.

The general flow with a second-level cache:

1. When a conditional query runs, the entity objects it returns are obtained.
2. All of the returned objects are placed into the second-level cache, keyed by ID.
3. When Hibernate later accesses a data object by ID, it first checks the Session's first-level cache; if the object is not found and a second-level cache is configured, it checks the second-level cache; if it is still not found, it queries the database and puts the result into the cache keyed by ID.
4. delete, update, and add operations update the cache at the same time.

Because the SessionFactory's lifecycle spans the whole application, the second-level cache is process- or cluster-scoped, so concurrency problems are possible. An appropriate concurrency-access strategy must therefore be chosen; the strategy provides transaction-isolation levels for the cached data.

The second-level cache is optional - a configurable plug-in that the SessionFactory does not enable by default. Hibernate provides the org.hibernate.cache.CacheProvider interface, which acts as an adapter between the cache plug-in and Hibernate. The second-level cache is fairly involved; Hibernate performs some optimizations and integrates with several third-party cache products. Hibernate's bundled Hashtable-based implementation is for testing only; use a third-party product in production.

Only data that rarely changes is worth caching. Data that changes frequently would still require constant database queries, which defeats the purpose of caching.

What data is suited to the second-level cache?
1) Data that is rarely modified
2) Data that is not critical, where occasional concurrency anomalies are acceptable
3) Data that will not be accessed concurrently
4) Constant data

What data is not suited to the second-level cache?
1) Frequently modified data
2) Data that must never be accessed concurrently, such as financial data
3) Data shared with other applications

As an example of integrating a third-party cache product, consider EHCache; its jar ships in Hibernate's lib directory. A series of caching policies is configured in a file named ehcache.xml, placed on the classpath:

<!-- default settings; all classes follow this unless overridden -->
<defaultCache
    maxElementsInMemory="10000"
    eternal="false"
    timeToIdleSeconds="120"
    timeToLiveSeconds="120"
    overflowToDisk="true"
/>

- maxElementsInMemory: the cache can hold 10000 objects.
- eternal: whether entries never expire; if true, they live forever.
- timeToIdleSeconds: an object is evicted 120 seconds after its last access.
- timeToLiveSeconds: the maximum lifetime of an object (120 seconds); pointless to set if eternal is true.
- overflowToDisk: if true, objects beyond the 10000 limit spill to disk.

A single class can also be configured individually:

<cache name="com.bjpowernode.hibernate.Student"
    maxElementsInMemory="100"
    eternal="false"
    timeToIdleSeconds="10000"
    timeToLiveSeconds="10000"
    overflowToDisk="true"
/>

Then, in hibernate.cfg.xml:

<!-- configure the cache provider -->
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>
<!-- enable the second-level cache (this is also the default) -->
<property name="hibernate.cache.use_second_level_cache">true</property>

The enable line can be omitted, since the second-level cache is enabled by default. You must still manually specify which entity classes go into the cache, either in hibernate.cfg.xml:

<!-- inside <session-factory>, after the <mapping> elements -->
<class-cache class="com.bjpowernode.hibernate.Student" usage="read-only"/>

or in the entity's mapping file:

<!-- inside <class>, before <id> -->
<cache usage="read-only"/>

The usage attribute is the cache-concurrency strategy. Prefer read-only where possible: once data is placed in the cache it may not be modified, and attempting to modify it raises an error. This is fine, because data worth caching is data that rarely changes anyway.

The read-only strategy is the most efficient, because the cache never changes, but it can serve stale data. The only remedy is the cache timeout: with the 120-second timeout above, the object can be modified after 120 seconds, but within that window queries may return stale data, because the database has changed while the cache has not, leaving the two out of sync.

The second strategy, read-write, updates the cache whenever the persistent object changes, so the database and the cache stay in sync. It requires locking, and is slower than read-only.

Let's look at the second-level cache through a concrete example:

/**
 * With the second-level cache enabled: two Sessions, each issuing one
 * load()/get() for the same entity.
 *
 * The second query sends no SQL. Because the second-level cache is
 * configured, Sessions can share its data, so the second load() reads
 * from the cache rather than the database.
 */
public void cacheTest1() {
    Session session = null;
    // first Session's load query
    try {
        session = sessionFactory.openSession();
        session.beginTransaction();
        Student student = session.load(Student.class, 1);
        System.out.println("student.name : " + student.getName());
        session.getTransaction().commit();
    } catch (Exception e) {
        session.getTransaction().rollback();
    } finally {
        session.close(); // close the Session
    }
    System.out.println("***********************************");
    // second Session's load query
    try {
        session = sessionFactory.openSession();
        session.beginTransaction();
        Student student = session.load(Student.class, 1);
        System.out.println("student.name : " + student.getName());
        session.getTransaction().commit();
    } catch (Exception e) {
        session.getTransaction().rollback();
    } finally {
        session.close();
    }
}

Note: the second-level cache is managed by the SessionFactory, which is also responsible for evicting its entries:

sessionFactory.evict(Student.class);    // evict all Student objects from the second-level cache
sessionFactory.evict(Student.class, 1); // evict the Student with id 1 from the second-level cache

If the second-level cache is evicted after the first Session's load() or get(), the second Session's query for the same data will send SQL to the database again: the cache is empty, so the database is the only place left to look.

By default, queried data is placed into both the second-level and the first-level cache. If you do not want queried data cached, you can control the interaction between the two levels:

session.setCacheMode(CacheMode.IGNORE); // stop data moving from the first-level cache into the second-level cache

Testing with the code above: if this call is made before the first Session's load(), the loaded data never reaches the second-level cache. The second Session's load() for the same data then sends SQL to the database, because the second-level cache is empty and the first-level cache cannot be shared between Sessions.

For the bulk-insert scenario discussed earlier, clearing the first-level cache every 40 objects is not enough when a second-level cache is in use; also disable the interaction between the two caches by calling session.setCacheMode(CacheMode.IGNORE) before the save() calls.

Like the first-level cache, the second-level cache stores only entity objects, not ordinary property query results. The Session-level cache alone does not improve performance much, because its lifecycle is so short.

Query cache

The query cache is of limited value. It simply stores the results of list() or iterate() queries, but queries with completely identical conditions are rare in practice, so the hit rate is low and the cached data keeps churning. It is only worth configuring when the same query runs repeatedly with the same conditions and always returns the same results. We won't go into more detail here.

How does Hibernate apply the caches when looking up an object?

When Hibernate accesses a data object by ID, it first checks the Session's first-level cache. If the object is not found and a second-level cache is configured, it checks the second-level cache. If it is still not found, it queries the database and places the result into the cache keyed by ID. Deletes, updates, and inserts update the cache at the same time.

Finally, a comparison of the first-level and second-level caches: [comparison figure omitted]
Practical Fluid Dynamics: Part 1

[In this technical article originally printed in Game Developer magazine, Neversoft co-founder Mick West looks at how to efficiently implement fluid effects - from smoke to water and beyond - in video games, with example code.]

Fluid effects, such as rising smoke and turbulent water flow, are everywhere in nature but are seldom implemented convincingly in computer games. The simulation of fluids (which covers both liquids and gases) is computationally very expensive. It's also mentally expensive, with even introductory papers on the subject relying on the reader to have math skills at least at the undergraduate calculus level.

In this two-part article, I will attempt to address both these problems from the perspective of a game programmer who's not necessarily conversant with vector calculus. I'll explain how certain fluid effects work without using advanced equations and without too much new terminology. I'll also describe one way of implementing the simulation of fluids in an efficient manner without the expensive iterative diffusion and projection steps found in other implementations. A working demonstration in ZIP form with source code accompanies this article; example output from this can be seen in Figure 1.

FIGURE 1 Smoke output is shown from the accompanying code.

Grids and Particles

There are several ways of simulating the motion of fluids, but they all generally divide into two common styles: grid methods and particle methods. In a grid method, the fluid is represented by dividing up the space a fluid might occupy into individual cells and storing how much of the fluid is in each cell.
In a particle method, the fluid is modeled as a large number of particles that move around and react to collisions with the environment, interacting with nearby particles. Let's focus first on simulating fluids with grids. The simplest way to discuss the grid method is in respect to a regular two-dimensional grid, although the techniques apply equally well in three dimensions. At the most basic level, to simulate fluid in the space covered by a grid you need two grids: one to store the density of liquid or gas at each point and another to store the velocity of the fluid. Figure 2 shows a representation of this, with each point having a velocity vector and containing a density value (not shown). The actual implementation of these grids in C/C++ is most efficiently done as one-dimensional arrays. The amount of fluid in each cell is represented as a float. The velocity grid (also referred to as a velocity field, or vector field) could be represented as an array of 2D vectors, but for coding simplicity it's best represented as two separate arrays of floats, one for X and one for Y.   FIGURE 2 The fluid density moves over a field of velocities, with a density stored at each point. In addition to these two grids, we can have any number of other matching grids that store various attributes. Again, each will be stored as a matching array of floats, which can store factors such as the temperature of the fluid at each point or the color of the fluid (whereby you can mix multiple fluids together). You can also store more esoteric quantities such as humidity, for example, if you were simulating steam or cloud formation. Advection The fundamental operation in grid-based fluid dynamics is advection. Advection is basically the moving of things on the grid, but more specifically, it's moving the quantities stored in one array by the movement vectors stored in the velocity arrays. 
It's quite simple to understand what's happening if you think of each point on the grid as an individual particle, with some attribute (density) and a velocity. Then you are familiar with the process of moving a particle by adding the velocity vector to the position vector. On the grid, however, the possible positions are fixed, so all we can do is move (advect) the quantity (density) from one grid point to another. In addition to advecting the density value, we also need to advect all the other quantities associated with the point. This would include additional attributes such as temperature and color, but also the velocity of the point itself. The process of moving a velocity field over itself is referred to as self-advection. The grid does not represent a series of discrete quantities, density or otherwise; it actually represents (inaccurately) a smooth surface, with the grid points just being sample points on that surface. Think of the points as being X,Y vertices of a 3D mesh, with the density field being the Z height. Thus, you can pick any X and Y position on the mesh, and find the Z value at that point by interpolating between the closest four points. Similarly while advecting a value across the grid, the destination point will not fall directly on a grid point, and you'll have to interpolate the value into the four grid points closest to the target position.     FIGURE 3 Forward advection: The value in P moves forward to A, B, C, and D. This dissipates it when moving diagonally.   In Figure 3, point P has a velocity V, which, after a time step of t, will put it in position P´=P+t*V. This point falls between the points A, B, C, and D, and so a bit of P has to go into each of them. Generally, t*V will be significantly smaller than the width of a cell, so one of the points A, B, C, or D will be P itself. 
There are various inaccuracies when advecting an entire grid like this, particularly that quantities dissipate when moving in a direction that is not axis-aligned. But this inaccuracy can be turned to our advantage. Stam's Advection Programmers looking into grid-based fluid dynamics for the first time will most often come across the work of Jos Stam and Ron Fedkiw, particularly Stam's paper "Real-Time Fluid Dynamics for Games," which he presented at the 2003 Game Developers Conference. He describes a very short procedure for making a grid-based fluid simulator. In particular, he shows how to implement the advection step using what he calls a "linear backtrace," which simply means that instead of moving the point forward in space, he inverts the velocity and finds the source point in the opposite direction, essentially back in time. He then takes the interpolated density value from that source (which, again, will lay between four actual grid points), and moves the value into the point P. See Figure 4 for an example.     FIGURE 4 Reverse advection: The new value in P is gathered from E, F, G, and H, one of which (H) is usually the same point as P. Stam's approach produces visually pleasing results, yet suffers from a number of problems. First, the specific collection of techniques discussed may be covered by an existing patent (U.S. #6,266,071), although as Stam notes, backtracing dates back to 1952. Check with a lawyer if this is a concern to you. On a more practical note, the advection alone as Stam describes it simply does not work accurately unless the velocity field is smooth in a way termed mass conserving, or incompressible. Consider the case of a vector field in which all the velocities are zero except for one. The velocity cannot move (advect) forward through the field, since there's nothing ahead of it to "pull" it forward. Instead, the velocity simply bleeds backward. 
The resultant velocity field will terminate at the original point, and any quantities moving through this field will end up there. We can solve this particular problem by adding a step to the algorithm termed projection. Projection essentially smooths out the velocity by making it incompressible, allowing the backtracing advection to work perfectly and making the paths formed by the velocity "swirly," as if it were real water. The problem with this approach is that projection is quite expensive, requiring 20 iterations over the velocity field in order to "relax" it to a usable state.

Another performance problem with Stam's approach is that he uses a diffusion step, which also involves 20 iterations over a field. This step is necessary to allow the gas to spread out from areas of high density to areas of low density. If the diffusion step were missing, solid blocks of the fluid would remain solid as they moved over the velocity field. Diffusion is an important cosmetic step.

Accounting Advection

If a velocity field is not mass conserving, then some points will have multiple velocity vectors from other points pointing toward them. If we simply move our scalar quantities (like density) along these vectors, there will be multiple quantities going to (or coming from) the same point, and the result will be a net loss or gain of the scalar quantity. So, the total amount of something such as the density would either fade to zero or gradually (or perhaps explosively) increase. The usual solution to this problem is to make sure the vector field is incompressible and mass conserving. But as mentioned before, that is computationally expensive.

One partial solution is to make the advection step mass conserving, regardless of whether the velocity field is actually mass conserving. The basis of this solution is to always account for any movement of a quantity by subtracting in one place what is added in another.
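The kind of iterative relaxation that both projection and diffusion rely on can be illustrated with a simple diffusion pass. This is a sketch only: the Jacobi-style update rule, the rate parameter, and the fixed boundary cells are my own assumptions, not Stam's code; only the iteration count mirrors the 20 passes mentioned above.

```python
# Diffusion by iterative relaxation: repeatedly blend each interior
# cell toward the average of its four neighbours.

def diffuse(field, rate, iterations=20):
    n = len(field)
    cur = [row[:] for row in field]
    for _ in range(iterations):
        nxt = [row[:] for row in cur]   # Jacobi: read old, write new
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                avg = (cur[i - 1][j] + cur[i + 1][j]
                       + cur[i][j - 1] + cur[i][j + 1]) / 4.0
                # Move a fraction `rate` of the way toward the local average.
                nxt[i][j] = cur[i][j] + rate * (avg - cur[i][j])
        cur = nxt
    return cur
```

Each pass spreads a concentration spike a little further into its neighbourhood, which is why a missing diffusion step leaves "solid blocks" of fluid intact as they move.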
Advection uses a source and destination buffer to keep it independent of update order. In Stam's implementation, the destination buffer is simply filled one cell at a time by combining a value from four cells in the source buffer and placing this value into the destination buffer. To properly account for compressible motion, we need to change this copying to accumulating, and initially make the destination buffer a copy of the source buffer. As we move quantities from one place to another, we can subtract them in the source and add them in the destination.

With the forward advection in Figure 3, we're moving a quantity from point P to points A, B, C, and D. To account for this, we simply subtract the original source value in P from the destination value in P, and then add it (interpolated appropriately) to A, B, C, and D. The net change on the destination buffer is zero.

With the reverse advection in Figure 4, as used by Stam, the solution would initially seem to be symmetrically the same: Subtract the interpolated source values in E, F, G, and H from the destination buffer, and add them to P. While this method works fine for signed quantities such as velocity, the problem here is that quantities such as density are positive values. They cannot drop below zero, as you cannot have a negative quantity of liquid.

Suppose that point E was one source point for two destinations P1 and P2, both of which wanted 0.8 of E. If we follow our initial plan and subtract 0.8*E from E and add 0.8*E to both P1 and P2, the net effect is zero, but now the value at E is negative. If we clamp E to zero, then there is a net gain of 0.6*E. If we subtract 0.8*E from the source value of E after updating P1, then when we update P2, it will only get 0.8*0.2*E, when clearly both P1 and P2 should get equal amounts. Intuitively, it seems they should both get 0.5*E, and the resulting value in E should be zero, leading to a net zero change.
To achieve this result, we create a list that, for each destination point, notes its four points of origin and the fraction of each origin point it wants. Simultaneously, we can accumulate the fractions asked of each source point. In an ideal world, these would add up to one, as the entire value is being moved somewhere (including partially back to where it started). But with our compressible field, the amount of the value in each point that is being moved could be greater than or less than one. If the total fraction required is greater than one, then we can simply divide all the requested fractions by that total, and they will sum to one. If it is less than one, the requesting points can have the full amount requested. We should not scale up in this case, as that would lead to significant errors.

With the mass conservation of advection fully accounted for in both directions, it turns out that neither forward nor backward linear advection alone will produce smooth results. After some experimentation, I determined that applying forward advection followed by backward advection works very well and gives a smooth and artifact-free flow of fluid over a compressible velocity field.

Now What?

We can now perform both forward and reverse advection in a mass-conserving manner, meaning we can move fluid around its own velocity field. But even though our velocity field does not need to be mass conserving, we actually still want it to be, since the velocity fields of real-world fluids generally are incompressible. Stam solves this problem by expensively forcing the field to be fully mass conserving after every change; that step is necessary in his scheme, since his reverse advection requires it. The key difference is that since our advection step does not require the field to be mass conserving, we're really only doing it for cosmetic purposes. To that end, any method that rapidly approaches that state over several time steps will suit our purpose.
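The accounting scheme for reverse advection can be sketched as a two-pass process: first gather every point's requests and total up what is asked of each source, then move the values, scaling down any oversubscribed source. This is an illustrative sketch of the idea in the text, with names and details of my own invention, not the article's accompanying code.

```python
# Mass-conserving reverse advection: gather requests, then scale any
# source asked for more than its full value, so nothing goes negative
# and the total quantity on the grid is preserved.

def advect_reverse_conserving(density, vel_x, vel_y, t):
    n = len(density)
    requests = []                            # (dest, source, fraction)
    asked = [[0.0] * n for _ in range(n)]    # total fraction asked of each source
    # Pass 1: record which sources each point wants, and how much.
    for i in range(n):
        for j in range(n):
            x = min(max(i - t * vel_x[i][j], 0.0), n - 1.001)
            y = min(max(j - t * vel_y[i][j], 0.0), n - 1.001)
            i0, j0 = int(x), int(y)
            fx, fy = x - i0, y - j0
            for si, sj, w in ((i0, j0, (1 - fx) * (1 - fy)),
                              (i0 + 1, j0, fx * (1 - fy)),
                              (i0, j0 + 1, (1 - fx) * fy),
                              (i0 + 1, j0 + 1, fx * fy)):
                if w > 0.0:
                    requests.append(((i, j), (si, sj), w))
                    asked[si][sj] += w
    # Pass 2: destination starts as a copy; subtract at the source and
    # add at the destination, dividing by the total when it exceeds one.
    out = [row[:] for row in density]
    for (di, dj), (si, sj), w in requests:
        scale = 1.0 / asked[si][sj] if asked[si][sj] > 1.0 else 1.0
        moved = density[si][sj] * w * scale
        out[si][sj] -= moved
        out[di][dj] += moved
    return out
```

In the P1/P2 example above, both requests of 0.8 would be divided by the total of 1.6, so each destination receives 0.5*E and the source ends at zero.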
That method, and the method of diffusion, can be found in the accompanying code, and I will discuss how they work in the next instalment of the article.

[EDITOR'S NOTE: This article was independently written and commissioned by Gamasutra's editors, since it was deemed of value to the community. Its publishing has been made possible by Intel, as a platform and vendor-agnostic part of Intel's Visual Computing microsite.]
Written by Jacky Chou

Jumping To The Start Of The Next Data Entry Row In Excel

Key Takeaway:

• Jumping to the start of the next data entry row in Excel can save time and increase accuracy. By using a shortcut key, pressing ENTER, or navigating with the arrow keys, users can quickly move from one row to the next without losing their place in the spreadsheet.

• The shortcut for jumping to the next data entry row is to finish the row with "Enter" after moving across it with "Tab". Excel then returns to the start of the next row, so users can keep entering data without reaching for the mouse.

• Users can also use the ENTER key on its own to move to the next row, which automatically selects the cell directly below the current cell. This method is useful for quick data entry but may not be as efficient as the Tab-and-Enter combination or the arrow keys for longer spreadsheets.

Do you want to quickly move to the start of the next data entry row when working with large datasets in Microsoft Excel? This blog will help you optimize your data entry workflow. You'll learn how to swiftly jump to the beginning of each new row, saving you valuable time.

Jump to next row

Jump to the next data entry row in Excel with ease! Learn about some simple shortcuts that can help you enter data quicker and more efficiently. In this section, we'll discuss the ways you can go to the next row: keyboard shortcuts and the Enter key are two methods. Try them out!

Shortcut key for jumping to next row

To conveniently move to the next data entry row in Excel, you can use a keystroke. Follow these 3 steps to jump to the start of the next data entry row:

1. Click on the cell that you want to update or edit.
2. Press "Tab" on your keyboard to move to the cell to its right.
3. Then, press "Enter" on your keyboard. This will take you directly to the first column of the next row.
It's important to note that if you have any filters applied or hidden cells within consecutive rows, this shortcut may not work effectively. It's essential to be able to navigate quickly and efficiently in Excel when dealing with large amounts of data. For instance, imagine sorting through a massive sales spreadsheet with thousands of rows: every second counts while continually scrolling and searching for particular pieces of information.

A friend of mine recently landed his first job out of college, working as an analyst at a large corporation. He felt pretty overwhelmed by all the new tools he was required to learn for this role, including Excel. One day he discovered this shortcut by accident while pressing several keys, and then realized how much easier it made his daily data entry tasks! It goes without saying that keystrokes can save time and hassle once we become familiar with them, giving us more time and room for other critical work areas.

Why take the stairs when the Enter key can elevate you to the next row?

Using the Enter key to move to the next row

When entering data in a spreadsheet, it is essential to move to the next row smoothly. Excel has several keys that can help you do that without using the mouse. Using the Enter key is one of them. Here is a 4-step guide to using the Enter key to jump to the next row in Excel:

1. Open your Excel sheet and select a cell where you want to start entering data.
2. Type data into this cell.
3. Press the Enter key, which will move you to the next row of your worksheet.
4. The active cell will move down one cell each time you press Enter, allowing you to enter continuous data entries comfortably.

Additionally, if you want to enter text on multiple lines within a single cell, use ALT + ENTER; pressing these keys moves you down one line inside the same cell. When using this method continuously, ensure AutoCorrect is enabled to automatically complete common words and phrases.
Another good option is creating data drop-down lists, which speed up data entry while decreasing errors; make sure only unique values are entered in them. Using the Enter key may seem like a small thing, but it can add speed and efficiency when working with large amounts of data in any Excel sheet.

Just when you thought Excel couldn't get any more exciting, here come the other ways to navigate data.

Other ways to navigate data

Navigating data in Excel can be seamless, and there are alternatives to mouse clicks. This section covers two of them: using the arrow keys to move up and down, and going to a specific row. These tips can save time and boost productivity when dealing with big spreadsheets.

Using the arrow keys to move up or down

To quickly scan or locate entries in a spreadsheet, keyboard navigation can substantially save time and effort. One common method is using the arrow keys to traverse through rows.

A Three-Step Guide for Keyboard Navigation

1. Select the cell from which you would like to begin traversing rows.
2. Press the down or up arrow keys to move downwards or upwards through each row in turn.
3. To stop before a specific data entry, or once you have finished traversing, press the Enter key.

In addition to moving up and down one row at a time, this technique also allows for moving horizontally between consecutive columns. However, it might not be as efficient as other navigation methods when searching through vast datasets because of its slow pace.

An Interesting Fact

Arrow-key navigation did not become common in spreadsheets until Lotus Development's Lotus 1-2-3 shipped with the feature in 1983. The innovation has since become standard in subsequent products from various software houses, such as Microsoft's Excel.
Excel: the only place where you can travel to a specific row without ever leaving your seat.

Going to a specific row

To easily navigate to a particular row in Excel, jump to a specific entry point. This approach can save time and take you directly to where you want to be within a given spreadsheet. Below is a visual aid outlining how to jump to a specific entry point in Excel:

Shortcut Key         Action
Ctrl + G             Opens the 'Go To' dialogue box
Enter row number     Inputs the desired row number
Press Enter          Navigates directly to the specified row

It is important to keep in mind that this shortcut only works when there are no filtered cells. Furthermore, it can be used for navigating around workbooks too: just enter the sheet and cell references you'd like.

Lastly, another workaround for quickly finding the start of the next data entry row is using conditional formatting. Setting it up can allow users to see blank rows immediately as they come across them, indicating exactly where new data should begin. By selecting Conditional Formatting > New Rule > Use Formula To Determine Which Cells To Format > entering "" (two quotation marks) into the formula bar > setting up your formatting as preferred, one can ensure they identify any large blocks of blank lines quickly and easily.

Five Facts About Jumping to the Start of the Next Data Entry Row in Excel:

• ✅ To jump to the start of the next data entry row in Excel, press the "Tab" key. (Source: Excel Easy)
• ✅ If your spreadsheet has filters enabled, pressing "Ctrl+Arrow Down" will jump to the next filtered row, even if it's hidden. (Source: Excel Campus)
• ✅ You can use the "Go To" feature in Excel to jump to a specific row or column by entering its address.
(Source: How To Geek)
• ✅ To quickly jump back to the previous data entry row, press "Shift+Tab." (Source: Excel Jet)
• ✅ Excel also offers various options for navigating and selecting data more efficiently, such as using keyboard shortcuts or the "Name Box." (Source: Microsoft Support)

FAQs about Jumping To The Start Of The Next Data Entry Row In Excel

What is jumping to the start of the next data entry row in Excel?
It is a quick way to navigate to the first cell of the next empty row for data entry.

How do I jump to the start of the next data entry row in Excel using the keyboard?
Press the "Ctrl + Down Arrow" keys together. This takes you to the last filled cell in the current column; pressing the Down Arrow once more puts you in the first cell of the next empty row, ready for data entry.

Can I jump to the start of the next data entry row in Excel using the mouse?
Yes. Simply click on the first cell of the current row and drag the mouse down to the first cell of the next empty row.

Is it possible to jump to a specific data entry row in Excel?
Yes. Enter a reference to the row (for example, A100 for row 100) in the "Name Box" located next to the Formula Bar. Pressing Enter will take you straight to that row.

Can I customize the keyboard shortcut for jumping to the start of the next data entry row in Excel?
You can change how the Enter key moves the selection by going to "File" > "Options" > "Advanced" > "Editing options" > "After pressing Enter, move selection". Here you can choose the direction in which the selection moves after pressing Enter.

Is there a quicker way to jump to the next data entry row in Excel?
Yes, there is a quicker way to jump to the next data entry row in Excel.
You can use the "Tab" key to move to the next data entry cell in the same row, and then use the "Ctrl + Down Arrow" keyboard shortcut to move to the start of the next empty row.
powernap (2.6-0ubuntu1) natty; urgency=low

  * debian/copyright: Update upstream authors and license years.
  * powernap/monitors/IOMonitor.py: If processes do not have a command line,
    search regex in the 'Name:' field of /proc/<pid>/status (LP: #735452)
  * actions/cpu_frequency: Fix saving/restoring of wrong governor (LP: #743682)
    Thanks to Mathieu Berard for the patch.
    - Additionally, save 'ondemand' as default when acpi-support and ondemand
      are run on boot to handle special case when running on battery.
  * debian/powernap.{preinst,postinst}: Add logic to handle the upgrade of the
    config file as format has changed. (LP: #744588)
    - install copy of config file in /usr/share/powernap to help with this.

 -- Andres Rodriguez  Mon, 28 Mar 2011 17:38:34 -0400

powernap (2.5-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * Add support to load confs from /etc/powernap/config.d. Added to leverage
    compatibility with other applications, such as Eucalyptus. (LP: #711587)

 -- Dustin Kirkland  Mon, 14 Mar 2011 17:37:16 -0500

powernap (2.4-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * Fix wall message timestamp (LP: #718242)
  * Add powerwake-now support. Sends signal to powernapd to wake up from
    powersave mode if the daemon is in powersave.
  * debian/powernap.upstart: Add pre-stop.
    - Issues 'powerwake-now' to wake up system if stopping while in powersave.
    - Add 'sleep 3' to wait for recover action to take place before actually
      stopping powernapd. Otherwise, it won't recover.
  * bin/powernapd: Set flags back to initial values if GRACE PERIOD completed.
    Ensures that flags have correct values if powerwake-now signal was
    received.

  [ Dustin Kirkland ]
  * sbin/powernapd: only display wall message if action is other than
    powersave

 -- Dustin Kirkland  Thu, 17 Feb 2011 07:36:20 -0600

powernap (2.3-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * config,sbin/powernapd,powernap/powernap.py: New option to decide if
    powernapd will WARN the user via a wall message. Default yes.
  * sbin/powernapd: Use a Timestamp when sending the wall message.
  * InputMonitor now only enables/disables keyboard and mouse monitoring.
    - powernap/monitors/InputMonitor.py: Match regex in /by-id for kbd.
    - powernap/powernap.py: Support only mouse/keyboard
    - config: Only add options for mouse/keyboard.
  * Enabled InputMonitors by default.
    - config: Enable.
    - powernap.py: Only try to initialize if "mouse" or "kbd" are connected
      by looking into /dev/input/by-id.
  * powernapd: Add SIGIO handler. Ignores the failure caused when
    disconnecting a monitored USB InputDevice, which caused powernapd to stop
    running.

  [ Dustin Kirkland ]
  * man/powernap-action.8, man/powernap_calculator.1: fix lintian warnings,
    escape hyphens
  * debian/control: update package descriptions

 -- Dustin Kirkland  Tue, 08 Feb 2011 23:10:40 -0600

powernap (2.2-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * actions:
    - cpu_smp_sched,eth_wol: Dropped. pm-utils provides its own version.
    - cpu_frequency: Store default governor to reset it to default value.
  * debian/rules: run dh_installdeb *after* dh_pycentral.
  * Add lintian-overrides (LP: #706974)
  * sbin/powernap: Fix pm-powersave command path.

 -- Dustin Kirkland  Fri, 28 Jan 2011 10:46:38 -0600

powernap (2.1-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * config:
    - Default ACTION to pm-powersave instead of best-effort.
    - Enable WoLMonitor and ConsoleMonitor by default
    - Disable ProcessMonitor by default.
    - Change powersave defaults to 0 instead of 4. Update related files.
  * powernap/monitors:
    - Rename RemoteMonitor to UDPMonitor.
    - Add WoLMonitor and support to run it.
    - Add ConsoleMonitor and support to run it.
  * powernap/powernap.py:
    - Improve config loading method for Monitors.
  * sbin/powernapd: Change approach of powernapd_loop.
    - GRACE period is time between ABSENT_SECONDS and
      (ABSENT_SECONDS - GRACE_SECONDS)
    - Only send a Wall message when on GRACE Period.
    - Enable Monitors to run at all times. Specifically when in PowerSave.
  * man/powernapd.8: Update to list available Monitors with description.
  * Update Copyright years/email in some of the files.

  [ Dustin Kirkland ]
  * config, powernap/monitors/TCPMonitor.py, powernap/powernap.py:
    - add a TCP Monitor
  * config, powernap/monitors/LoadMonitor.py, powernap/powernap.py:
    - add a system Load Monitor
  * debian/control: bump standards version
  * debian/rules: make sure powernapd starts on install, LP: #705959
  * config: update service restart method

 -- Dustin Kirkland  Thu, 27 Jan 2011 16:38:22 -0600

powernap (2.0-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * powernap/monitors: Re-work Monitors to integrate them with PowerNap.
  * powernap/powernap.py: Minor fixes to not crash when loading monitors.
  * sbin/powernapd: Integrate Monitors approach to work with Daemon.
  * Update copyright headers for some files
  * Update manpages and add one for powernap-action.
  * Update packaging:
    - debian/control: Add python related fields.
    - debian/rules: Add dh_pycentral related rules.
  * Add new package to contain new modules and monitors:
    - debian/control: Add package powernap-common.
    - debian/powernap-common.install: Install new files (PowerNap class and
      Monitors); Move action scripts installation to this package.
  * sbin/powernapd: Improve WoL monitor to listen on eth* interfaces.
  * config: Update and cleanup.
  * sbin/powernapd,bin/powerwake: Fix generation of WoL Magic Packet.
    Apparently, more data than required was added. (LP: #705943)

 -- Dustin Kirkland  Sat, 22 Jan 2011 09:09:04 -0600

powernap (1.12-0ubuntu1) natty; urgency=low

  [ Andres Rodriguez ]
  * Revert Adam's changes to some files (rev148 - original PowerNap behavior)
    - powernapd, powernapd.8, action, config.
  * Re-organize the source and update the packaging:
    - sbin: Moved powernap, powernapd, powernapd-now.
    - bin: Moved powerwake, powernap_calculator.
    - man: Moved manpages here.
    - debian/{powernap,powerwake}.install: Update accordingly.
    - debian/{powernap,powerwake}.manpages: Update accordingly.
    - powernap/monitors: Moved Adam's Monitors.
  * Add new powersave method, using pm-powersave:
    - actions/: Add a set of scripts to reduce power consumption
    - powernap-action: Add tool to enable/disable and list powersave actions.
    - debian/powernap.install: Install new files.
  * Provide ability to select a PowerNap method to perform.
    - config: Add config variable to select method.
    - powernap/{powernap,powernapd}: Make necessary changes.
  * Add WoL monitor to 'wake up' for when 'pm-powersave' action is taken.
    - sbin/powernapd: Add WoL Monitor.
    - sbin/powernap: Take pm-powersave action correctly.
  * Add class to handle config:
    - powernap/powernap.py: Add class that handles config and initialization
      of different monitors

 -- Dustin Kirkland  Tue, 11 Jan 2011 12:59:13 -0600

powernap (1.11-0ubuntu1) natty; urgency=low

  [ Adam Sutton ]
  * reworked config file and config file parsing
  * modularized the monitoring to a set of logical plugins
  * added a monitor on a UDP port for staying awake

  [ Nobuto MURATA ]
  * powerwake_completion: add bash completion from $HOME/.cache/ethers,
    LP: #675445
  * debian/powernap.upstart: fix LP: #598241 cannot enable Wake-on-LAN when
    ifconfig output is translated

  [ Dustin Kirkland ]
  * powerwake: test if home dir is writable (might not be, eg: www-data)

 -- Dustin Kirkland  Wed, 17 Nov 2010 17:15:25 -0600

powernap (1.10-0ubuntu1) maverick; urgency=low

  [ Nobuto MURATA ]
  * debian/powerwake.install, powerwake_completion: add bash completion for
    powerwake, LP: #551073

  [ Dustin Kirkland ]
  * powerwake: create the ~/.cache/ethers file, if it does not already exist,
    LP: #582381

 -- Dustin Kirkland  Tue, 18 May 2010 12:33:32 -0500

powernap (1.9-0ubuntu1) lucid; urgency=low

  * debian/powernap.upstart: fix LP: #531950
    - fix ethtool regex
    - allow for admin customized ethtool script, when powernap's is incorrect
      or undesired

 -- Dustin Kirkland  Thu, 04 Mar 2010 11:04:49 -0600

powernap (1.8-0ubuntu1) lucid; urgency=low

  * powerwake:
    - test ethers file for writability, LP: #458163
    - since non-priv users cannot write to globally cached ethers, support
      local user eth caches
  * powernap-now, debian/install:
    - add a powernap-now utility, for sending the 'now' signal to the daemon
  * debian/powernap.manpages, powernap-now.8:
    - add powernap-now manpages
  * debian/powernap.init, debian/powernap.upstart, debian/rules:
    - upstart-ify powernap

 -- Dustin Kirkland  Sat, 06 Feb 2010 22:34:34 -0600

powernap (1.7-0ubuntu1) karmic; urgency=low

  * debian/powernap.init: enable WoL at boot on interface(s) that support
    wake-on-lan, to ensure that a powernapping system can be awoken later,
    LP: #445950

 -- Dustin Kirkland  Wed, 19 Aug 2009 00:19:13 -0500

powernap (1.6-0ubuntu1) karmic; urgency=low

  [ Dan Nurmi ]
  * powerwake: add support for a broadcast argument, add getopt support

  [ Dustin Kirkland ]
  * powerwake.1: updated to handle Dan's extensions and new arguments
  * powernap_calculator, powernap_calculator.1: new script to help determine
    the expected power savings using powernap in a cloud environment; manpage
    documentation added
  * debian/powernap.logrotate: rotate the powernap log
  * powernapd: overhaul powernap's logging using python's built-in logging
    functionality
  * debian/control: bump standards version

 -- Dustin Kirkland  Tue, 18 Aug 2009 19:12:43 -0500

powernap (1.5-0ubuntu1) karmic; urgency=low

  * powerwake: handle more gracefully the lack of an /etc/ethers file
  * powernapd: fix timestamp

 -- Dustin Kirkland  Fri, 10 Jul 2009 17:37:54 -0500

powernap (1.4-0ubuntu1) karmic; urgency=low

  * powerwake: maintain and use a cache of mac addresses, in
    /var/cache/powerwake/ethers; test is_mac() before adding to arp hash
  * debian/powerwake.dirs, debian/powerwake.install, debian/powerwake.postinst,
    debian/control: add a separate powerwake package
  * debian/manpages, debian/powerwake.manpages: added manpage debhelper files
  * debian/control: recommend ethtool, which might be necessary to enable
    wake-on-lan on your ethernet card; powernap depends on pm-utils
  * powernap -> powernapd, debian/init, debian/install: rename the python
    powernap daemon 'powernapd'
  * powernap.1 -> powernapd.8: renamed, note ethtool
  * powernap: new script that will either take a specified action, or run one
    of (pm-suspend, pm-hibernate, poweroff)
  * powernap.8: document new script
  * action: conffile describing what should go there
  * debian/init: drop stdout on 'now' status check
  * debian/*: use powernap.* and powerwake.* to remove any ambiguity
  * powernapd: look for activity on /dev/* consoles and in /proc/interrupts
    during the grace period, such that any activity will cancel the powernap
    operation
  * powerwake, powerwake.1: update to allow for static configuration override
    in /etc/ethers

 -- Dustin Kirkland  Thu, 09 Jul 2009 17:28:44 -0500

powernap (1.3-0ubuntu1) karmic; urgency=low

  * config: add default value statement to each item; add sane defaults; add
    grace period section
  * powernap: add a system-wide warning message using 'wall', and a grace
    seconds interval to cancel the operation of 60 seconds (by default); move
    to using global variables for options defined in the config file
  * debian/control: depend on bsdutils for 'wall' utility

 -- Dustin Kirkland  Mon, 29 Jun 2009 14:49:55 -0700

powernap (1.2-0ubuntu1) karmic; urgency=low

  * config, debian/control, powernap.1, powernap.py: lower the default polling
    period from 10 seconds to 1 second; polling /proc is cheap, and empirical
    testing has shown a negligible performance impact; add a note about DEBUG
  * powernap.py -> powernap:
    - abstract take_action() to a function, add a handler for --now
    - daemonize within the python script
    - eliminate the shell wrapper
    - log to /var/log/powernap.*
    - add signal handling for "now", USR1
  * powernap.1: add log files
  * debian/init: add 'now' action and signal passing
  * powerwake: initial cut at powerwake utility
  * powerwake.1: initial cut at powerwake documentation

 -- Dustin Kirkland  Fri, 26 Jun 2009 17:23:22 -0500

powernap (1.1-0ubuntu1) karmic; urgency=low

  * debian/copyright: updated for Ubuntu inclusion

 -- Dustin Kirkland  Fri, 12 Jun 2009 12:53:54 -0500

powernap (1.0-0ubuntu1) karmic; urgency=low

  [ Initial release ]
  * powernap: shell wrapper script
  * powernap.py: python daemon
  * config: global configuration file
  * powernap.1: manpage documentation

 -- Dustin Kirkland  Thu, 11 Jun 2009 17:30:16 -0500
Round to next smaller multiple of 8

Last Updated : 04 Jan, 2019

Given an unsigned integer x, round it down to the next smaller multiple of 8 using bitwise operations only.

Examples:

Input : 35
Output : 32

Input : 40
Output : 40
As 40 is already a multiple of 8, no modification is done.

Solution 1: A naive approach to solve this problem using arithmetic operators is:

Let x be the number; then x = x - (x % 8)

This will round down x to the next smaller multiple of 8. But we are not allowed to use arithmetic operators.

Solution 2: An efficient approach to solve this problem using a bitwise AND operation is:

x = x & (-8)

This will round down x to the next smaller multiple of 8. The idea is based on the fact that the last three bits in a multiple of 8 must be 0. Below is the implementation of the above idea:

C++

// CPP program to find next smaller
// multiple of 8.
#include <bits/stdc++.h>
using namespace std;

int RoundDown(int a)
{
    return a & (-8);
}

int main()
{
    int x = 39;
    cout << RoundDown(x);
    return 0;
}

Java

// Java program to find next smaller
// multiple of 8.
import java.io.*;

class GFG {
    static int RoundDown(int a)
    {
        return a & (-8);
    }

    public static void main(String[] args)
    {
        int x = 39;
        System.out.println(RoundDown(x));
    }
}
// This code is contributed by ajit

Python3

# Python 3 program to find next
# smaller multiple of 8.

def RoundDown(a):
    return a & (-8)

# Driver Code
if __name__ == '__main__':
    x = 39
    print(RoundDown(x))

# This code is contributed
# by Surendra_Gangwar

C#

// C# program to find next smaller
// multiple of 8.
using System;

class GFG {
    static int RoundDown(int a)
    {
        return a & (-8);
    }

    public static void Main()
    {
        int x = 39;
        Console.Write(RoundDown(x));
    }
}
// This code is contributed
// by Akanksha Rai

PHP

<?php
// PHP program to find next smaller
// multiple of 8.

function RoundDown($a)
{
    return ($a & (-8));
}

// Driver Code
$x = 39;
echo RoundDown($x);

// This code is contributed by jit_t
?>

Output: 32

Time Complexity: The time complexity of this approach is O(1).
Space Complexity: The space complexity of this approach is O(1).
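As a quick cross-check of the two solutions, the following snippet (not part of the original article) verifies that the bitwise form agrees with the arithmetic form for non-negative integers:

```python
# In two's complement, -8 is ...11111000, so ANDing with it clears the
# low three bits, i.e. rounds a non-negative x down to a multiple of 8.

def round_down(x):
    return x & (-8)

# The bitwise form agrees with the arithmetic form x - (x % 8):
for x in range(1000):
    assert round_down(x) == x - (x % 8)

print(round_down(35), round_down(40))  # prints "32 40"
```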
// Выполняет формирование таблицы значений для загрузки ее в табличную часть Начисления
//
// Параметры:
//  Начисления - выборка результата запроса
//  ОкончаниеПериодаЗаполнения - дата периода, для которого выполняем заполнение документа
//
// Возвращаемое значение:
//  таблица значений
//
Функция СформироватьТаблицуНачислений(Начисления, ОкончаниеПериодаЗаполнения, ВремТЗНачисления, МассивВалютСНезаданнымКурсом)

	ТЗНачисления = Новый ТаблицаЗначений();
	ТЗНачисления.Колонки.Добавить("Сотрудник");
	ТЗНачисления.Колонки.Добавить("Физлицо");
	ТЗНачисления.Колонки.Добавить("ВидРасчета");
	ТЗНачисления.Колонки.Добавить("КодВычета");
	ТЗНачисления.Колонки.Добавить("Основное"); // Признак основного начисления
	ТЗНачисления.Колонки.Добавить("Показатель1");
	ТЗНачисления.Колонки.Добавить("Показатель2");
	ТЗНачисления.Колонки.Добавить("Показатель3");
	ТЗНачисления.Колонки.Добавить("Показатель4");
	ТЗНачисления.Колонки.Добавить("Показатель5");
	ТЗНачисления.Колонки.Добавить("Показатель6");
	ТЗНачисления.Колонки.Добавить("Размер");
	ТЗНачисления.Колонки.Добавить("ДатаНачала", ОбщегоНазначения.ПолучитьОписаниеТиповДаты(ЧастиДаты.ДатаВремя));
	ТЗНачисления.Колонки.Добавить("ДатаНачалаСобытия", ОбщегоНазначения.ПолучитьОписаниеТиповДаты(ЧастиДаты.ДатаВремя));
	ТЗНачисления.Колонки.Добавить("ДатаОкончания", ОбщегоНазначения.ПолучитьОписаниеТиповДаты(ЧастиДаты.ДатаВремя));
	ТЗНачисления.Колонки.Добавить("ПодразделениеОрганизации"); // для заполнения графика и вида учета времени
	ТЗНачисления.Колонки.Добавить("ГрафикРаботы");
	ТЗНачисления.Колонки.Добавить("СуммированныйУчетРабочегоВремени");
	ТЗНачисления.Колонки.Добавить("СпособРасчета");
	ТЗНачисления.Колонки.Добавить("ВидВремени");
	// Билет 9 ++
	ТЗНачисления.Колонки.Добавить("ВидАттестата");
	// Билет 9 --

	// Массив незакрытых строк таблицы значений, т.е.
тех, которым не проставлена дата окончания НезакрытыеСтроки = Новый Массив; // Текущие значения полей выборки для отслеживания изменения работника и даты назначения ТекущийСотрудник = Справочники.СотрудникиОрганизаций.ПустаяСсылка(); ТекущаяДатаНазначения = '00010101'; ТекущаяЧасоваяСтавкаОсновногоНачисления = 0; // признаки того, что вечерние или ночные часы введены по плановым начислениям ЕстьПлановаяДоплатаЗаНочные = Ложь; ЕстьПлановаяДоплатаЗаВечерние = Ложь; СтрокиНачисленийТекущегоНазначения = Новый Массив; СтрокиВечернихТекущегоНазначения = Новый Массив; СтрокиНочныхТекущегоНазначения = Новый Массив; ДоплатыЗаНочныеВечерниеЧасы = ПолучитьДоплатыЗаНочныеВечерние(); ДоплатаЗаНочные = ДоплатыЗаНочныеВечерниеЧасы.ДоплатаЗаНочныеЧасы; ДоплатаЗаВечерние = ДоплатыЗаНочныеВечерниеЧасы.ДоплатаЗаВечерниеЧасы; ПроцентДоплатыЗаНочныеЧасы = ДоплатыЗаНочныеВечерниеЧасы.ПроцентДоплатыЗаНочныеЧасы; ПроцентДоплатыЗаВечерниеЧасы = ДоплатыЗаНочныеВечерниеЧасы.ПроцентДоплатыЗаВечерниеЧасы; Пока Начисления.Следующий() Цикл Если Начисления.Сотрудник <> ТекущийСотрудник Тогда // закрываем незакрытые строки начислений предыдущего работника концом месяца ЗакрытьСтроки(ТЗНачисления, НезакрытыеСтроки, ОкончаниеПериодаЗаполнения); Если ЕстьПлановаяДоплатаЗаНочные Тогда // удаляем те строки ночных часов, которые введены на основании табеля Для Каждого Строка Из СтрокиНочныхТекущегоНазначения Цикл ТЗНачисления.Удалить(Строка); КонецЦикла; КонецЕсли; Если ЕстьПлановаяДоплатаЗаВечерние Тогда // удаляем те строки вечерних часов, которые введены на основании табеля Для Каждого Строка Из СтрокиВечернихТекущегоНазначения Цикл ТЗНачисления.Удалить(Строка); КонецЦикла; КонецЕсли; // меняется текущий работник и дата его движения ТекущийСотрудник = Начисления.Сотрудник; ТекущаяДатаНазначения = Начисления.ПериодРаботники; ТекущаяЧасоваяСтавкаОсновногоНачисления = 0; СтрокиНачисленийТекущегоНазначения.Очистить(); СтрокиВечернихТекущегоНазначения.Очистить(); СтрокиНочныхТекущегоНазначения.Очистить(); 
ЕстьПлановаяДоплатаЗаНочные = Ложь; ЕстьПлановаяДоплатаЗаВечерние = Ложь; ИначеЕсли Начисления.ПериодРаботники <> ТекущаяДатаНазначения И (Начисления.НачисляетсяВЦеломЗаМесяц = Null Или (НЕ Начисления.НачисляетсяВЦеломЗаМесяц)) Тогда Для каждого СтрокаТекущегоНазначения Из СтрокиНачисленийТекущегоНазначения Цикл ИндексСтроки = 0; Для каждого СтрокаМассива Из НезакрытыеСтроки Цикл Если СтрокаМассива = СтрокаТекущегоНазначения Тогда НезакрытыеСтроки.Удалить(ИндексСтроки); Прервать; Иначе ИндексСтроки = ИндексСтроки + 1; КонецЕсли; КонецЦикла; КонецЦикла; // закрываем все строки предыдущего назначения датой, предшествующей новому назначению ЗакрытьСтроки(ТЗНачисления, СтрокиНачисленийТекущегоНазначения, Начисления.ПериодРаботники - 1); // меняется текущая дата движения работника ТекущаяДатаНазначения = Начисления.ПериодРаботники; КонецЕсли; Если НЕ Начисления.Подходит ИЛИ Начисления.УжеПроведен Тогда // пропускаем такие записи: они уже введены другими документами или соответствуют окончанию назначения работника Продолжить; КонецЕсли; Если Начисления.Действие <> Перечисления.ВидыДействияСНачислением.Прекратить Тогда Для Сч = 1 По 6 Цикл Если ЗначениеЗаполнено(Начисления["Валюта"+Сч]) И (Начисления["КурсВалюты"+Сч] = NULL ИЛИ Начисления["КурсВалюты"+Сч] = 0) Тогда МассивВалютСНезаданнымКурсом[Начисления["Валюта"+ Сч]] = Истина; КонецЕсли; КонецЦикла; КонецЕсли; // Расчет размера начисления // для вечерних и ночных часов размер в регистре плановых начислений содержит процент оплаты от тарифа/оклада Если Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаВечерниеЧасы ИЛИ Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаНочныеЧасы ИЛИ Начисления.ВидРасчета = ДоплатаЗаНочные ИЛИ Начисления.ВидРасчета = ДоплатаЗаВечерние Тогда Показатель1 = Начисления.Показатель1 / 100 * ТекущаяЧасоваяСтавкаОсновногоНачисления; Показатель2 = 0; Показатель3 = 0; Показатель4 = 0; Показатель5 = 0; Показатель6 = 0; Иначе // расчета не
требуется Показатель1 = Начисления.Показатель1; Показатель2 = Начисления.Показатель2; Показатель3 = Начисления.Показатель3; Показатель4 = Начисления.Показатель4; Показатель5 = Начисления.Показатель5; Показатель6 = Начисления.Показатель6; КонецЕсли; // признак того, что вечерние и/или ночные часы введены по плановым начислениям Если Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаВечерниеЧасы Тогда ЕстьПлановаяДоплатаЗаВечерние = Истина; // удалим строки с вечерними начислениями, введенными на основании графика, из // всех массивов строк Для Каждого Строка Из СтрокиВечернихТекущегоНазначения Цикл Для Счетчик = 0 По НезакрытыеСтроки.ВГраница() Цикл Если ТЗНачисления.Индекс(НезакрытыеСтроки[Счетчик]) = ТЗНачисления.Индекс(Строка) Тогда // нашли строку с вечерними, введенными на основании графика НезакрытыеСтроки.Удалить(Счетчик); Прервать; КонецЕсли; КонецЦикла; Для Счетчик = 0 По СтрокиНачисленийТекущегоНазначения.ВГраница() Цикл Если ТЗНачисления.Индекс(СтрокиНачисленийТекущегоНазначения[Счетчик]) = ТЗНачисления.Индекс(Строка) Тогда // нашли строку с вечерними, введенными на основании графика СтрокиНачисленийТекущегоНазначения.Удалить(Счетчик); Прервать; КонецЕсли; КонецЦикла; ТЗНачисления.Удалить(Строка); КонецЦикла; СтрокиВечернихТекущегоНазначения.Очистить(); КонецЕсли; Если Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаНочныеЧасы Тогда ЕстьПлановаяДоплатаЗаНочные = Истина; // удалим строки с ночными начислениями, введенными на основании графика, из // всех массивов строк Для Каждого Строка Из СтрокиНочныхТекущегоНазначения Цикл Для Счетчик = 0 По НезакрытыеСтроки.ВГраница() Цикл Если ТЗНачисления.Индекс(НезакрытыеСтроки[Счетчик]) = ТЗНачисления.Индекс(Строка) Тогда // нашли строку с ночными, введенными на основании графика НезакрытыеСтроки.Удалить(Счетчик); Прервать; КонецЕсли; КонецЦикла; Для Счетчик = 0 По СтрокиНачисленийТекущегоНазначения.ВГраница() Цикл Если
ТЗНачисления.Индекс(СтрокиНачисленийТекущегоНазначения[Счетчик]) = ТЗНачисления.Индекс(Строка) Тогда // нашли строку с вечерними, введенными на основании графика СтрокиНачисленийТекущегоНазначения.Удалить(Счетчик); Прервать; КонецЕсли; КонецЦикла; ТЗНачисления.Удалить(Строка); КонецЦикла; СтрокиНочныхТекущегоНазначения.Очистить(); КонецЕсли; // Сохраним размер основного начисления для расчета размера оплаты ночных и вечерних часов // (основное начисление в выборке должно идти раньше других видов расчета) Если Начисления.ОсновноеНачисление и Начисления.ВидРасчета <> Неопределено Тогда ТекущаяЧасоваяСтавкаОсновногоНачисления = Начисления.ЧасоваяТарифнаяСтавка; КонецЕсли; // Найдем такой же вид расчета среди незакрытых строк с целью завершения его действия Строки = Новый Массив; Если Начисления.ВидРасчета <> ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаКвалификацию Тогда // Билет 9 НайтиСредиНезакрытых(НезакрытыеСтроки, Начисления.ОсновноеНачисление, Начисления.ВидРасчета, Строки); КонецЕсли; // Билет 9 ПрерватьОбход = Ложь; Если Строки.Количество() > 0 Тогда // нашли Для Каждого Строка Из Строки Цикл Если Строка.ДатаНачала = Начисления.Период Тогда Если Начисления.Действие = Перечисления.ВидыДействияСНачислением.Прекратить Тогда ИндексСтроки = НезакрытыеСтроки.Найти(Строка); НезакрытыеСтроки.Удалить(ИндексСтроки); ИндексСтроки = 0; Для каждого СтрокаМассива Из СтрокиНачисленийТекущегоНазначения Цикл Если СтрокаМассива = Строка Тогда СтрокиНачисленийТекущегоНазначения.Удалить(ИндексСтроки); Прервать; Иначе ИндексСтроки = ИндексСтроки + 1; КонецЕсли; КонецЦикла; ТЗНачисления.Удалить(Строка); Иначе // если дата та же - новую строку в формируемую таблицу значений не вводим, а меняем данные // и оставляем строку "незакрытой" Строка.ВидРасчета = Начисления.ВидРасчета;// вид расчета необходимо переопределять для основного начисления Строка.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; Строка.СпособРасчета = Начисления.СпособРасчета; 
Строка.ГрафикРаботы = Начисления.ГрафикРаботы; Строка.ДатаНачалаСобытия = Начисления.ДатаНачалаСобытия; Строка.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; Строка.Показатель1 = Показатель1; Строка.Показатель2 = Показатель2; Строка.Показатель3 = Показатель3; Строка.Показатель4 = Показатель4; Строка.Показатель5 = Показатель5; Строка.Показатель6 = Показатель6; Если Начисления.ОсновноеНачисление Тогда // если это ОсновноеНачисление, то нужно подменить данные и в тех строках // ночных и вечерних, которые введены не основании графика Для Каждого СтрокаДоплаты Из СтрокиВечернихТекущегоНазначения Цикл Для Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с вечерними, введенными на основании графика СтрокаДоплаты.Показатель1 = (Начисления.ЧасоваяТарифнаяСтавка * ПроцентДоплатыЗаВечерниеЧасы) / 100; КонецЕсли; КонецЦикла; КонецЦикла; Для Каждого СтрокаДоплаты Из СтрокиНочныхТекущегоНазначения Цикл Для Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с ночными, введенными на основании графика СтрокаДоплаты.Показатель1 = (Начисления.ЧасоваяТарифнаяСтавка * ПроцентДоплатыЗаНочныеЧасы) / 100; КонецЕсли; КонецЦикла; КонецЦикла; КонецЕсли; КонецЕсли; ПрерватьОбход = Истина; ИначеЕсли Начисления.НачисляетсяВЦеломЗаМесяц = Null Или (НЕ Начисления.НачисляетсяВЦеломЗаМесяц) Тогда Если Не ЗначениеЗаполнено(Строка.ДатаОкончания) Или Строка.ДатаОкончания > Начисления.Период-1 Тогда Строка.ДатаОкончания = Начисления.Период-1; КонецЕсли; ИндексСтроки = НезакрытыеСтроки.Найти(Строка); НезакрытыеСтроки.Удалить(ИндексСтроки); Если Начисления.ОсновноеНачисление Тогда // если это ОсновноеНачисление, то нужно закрыть и те строки // ночных и вечерних, которые введены не основании графика Для Каждого СтрокаДоплаты Из СтрокиВечернихТекущегоНазначения Цикл Для 
Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с вечерними, введенными на основании графика Если Не ЗначениеЗаполнено(СтрокаДоплаты.ДатаОкончания) Или СтрокаДоплаты.ДатаОкончания > Начисления.Период-1 Тогда СтрокаДоплаты.ДатаОкончания = Начисления.Период-1; КонецЕсли; ИндексСтроки = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтроки); КонецЕсли; КонецЦикла; КонецЦикла; Для Каждого СтрокаДоплаты Из СтрокиНочныхТекущегоНазначения Цикл Для Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с ночными, введенными на основании графика Если Не ЗначениеЗаполнено(СтрокаДоплаты.ДатаОкончания) Или СтрокаДоплаты.ДатаОкончания > Начисления.Период-1 Тогда СтрокаДоплаты.ДатаОкончания = Начисления.Период-1; КонецЕсли; ИндексСтроки = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтроки); КонецЕсли; КонецЦикла; КонецЦикла; КонецЕсли; ИначеЕсли Начисления.ПериодНачисления = Начисления.Период Тогда Строка.ДатаОкончания = ОкончаниеПериодаЗаполнения; // удаляем из массива незакрытых - если пользователем был изменен размер начисления Если Начисления.Действие = Перечисления.ВидыДействияСНачислением.Изменить Тогда ИндексСтроки = НезакрытыеСтроки.Найти(Строка); НезакрытыеСтроки.Удалить(ИндексСтроки); КонецЕсли; Если Начисления.ОсновноеНачисление Тогда // если это ОсновноеНачисление, то нужно закрыть и те строки // ночных и вечерних, которые введены не основании графика Для Каждого СтрокаДоплаты Из СтрокиВечернихТекущегоНазначения Цикл Для Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с вечерними, введенными на основании графика СтрокаДоплаты.ДатаОкончания = ОкончаниеПериодаЗаполнения; // удаляем из массива незакрытых - если 
пользователем был изменен размер начисления Если Начисления.Действие = Перечисления.ВидыДействияСНачислением.Изменить Тогда ИндексСтроки = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтроки); КонецЕсли; КонецЕсли; КонецЦикла; КонецЦикла; Для Каждого СтрокаДоплаты Из СтрокиНочныхТекущегоНазначения Цикл Для Каждого НезакрытаяСтрока Из НезакрытыеСтроки Цикл Если ТЗНачисления.Индекс(НезакрытаяСтрока) = ТЗНачисления.Индекс(СтрокаДоплаты) Тогда // нашли строку с ночными, введенными на основании графика СтрокаДоплаты.ДатаОкончания = ОкончаниеПериодаЗаполнения; // удаляем из массива незакрытых - если пользователем был изменен размер начисления Если Начисления.Действие = Перечисления.ВидыДействияСНачислением.Изменить Тогда ИндексСтроки = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтроки); КонецЕсли; КонецЕсли; КонецЦикла; КонецЦикла; КонецЕсли; Иначе ПрерватьОбход = Истина; КонецЕсли; КонецЦикла; КонецЕсли; Если ПрерватьОбход Или Начисления.Действие = Перечисления.ВидыДействияСНачислением.Прекратить Тогда Продолжить; ИначеЕсли Начисления.НачисляетсяВЦеломЗаМесяц Тогда // проверем не введено ли данное начисление уже СтрокаПоиска= Новый Структура; СтрокаПоиска.Вставить("Сотрудник", Начисления.Сотрудник); СтрокаПоиска.Вставить("ВидРасчета", Начисления.ВидРасчета); НайденныеСтроки = ТЗНачисления.НайтиСтроки(СтрокаПоиска); Если НайденныеСтроки.Количество() > 0 Тогда Продолжить; КонецЕсли; КонецЕсли; // Добавим новую строку начислений НоваяСтрока = ТЗНачисления.Добавить(); НоваяСтрока.Сотрудник = Начисления.Сотрудник; НоваяСтрока.Физлицо = Начисления.Физлицо; НоваяСтрока.ВидРасчета = Начисления.ВидРасчета; НоваяСтрока.КодВычета = Начисления.КодВычета; // Билет 9 ++ НоваяСтрока.ВидАттестата = Начисления.ВидАттестата; // Билет 9 -- Если ПроведениеРасчетов.ЭтоРасчетСеверныхНадбавок(Начисления.СпособРасчета) И Начисления.ДатаРегистрацииСеверногоСтажа <> Null Тогда // получим массив процентов северных надбавок с датами их действия 
для нашего периода ПроцентыСевернойНадбавки = ПроведениеРасчетов.ПолучитьПроцентыСевернойНадбавкиЗаПериод( Начисления.ДатаРегистрацииСеверногоСтажа, Начисления.ПорядокНачисленияСеверныхНадбавок, Начисления.НачальныйПроцентСевернойНадбавки, Начисления.СеверныйСтажМесяцев, Начисления.СеверныйСтажДней, Начисления.Период, ОкончаниеПериодаЗаполнения); НоваяСтрока.Показатель1 = ПроцентыСевернойНадбавки[0].Процент; ИначеЕсли ПроведениеРасчетов.ЭтоРасчетОтСтажа(Начисления.СпособРасчета) Тогда Если Начисления.КоэффициентСтажа = Null Тогда КоэффициентСтажа = 0; ОбщегоНазначения.КомментарийРасчета("Для " + Начисления.СотрудникНаименование + ", вид расчета """ + Начисления.ВидРасчетаНаименование+ """ не подобран размер начисления в зависимости от стажа """ + Начисления.ВидРасчетаВидСтажа + """ Размер принят равным нулю.", , Начисления.ФизЛицо, Начисления.ФизЛицо, Перечисления.ВидыСообщений.Ошибка); Иначе КоэффициентСтажа = Начисления.КоэффициентСтажа; КонецЕсли; НоваяСтрока.Показатель1 = КоэффициентСтажа; Иначе НоваяСтрока.Показатель1 = Показатель1; НоваяСтрока.Показатель2 = Показатель2; НоваяСтрока.Показатель3 = Показатель3; НоваяСтрока.Показатель4 = Показатель4; НоваяСтрока.Показатель5 = Показатель5; НоваяСтрока.Показатель6 = Показатель6; КонецЕсли; НоваяСтрока.Основное = Начисления.ОсновноеНачисление; НоваяСтрока.ДатаНачала = Начисления.Период; НоваяСтрока.ДатаНачалаСобытия = Начисления.ДатаНачалаСобытия; Если Начисления.НачисляетсяВЦеломЗаМесяц Тогда НоваяСтрока.ДатаОкончания = ОкончаниеПериодаЗаполнения; КонецЕсли; НоваяСтрока.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; НоваяСтрока.СпособРасчета = Начисления.СпособРасчета; НоваяСтрока.ВидВремени = Начисления.ВидВремени; НоваяСтрока.ГрафикРаботы = Начисления.ГрафикРаботы; НоваяСтрока.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; Если Не Начисления.НачисляетсяВЦеломЗаМесяц Тогда НезакрытыеСтроки.Добавить(НоваяСтрока); 
СтрокиНачисленийТекущегоНазначения.Добавить(НоваяСтрока); КонецЕсли; // Добавим начисления по ночным и вечерним, если требуется и если это - основной вид расчета Если Начисления.ОсновноеНачисление И Начисления.ЕстьНочные И НЕ ЕстьПлановаяДоплатаЗаНочные Тогда НоваяСтрока = ТЗНачисления.Добавить(); НоваяСтрока.Сотрудник = Начисления.Сотрудник; НоваяСтрока.Физлицо = Начисления.Физлицо; НоваяСтрока.ВидРасчета = ДоплатаЗаНочные; НоваяСтрока.Показатель1 = (Начисления.ЧасоваяТарифнаяСтавка * ПроцентДоплатыЗаНочныеЧасы) / 100; НоваяСтрока.Основное = Ложь; НоваяСтрока.ДатаНачала = Начисления.Период; НоваяСтрока.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; НоваяСтрока.СпособРасчета = Перечисления.СпособыРасчетаОплатыТруда.ДоплатаЗаНочныеЧасы; НоваяСтрока.ВидВремени = Перечисления.ВидыУчетаВремени.ПоНочнымЧасам; НоваяСтрока.ГрафикРаботы = Начисления.ГрафикРаботы; НоваяСтрока.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; НезакрытыеСтроки.Добавить(НоваяСтрока); СтрокиНачисленийТекущегоНазначения.Добавить(НоваяСтрока); СтрокиНочныхТекущегоНазначения.Добавить(НоваяСтрока); КонецЕсли; Если Начисления.ОсновноеНачисление И Начисления.ЕстьВечерние И НЕ ЕстьПлановаяДоплатаЗаВечерние Тогда НоваяСтрока = ТЗНачисления.Добавить(); НоваяСтрока.Сотрудник = Начисления.Сотрудник; НоваяСтрока.Физлицо = Начисления.Физлицо; НоваяСтрока.ВидРасчета = ДоплатаЗаВечерние; НоваяСтрока.Показатель1 = (Начисления.ЧасоваяТарифнаяСтавка * ПроцентДоплатыЗаВечерниеЧасы) / 100; НоваяСтрока.Основное = Ложь; НоваяСтрока.ДатаНачала = Начисления.Период; НоваяСтрока.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; НоваяСтрока.СпособРасчета = Перечисления.СпособыРасчетаОплатыТруда.ДоплатаЗаВечерниеЧасы; НоваяСтрока.ВидВремени = Перечисления.ВидыУчетаВремени.ПоВечернимЧасам; НоваяСтрока.ГрафикРаботы = Начисления.ГрафикРаботы; НоваяСтрока.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; 
НезакрытыеСтроки.Добавить(НоваяСтрока); СтрокиНачисленийТекущегоНазначения.Добавить(НоваяСтрока); СтрокиВечернихТекущегоНазначения.Добавить(НоваяСтрока); КонецЕсли; Если Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаВечерниеЧасы ИЛИ Начисления.ВидРасчета = ПланыВидовРасчета.ОсновныеНачисленияОрганизаций.ДоплатаЗаНочныеЧасы ИЛИ Начисления.ВидРасчета = ДоплатаЗаНочные ИЛИ Начисления.ВидРасчета = ДоплатаЗаВечерние Тогда СтрокаДоплаты = НоваяСтрока; Отбор = Новый Структура(); Отбор.Вставить("Сотрудник", Начисления.Сотрудник); Отбор.Вставить("Физлицо", Начисления.Физлицо); Отбор.Вставить("ОсновноеНачисление", Истина); Строки = ВремТЗНачисления.НайтиСтроки(Отбор); ОтборДоплаты = Новый Структура(); ОтборДоплаты.Вставить("Сотрудник", Начисления.Сотрудник); ОтборДоплаты.Вставить("Физлицо", Начисления.Физлицо); ОтборДоплаты.Вставить("ВидРасчета", Начисления.ВидРасчета); СтрокиДоплаты = ВремТЗНачисления.НайтиСтроки(ОтборДоплаты); Если Строки.Количество() > 1 И СтрокиДоплаты.Количество() < 2 Тогда КоличествоСтрок = Строки.Количество(); НомерСтр = 0; ДатаНачала = СтрокаДоплаты.ДатаНачала; Для Каждого СтрокаТЗНачисления Из Строки Цикл НомерСтр = НомерСтр + 1; Если НомерСтр > 1 Тогда Если ДатаНачала >= СтрокаТЗНачисления.Период Тогда ИндексСтрокиНезакрытые = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтрокиНезакрытые); ИндексСтрокиТекНазнач = СтрокиНачисленийТекущегоНазначения.Найти(СтрокаДоплаты); СтрокиНачисленийТекущегоНазначения.Удалить(ИндексСтрокиТекНазнач); ИндексСтроки = ТЗНачисления.Найти(СтрокаДоплаты); ТЗНачисления.Удалить(ИндексСтроки); ИначеЕсли НомерСтр = 2 Тогда // если не удаляем строку доплаты, то "закроем" ее (ставим дату окончания) СтрокаДоплаты.ДатаОкончания = СтрокаТЗНачисления.Период - 1; ИндексСтрокиНезакрытые = НезакрытыеСтроки.Найти(СтрокаДоплаты); НезакрытыеСтроки.Удалить(ИндексСтрокиНезакрытые); КонецЕсли; СтрокаДоплаты = ТЗНачисления.Добавить(); СтрокаДоплаты.Сотрудник = 
СтрокаТЗНачисления.Сотрудник; СтрокаДоплаты.Физлицо = СтрокаТЗНачисления.Физлицо; СтрокаДоплаты.ВидРасчета = Начисления.ВидРасчета; СтрокаДоплаты.КодВычета = Начисления.КодВычета; СтрокаДоплаты.Показатель1 = Начисления.Показатель1 / 100 * СтрокаТЗНачисления.ЧасоваяТарифнаяСтавка; СтрокаДоплаты.Основное = Начисления.ОсновноеНачисление; СтрокаДоплаты.ДатаНачала = СтрокаТЗНачисления.Период; СтрокаДоплаты.ДатаНачалаСобытия = СтрокаТЗНачисления.ДатаНачалаСобытия; СтрокаДоплаты.ПодразделениеОрганизации = СтрокаТЗНачисления.ПодразделениеОрганизации; СтрокаДоплаты.СпособРасчета = Начисления.СпособРасчета; СтрокаДоплаты.ВидВремени = Начисления.ВидВремени; СтрокаДоплаты.ГрафикРаботы = Начисления.ГрафикРаботы; СтрокаДоплаты.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; ДатаНачала = СтрокаДоплаты.ДатаНачала; НезакрытыеСтроки.Добавить(СтрокаДоплаты); СтрокиНачисленийТекущегоНазначения.Добавить(СтрокаДоплаты); КонецЕсли; КонецЦикла; ИначеЕсли Строки.Количество() > 1 Тогда Для Каждого СтрокаТЗДопНачисления Из СтрокиДоплаты Цикл Если СтрокаТЗДопНачисления.Период = НоваяСтрока.ДатаНачала Тогда Для Каждого СтрокаТЗНачисления Из Строки Цикл Если СтрокаТЗНачисления.Период = НоваяСтрока.ДатаНачала Тогда НоваяСтрока.Показатель1 = Начисления.Показатель1 / 100 * СтрокаТЗНачисления.ЧасоваяТарифнаяСтавка; КонецЕсли; КонецЦикла; КонецЕсли; КонецЦикла; КонецЕсли; КонецЕсли; Если ПроведениеРасчетов.ЭтоРасчетОтСтажа(Начисления.СпособРасчета) И Начисления.ДеньСменыКоэффициентаСтажа <> Null Тогда // установим дату окончания предыдущего расчета от стажа Если День(КонецМесяца(Начисления.Период)) >= Начисления.ДеньСменыКоэффициентаСтажа Тогда ДатаСменыКоэффициентаСтажа = Дата(Год(Начисления.Период), Месяц(Начисления.Период), Начисления.ДеньСменыКоэффициентаСтажа) - 1; Если ДатаСменыКоэффициентаСтажа > НоваяСтрока.ДатаНачала Тогда НоваяСтрока.ДатаОкончания = ДатаСменыКоэффициентаСтажа; // ... 
и введем еще один ДопНоваяСтрока = ТЗНачисления.Добавить(); ДопНоваяСтрока.Сотрудник = Начисления.Сотрудник; ДопНоваяСтрока.Физлицо = Начисления.Физлицо; ДопНоваяСтрока.ВидРасчета = Начисления.ВидРасчета; ДопНоваяСтрока.КодВычета = Начисления.КодВычета; ДопНоваяСтрока.Показатель1 = Начисления.СледКоэффициентСтажа; ДопНоваяСтрока.Основное = Начисления.ОсновноеНачисление; ДопНоваяСтрока.ДатаНачала = ДатаСменыКоэффициентаСтажа + 1; ДопНоваяСтрока.ДатаНачалаСобытия = Начисления.ДатаНачалаСобытия; ДопНоваяСтрока.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; ДопНоваяСтрока.СпособРасчета = Начисления.СпособРасчета; ДопНоваяСтрока.ГрафикРаботы = Начисления.ГрафикРаботы; ДопНоваяСтрока.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; НезакрытыеСтроки.Добавить(ДопНоваяСтрока); СтрокиНачисленийТекущегоНазначения.Добавить(ДопНоваяСтрока); Иначе // период записи начиался после дня изменения коэффициента стажа - запишем в ту же запись новое значение НоваяСтрока.Показатель1 = Начисления.СледКоэффициентСтажа; КонецЕсли; КонецЕсли; ИначеЕсли ПроведениеРасчетов.ЭтоРасчетСеверныхНадбавок(Начисления.СпособРасчета) И Начисления.ДатаРегистрацииСеверногоСтажа <> Null Тогда Если ПроцентыСевернойНадбавки.Количество() > 1 Тогда Если ПроцентыСевернойНадбавки[0].Процент <= 0 Тогда НоваяСтрока.ДатаНачала = Дата(Год(Начисления.Период), Месяц(Начисления.Период), День(ПроцентыСевернойНадбавки[1].Период)); НоваяСтрока.Показатель1 = ПроцентыСевернойНадбавки[1].Процент; Иначе // установим дату окончания предыдущего расчета от стажа НоваяСтрока.ДатаОкончания = Дата(Год(Начисления.Период), Месяц(Начисления.Период), День(ПроцентыСевернойНадбавки[1].Период)) - 1; // ... 
и введем дополнительную // предполагаем, что в течение одного расчетного периода (месяца) не может произойти несколько // изменений процента северной надбавки ДопНоваяСтрока = ТЗНачисления.Добавить(); ДопНоваяСтрока.Сотрудник = Начисления.Сотрудник; ДопНоваяСтрока.Физлицо = Начисления.Физлицо; ДопНоваяСтрока.ВидРасчета = Начисления.ВидРасчета; ДопНоваяСтрока.КодВычета = Начисления.КодВычета; ДопНоваяСтрока.Показатель1 = ПроцентыСевернойНадбавки[1].Процент; ДопНоваяСтрока.Основное = Начисления.ОсновноеНачисление; ДопНоваяСтрока.ДатаНачала = Дата(Год(Начисления.Период), Месяц(Начисления.Период), День(ПроцентыСевернойНадбавки[1].Период)); ДопНоваяСтрока.ДатаНачалаСобытия = Начисления.ДатаНачалаСобытия; ДопНоваяСтрока.ПодразделениеОрганизации = Начисления.ПодразделениеОрганизации; ДопНоваяСтрока.СпособРасчета = Начисления.СпособРасчета; ДопНоваяСтрока.ГрафикРаботы = Начисления.ГрафикРаботы; ДопНоваяСтрока.СуммированныйУчетРабочегоВремени = Начисления.СуммированныйУчетРабочегоВремени; НезакрытыеСтроки.Добавить(ДопНоваяСтрока); СтрокиНачисленийТекущегоНазначения.Добавить(ДопНоваяСтрока); КонецЕсли; КонецЕсли; КонецЕсли; КонецЦикла; // закрываем незакрытые строки по последнему работнику концом месяца ЗакрытьСтроки(ТЗНачисления, НезакрытыеСтроки, ОкончаниеПериодаЗаполнения); Возврат ТЗНачисления; КонецФункции //СформироватьТаблицуНачислений
Linking

URLs are the most powerful way to launch native applications. Native operating systems like macOS, iOS, Android, and Windows have built-in link handling which chooses an app to handle a URL based on the URL scheme. The most common URL schemes are https and http, which are delegated to web browsers like Chrome or Safari. Native apps, like the ones built with React Native, can implement any URL scheme, and the JavaScript React layer can handle the URL used to launch the corresponding native app.

Linking from your app

The expo-linking API universally abstracts over native linking APIs (like window.history on web).

import * as Linking from 'expo-linking';

Linking.openURL('https://expo.dev');

Web browsers have additional link functionality like right-click to copy and hover to preview. You can use the package @expo/html-elements to get a universal <A /> element:

Terminal
→ npx expo install @expo/html-elements

import { A } from '@expo/html-elements';

export default function App() {
  return <A href="https://google.com">Go to Google</A>;
}

This renders an <a /> on web and an interactive <Text /> which uses the Linking API on native. Routers like React Navigation have built-in linking components that you should use to move around your app.

There are some URL schemes for core functionality that exist on every platform. The following is a non-exhaustive list, but covers the most commonly used schemes.

Scheme        | Description
https / http  | Open web browser app, e.g. https://expo.dev
mailto        | Open mail app, e.g. mailto: [email protected]
tel           | Open phone app, e.g. tel:+123456789
sms           | Open SMS app, e.g. sms:+123456789

If you know the custom scheme for another app you can link to it.
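Since these scheme URLs are plain strings, they can be assembled with ordinary string and query helpers before being passed to Linking.openURL. The helper below is a hypothetical sketch (makeMailto is my own name, not an expo-linking API) showing how a mailto: link can carry a URL-encoded subject:

```javascript
// Sketch: build a mailto: URL with optional subject/body query params.
// Hand-rolled for illustration, not part of expo-linking.
function makeMailto(address, { subject, body } = {}) {
  const params = new URLSearchParams();
  if (subject) params.set('subject', subject);
  if (body) params.set('body', body);
  const query = params.toString();
  return `mailto:${address}${query ? '?' + query : ''}`;
}

console.log(makeMailto('support@example.com', { subject: 'Hi there' }));
// mailto:support@example.com?subject=Hi+there
```

The resulting string can be handed to Linking.openURL, which defers to the operating system's mail handler.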
Some services provide documentation for deep linking, for example the Lyft deep linking documentation describes how to link directly to a specific pickup location and destination: lyft://ridetype?id=lyft&pickup[latitude]=37.764728&pickup[longitude]=-122.422999&destination[latitude]=37.7763592&destination[longitude]=-122.4242038 It's possible that the user doesn't have the Lyft app installed, in which case you may want to open the App / Play Store, or let them know that they need to install it first. We recommend using the library react-native-app-link for these cases. On iOS, Linking.canOpenURL requires additional configuration to query other apps' linking schemes. You can use the expo.ios.infoPlist key in your Expo config (app.json, app.config.js) to specify a list of schemes your app needs to query. For example: { "expo": { "ios": { "infoPlist": { "LSApplicationQueriesSchemes": ["lyft"] } } } } If you don't specify this list, Linking.canOpenURL may return false regardless of whether the device has the app installed. Note that this configuration can only be tested in development builds, because it requires native changes that will not be applied when testing in Expo Go. To save you the trouble of inserting a bunch of conditionals based on the environment that you're in and hardcoding urls, we provide some helper methods in our extension of the Linking module. When you want to provide a service with a url that it needs to redirect back into your app, you can call Linking.createURL() and it will resolve to the following: • Custom builds: myapp:// • Development in Expo Go: exp://127.0.0.1:19000 • Published app in Expo Go: exp://u.expo.dev/[project-id]?channel-name=[channel-name]&runtime-version=[runtime-version] You can also change the returned url by passing optional parameters into Linking.createURL(). These will be used by your app to receive data, which we will talk about in the next section. 
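To make the shape of these redirect URLs concrete, here is a deliberately simplified sketch of what a createURL-style helper returns for the custom-build case. This is not the actual expo-linking implementation, which also handles the Expo Go development and published forms listed above; the "myapp" scheme is assumed from the app config:

```javascript
// Simplified sketch of a createURL-style helper for a custom build.
// NOT the real expo-linking code: the library also resolves Expo Go
// dev (exp://127.0.0.1:19000/--/...) and published URL forms.
function createAppURL(path, queryParams = {}) {
  const scheme = 'myapp'; // assumed scheme registered in app.json
  const query = new URLSearchParams(queryParams).toString();
  const cleanPath = path.replace(/^\/+/, '');
  return `${scheme}://${cleanPath}${query ? '?' + query : ''}`;
}

console.log(createAppURL('path/into/app', { hello: 'world' }));
// myapp://path/into/app?hello=world
```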
Note: Linking.createURL() is available in [email protected] and higher. If you are using an older version, use Linking.makeUrl() instead.

To pass some data to an app, you can append it as a path or query string on your URL. Linking.createURL(path, { queryParams }) will construct a working URL automatically for you. Example:

const redirectUrl = Linking.createURL('path/into/app', {
  queryParams: { hello: 'world' },
});

This will resolve into the following, depending on the environment:

• Custom builds: myapp://path/into/app?hello=world
• Development in Expo Go: exp://127.0.0.1:19000/--/path/into/app?hello=world
• Published app in Expo Go: exp://u.expo.dev/[project-id]?channel-name=[channel-name]&runtime-version=[runtime-version]/--/path/into/app?hello=world

Note: Notice in Expo Go that /--/ is added to the URL when a path is specified. This indicates to Expo Go that the substring after it corresponds to the deep link path, and is not part of the path to the app itself.

While the expo-linking API enables you to open a URL with the operating system's preferred application, you can use the expo-web-browser module to open URLs with an in-app browser. In-app browsers are especially useful for secure authentication.
Terminal
→ npx expo install expo-web-browser

WebBrowser vs Linking

import React from 'react';
import { Button, View, StyleSheet } from 'react-native';
import * as Linking from 'expo-linking';
import * as WebBrowser from 'expo-web-browser';

export default function App() {
  return (
    <View style={styles.container}>
      <Button
        title="Open URL with the system browser"
        onPress={() => Linking.openURL('https://expo.dev')}
        style={styles.button}
      />
      <Button
        title="Open URL with an in-app browser"
        onPress={() => WebBrowser.openBrowserAsync('https://expo.dev')}
        style={styles.button}
      />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
  },
  button: {
    marginVertical: 10,
  },
});

To link to your development build or standalone app, you need to specify a custom URL scheme for your app. You can register a scheme in your Expo config (app.json, app.config.js) by adding a string under the scheme key:

{
  "expo": {
    "scheme": "myapp"
  }
}

Once you build and install your app, you will be able to open it with links to myapp://.

Note: Expo Prebuild automatically adds the app's iOS bundle identifier/Android package as a URL scheme.

Working in a bare React Native app? In bare apps, you can use the uri-scheme package to easily add, remove, list, and open your URIs. To make your native app handle myapp://, simply run:

Terminal
→ npx uri-scheme add myapp

You should now be able to see a list of all your project's schemes by running:

Terminal
→ npx uri-scheme list

You can test it to ensure it works like this:

Terminal
# Rebuild the native apps, be sure to use an emulator
yarn android
yarn ios
# Open a URI scheme
npx uri-scheme open myapp://some/redirect

Expo Go uses the exp:// scheme, but if we link to exp:// without any address afterwards, it will open the app to the home screen. In development, your app will live at a URL like exp://127.0.0.1:19000.
When published, an experience will be hosted at a URL like exp://u.expo.dev/[project-id]?channel-name=[channel-name]&runtime-version=[runtime-version], where u.expo.dev/[project-id] is the hosted URL that Expo Go fetches from. You can test this mechanism in your mobile browser by searching exp://u.expo.dev/F767ADF57-B487-4D8F-9522-85549C39F43F?channel-name=main&runtime-version=exposdk:45.0.0, this will redirect to your experience in the Expo Go app. By default exp:// is replaced with http:// when opening a URL in Expo Go. Similarly you can use exps:// to open https:// URLs. exps:// does not currently support loading sites with insecure TLS certificates. Links that launched your app can be observed using the Linking.useURL React hook: import * as Linking from 'expo-linking'; import { Text } from 'react-native'; export default function App() { const url = Linking.useURL(); return <Text>URL: {url}</Text>; } Behind the scenes this hook uses the following imperative API methods: 1. The link that started the app is initially returned with: Linking.getInitialURL 2. Any new links that were triggered while the app was already open are observed with: Linking.addEventListener('url', callback) Learn more in the API documentation. Parse the path, hostname, and query parameters from a URL with the Linking.parse() function. Unlike other URL parsing methods, this function considers nonstandard implementations like Expo Go linking. Example: function App() { const url = Linking.useURL(); if (url) { const { hostname, path, queryParams } = Linking.parse(url); console.log( `Linked to app with hostname: ${hostname}, path: ${path} and data: ${JSON.stringify( queryParams )}` ); } return null; } Info-icon Adding schemes will require a rebuilding your custom app. 
You can open a URL like: Terminal # Custom builds → npx uri-scheme open myapp://somepath/into/app?hello=world --ios # Expo Go in development (adjust the `127.0.0.1:19000` to match your dev server URL) → npx uri-scheme open exp://127.0.0.1:19000/--/somepath/into/app?hello=world --ios You can also open a URL by searching for it on the device's native browser. For example, opening Safari on iOS and typing exp:// then searching will prompt you to open Expo Go (if installed). Deep linking Setup iOS universal links and Android deep links. Arrow-right-icon Authentication Use linking to implement web-based authentication. Arrow-right-icon Routing Setup React Navigation linking for in-app routing. Arrow-up-right-icon
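The way a scheme, path, and query parameters compose into a deep link like `myapp://path/into/app?hello=world` can be sketched outside of Expo. The following is a minimal, hypothetical Python illustration (it is not the `expo-linking` API itself; `create_url` is a made-up helper named after `Linking.createURL`):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def create_url(scheme, path, query_params=None):
    """Compose a deep link the way Linking.createURL does for a custom build."""
    url = f"{scheme}://{path.lstrip('/')}"
    if query_params:
        url += "?" + urlencode(query_params)
    return url

redirect_url = create_url("myapp", "path/into/app", {"hello": "world"})
print(redirect_url)  # myapp://path/into/app?hello=world

# Splitting it back apart is similar in spirit to Linking.parse():
# for a custom scheme, the first path segment parses as the "hostname".
parts = urlsplit(redirect_url)
print(parts.scheme, parts.netloc, parts.path, parse_qs(parts.query))
# myapp path /into/app {'hello': ['world']}
```

Note that a generic URL parser splits `myapp://path/into/app` into a hostname (`path`) and a path (`/into/app`), which is why `Linking.parse()` reports both a `hostname` and a `path` for app links.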
5 \$\begingroup\$ I've been working on a text-based RPG for a while now on-and-off. After a long hiatus, I've come back to the project. My goal as of right now is to port the game from its current print-output version to a version that works in a Pygame window before I add anything else.

Now it's worth mentioning that I'm a complete noob to programming. This is my first project. I'm still pretty uncertain on the proper ways of separating logic/general code architecture. This being said, here is my code.

Game File (FlubbosMagicForest.py):

```python
from gameworld import *


def main():
    player = Player("Jeff", 100)
    bag = Bag([])
    location = Location('introd')
    command = ' '
    while command != "":
        command = input('>>> ')
        if command in location.room.exits:
            location.travel(command, bag)
        elif command == 'look':
            location.room_desc()
        elif command == '':
            print('You have to say what it is you want to do!')
            command = '#'
        elif command == 'search':
            location.search_room()
        elif command.split()[0] == 'Take':
            location.check_take(command.split()[1], bag, location)
        elif command == 'Inventory':
            bag.check_inv()
        else:
            print('Invalid command')


if __name__ == '__main__':
    main()
```

gameworld.py:

```python
from gameitems import *


class Room:
    def __init__(self, name, description, exits, actions, roominv, roomkey, lock):
        self.name = name
        self.description = description
        self.exits = exits
        self.actions = actions
        self.roominv = roominv
        self.roomkey = roomkey
        self.lock = lock


class Player:
    def __init__(self, name, health):
        self.name = name
        self.health = health


class Location:
    def __init__(self, room):
        self.room = world[room]

    def travel(self, direction, bag):
        if direction not in self.room.exits.keys():
            self.no_exit()
        else:
            self.set_new_room_name(direction, bag)

    def set_new_room_name(self, direction, bag):
        new_room_name = self.room.exits[direction]
        print("moving to", new_room_name)
        self.key_check(new_room_name, bag)

    def key_check(self, new_room_name, bag):
        if world[new_room_name].lock and world[new_room_name].roomkey not in bag.inventory:
            self.no_key()
        else:
            world[new_room_name].lock = False
            self.set_room(new_room_name)
            self.room_desc()

    def set_room(self, new_room_name):
        self.room = world[new_room_name]

    def no_exit(self):
        print("You can't go that way!")

    def no_key(self):
        print('The door is locked! You need the right key!')

    def room_desc(self):
        print(self.room.description)
        print(self.room.actions)

    def search_room(self):
        if self.room.roominv:
            for item in list(self.room.roominv.keys()):
                print("you find a", item)
        else:
            print("You don't find anything")

    def none_here(self, key):
        print("You can't find a", key)

    def check_take(self, key, bag, location):
        if self.room.roominv and key in self.room.roominv:
            bag.add_to_inv(key, location)
            print('you take the', key)
        else:
            self.none_here(key)


class Bag():
    def __init__(self, inventory):
        self.inventory = inventory

    def add_to_inv(self, key, location):
        self.inventory.append(location.room.roominv[key])
        del location.room.roominv[key]

    def check_inv(self):
        for item in list(self.inventory):
            print("Your bag contains:", item.name)


world = {}
world['introd'] = Room('introd', "You are in a forest, you can hear wildlife all around you. There seems to be a clearing in the distance.",
                       {'n': "clearing"}, {"Search the ground", "Go North"}, {'Sword': Sword}, None, False)
world['clearing'] = Room('clearing', "You are in a clearing surrounded by forest. Sunlight is streaming in, illuminating a bright white flower in the center of the clearing. \
To the South is the way you entered the forest. A well worn path goes to the East. In the distance a harp can be heard.",
                         {'s': "introd", 'e': "forest path"}, {"Take flower", "Go south", "Go East"}, {'Flower': Flower}, None, False)
world['forest path'] = Room('forest path', "You begin walking down a well beaten path. The sounds of the forest surround you. Ahead you can see a fork in the road branching to the South and East.\
You can smell smoke coming from the South, and can hear a stream to the East",
                            {'s': "cottage", 'e': "stream", 'w': "clearing"}, {"Go South", "Go East", "Go West"}, {'Stick': Stick}, None, False)
world['stream'] = Room('stream', "You come upon a relaxing stream at the edge of the woods. It looks like there is something shiny in the water. To your South is a rickety looking shack, \
to your West is the forest path you came down",
                       {'s': "shack", 'w': "forest path"}, {"Go South", "Go West"}, {'Rusty_Key': Rusty_Key}, None, False)
world['shack'] = Room('shack', "In front of you is a shack, possibly used as an outpost for hunting. It looks dilapidated.",
                      {'s': "inside shack", 'n': "stream"}, {"Go South", "Go North"}, None, None, False)
world['inside shack'] = Room('inside shack', "The inside of the shack is dirty. Bits of ragged fur are scattered about the floor and on a table against the back wall.\
A sharp looking knife is on the table. There is an ornate key hanging on the wall by a string.",
                             {'n': "shack"}, {"Go North", "Take Knife", "Take Key"}, {'Knife': Knife, 'Ornate_Key': Ornate_Key}, Rusty_Key, True)
world['cottage'] = Room('cottage', "A quaint cottage sits in the middle of a small clearing, smoke drifting lazily from the chimney.",
                        {'n': "forest path"}, {"Go north"}, None, None, False)
world['inside cottage'] = Room('inside cottage', "The inside of the cottage is warm and cozy. It reeks like death.",
                               {'n': 'outside cottage'}, None, {'Moonstone': Moonstone}, Ornate_Key, True)
```

gameitems.py:

```python
class Items:
    def __init__(self, name, info, weight):
        self.name = name
        self.info = info
        self.weight = weight


class DoorKeys(Items):
    def __init__(self, name, info, weight):
        super().__init__(name, info, weight)


class Weapon(Items):
    def __init__(self, name, info, damage, speed, weight):
        super().__init__(name, info, weight)
        self.damage = damage
        self.speed = speed


Sword = Weapon("Sword", "A sharp looking sword. Good for fighting goblins!", 7, 5, 5)
Knife = Weapon("Knife", "A wicked looking knife, seems sharp!", 5, 7, 3)
Stick = Weapon("Stick", "You could probably hit someone with this stick if you needed to", 2, 3, 3)
Rusty_Key = DoorKeys("Rusty_Key", "A key! I wonder what it opens.", .01)
Ornate_Key = DoorKeys("Ornate_Key", "An ornate key with an engraving of a small cottage on one side", .01)
Moonstone = Items("Moonstone", "A smooth white stone that seems to radiate soft white light", .05)
Flower = Items("Flower", "A beautiful wildflower", .001)
```

Please let me know if anything is glaringly wrong, I'm sure there is. If anyone has any tips on restructuring the code to prepare it for a Pygame adaptation, that would be much appreciated. \$\endgroup\$

3 \$\begingroup\$ **main program:**

```python
while command != "":
```

could be `while command:` but you have to initialize `command` to non-empty. I'd do `while True:`, then at the end:

```python
if not command:
    break
```

**gameworld.py:** in `Locations.travel`, don't use `in xxx.keys()`. It doesn't matter much in Python 3, but Python 2 makes that a list, whereas the simplest & universal way is:

```python
if direction not in self.room.exits:  # no .keys(), dicts support "in"
```

In `Locations.check_take`, don't check if the dict is empty, it's cumbersome:

```python
if self.room.roominv and key in self.room.roominv:
```

should just be:

```python
if key in self.room.roominv:
```

In `Locations.check_inv`:

```python
for item in list(self.inventory):
```

why force iteration on `self.inventory`? Just do:

```python
for item in self.inventory:
```

I'd like to add that your initialization of `world` is clumsy & error-prone. You should loop on the items of a dictionary/list of dictionaries contained in a JSON configuration file instead, so anyone (even not a Python coder) can improve the game map.

```python
world['introd'] = Room('introd', "You are in a forest, you can hear wildlife all around you. There seems to be a clearing in the distance.",
                       {'n': "clearing"}, {"Search the ground", "Go North"}, {'Sword': Sword}, None, False)
```

(and the other entries) could be loaded from a dict containing the key (introd), the description, and the dictionary of directions & contents. This is more complex because you're using an object like Sword as key (so you'd need some kind of evaluation/lookup table to create the object, not a big deal). Think about it.

**gameitems.py:** not much, except that this code is redundant:

```python
class DoorKeys(Items):
    def __init__(self, name, info, weight):
        super().__init__(name, info, weight)
```

could just be:

```python
class DoorKeys(Items):
    pass
```

at this point, since you're not adding anything specific. So the `DoorKeys` object is probably redundant as well unless you're planning to add specific stuff. \$\endgroup\$

0 \$\begingroup\$ I feel like utilizing classes for some parts but disregarding them for others is throwing the entire thing off. The massive if/elif/else block in the command file could easily be a list of predefined `Command()` instances and, a step further, referenced inside of a `Commandlist()` instance. It might take a little bit more time to code all of the class definitions, but everything would be stored more neatly and would be easier to reference. As a side note, cmd2 works pretty well for all text-based work, has an easy way to define commands, and could be used to test that things are functioning properly.

It would also be fairly easy, and far more organized, to define a `World` class with various attributes.

```python
class World():
    def __init__(self, rooms, start_id=1, num_of_rooms=3):
        self._rooms = rooms
        self._start_id = start_id
        self._num_of_rooms = num_of_rooms
```

You could easily add other attributes to `World` if needed, and it helps to set up for what @Jean mentioned, which is feeding in data from JSON files. Surprisingly easy to do, and while the files aren't the prettiest format, it's still much more appealing than scrolling through a wall of code to change a specific room's name because it was hardcoded into a file.

This opens up the possibility for using many other class instances to hold all of your data and storing them within a `World()` instance for easier access. All you would need is a method to load the data as a dictionary and unpack it to the arguments of an object instance. Note, I'm using pkg_resources with this, so you would need to import pkg_resources or just the specific resource functions.

```python
def _build_room(self, id):
    jsontext = resource_string(__name__, 'data/dungeon{}/room{}.json'.format(self._id, id))
    d = json.loads(jsontext.decode('utf-8'))
    d['id'] = id
    room = R.Room(**d)
    return room
```

Of course you would also need a method which utilizes `_build_room()` to create a dictionary of rooms for `World.rooms`, but that's as easy as...

```python
def _update_rooms(self):
    d = {}
    for i in range(self._start_id, self._num_of_rooms + 1):
        try:
            d[i] = self._build_room(i)
        except FileNotFoundError:
            print("File not found. Please check to make sure it exists")
    self._rooms = d
```

This same principle could be used to create a container for all items in a game, an inventory system, characters, and many other possibilities.

Note that I'm not advocating this as the best way, or even a good way. I'm still a beginner and there are many concepts I'm still fuzzy on. This is just the conclusion I reached from all of my various reading of things other people who seem to actually know what they're talking about have typed out. Also, I'm no help down the road towards a GUI, and I have no idea if any of this would muddle an approach towards it or ease it. I'm actually working on something very similar to this. If you're interested, you can find it all here. \$\endgroup\$

• \$\begingroup\$ Hrm... now I'm curious about the downvote. I didn't think anything in my post was inherently bad, as I was mostly just saying that it would make sense to make everything OOP instead of just half of the code. Maybe my examples weren't the best... but whatever. \$\endgroup\$ – Noved Jun 16 '18 at 16:27
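As a concrete sketch of the JSON-driven world loading both answers suggest: room data lives in a JSON document and a loader builds the `world` dict from it instead of hardcoding each `Room(...)` call. The helper name `load_world`, the trimmed-down `Room` class, and the sample data here are hypothetical, not from the original code:

```python
import json

# In practice this would live in a separate .json file, as the answers suggest.
ROOMS_JSON = """
{
  "introd":   {"description": "You are in a forest.", "exits": {"n": "clearing"}},
  "clearing": {"description": "You are in a clearing.", "exits": {"s": "introd"}}
}
"""

class Room:
    def __init__(self, name, description, exits):
        self.name = name
        self.description = description
        self.exits = exits

def load_world(text):
    """Build the world dict by unpacking each JSON entry into a Room."""
    data = json.loads(text)
    return {name: Room(name, **attrs) for name, attrs in data.items()}

world = load_world(ROOMS_JSON)
print(world["introd"].exits["n"])  # clearing
```

Item references like `Sword` would still need a lookup table mapping names in the JSON to item objects, as the first answer notes.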
How to Make 3-button Hidden?

All other buttons can be hidden using command modifiers, e.g. `@hidenosel:type=*.wav`. I'm trying to make a 3-button that only shows up when .wav files are selected. Is there any way to accomplish that?

Edit: Attaching my button in this post.

```xml
<?xml version="1.0"?>
<button backcol="none" display="both" icon_size="large" label_pos="bottom" textcol="none" type="three_button">
	<label>Play</label>
	<icon1>#play</icon1>
	<button backcol="none" display="both" hotkey="shift+alt+f11" icon_size="large" label_pos="bottom" textcol="none">
		<label>Play</label>
		<tip>Play currently selected sound (.wav) files</tip>
		<icon1>#play</icon1>
		<function type="normal">
			<instruction>@hidenosel:type=*.wav</instruction>
			<instruction>@nodeselect</instruction>
			<instruction>Play STOPALL</instruction>
			<instruction>Play {allfile} QUIET</instruction>
		</function>
	</button>
	<button backcol="none" display="both" icon_size="large" label_pos="bottom" textcol="none">
		<label>Play</label>
		<tip>Stop all currently playing audio</tip>
		<icon1>#play</icon1>
		<function type="normal">
			<instruction>@hidenosel:type=*.wav</instruction>
			<instruction>@nodeselect</instruction>
			<instruction />
			<instruction>Play STOPALL</instruction>
			<instruction />
			<instruction>// Also stop any playing audio stream in fmedia* by forcefully</instruction>
			<instruction>// killing its process and any child processes started by it.</instruction>
			<instruction>// (*I use fmedia to play any audio files other than .wav files)</instruction>
			<instruction>@runmode:hide</instruction>
			<instruction>taskkill /IM fmedia.exe /T /F</instruction>
		</function>
	</button>
</button>
```

There isn't currently a way to make three-buttons hidden conditionally. Splitting the two actions on it into two separate normal buttons would work, though.
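Following the answer's suggestion, the first action could be split out into a standalone normal button, which then hides itself via `@hidenosel` like any other button. This is a sketch adapted from the question's own XML (attribute values are illustrative):

```xml
<?xml version="1.0"?>
<button backcol="none" display="both" icon_size="large" label_pos="bottom" textcol="none">
	<label>Play</label>
	<tip>Play currently selected sound (.wav) files</tip>
	<icon1>#play</icon1>
	<function type="normal">
		<instruction>@hidenosel:type=*.wav</instruction>
		<instruction>@nodeselect</instruction>
		<instruction>Play STOPALL</instruction>
		<instruction>Play {allfile} QUIET</instruction>
	</function>
</button>
```

The stop-all action would become a second normal button in the same way, each hiding independently when no .wav files are selected.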
SQLITE

**Mostrar datos SQLite. Tutorial 46 en Español. Desarrollo de Aplicaciones Android.** [Displaying SQLite data. Tutorial 46, in Spanish. Android application development.]

**Creating multiple sqlite database tables in Android.** Most of the Android database examples you will find on the web will usually contain only one table to demonstrate the basic database concepts. That's great; the only problem with this is that most non-trivial database implementations will contain more than one table. The standard database creation string for a single table will probably look a lot like the one below:

*See the blog; it has very interesting things. – diegounanue*

```java
private static final String CREATE_TABLE_1 =
    " create table " + table1 +
    " (_id integer primary key autoincrement," +
    " title text not null, body text not null);";
```

Which is called in your DB Adapter class like this:

```java
@Override
public void onCreate(SQLiteDatabase db) {
    db.execSQL(CREATE_TABLE_1);
}
```

So what to do if you want to create more than one table?

```java
private static final String DATABASE_CREATE_MULTIPLE_TABLES =
    " create table " + ITEMS_TABLE +
    " (_id integer primary key autoincrement," +
    " title text not null)" +
    " create table " + TAGS_TABLE +
    " (_id integer primary key autoincrement," +
    " tagName text not null)" +
```

'Sorry! ..error message. ..

**Bases de Datos en Android (III): Consultar/Recuperar registros.** [Databases in Android (III): Querying/retrieving records.] In the previous article of the course we covered all the options available for inserting, updating, and deleting data in a SQLite database on Android. In this new installment we will describe the last of the important data-handling tasks left to cover: selecting and retrieving data. Analogously to what we saw for the data-modification statements, we will have two main options for retrieving records from a SQLite database in Android. The first of them uses a SQL select command directly, and the second option uses a specific method in which we parameterize the query to the database. For the first option we will use the rawQuery() method of the SQLiteDatabase class. This method receives a complete SQL command directly as a parameter, in which we indicate the fields to retrieve and the selection criteria. [Android programming course in PDF. More information: see the linked article.]

**Ver si una tabla esta vacia.** [Checking whether a table is empty.]

**How to create SQLite foreign keys.** By Alvin Alexander. Last updated: Oct 8, 2013. SQLite foreign keys FAQ: Can you show me how to define foreign keys in a SQLite database table design? The SQLite database does support foreign keys, and its foreign key syntax is similar to other databases. Here's a quick SQLite foreign key example. First, we define two database tables that don't have any foreign keys:

```sql
--
-- salespeople
--
CREATE TABLE salespeople (
  id INTEGER PRIMARY KEY,
  first_name TEXT NOT NULL,
  last_name TEXT NOT NULL,
  commission_rate REAL NOT NULL
);

--
-- customers
--
CREATE TABLE customers (
  id INTEGER PRIMARY KEY,
  company_name TEXT NOT NULL,
  street_address TEXT NOT NULL,
  city TEXT NOT NULL,
  state TEXT NOT NULL,
  zip TEXT NOT NULL
);
```

Next, we define a SQLite table that has two foreign keys: one that relates our new orders table back to the customers table, and a second foreign key that relates the orders table back to the salespeople table. Sample SQLite foreign key data.

**Foreign Key Support.** Small. Fast. Reliable. Choose any three.

**SQLite - Android - Foreign key syntax.**

**Android Tutorial 10 - Display Data from Database in List.**
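The foreign-key behaviour described above can be checked from Python's built-in sqlite3 module. The table layout below is a trimmed-down version of the example's schema; note that SQLite does not enforce foreign keys by default, so `PRAGMA foreign_keys = ON` must be issued per connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is opt-in per connection

conn.execute("""CREATE TABLE salespeople (
    id INTEGER PRIMARY KEY,
    first_name TEXT NOT NULL)""")

conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    salesperson_id INTEGER NOT NULL,
    FOREIGN KEY (salesperson_id) REFERENCES salespeople (id))""")

conn.execute("INSERT INTO salespeople (id, first_name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders (id, salesperson_id) VALUES (1, 1)")  # ok

try:
    # References a salesperson that does not exist, so it is rejected.
    conn.execute("INSERT INTO orders (id, salesperson_id) VALUES (2, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Without the PRAGMA, the second insert would silently succeed, which is a common source of confusion when testing foreign keys on Android as well.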
atoi Convert ASCII characters to integers Examples Arguments None. Attributes Common Box Attributes annotation [symbol] Sets the text that will be displayed in the Clue window when the user moves the mouse over the object. background [int] (default: 0) Adds or removes the object from the patcher's background layer. background 1 adds the object to the background layer, background 0 removes it. Objects in the background layer are shown behind all objects in the default foreground layer. color [4 floats] Sets the color for the object box outline. fontface [int] Sets the type style used by the object. The options are: plain bold italic bold italic Possible values: 0 = 'regular' 1 = 'bold' 2 = 'italic' 3 = 'bold italic' fontname [symbol] Sets the object's font. fontsize [float] Sets the object's font size (in points). Possible values: '8' '9' '10' '11' '12' '13' '14' '16' '18' '20' '24' '30' '36' '48' '64' '72' hidden [int] (default: 0) Toggles whether an object is hidden when the patcher is locked. hint [symbol] Sets the text that will be displayed in as a pop-up hint when the user moves the mouse over the object in a locked patcher. ignoreclick [int] (default: 0) Toggles whether an object ignores mouse clicks in a locked patcher. patching_rect [4 floats] (default: 0. 0. 100. 0.) Sets the position and size of the object in the patcher window. position [2 floats] g/s(set) Sets the object's x and y position in both patching and presentation modes (if the object belongs to its patcher's presentation), leaving its size unchanged. presentation [int] (default: 0) Sets whether an object belongs to the patcher's presentation. presentation_rect [4 floats] (default: 0. 0. 0. 0.) Sets the x and y position and width and height of the object in the patcher's presentation, leaving its patching position unchanged. 
rect [4 floats] g/s(set) Sets the x and y position and width and height of the object in both patching and presentation modes (if the object belongs to its patcher's presentation). size [2 floats] g/s(set) Sets the object's width and height in both patching and presentation modes (if the object belongs to its patcher's presentation), leaving its position unchanged. textcolor [float] Sets the color for the object's text in RGBA format. textjustification [int] Text Justification Possible values: 0 = 'left' 1 = 'center' 2 = 'right' varname [symbol] Sets the patcher's scripting name, which can be used to address the object by name in pattr, scripting messages to thispatcher, and the js object. Messages bang In left inlet: a bang message can be used to trigger the output of the currently stored numerical list. A bang in the right two inlets is treated as a symbol. int Arguments input [int] In left inlet: The ASCII value of each of the digits of the number is stored internally and sent out the outlet as a list. float Arguments input [float] In left inlet: The ASCII value of each of the digits of the number is stored internally and sent out the outlet as a list. list Arguments input [list] Each int in the list is converted to ASCII as described above, and a space character (ASCII value 32) is inserted between items in the list. The middle inlet is used to append to the currently stored list, and the right inlet will set the contents of the internally stored list, without causing output. anything Arguments input [list] In left inlet: The ASCII value of each letter, digit, or other character in the symbol is stored internally and sent out the outlet as a list. In middle inlet: The ASCII value of each letter, digit, or other character in the symbol is appended to the currently stored list. No output is triggered. In right inlet: The ASCII value of each letter, digit, or other character in the symbol is stored internally, replacing the previously stored list, but not output. 
clear In left inlet: The clear message is used to clear the contents of the internally-stored numerical list. The word clear in the right two inlets is treated as a symbol. Output list The ASCII representation of the input is sent out as a list of integers. See Also Name Description itoa Convert integers to UTF-8 (Unicode) characters key Report keyboard presses keyup Report key information on release message Send any message regexp Use regular expressions to process input spell Convert input to UTF-8 (Unicode) codes sprintf Format a message of words and numbers
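The conversion atoi performs can be sketched in ordinary Python: each digit or character of the input becomes its ASCII/UTF-8 code, and items of a list are separated by a space character (code 32). This is an illustration of the behaviour described above, not Max code:

```python
def atoi(value):
    """Return the character codes of value; list items are joined by code 32."""
    if isinstance(value, (list, tuple)):
        parts = [str(item) for item in value]
        return [code for i, p in enumerate(parts)
                for code in ([32] if i else []) + [ord(ch) for ch in p]]
    return [ord(ch) for ch in str(value)]

print(atoi(74))      # [55, 52]       -> codes for '7' and '4'
print(atoi("hi"))    # [104, 105]
print(atoi([7, 4]))  # [55, 32, 52]   -> '7', space, '4'
```

The itoa object listed under "See Also" performs the inverse conversion, turning such code lists back into characters.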
Class: Capybara::Selenium::Node Inherits: Driver::Node show all Includes: Find, Scroll Defined in: lib/capybara/selenium/node.rb, lib/capybara/selenium/extensions/html5_drag.rb, lib/capybara/selenium/extensions/modifier_keys_stack.rb, lib/capybara/selenium/extensions/file_input_click_emulation.rb Direct Known Subclasses ChromeNode, EdgeNode, FirefoxNode, IENode, SafariNode Defined Under Namespace Modules: FileInputClickEmulation, Html5Drag Classes: ModifierKeysStack Instance Attribute Summary Attributes inherited from Driver::Node #driver, #initial_cache, #native Instance Method Summary collapse Methods included from Scroll #scroll_by, #scroll_to Methods included from Find #find_css, #find_xpath Methods inherited from Driver::Node #initialize, #inspect, #scroll_by, #scroll_to, #trigger Constructor Details This class inherits a constructor from Capybara::Driver::Node Instance Method Details #==(other) ⇒ Object 177 178 179 # File 'lib/capybara/selenium/node.rb', line 177 def ==(other) native == other.native end #[](name) ⇒ Object 25 26 27 28 29 # File 'lib/capybara/selenium/node.rb', line 25 def [](name) native.attribute(name.to_s) rescue Selenium::WebDriver::Error::WebDriverError nil end #all_textObject 16 17 18 19 20 21 22 23 # File 'lib/capybara/selenium/node.rb', line 16 def all_text text = driver.evaluate_script('arguments[0].textContent', self) text.gsub(/[\u200b\u200e\u200f]/, '') .gsub(/[\ \n\f\t\v\u2028\u2029]+/, ' ') .gsub(/\A[[:space:]&&[^\u00a0]]+/, '') .gsub(/[[:space:]&&[^\u00a0]]+\z/, '') .tr("\u00a0", ' ') end #click(keys = [], **options) ⇒ Object 103 104 105 106 107 108 109 110 111 112 113 114 115 # File 'lib/capybara/selenium/node.rb', line 103 def click(keys = [], **options) click_options = ClickOptions.new(keys, options) return native.click if click_options.empty? 
click_with_options(click_options) rescue StandardError => e if e.is_a?(::Selenium::WebDriver::Error::ElementClickInterceptedError) || e.message.match?(/Other element would receive the click/) scroll_to_center end raise e end #content_editable?Boolean Returns: • (Boolean) 173 174 175 # File 'lib/capybara/selenium/node.rb', line 173 def content_editable? native.attribute('isContentEditable') == 'true' end #disabled?Boolean Returns: • (Boolean) 166 167 168 169 170 171 # File 'lib/capybara/selenium/node.rb', line 166 def disabled? return true unless native.enabled? # WebDriver only defines `disabled?` for form controls but fieldset makes sense too find_xpath('self::fieldset/ancestor-or-self::fieldset[@disabled]').any? end #double_click(keys = [], **options) ⇒ Object 124 125 126 127 128 129 # File 'lib/capybara/selenium/node.rb', line 124 def double_click(keys = [], **options) click_options = ClickOptions.new(keys, options) click_with_options(click_options) do |action| click_options.coords? ? 
action.double_click : action.double_click(native) end end #drag_to(element, drop_modifiers: []) ⇒ Object 139 140 141 142 143 144 145 146 147 148 149 150 # File 'lib/capybara/selenium/node.rb', line 139 def drag_to(element, drop_modifiers: [], **) drop_modifiers = Array(drop_modifiers) # Due to W3C spec compliance - The Actions API no longer scrolls to elements when necessary # which means Seleniums `drag_and_drop` is now broken - do it manually scroll_if_needed { browser_action.click_and_hold(native).perform } # element.scroll_if_needed { browser_action.move_to(element.native).release.perform } element.scroll_if_needed do keys_down = modifiers_down(browser_action, drop_modifiers) keys_up = modifiers_up(keys_down.move_to(element.native).release, drop_modifiers) keys_up.perform end end #drop(*_) ⇒ Object Raises: • (NotImplementedError) 152 153 154 # File 'lib/capybara/selenium/node.rb', line 152 def drop(*_) raise NotImplementedError, 'Out of browser drop emulation is not implemented for the current browser' end #hoverObject 135 136 137 # File 'lib/capybara/selenium/node.rb', line 135 def hover scroll_if_needed { browser_action.move_to(native).perform } end #multiple?Boolean Returns: • (Boolean) 162 # File 'lib/capybara/selenium/node.rb', line 162 def multiple?; boolean_attr(self[:multiple]); end #obscured?(x: nil, y: nil) ⇒ Boolean Returns: • (Boolean) 185 186 187 188 189 190 # File 'lib/capybara/selenium/node.rb', line 185 def obscured?(x: nil, y: nil) res = driver.evaluate_script(OBSCURED_OR_OFFSET_SCRIPT, self, x, y) return true if res == true driver.frame_obscured_at?(x: res['x'], y: res['y']) end #pathObject 181 182 183 # File 'lib/capybara/selenium/node.rb', line 181 def path driver.evaluate_script GET_XPATH_SCRIPT, self end #readonly?Boolean Returns: • (Boolean) 161 # File 'lib/capybara/selenium/node.rb', line 161 def readonly?; boolean_attr(self[:readonly]); end #rectObject 192 193 194 # File 'lib/capybara/selenium/node.rb', line 192 def rect native.rect end 
#right_click(keys = [], **options) ⇒ Object 117 118 119 120 121 122 # File 'lib/capybara/selenium/node.rb', line 117 def right_click(keys = [], **options) click_options = ClickOptions.new(keys, options) click_with_options(click_options) do |action| click_options.coords? ? action.context_click : action.context_click(native) end end #select_optionObject 93 94 95 # File 'lib/capybara/selenium/node.rb', line 93 def select_option click unless selected? || disabled? end #selected?Boolean Also known as: checked? Returns: • (Boolean) 163 # File 'lib/capybara/selenium/node.rb', line 163 def selected?; boolean_attr(native.selected?); end #send_keys(*args) ⇒ Object 131 132 133 # File 'lib/capybara/selenium/node.rb', line 131 def send_keys(*args) native.send_keys(*args) end #set(value, **options) ⇒ Object Set the value of the form element to the given value. Parameters: • value (String) The new value • options (Hash{}) Driver specific options for how to set the value Options Hash (**options): • :clear (Symbol, Array) — default: nil The method used to clear the previous value nil => clear via javascript :none => append the new value to the existing value :backspace => send backspace keystrokes to clear the field Array => an array of keys to send before the value being set, e.g. [[:command, 'a'], :backspace] 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 # File 'lib/capybara/selenium/node.rb', line 56 def set(value, **options) if value.is_a?(Array) && !multiple? raise ArgumentError, "Value cannot be an Array when 'multiple' attribute is not present. Not a #{value.class}" end tag_name, type = attrs(:tagName, :type).map { |val| val&.downcase } @tag_name ||= tag_name case tag_name when 'input' case type when 'radio' click when 'checkbox' click if value ^ checked? 
when 'file' set_file(value) when 'date' set_date(value) when 'time' set_time(value) when 'datetime-local' set_datetime_local(value) when 'color' set_color(value) when 'range' set_range(value) else set_text(value, **options) end when 'textarea' set_text(value, **options) else set_content_editable(value) end end #style(styles) ⇒ Object 39 40 41 42 43 # File 'lib/capybara/selenium/node.rb', line 39 def style(styles) styles.each_with_object({}) do |style, result| result[style] = native.css_value(style) end end #tag_nameObject 156 157 158 # File 'lib/capybara/selenium/node.rb', line 156 def tag_name @tag_name ||= native.tag_name.downcase end #unselect_optionObject 97 98 99 100 101 # File 'lib/capybara/selenium/node.rb', line 97 def unselect_option raise Capybara::UnselectNotAllowed, 'Cannot unselect option from single select box.' unless select_node.multiple? click if selected? end #valueObject 31 32 33 34 35 36 37 # File 'lib/capybara/selenium/node.rb', line 31 def value if tag_name == 'select' && multiple? native.find_elements(:css, 'option:checked').map { |el| el[:value] || el.text } else native[:value] end end #visible?Boolean Returns: • (Boolean) 160 # File 'lib/capybara/selenium/node.rb', line 160 def visible?; boolean_attr(native.displayed?); end #visible_textObject 12 13 14 # File 'lib/capybara/selenium/node.rb', line 12 def visible_text native.text end
Jennifer Marsman
Machine Learning, Big Data, Azure, and Windows Development

Multitouch Part 1: Getting Started with Multitouch in Windows 7

This is the first post in a week-long series on multitouch in Windows 7.

What is multitouch? Most everyone is comfortable with using a mouse to navigate on a computer.  An alternate form of user input is touch.  For some time, we have had touch machines (such as tablets) that allow us to substitute one finger on the screen for the mouse.  Multitouch takes this paradigm one step further: the computer can recognize and respond to multiple touch points at the same time (for instance, multiple fingers on the screen).  Multitouch is a huge part of the NUI (Natural User Interface) movement.

What machines support multitouch? I own the HP TouchSmart tx2 and the Acer machine that was given away at the Microsoft PDC in 2009.  There are also multitouch machines made by Dell, Toshiba, and Lenovo.  The number of touch points is dependent on the hardware and drivers that you are using.  With the drivers I’m using, I get 4 touch points on the HP TouchSmart and 2 touch points on the Acer machine.

In what scenarios is multitouch compelling?  In the consumer space, multitouch rocks for navigating the web, viewing photos, playing casual games, consuming music and video, and navigating files/arranging windows.  Also, my personal favorite use for multitouch is in navigating maps.  I travel a lot for my job and check maps for driving directions frequently.  Panning around on a map using my fingers makes the map easier to explore, and using the “pinch” gesture to zoom out is so much more intuitive than Shift+Click or searching for a “Zoom Out” button.  In the enterprise space, multitouch is compelling in kiosk scenarios, manufacturing plants (where using keyboards is difficult due to wearing heavy gloves), retail displays, and hotel and airport checkins.
How do I set up multitouch on my machine? Your experience will vary based on your machine’s manufacturer, but they should always include the multitouch drivers.  (If, for some crazy reason, you have a multitouch machine without the multitouch drivers, contact the manufacturer.  You can also try downloading them from here, but I am not responsible if you hose your machine.)  Now, open up the Pen and Touch settings in the Control Panel.  (Click the Start button, Control Panel, “Hardware and Sound”, and then “Pen and Touch”.  Alternatively, you can type “Pen and Touch” in the search box.)  Click on the “Touch” tab.  Ensure that “Enable multi-touch gestures and inking” is checked.

[Screenshot: the Touch tab of the Pen and Touch settings dialog]

How do I test that multitouch is working? The official way: in the Start Menu, right-click on Computer and select “Properties”.  Near the bottom of the “System” section, you should see a “Pen and Touch” field which will tell you that it’s available and the number of touch points that you have.

[Screenshot: the “Pen and Touch” field in the System properties]

The fun way: open up Paint and swipe your hand across the screen with all 5 fingers spread out, so that each of your five fingers drags across the screen at the same time.  The number of lines that are drawn is the number of touch points supported.

Does Windows 7 support gestures? Gestures are known motions with a single or multiple fingers.  Out of the box, Windows 7 recognizes many pre-defined gestures:

• Pan (also called Translate) – put a finger or fingers down and drag
• Rotate – touch with two fingers at opposite ends and turn fingers in a circle
• Zoom – touch with two fingers at opposite ends and move the fingers closer or further apart
• Tap – touching and quickly lifting with a finger; equivalent to a click
• Double-tap – quickly tapping twice; equivalent to a double-click
• Press and tap (also called Finger Roll) – place one finger on the screen, place second finger on the screen, lift the second finger immediately, and then lift the first finger.
This is essentially holding one finger down while tapping with a second finger.  This gesture, by default, is equivalent to a right-click.

You can also create your own custom gestures.

Are there APIs to code against multitouch input? Yes, my friends, there are.  In all of the documentation on multitouch development in Windows 7, you will see information on the three “levels” of multitouch development (which are called Good, Better, and Best).

The “Good” level is the support for multitouch that Windows 7 provides out of the box, with no extra coding required.  For example, you can use the “flick” gesture to scroll wherever there is a scroll bar.  You can use the press and tap gesture (described above) to right-click in any application.

The “Better” level is the support for coding with gestures.  At this level, you can code your application to respond to gestures like Rotate and Zoom using the Touch APIs.

The “Best” level is the support for coding at the raw touch input level.  For example, you can create a finger-painting application where each touch leaves a mark on the screen, or you can create custom gestures.  At this level, you can code your application to respond to each finger placed on it and/or finger lifted using the Touch APIs (similar to MouseDown etc.).

Stay tuned for tomorrow’s post, when we will look at gesture support in more depth and write some code.

Other blog posts in this series:
Multitouch Part 1: Getting Started with Multitouch in Windows 7
Multitouch Part 2: Support for Gestures in Windows 7
Multitouch Part 3: Multitouch in managed code and WPF
Multitouch Part 4: Multitouch in Silverlight
Multitouch Part 5: User Experience with Multitouch
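As a rough aside on the geometry behind one of these gestures: a zoom (pinch) ultimately reduces to comparing the current distance between the two touch points against the distance when the gesture began. A generic Python sketch of that idea (illustration only; this is plain geometry, not the Windows Touch API, and the function name is invented):

```python
import math

def zoom_factor(start_a, start_b, now_a, now_b):
    """Ratio of current finger separation to starting separation.
    Values > 1 mean the fingers moved apart (zoom in);
    values < 1 mean they moved together (zoom out)."""
    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])
    return dist(now_a, now_b) / dist(start_a, start_b)

# Fingers start 100 px apart and end 200 px apart: a 2x zoom in.
print(zoom_factor((0, 0), (100, 0), (0, 0), (200, 0)))
```

A gesture-level API hands the application numbers like this directly, which is why the "Better" level is so much less work than tracking raw touch points yourself.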
www.webdeveloper.com
Results 1 to 14 of 14
Thread: [RESOLVED] UPDATE using Table Fields?

1. #1
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Hi, I have an UPDATE question regarding MySQL and PHP. I have a field that needs to be updated in all rows, in excess of 900 rows, called PRICE. I calculate the new price value and want to update the PRICE field, but also the FACTORPRICE field with a value passed in from a FORM.

QUERY: UPDATE cookers SET PRICE=$newPrice, FACTORPRICE=$factorPrice

$newPrice = ((WEIGHT * $factorPrice) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2;
$factorPrice = formValue;

uppercase names = table fields

Can this UPDATE operation be created without the need of a previous SELECT statement and looping through the entire resultset, updating row by row?

Thanks, Barton.

2. #2
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

You should be able to run it almost exactly as you've posted it ...

Code:
UPDATE cookers
SET PRICE=((WEIGHT * $factorPrice) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2,
    FACTORPRICE=$factorPrice;

Obviously, you want to sanitize and enquote your form values first (make sure they're numbers before interpolation).

3. #3
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Hi, thanks, a neat solution; it's amazing what you can learn. I've just found the killer: $factorPrice is variable depending upon the SKU. Am I correct in assuming that this elegant solution will not work? Sorry for not seeing that before, and thanks again, Barton.

4. #4
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

I'm not sure. What does your SKU look like? And what is the algorithm that incorporates it into $factorPrice?

5. #5
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Hi again,

SKU = ModelNumber-SKUID00-0000 (0000-nn00-0000)

my SKUID can be any of '01','05','18','28' or '66', and the factorprice is an array with the key as the SKUID.
$sSKUID = getIdFromSKU(SKU);
$newPrice = ($_POST['factorprice'][$sSKUID] * WEIGHT) + (RINGQTY * 8) + $iAuxCosts;

Hope that makes sense, thanks, Barton.

6. #6
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

Could I see the code you're currently using, starting from the capture of form information to the execution of your queries?

7. #7
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

... encased in the appropriate CODE tags, please.

8. #8
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Hello again, code as requested sir. FactorPrice is now jcprice.

HTML Code:
<div id="wrapper">
  <h2>Reprice Items based on JCPrice</h2>
  <!-- Reprice Form -->
  <form action="" id="frmReprice" onsubmit="return validateForm(this);" method="post">
    <fieldset>
      <legend>Enter Repricing Information</legend><br />
      <ol class="forms">
        <li><label for="txtProduct">Product Prices to Update:</label>
          <!-- Decides which table name to UPDATE -->
          <select name="txtProduct" id="txtProduct">
            <option value="-1" selected="selected">Please Select...</option>
            <?php
            $selected = (isset($_POST['txtProduct'])) ? $_POST['txtProduct'] : '-1';
            print createOptionFromArray($arrProducts, $selected);
            ?>
          </select>
        </li>
        <li><label for="jcprice1">JCPrice - 1<?php echo (isset($currency)) ? ' ('.$currency.')' : ''; ?></label>
          <input type="text" name="jcprice[01]" id="jcprice[01]" value="40.90" />
        </li>
        <li><label for="jcprice2">JCPrice - 28<?php echo (isset($currency)) ? ' ('.$currency.')' : ''; ?></label>
          <input type="text" name="jcprice[28]" id="jcprice[28]" value="27.21" />
          <input type="hidden" name="jcprice[18]" id="jcprice[18]" value="27.21" />
          <input type="hidden" name="jcprice[05]" id="jcprice[05]" value="27.21" />
        </li>
        <li><label for="jcprice3">JCPrice - 66<?php echo (isset($currency)) ? ' ('.$currency.')' : ''; ?></label>
          <input type="text" name="jcprice[66]" id="jcprice[66]" value="10.54" />
        </li>
        <li><label>&nbsp;</label>
          <input type="hidden" name="submitted" id="submitted" value="true"/>
          <input class="button" type="submit" name="butReprice" id="butReprice" value="Reprice"/>
        </li>
        <li><label name="txtStatusText" id="txtStatusText" style="width:90%; color:navy; text-align:center; padding-bottom:10px"><?php echo (isset($_GET['result'])) ? $_GET['result'] : $result; ?></label></li>
      </ol>
    </fieldset>
    <div style="clear:both"></div>
  </form>
</div>

PHP Code:
$arrProducts = array(
    'Table01' => 'Table 01',
    'Table28' => 'Table 28',
    'Table66' => 'Table 66'
);

/* process for posted data */
if (isset($_POST['submitted'])) {
    $sFilename = "pricing.inc";
    if (file_exists($sFilename)) { include_once($sFilename); }

    // MySQL Server Connection Details
    //$dbTestDBNameExtension = "MTEST";
    $dbDatabase = "MyDB";            // The master database being connected to
    $dbTable = $_POST['txtProduct']; // Table to update pricing in this Table

    // Load the database connection information
    $sFilename = "connection.inc";
    if (file_exists($sFilename)) { require($sFilename); }

    // UPDATE row data - was in a SELECT * from Table and iterate through...
    // New PRICE = ((WEIGHT * JCPRICE) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2;
    $sMaterialID = getMetalIdFromSKU($tableRow['SKU']);
    $newPrice = round((($_POST['jcprice'][$sMaterialID] * $tableRow['WEIGHT']) + getAuxilaryCosts($tableRow['RINGQTY'])) * 2);

The above is the meat of the code.

9. #9
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

It looks like each row in the database is affected by a distinct POST variable. So, if you're looking at updating less than 100 or so records, you may as well perform the series of little updates. However, if you're looking at a significantly greater number of rows, use the POST data to populate a temporary table to JOIN to/from in a single update statement.
Code:
update prices p
left join temp_table tt on (p.some_id = tt.pre_extracted_id)
set p.price = ...;

10. #10
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Thank you once again... Just having another thought after you said about a series of UPDATEs. Could the UPDATE have a WHERE to find the records with the same SKUID? As this is the only variable, and there are only 5 possible SKUIDs. But is the WHERE clause flexible enough?

SKU = ModelNumber-SKUID00-0000 (0000-nn00-0000)
SKUID can be any of '01','05','18','28' or '66'.

Could this be a solution? Sorry to be a pain, Barton.

11. #11
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

Sure. Something like this?

Code:
UPDATE tablename
set this=if(skuid=1,some_expression1,if(skuid=2,some_expression2,NULL))
where skuid in (1,2);

I have to be honest, I'm a bit confused about what you're really trying to do here--so I'm having a tough time making the appropriate suggestions ...

12. #12
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Not surprising lol

SPEC: I need to reprice every item in the database Table, and the price is based on the following:

$newPrice = ((WEIGHT * $factorPrice) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2;

$iAuxilaryCosts is a constant value of £92. $factorPrice is an array of ratios, indexed by the SKU-ID, which is part of the item SKU: SKU is defined as ModelNumber-SKUID00-0000 (0000-nn00-0000), and is already stored in the record. I just need to derive the SKU ID from the SKU value in the field. SKU ID can be any of '01','05','18','28' or '66'. I hope that helps? So it would be nice to create 5 UPDATE statements based on the SKU ID, as that is the only variable. Hope that clears up my self-inflicted mire, Barton.

13. #13
Join Date: Jan 2007
Location: Wisconsin
Posts: 2,120

So, would the following 5 queries do what you need?
Code:
UPDATE cookers SET PRICE=((WEIGHT * $factorPrice['01']) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2, FACTORPRICE=$factorPrice['01'] WHERE SUBSTRING(SKU, S, 2) = '01';
UPDATE cookers SET PRICE=((WEIGHT * $factorPrice['05']) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2, FACTORPRICE=$factorPrice['05'] WHERE SUBSTRING(SKU, S, 2) = '05';
UPDATE cookers SET PRICE=((WEIGHT * $factorPrice['18']) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2, FACTORPRICE=$factorPrice['18'] WHERE SUBSTRING(SKU, S, 2) = '18';
UPDATE cookers SET PRICE=((WEIGHT * $factorPrice['28']) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2, FACTORPRICE=$factorPrice['28'] WHERE SUBSTRING(SKU, S, 2) = '28';
UPDATE cookers SET PRICE=((WEIGHT * $factorPrice['66']) + ($iAuxilaryCosts + (RINGQTY * 8))) * 2, FACTORPRICE=$factorPrice['66'] WHERE SUBSTRING(SKU, S, 2) = '66';

... where S is the start location of the SKUID in the string, which will need to be derived using something like LOCATE if the model number is variable length. Also, bear in mind, of course, that matching on a substring like this will not be able to take advantage of any indexes--each query will cause a table-scan (which is probably OK if you're dealing with a few thousand rows or less).

14. #14
Join Date: Mar 2007
Location: Cotswolds, England
Posts: 105

Thank you so much... Worked like a charm, and the operation completed in a fraction of the original query time. Thank you, Barton
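For readers following along, the pricing formula agreed in this thread is easy to sanity-check outside of SQL. A small Python sketch (illustrative only; the factor values are the defaults from Barton's form, the £92 auxiliary cost is from post #12, and the SKU parsing assumes the 0000-nn00-0000 layout described above):

```python
AUX_COSTS = 92  # constant auxiliary cost in pounds, from post #12
FACTOR_PRICE = {'01': 40.90, '05': 27.21, '18': 27.21, '28': 27.21, '66': 10.54}

def new_price(sku, weight, ring_qty):
    """Price formula from the thread:
    ((WEIGHT * factor) + (AUX_COSTS + RINGQTY * 8)) * 2,
    with the factor keyed by the two-digit SKU id inside the SKU."""
    sku_id = sku.split('-')[1][:2]   # '0000-2800-0000' -> '28'
    factor = FACTOR_PRICE[sku_id]
    return round(((weight * factor) + (AUX_COSTS + ring_qty * 8)) * 2)

print(new_price('0000-6600-0000', weight=10.0, ring_qty=2))
```

Running the same formula over a handful of known rows in Python first is a cheap way to confirm the five UPDATE statements are pricing items the way the spreadsheet expects before touching 900+ rows.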
Ehcache utility class (for standalone use in a Java project)

Shared by xbbg on 2013-11-26. Author: HJ. Created 2013-11-13 14:43:00, last modified 2013-11-13 14:43:00. 2,653 characters.

Summary: Some people online complain that the available Ehcache examples are not very detailed, so I wrote an Ehcache utility class for standalone use in a Java project. There are no deliberate traps left in the code; if there are other mistakes, please judge for yourselves. The concrete project layout on my machine is as shown below.

The main API docs are:

http://ehcache.org/apidocs/net/sf/ehcache/Cache.html
http://ehcache.org/apidocs/net/sf/ehcache/Element.html
http://ehcache.org/apidocs/net/sf/ehcache/CacheManager.html

The Ehcache configuration file ehcache.xml is as follows:

The utility class is as follows:

package util;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

/**
 * @author HJ
 **/
public class EhcacheUtil {

    // path to the configuration file
    private static final String appointPath = "src/ehcache.xml";

    private CacheManager cacheManager;
    private static EhcacheUtil ehcacheUtil;

    private EhcacheUtil(String appointPath) {
        cacheManager = CacheManager.create(appointPath);
    }

    public static EhcacheUtil getInstance() {
        if (ehcacheUtil == null) {
            ehcacheUtil = new EhcacheUtil(appointPath);
        }
        return ehcacheUtil;
    }

    /**
     * Put a key-value pair into the cache.
     **/
    public void put(String cacheName, Object key, Object value) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache == null) {
            /**
             * This class is meant for standalone use in a Java project, to keep
             * commonly used, rarely updated data in a cache. So when constructing
             * the Cache:
             * the second argument is the maximum number of objects held in memory
             * (here 10000000);
             * the third allows overflow to disk once memory is full;
             * the fourth marks the cache as eternal;
             * the last two mean each Element lives forever.
             **/
            cache = new Cache(cacheName, 10000000, true, true, 0, 0);
            cacheManager.addCache(cache);
        }
        Element element = new Element(key, value);
        cache.put(element);
    }

    /**
     * Get the value of an Element by key.
     **/
    public Object getElement(String cacheName, Object key) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache != null) {
            Element element = cache.get(key);
            return element == null ? null : element.getObjectValue();
        }
        return null;
    }

    /**
     * Get a Cache by name.
     **/
    public Cache getCache(String cacheName) {
        return cacheManager.getCache(cacheName);
    }

    /**
     * Remove a single key-value pair.
     **/
    public void removeElement(String cacheName, Object key) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache != null) {
            cache.remove(key);
        }
    }

    /**
     * Remove a whole cache.
     **/
    public void removeCache(String cacheName) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache != null) {
            cacheManager.removeCache(cacheName);
        }
    }
}

The test class is as follows:

package util;

public class Main {

    /**
     * @author HJ
     */
    public static void main(String[] args) {
        EhcacheUtil.getInstance().put("memory-1", "key-11", "value-11");
        EhcacheUtil.getInstance().put("memory-1", "key-12", "value-12");
        EhcacheUtil.getInstance().put("memory-2", "key-21", "value-21");
        EhcacheUtil.getInstance().put("memory-2", "key-22", "value-22");

        String value = EhcacheUtil.getInstance().getElement("memory-2", "key-21").toString();
        System.out.println(value);

        value = EhcacheUtil.getInstance().getElement("memory-1", "key-11").toString();
        System.out.println(value);

        EhcacheUtil.getInstance().removeElement("memory-2", "key-21");
        System.out.println(EhcacheUtil.getInstance().getElement("memory-2", "key-21"));
    }
}
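The shape of EhcacheUtil (a lazily created singleton manager holding named caches with put/get/remove operations) can be sketched in a few lines of Python. This is only an illustration of the pattern, not Ehcache itself: it has no eviction, disk overflow, or TTL handling, and the class name is invented:

```python
class CacheManagerSketch:
    """Rough Python analogue of the utility class above:
    named caches, created lazily on first put, behind a singleton."""
    _instance = None

    def __init__(self):
        self._caches = {}

    @classmethod
    def get_instance(cls):
        # same lazy-singleton idea as EhcacheUtil.getInstance()
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def put(self, cache_name, key, value):
        # create the named cache on first use, like the null check in put()
        self._caches.setdefault(cache_name, {})[key] = value

    def get_element(self, cache_name, key):
        return self._caches.get(cache_name, {}).get(key)

    def remove_element(self, cache_name, key):
        self._caches.get(cache_name, {}).pop(key, None)

    def remove_cache(self, cache_name):
        self._caches.pop(cache_name, None)

mgr = CacheManagerSketch.get_instance()
mgr.put('memory-1', 'key-11', 'value-11')
print(mgr.get_element('memory-1', 'key-11'))
```

What Ehcache adds over a bare dictionary is exactly the part the Cache constructor arguments configure: bounded memory, overflow to disk, and per-Element lifetimes.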
WMI Performance Adapter (wmiApSrv) Service Defaults in Windows 7 Provides performance library information from Windows Management Instrumentation (WMI) providers to clients on the network. This service only runs when Performance Data Helper is activated. Default Settings Startup type:Manual Display name:WMI Performance Adapter Service name:wmiApSrv Service type:own Error control:normal Object:localSystem Path:%SystemRoot%\system32\wbem\WmiApSrv.exe Registry key:HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wmiApSrv Default Behavior WMI Performance Adapter is a Win32 service. In Windows 7 it won't be started if the user doesn't start it. When the WMI Performance Adapter service is started, it is running as localSystem in its own process of WmiApSrv.exe. If WMI Performance Adapter fails to start, Windows 7 attempts to write the failure details into Event Log. Then Windows 7 startup should proceed and the user should be notified that the wmiApSrv service is not running because of the error. Restore Default Startup Configuration for WMI Performance Adapter 1. Run the Command Prompt as an administrator. 2. Copy the command below, paste it into the command window and press ENTER: sc config wmiApSrv start= demand 3. Close the command window and restart the computer. The wmiApSrv service is using the WmiApSrv.exe file that is located in the %WinDir%\system32\wbem folder. If the file is changed, damaged or deleted, you can restore its original version from Windows 7 installation media.
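For scripting around services like this one, it helps to know how the startup types map to the numeric Start value stored under the service's registry key (here HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wmiApSrv). A small Python sketch of that standard mapping (illustration only; it does not touch the registry, and the function name is invented):

```python
# Standard "Start" values used under
# HKLM\SYSTEM\CurrentControlSet\Services\<name> on Windows:
START_VALUES = {
    'boot': 0,
    'system': 1,
    'auto': 2,      # Automatic
    'demand': 3,    # Manual -- what `sc config wmiApSrv start= demand` sets
    'disabled': 4,
}

def expected_start_value(startup_type: str) -> int:
    """Registry Start value corresponding to an `sc config start=` keyword."""
    return START_VALUES[startup_type.lower()]

print(expected_start_value('demand'))
```

So after running the restore command above, the Start value under the wmiApSrv key should read 3, which is a quick way to verify the change took effect.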
E410 Autopilot error Hello, I have a cnPilot E410 with firmware version 3.11.4-r9. I restored factory default settings, and tried to activate the Autopilot mode. Right after I selected the Autopilot "Master" option and clicked the Save button, the E410 became completely inaccessible. I can't even ping it using zeroconf IP (169.24.x.x). The only way I can restore the access to the E410 is by doing a factory reset. But when I try to activate the Autopilot option again, the same thing happens. Can someone help me on this? Thank you. As we are not seeing this issue in our lab and to narrow down the root cause of the issue can you please provide below requested info  1. Any Static IP is configured on device before enabling Auto-Pilot mode and after device factory reset ? 2. When the device is in factory resetted could you please download the tech support of AP 1. Download Techsupport option is available under "Options Tab" * Any Static IP is configured on device before enabling Auto-Pilot mode and after device factory reset ? I've tried when the device is fresh from factory reset (DHCP), and also after I've set a static IP (10.0.1.xx), and the issue appeared in both cases. * When the device is in factory resetted could you please download the tech support of AP Sure, here it is. Thank you. Hello, I've just got my second unit of cnPilot E410, and unfortunately, it also has the same issue regarding the autopilot. This time I didn't update the original firmware and leave it as-is at version 3.11.3-r7. Here's the techsupport file.
RuntimeError: Expected object of scalar type Long but got scalar type Float when using CrossEntropyLoss

I have a NN that ends with the following linear layer:

dense = nn.Linear(input_size, 1)

If I use CrossEntropyLoss as the loss function (as y is supposed to be the class number) I get the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-39-72a754e03ca3> in <module>()
1 lr = 2e-2
2 learner = SimpleLearner([train_dl, test_dl], model, loss_func)
----> 3 history = learner.fit(10)

<ipython-input-37-121ec7440a76> in fit(self, epochs, lr)
26 losses = []
27 for x,y in self.data[0]:
---> 28 losses.append(self.update(x, y, lr))
29 history['losses'].append(np.mean(losses))
30 return history

<ipython-input-37-121ec7440a76> in update(self, x, y, lr)
10 for p in model.parameters(): w2 += (p**2).sum()
11 # add to regular loss
---> 12 loss = loss_func(y_hat, y) + w2 * self.wd
13 loss.backward()
14 with torch.no_grad():

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
477 result = self._slow_forward(*input, **kwargs)
478 else:
--> 479 result = self.forward(*input, **kwargs)
480 for hook in self._forward_hooks.values():
481 hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
865 def forward(self, input, target):
866 return F.cross_entropy(input, target, weight=self.weight,
--> 867 ignore_index=self.ignore_index, reduction=self.reduction)
868
869

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
1778 if size_average is not None or reduce is not None:
1779 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1780 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
1781
1782

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight,
size_average, ignore_index, reduce, reduction) 1623 .format(input.size(0), target.size(0))) 1624 if dim == 2: -> 1625 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1626 elif dim == 4: 1627 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' What should be the loss (similarly for the accuracy) function that I should be using? if CrossEntropyLoss is the good one that should I do with the output of the linear layer? The output layer should have the number of classes as out_features. Currently your output layer only returns one neuron, which corresponds to class0. For a binary use case, this should work: batch_size = 5 nb_classes = 2 in_features = 10 model = nn.Linear(in_features, nb_classes) criterion = nn.CrossEntropyLoss() x = torch.randn(batch_size, in_features) target = torch.empty(batch_size, dtype=torch.long).random_(nb_classes) output = model(x) loss = criterion(output, target) loss.backward() However, this doesn’t seem to be the error you are seeing here. As you can see in my example, target should be of type torch.long. Try to fix the shapes and call target = target.long() to transform the data type. Alternatively, you could return just one neuron and use nn.BCEWithLogitsLoss as your criterion. This would also work with a float target. 9 Likes I figured the problem, I was creating the target tensor and passing float as dtype :disappointed_relieved: the following fixed the issue. y_tensor = torch.tensor(y_train, dtype=torch.long, device=device) 5 Likes target should be of type torch.long . Why does the type of target have to be torch.long ? Thank you! 
The target should be a LongTensor using nn.CrossEntropyLoss (or nn.NLLLoss), since it is used to index the output logit (or log probability) for the current target class as shown in this formula (note the indexing in x[class]). 2 Likes I’ve specified the dtype to be torch.long, and still the error pops out, but not in nn.CrossEntropyLoss in the prediction line, here is the error: RuntimeError Traceback (most recent call last) in ----> 1 train(model,train_loader,valid_loader) in train(model, train_dataloader, valid_dataloader, epochs, lr) 18 #print(len(target[0][‘label’])) 19 optimizer.zero_grad() —> 20 output = model(data,target) 21 print(‘check’) 22 loss = criterion(output,target) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: –> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets) 49 if isinstance(features, torch.Tensor): 50 features = OrderedDict([(0, features)]) —> 51 proposals, proposal_losses = self.rpn(images, features, targets) 52 detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets) 53 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: –> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torchvision/models/detection/rpn.py in forward(self, images, features, targets) 413 losses = {} 414 if self.training: –> 415 labels, matched_gt_boxes = self.assign_targets_to_anchors(anchors, targets) 416 
regression_targets = self.box_coder.encode(matched_gt_boxes, anchors) 417 loss_objectness, loss_rpn_box_reg = self.compute_loss( /opt/conda/lib/python3.6/site-packages/torchvision/models/detection/rpn.py in assign_targets_to_anchors(self, anchors, targets) 272 for anchors_per_image, targets_per_image in zip(anchors, targets): 273 gt_boxes = targets_per_image[“boxes”] –> 274 match_quality_matrix = self.box_similarity(gt_boxes, anchors_per_image) 275 matched_idxs = self.proposal_matcher(match_quality_matrix) 276 # get the targets corresponding GT for each proposal /opt/conda/lib/python3.6/site-packages/torchvision/ops/boxes.py in box_iou(boxes1, boxes2) 130 area2 = box_area(boxes2) 131 –> 132 lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] 133 rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] 134 RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 ‘other’ I’m not familiar with your code, but it seems boxes1 is passed as a LongTensor, while boxes2 is a FloatTensor. Could this be the case? If so, you could call float() in the first or long() on the second argument, depending on your expected result. thanks for your response, I was wondering why there is boxes 1 and boxes 2, my input is about an image with a lot of labels in it and each label have 4 values for it box, so actually each label have one box only, so what does boxes 2 is for, if you have any idea in general?? It is helpful for me
__label__pos
0.987215
# This is an automatically generated code sample. # To make this code sample work in your Oracle Cloud tenancy, # please replace the values for any parameters whose current values do not fit # your use case (such as resource IDs, strings containing ‘EXAMPLE’ or ‘unique_id’, and # boolean, number, and enum parameters with values not fitting your use case). import oci # Create a default config using DEFAULT profile in default location # Refer to # https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm#SDK_and_CLI_Configuration_File # for more info config = oci.config.from_file() # Initialize service client with default config file waas_client = oci.waas.WaasClient(config) # Send the request to service, some parameters are not required, see API # doc for more info list_whitelists_response = waas_client.list_whitelists( waas_policy_id="ocid1.test.oc1..<unique_ID>EXAMPLE-waasPolicyId-Value", opc_request_id="F9PDK13DANNNWEVEXOCB<unique_ID>", limit=60, page="EXAMPLE-page-Value") # Get the data from response print(list_whitelists_response.data)
__label__pos
0.856069
Working with lines in three.js This month I have been working towards developing a solid understanding of the basics of three.js as it is a great project that helps with everything, and anything 3d in a javaScript environment. As such it was only a matter of time until I would get around to working out a few quick demos about how to work with lines in three.js. Doing so is not that hard at all, and can quickly become very fun allowing me to draw in 3d. There is only so much to write about with the Line, and LineSegments constructors in three.js, so to help keep this post from being to thin I will also be writing about LineLoop, Line3, and the Materials that can be used with Lines including the LineBasicMatreial and LineDashedMaterial. There is also the Path constructor that can be used to make 2d shapes, making it similar to the 2d canvas drawing context. So there is a great deal to know about when it comes to making lines in three.js for both 3d, and 2d actually. I say that because there is also drawing lines in a 2d canvas using the 2d drawing context, and then using that as a way to skin the faces of a geometry. However in this post I will be briefly covering the Line Constructor and topics closely related to that. 1 - What you should know before hand This is a post on just one little aspect of three.js which is a javaScript project that allows for doing things involving solid geometry. It is not a getting started post on three.js, or any additional aspects of javaScript in general that are required in order to work with the library. You will want to know about the Vector3 constructor as that is what is used to define points in 3d space in three.js. You should be aware of Materials, Cameras, Renderer’s, and the Scene that are all needed to make a three.js project. 1.1 - Version Numbers matter As I say in every three.js post of mine, three.js is a project where the version number matters big time. 
When I first wrote this post I was using three.js 0.91.0 ( or just r91 for short ), and the last time I edited the post I was using three.s r127. Sense then many code breaking changes have happened in three.js with all sorts of things, and when it comes to lines the geometry now has to be an Instance of Buffer Geometry. 1.2 - A word On Materials when working with lines. If you are just making lines, and nothing that will compose a solid object or face, then it does not make sense to use a material that is designed to be used with something that is just a string of points in space. So if you aim to just draw some lines, and not something that will compose a solid object there are two special materials in three.js that are intended to be used with just lines. There materials are the LineBasicMaterial, and the LineDashedMaterial materials. 1.3 - Using the Dashed Line material If you are trying to use the dashed line material rather than the basic material, but are scratching your head wondering why it is that it is not dashed, then changes are you have not called a 1 2 3 4 5 6 7 8 9 var line = new THREE.Line(geometry, new THREE.LineDashedMaterial({ color: 0x0000ff, linewidth: 3, scale: .1, dashSize: .3, gapSize: .1 })); line.computeLineDistances(); scene.add(line); Certain properties such as the line width might not work as expected on all platforms, as such it might be best to always expect a width of only 1, or at least be happy with how it looks when it is just 1. 1.4 - The Line, and LineSegments Constructors One of the best ways to go about getting started with lines in three.js is to just use the Line constructor. There is also the LineSegments constructor that works pretty much the same way only it uses a different rendering method. A basic example of one of these would be to just create a geometry, push points to an array, and then use that geometry with a line material to create an instance of Line that can then be added to a scene. 
However the process of doing so has changed a little when it comes to more recent versions of three.js.

1.4.1 - Using the BufferGeometry constructor

In general I will want to use the BufferGeometry constructor to create the geometry of a line. In fact in late versions of three.js this is the only way to do so now.

var points = [];
points.push(
    new THREE.Vector3(-10, -10, 0),
    new THREE.Vector3(10, 0, 0),
    new THREE.Vector3(-10, 10, 0));
var geometry = new THREE.BufferGeometry().setFromPoints(points);
// CREATE THE LINE
var line = new THREE.Line(
    geometry,
    new THREE.LineBasicMaterial({
        color: 0x0000ff
    }));

1.4.2 - Using the Geometry constructor (removed as of r125+)

When I first wrote this post I was using r91 of three.js, and back then I could make lines by using the Geometry constructor. I guess I can still leave these examples up, but I will of course have to make it clear that code like this will break on recent versions of three.js, unless you can bring back the Geometry constructor by some kind of means involving additional external files.

var geometry = new THREE.Geometry();
geometry.vertices.push(
    new THREE.Vector3(0, -10, 0),
    new THREE.Vector3(10, 0, 0),
    new THREE.Vector3(0, 10, 0));
scene.add(new THREE.Line(geometry, new THREE.LineBasicMaterial({
    color: 0x0000ff
})));

2 - Full basic line demo examples

As with any three.js example that is fully complete there must be a scene, camera, and renderer on top of the use of the Line constructor, geometry, and line materials. In this section I will be going over a few basic hello world style examples that are fully working and take everything into account.

2.1 - First off a new three.js r127 example using BufferGeometry

If I am using a late version of three.js that is r125 or higher I have to use the BufferGeometry constructor for the geometry of the line, as the old Geometry constructor has been removed from that point forward.
So then the first thing I need to do is create an array and then use the Vector3 class to create the points that I want for the line. After that I can use the setFromPoints method of a BufferGeometry instance to create an instance of buffer geometry with this array of points. The resulting geometry can then be used with the THREE.Line constructor by passing the geometry as the first argument, followed by the kind of line material that I want to use. Once I have my instance of THREE.Line I can then add it to a scene, then create a camera and a renderer, and use the scene and camera with the renderer just like any other example.

(function () {
    // Scene
    var scene = new THREE.Scene();
    var points = [];
    points.push(
        new THREE.Vector3(-10, -10, 0),
        new THREE.Vector3(10, 0, 0),
        new THREE.Vector3(-10, 10, 0));
    var geometry = new THREE.BufferGeometry().setFromPoints(points);
    // CREATE THE LINE
    var line = new THREE.Line(
        geometry,
        new THREE.LineBasicMaterial({
            color: 0x0000ff
        }));
    scene.add(line);
    // Camera
    var camera = new THREE.PerspectiveCamera(45, 4 / 3, .5, 100);
    camera.position.set(0, 0, -30);
    camera.lookAt(0, 0, 0);
    // Render
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(649, 480);
    document.getElementById('demo').appendChild(renderer.domElement);
    renderer.render(scene, camera);
}());

2.2 - My old r91 example using the now removed Geometry constructor (as of r125+)

If I am using an older version of three.js, or can somehow get the old Geometry constructor on a new version of three.js, I can create the geometry that way. Aside from that there is not much of any difference when it comes to everything else. I can not say that I will be creating actual projects like this any more, but I thought I should leave this up for historical reasons.
(function () {
    // Scene
    var scene = new THREE.Scene();
    // Camera
    var camera = new THREE.PerspectiveCamera(45, 4 / 3, .5, 100);
    camera.position.set(0, 0, -30);
    camera.lookAt(0, 0, 0);
    // GEOMETRY
    var geometry = new THREE.Geometry();
    geometry.vertices.push(
        new THREE.Vector3(0, -10, 0),
        new THREE.Vector3(10, 0, 0),
        new THREE.Vector3(0, 10, 0));
    // The Line
    var line = new THREE.Line(geometry, new THREE.LineBasicMaterial({
        color: 0x0000ff
    }));
    scene.add(line);
    // Render
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(320, 240);
    document.getElementById('demo').appendChild(renderer.domElement);
    renderer.render(scene, camera);
}());

I often place these examples just to have a complete, copy and paste, functioning example, and also to cover some additional things that must be done with respect to the other components that make up a three.js project. Although in this case nothing special needs to be done compared to any other example this time around, there are just the usual pitfalls to look out for, such as making sure the camera is positioned away from, and looking at, what you are working with.

3 - Using 2d lines made in a canvas project with three.js

I have written a full post on using canvas to make a texture in three.js, and when doing so there is drawing 2d lines on a canvas element and then using that to skin the face of a geometry. So because I wrote a post on that in great detail I will not be getting into it here, but I think it is worth mentioning in this post. How it is done in a nutshell is to use the 2d canvas drawing context line methods to draw a line like normal, then pass the canvas to the Texture constructor, or better yet the CanvasTexture constructor that is put in place for this specific purpose. The texture can then be used with a material that is used in a Mesh for the various types of maps, such as the plain color map, alpha map, and so forth.
The Mesh can then use any geometry that will have one or more faces that will make use of the texture.

3.1 - Example using canvas to draw a line for a texture

The basic idea here is to just create a canvas, draw lines to the canvas using the 2d drawing context, and then create a texture with the canvas element. When it comes to using a canvas to create a texture in three.js there is the CanvasTexture constructor, but the regular Texture constructor can also be used by just setting the needsUpdate boolean to true. The resulting texture can then be used with a material such as the basic material, by making the texture the value of something like the map property of the material.

// GEOMETRY
var geometry = new THREE.BoxGeometry(1, 1, 1);
// CANVAS
var canvas = document.createElement('canvas'),
ctx = canvas.getContext('2d');
canvas.width = 8;
canvas.height = 8;
ctx.fillStyle = '#000000';
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.strokeStyle = '#ff00ff';
ctx.strokeRect(0, 0, canvas.width, canvas.height);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
// MATERIAL
var material = new THREE.MeshBasicMaterial({
    map: texture
});
// MESH
var mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

I will not be getting into the canvas 2d drawing api in detail here, but because it is another way of drawing lines in three.js it is sure worth mentioning, to say the least.

4 - Conclusion

So that is it for now when it comes to drawing lines in three.js. I am sure that there might be more to write about on this topic in the future, but I have to get some time to work on some more examples first. There is not just using the Line constructor, but also creating some kind of custom tube line geometry that can then be skinned with any of the materials that are used for solid geometries. That is something that I would like to look into more, sooner or later, when I can get around to it.
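One more line related footnote while I am at it: the computeLineDistances method that the dashed material needs (section 1.3) stores the cumulative distance along the line at each point. The real method works on the geometry of the line itself, but the general idea can be sketched in plain javaScript something like this (my own rough take on the idea, not the actual three.js source):

```javascript
// Cumulative distance along a polyline, one value per point.
// This is the kind of per-point data the dashed line material uses
// to figure out where dashes and gaps should fall along the line.
var lineDistances = function (points) {
    var dist = [0],
    i = 1,
    dx, dy, dz;
    while (i < points.length) {
        dx = points[i].x - points[i - 1].x;
        dy = points[i].y - points[i - 1].y;
        dz = points[i].z - points[i - 1].z;
        dist.push(dist[i - 1] + Math.sqrt(dx * dx + dy * dy + dz * dz));
        i += 1;
    }
    return dist;
};

console.log(lineDistances([
    { x: 0, y: 0, z: 0 },
    { x: 3, y: 4, z: 0 }
])); // [ 0, 5 ]
```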
Cody Solution 1136263
Submitted on 9 Mar 2017 by Mehmet OZC
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.

Test Suite

Test 1 — Pass
x = [1 2 3 4 5]
n = 4
y_correct = [1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5]
assert(isequal(NplicateMe(x,n),y_correct))
Output:
x =
     1     2     3     4     5
n =
     4
y_correct =
  Columns 1 through 16
     1     1     1     1     2     2     2     2     3     3     3     3     4     4     4     4
  Columns 17 through 20
     5     5     5     5

Test 2 — Pass
x = [10 7 8]
n = 2
y_correct = [10 10 7 7 8 8]
assert(isequal(NplicateMe(x,n),y_correct))
Output:
x =
    10     7     8
n =
     2
y_correct =
    10    10     7     7     8     8
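The NplicateMe function under test just repeats each element of the input vector n times, keeping the original order. The locked solution is MATLAB, but the behavior the tests describe can be sketched in JavaScript along these lines (my own guess at an implementation, not the locked solution):

```javascript
// Repeat each element of an array n times, keeping the original order,
// matching the behavior the Cody test cases above check for.
var nplicate = function (arr, n) {
    var out = [];
    arr.forEach(function (el) {
        var i = 0;
        while (i < n) {
            out.push(el);
            i += 1;
        }
    });
    return out;
};

console.log(nplicate([10, 7, 8], 2)); // [ 10, 10, 7, 7, 8, 8 ]
```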
Data Mining Techniques to Grow Business Revenue

You must have heard of the term big data before, but you are probably in the dark about the dynamics of this emerging trend. Well, big data refers to large data sets that are more complex than what is usually known as traditional data. Data mining, on the other hand, refers to the art and science of discovering and exploiting trends in data. It is apparent that data is an important decision tool for businesses today, owing to the amount of information that can be accrued from data analytics. In fact, it has been predicted that many businesses in the future will benefit from sophisticated data mining methods. This article digs into the world of data mining in a bid to explain what data mining entails, coupled with its importance to business today.

Data Mining Techniques

There are many classifications for data mining techniques, with the main ones being association, classification, clustering, prediction, sequential patterns and decision trees.

1. Decision Trees

Decision trees are one of the most widely adopted data mining methods thanks to their simplicity and efficiency in the analysis of data. This technique starts by representing a condition or question as the root of a tree. Each answer to the given question then leads to its own set of conditions and answers. The process repeats until all the branches terminate at a final set of conditions/answers. With this technique, data can be represented in a graphical form.

2. Associative Learning

Have you ever wondered how the Amazon product and service recommendation system works? Well, the trick lies in associative learning. Association rules are basically IF & THEN statements that reveal information in databases. For example, if a customer buys an egg, there is also an 85% chance they will buy milk.
Associative learning is achieved by analyzing data through IF-THEN statements, first finding out the number of times products appear together in a database, and then finding out how reliable those IF-THEN relationships are. It is invaluable in predicting behavioral characteristics of customers and can be integrated into building programs that exhibit Artificial Intelligence.

3. Clustering

This is a data mining procedure that makes the most of information obtained from a cluster of objects that exhibit similar characteristics. To understand the technique even better, one should understand the meaning of clustering. In simple terms, clustering partitions a set of abstract objects into groups of similar objects. To perform clustering, you will first need to partition a set of data into groups, and then assign labels to the groups. Take the example of a library with a wide variety of books available for readers. Of course, the first challenge that arises is making these books available for users interested in a particular niche. To solve this through clustering, one could arrange books with similarities in a given section, then label them in a way that is easy to identify.

4. Sequential Patterns

In this technique, businesses try to identify regular patterns and trends in a given set of transaction data within a stipulated period of time. A good example is analyzing transactions to find out which products or services customers buy in particular seasons of the year.

5. Predictive Analysis

As the name suggests, this is a technique in data mining whereby information from data is used to predict trends and patterns. The unknown event will normally be in the future, but at times it can be in the past. The key part of predictive analysis is the relationship between explanatory variables and predicted variables that is used to find the unknown patterns. The accuracy of this technique relies on the level and quality of data analysis when making assumptions.

6.
Classification

This data mining technique is often confused with clustering. Unlike in clustering, objects in this case are assigned to predefined classes or groups. In data mining, this technique has its foundations in machine learning. For example, a spam filter identifies spam emails and puts them in a different group. It adopts complex mathematical concepts from the realms of statistics, neural networks and linear programming, among others. With such principles, one can develop a system that is able to learn and classify data items into groups.

7. Regression

Data mining can be used to build a model based on several variables; for example, regression can decide the price of a house on the basis of the number of rooms, location, and size. Thus, the target is the house value, the other aspects are predictors, and the output data helps to make a case. Regression predicts the value of the target in the built data set. The historical regression data are divided into two sets: one for building the model and one for testing it. The list is not exhaustive, as there are a bunch of other useful data mining techniques like anomaly detection and many more.

What Can Data Mining Do for You?

Data mining has a wide array of advantages for businesses in the world today. Here, we take a look at some of the most common benefits you could accrue from adopting data mining techniques.

1. Sales Forecasting

By using data mining techniques, you can analyze behavioral characteristics of your customers and predict future sales. This way, you can determine complementary products or services you can offer. You can also use it to estimate the number of customers in the market and predict how many will buy from you.

2. Merchandise Planning

Merchandise planning is a very important aspect, and data mining can be colossal in getting you the right stocking and warehouse options. Merchandise planning includes things like choosing the right product, pricing, stock balancing, etc.
This can be done through database mining strategies – a technique that will surely improve your decision making.

3. Marketing Strategies

Marketing is one very important aspect of business, meaning that a lot has to be invested in getting the right strategies tailored to your brand. Database marketing is vital in the sense that it gives pinpoint statistical approaches to marketing. Through analysis of demographics and psychographics, one can save costs and optimize the efficiency of marketing campaigns.

4. Customer Relationship Management

In a world where competition is dynamic, keeping customers happy is no longer optional, thus CRM is seen as an important factor in business. Retaining customers is the number one target for many businesses, and data mining presents ways to get this done efficiently. By using tools that track employee data, essential social media data and other relevant data, a business can build powerful CRM systems.

How Is Data Mining Implemented?

A successful data mining procedure is usually broken down into three phases: Targeting Study, Proof of Concept and Deployment. The targeting study is the initial stage, whereby the potential user of the mined information gives an overview of the patterns and trends they might be interested in. In the Proof of Concept stage, estimates are made to determine the ROI of the procedure and the technical risks involved. The last stage involves the extraction and cleaning of data.

Conclusion

This is just a drop in the ocean. The future is bound to be better with businesses being granted the power of data mining. Even the most successful companies are adopting this approach, so expect the trend to grow bigger in the near future. In other words, it's time to jump on the wagon and grow your business revenue.
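As a small illustration of the IF-THEN association rules described above, the confidence of a rule like "if a customer buys an egg, they also buy milk" is just the share of egg transactions that also contain milk. A toy sketch of that calculation (the basket data and numbers here are made up purely for illustration):

```javascript
// Confidence of the rule "if a customer buys a, they also buy b":
// among transactions containing a, the fraction that also contain b.
var confidence = function (transactions, a, b) {
    var withA = transactions.filter(function (t) {
        return t.indexOf(a) !== -1;
    });
    var withBoth = withA.filter(function (t) {
        return t.indexOf(b) !== -1;
    });
    return withA.length === 0 ? 0 : withBoth.length / withA.length;
};

// Made-up baskets, one array of item names per transaction.
var sales = [
    ['egg', 'milk', 'bread'],
    ['egg', 'milk'],
    ['egg', 'butter'],
    ['milk', 'bread']
];
console.log(confidence(sales, 'egg', 'milk')); // 2 of the 3 egg baskets -> 0.666...
```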
Perl Programming Question: What happens when you return a reference to a private variable?

Answer: Perl keeps track of your variables, whether dynamic or otherwise, and doesn't free things before you're done using them.
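The same lifetime idea can be illustrated outside of Perl: any language with closures keeps a "private" variable alive for as long as something still references it. For example, here is a JavaScript analogy (not Perl, just the same concept of a returned reference keeping a local variable alive):

```javascript
// The local variable `count` is private to makeCounter, but it is not
// freed when makeCounter returns, because the returned function still
// holds a reference to it -- the same idea as returning a reference
// to a lexical variable in Perl.
var makeCounter = function () {
    var count = 0;
    return function () {
        count += 1;
        return count;
    };
};

var counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
```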
[Community Puzzle] Mathematics for big ears

Created by @nicola, validated by @Jumpmaster, @Nagato_Uzumaki and @davilla. If you have any issues, feel free to ping them.

Hey there! The first two tests are easily understandable, but I don't understand the Klein test. Why would (1,2) and (3,4) generate 4 solutions? You can have () by combining either one with themselves, but where is the 4th possibility coming from?

The four permutations are: (), (1 2), (3 4) and (1 2)(3 4).

Thanks @nicola!
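The four permutations can also be found mechanically: start from the generators (1 2) and (3 4) and keep composing until nothing new appears. A quick JavaScript sketch of that closure process (permutations written as 0-indexed arrays mapping position to value):

```javascript
// Compose two permutations given as arrays: compose(p, q)[i] = p[q[i]].
var compose = function (p, q) {
    return q.map(function (v) {
        return p[v];
    });
};

// Group closure of a set of generators: keep composing known elements
// with the generators until no new permutation shows up.
var closure = function (gens) {
    var seen = {}, queue = gens.slice(), out = [];
    while (queue.length) {
        var p = queue.pop(), key = p.join(',');
        if (!seen[key]) {
            seen[key] = true;
            out.push(p);
            gens.forEach(function (g) {
                queue.push(compose(p, g));
            });
        }
    }
    return out;
};

// (1 2) and (3 4) as 0-indexed permutations of [0, 1, 2, 3].
var group = closure([[1, 0, 2, 3], [0, 1, 3, 2]]);
console.log(group.length); // 4
```

The closure contains the identity (), the two generators, and their product (1 2)(3 4) — four elements, which is exactly the Klein four-group.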
Is there a limit to the length of alt text?

When adding alt text it's important to keep it concise (around 140 characters) yet descriptive. With a 140 character limit, it is unnecessary to start the alt text with "image of", "graph of", "photo of", etc. A screen reader will recognize the file as an image and let the user know for you.

What is an image alt attribute?

Definition: An alt tag, also known as "alt attribute" and "alt description," is an HTML attribute applied to image tags to provide a text alternative for search engines. Applying alt tags to images such as product photos can positively impact an ecommerce store's search engine rankings.

What alt attribute should be assigned to an image?

Every image should have an alt attribute, even if it's alt="" (sometimes called "null" alternative text).

What is the maximum value for the alt attribute text string?

The value for the alt attribute is a text string of up to 1024 characters.

What is the longdesc attribute?

The longdesc attribute is a URI, the target of which contains a long description of the non-text content. Authors can provide a description for an image by including text in a separate resource or within the text of the page containing the image.

How do I alt tag a photo? Image Alt Text Best Practices:
1. Describe the image, and be specific.
2. Add context that relates to the topic of the page.
3. Keep your alt text fewer than 125 characters.
4. Don't start alt text with "picture of…" or "Image of…" Jump right into the image's description.
5. Use your keywords, but sparingly.

Do images need alt text?

Decorative images usually don't need alt text. They may exist on the page for purely aesthetic reasons – in other words, to make the page look pretty. Or they may be repeating information that is already on the page as text. In that case, adding alt text to the image is redundant.

Do background images need alt?
CSS background images should not have alternative text if the image is truly a background image. Decorative (i.e. removing it from the page causes no impact to the page's meaning or function) CSS background images do not need additional markup.

Is the value for the alt attribute a text string of 2000 characters?

The value for the alt attribute is a text string of up to 1024 characters.

Which attribute is used to change the size of an image horizontally?

HTML size attribute: the HTML size attribute is used to specify the height of the horizontal line in terms of pixels.

What is the attribute value for alt text?

Guidelines for the alt text:
- The text should describe the image if the image contains information.
- The text should explain where the link goes if the image is inside an <a> element.
- Use alt="" if the image is only for decoration.

What are the Alt and title attributes of an image?

The alt and title attributes of an image are commonly referred to as alt tag or alt text and title tag, even though they're not technically tags. The alt text describes what's on the image and the function of the image on the page.

What is the maximum length for alt attributes in HTML?

The HTML specification does not define a maximum length for "alt" attributes. Current versions of the leading screen reader programs have no limits on the amount of alternate text they will read.

How long should alt text be for images?

Alt text should be short enough to reasonably fit within the space allocated for the image. Essentially, alt text should be as long as it needs to be in order to effectively describe the content, but should be succinct. For complex images such as charts, graphs, and diagrams that require more lengthy descriptions, there are other options.
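The two most concrete guidelines in this article (keep alt text under 125 characters, and skip redundant "image of" style prefixes) can be turned into a tiny illustrative check. This is just a sketch of those two rules, not a complete accessibility checker:

```javascript
// Flag alt text that breaks the two guidelines discussed above:
// longer than 125 characters, or starting with a redundant prefix
// like "image of" / "photo of" that screen readers already convey.
var altTextIssues = function (alt) {
    var issues = [];
    if (alt.length > 125) {
        issues.push('longer than 125 characters');
    }
    if (/^(image|picture|photo|graph) of/i.test(alt)) {
        issues.push('redundant "image of" style prefix');
    }
    return issues;
};

console.log(altTextIssues('Image of a red bicycle'));          // one issue flagged
console.log(altTextIssues('A red bicycle leaning on a wall')); // []
```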
What is Coomer.party and Why You Should Be Careful

What is Coomer.party? Coomer.party is a website that has lots of adult content. It shows pictures and videos that are not safe for kids. But that's not all! Coomer.party can also try to trick you.

Coomer.party sometimes shows bad ads that can harm your computer. These ads might want to steal your personal info or put a virus on your device. So, it's very important to be careful if you ever come across Coomer.party.

What is Coomer.party? An Introduction

What is Coomer.party? Coomer.party is a website with lots of adult pictures and videos. These are not safe for kids. The content on Coomer.party is very explicit and meant only for adults. But there's more you need to know.

Coomer.party doesn't just show adult content. It can also try to trick people. Sometimes, it shows ads that can be harmful. These ads might try to steal your personal information or put viruses on your computer.

It's important to understand what Coomer.party is so you can stay safe. Always be careful with websites like this. They can pose many risks, not just because of the content, but also because of the tricky ads they show.

Why You Should Avoid Coomer.party

Coomer.party can be very dangerous. It shows ads that might harm your computer. These ads can try to steal your personal information or give your device a virus. So, it's best to stay away from sites like Coomer.party.

When you visit Coomer.party, you risk your privacy and security. The site might try to trick you with fake ads. These ads can lead to scams or harmful websites. This is why it's important to avoid Coomer.party.

Being safe online is very important. By avoiding Coomer.party, you can protect yourself from harmful ads and content. Always be careful and think twice before visiting sites like Coomer.party.

How Coomer.party Can Trick You

Coomer.party often shows tricky ads. These ads can look real but are very dangerous.
They might try to get your personal info or put bad software on your device. It's important to know how these tricks work.

Some ads on Coomer.party might say you won a prize. But these are just tricks to get your information. They might ask for your email or phone number. Once they have it, they can send you more bad stuff.

Other ads might look like updates for your computer. But clicking them can download viruses. These viruses can harm your device. Always be careful and avoid clicking ads on Coomer.party.

What You Need to Know About Coomer.party

Coomer.party is not just an adult website. It also has many risks. The ads on Coomer.party can be very harmful. They can steal your personal info or put viruses on your device. This makes it very dangerous to visit.

When you visit Coomer.party, you risk more than just seeing adult content. You risk your privacy and security. The site can try to trick you with fake ads. These ads can lead to scams or harmful websites.

It's important to know what Coomer.party is and why it's risky. By understanding the dangers, you can protect yourself. Always be careful and think twice before visiting sites like Coomer.party.

Is Coomer.party Safe to Visit?

No, Coomer.party is not safe to visit. The site has a lot of harmful ads. These ads can steal your personal info or give your device a virus. It's best to stay away from Coomer.party to stay safe.

Visiting Coomer.party can put your privacy and security at risk. The site might try to trick you with fake ads. These ads can lead to scams or harmful websites. This makes Coomer.party very dangerous.

Always be careful with websites like Coomer.party. They can cause many problems for your device and personal information. It's better to stay safe and avoid visiting Coomer.party altogether.

What is Coomer.party: Dangers Explained

Coomer.party has many dangers. The main danger is the harmful ads. These ads can steal your personal info or give your device a virus.
This makes Coomer.party very risky to visit.

Another danger of Coomer.party is the scams. The site might show fake ads that try to trick you. These ads can lead to harmful websites or ask for your personal information. It's important to be aware of these dangers.

Knowing the risks of Coomer.party can help you stay safe. Always be careful and avoid clicking on ads from this site. By understanding the dangers, you can protect yourself from harm.

How to Protect Yourself from Coomer.party

Protecting yourself from Coomer.party is very important. The first step is to avoid visiting the site. This can help you stay safe from harmful ads and scams. It's best to stay away from sites like Coomer.party.

If you do visit Coomer.party, be very careful. Don't click on any ads. These ads can be harmful and might try to steal your personal information. Always keep your guard up.

Another way to protect yourself is to use antivirus software. This can help block harmful ads and keep your device safe. By taking these steps, you can protect yourself from Coomer.party.

What is Coomer.party and How It Affects You

Coomer.party can affect you in many bad ways. The site has harmful ads that can steal your personal info. These ads can also put viruses on your device. This makes Coomer.party very dangerous.

When you visit Coomer.party, you risk your privacy and security. The site might show fake ads that try to trick you. These ads can lead to scams or harmful websites. This is why it's important to avoid Coomer.party.

Understanding how Coomer.party affects you can help you stay safe. Always be careful and avoid visiting sites like Coomer.party. This can help protect your personal information and your device.

Coomer.party: More Than Just Adult Content

Coomer.party is more than just an adult website. It has many risks that go beyond adult content. The site shows harmful ads that can steal your personal info or put viruses on your device. These ads can be very tricky.
They might look real but are very dangerous. They can lead to scams or harmful websites. This makes Coomer.party very risky to visit.

By understanding that Coomer.party is more than just adult content, you can stay safe. Always be careful and avoid clicking on ads from this site. This can help protect your personal information and your device.

The Hidden Risks of Visiting Coomer.party

There are many hidden risks when visiting Coomer.party. The main risk is the harmful ads. These ads can steal your personal info or give your device a virus. This makes Coomer.party very dangerous.

Another hidden risk is the scams. The site might show fake ads that try to trick you. These ads can lead to harmful websites or ask for your personal information. It's important to be aware of these risks.

By knowing the hidden risks of Coomer.party, you can protect yourself. Always be careful and avoid visiting sites like Coomer.party. This can help keep your personal information and device safe.

What is Coomer.party and Why It's Harmful

Coomer.party is harmful for many reasons. The site has harmful ads that can steal your personal info. These ads can also put viruses on your device. This makes Coomer.party very dangerous.

Another reason Coomer.party is harmful is because of the scams. The site might show fake ads that try to trick you. These ads can lead to harmful websites or ask for your personal information.

Knowing why Coomer.party is harmful can help you stay safe. Always be careful and avoid visiting sites like Coomer.party. This can help protect your personal information and your device.

Tips to Stay Safe from Coomer.party

Staying safe from Coomer.party is very important. The first tip is to avoid visiting the site. This can help you stay safe from harmful ads and scams. It's best to stay away from sites like Coomer.party.

If you do visit Coomer.party, be very careful. Don't click on any ads. These ads can be harmful and might try to steal your personal information.
Always keep your guard up. Using antivirus software can also help you stay safe. This can block harmful ads and keep your device secure. By following these tips, you can protect yourself from Coomer.party.

Understanding Coomer.party's Content

Coomer.party mainly hosts adult content, which includes explicit images and videos. This content is intended for adults only and is not suitable for children or teenagers. It's important to be aware that visiting Coomer.party means you may encounter material that is sexually explicit and not appropriate for all audiences.

The Legality of Coomer.party

Coomer.party operates in a legal gray area. While adult content itself is not illegal in many jurisdictions, the way it is presented and the potential for deceptive practices can be. Websites like Coomer.party may face legal challenges due to the nature of their content and the risks associated with it, such as misleading advertisements and potential security threats.

Coomer.party's Impact on Online Safety

Coomer.party poses risks to online safety due to its deceptive practices and potential for exposing users to harmful content and ads. Visiting such websites can compromise your device's security and expose you to scams or malware. It's crucial to prioritize online safety and avoid visiting sites like Coomer.party to protect yourself and your personal information.

Why Coomer.party Needs Awareness

Awareness about Coomer.party is important because it helps users make informed decisions about their online activities. By understanding the risks associated with visiting such websites, individuals can take proactive steps to safeguard their privacy and security. Education and awareness can empower users to recognize and avoid potentially harmful online content and ads.

Reporting Coomer.party and Similar Sites

If you come across Coomer.party or similar websites engaging in deceptive practices or hosting harmful content, it's important to report them.
Many internet service providers and authorities have mechanisms for reporting websites that violate policies or engage in illegal activities. Reporting helps protect other users and contributes to a safer online environment.

Conclusion

In conclusion, Coomer.party is a website with adult content that can be risky to visit. It shows explicit pictures and videos meant only for adults, which can be harmful for younger audiences. The site also displays ads that might try to trick you into giving away personal information or downloading harmful software. It's important to be careful online and avoid visiting sites like Coomer.party to stay safe.

Always remember, your online safety is very important. By staying away from risky websites and being cautious about what you click on, you can protect yourself and your devices. If you ever come across a site like Coomer.party that seems unsafe or shows content that makes you uncomfortable, it's best to close the page and talk to a trusted adult about what you saw. Stay safe and enjoy the internet responsibly!
1 files changed, 238 insertions, 0 deletions diff --git a/engine/plain_helpers.cpp b/engine/plain_helpers.cpp new file mode 100644 index 000000000000..52b1bc74fe10 --- /dev/null +++ b/engine/plain_helpers.cpp @@ -0,0 +1,238 @@ +// Copyright 2011 The Kyua Authors. +// All rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are +// met: +// +// * Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// * Redistributions in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// * Neither the name of Google Inc. nor the names of its contributors +// may be used to endorse or promote products derived from this software +// without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +extern "C" { +#include <sys/stat.h> + +#include <unistd.h> + +extern char** environ; +} + +#include <cstdlib> +#include <cstring> +#include <fstream> +#include <iostream> +#include <sstream> + +#include "utils/env.hpp" +#include "utils/format/containers.ipp" +#include "utils/format/macros.hpp" +#include "utils/fs/operations.hpp" +#include "utils/fs/path.hpp" +#include "utils/optional.ipp" +#include "utils/test_utils.ipp" + +namespace fs = utils::fs; + +using utils::optional; + + +namespace { + + +/// Gets the name of the test case to run. +/// +/// We use the value of the TEST_CASE environment variable if present, or +/// else the basename of the test program. +/// +/// \param arg0 Value of argv[0] as passed to main(). +/// +/// \return A test case name. The name may not be valid. +static std::string +guess_test_case_name(const char* arg0) +{ + const optional< std::string > test_case_env = utils::getenv("TEST_CASE"); + if (test_case_env) { + return test_case_env.get(); + } else { + return fs::path(arg0).leaf_name(); + } +} + + +/// Logs an error message and exits the test with an error code. +/// +/// \param str The error message to log. +static void +fail(const std::string& str) +{ + std::cerr << str << '\n'; + std::exit(EXIT_FAILURE); +} + + +/// A test case that validates the TEST_ENV_* variables. +static void +test_check_configuration_variables(void) +{ + std::set< std::string > vars; + char** iter; + for (iter = environ; *iter != NULL; ++iter) { + if (std::strstr(*iter, "TEST_ENV_") == *iter) { + vars.insert(*iter); + } + } + + std::set< std::string > exp_vars; + exp_vars.insert("TEST_ENV_first=some value"); + exp_vars.insert("TEST_ENV_second=some other value"); + if (vars != exp_vars) { + fail(F("Expected: %s\nFound: %s\n") % exp_vars % vars); + } +} + + +/// A test case that crashes. +static void +test_crash(void) +{ + utils::abort_without_coredump(); +} + + +/// A test case that exits with a non-zero exit code, and not 1. 
+static void +test_fail(void) +{ + std::exit(8); +} + + +/// A test case that passes. +static void +test_pass(void) +{ +} + + +/// A test case that spawns a subchild that gets stuck. +/// +/// This test case is used by the caller to validate that the whole process tree +/// is terminated when the test case is killed. +static void +test_spawn_blocking_child(void) +{ + pid_t pid = ::fork(); + if (pid == -1) + fail("Cannot fork subprocess"); + else if (pid == 0) { + for (;;) + ::pause(); + } else { + const fs::path name = fs::path(utils::getenv("CONTROL_DIR").get()) / + "pid"; + std::ofstream pidfile(name.c_str()); + if (!pidfile) + fail("Failed to create the pidfile"); + pidfile << pid; + pidfile.close(); + } +} + + +/// A test case that times out. +/// +/// Note that the timeout is defined in the Kyuafile, as the plain interface has +/// no means for test programs to specify this by themselves. +static void +test_timeout(void) +{ + ::sleep(10); + const fs::path control_dir = fs::path(utils::getenv("CONTROL_DIR").get()); + std::ofstream file((control_dir / "cookie").c_str()); + if (!file) + fail("Failed to create the control cookie"); + file.close(); +} + + +/// A test case that performs basic checks on the runtime environment. +/// +/// If the runtime environment does not look clean (according to the rules in +/// the Kyua runtime properties), the test fails. +static void +test_validate_isolation(void) +{ + if (utils::getenv("HOME").get() == "fake-value") + fail("HOME not reset"); + if (utils::getenv("LANG")) + fail("LANG not unset"); +} + + +} // anonymous namespace + + +/// Entry point to the test program. +/// +/// The caller can select which test case to run by defining the TEST_CASE +/// environment variable. This is not "standard", in the sense this is not a +/// generic property of the plain test case interface. +/// +/// \todo It may be worth to split this binary into separate, smaller binaries, +/// one for every "test case". 
We use this program as a dispatcher for +/// different "main"s, the only reason being to keep the amount of helper test +/// programs to a minimum. However, putting this each function in its own +/// binary could simplify many other things. +/// +/// \param argc The number of CLI arguments. +/// \param argv The CLI arguments themselves. These are not used because +/// Kyua will not pass any arguments to the plain test program. +int +main(int argc, char** argv) +{ + if (argc != 1) { + std::cerr << "No arguments allowed; select the test case with the " + "TEST_CASE variable"; + return EXIT_FAILURE; + } + + const std::string& test_case = guess_test_case_name(argv[0]); + + if (test_case == "check_configuration_variables") + test_check_configuration_variables(); + else if (test_case == "crash") + test_crash(); + else if (test_case == "fail") + test_fail(); + else if (test_case == "pass") + test_pass(); + else if (test_case == "spawn_blocking_child") + test_spawn_blocking_child(); + else if (test_case == "timeout") + test_timeout(); + else if (test_case == "validate_isolation") + test_validate_isolation(); + else { + std::cerr << "Unknown test case"; + return EXIT_FAILURE; + } + + return EXIT_SUCCESS; +}
#
# Copyright (c) 2015, 2019, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation. Oracle designates this
# particular file as subject to the "Classpath" exception as provided
# by Oracle in the LICENSE file that accompanied this code.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#

ifndef _TEST_FILES_COMPILATION_GMK
_TEST_FILES_COMPILATION_GMK := 1

ifeq (,$(_MAKEBASE_GMK))
  $(error You must include MakeBase.gmk prior to including TestFilesCompilation.gmk)
endif

include NativeCompilation.gmk

# Setup make rules for creating a set of native test files (libraries or
# executables). This will locate native files matching a certain pattern,
# and compile these into libraries or executables.
#
# Parameter 1 is the name of the rule. This name is used as variable prefix,
# and the targets generated are listed in a variable by that name.
#
# Remaining parameters are named arguments. These include:
#   TYPE        Must be either PROGRAM or LIBRARY.
#   SOURCE_DIRS A list of source directories to search
#   OUTPUT_DIR  Where to put the resulting files
#   EXCLUDE     A list of filenames to exclude from compilation
SetupTestFilesCompilation = $(NamedParamsMacroTemplate)
define SetupTestFilesCompilationBody

  # Check for duplicate base file names. That would have failed later anyhow, but
  # this gives a better error message.
  $1_DUPLICATED_NAMES := $$(call dups, $$(notdir $$($1_FILE_LIST)))
  ifneq ($$($1_DUPLICATED_NAMES), )
    $$(error There are duplicate test file names for $1: $$($1_DUPLICATED_NAMES))
  endif

  # The list to depend on starts out empty
  $1 :=
  ifeq ($$($1_TYPE), LIBRARY)
    $1_PREFIX = lib
    $1_OUTPUT_SUBDIR := lib
    $1_CFLAGS := $(CFLAGS_TESTLIB)
    $1_LDFLAGS := $(LDFLAGS_TESTLIB) $(call SET_SHARED_LIBRARY_ORIGIN)
    $1_COMPILATION_TYPE := LIBRARY
  else ifeq ($$($1_TYPE), PROGRAM)
    $1_PREFIX = exe
    $1_OUTPUT_SUBDIR := bin
    $1_CFLAGS := $(CFLAGS_TESTEXE)
    $1_LDFLAGS := $(LDFLAGS_TESTEXE)
    $1_COMPILATION_TYPE := EXECUTABLE
  else
    $$(error Unknown type: $$($1_TYPE))
  endif

  # Locate all files with the matching prefix
  $1_FILE_LIST := \
      $$(call FindFiles, $$($1_SOURCE_DIRS), $$($1_PREFIX)*.c)

  $1_EXCLUDE_PATTERN := $$(addprefix %/, $$($1_EXCLUDE))
  $1_FILTERED_FILE_LIST := $$(filter-out $$($1_EXCLUDE_PATTERN), $$($1_FILE_LIST))

  # Setup a compilation for each and every one of them
  $$(foreach file, $$($1_FILTERED_FILE_LIST),\
    $$(eval name := $$(strip $$(basename $$(notdir $$(file))))) \
    $$(eval unprefixed_name := $$(patsubst $$($1_PREFIX)%, %, $$(name))) \
    $$(eval $$(call SetupNativeCompilation, BUILD_TEST_$$(name), \
        NAME := $$(unprefixed_name), \
        TYPE := $$($1_COMPILATION_TYPE), \
        SRC := $$(patsubst %/,%,$$(dir $$(file))), \
        INCLUDE_FILES := $$(notdir $$(file)), \
        OBJECT_DIR := $$($1_OUTPUT_DIR)/support/$$(name), \
        OUTPUT_DIR := $$($1_OUTPUT_DIR)/$$($1_OUTPUT_SUBDIR), \
        CFLAGS := $$($1_CFLAGS) $$($1_CFLAGS_$$(name)), \
        LDFLAGS := $$($1_LDFLAGS) $$($1_LDFLAGS_$$(name)), \
        LIBS := $$($1_LIBS_$$(name)), \
        OPTIMIZATION := $$(if $$($1_OPTIMIZATION_$$(name)),$$($1_OPTIMIZATION_$$(name)),LOW), \
        COPY_DEBUG_SYMBOLS := false, \
        STRIP_SYMBOLS := false, \
    )) \
    $$(eval $1 += $$(BUILD_TEST_$$(name)) ) \
  )

endef

endif # _TEST_FILES_COMPILATION_GMK
How Many Days Until January 8th?

Time Remaining Until January 8, 2025: 155 days
• 5 months 2 days
• 22 weeks 1 day
• 3,720 hours

There are one hundred and fifty-five days remaining until January 8, 2025. This is calculated from today's date, which is August 6, 2024. The following chart shows the days remaining until January 8th from today and various other days.

On Date          | Countdown to January 8th
August 2, 2024   | 159 days
August 3, 2024   | 158 days
August 4, 2024   | 157 days
August 5, 2024   | 156 days
August 6, 2024   | 155 days
August 7, 2024   | 154 days
August 8, 2024   | 153 days
August 9, 2024   | 152 days
August 10, 2024  | 151 days

How Many Work Days Are Left Until January 8th?

Weekdays Until January 8th: 111 days

You can use our business days calculator to find how many working days are between any two dates. It's important to note that this does not consider holidays that may fall on a weekday, such as New Year's. So, you'll need to adjust this to account for holidays that you do not work.

How To Calculate the Days Until January 8th

You can count down the days until 1/8 in a few ways. The easiest is to use a calculator, such as our days until date calculator. You can also calculate the days manually or use a spreadsheet formula.

Method One: Calculate the Days Manually

If the current date falls in January, then you can simply subtract the current day of the month from 8. The resulting value is the number of days remaining.

days until 1/8 = 8 – current day in January

If the current date is not in January, then you can subtract the current day of the month from the number of days in the current month to find the remaining days in that month. Then, to that value, you can add the number of days in each month before January, and then add 8 for the number of days in January.
days until 1/8 = days left in the current month + days in next month + … + 8

Method Two: How To Calculate the Days Using Google Sheets

You can also calculate the number of days remaining until 1/8 using spreadsheet software such as Google Sheets or Microsoft Excel. You can do this using a few different formulas.

From the Current Date

If you want to find the number of days remaining from the current day, you can use the following function in a cell to calculate that number:

=DATE(2025, 1, 8) - TODAY()

Note that this formula calculates and displays the number of days once you type the formula into a cell and hit Enter.

From Any Date

If you want to find the number of days remaining from a specific date listed in another cell, in this case, from a date in cell A1, then you can use the following function to display the number of days remaining from the date displayed in cell A1:

=DATE(2025, 1, 8) - A1

Countdown to More Dates
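For readers who prefer code over spreadsheets, both calculations (total days and weekdays) can be mirrored in a short Python sketch using the standard library; the function names here are illustrative, not from any particular tool.

```python
from datetime import date, timedelta

def days_until(target, today):
    # Same arithmetic as the spreadsheet formula =DATE(2025, 1, 8) - TODAY()
    return (target - today).days

def weekdays_until(target, today):
    # Count Monday-Friday from the day after `today` through `target`.
    # As noted above, this does not adjust for holidays.
    count = 0
    d = today + timedelta(days=1)
    while d <= target:
        if d.weekday() < 5:  # 0..4 = Monday..Friday
            count += 1
        d += timedelta(days=1)
    return count

print(days_until(date(2025, 1, 8), date(2024, 8, 6)))      # 155
print(weekdays_until(date(2025, 1, 8), date(2024, 8, 6)))  # 111
```

To account for holidays you do not work, subtract the number of holidays that fall on a weekday from the `weekdays_until` result.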
Showing error 996
User: Jiri Slaby
Error type: Leaving function in locked state
Error type description: Some lock is not unlocked on all paths of a function, so it is leaked
File location: fs/xfs/xfs_mru_cache.c
Line in file: 563
Project: Linux Kernel
Project version: 2.6.28
Tools: Stanse (1.2)
Undetermined 1
Entered: 2012-03-02 21:35:18 UTC

Source: 1/* 2 * Copyright (c) 2006-2007 Silicon Graphics, Inc. 3 * All Rights Reserved. 4 * 5 * This program is free software; you can redistribute it and/or 6 * modify it under the terms of the GNU General Public License as 7 * published by the Free Software Foundation. 8 * 9 * This program is distributed in the hope that it would be useful, 10 * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 * GNU General Public License for more details. 13 * 14 * You should have received a copy of the GNU General Public License 15 * along with this program; if not, write the Free Software Foundation, 16 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 17 */ 18#include "xfs.h" 19#include "xfs_mru_cache.h" 20 21/* 22 * The MRU Cache data structure consists of a data store, an array of lists and 23 * a lock to protect its internal state. At initialisation time, the client 24 * supplies an element lifetime in milliseconds and a group count, as well as a 25 * function pointer to call when deleting elements. A data structure for 26 * queueing up work in the form of timed callbacks is also included. 27 * 28 * The group count controls how many lists are created, and thereby how finely 29 * the elements are grouped in time. When reaping occurs, all the elements in 30 * all the lists whose time has expired are deleted. 31 * 32 * To give an example of how this works in practice, consider a client that 33 * initialises an MRU Cache with a lifetime of ten seconds and a group count of 34 * five. 
Five internal lists will be created, each representing a two second 35 * period in time. When the first element is added, time zero for the data 36 * structure is initialised to the current time. 37 * 38 * All the elements added in the first two seconds are appended to the first 39 * list. Elements added in the third second go into the second list, and so on. 40 * If an element is accessed at any point, it is removed from its list and 41 * inserted at the head of the current most-recently-used list. 42 * 43 * The reaper function will have nothing to do until at least twelve seconds 44 * have elapsed since the first element was added. The reason for this is that 45 * if it were called at t=11s, there could be elements in the first list that 46 * have only been inactive for nine seconds, so it still does nothing. If it is 47 * called anywhere between t=12 and t=14 seconds, it will delete all the 48 * elements that remain in the first list. It's therefore possible for elements 49 * to remain in the data store even after they've been inactive for up to 50 * (t + t/g) seconds, where t is the inactive element lifetime and g is the 51 * number of groups. 52 * 53 * The above example assumes that the reaper function gets called at least once 54 * every (t/g) seconds. If it is called less frequently, unused elements will 55 * accumulate in the reap list until the reaper function is eventually called. 56 * The current implementation uses work queue callbacks to carefully time the 57 * reaper function calls, so this should happen rarely, if at all. 58 * 59 * From a design perspective, the primary reason for the choice of a list array 60 * representing discrete time intervals is that it's only practical to reap 61 * expired elements in groups of some appreciable size. This automatically 62 * introduces a granularity to element lifetimes, so there's no point storing an 63 * individual timeout with each element that specifies a more precise reap time. 
64 * The bonus is a saving of sizeof(long) bytes of memory per element stored. 65 * 66 * The elements could have been stored in just one list, but an array of 67 * counters or pointers would need to be maintained to allow them to be divided 68 * up into discrete time groups. More critically, the process of touching or 69 * removing an element would involve walking large portions of the entire list, 70 * which would have a detrimental effect on performance. The additional memory 71 * requirement for the array of list heads is minimal. 72 * 73 * When an element is touched or deleted, it needs to be removed from its 74 * current list. Doubly linked lists are used to make the list maintenance 75 * portion of these operations O(1). Since reaper timing can be imprecise, 76 * inserts and lookups can occur when there are no free lists available. When 77 * this happens, all the elements on the LRU list need to be migrated to the end 78 * of the reap list. To keep the list maintenance portion of these operations 79 * O(1) also, list tails need to be accessible without walking the entire list. 80 * This is the reason why doubly linked list heads are used. 81 */ 82 83/* 84 * An MRU Cache is a dynamic data structure that stores its elements in a way 85 * that allows efficient lookups, but also groups them into discrete time 86 * intervals based on insertion time. This allows elements to be efficiently 87 * and automatically reaped after a fixed period of inactivity. 88 * 89 * When a client data pointer is stored in the MRU Cache it needs to be added to 90 * both the data store and to one of the lists. It must also be possible to 91 * access each of these entries via the other, i.e. to: 92 * 93 * a) Walk a list, removing the corresponding data store entry for each item. 94 * b) Look up a data store entry, then access its list entry directly. 95 * 96 * To achieve both of these goals, each entry must contain both a list entry and 97 * a key, in addition to the user's data pointer. 
Note that it's not a good 98 * idea to have the client embed one of these structures at the top of their own 99 * data structure, because inserting the same item more than once would most 100 * likely result in a loop in one of the lists. That's a sure-fire recipe for 101 * an infinite loop in the code. 102 */ 103typedef struct xfs_mru_cache_elem 104{ 105 struct list_head list_node; 106 unsigned long key; 107 void *value; 108} xfs_mru_cache_elem_t; 109 110static kmem_zone_t *xfs_mru_elem_zone; 111static struct workqueue_struct *xfs_mru_reap_wq; 112 113/* 114 * When inserting, destroying or reaping, it's first necessary to update the 115 * lists relative to a particular time. In the case of destroying, that time 116 * will be well in the future to ensure that all items are moved to the reap 117 * list. In all other cases though, the time will be the current time. 118 * 119 * This function enters a loop, moving the contents of the LRU list to the reap 120 * list again and again until either a) the lists are all empty, or b) time zero 121 * has been advanced sufficiently to be within the immediate element lifetime. 122 * 123 * Case a) above is detected by counting how many groups are migrated and 124 * stopping when they've all been moved. Case b) is detected by monitoring the 125 * time_zero field, which is updated as each group is migrated. 126 * 127 * The return value is the earliest time that more migration could be needed, or 128 * zero if there's no need to schedule more work because the lists are empty. 129 */ 130STATIC unsigned long 131_xfs_mru_cache_migrate( 132 xfs_mru_cache_t *mru, 133 unsigned long now) 134{ 135 unsigned int grp; 136 unsigned int migrated = 0; 137 struct list_head *lru_list; 138 139 /* Nothing to do if the data store is empty. */ 140 if (!mru->time_zero) 141 return 0; 142 143 /* While time zero is older than the time spanned by all the lists. 
*/ 144 while (mru->time_zero <= now - mru->grp_count * mru->grp_time) { 145 146 /* 147 * If the LRU list isn't empty, migrate its elements to the tail 148 * of the reap list. 149 */ 150 lru_list = mru->lists + mru->lru_grp; 151 if (!list_empty(lru_list)) 152 list_splice_init(lru_list, mru->reap_list.prev); 153 154 /* 155 * Advance the LRU group number, freeing the old LRU list to 156 * become the new MRU list; advance time zero accordingly. 157 */ 158 mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count; 159 mru->time_zero += mru->grp_time; 160 161 /* 162 * If reaping is so far behind that all the elements on all the 163 * lists have been migrated to the reap list, it's now empty. 164 */ 165 if (++migrated == mru->grp_count) { 166 mru->lru_grp = 0; 167 mru->time_zero = 0; 168 return 0; 169 } 170 } 171 172 /* Find the first non-empty list from the LRU end. */ 173 for (grp = 0; grp < mru->grp_count; grp++) { 174 175 /* Check the grp'th list from the LRU end. */ 176 lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count); 177 if (!list_empty(lru_list)) 178 return mru->time_zero + 179 (mru->grp_count + grp) * mru->grp_time; 180 } 181 182 /* All the lists must be empty. */ 183 mru->lru_grp = 0; 184 mru->time_zero = 0; 185 return 0; 186} 187 188/* 189 * When inserting or doing a lookup, an element needs to be inserted into the 190 * MRU list. The lists must be migrated first to ensure that they're 191 * up-to-date, otherwise the new element could be given a shorter lifetime in 192 * the cache than it should. 193 */ 194STATIC void 195_xfs_mru_cache_list_insert( 196 xfs_mru_cache_t *mru, 197 xfs_mru_cache_elem_t *elem) 198{ 199 unsigned int grp = 0; 200 unsigned long now = jiffies; 201 202 /* 203 * If the data store is empty, initialise time zero, leave grp set to 204 * zero and start the work queue timer if necessary. Otherwise, set grp 205 * to the number of group times that have elapsed since time zero. 
206 */ 207 if (!_xfs_mru_cache_migrate(mru, now)) { 208 mru->time_zero = now; 209 if (!mru->queued) { 210 mru->queued = 1; 211 queue_delayed_work(xfs_mru_reap_wq, &mru->work, 212 mru->grp_count * mru->grp_time); 213 } 214 } else { 215 grp = (now - mru->time_zero) / mru->grp_time; 216 grp = (mru->lru_grp + grp) % mru->grp_count; 217 } 218 219 /* Insert the element at the tail of the corresponding list. */ 220 list_add_tail(&elem->list_node, mru->lists + grp); 221} 222 223/* 224 * When destroying or reaping, all the elements that were migrated to the reap 225 * list need to be deleted. For each element this involves removing it from the 226 * data store, removing it from the reap list, calling the client's free 227 * function and deleting the element from the element zone. 228 * 229 * We get called holding the mru->lock, which we drop and then reacquire. 230 * Sparse need special help with this to tell it we know what we are doing. 231 */ 232STATIC void 233_xfs_mru_cache_clear_reap_list( 234 xfs_mru_cache_t *mru) __releases(mru->lock) __acquires(mru->lock) 235 236{ 237 xfs_mru_cache_elem_t *elem, *next; 238 struct list_head tmp; 239 240 INIT_LIST_HEAD(&tmp); 241 list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { 242 243 /* Remove the element from the data store. */ 244 radix_tree_delete(&mru->store, elem->key); 245 246 /* 247 * remove to temp list so it can be freed without 248 * needing to hold the lock 249 */ 250 list_move(&elem->list_node, &tmp); 251 } 252 spin_unlock(&mru->lock); 253 254 list_for_each_entry_safe(elem, next, &tmp, list_node) { 255 256 /* Remove the element from the reap list. */ 257 list_del_init(&elem->list_node); 258 259 /* Call the client's free function with the key and value pointer. */ 260 mru->free_func(elem->key, elem->value); 261 262 /* Free the element structure. 
*/ 263 kmem_zone_free(xfs_mru_elem_zone, elem); 264 } 265 266 spin_lock(&mru->lock); 267} 268 269/* 270 * We fire the reap timer every group expiry interval so 271 * we always have a reaper ready to run. This makes shutdown 272 * and flushing of the reaper easy to do. Hence we need to 273 * keep when the next reap must occur so we can determine 274 * at each interval whether there is anything we need to do. 275 */ 276STATIC void 277_xfs_mru_cache_reap( 278 struct work_struct *work) 279{ 280 xfs_mru_cache_t *mru = container_of(work, xfs_mru_cache_t, work.work); 281 unsigned long now, next; 282 283 ASSERT(mru && mru->lists); 284 if (!mru || !mru->lists) 285 return; 286 287 spin_lock(&mru->lock); 288 next = _xfs_mru_cache_migrate(mru, jiffies); 289 _xfs_mru_cache_clear_reap_list(mru); 290 291 mru->queued = next; 292 if ((mru->queued > 0)) { 293 now = jiffies; 294 if (next <= now) 295 next = 0; 296 else 297 next -= now; 298 queue_delayed_work(xfs_mru_reap_wq, &mru->work, next); 299 } 300 301 spin_unlock(&mru->lock); 302} 303 304int 305xfs_mru_cache_init(void) 306{ 307 xfs_mru_elem_zone = kmem_zone_init(sizeof(xfs_mru_cache_elem_t), 308 "xfs_mru_cache_elem"); 309 if (!xfs_mru_elem_zone) 310 goto out; 311 312 xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache"); 313 if (!xfs_mru_reap_wq) 314 goto out_destroy_mru_elem_zone; 315 316 return 0; 317 318 out_destroy_mru_elem_zone: 319 kmem_zone_destroy(xfs_mru_elem_zone); 320 out: 321 return -ENOMEM; 322} 323 324void 325xfs_mru_cache_uninit(void) 326{ 327 destroy_workqueue(xfs_mru_reap_wq); 328 kmem_zone_destroy(xfs_mru_elem_zone); 329} 330 331/* 332 * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() 333 * with the address of the pointer, a lifetime value in milliseconds, a group 334 * count and a free function to use when deleting elements. This function 335 * returns 0 if the initialisation was successful. 
336 */ 337int 338xfs_mru_cache_create( 339 xfs_mru_cache_t **mrup, 340 unsigned int lifetime_ms, 341 unsigned int grp_count, 342 xfs_mru_cache_free_func_t free_func) 343{ 344 xfs_mru_cache_t *mru = NULL; 345 int err = 0, grp; 346 unsigned int grp_time; 347 348 if (mrup) 349 *mrup = NULL; 350 351 if (!mrup || !grp_count || !lifetime_ms || !free_func) 352 return EINVAL; 353 354 if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) 355 return EINVAL; 356 357 if (!(mru = kmem_zalloc(sizeof(*mru), KM_SLEEP))) 358 return ENOMEM; 359 360 /* An extra list is needed to avoid reaping up to a grp_time early. */ 361 mru->grp_count = grp_count + 1; 362 mru->lists = kmem_zalloc(mru->grp_count * sizeof(*mru->lists), KM_SLEEP); 363 364 if (!mru->lists) { 365 err = ENOMEM; 366 goto exit; 367 } 368 369 for (grp = 0; grp < mru->grp_count; grp++) 370 INIT_LIST_HEAD(mru->lists + grp); 371 372 /* 373 * We use GFP_KERNEL radix tree preload and do inserts under a 374 * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. 375 */ 376 INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); 377 INIT_LIST_HEAD(&mru->reap_list); 378 spin_lock_init(&mru->lock); 379 INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); 380 381 mru->grp_time = grp_time; 382 mru->free_func = free_func; 383 384 *mrup = mru; 385 386exit: 387 if (err && mru && mru->lists) 388 kmem_free(mru->lists); 389 if (err && mru) 390 kmem_free(mru); 391 392 return err; 393} 394 395/* 396 * Call xfs_mru_cache_flush() to flush out all cached entries, calling their 397 * free functions as they're deleted. When this function returns, the caller is 398 * guaranteed that all the free functions for all the elements have finished 399 * executing and the reaper is not running. 
400 */ 401void 402xfs_mru_cache_flush( 403 xfs_mru_cache_t *mru) 404{ 405 if (!mru || !mru->lists) 406 return; 407 408 spin_lock(&mru->lock); 409 if (mru->queued) { 410 spin_unlock(&mru->lock); 411 cancel_rearming_delayed_workqueue(xfs_mru_reap_wq, &mru->work); 412 spin_lock(&mru->lock); 413 } 414 415 _xfs_mru_cache_migrate(mru, jiffies + mru->grp_count * mru->grp_time); 416 _xfs_mru_cache_clear_reap_list(mru); 417 418 spin_unlock(&mru->lock); 419} 420 421void 422xfs_mru_cache_destroy( 423 xfs_mru_cache_t *mru) 424{ 425 if (!mru || !mru->lists) 426 return; 427 428 xfs_mru_cache_flush(mru); 429 430 kmem_free(mru->lists); 431 kmem_free(mru); 432} 433 434/* 435 * To insert an element, call xfs_mru_cache_insert() with the data store, the 436 * element's key and the client data pointer. This function returns 0 on 437 * success or ENOMEM if memory for the data element couldn't be allocated. 438 */ 439int 440xfs_mru_cache_insert( 441 xfs_mru_cache_t *mru, 442 unsigned long key, 443 void *value) 444{ 445 xfs_mru_cache_elem_t *elem; 446 447 ASSERT(mru && mru->lists); 448 if (!mru || !mru->lists) 449 return EINVAL; 450 451 elem = kmem_zone_zalloc(xfs_mru_elem_zone, KM_SLEEP); 452 if (!elem) 453 return ENOMEM; 454 455 if (radix_tree_preload(GFP_KERNEL)) { 456 kmem_zone_free(xfs_mru_elem_zone, elem); 457 return ENOMEM; 458 } 459 460 INIT_LIST_HEAD(&elem->list_node); 461 elem->key = key; 462 elem->value = value; 463 464 spin_lock(&mru->lock); 465 466 radix_tree_insert(&mru->store, key, elem); 467 radix_tree_preload_end(); 468 _xfs_mru_cache_list_insert(mru, elem); 469 470 spin_unlock(&mru->lock); 471 472 return 0; 473} 474 475/* 476 * To remove an element without calling the free function, call 477 * xfs_mru_cache_remove() with the data store and the element's key. On success 478 * the client data pointer for the removed element is returned, otherwise this 479 * function will return a NULL pointer. 
480 */ 481void * 482xfs_mru_cache_remove( 483 xfs_mru_cache_t *mru, 484 unsigned long key) 485{ 486 xfs_mru_cache_elem_t *elem; 487 void *value = NULL; 488 489 ASSERT(mru && mru->lists); 490 if (!mru || !mru->lists) 491 return NULL; 492 493 spin_lock(&mru->lock); 494 elem = radix_tree_delete(&mru->store, key); 495 if (elem) { 496 value = elem->value; 497 list_del(&elem->list_node); 498 } 499 500 spin_unlock(&mru->lock); 501 502 if (elem) 503 kmem_zone_free(xfs_mru_elem_zone, elem); 504 505 return value; 506} 507 508/* 509 * To remove and element and call the free function, call xfs_mru_cache_delete() 510 * with the data store and the element's key. 511 */ 512void 513xfs_mru_cache_delete( 514 xfs_mru_cache_t *mru, 515 unsigned long key) 516{ 517 void *value = xfs_mru_cache_remove(mru, key); 518 519 if (value) 520 mru->free_func(key, value); 521} 522 523/* 524 * To look up an element using its key, call xfs_mru_cache_lookup() with the 525 * data store and the element's key. If found, the element will be moved to the 526 * head of the MRU list to indicate that it's been touched. 527 * 528 * The internal data structures are protected by a spinlock that is STILL HELD 529 * when this function returns. Call xfs_mru_cache_done() to release it. Note 530 * that it is not safe to call any function that might sleep in the interim. 531 * 532 * The implementation could have used reference counting to avoid this 533 * restriction, but since most clients simply want to get, set or test a member 534 * of the returned data structure, the extra per-element memory isn't warranted. 535 * 536 * If the element isn't found, this function returns NULL and the spinlock is 537 * released. xfs_mru_cache_done() should NOT be called when this occurs. 538 * 539 * Because sparse isn't smart enough to know about conditional lock return 540 * status, we need to help it get it right by annotating the path that does 541 * not release the lock. 
 */
void *
xfs_mru_cache_lookup(
	xfs_mru_cache_t	*mru,
	unsigned long	key)
{
	xfs_mru_cache_elem_t	*elem;

	ASSERT(mru && mru->lists);
	if (!mru || !mru->lists)
		return NULL;

	spin_lock(&mru->lock);
	elem = radix_tree_lookup(&mru->store, key);
	if (elem) {
		list_del(&elem->list_node);
		_xfs_mru_cache_list_insert(mru, elem);
		__release(mru_lock); /* help sparse not be stupid */
	} else
		spin_unlock(&mru->lock);

	return elem ? elem->value : NULL;
}

/*
 * To look up an element using its key, but leave its location in the internal
 * lists alone, call xfs_mru_cache_peek(). If the element isn't found, this
 * function returns NULL.
 *
 * See the comments above the declaration of the xfs_mru_cache_lookup() function
 * for important locking information pertaining to this call.
 */
void *
xfs_mru_cache_peek(
	xfs_mru_cache_t	*mru,
	unsigned long	key)
{
	xfs_mru_cache_elem_t	*elem;

	ASSERT(mru && mru->lists);
	if (!mru || !mru->lists)
		return NULL;

	spin_lock(&mru->lock);
	elem = radix_tree_lookup(&mru->store, key);
	if (!elem)
		spin_unlock(&mru->lock);
	else
		__release(mru_lock); /* help sparse not be stupid */

	return elem ? elem->value : NULL;
}

/*
 * To release the internal data structure spinlock after having performed an
 * xfs_mru_cache_lookup() or an xfs_mru_cache_peek(), call xfs_mru_cache_done()
 * with the data store pointer.
 */
void
xfs_mru_cache_done(
	xfs_mru_cache_t	*mru) __releases(mru->lock)
{
	spin_unlock(&mru->lock);
}
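The bookkeeping these routines implement boils down to a keyed store (the radix tree) plus a recency-ordered list that is reshuffled on every lookup. As a rough illustration of that idea only, not of the real XFS implementation (which adds per-group lists, a reaper workqueue and spinlock protection), here is a hypothetical Python sketch; the class name `MruCache` is invented for this example:

```python
class MruCache:
    """Toy model of MRU bookkeeping: a keyed store plus a
    recency-ordered list of keys, most recently used first.
    Assumes each key is inserted at most once."""

    def __init__(self):
        self._store = {}   # stands in for the radix tree
        self._mru = []     # keys, most recently used first

    def insert(self, key, value):
        self._store[key] = value
        self._mru.insert(0, key)

    def lookup(self, key):
        # Like xfs_mru_cache_lookup(): a hit moves the key to the
        # head of the recency list to record that it was touched.
        if key not in self._store:
            return None
        self._mru.remove(key)
        self._mru.insert(0, key)
        return self._store[key]

    def peek(self, key):
        # Like xfs_mru_cache_peek(): no reordering.
        return self._store.get(key)

    def remove(self, key):
        # Like xfs_mru_cache_remove(): returns the value, or None.
        if key not in self._store:
            return None
        self._mru.remove(key)
        return self._store.pop(key)


cache = MruCache()
cache.insert(1, "a")
cache.insert(2, "b")
cache.lookup(1)            # touching key 1 moves it to the head
print(cache._mru)          # [1, 2]
print(cache.remove(2))     # b
```

The sketch deliberately leaves out everything that makes the kernel version interesting (locking, time-grouped lists, deferred reaping); it only shows why a lookup and a peek differ.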
Financial Health Lab

A Plethora of Passwords

By Kelley Presley, Curriculum Development Specialist and Tech Coach

With so much of our personal, confidential information being stored and shared online, it's more important than ever to be aware of the passwords we use to secure our online accounts. Passwords are the barrier between your sensitive information and anyone who may be trying to steal it. And depending on how many accounts you have to keep up with, it may be challenging to remember all the passwords. However, there are tools called password managers that some people use to help with that. Some are apps that you can download, but many browsers, smartphones and computers will also store your passwords for you if you give them permission.

Creating Strong Passwords

We can't overstate how critical it is to create a strong password that would be almost impossible to crack. Strong passwords are at least 12 characters long, consist of a combination of uppercase and lowercase letters, include numbers and special characters, and have no ties to your personal information and no words that appear in the dictionary. Experts discourage people from using the same password across multiple accounts. Additionally, it is recommended that you change your passwords every thirty, sixty or ninety days, depending on what the password is used for.

Not a strong password and probably not a good place to store it.

A fun tip for creating a password is to take the first letter of each word of a line of a movie, poem or song that you really enjoy, and use that to help create your password; chances are you'll have a better shot at remembering it. For example, if you love Dua Lipa's songs, here's the first line of Break My Heart: "I've always been the one to say the first goodbye." Your password would be: Iabtotstfg (keeping the I capitalized). Or, even better, turn some of the letters into numbers: Iabt12st1g. Give it a try!
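The first-letter trick above is simple enough to automate. Here is an illustrative Python sketch (the function name `initials_password` is invented for this example); you would still want to swap some letters for numbers and symbols afterwards, as suggested above:

```python
def initials_password(phrase):
    """Build a password seed from the first letter of each word
    of a memorable line."""
    return "".join(word[0] for word in phrase.split())

# The Dua Lipa line used in the article:
print(initials_password("I've always been the one to say the first goodbye."))
# Iabtotstfg
```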
Storing Your Passwords

If you're thinking of storing your passwords digitally, there are some things you should consider. Determine how many passwords you want to keep track of and on what devices you would need to access those accounts. You could make a list or spreadsheet on your computer. Think of how much lighter the cognitive load on your brain would be, but also consider the impact if you were unable to access the device with the stored passwords. Would it make more sense to use a password manager program that you can access from any computer, tablet or phone? This article has a great comparison of password managers, some of which are free and some of which have paid options.

It is crucial for all of us to create strong, unique passwords for each and every account we have, and to change those passwords on a regular basis. But it is up to each of us to decide whether or not to store those passwords on paper, on a device or by using a password manager. There are pros and cons to using each of these tools, so consider those before jumping in.
Testing Your Front-end Application the Right Way

Developing an application without bugs is almost impossible, especially when major features are each implemented by a different developer. Once these features are integrated, bugs will likely show themselves. Obviously, we never put bugs in our code intentionally; they appear accidentally because we overlook edge cases or interpret a feature incorrectly. For the sake of the business, we don't want our users or customers to run into those bugs, which is why we need to test our code before it launches to production.

Testing is a crucial task. Besides ensuring the quality of your code, it helps you keep your focus on the feature you are currently working on without having to worry about existing features. In practice, we create a lot of small independent test tasks, each of which we call a unit test. A unit test is a black-box way to test: there is no need to know what happens inside; we only care about what we expect back after we give something to the function we want to test.

For the back-end, creating such a test is clearly implementable, since all existing functions give you clear input parameters and output. Front-end engineers find it hard to test the way a unit test does, because we want to test the shape or content of a component, which carries a lot of ambiguity in terms of style and layout implementation. In other words, the output is not clear enough for a test to assert on, which is why some developers end up writing tests that look inside the component rather than treating it the way normal people see it. In any world, testing in a white-box way is never a good choice: it is not independent for developers, who then have to rely heavily on the tests we created. In this article, I want to share how I and my team test our components on the front-end in as black-box a way as possible.
This article covers basic front-end testing knowledge, and I will demonstrate it on React Native. But don't worry: it is also applicable to other front-end frameworks.

Setup test

We are using Jest to test our React applications both on the web and on mobile. In simple words, Jest is a test framework that gives us the ability to detect test files, select which code should be considered testable, show code coverage, and more. Check this out for further exploration. It officially comes with the application when we initialize it for the first time, so there is no need to install Jest first.

Let's get to the real things now. Since all our source code lives inside the src directory and uses TypeScript, this is how we tell Jest to locate and recognize it on mobile. In /package.json we put these lines:

"jest": {
  "preset": "react-native",
  "moduleFileExtensions": [
    "ts",
    "tsx",
    "js",
    "jsx",
    "json",
    "node"
  ]
},

Note the .tsx extension there, which indicates a React TypeScript file. We don't need to tell Jest where exactly our test files are; as I mentioned before, Jest finds test files automatically. Jest recognizes a test file by looking for file names that follow this pattern:

*.test.*

As we know, a good unit test should be an atomic representation of functionality. To achieve that, we mark a component as that atomic representation. We decided to put a component's code and its test file in the same directory, so the test file only tests the code within that directory. This is how it looks:

Inside components, we defined each test (index.test.tsx) at the same level of the hierarchy as its component (index.tsx).

From a maintenance standpoint, this is more maintainable and accessible than defining all tests in one directory or one file somewhere in the project. Once the codebase grows large, it becomes hard to find a test file quickly otherwise.
Additionally, if you have similar names between files, it tends to deceive your eyes when you look for a test in the same directory or file. Even if you can use the search tools in your IDE, you still need to remember the file name before you search for it, which is still not good practice. At another time, I will tell you more about this management approach. Let's get back to the topic.

Let's say we want to test this component, a Button, in /src/components/Button/index.tsx:

import React, { useContext } from 'react';
import styled, { ThemeContext } from 'styled-components/native';
import { Text } from 'components';

interface ParentProps {
  width?: string;
}

interface StyledButtonProps extends ParentProps {
  background: string;
  borderWidth?: string;
  borderColor?: string;
  theme: ThemeProps;
}

const StyledButton = styled.TouchableOpacity`
  justify-content: center;
  align-items: center;
  padding: 12px 36px;
  height: auto;
  width: auto;
  flex: 1 0 auto;
  width: ${(props: StyledButtonProps) => props.width || "auto"};
  background: ${(props: StyledButtonProps) => props.background};
  border-width: ${(props: StyledButtonProps) => props.borderWidth || '0px'};
  border-color: ${(props: StyledButtonProps) => props.borderColor || 'transparent'};
  border-radius: 3px;
`;

enum ButtonType {
  Filled = 1,
  Outline = 2,
}

interface ButtonProps extends ParentProps {
  type?: ButtonType,
  children: string;
  clickable?: boolean;
  scale?: number;
  isBold?: boolean;
  onPress?: () => void;
}

const DEFAULT_MAIN_COLOR = "#42c41d";
const DEFAULT_DISABLE_COLOR = "#646663";
const DEFAULT_TEXT_COLOR = "#9d9e9d";

function Button({
  type = ButtonType.Filled,
  children,
  clickable = true,
  isBold = true,
  width,
  onPress = () => {}
}: ButtonProps) {
  const { colors } = useContext(ThemeContext) || {};

  // Define all colors
  const filledButtonMainColor: string = colors?.green || DEFAULT_MAIN_COLOR;
  const filledButtonDisableColor: string = colors?.lightGray || DEFAULT_DISABLE_COLOR;
  const filledButtonTextColor: string = colors?.almostWhite || DEFAULT_TEXT_COLOR;
  const outlineButtonMainColor: string = colors?.black || DEFAULT_MAIN_COLOR;
  const outlineButtonDisableColor: string = colors?.mediumGray || DEFAULT_DISABLE_COLOR;
  const outlineButtonTextColor: string = colors?.black || DEFAULT_TEXT_COLOR;

  if (type === ButtonType.Filled) {
    return (
      <StyledButton
        onPress={clickable ? onPress : () => {}}
        width={width}
        background={clickable ? filledButtonMainColor : filledButtonDisableColor}
      >
        <Text
          type={Text.StyleType.Medium}
          color={filledButtonTextColor}
          isBold={isBold}
        >
          {children}
        </Text>
      </StyledButton>
    );
  } else {
    return (
      <StyledButton
        onPress={clickable ? onPress : () => {}}
        width={width}
        background="transparent"
        borderWidth='0.8px'
        borderColor={clickable ? outlineButtonMainColor : outlineButtonDisableColor}
      >
        <Text
          type={Text.StyleType.Medium}
          color={clickable ? outlineButtonTextColor : outlineButtonDisableColor}
          isBold={isBold}
        >
          {children}
        </Text>
      </StyledButton>
    );
  }
}

Button.Type = ButtonType;

export default Button;

Just focus on the Button function. In summary, there are button types, parameters to customize the Button component, and the component itself, which can be clicked whenever the clickable parameter is true.

Assume we are outsiders who don't know the specific location of the clickable element inside Button, but we do know the type of the clickable component. In React Native, a clickable component is TouchableOpacity. So the way we test our component is to first find the TouchableOpacity inside the Button component; once we have found it, we can click the button as people do by calling the onPress function in its props.

The challenge is how we find TouchableOpacity inside the Button component. Luckily there is a library that does this for us: react-test-renderer. This library does quite a lot for us, such as testing whether the component renders properly, and also finding something we want inside the component without knowing the full details of the component.
This is the full test code for the Button component, in /src/components/Button/index.test.tsx:

/**
 * @format
 */
import React from 'react';
import { TouchableOpacity } from 'react-native';
import Button from '.';
import renderer from 'react-test-renderer';
// Note: test renderer must be required after react-native.

describe("Button tests", () => {
  describe("Filled button test", () => {
    it('Filled button works properly', () => {
      // Default button with a proper callback function
      let number = 0;
      const button = renderer.create(
        <Button onPress={() => number += 1}>
          Button
        </Button>
      );
      expect(button).toBeTruthy();
      button.root.findByType(TouchableOpacity).props.onPress();
      expect(number).toEqual(1);
    });

    it('Prevent action for unclickable filled button', () => {
      let number = 0;
      const button = renderer.create(
        <Button clickable={false} onPress={() => number += 1}>
          Button
        </Button>
      );
      expect(button).toBeTruthy();
      button.root.findByType(TouchableOpacity).props.onPress();
      expect(number).toEqual(0);
    });
  });

  describe("Outline button test", () => {
    it('Outline button works properly', () => {
      let number = 0;
      const button = renderer.create(
        <Button type={Button.Type.Outline} onPress={() => number += 1}>
          Button
        </Button>
      );
      expect(button).toBeTruthy();
      button.root.findByType(TouchableOpacity).props.onPress();
      expect(number).toEqual(1);
    });

    it('Prevent action for unclickable outline button', () => {
      let number = 0;
      const button = renderer.create(
        <Button type={Button.Type.Outline} clickable={false} onPress={() => number += 1}>
          Button
        </Button>
      );
      expect(button).toBeTruthy();
      button.root.findByType(TouchableOpacity).props.onPress();
      expect(number).toEqual(0);
    });
  });
});

As you can see, there is no need to look at states or variables inside the component; we only look at the component as people do. We call this kind of test a functional test, and this is how we do it in best practice. These simple test codes follow all of the TDD FIRST principles as well.
Notice that we run a lot of different tests with different parameters on the same component because we want to cover every possible customization, and the tests are clearly independent: they can run repeatedly, with the same result, without worrying about breaking any other system or functionality, as long as nothing changes significantly inside the component. Also, the amount of test code is not much compared to the code of the component itself, so we are able to write it in no time. Since it's an automatic test, it is self-checking. The tests are also fast to run, as you can see in the screenshot I attached below.

In total, 58 ms needed to run.

Takeaways

The thing you always need to remember is to aim for a black-box way of testing as far as possible; it's an absolute choice. This article covers just one testing scenario; in the real world, there is a lot of code that is hard to test by this method. Take testing a form: in practice, it's easier to test a state than to test a collection of fields and buttons, but it can still be solved this way. The point is don't give up, keep practicing. By working this way we actually help ourselves and other developers to refactor or implement components independently, without needing to know how a test task tests our code. Note that this concept is not only for React Native but is also applicable to other front-end frameworks.

Written by a full-stack developer and software architecture engineer. I'm a passionate developer who loves to learn new things and write articles about them.
IntelliJ IDEA 2023.1 Help

Modules

In IntelliJ IDEA, a module is an essential part of any project: it's created automatically together with a project. Projects can contain multiple modules; you can add new modules, group them, and unload the modules you don't need at the moment.

Generally, modules consist of one or several content roots and a module file; however, modules can exist without content roots. A content root is a folder where you store your code. Usually, it contains subfolders for source code, unit tests, resource files, and so on. A module file (the .iml file) is used for keeping module configuration.

Modules allow you to combine several technologies and frameworks in one application. In IntelliJ IDEA, you can create several modules for a project, and each of them can be responsible for its own framework. For more information, refer to Add frameworks (facets).

Module composition shown on a scheme

IntelliJ IDEA modules vs Java modules

In version 9, Java introduced the Java Platform Module System. IntelliJ IDEA already had a concept of modules: every IntelliJ IDEA module built its own classpath. With the introduction of the new Java platform module system, there are now two systems of modularity: the IntelliJ IDEA modules, and the new Java 9 modules that are configured using module-info.java. This documentation section describes IntelliJ IDEA modules. For more information on Java 9 support in IntelliJ IDEA, refer to the Support for Java 9 Modules in IntelliJ IDEA 2017.1 and Java 9 and IntelliJ IDEA blog posts.

Projects with multiple modules

IntelliJ IDEA allows you to have many modules in one project, and they don't have to be Java modules only. You can have one module for a Java application and another module for a Ruby on Rails application, or for any other supported technology. An application that consists of a client side and a server side is a good example of a two-module project.

Add a new module to your project

1.
Select the top-level directory in the Project tool window and press Alt+Insert, or select New | Module from the context menu. The New Module wizard opens.

2. From the list on the left, select a module type. Name the new module.

3. From the Language list, select the language that you want to use in your application. If you want to use a language that is not available in IntelliJ IDEA out of the box (for example, Python or PHP), click the add button and select the necessary option. The IDE will open a dialog in which you can select and install the necessary language plugin. After that, you can close the dialog and keep configuring the new module.

4. Select the build system that you want to use in your project: the native IntelliJ builder, Maven, or Gradle. For Gradle, you will also need to select a language for the build script: Groovy or Kotlin.

5. Select a JDK that you want to use from the JDK list. You can use the project SDK or specify a new one.

6. Click Create.

Import an existing module

You can import a module to your project by adding the .iml file from another project:

1. From the main menu, select File | New | Module from Existing Sources.

2. In the dialog that opens, specify the path to the .iml file of the module that you want to import, and click Open.

By doing so, you are attaching another module to the project without physically moving any files. If you don't need the modules to be located in one folder, the module import is finished, and you can start working with the project normally.

Import a module from existing sources

Use these steps to import a project as a module if the project comes from an external model, or if you want to create a module from existing source code that is not necessarily an exported project.

1. From the main menu, select File | New | Module from Existing Sources.

2. Select the directory in which your sources, libraries, and other assets are located and click Open.

3.
In the dialog that opens, select Create module from existing sources if you want to create a new module from the existing source code. Otherwise, select Import project from external model, select the external model that the project uses, and follow the steps of the wizard.

Group modules

In IntelliJ IDEA, you can logically group modules. If you have a large project with multiple modules, grouping will make it easier to navigate through your project. Module groups can be nested: a group can contain other subgroups.

Create a new module group (deprecated)

In earlier versions (2017.2 and earlier), IntelliJ IDEA used explicit groups for joining modules together. If you've configured manual module groups, you will be able to continue working with them in later versions of the IDE. Alternatively, you can convert module groups and use qualified names instead.

1. In the Project tool window (Alt+1), select the modules that you want to group. You can also do so on the Modules page of the Project Structure dialog (Ctrl+Alt+Shift+S).

2. From the context menu, select Move Module to Group | New Top Level Group.

3. Name the new group and click OK.

The new group is now created and is marked with the module group icon. Select Outside Any Group to exclude the selected module from the group, To This Group to add the module to the group, or To New Subgroup to create a new group in another group.

Convert module groups to qualified names (deprecated)

1. From the main menu, select File | Convert Module Groups to Qualified Names.

2. In the next dialog, review the new module names and adjust them if necessary.

3. Apply the changes and close the dialog.

Group modules by fully qualified names

IntelliJ IDEA 2017.3 and later uses fully qualified names to group modules. For example, if you want to group all CDI modules, add the cdi prefix to their names.

1. Open the Project Structure dialog (Ctrl+Alt+Shift+S) and click Modules.

2.
Select the modules you want to group, open the context menu, and click Change Module Names.

3. Specify a prefix and apply the changes.

To view all modules on the same level in the Project Structure dialog, use the Flatten Modules context menu option.

Grouping modules using a prefix

Last modified: 27 October 2022
Sensor

From Second Life Wiki

Description

Event: sensor( integer num_detected ){ ; }

Results from a call to either llSensor or llSensorRepeat.

• integer num_detected: the number of objects/avatars found.

The results are ordered from nearest to furthest. num_detected is always greater than zero; the no_sensor event is triggered if no objects/avatars were found.

Caveats

• Lindens in administrative mode cannot be sensed by sensors in the same region as the Linden.
• Sensors placed in attachments will use the direction the avatar is facing as their forward vector. In mouselook, this means it will be wherever the avatar is looking, while out of mouselook, it means whichever way the avatar is pointing. This does not include where the avatar's head is pointing, or what animation the avatar is doing, just the direction the avatar would move in if you walked forward. This is the case regardless of where the object is attached.
• A sensor running in an attachment will not detect the avatar wearing it.
• A sensor will only return the first 16 objects/avatars found.
• This event is not executed when nothing is detected, which means you never get a result of 0. Use no_sensor for that.
• On logout, all avatars leave a ghost for a few moments; this results in failures in llDetected functions in sensor events.
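The selection rules above (a range limit, nearest-to-furthest ordering, and a cap of 16 results) can be modeled outside LSL. This is an illustrative Python sketch of the documented behavior only, not simulator code; the function name `sense` and the sample positions are invented:

```python
import math

MAX_DETECTED = 16  # a sensor only returns the first 16 objects/avatars

def sense(sensor_pos, targets, sensor_range):
    """Return names of targets within range, ordered nearest to furthest."""
    hits = sorted(
        (math.dist(sensor_pos, pos), name)
        for name, pos in targets.items()
        if math.dist(sensor_pos, pos) <= sensor_range
    )
    return [name for _, name in hits[:MAX_DETECTED]]

targets = {"Ann": (3, 0, 0), "Bob": (12, 0, 0), "Cy": (1, 1, 0)}
print(sense((0, 0, 0), targets, 10.0))  # ['Cy', 'Ann']
```

Note how "Bob" is dropped for being out of range, while the two hits come back nearest first, which is the ordering llDetectedName(0) relies on in the example below.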
Examples

default
{
    touch_start(integer num_detected)
    {
        // do a 10m spherical sweep
        llSensor("", NULL_KEY, AGENT_BY_LEGACY_NAME, 10.0, PI);
    }

    sensor(integer num_detected)
    {
        string message = "Detected " + (string)num_detected
            + " avatar(s): " + llDetectedName(0);

        // we already added the first avatar above, so continue from index 1
        integer index = 1;
        while (index < num_detected)
            message += ", " + llDetectedName(index++);

        llWhisper(PUBLIC_CHANNEL, message);
    }

    no_sensor()
    {
        llWhisper(PUBLIC_CHANNEL, "Nobody is near me at present.");
    }
}

Notes

Important: You might want to use llGetAgentList instead of using sensors to get a list of all avatars within the same parcel or region.

See Also

Functions
• llSensor
• llSensorRepeat

Articles
• Detected

Deep Notes

Signature

event void sensor( integer num_detected );
Do AirTags Make Noise and How To Fix?

Do Apple AirTags make noise? AirTags have the ability to emit sounds to aid in locating lost items. In this article, we will delve into the world of AirTag sounds, exploring whether AirTags make noise, what these sounds mean, and whether it's possible to turn off the sound. We will also discuss troubleshooting tips for AirTag noise-related issues.

Do AirTags Make Noise?

Yes, AirTags make noise. They can make a variety of sounds depending on the situation, including:

• Setup: When you first set up an AirTag, it will make a chime sound to indicate that it is ready to be paired.
• Finding: If you lose an AirTag, you can use the Find My app to play a sound on it. The AirTag will beep loudly until you find it.
• Anti-Stalking: If an AirTag that is not yours is traveling with you, it will start to beep after a period of time to alert you of its presence. This is to prevent AirTags from being used to track people without their knowledge.

What Do the Different AirTag Sounds Mean?

Here is a breakdown of the different AirTag sounds and what they mean:

• Connected Chime: This sound indicates that the AirTag is ready to be set up.
• Setup Chime: This sound indicates that the AirTag has been successfully set up and is ready to use.
• Find My Chime: This sound plays when you are locating the AirTag.
• Moving With You Chime: This sound plays when an unknown AirTag has been moving with you over time.
• Find Unknown AirTag Chime: This sound plays when you are locating an unknown AirTag.
• Rapid Beeping: This sound plays when an AirTag has been separated from its owner for more than 24 hours.

Is It Possible to Turn Off the Sound on an AirTag?

• You can turn off the sound on your AirTag in the Find My app. To do this, open the Find My app and tap the Items tab. Then, tap the AirTag that you want to mute and tap the Toggle Sound button.
• However, keep in mind that muting the sound on your AirTag will make it more difficult to find if you lose it.

Troubleshooting AirTag Sound

If your AirTag is making a sound and you are not sure why, there are a few things you can do to troubleshoot the issue:

• Check the Battery: If your AirTag is low on battery, it may start to beep to let you know. Replace the battery to see if this fixes the issue.
• Update the Find My App: Make sure that you are using the latest version of the Find My app. Apple regularly releases updates that include bug fixes and improvements.
• Contact Apple Support: If you are still having problems with your AirTag's sound, contact Apple support for further assistance.

AirTags – FAQs

1. Can I use AirTags to track people?
Ans: No, using AirTags to track individuals without their consent is a breach of privacy and is illegal in many jurisdictions.

2. What is the range of AirTags' Bluetooth signal?
Ans: The Bluetooth range of AirTags is approximately 30 feet, depending on environmental factors.

3. Are AirTags recyclable?
Ans: Apple has implemented a recycling program for AirTags to reduce electronic waste. Check their website for details on recycling options.

4. How Do I Make My AirTag Ring?
Ans: To make your AirTag ring, open the "Find My" app, select the AirTag, and choose the "Play Sound" option.

5. Can I Customize the Sound?
Ans: No, you can't customize the sound of your AirTag. It emits a default chirping noise.

6. How Loud is the AirTag Sound?
Ans: The sound emitted by the AirTag is relatively loud, making it easier to locate your lost item.

Conclusion

AirTags are a great way to track your belongings, but they can also be noisy. If you are not sure why your AirTag is making noise, check the battery and update the Find My app. If you are still having problems, contact Apple support for assistance.
PDA View Full Version : Seeking design advise TorAn 1st March 2019, 23:33 I have following scenario: My code invokes external function (some type of print) in the loop. In each iteration, after variable amount of time (< 2 mins) it saves the file in the specific directory or it may fail to print and I need to advance to the next loop cycle. Logically it is like that: while (1) { // waitfor returns true if file is found. If 3 minutes passed and file // is not found where it is supposed to be, do nothing if (waitfor (printfile(), 3)) { // do something with the file } } Printfile should monitor the appearance of the file in a certain directory. If not found in 2 mins, proceed to the next loop. I am perfectly capable of doing something with straight c++ or boost. I am interested in understanding how it can be implemented using specialized Qt classes, like QFileSystemWatcher, QFuture or others. In QFileSystemWatcher, for example, I don't see how can I do wait without putting some wait conditions in the slot; Advise is greatly appreciated. Thanks. ChrisW67 2nd March 2019, 06:04 Are you expecting your waiting program to be doing something while it waits? d_stranz 2nd March 2019, 06:46 My code invokes external function (some type of print) in the loop. Is this something you spawn using QProcess? Then you can implement a slot that handles one of the QProcess signals. Are you expecting your waiting program to be doing something while it waits? If not, then using something like QFileSystemWatcher might be a good solution, but it won't handle the case where creating the file fails (ie. the print doesn't work). But any code that uses a while( true ) infinite loop is potential for big trouble. If you have to do that, then you should at least insert a processEvents() call in the loop to keep the app from freezing. The QFuture methods that query for a result are blocking, so they will cause your app to hang until that process is complete. That's about as bad as an infinite loop. 
I am curious, though. Most operating systems I am familiar with implement some kind of print queue. So you should be able to send multiple print requests, and the printing system will queue them up and process them in order. Why do you have to wait until one finishes before starting the next one? The OS print queue will do that for you. So if you can do away with waiting, then QFileSystemWatcher will probably serve to tell you when a print job has created a file successfully. TorAn 2nd March 2019, 09:45 Are you expecting your waiting program to be doing something while it waits? No, just sit and wait until either new file is generated or timeout occures. Added after 6 minutes: The process I am describing is a specialized screen scraping process, so it is sequential in nature and I do have to wait until individual screen view is processed. anda_skoa 2nd March 2019, 13:23 In QFileSystemWatcher, for example, I don't see how can I do wait without putting some wait conditions in the slot; With this approach the waiting is done implicitly, i.e. the object registers itself with the platform's file notification system and relays notifications for files that it is told to watch. The "loop" in this case is the thread's event loop so it is not really useful of you are running your own loop. So its main use case is to trigger functionality when a file appears, is changed or disappears. I.e. as the source that triggers an action, not as part of a larger processing chain. Cheers, _ TorAn 6th March 2019, 02:46 I wrote the test that I think covers suggestions that were given (and much appreciated): class scraper: public QRunnable { private: int _id; QMutex& _mtx; QString _path; public: scraper(int id, QMutex& m, QString p) : _id(id), _mtx(m), _path(p) {} void run() { QFileSystemWatcher qfs; qfs.addPath(_path); // this slot is never called, despite moving file to the path. I think it is because it executes in the main thread; if (! 
QObject::connect(&qfs, &QFileSystemWatcher::directoryChanged, [&](const QString& p){ qDebug() << "cycle:" << QString::number(_id) << "file:" <<p; _mtx.unlock(); })) { qDebug() << "failure to connect to QFileSystemWatcher::directoryChanged signal"; return; } qDebug() << "waiting for file in cycle " << QString::number(_id); } }; int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); qDebug()<< "started"; QMutex mtx; QString pt="c:/temp/files"; QWaitCondition wc; QTimer::singleShot(2000,&a, [&](){ for (int i = 0; i < 20; ++i) { qDebug() << "starting cycle " << QString::number(i); scraper* myscraper = new scraper(i, mtx, pt); QThreadPool::globalInstance()->start(myscraper); mtx.lock(); wc.wait(&mtx, 20000); mtx.unlock(); } }); return a.exec(); } The issue I am having is that the signal directoryChanged is never called. I think it is because it is a queued call and it is executed in the context of the main thread, which is blocked. Perhaps I should use QThread instead of QRunnable? Suggestions and critiques are always welcomed and appreciated. Thanks!

Lesiok 6th March 2019, 07:47
I think the directoryChanged signal is emitted but qfs is deleted on exit from run(). Declare qfs like _mtx and _path.

TorAn 6th March 2019, 11:54
Thanks, I will try that. Update - I don't think that the destruction of the instance is the problem. I changed the code to this: class scraper: public QRunnable { private: int _id; QMutex& _mtx; QString _path; QFileSystemWatcher qfs; public: scraper(int id, QMutex& m, QString p) : _id(id), _mtx(m), _path(p) { setAutoDelete(false); } So, autodestruction does not happen and QFileSystemWatcher is "alive". The slot is still not called, though, when I copy the file to the directory.

anda_skoa 9th March 2019, 10:46
The issue I am having is that the signal directoryChanged is never called. Your thread doesn't run an event loop, so the file system watcher never starts its work. I.e.
your run() method immediately exits after setting up the object and its connection. void run() { QEventLoop loop; QFileSystemWatcher qfs; qfs.addPath(_path); // this slot is never called, despite moving file to the path. I think it is because it executes in the main thread; if (! QObject::connect(&qfs, &QFileSystemWatcher::directoryChanged, [&](const QString& p){ qDebug() << "cycle:" << QString::number(_id) << "file:" <<p; loop.quit(); _mtx.unlock(); })) { qDebug() << "failure to connect to QFileSystemWatcher::directoryChanged signal"; return; } qDebug() << "waiting for file in cycle " << QString::number(_id); loop.exec(); } I think it is because it is a queued call and it is executed in the context of the main thread which is blocked. No, the QFileSystemWatcher is created in run(), which is executed by a separate thread. So all its event handling happens in that thread. But handling events requires a running event loop. Perhaps I should use QThread instead of QRunnable? I don't really understand why you need a thread at all. Cheers, _
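The wait-with-timeout contract the thread keeps coming back to can also be sketched outside Qt. Here is a minimal polling stand-in in plain Python; the path and timeout values are illustrative only, not taken from the thread:

```python
import os
import time

def wait_for_file(path, timeout, poll_interval=0.1):
    """Return True as soon as `path` exists, or False once `timeout`
    seconds have elapsed -- the same contract as the thread's waitfor()."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)  # sleep between checks instead of busy-waiting
    return False

# Example: give up after 0.5 s if the print job never produces the file.
print(wait_for_file("c:/temp/files/out.prn", 0.5))
```

Unlike QFileSystemWatcher, this polls rather than using native file notifications, so it trades a little latency and CPU for not needing an event loop at all.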
I have a dataTable in my page. Initially I want it to be hidden, and shown after fetching data with an AJAX request. I know how to fetch the data and put it into the table, but I don't know how to show the table if it is hidden. Here is the code: <h:commandButton value="aa"> <f:ajax execute="from to validTo" render="transportOffers"/> </h:commandButton> <p:dataTable id="transportOffers" value="${cargoOffer.transportsForCargo}" var="transport"> <p:column> <h:outputText value="${transport.company}"/> </p:column> </p:dataTable> The table is visible initially, even if it is empty. If I set rendered="false" it is invisible, and remains invisible also after the AJAX request. How can I make it hidden initially, and have it show up after being populated with data?

2 Answers

Accepted answer: You could try having the dataTable render conditionally based on the size of the list: rendered = "#{cargoOffer.transportsForCargo.size() != 0}"

Comment: That almost works :) The problem is that when it is not rendered initially, the AJAX response wants to update it, but fails to find it. The solution is to wrap it in something else, and update that something else instead. Thanks! – amorfis May 11 '10 at 11:09

Second answer: I think if rendered=false then the element isn't created, so the AJAX request can't find it.
Convert 947 to unsigned binary (base 2) from a base 10 decimal system unsigned (positive) integer number

How to convert an unsigned (positive) integer in the decimal system (base 10), 947(10), to an unsigned binary (base 2):

1. Divide the number repeatedly by 2: keep track of each remainder. We stop when we get a quotient that is equal to zero.
• division = quotient + remainder;
• 947 ÷ 2 = 473 + 1;
• 473 ÷ 2 = 236 + 1;
• 236 ÷ 2 = 118 + 0;
• 118 ÷ 2 = 59 + 0;
• 59 ÷ 2 = 29 + 1;
• 29 ÷ 2 = 14 + 1;
• 14 ÷ 2 = 7 + 0;
• 7 ÷ 2 = 3 + 1;
• 3 ÷ 2 = 1 + 1;
• 1 ÷ 2 = 0 + 1;

2. Construct the base 2 representation of the positive number: take all the remainders starting from the bottom of the list constructed above. 947(10) = 11 1011 0011(2)

Conclusion: the number 947(10), a positive integer (no sign), converted from the decimal system (base 10) to unsigned binary (base 2): 947(10) = 11 1011 0011(2). Spaces are used to group digits: for binary, by 4.

Convert positive integer numbers (unsigned) from the decimal system (base ten) to binary (base two)

How to convert a base 10 positive integer number to base 2: 1) Divide the number repeatedly by 2, keeping track of each remainder, until you get a quotient that is equal to 0; 2) Construct the base 2 representation by taking all the previously calculated remainders, from the last remainder up to the first one, in that order.

How to convert unsigned (positive) integer numbers from the decimal system (base 10) to binary: simply convert from base ten to base two. Follow the steps below to convert a base ten unsigned integer number to base two:
• 1. Repeatedly divide by 2 the positive integer number that has to be converted to binary, keeping track of each remainder, until we get a QUOTIENT that is equal to ZERO.
• 2.
Construct the base 2 representation of the positive integer number by taking all the remainders, starting from the bottom of the list constructed above. Thus, the last remainder of the divisions becomes the first symbol (the leftmost) of the base two number, while the first remainder becomes the last symbol (the rightmost).

Example: convert the positive integer number 55 from the decimal system (base ten) to binary code (base two):
• 1. Divide 55 repeatedly by 2, keeping track of each remainder, until we get a quotient that is equal to zero:
• division = quotient + remainder;
• 55 ÷ 2 = 27 + 1;
• 27 ÷ 2 = 13 + 1;
• 13 ÷ 2 = 6 + 1;
• 6 ÷ 2 = 3 + 0;
• 3 ÷ 2 = 1 + 1;
• 1 ÷ 2 = 0 + 1;
• 2. Construct the base 2 representation of the positive integer number, by taking all the remainders starting from the bottom of the list constructed above: 55(10) = 11 0111(2)
• The number 55(10), a positive integer (no sign), converted from the decimal system (base 10) to unsigned binary (base 2) = 11 0111(2)
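The repeated-division procedure above maps directly onto a few lines of code. A quick sketch in Python:

```python
def to_unsigned_binary(n):
    """Convert a non-negative integer to its base 2 digits by repeatedly
    dividing by 2 and reading the remainders from last to first."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)      # quotient and remainder of dividing by 2
        remainders.append(str(r))
    # The last remainder becomes the leftmost bit, so reverse the list.
    return "".join(reversed(remainders))

print(to_unsigned_binary(947))  # 1110110011, i.e. 11 1011 0011
print(to_unsigned_binary(55))   # 110111, i.e. 11 0111
```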
Last update: January 30, 2024

createContact mutation

This mutation creates a contact.

Arguments

The InputCreateContactType! represents the input object for creating a contact.

Field: Description
id {String!}: The Id of the contact.
outerId {String}: The external Id of the contact.
memberType {String!}: The type of the contact.
name {String}: The name of the contact.
status {String}: The status of the contact.
phones {[String]!}: The phone numbers associated with the contact.
emails {[String]!}: The email addresses associated with the contact.
groups {[String]!}: The groups to which the contact belongs.
seoObjectType {String!}: The type of object that the contact is associated with.
seoInfo(...) {SeoInfo}: SEO information related to the contact.
defaultBillingAddress {MemberAddressType}: The default billing address of the contact.
defaultShippingAddress {MemberAddressType}: The default shipping address of the contact.
addresses(...) {MemberAddressConnection}: A connection to a list of addresses associated with the contact.
dynamicProperties(...) {DynamicPropertyValueType}: Dynamic properties of the contact.
firstName {String!}: The first name of the contact.
lastName {String!}: The last name of the contact.
middleName {String}: The middle name of the contact.
fullName {String!}: The full name of the contact.
about {String!}: Information about the contact.
birthDate {Date}: The birth date of the contact.
securityAccounts {UserType}: Security accounts associated with the contact.
organizationId {String}: The Id of the organization associated with the contact.
organizationsIds {[String]!}: The Ids of the organizations associated with the contact.
organizations(...) {OrganizationConnection}: A connection to a list of organizations associated with the contact.

Possible returns

Possible return: Description
ContactType: A contact and various fields to describe the contact's information.

mutation createContact($command: InputCreateContactType!)
{ createContact(command: $command) { fullName id lastName name } } { "command": { "name": "UserA", "memberType": "Contact", "addresses": [], "fullName": "UserA", "firstName": "UserA", "lastName": "UserA" } }
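As a sketch of how a client might call this mutation over HTTP: the code below builds the standard GraphQL POST body (query plus variables) for the example above. The endpoint URL is a placeholder and not part of this documentation, so adjust it to your store's GraphQL address:

```python
import json

CREATE_CONTACT = """
mutation createContact($command: InputCreateContactType!) {
  createContact(command: $command) {
    fullName
    id
    lastName
    name
  }
}
"""

def build_payload(command):
    """Wrap the mutation and its variables into a GraphQL request body."""
    return json.dumps({"query": CREATE_CONTACT, "variables": {"command": command}})

payload = build_payload({
    "name": "UserA",
    "memberType": "Contact",
    "addresses": [],
    "fullName": "UserA",
    "firstName": "UserA",
    "lastName": "UserA",
})

# Sending it requires a live endpoint, e.g. (placeholder URL):
# import urllib.request
# req = urllib.request.Request("https://your-store/graphql",
#                              data=payload.encode("utf-8"),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```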
62 is what percent (%) of 298?

20.81%

How to solve this problem: a step by step guide

This type of problem is simple to solve and has many uses in day to day life. One example is calculating what percentage grade you scored on a test. Solving this type of problem involves two simple math operations. Like many percentage problems, the only operations involved are multiplication and division. In this case, one division and one multiplication. As with any division and multiplication, these operations can be done in any order you like. Here are the steps you have to take to solve this problem:

Step 1: Divide 62 by 298

The first step is to divide the numerator of this problem by the denominator. The numerator in this case is 62 and the denominator is 298. Here is the equation for this operation:

$$ \frac{62}{298} = 62 \div 298 = 0.20805369127517 $$

Step 2: Multiply 0.20805369127517 by 100

The second step is to multiply the result of step 1 by 100. This will turn our original answer into a percentage, and it is the final answer to the problem. Here is the equation for this operation:

$$ 0.20805369127517 \times 100 = 20.81 $$
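The two steps translate directly into code; a one-function sketch in Python:

```python
def percent_of(part, whole):
    """Step 1: divide part by whole.  Step 2: multiply by 100."""
    return part / whole * 100

answer = percent_of(62, 298)
print(round(answer, 2))  # 20.81
```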
<body style="height: 100%;" onload="window.print();"> So this is my code in PHP; I then echo the HTML, but it only prints the first page. All the solutions I have found say that I should set the height/width to 100%, but it still doesn't work. Already tried: body, html, #wrapper { width: 100%; height:100%; } @media print { ... } This is the body of the print page; you can see the foreach, which is why in some cases it is longer than one page. Why is this happening? <body onload="window.print();"> <div class="wrapper"> <section class="invoice"> <!-- title row --> <div class="row"> <div class="col-xs-12"> <h2 class="page-header"> '.$company_info['company_name'].' <small class="pull-right">Date: '.$order_date.'</small> </h2> </div> <!-- /.col --> </div> <!-- info row --> <div class="row invoice-info"> <div class="col-xs-12 "> <b>'.$this->lang->line('orderno').':</b> '.$order_data['bill_no'].'<br> <b>'.$this->lang->line('customername').':</b> '.$name['customer_name'].'<br> <b>'.$this->lang->line('customeradress').':</b> '.$name['customer_address'].' <br /> <b>'.$this->lang->line('customerphonenumber').':</b> '.$name['customer_phone'].'
</div> <!-- /.col --> </div> <!-- /.row --> </br> <!-- Table row --> <div class="row"> <div class="col-xs-12 table-responsive"> <table class="table table-striped"> <thead> <tr style="border:1px"> <th colspan="2"></th> <th style="text-align:center" colspan="2">'.$this->lang->line('qty').'</th> <th colspan="6"></th> </tr> <tr> <th>'.$this->lang->line('productname').'</th> <th>'.$this->lang->line('price').'</th> <th>Commandée</th> <th>à Livrer</th> <th>'.$this->lang->line('vat').'</th> <th>'.$this->lang->line('discount').'</th> <th>'.$this->lang->line('amount').'</th> </tr> </thead> <tbody>'; foreach ($orders_items as $k => $v) { $product_data = $this->model_products->getProductData($v['product_id']); setlocale(LC_MONETARY, 'en_US'); $html .= '<tr> <td>'.$product_data['name'].'</td> <td>'.number_format($v['rate'], 3).'</td> <td>'.$v['qty'].'</td> <td>'.$v['qty_liv'].'</td> <td>'.$v['vat'].' %</td> <td>'.$v['discount'].' %</td> <td>'.number_format($v['amount'], 3).'</td> </tr>'; if($v['free'] > 0){ $html .= '<tr> <td>'.$product_data['name'].'</td> <td>0</td> <td>0</td> <td>'.$v['free'].'</td> <td>0</td> <td>0</td> <td>0</td> </tr>'; } } $html .= '</tbody> </table> </div> <!-- /.col --> </div> <!-- /.row --> <div class="row"> <div class="col-xs-6 pull pull-right"> <div class="table-responsive"> <table class="table"> <tr> <th style="width:50%">'.$this->lang->line('grossamount').':</th> <td>'.$order_data['gross_amount'].'</td> </tr>'; $html .= '<tr> <th>'.$this->lang->line('totalvat').':</th> <td>'.$order_data['total_vat'].'</td> </tr>'; $html .=' <tr> <th>'.$this->lang->line('netamount').':</th> <td>'.$order_data['net_amount'].'</td> </tr> <tr> <th>'.$this->lang->line('status').':</th> <td>'.$confiem_status.'</td> </tr> </table> </div> </div> <!-- /.col --> </div> <!-- /.row --> </section> <!-- /.content --> </div> </body> In some cases the foreach runs over more than one page, so ... 0 ChiKa LiO 16 Sep
2019 at 13:27

3 answers

Best answer: Remove <link rel="stylesheet" href="'.base_url('assets/dist/css/AdminLTE.min.css').'"> So the AdminLTE CSS was the cause ... 0 ChiKa LiO 16 Sep 2019 at 15:07

Did you try with?: <body style="height: 100wh;" onload="window.print();"> You could try to explain a bit better what the purpose of your code is and why you are trying to print many pages. 0 Iñigo 16 Sep 2019 at 11:27

This explains some basic ideas for quickly creating a set of print styles using the CSS3 @media syntax. Of course, many of us now neglect print styles entirely. However, it takes relatively little effort to create something simple that can keep users from printing all the things from a page that they probably don't need. I recently added some print styles for this site, so I hope they can be useful to others: First of all, these styles are best placed at the end of all your other styles. This means they are given greater weight due to the CSS cascade and are less likely to be overwritten by other rules elsewhere. Then, using CSS3 media queries, we can target styles for print like this: @media print { /* styles go here */ } 0 dılo sürücü 16 Sep 2019 at 11:08
Index: /branches/working-0711/ccl/level-1/l1-clos.lisp =================================================================== --- /branches/working-0711/ccl/level-1/l1-clos.lisp (revision 8852) +++ /branches/working-0711/ccl/level-1/l1-clos.lisp (revision 8853) @@ -556,6 +556,6 @@ (unless (memq class seen) (or (if (forward-referenced-class-p class) class) - (progn - (push class seen) + (let ((seen (cons class seen))) + (declare (dynamic-extent seen)) (dolist (s (%class-direct-superclasses class)) (when (eq s original) @@ -565,10 +565,29 @@ (scan-forward-refs original ()))) +(defun class-forward-referenced-superclasses (original) + (labels ((scan-forward-refs (class seen fwdrefs) + (unless (memq class seen) + (if (forward-referenced-class-p class) + (push class fwdrefs) + (let ((seen (cons class seen))) + (declare (dynamic-extent seen)) + (dolist (s (%class-direct-superclasses class)) + (when (eq s original) + (error "circular class hierarchy: the class ~s is a superclass of at least one of its superclasses (~s)." original class)) + (setq fwdrefs (scan-forward-refs s seen fwdrefs)))))) + fwdrefs)) + (scan-forward-refs original () ()))) + + (defmethod compute-class-precedence-list ((class class)) - (let* ((fwdref (class-has-a-forward-referenced-superclass-p class))) - (when fwdref - (error "~&Class ~s can't be finalized because at least one of its superclasses (~s) is a FORWARD-REFERENCED-CLASS." 
class fwdref))) - (compute-cpl class)) + (let* ((fwdrefs (class-forward-referenced-superclasses class))) + (if fwdrefs + (if (cdr fwdrefs) + (error "Class ~s can't be finalized because superclasses ~s are not defined yet" + class (mapcar #'%class-name fwdrefs)) + (error "Class ~s can't be finalized because superclass ~s is not defined yet" + class (%class-name (car fwdrefs)))) + (compute-cpl class)))) ;;; Classes that can't be instantiated via MAKE-INSTANCE have no Index: /branches/working-0711/ccl/level-1/l1-error-system.lisp =================================================================== --- /branches/working-0711/ccl/level-1/l1-error-system.lisp (revision 8852) +++ /branches/working-0711/ccl/level-1/l1-error-system.lisp (revision 8853) @@ -619,5 +619,10 @@ (if (subtypep name 'condition) (apply #'make-instance name init-list) - (error "~S is not a defined condition type name" name))) + (let ((class (if (classp name) + name + (find-class name)))) ;; elicit an error if no such class + (unless (class-finalized-p class) + (finalize-inheritance class)) ;; elicit an error if forward refs. + (error "~S is not a condition class" class)))) (defmethod print-object ((c condition) stream)
#!/usr/bin/env python # vim: set ts=4 sts=4 sw=4 textwidth=112 : VERSION="2.0.0" PROGRAM="rarslave2" import re, os, sys, optparse import par2parser import RarslaveConfig import RarslaveLogger # Global Variables (TYPE_OLDRAR, TYPE_NEWRAR, TYPE_ZIP, TYPE_NOEXTRACT) = range (4) (SUCCESS, ECHECK, EEXTRACT, EDELETE) = range(4) config = RarslaveConfig.RarslaveConfig() logger = RarslaveLogger.RarslaveLogger () # Global options to be set / used later. options = None class RarslaveExtractor (object): def __init__ (self, type): self.type = type self.heads = [] def addHead (self, dir, head): assert os.path.isdir (dir) # REQUIRES that the dir is valid, but not that the file is valid, so that # we can move a file that doesn't exist yet. # FIXME: probably CAN add this back, since we should be running this AFTER repair. #assert os.path.isfile (os.path.join (dir, head)) full_head = os.path.join (dir, head) logger.addMessage ('Adding extraction head: %s' % full_head, RarslaveLogger.MessageType.Debug) self.heads.append (full_head) def extract (self, todir=None): # Extract all heads of this set # Create the directory $todir if it doesn't exist if todir != None and not os.path.isdir (todir): logger.addMessage ('Creating directory: %s' % todir, RarslaveLogger.MessageType.Verbose) try: os.makedirs (todir) except OSError: logger.addMessage ('FAILED to create directory: %s' % todir, RarslaveLogger.MessageType.Fatal) return -EEXTRACT # Extract all heads extraction_func = \ { TYPE_OLDRAR : self.__extract_rar, TYPE_NEWRAR : self.__extract_rar, TYPE_ZIP : self.__extract_zip, TYPE_NOEXTRACT : self.__extract_noextract }[self.type] # Call the extraction function on each head for h in self.heads: if todir == None: # Run in the head's directory ret = extraction_func (h, os.path.dirname (h)) else: ret = extraction_func (h, todir) logger.addMessage ('Extraction Function returned: %d' % ret, RarslaveLogger.MessageType.Debug) # Check error code if ret != SUCCESS: logger.addMessage ('Failed extracting: 
%s' % h, RarslaveLogger.MessageType.Fatal) return -EEXTRACT return SUCCESS def __extract_rar (self, file, todir): assert os.path.isfile (file) assert os.path.isdir (todir) RAR_CMD = config.get_value ('commands', 'unrar') cmd = '%s \"%s\"' % (RAR_CMD, file) ret = run_command (cmd, todir) # Check error code if ret != 0: return -EEXTRACT return SUCCESS def __extract_zip (self, file, todir): ZIP_CMD = config.get_value ('commands', 'unzip') cmd = ZIP_CMD % (file, todir) ret = run_command (cmd) # Check error code if ret != 0: return -EEXTRACT return SUCCESS def __extract_noextract (self, file, todir): # Just move this file to the $todir, since no extraction is needed # FIXME: NOTE: mv will fail by itself if you're moving to the same dir! NOEXTRACT_CMD = config.get_value ('commands', 'noextract') cmd = NOEXTRACT_CMD % (file, todir) ret = run_command (cmd) # Check error code if ret != 0: return -EEXTRACT return SUCCESS class RarslaveRepairer (object): # Verify (and repair) the set # Make sure it worked, otherwise clean up and return failure def __init__ (self, dir, file, join=False): self.dir = dir # the directory containing the par2 file self.file = file # the par2 file self.join = join # True if the par2 set is 001 002 ... 
assert os.path.isdir (dir) assert os.path.isfile (os.path.join (dir, file)) def checkAndRepair (self): # Form the command: # par2repair -- PAR2 PAR2_EXTRA [JOIN_FILES] PAR2_CMD = config.get_value ('commands', 'par2repair') # Get set up basename = get_basename (self.file) all_files = find_likely_files (basename, self.dir) all_files.sort () par2_files = find_par2_files (all_files) # assemble the command command = "%s \"%s\" " % (PAR2_CMD, self.file) for f in par2_files: if f != self.file: command += "\"%s\" " % os.path.split (f)[1] if self.join: for f in all_files: if f not in par2_files: command += "\"%s\" " % os.path.split (f)[1] # run the command ret = run_command (command, self.dir) # check the result if ret != 0: logger.addMessage ('PAR2 Check / Repair failed: %s' % self.file, RarslaveLogger.MessageType.Fatal) return -ECHECK return SUCCESS def run_command (cmd, indir=None): # Runs the specified command-line in the directory given (or, in the current directory # if none is given). It returns the status code given by the application. pwd = os.getcwd () if indir != None: assert os.path.isdir (indir) # MUST be a directory! 
os.chdir (indir) # FIXME: re-enable this after testing print 'RUNNING (%s): %s' % (indir, cmd) return SUCCESS # return os.system (cmd) def full_abspath (p): return os.path.abspath (os.path.expanduser (p)) def get_basename (name): """Strips most kinds of endings from a filename""" regex = config.get_value ('regular expressions', 'basename_regex') r = re.compile (regex, re.IGNORECASE) done = False while not done: done = True if r.match (name): g = r.match (name).groups() name = g[0] done = False return name def find_likely_files (name, dir): """Finds files which are likely to be part of the set corresponding to $name in the directory $dir""" if not os.path.isdir (os.path.abspath (dir)): raise ValueError # bad directory given dir = os.path.abspath (dir) ename = re.escape (name) regex = re.compile ('^%s.*$' % (ename, )) return [f for f in os.listdir (dir) if regex.match (f)] def find_par2_files (files): """Find all par2 files in the list $files""" PAR2_REGEX = config.get_value ('regular expressions', 'par2_regex') regex = re.compile (PAR2_REGEX, re.IGNORECASE) return [f for f in files if regex.match (f)] def find_all_par2_files (dir): """Finds all par2 files in a directory""" # NOTE: does NOT return absolute paths if not os.path.isdir (os.path.abspath (dir)): raise ValueError # bad directory given dir = os.path.abspath (dir) files = os.listdir (dir) return find_par2_files (files) def has_extension (f, ext): """Checks if f has the extension ext""" if ext[0] != '.': ext = '.' + ext ext = re.escape (ext) regex = re.compile ('^.*%s$' % (ext, ), re.IGNORECASE) return regex.match (f) def find_extraction_heads (dir, files): """Takes a list of possible files and finds likely heads of extraction.""" # NOTE: perhaps this should happen AFTER repair is # NOTE: successful. That way all files would already exist # According to various sources online: # 1) pre rar-3.0: .rar .r00 .r01 ... 
# 2) post rar-3.0: .part01.rar .part02.rar # 3) zip all ver: .zip extractor = None p2files = find_par2_files (files) # Old RAR type, find all files ending in .rar if is_oldrar (files): extractor = RarslaveExtractor (TYPE_OLDRAR) regex = re.compile ('^.*\.rar$', re.IGNORECASE) for f in files: if regex.match (f): extractor.addHead (dir, f) if is_newrar (files): extractor = RarslaveExtractor (TYPE_NEWRAR) regex = re.compile ('^.*\.part01.rar$', re.IGNORECASE) for f in files: if regex.match (f): extractor.addHead (dir, f) if is_zip (files): extractor = RarslaveExtractor (TYPE_ZIP) regex = re.compile ('^.*\.zip$', re.IGNORECASE) for f in files: if regex.match (f): extractor.addHead (dir, f) if is_noextract (files): # Use the Par2 Parser (from cfv) here to find out what files are protected. # Since these are not being extracted, they will be mv'd to another directory # later. extractor = RarslaveExtractor (TYPE_NOEXTRACT) for f in p2files: done = False try: prot_files = par2parser.get_protected_files (dir, f) done = True except: #FIXME: add the actual exceptions logger.addMessage ('Error parsing PAR2 file: %s' % f) continue if done: break if done: for f in prot_files: extractor.addHead (dir, f) else: logger.addMessage ('Error parsing all PAR2 files in this set ...') # Make sure we found the type if extractor == None: logger.addMessage ('Not able to find an extractor for this type of set: %s' % p2files[0], RarslaveLogger.MessageType.Fatal) # No-heads here, but it's better than failing completely extractor = RarslaveExtractor (TYPE_NOEXTRACT) return extractor def is_oldrar (files): for f in files: if has_extension (f, '.r00'): return True return False def is_newrar (files): for f in files: if has_extension (f, '.part01.rar'): return True return False def is_zip (files): for f in files: if has_extension (f, '.zip'): return True return False def is_noextract (files): # Type that needs no extraction. # TODO: Add others ???
for f in files: if has_extension (f, '.001'): return True return False def find_deleteable_files (files): # Deleteable types regex should come from the config dfiles = [] DELETE_REGEX = config.get_value ('regular expressions', 'delete_regex') dregex = re.compile (DELETE_REGEX, re.IGNORECASE) return [f for f in files if dregex.match (f)] def printlist (li): for f in li: print f class PAR2Set (object): dir = None file = None likely_files = [] def __init__ (self, dir, file): assert os.path.isdir (dir) assert os.path.isfile (os.path.join (dir, file)) self.dir = dir self.file = file basename = get_basename (file) self.likely_files = find_likely_files (basename, dir) def __list_eq (self, l1, l2): if len(l1) != len(l2): return False for e in l1: if e not in l2: return False return True def __eq__ (self, rhs): return self.__list_eq (self.likely_files, rhs.likely_files) def run_all (self): par2files = find_par2_files (self.likely_files) par2head = par2files[0] join = is_noextract (self.likely_files) # Repair Stage repairer = RarslaveRepairer (self.dir, par2head, join) ret = repairer.checkAndRepair () if ret != SUCCESS: logger.addMessage ('Repair stage failed for: %s' % par2head, RarslaveLogger.MessageType.Fatal) return -ECHECK # Extraction Stage EXTRACT_DIR = options.extract_dir extractor = find_extraction_heads (self.dir, self.likely_files) ret = extractor.extract (EXTRACT_DIR) if ret != SUCCESS: logger.addMessage ('Extraction stage failed for: %s' % par2head, RarslaveLogger.MessageType.Fatal) return -EEXTRACT # Deletion Stage DELETE_INTERACTIVE = options.interactive deleteable_files = find_deleteable_files (self.likely_files) ret = delete_list (deleteable_files, DELETE_INTERACTIVE) if ret != SUCCESS: logger.addMessage ('Deletion stage failed for: %s' % par2head, RarslaveLogger.MessageType.Fatal) return -EDELETE logger.addMessage ('Successfully completed: %s' % par2head) return SUCCESS def delete_list (files, interactive=False): # Delete a list of files done = False 
	valid_y = ['Y', 'YES']
	valid_n = ['N', 'NO']

	if interactive:
		while not done:
			print 'Do you want to delete the following?:'
			s = raw_input ('Delete [y/N]: ').upper()

			if s in valid_y + valid_n:
				done = True

			if s in valid_n:
				return SUCCESS

	for f in files:
		# FIXME: re-enable this in production
		# os.remove (f)
		print 'rm \"%s\"' % f

	return SUCCESS

def generate_all_parsets (dir):
	# Generate all parsets in the given directory.

	assert os.path.isdir (dir) # Directory MUST be valid

	parsets = []
	p2files = find_all_par2_files (dir)

	for f in p2files:
		p = PAR2Set (dir, f)

		if p not in parsets:
			parsets.append (p)

	return parsets

def check_required_progs():
	"""Check if the required programs are installed"""
	shell_not_found = 32512
	needed = []

	if run_command ('par2repair --help > /dev/null 2>&1') == shell_not_found:
		needed.append ('par2repair')

	if run_command ('unrar --help > /dev/null 2>&1') == shell_not_found:
		needed.append ('unrar')

	if run_command ('unzip --help > /dev/null 2>&1') == shell_not_found:
		needed.append ('unzip')

	if needed:
		for n in needed:
			print 'Needed program "%s" not found in $PATH' % (n, )
		sys.exit(1)

def run_options (options):
	# Fix directories
	options.work_dir = full_abspath (options.work_dir)

	# Make sure that the directory is valid
	if not os.path.isdir (options.work_dir):
		sys.stderr.write ('\"%s\" is not a valid directory. Use the \"-d\"\n' % options.work_dir)
		sys.stderr.write ('option to override the working directory temporarily, or edit the\n')
		sys.stderr.write ('configuration file to override the working directory permanently.\n')
		sys.exit (1)

	if options.extract_dir != None:
		options.extract_dir = full_abspath (options.extract_dir)

	if options.version:
		print PROGRAM + ' - ' + VERSION
		print
		print 'Copyright (c) 2005,2006 Ira W. Snyder ([email protected])'
		print
		print 'This program comes with ABSOLUTELY NO WARRANTY.'
		print 'This is free software, and you are welcome to redistribute it'
		print 'under certain conditions. See the file COPYING for details.'
		sys.exit (0)

	if options.check_progs:
		check_required_progs ()

	if options.write_def_config:
		config.write_config (default=True)

	if options.write_config:
		config.write_config ()

def find_loglevel (options):
	loglevel = options.verbose - options.quiet

	if loglevel < RarslaveLogger.MessageType.Fatal:
		loglevel = RarslaveLogger.MessageType.Fatal

	if loglevel > RarslaveLogger.MessageType.Debug:
		loglevel = RarslaveLogger.MessageType.Debug

	return loglevel

def printMessageTable (loglevel):
	if logger.hasFatalMessages ():
		print '\nFatal Messages\n' + '=' * 80
		logger.printLoglevel (RarslaveLogger.MessageType.Fatal)

	if loglevel == RarslaveLogger.MessageType.Fatal:
		return

	if logger.hasNormalMessages ():
		print '\nNormal Messages\n' + '=' * 80
		logger.printLoglevel (RarslaveLogger.MessageType.Normal)

	if loglevel == RarslaveLogger.MessageType.Normal:
		return

	if logger.hasVerboseMessages ():
		print '\nVerbose Messages\n' + '=' * 80
		logger.printLoglevel (RarslaveLogger.MessageType.Verbose)

	if loglevel == RarslaveLogger.MessageType.Verbose:
		return

	if logger.hasDebugMessages ():
		print '\nDebug Messages\n' + '=' * 80
		logger.printLoglevel (RarslaveLogger.MessageType.Debug)

	return

def main ():
	# Build the OptionParser
	parser = optparse.OptionParser()
	parser.add_option('-n', '--not-recursive',
		action='store_false', dest='recursive',
		default=config.get_value('options', 'recursive'),
		help="Don't run recursively")
	parser.add_option('-d', '--work-dir',
		dest='work_dir', type='string',
		default=config.get_value('directories', 'working_directory'),
		help="Start running at DIR", metavar='DIR')
	parser.add_option('-e', '--extract-dir',
		dest='extract_dir', type='string',
		default=config.get_value('directories', 'extract_directory'),
		help="Extract to DIR", metavar='DIR')
	parser.add_option('-p', '--check-required-programs',
		action='store_true', dest='check_progs',
		default=False, help="Check for required programs")
	parser.add_option('-f', '--write-default-config',
		action='store_true', dest='write_def_config',
		default=False, help="Write out a new default config")
	parser.add_option('-c', '--write-new-config',
		action='store_true', dest='write_config',
		default=False, help="Write out the current config")
	parser.add_option('-i', '--interactive',
		dest='interactive', action='store_true',
		default=config.get_value('options', 'interactive'),
		help="Confirm before removing files")
	parser.add_option('-q', '--quiet',
		dest='quiet', action='count',
		default=0, help="Output fatal messages only")
	parser.add_option('-v', '--verbose',
		dest='verbose', action='count',
		default=0, help="Output extra information")
	parser.add_option('-V', '--version',
		dest='version', action='store_true',
		default=False, help="Output version information")
	parser.version = VERSION

	# Parse the given options
	global options
	(options, args) = parser.parse_args()

	# Run any special actions that are needed on these options
	run_options (options)

	# Find the loglevel using the options given
	loglevel = find_loglevel (options)

	# Run recursively
	if options.recursive:
		for (dir, subdirs, files) in os.walk (options.work_dir):
			parsets = generate_all_parsets (dir)
			for p in parsets:
				p.run_all ()

	# Non-recursive
	else:
		parsets = generate_all_parsets (options.work_dir)
		for p in parsets:
			p.run_all ()

	# Print the results
	printMessageTable (loglevel)

	# Done!
	return 0

if __name__ == '__main__':
	main ()
Ignoring upper case and lower case in Java

I want to know how to make whatever the user inputs ignore case in my method:

public static void findPatient() {
    if (myPatientList.getNumPatients() == 0) {
        System.out.println("No patient information is stored.");
    } else {
        System.out.print("Enter part of the patient name: ");
        String name = sc.next();
        sc.nextLine();
        System.out.print(myPatientList.showPatients(name));
    }
}

Answer

You have to use the String method .toLowerCase() or .toUpperCase() on both the input and the string you are trying to match it with. Example:

public static void findPatient() {
    System.out.print("Enter part of the patient name: ");
    String name = sc.nextLine();
    System.out.print(myPatientList.showPatients(name));
}

// the other class
ArrayList<String> patientList;

public String showPatients(String name) {
    // Collect every name that contains the search text, ignoring case,
    // so findPatient() can print the result.
    StringBuilder matches = new StringBuilder();
    for (String matchingname : patientList) {
        if (matchingname.toLowerCase().contains(name.toLowerCase())) {
            matches.append(matchingname).append("\n");
        }
    }
    return matches.toString();
}

Source: stackoverflow
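As a quick illustration of the technique in the answer above (lowercasing both sides before calling contains), here is a small self-contained sketch; the class and method names are just for this example, not from the question's code:

```java
import java.util.List;

public class CaseInsensitiveSearch {
    // Case-insensitive substring test: normalize both strings to
    // lower case before comparing, exactly as the answer suggests.
    static boolean containsIgnoreCase(String haystack, String needle) {
        return haystack.toLowerCase().contains(needle.toLowerCase());
    }

    public static void main(String[] args) {
        List<String> patients = List.of("Alice Johnson", "Bob Smith");
        String query = "JOHN";
        for (String p : patients) {
            if (containsIgnoreCase(p, query)) {
                System.out.println(p); // prints "Alice Johnson"
            }
        }
    }
}
```

Note that for whole-string comparison (rather than substring search) `String.equalsIgnoreCase` does the same job without creating intermediate lowercase copies.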
How to Split Screen on a Chromebook (5 Methods)

Chrome OS received its 100th update earlier this year, and Google didn't miss the opportunity to release fresh new features for its desktop OS. Among them, a new Chrome OS launcher and a built-in screen recording tool for Chrome OS were introduced. Fast forward now, and Google has added a new Partial Split feature that works similarly to Snap Layouts on Windows 11. You can quickly split the screen on your Chromebook and work with two windows side by side with just a click. So in this article, we have explained 5 different ways to split screen on a Chromebook to help you multitask effortlessly.

Split Screen on a Chromebook (2022)

In this tutorial, we have included five different ways to snap windows and split the screen in different positions. One of them requires you to enable the new Partial Split Chrome flag, along with the "Always on top" functionality. On that note, let's begin.

1. Split Screen on a Chromebook with the Maximize Button

Apart from maximizing and restoring the window size, the Maximize button has other utilities too. You can use it to split the screen on your Chromebook. Here is how you can do it.

1. On an active window, click and hold the "Maximize" button on the title bar. An arrow indicator will then appear on the left and right sides of the maximize button. Simply drag to the left side, and the window will snap to the left side.

2. Repeat the same process for another window. Click and hold the "Maximize" button and drag it to the right. And voila, you have successfully split the screen on your Chromebook. Now, you can see two windows at the same time.

2. Split Screen on a Chromebook Using Keyboard Shortcuts

Much like Windows 11 keyboard shortcuts, you can easily split the screen on a Chromebook using shortcuts as well. Here's how it works:

1.
When you are on an active window, simply press "Shift + [" to snap the window to the left side.

2. To snap another window to the right side of your Chromebook screen, click on that window to make it active and press "Shift + ]". This way, you can quickly split the screen on your Chromebook.

3. To re-adjust the window size in split-screen mode, hover the mouse cursor on the center where the two windows meet. A slider will appear right there. Now, hold and move the slider to whichever side you want to resize the windows automatically.

3. Split Screen on a Chromebook Using Touchpad Gestures

Apart from keyboard shortcuts, you can use touchpad gestures to split the screen on your Chromebook. Yeah, even Chrome OS supports a handful of intuitive touch gestures, and here's how they work:

1. If you have multiple windows open, do a three-finger swipe up on the touchpad to open the overview menu.

2. Now, click and hold on one of the windows and drag it to the left or right side, as per your preference. The window will snap to that position instantly.

3. On the other side, click on your choice of window, and it will split the screen on your Chromebook. That's effortless, right?

4. How to Split Screen on a Touchscreen Chromebook

If you are using a Chrome OS tablet or a touchscreen Chromebook in tent or tablet mode, you can split the screen using touch gestures. It works similarly to touchpad gestures, but there are some more functionalities. Here is how to use it.

1. Similar to Android phone gestures, do a one-finger swipe up and hold to open the overview menu. Here, press and hold the window of your choice and drag it to either left or right.
2. After that, you can tap on the second window on the other side, and the screen will be split into two windows.

3. You can also swipe up from the Shelf (Taskbar) to open new apps in split-screen mode on your Chromebook.

4. Apart from that, you can quickly change windows on either side. Simply do a one-finger swipe up on either side and select a different window.

5. Snap Windows on Chromebook Using Windows 11-Like Snap Layouts

One of the best features of Windows 11 is Snap Layouts, which lets you quickly snap windows to different positions on your screen. Taking inspiration from that, Google has also been working on a similar window-snapping feature called "Partial Split". The feature is already live on all Chrome OS channels – Stable, Beta, and Dev (Chrome OS 105 or above). That said, it's still hidden behind some Chrome flags, so you need to enable Partial Split on your Chromebook manually. Here is how to go about it.

1. Make sure your Chromebook is updated to Chrome OS 105 or above. After that, open your Chrome browser and paste the below address. Now, enable the feature from the drop-down menu.

chrome://flags/#partial-split

2. Next, paste the below address and enable this feature as well. It will turn on the "Stay on top" feature within the Partial Split menu. Now, click on "Restart" to apply the changes.

chrome://flags/#cros-labs-float-window

3. Once you are logged in, simply hover the mouse cursor on the "Maximize" button, and the Partial Split menu will appear almost instantly. You can split the screen in half, partial, or full-screen mode on your Chromebook. There is also the "float on top" feature that lets you pin a window on top of everything.
4. This is how the Partial Split feature works on the Chromebook. Since the feature is still locked behind a flag, we expect Google to add more split views, keyboard shortcuts, and features before a wider rollout.

Easily Multi-task With Multiple Windows on Chromebooks

So these are the five methods you can utilize to handle multiple windows at once on your Chromebook. Partial Split is another great multi-tasking split-screen feature coming to all Chromebooks. We await the general rollout of the feature to wider users. Meanwhile, you can use keyboard shortcuts and touchpad gestures to snap your windows. To learn more such Chromebook tips and tricks, you can head to our in-depth article. And if you are wondering how to screenshot on a Chromebook, we have a guide in place for that as well. Finally, if you have any questions, let us know in the comment section below.
• Are Hard Drives Destined To Be Extinct?
With the onset of ever-increasing storage space on flash drives is it ok to ask the experts here if the hard drive technology will soon be obsolete only to be overtaken by flash drives and 3D memory chips?
Minipod2000205 pointsBadges:

• Allocate Unallocated space on a flash drive…
I recently downloaded a Dell diagnostic tool to my flash drive. It converted it from an 8GB NTFS drive to a 2GB FAT drive with 6GB of unallocated space. I thought this was fine at the time as I planned to simply convert it back to NTFS and reallocate the memory. However, I'm unable to recover the...
Lordhowe175 pointsBadges:

• Asus transfer pictures to flash drive
How do you transfer pictures to a flash drive?
JoleneSchonchin5 pointsBadges:

• How do I download pictures to my flash drive?
How do I download pictures to my flash drive?
jeb13315 pointsBadges:

• how to run c++ code off of flash drive
Hi, I want to know if its possible to run my c++ programs that i write when i connect the flash drive to my PC. Just run them as portable apps and that they would function properly. Some of the programs are really simple like a calculator, etc. I also want a very simple UI to works. Thanks!
ddatvo5 pointsBadges:

• Flash Drive syntax error
I have a flash drive with very precious information on it, but today when I went to edit a document on it evey file I have on there comes up in jiberish computer language. when I try to open a file to get to some of my documents it says this file has a "syntax error" I've been looking around on the...
Madpawn220 pointsBadges:

• Undo ISO on flash drive
I created an ISO image on a flash drive. Now that I'm done with it I would like to delete the ISO. the drive doesn't show up in my computer in windows 7 or XP. I can see it in the device manager but can not format it from there. Any ideas?
GrapeApe15 pointsBadges:

• U3-type behaviour on non-sandisk flash drives
Hi, and thanks for taking a minute to read my post. I wonder if members can increase my undrstanding of this issue: U3 creates a virtual CD drive and a "normal" hard-drive-like storage location when the flash drive is inserted. The technology was invented by Sandisk and most of the removall tools...
Mballard10 pointsBadges:

• Windows flash drive formatting issue
I tried to format a flash drive in NTFS, but it seems to take forever to complete formatting. Is this normal? I also tried formatting the same flash drive in exFAT, which took much less time to complete. However, I can't use the flash drive with my laptop because it doesn't have exFAT as a file...
WindowsServerATE335 pointsBadges:

• SANDISK data life span
Is it possible to get data corruption on a sandisk compact flash if it has been stored without power for 4 or 5 years.
A23857410 pointsBadges:

• Removable Data Storage Possible Virus?
I have been storing data on a removable device (flash disk) for a bit of time. Currently once I plug in the removable device to any of our PC's the VDU is showing only a number of data while some data are not viewable, but now on the plug & play for the device it is registering that the files...
Jumwa5 pointsBadges:

• Converting Lotus Approach files to Excel 2007
I received a Lotus Approach 97 .APR file extension file on a flash drive. I need to import it into Excel (Microsoft Office 2007). I am a beginner with Excel and I need to know the steps required to import the Lotus Approach 97 file on the Flash Drive to Excel. I would appreciate any information or...
MelanieYarbrough6,345 pointsBadges:

• Password Protect Thumb Drive
Is there software to password protect CURRENT thumb drives you have or do you have to buy the thumb drive with the software on it?
Lob465 pointsBadges:

• Flash drive usage policy
Has anyone created a flash drive usage policy?
Vwalters355 pointsBadges:

• terminal server 2008
I have a terminal server 2008 that I am using hp thin clients on. I am tring to get the thin clients to where you can use flash drives on them. Currently when you put a flash drive in you can see it in My Computer, but when you try to access it I get the error. "\\tsclient\files\hard disk2 is not...
Tkoppa10 pointsBadges:
Online Class For 10th Standard Students (CBSE) (English Medium)

Circles (Chapter 10)

Circle and line in a plane

For a circle and a line in a plane, there are three possibilities:

i) they can be non-intersecting
ii) they can have a single common point: in this case, the line touches the circle.
iii) they can have two common points: in this case, the line cuts the circle.

(i) Non-intersecting (ii) Touching (iii) Intersecting

Tangent

A tangent to a circle is a line which touches the circle at exactly one point. For every point on the circle, there is a unique tangent passing through it.

Secant

A secant to a circle is a line which has two points in common with the circle. It cuts the circle at two points, forming a chord of the circle.

Tangent as a special case of Secant

The tangent to a circle can be seen as a special case of the secant, where the two endpoints of its corresponding chord coincide.

Two parallel tangents at most for a given secant

For every given secant of a circle, there are exactly two tangents which are parallel to it and touch the circle at two diametrically opposite points.

Parallel tangents

Theorems

Tangent perpendicular to the radius at the point of contact

Theorem: The tangent to the circle at any point is perpendicular to the radius of the circle that passes through the point of contact.

Tangent and radius

Here, O is the centre and OP ⊥ XY.

The number of tangents drawn from a given point

i) If the point is in the interior region of the circle, any line through that point will be a secant. So, no tangent can be drawn to a circle which passes through a point that lies inside it.
AB is a secant drawn through the point S

ii) When the point of tangency lies on the circle, there is exactly one tangent to the circle that passes through it.

A tangent passing through a point lying on the circle

iii) When the point lies outside the circle, there are exactly two tangents to the circle through it.

Tangents to a circle from an external point

Length of a tangent

The length of the tangent from a point (say P) to the circle is defined as the segment of the tangent from the external point P to the point of tangency I with the circle. In this case, PI is the tangent length.

Lengths of tangents drawn from an external point

Theorem: Two tangents drawn from an external point to a circle are of equal length.

Tangents to a circle from an external point
__label__pos
0.695224
3.3. Liste, liste e ancora liste Abbiamo fatto pratica con le variabili e le funzioni, ora si entra nella palude fangosa delle liste di Scheme. 3.3.1. Definire una lista Prima di approfondire le liste è necessario comprendere la differenza tra valori atomici e liste. Abbiamo già visto i valori atomici quando abbiamo inizializzato le variabili nella sessione precedente. Un valore atomico è un valore singolo. Ad esempio possiamo assegnare alla variabile «x» il singolo valore 8 nell'istruzione seguente: (let* ( (x 8) ) x) (abbiamo aggiunto l'espressione x alla fine per stampare il valore assegnato a x; normalmente non si dovrebbe averne bisogno. Si noti come let* operi come una funzione: il valore dell'ultima istruzione è il valore restituito) Una variabile può anche riferirsi ad una lista di valori piuttosto che a un singolo valore. Per assegnare alla variabile x la lista di valori 1, 3, 5 si digiti: (let* ( (x '(1 3 5))) x) Si provi a digitare entrambe le istruzioni nella console Script-fu e si osservino le risposte. Quando si digita la prima istruzione si ottiene semplicemente il risultato: 8 Mentre se si digita l'altra istruzione si ottiene il seguente risultato: (1 3 5) Quando si ottiene il valore 8 l'interprete sta informando che x contiene il valore atomico 8 mentre quando si ottiene (1 3 5) sta informando che x contiene non un valore singolo bensì una lista di valori. Si noti che non ci sono virgole nella dichiarazione o assegnamento della lista, tantomeno nel risultato stampato. La sintassi per definire una lista è: '(a b c) dove a, b, e c sono letterali. Si usa l'apostrofo (') per indicare che ciò che segue nelle parentesi è una lista di valori letterali piuttosto che una funzione o un'espressione. 
Una lista vuota può essere definita come segue: '() o semplicemente: () Le liste possono contenere valori atomici così come altre liste: (let* ( (x '("GIMP" (1 2 3) ("è" ("grande" () ) ) ) ) ) x ) Si noti che dopo il primo apostrofo non vi è più bisogno di utilizzare un apostrofo per definire le liste interne. Si provi a copiare l'istruzione nella console Script-Fu e ad eseguirla per vedere cosa restituisce. Si noti come il risultato restituito non è una lista di valori atomici singoli ma piuttosto è una lista di letterali ("GIMP"), la lista (1 2 3), ecc. 3.3.2. Come concepire le liste È utile pensare alle liste come composte di una «testa» e una «coda». La testa è l'elemento iniziale della lista, la coda è la parte restante. Si capirà l'importanza di questo concetto quando si parlerà di come si compongono le liste e come accedere agli elementi di una lista. 3.3.3. Creazione di liste attraverso la concatenazione (la funzione Cons) Una delle funzioni più comuni che si incontrano è la funzione cons. Prende un valore e lo mette in testa al suo secondo argomento, una lista. Nel capitolo precedente si è suggerito di pensare una lista come composta da un elemento (la testa) è la parte restante (la coda), questo è esattamente il comportamento della funzione cons: aggiunge un elemento in testa alla lista. Si potrebbe creare una lista come segue: (cons 1 '(2 3 4) ) Il risultato è la lista (1 2 3 4). Si può anche creare una lista con un solo elemento: (cons 1 () ) Si possono utilizzare variabili dichiarate in precedenza al posto di qualunque letterale come ci si aspetta. 3.3.4. Definizione di una lista usando la funzione list Per definire una lista composta da letterali oppure da variabili precedentemente dichiarate si utilizza la funzione list: (list 5 4 3 a b c) Ciò costruirà e resituirà una lista contenente i valori mantenuti dalle variabili a, b e c. Ad esempio: (let* ( (a 1) (b 2) (c 3) ) (list 5 4 3 a b c) ) Questo codice crea la lista (5 4 3 1 2 3). 3.3.5. 
Accessing the Values in a List

To access the values in a list, use the functions car and cdr, which return respectively the first element of the list and the remaining portion. These functions break the list down into the head::tail construct mentioned earlier.

3.3.6. The car Function

car returns the first element of the list (the head of the list). The list needs to be non-null (not empty). Thus, the following returns the first element of the list:

(car '("first" 2 "third"))

which is:

"first"

3.3.7. The cdr Function

cdr returns the remainder of the list after the first element (the tail of the list). If there is only one element in the list, it returns an empty list.

(cdr '("first" 2 "third"))

returns:

(2 "third")

whereas the following statement:

(cdr '("one and only"))

returns:

()

3.3.8. Accessing Other Elements of a List

OK, now we can get the first element of a list, as well as the rest of it, but how do we access the other elements of a list? There exist several "convenience" functions to access, for example, the head of the head of the tail of a list (caadr) or the tail of the tail of a list (cddr), etc. If you can do that, you are on your way. The basic naming convention is easy: the a's and d's stand for heads and tails, so

(car (cdr (car x) ) )

could be written as:

(cadar x)

To get some practice with list-accessing functions, try typing in the following (on a single line if you are using the console); use different variations of car and cdr to access the different elements of the list:

(let* (
        (x '( (1 2 (3 4 5) 6) 7 8 (9 10) ) )
      )
      ; place your car/cdr code here
)

Try to access the number 3 in the list using only two function calls. If you can do that, you are well on your way to becoming a Script-Fu master!

[Note]
Note
In Scheme, a semicolon (;) marks the beginning of a comment.
It, and everything that follows it on the same line, are ignored by the script interpreter, so you can use this to add comments to refresh your memory when you look at the script later.
question about layout

Peter Mueller ([email protected])
Tue, 4 Jan 94 15:53:13 +0100

Hello,

I've recognized the following problem, when specifying <code><i>filename</i>.rec</code> within a sentence. Under some circumstances the result looks like (in Mosaic)

bla bla bla bla bla filename
.rec bla bla bla

although there is *no* space between the specification. Is this word wrapping really necessary? I choose the specification above to indicate, that the part 'filename' is to be replaced by an actual value. So it *must* appear as (in the case of Mosaic):

filename.rec
^^^^ code-style
^^^^^^^^ ---- italics (or emphasized, whatever)

Is there a possibility to keep words un-wrapped?

Hope you all have reached 1994 in a well manner,
Thanks for your help,
Peter Mueller
If you have been using WordPress for a long time, it is possible that you have many spam comments, revisions, transient caches, etc. in your WP database. In this post, I list the most useful SQL queries you can use to clean up your WordPress database and reduce its size by around 85%!

Tools Required

My favorite tool for database cleanup is phpMyAdmin. If your hosting provider has cPanel, you can access your database using phpMyAdmin. If you don't have cPanel, you can use a WordPress plugin to run SQL queries on your WordPress database.

Backup Your Database First

If you are going to run a manual query on your WordPress database, I highly recommend you back up your database first. Even when you think a query is harmless, a small mistake in the query can cause unrecoverable damage to your tables.

1. Replace Old Links on Posts

UPDATE `wp_posts` SET `post_content` = REPLACE( `post_content`, "http://shailan.com", "https://metinsaylan.com" ) WHERE `post_type`="post";

This snippet replaces all occurrences of a link in your post contents. I have added the post_type filter, so it will replace links only in posts.

2. Replace or Remove Old Shortcodes

UPDATE `wp_posts` SET `post_content` = REPLACE( `post_content`, "[html]", '<pre class="html">' ) WHERE `post_type`="post";
UPDATE `wp_posts` SET `post_content` = REPLACE( `post_content`, "[/html]", "</pre>" ) WHERE `post_type`="post";

This snippet replaces removed shortcodes with HTML tags. If you just want to remove the shortcode, you can use the following sample:

UPDATE `wp_posts` SET `post_content` = REPLACE( `post_content`, "[adsense]", '' ) WHERE `post_type`="post";

3. Delete All Post Revisions

DELETE FROM `wp_posts` WHERE `post_type`="revision";

This query removes all post revisions from the database. Please note that this code doesn't remove any post meta or term relationships. See tip 11 for removing orphaned post meta.

4.
Close Comments on All Posts

UPDATE `wp_posts` SET `comment_status` = 'closed' WHERE `post_type`="post";
UPDATE `wp_posts` SET `ping_status` = 'closed' WHERE `post_type`="post";

This query closes comments and pings for all posts on your WordPress blog. If you want to enable comments, you just need to change 'closed' to 'open'.

5. Delete All Trashed Posts

DELETE FROM `wp_posts` WHERE `post_status`="trash";

This SQL query removes all trashed posts from your database. It can save you a good amount of memory if you have many posts.

6. Delete All oEmbed Cache

DELETE FROM `wp_posts` WHERE `post_type`="oembed_cache";

I recently disabled oEmbed on my WordPress blog. So, if you are not using oEmbeds, this is also unnecessary data in your database. This query will remove all oEmbed cache from your database.

7. Delete Old Contact Form Shortcodes

UPDATE `wp_posts` SET `post_content` = REPLACE( `post_content`, '[contact-form-7 id="4339"]', '' ) WHERE `post_type`="post";

This query deletes all contact forms with the given id from your posts.

8. Delete All Pingbacks

DELETE FROM `wp_comments` WHERE `comment_type` = 'pingback';

This snippet deletes all pingbacks from the comments table.

9. Delete All Spam Comments

DELETE FROM `wp_comments` WHERE `comment_approved` = 'spam';

This query removes all spam comments from the comments table.

10. Delete All Transients in the Options Table

DELETE FROM `wp_options` WHERE `option_name` LIKE '%_transient%';

This query deletes all transient cache from your options table. It can save you a huge amount of memory depending on your WordPress database age.

11. Delete All Orphaned Post Meta

DELETE m FROM `wp_postmeta` AS m LEFT JOIN `wp_posts` AS p ON m.`post_id` = p.`ID` WHERE p.`ID` IS NULL;

This query deletes all post meta that is not linked to any post. It needs to be run if you manually deleted post revisions or posts using SQL.

12. Get a Full List of Meta Keys

SELECT DISTINCT meta_key FROM `wp_postmeta`;

This is not a database cleanup query. But once you get a list of the meta keys that are present, you can use those keys to delete unused meta keys from your database. See the next tip for an example.

13. Delete Removed Plugin Meta Keys

DELETE FROM `wp_postmeta` WHERE `meta_key` LIKE '%aktt%';

This query deletes all meta keys containing a keyword. If a word is common to a plugin's meta keys, you can use that keyword to remove all meta keys used by that plugin.

14.
Get a Full List Of Meta Keys SELECT DISTINCT meta_key FROM `wp_postmeta` This is not a database cleanup query. But, once you get a list of meta keys that are added, you can use those keys to delete unused meta keys on your database. See next query tip for an example. 13. Delete Removed Plugin Meta Keys DELETE FROM `wp_postmeta` WHERE `meta_key` LIKE '%aktt%' This query deletes all meta keys including a keyword. If you have a word common to a plugin meta, you can use that keyword to remove all meta keys used by that plugin. 14. Update Post Author ID on All Posts UPDATE `wp_posts` SET `post_author` = '1' WHERE `post_type`='post' AND `post_status`='publish' This query doesn’t remove any rows, but updates author for all posts to a specified author ID. If you had guest authors in the past, you can use this query to remove all old authors. BONUS: Trigger to Empty Trashed Posts on Post Publish CREATE TRIGGER `EMPTY_TRASH_ON_PUBLISH` AFTER INSERT ON `wp_posts` FOR EACH ROW DELETE FROM `wp_posts` WHERE `post_status`="trash" This query creates a trigger on your database to empty your deleted posts on post publish.   BONUS: One Query For All I merged all cleanup queries above in one box, so you can run this directly on SQL field: DELETE FROM `wp_posts` WHERE `post_type`="revision"; UPDATE `wp_posts` SET `comment_status` = 'closed' WHERE `post_type`="post"; UPDATE `wp_posts` SET `ping_status` = 'closed' WHERE `post_type`="post"; DELETE FROM `wp_posts` WHERE `post_status`="trash"; DELETE FROM `wp_posts` WHERE `post_type`="oembed_cache"; DELETE FROM `wp_comments` WHERE `comment_type` = 'pingback'; DELETE FROM `wp_comments` WHERE `comment_approved` = 'spam'; DELETE FROM `wp_options` WHERE `option_name` LIKE '%_transient%'; DELETE m FROM `wp_postmeta` AS m LEFT JOIN `wp_posts` AS p ON m.`post_id` = p.`ID` WHERE p.`ID` IS NULL; I hope you found those queries useful. Follow me on twitter for more tips. Enjoy! More Like This
how to use javafx textfield maxlength

Question

How do I use this code in my main class of JavaFX, so that I can set a maximum length of characters in a JavaFX TextField?

class LimitedTextField extends TextField {

    private final int limit;

    public LimitedTextField(int limit) {
        this.limit = limit;
    }

    @Override
    public void replaceText(int start, int end, String text) {
        super.replaceText(start, end, text);
        verify();
    }

    @Override
    public void replaceSelection(String text) {
        super.replaceSelection(text);
        verify();
    }

    private void verify() {
        if (getText().length() > limit) {
            setText(getText().substring(0, limit));
        }
    }
};

My JavaFX main class is given below:

public class TextFiled extends Application {

    @Override
    public void start(Stage primaryStage) {
        final TextField t_fname = new TextField();
        StackPane root = new StackPane();
        root.getChildren().add(t_fname);
        Scene scene = new Scene(root, 300, 250);
        primaryStage.setTitle("Hello World!");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

10/19/2015 3:16:27 PM

Accepted Answer

This is my solution:

public static void addTextLimiter(final TextField tf, final int maxLength) {
    tf.textProperty().addListener(new ChangeListener<String>() {
        @Override
        public void changed(final ObservableValue<? extends String> ov, final String oldValue, final String newValue) {
            if (tf.getText().length() > maxLength) {
                String s = tf.getText().substring(0, maxLength);
                tf.setText(s);
            }
        }
    });
}

See JavaFX 2.2 TextField maxlength and Prefer composition over inheritance?
While the OP's technical problem is correctly answered (though not accepted), the solution to the base issue - how to restrict/validate input in a TextField, which is answered in the other posts - has changed over time. With Java 8u40 we got a new class, TextFormatter: one of its main responsibilities is to provide a hook into any change of text input before it gets committed to the content.

To fulfill the requirement of limiting the input to a certain length (and - just for fun - show a context menu with an error message) we would:

• implement a UnaryOperator that analyses all changes
• reject those which would result in a longer text (and show the message)
• accept all other changes
• instantiate a TextFormatter with the operator
• configure the TextField with the TextFormatter

A code snippet:

int len = 20;
TextField field = new TextField("max chars: " + len);

// here we reject any change which exceeds the length
UnaryOperator<Change> rejectChange = c -> {
    // check if the change might affect the validating predicate
    if (c.isContentChange()) {
        // check if change is valid
        if (c.getControlNewText().length() > len) {
            // invalid change
            // sugar: show a context menu with error message
            final ContextMenu menu = new ContextMenu();
            menu.getItems().add(new MenuItem("This field takes\n" + len + " characters only."));
            menu.show(c.getControl(), Side.BOTTOM, 0, 0);
            // return null to reject the change
            return null;
        }
    }
    // valid change: accept the change by returning it
    return c;
};
field.setTextFormatter(new TextFormatter(rejectChange));

Aside: Modifying the state of a sender while it is notifying its listeners about a change of that state is generally a bad idea and might easily lead to unexpected and hard-to-track side effects (I suspect - though don't know - that the undo bug mentioned in other answers is such a side effect).

Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow
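Both answers ultimately apply the same truncation rule. Pulled out of JavaFX into a plain function, it is easy to check in isolation; this helper is illustrative only and is not part of the JavaFX API:

```java
public class MaxLengthClamp {
    // Truncate text to at most maxLength characters, as both the
    // listener-based answer and the verify() method in the question
    // do via substring().
    static String clamp(String text, int maxLength) {
        if (text.length() > maxLength) {
            return text.substring(0, maxLength);
        }
        return text;
    }

    public static void main(String[] args) {
        System.out.println(clamp("hello world", 5)); // prints "hello"
        System.out.println(clamp("hi", 5));          // prints "hi"
    }
}
```

The TextFormatter approach differs in one important way: it rejects the oversized change before it is committed, so the control's text never exceeds the limit, whereas the listener and the overridden setText both truncate after the fact.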
Correct Code, Still Wrong? Help!

Hey guys! I just started coding not too long ago and I've run into my first real issue. (It's kinda sad because this challenge is relatively easy.) I have typed in the code exactly as the challenge instructed and am still getting the message:

"Your h1 element with the text 'I am red!' should have the color red. You should use rgb for the color red. Your h1 element with the text 'I am blue!' should have the color blue. You should use rgb for the color blue."

The texts are the colors they're supposed to be. I messaged the FCC support team and they said my code was correct! They also told me to try using Google Chrome. I switched browsers and am getting this same message on Chrome! Please help, and let me know what could be wrong or if you've had a similar issue. Thanks!! Here is my code:

    <style>
      .red-text {
        color: rgb(225, 0, 0);
      }
      .orchid-text {
        color: rgb(218, 112, 214);
      }
      .sienna-text {
        color: rgb(160, 82, 45);
      }
      .blue-text {
        color: rgb(0, 0, 225);
      }
    </style>

    <h1 class="red-text">I am red!</h1>
    <h1 class="orchid-text">I am orchid!</h1>
    <h1 class="sienna-text">I am sienna!</h1>
    <h1 class="blue-text">I am blue!</h1>

Your browser information: User Agent is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36.

Challenge: Use RGB to Mix Colors
Link to the challenge:

Reply: You have some of the numbers wrong in your rgb values.

Reply: Just in case it is not clear: the value 225 is not the max numeric value rgb() takes, it is 255. So full green, for example, is rgb(0, 255, 0).

"I'm still confused because those are the exact rgb values that challenge provides...?" Thanks!

"ohh! I see! Thank you, I guess the support team didn't catch it either lol"

"I see now! Thank you!"
user3398505 - 3 months ago

Python Question: Return a tuple of objects

In the Person object, there is already support for an inventory, and when the Person object takes a Weapon object or Food object, the object goes into the inventory. For our Tribute object, we want a way to retrieve the Weapon and Food objects from the inventory. Create a new method in the Tribute class, get_weapons(), which would return a tuple of the Weapon objects that the Tribute currently has in his inventory.

    class Tribute(Person):
        def get_weapon(self):
            for item in self.get_inventory():
                if isinstance(item, self.RangedWeapon):
                    return tuple(item)
                else:
                    pass

    cc = Tribute("Chee Chin", 100)
    chicken = Food("chicken", 5)
    aloe_vera = Medicine("aloe vera", 2, 5)
    bow = RangedWeapon("bow", 4, 10)
    sword = Weapon("sword", 2, 5)

    Base = Place("base")
    Base.add_object(cc)
    Base.add_object(chicken)
    Base.add_object(aloe_vera)
    Base.add_object(bow)
    Base.add_object(sword)

    cc.take(bow)        # Chee Chin took bow
    cc.take(sword)      # Chee Chin took sword
    cc.take(chicken)    # Chee Chin took chicken
    cc.take(aloe_vera)  # Chee Chin took aloe_vera

    def add_object(self, new_object):
        if isinstance(new_object, Thing) or isinstance(new_object, LivingThing):
            self.objects.append(new_object)
            new_object.place = self
        else:
            GAME_LOGGER.warning("You can only add Thing or LivingThing to {}".format(self.get_name()))

    def named_col(col):  # Only accepts tuple/list
        type_col = type(col)
        if type_col != list and type_col != tuple:
            return None
        return type_col(map(lambda x: x.get_name() if isinstance(x, NamedObject) else x, col))

When I try print(named_col(cc.get_weapons())) I am getting an error: AttributeError: 'RangedWeapon' object has no attribute 'owner'

    class Thing(MobileObject):
        def __init__(self, name):
            super().__init__(name, None)
            self.owner = None

        def set_owner(self, owner):
            self.owner = owner

        def get_owner(self):
            return self.owner

        def is_owned(self):
            return self.owner is not None

RangedWeapon is previously defined under this class:

    class RangedWeapon(Weapon):

Answer

Here:

    if isinstance(item, self.RangedWeapon):
        return tuple(item)

You aren't returning a tuple of all weapons: the method returns as soon as it sees the first match, and tuple(item) tries to convert that single object into a tuple rather than collecting the objects, hence the error. Instead, you need something like:

    def get_weapon(self):
        weapons = []
        for item in self.get_inventory():
            if isinstance(item, RangedWeapon):
                weapons.append(item)
        return tuple(weapons)

You should also move all of the code outside get_weapon (cc = Tribute("Chee Chin", 100) onwards) outside the class entirely, i.e. dedent it all one more tab.
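To see the corrected pattern in isolation, here is a self-contained sketch with minimal stand-in classes (the class bodies below are illustrative stand-ins, not the originals from the exercise):

```python
class Weapon:
    def __init__(self, name):
        self.name = name

class RangedWeapon(Weapon):
    pass

class Tribute:
    def __init__(self):
        self.inventory = []

    def take(self, item):
        self.inventory.append(item)

    def get_weapons(self):
        # accumulate every matching item first, then convert once at the end
        weapons = [item for item in self.inventory if isinstance(item, RangedWeapon)]
        return tuple(weapons)

cc = Tribute()
cc.take(RangedWeapon("bow"))
cc.take("chicken")              # not a weapon, filtered out by isinstance
cc.take(RangedWeapon("sling"))
print([w.name for w in cc.get_weapons()])  # → ['bow', 'sling']
```

The key point is that the loop accumulates into a list and converts to a tuple once at the end, instead of returning on the first match.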
W3cubDocs / C++

std::ranges::reverse_copy, std::ranges::reverse_copy_result

Defined in header <algorithm>

Call signature

    template< std::bidirectional_iterator I, std::sentinel_for<I> S,
              std::weakly_incrementable O >
    requires std::indirectly_copyable<I, O>
    constexpr reverse_copy_result<I, O>
        reverse_copy( I first, S last, O result );                      (1) (since C++20)

    template< ranges::bidirectional_range R, std::weakly_incrementable O >
    requires std::indirectly_copyable<ranges::iterator_t<R>, O>
    constexpr reverse_copy_result<ranges::borrowed_iterator_t<R>, O>
        reverse_copy( R&& r, O result );                                (2) (since C++20)

Helper types

    template< class I, class O >
    using reverse_copy_result = ranges::in_out_result<I, O>;            (3) (since C++20)

1) Copies the elements from the source range [first, last) to the destination range [result, result + N), where N is ranges::distance(first, last), in such a way that the elements in the new range are in reverse order. Behaves as if by executing the assignment *(result + N - 1 - i) = *(first + i) once for each integer i in [0, N). The behavior is undefined if the source and destination ranges overlap.

2) Same as (1), but uses r as the source range, as if using ranges::begin(r) as first and ranges::end(r) as last.

The function-like entities described on this page are niebloids; in practice, they may be implemented as function objects, or with special compiler extensions.

Parameters

first, last - the range of elements to copy
r - the range of elements to copy
result - the beginning of the destination range

Return value

{last, result + N}.

Complexity

Exactly N assignments.

Notes

Implementations (e.g. MSVC STL) may enable vectorization when both iterator types model contiguous_iterator and have the same value type, and the value type is TriviallyCopyable.

Possible implementation

See also the implementations in MSVC STL and libstdc++.
    struct reverse_copy_fn
    {
        template<std::bidirectional_iterator I, std::sentinel_for<I> S,
                 std::weakly_incrementable O>
        requires std::indirectly_copyable<I, O>
        constexpr ranges::reverse_copy_result<I, O>
            operator()(I first, S last, O result) const
        {
            auto ret = ranges::next(first, last);
            for (; last != first; *result = *--last, ++result);
            return {std::move(ret), std::move(result)};
        }

        template<ranges::bidirectional_range R, std::weakly_incrementable O>
        requires std::indirectly_copyable<ranges::iterator_t<R>, O>
        constexpr ranges::reverse_copy_result<ranges::borrowed_iterator_t<R>, O>
            operator()(R&& r, O result) const
        {
            return (*this)(ranges::begin(r), ranges::end(r), std::move(result));
        }
    };

    inline constexpr reverse_copy_fn reverse_copy {};

Example

    #include <algorithm>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string x {"12345"}, y(x.size(), ' ');
        std::cout << x << " → ";
        std::ranges::reverse_copy(x.begin(), x.end(), y.begin());
        std::cout << y << " → ";
        std::ranges::reverse_copy(y, x.begin());
        std::cout << x << '\n';
    }

Output:

    12345 → 54321 → 12345

See also

ranges::reverse (C++20) - reverses the order of elements in a range (niebloid)
reverse_copy - creates a copy of a range that is reversed (function template)

© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
https://en.cppreference.com/w/cpp/algorithm/ranges/reverse_copy
How can you convert a byte array to a hexadecimal string, and vice versa?

53 Answers

You can use Convert.ToHexString starting with .NET 5. There's also a method for the reverse operation: Convert.FromHexString.

For older versions of .NET you can either use:

    public static string ByteArrayToString(byte[] ba)
    {
        StringBuilder hex = new StringBuilder(ba.Length * 2);
        foreach (byte b in ba)
            hex.AppendFormat("{0:x2}", b);
        return hex.ToString();
    }

or:

    public static string ByteArrayToString(byte[] ba)
    {
        return BitConverter.ToString(ba).Replace("-", "");
    }

There are even more variants of doing it, for example here. The reverse conversion would go like this:

    public static byte[] StringToByteArray(String hex)
    {
        int NumberChars = hex.Length;
        byte[] bytes = new byte[NumberChars / 2];
        for (int i = 0; i < NumberChars; i += 2)
            bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
        return bytes;
    }

Using Substring is the best option in combination with Convert.ToByte. See this answer for more information. If you need better performance, you must avoid Convert.ToByte before you can drop Substring.

Comments:
• You're using Substring. Doesn't this loop allocate a horrible amount of string objects? – Wim Coenen Mar 6, 2009 at 16:36
• Honestly - until it tears down performance dramatically, I would tend to ignore this and trust the Runtime and the GC to take care of it. – Tomalak Mar 6, 2009 at 17:11
• Because a byte is two nibbles, any hex string that validly represents a byte array must have an even character count. A 0 should not be added anywhere - to add one would be making an assumption about invalid data that is potentially dangerous. If anything, the StringToByteArray method should throw a FormatException if the hex string contains an odd number of characters. Mar 9, 2010 at 19:01
• @00jt You must make an assumption that F == 0F. Either it is the same as 0F, or the input was clipped and F is actually the start of something you have not received.
It is up to your context to make those assumptions, but I believe a general purpose function should reject odd characters as invalid instead of making that assumption for the calling code. Jan 28, 2013 at 15:35
• @DavidBoike The question had NOTHING to do with "how to handle possibly clipped stream values" Its talking about a String. String myValue = 10.ToString("X"); myValue is "A" not "0A". Now go read that string back into bytes, oops you broke it. – 00jt Jan 30, 2013 at 19:25

Performance Analysis

Note: new leader as of 2015-08-20.

I ran each of the various conversion methods through some crude Stopwatch performance testing, a run with a random sentence (n=61, 1000 iterations) and a run with a Project Gutenberg text (n=1,238,957, 150 iterations). Here are the results, roughly from fastest to slowest. All measurements are in ticks (10,000 ticks = 1 ms) and all relative notes are compared to the [slowest] StringBuilder implementation. For the code used, see below or the test framework repo where I now maintain the code for running this.

Disclaimer

WARNING: Do not rely on these stats for anything concrete; they are simply a sample run of sample data. If you really need top-notch performance, please test these methods in an environment representative of your production needs with data representative of what you will use.

Results

Lookup tables have taken the lead over byte manipulation. Basically, there is some form of precomputing what any given nibble or byte will be in hex. Then, as you rip through the data, you simply look up the next portion to see what hex string it would be. That value is then added to the resulting string output in some fashion. For a long time byte manipulation, potentially harder to read by some developers, was the top-performing approach.

Your best bet is still going to be finding some representative data and trying it out in a production-like environment.
If you have different memory constraints, you may prefer a method with fewer allocations to one that would be faster but consume more memory. Testing Code Feel free to play with the testing code I used. A version is included here but feel free to clone the repo and add your own methods. Please submit a pull request if you find anything interesting or want to help improve the testing framework it uses. 1. Add the new static method (Func<byte[], string>) to /Tests/ConvertByteArrayToHexString/Test.cs. 2. Add that method's name to the TestCandidates return value in that same class. 3. Make sure you are running the input version you want, sentence or text, by toggling the comments in GenerateTestInput in that same class. 4. Hit F5 and wait for the output (an HTML dump is also generated in the /bin folder). static string ByteArrayToHexStringViaStringJoinArrayConvertAll(byte[] bytes) { return string.Join(string.Empty, Array.ConvertAll(bytes, b => b.ToString("X2"))); } static string ByteArrayToHexStringViaStringConcatArrayConvertAll(byte[] bytes) { return string.Concat(Array.ConvertAll(bytes, b => b.ToString("X2"))); } static string ByteArrayToHexStringViaBitConverter(byte[] bytes) { string hex = BitConverter.ToString(bytes); return hex.Replace("-", ""); } static string ByteArrayToHexStringViaStringBuilderAggregateByteToString(byte[] bytes) { return bytes.Aggregate(new StringBuilder(bytes.Length * 2), (sb, b) => sb.Append(b.ToString("X2"))).ToString(); } static string ByteArrayToHexStringViaStringBuilderForEachByteToString(byte[] bytes) { StringBuilder hex = new StringBuilder(bytes.Length * 2); foreach (byte b in bytes) hex.Append(b.ToString("X2")); return hex.ToString(); } static string ByteArrayToHexStringViaStringBuilderAggregateAppendFormat(byte[] bytes) { return bytes.Aggregate(new StringBuilder(bytes.Length * 2), (sb, b) => sb.AppendFormat("{0:X2}", b)).ToString(); } static string ByteArrayToHexStringViaStringBuilderForEachAppendFormat(byte[] bytes) { StringBuilder 
hex = new StringBuilder(bytes.Length * 2); foreach (byte b in bytes) hex.AppendFormat("{0:X2}", b); return hex.ToString(); } static string ByteArrayToHexViaByteManipulation(byte[] bytes) { char[] c = new char[bytes.Length * 2]; byte b; for (int i = 0; i < bytes.Length; i++) { b = ((byte)(bytes[i] >> 4)); c[i * 2] = (char)(b > 9 ? b + 0x37 : b + 0x30); b = ((byte)(bytes[i] & 0xF)); c[i * 2 + 1] = (char)(b > 9 ? b + 0x37 : b + 0x30); } return new string(c); } static string ByteArrayToHexViaByteManipulation2(byte[] bytes) { char[] c = new char[bytes.Length * 2]; int b; for (int i = 0; i < bytes.Length; i++) { b = bytes[i] >> 4; c[i * 2] = (char)(55 + b + (((b - 10) >> 31) & -7)); b = bytes[i] & 0xF; c[i * 2 + 1] = (char)(55 + b + (((b - 10) >> 31) & -7)); } return new string(c); } static string ByteArrayToHexViaSoapHexBinary(byte[] bytes) { SoapHexBinary soapHexBinary = new SoapHexBinary(bytes); return soapHexBinary.ToString(); } static string ByteArrayToHexViaLookupAndShift(byte[] bytes) { StringBuilder result = new StringBuilder(bytes.Length * 2); string hexAlphabet = "0123456789ABCDEF"; foreach (byte b in bytes) { result.Append(hexAlphabet[(int)(b >> 4)]); result.Append(hexAlphabet[(int)(b & 0xF)]); } return result.ToString(); } static readonly uint* _lookup32UnsafeP = (uint*)GCHandle.Alloc(_Lookup32, GCHandleType.Pinned).AddrOfPinnedObject(); static string ByteArrayToHexViaLookup32UnsafeDirect(byte[] bytes) { var lookupP = _lookup32UnsafeP; var result = new string((char)0, bytes.Length * 2); fixed (byte* bytesP = bytes) fixed (char* resultP = result) { uint* resultP2 = (uint*)resultP; for (int i = 0; i < bytes.Length; i++) { resultP2[i] = lookupP[bytesP[i]]; } } return result; } static uint[] _Lookup32 = Enumerable.Range(0, 255).Select(i => { string s = i.ToString("X2"); return ((uint)s[0]) + ((uint)s[1] << 16); }).ToArray(); static string ByteArrayToHexViaLookupPerByte(byte[] bytes) { var result = new char[bytes.Length * 2]; for (int i = 0; i < bytes.Length; i++) 
{ var val = _Lookup32[bytes[i]]; result[2*i] = (char)val; result[2*i + 1] = (char) (val >> 16); } return new string(result); } static string ByteArrayToHexViaLookup(byte[] bytes) { string[] hexStringTable = new string[] { "00", "01", "02", "03", "04", "05", "06", "07", "08", "09", "0A", "0B", "0C", "0D", "0E", "0F", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "1A", "1B", "1C", "1D", "1E", "1F", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "2A", "2B", "2C", "2D", "2E", "2F", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "3A", "3B", "3C", "3D", "3E", "3F", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "4A", "4B", "4C", "4D", "4E", "4F", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "5A", "5B", "5C", "5D", "5E", "5F", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "6A", "6B", "6C", "6D", "6E", "6F", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "7A", "7B", "7C", "7D", "7E", "7F", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "8A", "8B", "8C", "8D", "8E", "8F", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "9A", "9B", "9C", "9D", "9E", "9F", "A0", "A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8", "A9", "AA", "AB", "AC", "AD", "AE", "AF", "B0", "B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8", "B9", "BA", "BB", "BC", "BD", "BE", "BF", "C0", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "C9", "CA", "CB", "CC", "CD", "CE", "CF", "D0", "D1", "D2", "D3", "D4", "D5", "D6", "D7", "D8", "D9", "DA", "DB", "DC", "DD", "DE", "DF", "E0", "E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8", "E9", "EA", "EB", "EC", "ED", "EE", "EF", "F0", "F1", "F2", "F3", "F4", "F5", "F6", "F7", "F8", "F9", "FA", "FB", "FC", "FD", "FE", "FF", }; StringBuilder result = new StringBuilder(bytes.Length * 2); foreach (byte b in bytes) { result.Append(hexStringTable[b]); } return result.ToString(); } Update (2010-01-13) Added Waleed's answer to analysis. Quite fast. 
Update (2011-10-05) Added string.Concat Array.ConvertAll variant for completeness (requires .NET 4.0). On par with string.Join version. Update (2012-02-05) Test repo includes more variants such as StringBuilder.Append(b.ToString("X2")). None upset the results any. foreach is faster than {IEnumerable}.Aggregate, for instance, but BitConverter still wins. Update (2012-04-03) Added Mykroft's SoapHexBinary answer to analysis, which took over third place. Update (2013-01-15) Added CodesInChaos's byte manipulation answer, which took over first place (by a large margin on large blocks of text). Update (2013-05-23) Added Nathan Moinvaziri's lookup answer and the variant from Brian Lambert's blog. Both rather fast, but not taking the lead on the test machine I used (AMD Phenom 9750). Update (2014-07-31) Added @CodesInChaos's new byte-based lookup answer. It appears to have taken the lead on both the sentence tests and the full-text tests. Update (2015-08-20) Added airbreather's optimizations and unsafe variant to this answer's repo. If you want to play in the unsafe game, you can get some huge performance gains over any of the prior top winners on both short strings and large texts. 22 • 7 Despite making the code available for you to do the very thing you requested on your own, I updated the testing code to include Waleed answer. All grumpiness aside, it is much faster. – patridge Jan 13, 2010 at 16:29 • 2 @CodesInChaos Done. And it won in my tests by quite a bit as well. I don't pretend to fully understand either of the top methods yet, but they are easily hidden from direct interaction. – patridge Jan 15, 2013 at 18:01 • 7 This answer has no intention of answering the question of what is "natural" or commonplace. The goal is to give people some basic performance benchmarks since, when you need to do these conversion, you tend to do them a lot. If someone needs raw speed, they just run the benchmarks with some appropriate test data in their desired computing environment. 
Then, tuck that method away into an extension method where you never look its implementation again (e.g., bytes.ToHexStringAtLudicrousSpeed()). – patridge Apr 8, 2013 at 20:37 • 2 Just produced a high performance lookup table based implementation. Its safe variant is about 30% faster than the current leader on my CPU. The unsafe variants are even faster. stackoverflow.com/a/24343727/445517 Jun 21, 2014 at 17:12 • 3 @Goodies I've discovered that the simple Convert.ToBase64String() is VERY fast (faster than Lookup by byte (via CodesInChaos) ) in my testing - so if anyone doesn't care about the output being hexadecimal, that's a quick one-line replacement. Aug 10, 2018 at 9:44 263 There's a class called SoapHexBinary that does exactly what you want. using System.Runtime.Remoting.Metadata.W3cXsd2001; public static byte[] GetStringToBytes(string value) { SoapHexBinary shb = SoapHexBinary.Parse(value); return shb.Value; } public static string GetBytesToString(byte[] value) { SoapHexBinary shb = new SoapHexBinary(value); return shb.ToString(); } 8 • 40 SoapHexBinary is available from .NET 1.0 and is in mscorlib. Despite it's funny namespace, it does exactly what the question asked. Jun 28, 2011 at 6:48 • 4 Great find! Note that you will need to pad odd strings with a leading 0 for GetStringToBytes, like the other solution. Oct 31, 2011 at 17:10 • Have you seen the implementation thought? The accepted answer has a better one IMHO. – mfloryan Jan 26, 2012 at 13:42 • 7 Interesting to see the Mono implementation here: github.com/mono/mono/blob/master/mcs/class/corlib/… – Jeremy Apr 29, 2012 at 4:40 • 12 SoapHexBinary is not supported in .NET Core/ .NET Standard... – juFo Mar 11, 2020 at 9:12 161 When writing crypto code it's common to avoid data dependent branches and table lookups to ensure the runtime doesn't depend on the data, since data dependent timing can lead to side-channel attacks. It's also pretty fast. 
static string ByteToHexBitFiddle(byte[] bytes) { char[] c = new char[bytes.Length * 2]; int b; for (int i = 0; i < bytes.Length; i++) { b = bytes[i] >> 4; c[i * 2] = (char)(55 + b + (((b-10)>>31)&-7)); b = bytes[i] & 0xF; c[i * 2 + 1] = (char)(55 + b + (((b-10)>>31)&-7)); } return new string(c); } Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn Abandon all hope, ye who enter here An explanation of the weird bit fiddling: 1. bytes[i] >> 4 extracts the high nibble of a byte bytes[i] & 0xF extracts the low nibble of a byte 2. b - 10 is < 0 for values b < 10, which will become a decimal digit is >= 0 for values b > 10, which will become a letter from A to F. 3. Using i >> 31 on a signed 32 bit integer extracts the sign, thanks to sign extension. It will be -1 for i < 0 and 0 for i >= 0. 4. Combining 2) and 3), shows that (b-10)>>31 will be 0 for letters and -1 for digits. 5. Looking at the case for letters, the last summand becomes 0, and b is in the range 10 to 15. We want to map it to A(65) to F(70), which implies adding 55 ('A'-10). 6. Looking at the case for digits, we want to adapt the last summand so it maps b from the range 0 to 9 to the range 0(48) to 9(57). This means it needs to become -7 ('0' - 55). Now we could just multiply with 7. But since -1 is represented by all bits being 1, we can instead use & -7 since (0 & -7) == 0 and (-1 & -7) == -7. Some further considerations: • I didn't use a second loop variable to index into c, since measurement shows that calculating it from i is cheaper. • Using exactly i < bytes.Length as upper bound of the loop allows the JITter to eliminate bounds checks on bytes[i], so I chose that variant. • Making b an int allows unnecessary conversions from and to byte. The same thing can be implemented using the new string.Create function, which avoids having to allocate a separate char[] array. • We can also factor out the conversion of each nibble into a function. 
• Adding AggressiveInlining should allow that function to disappear from the JIT. • We can adjust by 32 to get a lower-case result. • We can also use Memory<byte> instead of an array, this allows a wider range of memory buffers (including arrays). [MethodImpl(MethodImplOptions.AggressiveInlining)] static string ByteToHexBitFiddle(Memory<byte> bytes, bool lowercase = false) => lowercase ? string.Create(bytes.Length * 2, bytes, LowercaseFillHex) : string.Create(bytes.Length * 2, bytes, UppercaseFillHex); static void UppercaseFillHex(Span<char> span, Memory<byte> mem) { var bytes = mem.Span; for (int i = 0; i < bytes.Length; i++) { span[i * 2] = ConvertNibble(bytes[i] >> 4, 0); span[i * 2 + 1] = ConvertNibble(bytes[i] & 0xF, 0); } } static void LowercaseFillHex(Span<char> span, Memory<byte> mem) { var bytes = mem.Span; for (int i = 0; i < bytes.Length; i++) { span[i * 2] = ConvertNibble(bytes[i] >> 4, 32); span[i * 2 + 1] = ConvertNibble(bytes[i] & 0xF, 32); } } [MethodImpl(MethodImplOptions.AggressiveInlining)] static char ConvertNibble(int nibble, int adjust) => (char)(55 + adjust + nibble + (((nibble - 10) >> 31) & (-7 - adjust))); 15 • 12 And hex string to byte[] array? – AaA Jan 18, 2013 at 7:56 • 18 +1 for properly citing your source after invoking that bit of black magic. All hail Cthulhu. – Edward Aug 2, 2013 at 20:41 • 6 What about string to byte[]? Nov 6, 2013 at 10:14 • 10 Nice! For those who need lowercase output, the expression obviously changes to 87 + b + (((b-10)>>31)&-39) – eXavier Jan 6, 2014 at 17:36 • 2 @AaA You said "byte[] array", which literally means an array of byte arrays, or byte[][]. I was just poking fun. 
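As a quick, language-neutral check of the nibble mapping described in steps 1 to 6 above, the same expression can be evaluated in Python (illustrative only; Python's >> on a negative int is an arithmetic shift, which matches the sign-extension behaviour the C# code relies on):

```python
def nibble_to_hex_char(b):
    # for b in 10..15 the shift term is 0, so 55 + b gives 'A'..'F';
    # for b in 0..9 it is -1, the mask keeps -7, and 48 + b gives '0'..'9'
    return chr(55 + b + (((b - 10) >> 31) & -7))

print("".join(nibble_to_hex_char(b) for b in range(16)))  # → 0123456789ABCDEF
```

Running the expression over all sixteen nibble values confirms the branch-free mapping produces exactly the hex alphabet.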
– CoolOppo Jun 10, 2015 at 3:09 119 If you want more flexibility than BitConverter, but don't want those clunky 1990s-style explicit loops, then you can do: String.Join(String.Empty, Array.ConvertAll(bytes, x => x.ToString("X2"))); Or, if you're using .NET 4.0: String.Concat(Array.ConvertAll(bytes, x => x.ToString("X2"))); (The latter from a comment on the original post.) 6 • 23 Even shorter: String.Concat(Array.ConvertAll(bytes, x => x.ToString("X2")) – Nestor Nov 25, 2009 at 15:04 • 18 Even shorter: String.Concat(bytes.Select(b => b.ToString("X2"))) [.NET4] Jun 16, 2011 at 6:39 • 14 Only answers half the question. Jun 28, 2011 at 6:50 • 2 Why does the second one need .Net 4? String.Concat is in .Net 2.0. – Polyfun Oct 17, 2014 at 11:42 • 5 those "90's style" loops are generally faster, but by a negligible enough amount that it wont matter in most contexts. Still worth mentioning though Oct 24, 2017 at 19:47 88 Another lookup table based approach. This one uses only one lookup table for each byte, instead of a lookup table per nibble. private static readonly uint[] _lookup32 = CreateLookup32(); private static uint[] CreateLookup32() { var result = new uint[256]; for (int i = 0; i < 256; i++) { string s=i.ToString("X2"); result[i] = ((uint)s[0]) + ((uint)s[1] << 16); } return result; } private static string ByteArrayToHexViaLookup32(byte[] bytes) { var lookup32 = _lookup32; var result = new char[bytes.Length * 2]; for (int i = 0; i < bytes.Length; i++) { var val = lookup32[bytes[i]]; result[2*i] = (char)val; result[2*i + 1] = (char) (val >> 16); } return new string(result); } I also tested variants of this using ushort, struct{char X1, X2}, struct{byte X1, X2} in the lookup table. Depending on the compilation target (x86, X64) those either had the approximately same performance or were slightly slower than this variant. 
And for even higher performance, its unsafe sibling: private static readonly uint[] _lookup32Unsafe = CreateLookup32Unsafe(); private static readonly uint* _lookup32UnsafeP = (uint*)GCHandle.Alloc(_lookup32Unsafe,GCHandleType.Pinned).AddrOfPinnedObject(); private static uint[] CreateLookup32Unsafe() { var result = new uint[256]; for (int i = 0; i < 256; i++) { string s=i.ToString("X2"); if(BitConverter.IsLittleEndian) result[i] = ((uint)s[0]) + ((uint)s[1] << 16); else result[i] = ((uint)s[1]) + ((uint)s[0] << 16); } return result; } public static string ByteArrayToHexViaLookup32Unsafe(byte[] bytes) { var lookupP = _lookup32UnsafeP; var result = new char[bytes.Length * 2]; fixed(byte* bytesP = bytes) fixed (char* resultP = result) { uint* resultP2 = (uint*)resultP; for (int i = 0; i < bytes.Length; i++) { resultP2[i] = lookupP[bytesP[i]]; } } return new string(result); } Or if you consider it acceptable to write into the string directly: public static string ByteArrayToHexViaLookup32UnsafeDirect(byte[] bytes) { var lookupP = _lookup32UnsafeP; var result = new string((char)0, bytes.Length * 2); fixed (byte* bytesP = bytes) fixed (char* resultP = result) { uint* resultP2 = (uint*)resultP; for (int i = 0; i < bytes.Length; i++) { resultP2[i] = lookupP[bytesP[i]]; } } return result; } 12 • 1 Why does creating the lookup table in the unsafe version swap the nibbles of the precomputed byte ? I thought endianness only changed ordering of entities that were formed of multiple bytes. – Raif Atef Nov 5, 2014 at 13:13 • 1 @RaifAtef What matters here isn't the order of the nibbles. But the order of 16 bit words in a 32 bit integer. But I'm considering rewriting it so the same code can run regardless of endianness. Nov 7, 2014 at 12:09 • 2 All right, I'll bite -- what advantage is there to pinning _lookup32Unsafe indefinitely instead of just doing a third fixed statement and letting the GC relocate the array to its heart's content whenever this method isn't running? 
– Joe Amenta Jan 9, 2016 at 12:24 • 7 This just answer half of the question... How about from hex string to bytes? – Narvalex Mar 8, 2017 at 17:28 • 8 @CodesInChaos I wonder if Span can be used now instead of unsafe ?? – Konrad Dec 4, 2019 at 13:12 79 You can use the BitConverter.ToString method: byte[] bytes = {0, 1, 2, 4, 8, 16, 32, 64, 128, 255}; Console.WriteLine( BitConverter.ToString(bytes)); Output: 00-01-02-04-08-10-20-40-80-FF More information: BitConverter.ToString Method (Byte[]) 3 • 17 Only answers half the question. Jun 28, 2011 at 6:49 • 4 Where is the second part of the answer? – Saw Dec 25, 2012 at 9:12 • 2 I hope the fact that 256 is converted to "FF" is just a typo... – Franz D. May 13, 2021 at 16:10 62 I just encountered the very same problem today, and I came across this code: private static string ByteArrayToHex(byte[] barray) { char[] c = new char[barray.Length * 2]; byte b; for (int i = 0; i < barray.Length; ++i) { b = ((byte)(barray[i] >> 4)); c[i * 2] = (char)(b > 9 ? b + 0x37 : b + 0x30); b = ((byte)(barray[i] & 0xF)); c[i * 2 + 1] = (char)(b > 9 ? b + 0x37 : b + 0x30); } return new string(c); } Source: Forum post byte[] Array to Hex String (see the post by PZahra). I modified the code a little to remove the 0x prefix. I did some performance testing to the code and it was almost eight times faster than using BitConverter.ToString() (the fastest according to patridge's post). 7 • not to mention that this uses the least memory. No intermediate strings created whatsoever. – Chochos Oct 16, 2009 at 17:36 • 9 Only answers half the question. Jun 28, 2011 at 6:50 • This is great because it works on basically any version of NET, including NETMF. A winner! Feb 6, 2012 at 4:26 • 2 The accepted answer provides 2 excellent HexToByteArray methods, which represent the other half of the question. Waleed's solution answers the running question of how to do this without creating a huge number of strings in the process. 
Oct 10, 2012 at 16:08 • Does new string(c) copy and re-allocate or is it smart enough to know when it can simply wrap the char[]? – jjxtra Oct 15, 2013 at 17:24 38 As of .NET 5 RC2 you can use: Overloads are available that take span parameters. 1 • 3 In .NET 6, Convert.ToHexString uses SSSE3 instruction set on CPU, so it is not only convenient to use as in .NET 5, but also more performant for inputs more than 3 bytes. Performance difference is more clear as the input size increases. – MÇT Aug 26, 2021 at 9:04 26 This is an answer to revision 4 of Tomalak's highly popular answer (and subsequent edits). I'll make the case that this edit is wrong, and explain why it could be reverted. Along the way, you might learn a thing or two about some internals, and see yet another example of what premature optimization really is and how it can bite you. tl;dr: Just use Convert.ToByte and String.Substring if you're in a hurry ("Original code" below), it's the best combination if you don't want to re-implement Convert.ToByte. Use something more advanced (see other answers) that doesn't use Convert.ToByte if you need performance. Do not use anything else other than String.Substring in combination with Convert.ToByte, unless someone has something interesting to say about this in the comments of this answer. warning: This answer may become obsolete if a Convert.ToByte(char[], Int32) overload is implemented in the framework. This is unlikely to happen soon. As a general rule, I don't much like to say "don't optimize prematurely", because nobody knows when "premature" is. The only thing you must consider when deciding whether to optimize or not is: "Do I have the time and resources to investigate optimization approaches properly?". If you don't, then it's too soon, wait until your project is more mature or until you need the performance (if there is a real need, then you will make the time). In the meantime, do the simplest thing that could possibly work instead. 
Original code: public static byte[] HexadecimalStringToByteArray_Original(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; for (var i = 0; i < outputLength; i++) output[i] = Convert.ToByte(input.Substring(i * 2, 2), 16); return output; } Revision 4: public static byte[] HexadecimalStringToByteArray_Rev4(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; using (var sr = new StringReader(input)) { for (var i = 0; i < outputLength; i++) output[i] = Convert.ToByte(new string(new char[2] { (char)sr.Read(), (char)sr.Read() }), 16); } return output; } The revision avoids String.Substring and uses a StringReader instead. The given reason is: Edit: you can improve performance for long strings by using a single pass parser, like so: Well, looking at the reference code for String.Substring, it's clearly "single-pass" already; and why shouldn't it be? It operates at byte-level, not on surrogate pairs. It does allocate a new string however, but then you need to allocate one to pass to Convert.ToByte anyway. Furthermore, the solution provided in the revision allocates yet another object on every iteration (the two-char array); you can safely put that allocation outside the loop and reuse the array to avoid that. public static byte[] HexadecimalStringToByteArray(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; var numeral = new char[2]; using (var sr = new StringReader(input)) { for (var i = 0; i < outputLength; i++) { numeral[0] = (char)sr.Read(); numeral[1] = (char)sr.Read(); output[i] = Convert.ToByte(new string(numeral), 16); } } return output; } Each hexadecimal numeral represents a single octet using two digits (symbols). But then, why call StringReader.Read twice? Just call its second overload and ask it to read two characters in the two-char array at once; and reduce the amount of calls by two. 
public static byte[] HexadecimalStringToByteArray(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; var numeral = new char[2]; using (var sr = new StringReader(input)) { for (var i = 0; i < outputLength; i++) { var read = sr.Read(numeral, 0, 2); Debug.Assert(read == 2); output[i] = Convert.ToByte(new string(numeral), 16); } } return output; } What you're left with is a string reader whose only added "value" is a parallel index (internal _pos) which you could have declared yourself (as j for example), a redundant length variable (internal _length), and a redundant reference to the input string (internal _s). In other words, it's useless. If you wonder how Read "reads", just look at the code, all it does is call String.CopyTo on the input string. The rest is just book-keeping overhead to maintain values we don't need. So, remove the string reader already, and call CopyTo yourself; it's simpler, clearer, and more efficient. public static byte[] HexadecimalStringToByteArray(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; var numeral = new char[2]; for (int i = 0, j = 0; i < outputLength; i++, j += 2) { input.CopyTo(j, numeral, 0, 2); output[i] = Convert.ToByte(new string(numeral), 16); } return output; } Do you really need a j index that increments in steps of two parallel to i? Of course not, just multiply i by two (which the compiler should be able to optimize to an addition). public static byte[] HexadecimalStringToByteArray_BestEffort(string input) { var outputLength = input.Length / 2; var output = new byte[outputLength]; var numeral = new char[2]; for (int i = 0; i < outputLength; i++) { input.CopyTo(i * 2, numeral, 0, 2); output[i] = Convert.ToByte(new string(numeral), 16); } return output; } What does the solution look like now? 
Exactly like it was at the beginning, only instead of using String.Substring to allocate the string and copy the data to it, you're using an intermediary array to which you copy the hexadecimal numerals to, then allocate the string yourself and copy the data again from the array and into the string (when you pass it in the string constructor). The second copy might be optimized-out if the string is already in the intern pool, but then String.Substring will also be able to avoid it in these cases. In fact, if you look at String.Substring again, you see that it uses some low-level internal knowledge of how strings are constructed to allocate the string faster than you could normally do it, and it inlines the same code used by CopyTo directly in there to avoid the call overhead. String.Substring • Worst-case: One fast allocation, one fast copy. • Best-case: No allocation, no copy. Manual method • Worst-case: Two normal allocations, one normal copy, one fast copy. • Best-case: One normal allocation, one normal copy. Conclusion? If you want to use Convert.ToByte(String, Int32) (because you don't want to re-implement that functionality yourself), there doesn't seem to be a way to beat String.Substring; all you do is run in circles, re-inventing the wheel (only with sub-optimal materials). Note that using Convert.ToByte and String.Substring is a perfectly valid choice if you don't need extreme performance. Remember: only opt for an alternative if you have the time and resources to investigate how it works properly. If there was a Convert.ToByte(char[], Int32), things would be different of course (it would be possible to do what I described above and completely avoid String). I suspect that people who report better performance by "avoiding String.Substring" also avoid Convert.ToByte(String, Int32), which you should really be doing if you need the performance anyway. Look at the countless other answers to discover all the different approaches to do that. 
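To close the loop on the conclusion above, here is a small round-trip sanity check of the recommended Convert.ToByte + String.Substring combination (a sketch for illustration only; the BitConverter call is just there to display the result):

```csharp
using System;

static byte[] HexToBytes(string hex)
{
    // The straightforward approach defended above:
    // one Substring and one Convert.ToByte per octet.
    var output = new byte[hex.Length / 2];
    for (var i = 0; i < output.Length; i++)
        output[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return output;
}

var bytes = HexToBytes("01FF7F");
Console.WriteLine(BitConverter.ToString(bytes)); // 01-FF-7F
```

Not benchmark-grade code, but it is exactly the "simplest thing that could possibly work" variant the analysis keeps coming back to.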
Disclaimer: I haven't decompiled the latest version of the framework to verify that the reference source is up-to-date; I assume it is. Now, it all sounds good and logical, hopefully even obvious if you've managed to get this far. But is it true?

Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
Cores: 8
Current Clock Speed: 2600
Max Clock Speed: 2600
--------------------
Parsing hexadecimal string into an array of bytes
--------------------
HexadecimalStringToByteArray_Original: 7,777.09 average ticks (over 10000 runs), 1.2X
HexadecimalStringToByteArray_BestEffort: 8,550.82 average ticks (over 10000 runs), 1.1X
HexadecimalStringToByteArray_Rev4: 9,218.03 average ticks (over 10000 runs), 1.0X

Yes! Props to Partridge for the bench framework, it's easy to hack. The input used is the following SHA-1 hash repeated 5000 times to make a 100,000-byte-long string. 209113288F93A9AB8E474EA78D899AFDBB874355 Have fun! (But optimize with moderation.) 1 • 1 error : {"Could not find any recognizable digits."} Apr 21, 2020 at 20:09 26 Converting byte[] to a hexadecimal string - benchmark / performance analysis Updated on: 2022-04-17 Since .NET 5 you should use Convert.ToHexString(byte[])! using System; string result = Convert.ToHexString(bytesToConvert); About this leaderboard and the benchmark The comparison from Thymine seems to be outdated and incomplete, especially after .NET 5 with its Convert.ToHexString, so I decided to ~~fall into the bytes to hex string rabbit hole~~ create a new, updated comparison with more methods from answers to both of these two questions. I went with BenchmarkDotNet instead of a custom-made benchmarking script, which will, hopefully, make the results more accurate. Remember that micro-benchmarking won't ever represent the actual situation, and you should do your own tests. I ran these benchmarks on Linux with kernel 5.15.32 on an AMD Ryzen 5800H with 2x8 GB DDR4 @ 2133 MHz.
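Before the numbers, a minimal round-trip sketch of the .NET 5+ API that tops the leaderboard (variable names are mine):

```csharp
using System;

byte[] data = { 0xB3, 0x3F, 0x69 };

string hex = Convert.ToHexString(data);        // output is always uppercase
Console.WriteLine(hex);                        // B33F69

byte[] back = Convert.FromHexString("b33f69"); // parsing accepts either case
Console.WriteLine(back[0]);                    // 179 (0xB3)
```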
Be aware that the whole benchmark might take a lot of time to complete - around 40 minutes on my machine. UPPERCASE (capitalized) vs lowercase output All methods mentioned (unless stated otherwise) focus on UPPERCASE output only. That means the output will look like B33F69, not b33f69. The output from Convert.ToHexString is always uppercase. Still, thankfully there isn't any significant performance drop when paired with ToLower(), although both unsafe methods will be faster if that's your concern. Making the string lowercase efficiently might be a challenge in some methods (especially the ones with bit operators magic), but in most, it's enough to change a parameter X2 to x2 or change the letters from uppercase to lowercase in a mapping. Leaderboard It is sorted by Mean N=100. The reference point is the StringBuilderForEachByte method. Method (means are in nanoseconds) Mean N=10 Ratio N=10 Mean N=100 Ratio N=100 Mean N=500 Ratio N=500 Mean N=1k Ratio N=1k Mean N=10k Ratio N=10k Mean N=100k Ratio N=100k StringBuilderAggregateBytesAppendFormat 364.92 1.48 3,680.00 1.74 18,928.33 1.86 38,362.94 1.87 380,994.74 1.72 42,618,861.57 1.62 StringBuilderForEachAppendFormat 309.59 1.26 3,203.11 1.52 20,775.07 2.04 41,398.07 2.02 426,839.96 1.93 37,220,750.15 1.41 StringJoinSelect 310.84 1.26 2,765.91 1.31 13,549.12 1.33 28,691.16 1.40 304,163.97 1.38 63,541,601.12 2.41 StringConcatSelect 301.34 1.22 2,733.64 1.29 14,449.53 1.42 29,174.83 1.42 307,196.94 1.39 32,877,994.95 1.25 StringJoinArrayConvertAll 279.21 1.13 2,608.71 1.23 13,305.96 1.30 27,207.12 1.32 295,589.61 1.34 62,950,871.38 2.39 StringBuilderAggregateBytesAppend 276.18 1.12 2,599.62 1.23 12,788.11 1.25 26,043.54 1.27 255,389.06 1.16 27,664,344.41 1.05 StringConcatArrayConvertAll 244.81 0.99 2,361.08 1.12 11,881.18 1.16 23,709.21 1.15 265,197.33 1.20 56,044,744.44 2.12 StringBuilderForEachByte 246.09 1.00 2,112.77 1.00 10,200.36 1.00 20,540.77 1.00 220,993.95 1.00 26,387,941.13 1.00 
StringBuilderForEachBytePreAllocated 213.85 0.87 1,897.19 0.90 9,340.66 0.92 19,142.27 0.93 204,968.88 0.93 24,902,075.81 0.94 BitConverterReplace 140.09 0.57 1,207.74 0.57 6,170.46 0.60 12,438.23 0.61 145,022.35 0.66 17,719,082.72 0.67 LookupPerNibble 63.78 0.26 421.75 0.20 1,978.22 0.19 3,957.58 0.19 35,358.21 0.16 4,993,649.91 0.19 LookupAndShift 53.22 0.22 311.56 0.15 1,461.15 0.14 2,924.11 0.14 26,180.11 0.12 3,771,827.62 0.14 WhilePropertyLookup 41.83 0.17 308.59 0.15 1,473.10 0.14 2,925.66 0.14 28,440.28 0.13 5,060,341.10 0.19 LookupAndShiftAlphabetArray 37.06 0.15 290.96 0.14 1,387.01 0.14 3,087.86 0.15 29,883.54 0.14 5,136,607.61 0.19 ByteManipulationDecimal 35.29 0.14 251.69 0.12 1,180.38 0.12 2,347.56 0.11 22,731.55 0.10 4,645,593.05 0.18 ByteManipulationHexMultiply 35.45 0.14 235.22 0.11 1,342.50 0.13 2,661.25 0.13 25,810.54 0.12 7,833,116.68 0.30 ByteManipulationHexIncrement 36.43 0.15 234.31 0.11 1,345.38 0.13 2,737.89 0.13 26,413.92 0.12 7,820,224.57 0.30 WhileLocalLookup 42.03 0.17 223.59 0.11 1,016.93 0.10 1,979.24 0.10 19,360.07 0.09 4,150,234.71 0.16 LookupAndShiftAlphabetSpan 30.00 0.12 216.51 0.10 1,020.65 0.10 2,316.99 0.11 22,357.13 0.10 4,580,277.95 0.17 LookupAndShiftAlphabetSpanMultiply 29.04 0.12 207.38 0.10 985.94 0.10 2,259.29 0.11 22,287.12 0.10 4,563,518.13 0.17 LookupPerByte 32.45 0.13 205.84 0.10 951.30 0.09 1,906.27 0.09 18,311.03 0.08 3,908,692.66 0.15 LookupSpanPerByteSpan 25.69 0.10 184.29 0.09 863.79 0.08 2,035.55 0.10 19,448.30 0.09 4,086,961.29 0.15 LookupPerByteSpan 27.03 0.11 184.26 0.09 866.03 0.08 2,005.34 0.10 19,760.55 0.09 4,192,457.14 0.16 Lookup32SpanUnsafeDirect 16.90 0.07 99.20 0.05 436.66 0.04 895.23 0.04 8,266.69 0.04 1,506,058.05 0.06 Lookup32UnsafeDirect 16.51 0.07 98.64 0.05 436.49 0.04 878.28 0.04 8,278.18 0.04 1,753,655.67 0.07 ConvertToHexString 19.27 0.08 64.83 0.03 295.15 0.03 585.86 0.03 5,445.73 0.02 1,478,363.32 0.06 ConvertToHexString.ToLower() 45.66 - 175.16 - 787.86 - 1,516.65 - 13,939.71 - 
2,620,046.76 - Conclusion The method ConvertToHexString is undoubtedly the fastest out there, and in my perspective, it should always be used if you have the option - it's swift and clean. using System; string result = Convert.ToHexString(bytesToConvert); If not, I decided to highlight two other methods I consider worthy below. I decided not to highlight unsafe methods since such code might be not only, well, unsafe, but most projects I've worked with don't allow such code. Worthy mentions The first one is LookupPerByteSpan. The code is almost identical to the code in LookupPerByte by CodesInChaos from this answer. This one is the fastest not-unsafe method benchmarked. The difference between the original and this one is using stack allocation for shorter inputs (up to 512 bytes). This makes this method around 10 % faster on these inputs but around 5 % slower on larger ones. Since most of the data I work with is shorter than larger, I opted for this one. LookupSpanPerByteSpan is also very fast, but the code size of its ReadOnlySpan<byte> mapping is too large compared to all other methods. private static readonly uint[] Lookup32 = Enumerable.Range(0, 256).Select(i => { string s = i.ToString("X2"); return s[0] + ((uint)s[1] << 16); }).ToArray(); public string ToHexString(byte[] bytes) { var result = bytes.Length * 2 <= 1024 ? stackalloc char[bytes.Length * 2] : new char[bytes.Length * 2]; for (int i = 0; i < bytes.Length; i++) { var val = Lookup32[bytes[i]]; result[2 * i] = (char)val; result[2 * i + 1] = (char)(val >> 16); } return new string(result); } The second one is LookupAndShiftAlphabetSpanMultiply. First, I would like to mention that this one is my creation. However, I believe this method is not only pretty fast but also simple to understand. 
The speed comes from a change that happened in C# 7.3, where declared ReadOnlySpan<byte> methods returning a constant array initialization - new byte[] {1, 2, 3, ...} - are compiled as the program's static data, therefore omitting redundant memory allocations. [source]

private static ReadOnlySpan<byte> HexAlphabetSpan => new[]
{
    (byte)'0', (byte)'1', (byte)'2', (byte)'3',
    (byte)'4', (byte)'5', (byte)'6', (byte)'7',
    (byte)'8', (byte)'9', (byte)'A', (byte)'B',
    (byte)'C', (byte)'D', (byte)'E', (byte)'F'
};

public static string ToHexString(byte[] bytes)
{
    var res = bytes.Length * 2 <= 1024
        ? stackalloc char[bytes.Length * 2]
        : new char[bytes.Length * 2];

    for (var i = 0; i < bytes.Length; ++i)
    {
        var j = i * 2;
        res[j] = (char)HexAlphabetSpan[bytes[i] >> 4];
        res[j + 1] = (char)HexAlphabetSpan[bytes[i] & 0xF];
    }

    return new string(res);
}

Source code The source code for all methods, the benchmark, and this answer can be found here as a Gist on my GitHub. 4 • 1 The ToHexString method is very useful. It seems FromHexString does the reverse operation – user4779 Jun 20, 2022 at 15:30 • Why do I get the compile error cannot convert 'System.Span<char>' to 'char*' on return new string(res)? Thanks Jun 13 at 14:21 • @RuiCaramalho Since you're using .NET Framework and not .NET, you should use res.ToString() instead of new string(res). You might also need to install the NuGet package System.Memory to support System.Span. Be aware that this code was tested using .NET and not .NET Framework, so there might (or might not) be some difference in the performance.
Jun 13 at 17:25 • @antoninkriz Thanks Jun 13 at 18:13 25 Dotnet 5 Update To convert from byte[] (byte array) to hexadecimal string, use System.Convert.ToHexString:

var myBytes = new byte[100];
var myString = System.Convert.ToHexString(myBytes);

To convert from hexadecimal string to byte[], use System.Convert.FromHexString:

var myString = "E10B116E8530A340BCC7B3EAC208487B";
var myBytes = System.Convert.FromHexString(myString);

22 Complement to the answer by @CodesInChaos (reversed method):

public static byte[] HexToByteUsingByteManipulation(string s)
{
    byte[] bytes = new byte[s.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
    {
        int hi = s[i*2] - 65;
        hi = hi + 10 + ((hi >> 31) & 7);
        int lo = s[i*2 + 1] - 65;
        lo = lo + 10 + ((lo >> 31) & 7) & 0x0f;
        bytes[i] = (byte) (lo | hi << 4);
    }
    return bytes;
}

Explanation:
& 0x0f is there to also support lowercase letters.
hi = hi + 10 + ((hi >> 31) & 7); is the same as: hi = ch - 65 + 10 + (((ch - 65) >> 31) & 7);
For '0'..'9' it is the same as hi = ch - 65 + 10 + 7;, which is hi = ch - 48 (this is because of 0xffffffff & 7).
For 'A'..'F' it is hi = ch - 65 + 10; (this is because of 0x00000000 & 7).
For 'a'..'f' the values are too big, so we must subtract 32 from the uppercase result by zeroing some bits with & 0x0f.
65 is the code for 'A', 48 is the code for '0', and 7 is the number of letters between '9' and 'A' in the ASCII table (...456789:;<=>?@ABCD...).

20 This problem could also be solved using a look-up table. This would require a small amount of static memory for both the encoder and decoder. This method will however be fast:
• Encoder table: 512 bytes or 1024 bytes (twice the size if both upper and lower case are needed)
• Decoder table: 256 bytes or 64 KiB (either a single-char look-up or a dual-char look-up)
My solution uses 1024 bytes for the encoding table, and 256 bytes for decoding.
Decoding private static readonly byte[] LookupTable = new byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; private static byte Lookup(char c) { var b = LookupTable[c]; if (b == 255) throw new IOException("Expected a hex character, got " + c); return b; } public static byte ToByte(char[] chars, int offset) { return (byte)(Lookup(chars[offset]) << 4 | Lookup(chars[offset + 1])); } Encoding private static readonly char[][] LookupTableUpper; private static readonly char[][] LookupTableLower; static 
Hex() { LookupTableLower = new char[256][]; LookupTableUpper = new char[256][]; for (var i = 0; i < 256; i++) { LookupTableLower[i] = i.ToString("x2").ToCharArray(); LookupTableUpper[i] = i.ToString("X2").ToCharArray(); } } public static char[] ToCharLower(byte[] b, int bOffset) { return LookupTableLower[b[bOffset]]; } public static char[] ToCharUpper(byte[] b, int bOffset) { return LookupTableUpper[b[bOffset]]; } Comparison StringBuilderToStringFromBytes: 106148 BitConverterToStringFromBytes: 15783 ArrayConvertAllToStringFromBytes: 54290 ByteManipulationToCharArray: 8444 TableBasedToCharArray: 5651 * * this solution Note During decoding IOException and IndexOutOfRangeException could occur (if a character has a too high value > 256). Methods for de/encoding streams or arrays should be implemented, this is just a proof of concept. 1 • 2 Memory usage of 256 bytes is negligible when you run code on the CLR. – dolmen Aug 21, 2013 at 0:05 14 Why make it complex? This is simple in Visual Studio 2008: C#: string hex = BitConverter.ToString(YourByteArray).Replace("-", ""); VB: Dim hex As String = BitConverter.ToString(YourByteArray).Replace("-", "") 1 • 3 the reason is performance, when you need high performance solution. :) – Ricky Aug 4, 2016 at 6:28 12 This is a great post. I like Waleed's solution. I haven't run it through patridge's test but it seems to be quite fast. I also needed the reverse process, converting a hex string to a byte array, so I wrote it as a reversal of Waleed's solution. Not sure if it's any faster than Tomalak's original solution. Again, I did not run the reverse process through patridge's test either. private byte[] HexStringToByteArray(string hexString) { int hexStringLength = hexString.Length; byte[] b = new byte[hexStringLength / 2]; for (int i = 0; i < hexStringLength; i += 2) { int topChar = (hexString[i] > 0x40 ? hexString[i] - 0x37 : hexString[i] - 0x30) << 4; int bottomChar = hexString[i + 1] > 0x40 ? 
hexString[i + 1] - 0x37 : hexString[i + 1] - 0x30; b[i / 2] = Convert.ToByte(topChar + bottomChar); } return b; } 4 • 1 This code assumes the hex string uses upper case alpha chars, and blows up if the hex string uses lower case alpha. Might want to do a "uppercase" conversion on the input string to be safe. Jan 26, 2010 at 19:17 • That's an astute observation Marc. The code was written to reverse Waleed's solution. The ToUpper call would slow down the algorithm some, but would allow it to handle lower case alpha chars. – Chris F Jan 26, 2010 at 20:27 • 3 Convert.ToByte(topChar + bottomChar) can be written as (byte)(topChar + bottomChar) Feb 12, 2011 at 21:17 • To handle both cases without a large performance penalty, hexString[i] &= ~0x20; – Ben Voigt Jul 31, 2014 at 22:31 9 Safe versions: public static class HexHelper { [System.Diagnostics.Contracts.Pure] public static string ToHex(this byte[] value) { if (value == null) throw new ArgumentNullException("value"); const string hexAlphabet = @"0123456789ABCDEF"; var chars = new char[checked(value.Length * 2)]; unchecked { for (int i = 0; i < value.Length; i++) { chars[i * 2] = hexAlphabet[value[i] >> 4]; chars[i * 2 + 1] = hexAlphabet[value[i] & 0xF]; } } return new string(chars); } [System.Diagnostics.Contracts.Pure] public static byte[] FromHex(this string value) { if (value == null) throw new ArgumentNullException("value"); if (value.Length % 2 != 0) throw new ArgumentException("Hexadecimal value length must be even.", "value"); unchecked { byte[] result = new byte[value.Length / 2]; for (int i = 0; i < result.Length; i++) { // 0(48) - 9(57) -> 0 - 9 // A(65) - F(70) -> 10 - 15 int b = value[i * 2]; // High 4 bits. int val = ((b - '0') + ((('9' - b) >> 31) & -7)) << 4; b = value[i * 2 + 1]; // Low 4 bits. val += (b - '0') + ((('9' - b) >> 31) & -7); result[i] = checked((byte)val); } return result; } } } Unsafe versions For those who prefer performance and do not afraid of unsafeness. 
About 35% faster ToHex and 10% faster FromHex. public static class HexUnsafeHelper { [System.Diagnostics.Contracts.Pure] public static unsafe string ToHex(this byte[] value) { if (value == null) throw new ArgumentNullException("value"); const string alphabet = @"0123456789ABCDEF"; string result = new string(' ', checked(value.Length * 2)); fixed (char* alphabetPtr = alphabet) fixed (char* resultPtr = result) { char* ptr = resultPtr; unchecked { for (int i = 0; i < value.Length; i++) { *ptr++ = *(alphabetPtr + (value[i] >> 4)); *ptr++ = *(alphabetPtr + (value[i] & 0xF)); } } } return result; } [System.Diagnostics.Contracts.Pure] public static unsafe byte[] FromHex(this string value) { if (value == null) throw new ArgumentNullException("value"); if (value.Length % 2 != 0) throw new ArgumentException("Hexadecimal value length must be even.", "value"); unchecked { byte[] result = new byte[value.Length / 2]; fixed (char* valuePtr = value) { char* valPtr = valuePtr; for (int i = 0; i < result.Length; i++) { // 0(48) - 9(57) -> 0 - 9 // A(65) - F(70) -> 10 - 15 int b = *valPtr++; // High 4 bits. int val = ((b - '0') + ((('9' - b) >> 31) & -7)) << 4; b = *valPtr++; // Low 4 bits. val += (b - '0') + ((('9' - b) >> 31) & -7); result[i] = checked((byte)val); } } return result; } } } BTW For benchmark testing initializing alphabet every time convert function called is wrong, alphabet must be const (for string) or static readonly (for char[]). Then alphabet-based conversion of byte[] to string becomes as fast as byte manipulation versions. And of course test must be compiled in Release (with optimization) and with debug option "Suppress JIT optimization" turned off (same for "Enable Just My Code" if code must be debuggable). 9 Not to pile on to the many answers here, but I found a fairly optimal (~4.5x better than accepted), straightforward implementation of the hex string parser. 
First, output from my tests (the first batch is my implementation): Give me that string: 04c63f7842740c77e545bb0b2ade90b384f119f6ab57b680b7aa575a2f40939f Time to parse 100,000 times: 50.4192 ms Result as base64: BMY/eEJ0DHflRbsLKt6Qs4TxGfarV7aAt6pXWi9Ak58= BitConverter'd: 04-C6-3F-78-42-74-0C-77-E5-45-BB-0B-2A-DE-90-B3-84-F1-19-F6-AB-5 7-B6-80-B7-AA-57-5A-2F-40-93-9F Accepted answer: (StringToByteArray) Time to parse 100000 times: 233.1264ms Result as base64: BMY/eEJ0DHflRbsLKt6Qs4TxGfarV7aAt6pXWi9Ak58= BitConverter'd: 04-C6-3F-78-42-74-0C-77-E5-45-BB-0B-2A-DE-90-B3-84-F1-19-F6-AB-5 7-B6-80-B7-AA-57-5A-2F-40-93-9F With Mono's implementation: Time to parse 100000 times: 777.2544ms Result as base64: BMY/eEJ0DHflRbsLKt6Qs4TxGfarV7aAt6pXWi9Ak58= BitConverter'd: 04-C6-3F-78-42-74-0C-77-E5-45-BB-0B-2A-DE-90-B3-84-F1-19-F6-AB-5 7-B6-80-B7-AA-57-5A-2F-40-93-9F With SoapHexBinary: Time to parse 100000 times: 845.1456ms Result as base64: BMY/eEJ0DHflRbsLKt6Qs4TxGfarV7aAt6pXWi9Ak58= BitConverter'd: 04-C6-3F-78-42-74-0C-77-E5-45-BB-0B-2A-DE-90-B3-84-F1-19-F6-AB-5 7-B6-80-B7-AA-57-5A-2F-40-93-9F The base64 and 'BitConverter'd' lines are there to test for correctness. Note that they are equal. 
The implementation: public static byte[] ToByteArrayFromHex(string hexString) { if (hexString.Length % 2 != 0) throw new ArgumentException("String must have an even length"); var array = new byte[hexString.Length / 2]; for (int i = 0; i < hexString.Length; i += 2) { array[i/2] = ByteFromTwoChars(hexString[i], hexString[i + 1]); } return array; } private static byte ByteFromTwoChars(char p, char p_2) { byte ret; if (p <= '9' && p >= '0') { ret = (byte) ((p - '0') << 4); } else if (p <= 'f' && p >= 'a') { ret = (byte) ((p - 'a' + 10) << 4); } else if (p <= 'F' && p >= 'A') { ret = (byte) ((p - 'A' + 10) << 4); } else throw new ArgumentException("Char is not a hex digit: " + p,"p"); if (p_2 <= '9' && p_2 >= '0') { ret |= (byte) ((p_2 - '0')); } else if (p_2 <= 'f' && p_2 >= 'a') { ret |= (byte) ((p_2 - 'a' + 10)); } else if (p_2 <= 'F' && p_2 >= 'A') { ret |= (byte) ((p_2 - 'A' + 10)); } else throw new ArgumentException("Char is not a hex digit: " + p_2, "p_2"); return ret; } I tried some stuff with unsafe and moving the (clearly redundant) character-to-nibble if sequence to another method, but this was the fastest it got. (I concede that this answers half the question. I felt that the string->byte[] conversion was underrepresented, while the byte[]->string angle seems to be well covered. Thus, this answer.) 1 • 1 For the followers of Knuth: I did this because I need to parse a few thousand hex strings every few minutes or so, so it's important that it be as fast as possible (in the inner loop, as it were). Tomalak's solution is not notably slower if many such parses are not occurring. – Ben Mosher May 22, 2012 at 17:01 9 From Microsoft's developers, a nice, simple conversion: public static string ByteArrayToString(byte[] ba) { // Concatenate the bytes into one long string return ba.Aggregate(new StringBuilder(32), (sb, b) => sb.Append(b.ToString("X2")) ).ToString(); } While the above is clean and compact, performance junkies will scream about it using enumerators. 
You can get peak performance with an improved version of Tomalak's original answer: public static string ByteArrayToString(byte[] ba) { StringBuilder hex = new StringBuilder(ba.Length * 2); for(int i=0; i < ba.Length; i++) // <-- Use for loop is faster than foreach hex.Append(ba[i].ToString("X2")); // <-- ToString is faster than AppendFormat return hex.ToString(); } This is the fastest of all the routines I've seen posted here so far. Don't just take my word for it... performance test each routine and inspect its CIL code for yourself. 1 • 3 The iterator is not the main problem of this code. You should benchmark b.ToSting("X2"). – dolmen Aug 20, 2013 at 23:49 7 Inverse function for Waleed Eissa code (Hex String To Byte Array): public static byte[] HexToBytes(this string hexString) { byte[] b = new byte[hexString.Length / 2]; char c; for (int i = 0; i < hexString.Length / 2; i++) { c = hexString[i * 2]; b[i] = (byte)((c < 0x40 ? c - 0x30 : (c < 0x47 ? c - 0x37 : c - 0x57)) << 4); c = hexString[i * 2 + 1]; b[i] += (byte)(c < 0x40 ? c - 0x30 : (c < 0x47 ? c - 0x37 : c - 0x57)); } return b; } Waleed Eissa function with lower case support: public static string BytesToHex(this byte[] barray, bool toLowerCase = true) { byte addByte = 0x37; if (toLowerCase) addByte = 0x57; char[] c = new char[barray.Length * 2]; byte b; for (int i = 0; i < barray.Length; ++i) { b = ((byte)(barray[i] >> 4)); c[i * 2] = (char)(b > 9 ? b + addByte : b + 0x30); b = ((byte)(barray[i] & 0xF)); c[i * 2 + 1] = (char)(b > 9 ? b + addByte : b + 0x30); } return new string(c); } 7 Extension methods (disclaimer: completely untested code, BTW...): public static class ByteExtensions { public static string ToHexString(this byte[] ba) { StringBuilder hex = new StringBuilder(ba.Length * 2); foreach (byte b in ba) { hex.AppendFormat("{0:x2}", b); } return hex.ToString(); } } etc.. Use either of Tomalak's three solutions (with the last one being an extension method on a string). 
1 • 1 You should probably test the code before you offer it up for a question like this. – jww Feb 16, 2017 at 19:08 7 Fastest method for old school people... miss you pointers static public byte[] HexStrToByteArray(string str) { byte[] res = new byte[(str.Length % 2 != 0 ? 0 : str.Length / 2)]; //check and allocate memory for (int i = 0, j = 0; j < res.Length; i += 2, j++) //convert loop res[j] = (byte)((str[i] % 32 + 9) % 25 * 16 + (str[i + 1] % 32 + 9) % 25); return res; } 7 .NET 5 has added the Convert.ToHexString method. For those using an older version of .NET internal static class ByteArrayExtensions { public static string ToHexString(this byte[] bytes, Casing casing = Casing.Upper) { Span<char> result = stackalloc char[0]; if (bytes.Length > 16) { var array = new char[bytes.Length * 2]; result = array.AsSpan(); } else { result = stackalloc char[bytes.Length * 2]; } int pos = 0; foreach (byte b in bytes) { ToCharsBuffer(b, result, pos, casing); pos += 2; } return result.ToString(); } private static void ToCharsBuffer(byte value, Span<char> buffer, int startingIndex = 0, Casing casing = Casing.Upper) { uint difference = (((uint)value & 0xF0U) << 4) + ((uint)value & 0x0FU) - 0x8989U; uint packedResult = ((((uint)(-(int)difference) & 0x7070U) >> 4) + difference + 0xB9B9U) | (uint)casing; buffer[startingIndex + 1] = (char)(packedResult & 0xFF); buffer[startingIndex] = (char)(packedResult >> 8); } } public enum Casing : uint { // Output [ '0' .. '9' ] and [ 'A' .. 'F' ]. Upper = 0, // Output [ '0' .. '9' ] and [ 'a' .. 'f' ]. Lower = 0x2020U, } Adapted from the .NET repository https://github.com/dotnet/runtime/blob/v5.0.3/src/libraries/System.Private.CoreLib/src/System/Convert.cs https://github.com/dotnet/runtime/blob/v5.0.3/src/libraries/Common/src/System/HexConverter.cs 6 Tests: Hex String To Byte Array I noticed that most of tests were performed on functions that convert Bytes array to Hex string. 
So, in this post I will focus on the other side: functions that convert a hex string to a byte array. If you are interested in the results only, you can skip down to the Summary section. The test code file is supplied at the end of the post.

Labels
I would like to name the function from the accepted answer (by Tomalak) StringToByteArrayV1, or shorten it to V1. The rest of the functions will be named in the same way: V2, V3, V4, ..., etc.

Index of Participating Functions

Correctness Test
I have tested correctness by passing all 256 possible values of 1 byte, then checking the output to see if it was correct. Result:
• V18 has an issue with strings starting with "00" (see Roger Stewart's comment on it); other than that it passes all tests.
• If the hex string's alphabet letters are uppercase: all functions passed successfully.
• If the hex string's alphabet letters are lowercase, then the following functions failed: V5_1, V5_2, v7, V8, V15, V19 (note: V5_3 solves this issue of V5_1 and V5_2).

Performance Test
I have done performance tests using the Stopwatch class.
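A Stopwatch loop of the kind described here might look like this (a sketch with assumed names, not the author's exact harness; Convert.FromHexString stands in for one of the tested functions):

```csharp
using System;
using System.Diagnostics;

// Time `runs` repetitions of a hex-string parser and report
// the average elapsed milliseconds per run.
static double AverageMs(Func<string, byte[]> parser, string input, int runs)
{
    var sw = Stopwatch.StartNew();
    for (var i = 0; i < runs; i++)
        parser(input);
    sw.Stop();
    return sw.Elapsed.TotalMilliseconds / runs;
}

var avg = AverageMs(Convert.FromHexString, "00FF10A5", 1000);
Console.WriteLine($"{avg:F4} ms per run");
```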
• Performance for long strings

input length: 10,000,000 bytes; runs: 100

Average elapsed time per run:
V1 = 136.4ms, V2 = 104.5ms, V3 = 22.0ms, V4 = 9.9ms, V5_1 = 10.2ms, V5_2 = 9.0ms, V5_3 = 9.3ms, V6 = 18.3ms, V7 = 9.8ms, V8 = 8.8ms, V9 = 10.2ms, V10 = 19.0ms, V11 = 12.2ms, V12 = 27.4ms, V13 = 21.8ms, V14 = 12.0ms, V15 = 14.9ms, V16 = 15.3ms, V17 = 9.5ms, V19 = 222.8ms, V20 = 66.0ms, V21 = 15.4ms
(V18 was excluded from this test because it was very slow on very long strings.)

V1 average ticks per run: 1363529.4
V2 is faster than V1 by: 1.3 times (ticks ratio)
V3 is faster than V1 by: 6.2 times (ticks ratio)
V4 is faster than V1 by: 13.8 times (ticks ratio)
V5_1 is faster than V1 by: 13.3 times (ticks ratio)
V5_2 is faster than V1 by: 15.2 times (ticks ratio)
V5_3 is faster than V1 by: 14.8 times (ticks ratio)
V6 is faster than V1 by: 7.4 times (ticks ratio)
V7 is faster than V1 by: 13.9 times (ticks ratio)
V8 is faster than V1 by: 15.4 times (ticks ratio)
V9 is faster than V1 by: 13.4 times (ticks ratio)
V10 is faster than V1 by: 7.2 times (ticks ratio)
V11 is faster than V1 by: 11.1 times (ticks ratio)
V12 is faster than V1 by: 5.0 times (ticks ratio)
V13 is faster than V1 by: 6.3 times (ticks ratio)
V14 is faster than V1 by: 11.4 times (ticks ratio)
V15 is faster than V1 by: 9.2 times (ticks ratio)
V16 is faster than V1 by: 8.9 times (ticks ratio)
V17 is faster than V1 by: 14.4 times (ticks ratio)
V19 is SLOWER than V1 by: 1.6 times (ticks ratio)
V20 is faster than V1 by: 2.1 times (ticks ratio)
V21 is faster than V1 by: 8.9 times (ticks ratio)

• Performance of V18 for long strings

V18 took a long time in the previous test, so let's decrease the length for it:
input length: 1,000,000 bytes; runs: 100
Average elapsed time per run: V1 = 14.1ms, V18 = 146.7ms
V1 average ticks per run: 140630.3
V18 is SLOWER than V1 by: 10.4 times (ticks ratio)

• Performance for short strings

input length: 100 bytes; runs: 1,000,000

V1
average ticks per run: 14.6
V2 is faster than V1 by: 1.4 times (ticks ratio)
V3 is faster than V1 by: 5.9 times (ticks ratio)
V4 is faster than V1 by: 15.7 times (ticks ratio)
V5_1 is faster than V1 by: 15.1 times (ticks ratio)
V5_2 is faster than V1 by: 18.4 times (ticks ratio)
V5_3 is faster than V1 by: 16.3 times (ticks ratio)
V6 is faster than V1 by: 5.3 times (ticks ratio)
V7 is faster than V1 by: 15.7 times (ticks ratio)
V8 is faster than V1 by: 18.0 times (ticks ratio)
V9 is faster than V1 by: 15.5 times (ticks ratio)
V10 is faster than V1 by: 7.8 times (ticks ratio)
V11 is faster than V1 by: 12.4 times (ticks ratio)
V12 is faster than V1 by: 5.3 times (ticks ratio)
V13 is faster than V1 by: 5.2 times (ticks ratio)
V14 is faster than V1 by: 13.4 times (ticks ratio)
V15 is faster than V1 by: 9.9 times (ticks ratio)
V16 is faster than V1 by: 9.2 times (ticks ratio)
V17 is faster than V1 by: 16.2 times (ticks ratio)
V18 is faster than V1 by: 1.1 times (ticks ratio)
V19 is SLOWER than V1 by: 1.6 times (ticks ratio)
V20 is faster than V1 by: 1.9 times (ticks ratio)
V21 is faster than V1 by: 11.4 times (ticks ratio)

Testing Code

It is a good idea to read the Disclaimer section further down in this post before using any of the following code:
https://github.com/Ghosticollis/performance-tests/blob/main/MTestPerformance.cs

Summary

I recommend using one of the following functions because of their good performance and support for both upper- and lowercase. Here is the final shape of V5_3:

static byte[] HexStringToByteArrayV5_3(string hexString)
{
    int hexStringLength = hexString.Length;
    byte[] b = new byte[hexStringLength / 2];
    for (int i = 0; i < hexStringLength; i += 2)
    {
        int topChar = hexString[i];
        topChar = (topChar > 0x40 ? (topChar & ~0x20) - 0x37 : topChar - 0x30) << 4;
        int bottomChar = hexString[i + 1];
        bottomChar = bottomChar > 0x40 ? (bottomChar & ~0x20) - 0x37 : bottomChar - 0x30;
        b[i / 2] = (byte)(topChar + bottomChar);
    }
    return b;
}

Disclaimer

WARNING: I don't have proper testing expertise. The main purpose of these primitive tests is to give a quick overview of what might be good among all of the posted functions. If you need accurate results, please use proper testing tools.

Finally, I would like to say that I am new to being active on Stack Overflow; sorry if my post is lacking. Comments to improve this post would be appreciated.

1 • 1 Wow, that's a lot of effort! Jul 7, 2021 at 16:55

4 And for inserting into an SQL string (if you're not using command parameters):

public static String ByteArrayToSQLHexString(byte[] Source)
{
    return "0x" + BitConverter.ToString(Source).Replace("-", "");
}

1 • if Source == null or Source.Length == 0 we have a problem sir! Jun 7, 2019 at 17:37

4 In terms of speed, this seems to be better than anything here:

public static string ToHexString(byte[] data)
{
    byte b;
    int i, j, k;
    int l = data.Length;
    char[] r = new char[l * 2];
    for (i = 0, j = 0; i < l; ++i)
    {
        b = data[i];
        k = b >> 4;
        r[j++] = (char)(k > 9 ? k + 0x37 : k + 0x30);
        k = b & 15;
        r[j++] = (char)(k > 9 ? k + 0x37 : k + 0x30);
    }
    return new string(r);
}

4 I did not get the code you suggested to work, Olipro. hex[i] + hex[i+1] apparently returned an int. I did, however, have some success by taking some hints from Waleed's code and hammering this together. It's ugly as hell, but it seems to work and, depending on input size, performs at 1/3 of the time of the others according to my tests (using patridge's testing mechanism). Switching the ?:s around to separate out 0-9 first would probably yield a slightly faster result, since there are more digits than letters.

public static byte[] StringToByteArray2(string hex)
{
    byte[] bytes = new byte[hex.Length / 2];
    int bl = bytes.Length;
    for (int i = 0; i < bl; ++i)
    {
        bytes[i] = (byte)((hex[2 * i] > 'F' ? hex[2 * i] - 0x57 : hex[2 * i] > '9' ? hex[2 * i] - 0x37 : hex[2 * i] - 0x30) << 4);
        bytes[i] |= (byte)(hex[2 * i + 1] > 'F' ? hex[2 * i + 1] - 0x57 : hex[2 * i + 1] > '9' ? hex[2 * i + 1] - 0x37 : hex[2 * i + 1] - 0x30);
    }
    return bytes;
}

4 This version of ByteArrayToHexViaByteManipulation could be faster. From my reports:

• ByteArrayToHexViaByteManipulation3: 1,68 average ticks (over 1000 runs), 17,5X
• ByteArrayToHexViaByteManipulation2: 1,73 average ticks (over 1000 runs), 16,9X
• ByteArrayToHexViaByteManipulation: 2,90 average ticks (over 1000 runs), 10,1X
• ByteArrayToHexViaLookupAndShift: 3,22 average ticks (over 1000 runs), 9,1X
• ...

static private readonly char[] hexAlphabet = new char[]
    { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' };

static string ByteArrayToHexViaByteManipulation3(byte[] bytes)
{
    char[] c = new char[bytes.Length * 2];
    byte b;
    for (int i = 0; i < bytes.Length; i++)
    {
        b = ((byte)(bytes[i] >> 4));
        c[i * 2] = hexAlphabet[b];
        b = ((byte)(bytes[i] & 0xF));
        c[i * 2 + 1] = hexAlphabet[b];
    }
    return new string(c);
}

And I think this one is an optimization:

static private readonly char[] hexAlphabet = new char[]
    { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' };

static string ByteArrayToHexViaByteManipulation4(byte[] bytes)
{
    char[] c = new char[bytes.Length * 2];
    for (int i = 0, ptr = 0; i < bytes.Length; i++, ptr += 2)
    {
        byte b = bytes[i];
        c[ptr] = hexAlphabet[b >> 4];
        c[ptr + 1] = hexAlphabet[b & 0xF];
    }
    return new string(c);
}

4 I'll enter this bit-fiddling competition, as I have an answer that also uses bit-fiddling to decode hexadecimals. Note that using character arrays may be even faster, as calling StringBuilder methods takes time as well.
public static String ToHex (byte[] data) {
    int dataLength = data.Length;
    // pre-create the stringbuilder using the length of the data * 2, precisely enough
    StringBuilder sb = new StringBuilder (dataLength * 2);
    for (int i = 0; i < dataLength; i++) {
        int b = data [i];

        // check using calculation over bits to see if first tuple is a letter
        // isLetter is zero if it is a digit, 1 if it is a letter
        int isLetter = (b >> 7) & ((b >> 6) | (b >> 5)) & 1;
        // calculate the code using a multiplication to make up the difference between
        // a digit character and an alphanumerical character
        int code = '0' + ((b >> 4) & 0xF) + isLetter * ('A' - '9' - 1);
        // now append the result, after casting the code point to a character
        sb.Append ((Char)code);

        // do the same with the lower (less significant) tuple
        isLetter = (b >> 3) & ((b >> 2) | (b >> 1)) & 1;
        code = '0' + (b & 0xF) + isLetter * ('A' - '9' - 1);
        sb.Append ((Char)code);
    }
    return sb.ToString ();
}

public static byte[] FromHex (String hex) {
    // pre-create the array
    int resultLength = hex.Length / 2;
    byte[] result = new byte[resultLength];
    // set validity = 0 (0 = valid, anything else is not valid)
    int validity = 0;
    int c, isLetter, value, validDigitStruct, validDigit, validLetterStruct, validLetter;
    for (int i = 0, hexOffset = 0; i < resultLength; i++, hexOffset += 2) {
        c = hex [hexOffset];

        // check using calculation over bits to see if first char is a letter
        // isLetter is zero if it is a digit, 1 if it is a letter (upper & lowercase)
        isLetter = (c >> 6) & 1;
        // calculate the tuple value using a multiplication to make up the difference between
        // a digit character and an alphanumerical character
        // minus 1 for the fact that the letters are not zero based
        value = ((c & 0xF) + isLetter * (-1 + 10)) << 4;

        // check validity of all the other bits
        validity |= c >> 7; // changed to >>, maybe not OK, use UInt?

        validDigitStruct = (c & 0x30) ^ 0x30;
        validDigit = ((c & 0x8) >> 3) * (c & 0x6);
        validity |= (isLetter ^ 1) * (validDigitStruct | validDigit);

        validLetterStruct = c & 0x18;
        validLetter = (((c - 1) & 0x4) >> 2) * ((c - 1) & 0x2);
        validity |= isLetter * (validLetterStruct | validLetter);

        // do the same with the lower (less significant) tuple
        c = hex [hexOffset + 1];
        isLetter = (c >> 6) & 1;
        value ^= (c & 0xF) + isLetter * (-1 + 10);
        result [i] = (byte)value;

        // check validity of all the other bits
        validity |= c >> 7; // changed to >>, maybe not OK, use UInt?

        validDigitStruct = (c & 0x30) ^ 0x30;
        validDigit = ((c & 0x8) >> 3) * (c & 0x6);
        validity |= (isLetter ^ 1) * (validDigitStruct | validDigit);

        validLetterStruct = c & 0x18;
        validLetter = (((c - 1) & 0x4) >> 2) * ((c - 1) & 0x2);
        validity |= isLetter * (validLetterStruct | validLetter);
    }

    if (validity != 0) {
        throw new ArgumentException ("Hexadecimal encoding incorrect for input " + hex);
    }
    return result;
}

Converted from Java code.

2 • 1 Hmm, I really should optimize this for Char[] and use Char internally instead of ints... Jan 20, 2014 at 23:46
• 1 For C#, initializing the variables where they are used, instead of outside the loop, is probably preferred to let the compiler optimize. I get equivalent performance either way. – Peteter Jun 12, 2019 at 16:50

4 For performance I would go with drphrozen's solution. A tiny optimization for the decoder could be to use a table for each char to get rid of the "<< 4". Clearly the two method calls are costly. If some kind of check is made on either the input or the output data (could be a CRC, checksum, or whatever), the if (b == 255)... could be skipped, and thereby the method calls altogether. Using offset++ and offset instead of offset and offset + 1 might give some theoretical benefit, but I suspect the compiler handles this better than me.
private static readonly byte[] LookupTableLow = new byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; private static readonly byte[] LookupTableHigh = new byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80, 0x90, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; private static byte LookupLow(char c) { var b = LookupTableLow[c]; if (b == 255) throw new IOException("Expected a hex character, got " + c); return b; } private static byte LookupHigh(char c) { var b = LookupTableHigh[c]; if (b == 255) throw new IOException("Expected a hex character, got " + c); return b; } public static byte 
ToByte(char[] chars, int offset) { return (byte)(LookupHigh(chars[offset++]) | LookupLow(chars[offset])); }

This is just off the top of my head and has not been tested or benchmarked.
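Pulling the thread together: most answers above use either a small character table for encoding or a 256-entry table for decoding. Here is a self-contained sketch of my own (the names are mine, not from any answer) that combines both techniques and round-trips all 256 byte values in both upper- and lowercase:

```csharp
using System;

static class HexSketch
{
    private static readonly char[] Alphabet = "0123456789ABCDEF".ToCharArray();

    // 256-entry decode table: 0xFF marks characters that are not hex digits.
    private static readonly byte[] Decode = BuildDecodeTable();

    private static byte[] BuildDecodeTable()
    {
        var t = new byte[256];
        for (int i = 0; i < t.Length; i++) t[i] = 0xFF;
        for (int i = 0; i < 10; i++) t['0' + i] = (byte)i;
        for (int i = 0; i < 6; i++) { t['A' + i] = (byte)(10 + i); t['a' + i] = (byte)(10 + i); }
        return t;
    }

    public static string ToHex(byte[] data)
    {
        var c = new char[data.Length * 2];
        for (int i = 0, j = 0; i < data.Length; i++)
        {
            c[j++] = Alphabet[data[i] >> 4];
            c[j++] = Alphabet[data[i] & 0xF];
        }
        return new string(c);
    }

    public static byte[] FromHex(string hex)
    {
        if (hex.Length % 2 != 0) throw new ArgumentException("Odd-length hex string");
        var res = new byte[hex.Length / 2];
        for (int i = 0; i < res.Length; i++)
        {
            byte hi = Decode[hex[2 * i]], lo = Decode[hex[2 * i + 1]];
            if ((hi | lo) == 0xFF) throw new ArgumentException("Non-hex character");
            res[i] = (byte)((hi << 4) | lo);
        }
        return res;
    }

    static void Main()
    {
        // Round-trip every possible byte value, upper- and lowercase.
        for (int b = 0; b < 256; b++)
        {
            var one = new[] { (byte)b };
            string h = ToHex(one);
            if (FromHex(h)[0] != b || FromHex(h.ToLowerInvariant())[0] != b)
                throw new Exception("round-trip failed at " + b);
        }
        Console.WriteLine("All 256 values round-trip");
    }
}
```

The 0xFF sentinel doubles as the validity check, which is the same idea as the LookupTableLow/LookupTableHigh tables in the answer above.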
LibreOffice Module svx (master) — sdrobjectuser.hxx

/* -*- Mode: C++; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
/*
 * This file is part of the LibreOffice project.
 *
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/.
 *
 * This file incorporates work covered by the following license notice:
 *
 *   Licensed to the Apache Software Foundation (ASF) under one or more
 *   contributor license agreements. See the NOTICE file distributed
 *   with this work for additional information regarding copyright
 *   ownership. The ASF licenses this file to you under the Apache
 *   License, Version 2.0 (the "License"); you may not use this file
 *   except in compliance with the License. You may obtain a copy of
 *   the License at http://www.apache.org/licenses/LICENSE-2.0 .
 */

#ifndef INCLUDED_SVX_SDROBJECTUSER_HXX
#define INCLUDED_SVX_SDROBJECTUSER_HXX

#include <vector>

class SdrObject;

// To make things more safe, allow users of an object to register at it. The users need to be derived
// from sdr::ObjectUser to get a call. The users do not need to call RemoveObjectUser() at the page
// when they get called from ObjectInDestruction().

namespace sdr
{
    class ObjectUser
    {
    public:
        // this method is called from the destructor of the referenced page.
        // do all necessary action to forget the page. It is not necessary to call
        // RemovePageUser(), that is done from the destructor.
        virtual void ObjectInDestruction(const SdrObject& rObject) = 0;

    protected:
        ~ObjectUser() {}
    };

    // typedef for ObjectUserVector
    typedef ::std::vector< ObjectUser* > ObjectUserVector;
} // end of namespace sdr

#endif // INCLUDED_SVX_SDROBJECTUSER_HXX

/* vim:set shiftwidth=4 softtabstop=4 expandtab: */
dva_dart 0.1.0

dva_dart #

A Dart library for managing an app's state. The idea comes from dva.js, a lightweight framework inspired by redux, redux-saga, and elm.

In Dart we already have redux and flutter-redux developed by the community, but they are not easy to learn and use, even though I have learned Redux in JavaScript. When I tried bloc from the Flutter community, I found that Stream in Dart and StreamBuilder in Flutter are powerful, but the setup and ideas are different from Redux. Finally, I decided to use the idea from dva.js and the basic structure from bloc, using rxdart to combine them together. And it might work in a way.

About flutter #

This library is under development, and I know a lot of use cases will be Flutter-based. I will try to ship them into Flutter and create another flutter-dva repo somehow.

Will update this changelog until v1.0.0 #

example/example.dart

import 'dart:async';
import 'package:dva_dart/src/Model.dart';
import 'package:dva_dart/src/Effect.dart';
import 'package:dva_dart/src/State.dart';
import 'package:dva_dart/src/Action.dart';
import 'package:dva_dart/src/Store.dart';
import 'package:dva_dart/src/Reducer.dart';

// // // //

class TestState implements DvaState {
  final int a;
  final int b;
  final int c;
  TestState(this.a, this.b, this.c);

  @override
  String toString() {
    // TODO: implement toString
    return 'TestState($a,$b,$c)';
  }
}

class MutatedState implements DvaState {
  final String a;
  MutatedState(this.a);

  @override
  String toString() {
    return 'MutatedState(${this.a})';
  }
}

class MyReducerDelegate implements ReducerDelegate {
  @override
  void onReducer(DvaReducer reducer) {
    print(reducer.toString());
  }
}

void main() async {
  var pl1 = Payload<Map>({'a': 1});
  var pl2 = Payload<String>('i am a payload');
  Future add(p) async {
    return await p + 1;
  }

  // // // //

  ReducerWatcher().delegate = MyReducerDelegate();

  DvaModel model = DvaModel(nameSpace: 'test', initialState: TestState(1, 2, 3), reducers: {
    'updateState': (DvaState state, Payload payload) { return
MutatedState(payload.toString()); }, }, effects: { 'asyncAdd': (Payload<Map> payload) async* { var added = await add(payload.payloadObject['payload']['a']); payload.payloadObject['payload'] .update('a', (value) => value = added, ifAbsent: () => {'a': added}); await Future<void>.delayed(Duration(seconds: 1)); yield PutEffect(key: 'updateState', payload: payload); }, 'appending': (Payload payload) async* { yield PutEffect(key: 'updateState', payload: payload); } }); DvaModel model2 = DvaModel(nameSpace: 'test2', initialState: TestState(1, 2, 3), reducers: { 'updateState': (DvaState state, Payload payload) { return MutatedState(payload.toString() + 'mutated'); }, }, effects: { 'appending': (Payload payload) async* { yield PutEffect(key: 'updateState', payload: payload); yield PutEffect(key: 'test/appending', payload: Payload('fuck you')); } }); DvaStore store = DvaStore(models: <DvaModel>[model, model2]); Action abc1 = createAction('test/asyncAdd')(pl1); Action abc2 = createAction('test2/appending')(pl2); // final StreamSubscription subscription = // store.storeController.stream.listen((onData) { // print(onData); // }); // store.dispatch(abc1); // store.dispatch(abc2); // 初始化一个监听 // store.stateStream.listen((onData) { // print(onData); // }); //var listner = store.getStream('test2'); // listner.listen((onData) { // print(onData); // }); store.getStream('test').listen((onData) { print(onData); }); // store.dispatch(abc1); // store.dispatch(abc2); // store.dispatch(abc1); store.dispatch(abc2); // store.dispatch(abc2); // store.dispatch(abc3); // var initState = State(initialState: {'abc': '@@@'}); // var result = DvaStore(abc, initState); // var putInitialized = PutEffect(actionType: abc.type); // var putReducer=Reducer(actionType: abc.type) // print(putInitialized.actionType); // result.dispatch(abc); // print(result.mapActionToState(result.currentState, abc)); //print(abc.payload.payload['foo']['baz']); // var result = asynchronousNaturalsTo( // n: Future.value(100), 
// k: (d) async { // return await d + 1; // }).last; // print(await result); // print(result); // var eee = ContractStatus.REJECTED.toString(); // var value = ({Payload payload, Function callFunc}) async* { // yield callFunc(payload); // }; // var key = 'func'; // Map effect = Map.fromEntries([MapEntry(key, value)]); // effect['func'](payload: pl, callFunc: print).toList().then((d) => d); } Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: dva_dart: ^0.1.0 2. Install it You can install packages from the command line: with pub: $ pub get with Flutter: $ flutter pub get Alternatively, your editor might support pub get or flutter pub get. Check the docs for your editor to learn more. 3. Import it Now in your Dart code, you can use: import 'package:dva_dart/dva_dart.dart'; Popularity: Describes how popular the package is relative to other packages. [more] 9 Health: Code health derived from static analysis. [more] 99 Maintenance: Reflects how tidy and up-to-date the package is. [more] 90 Overall: Weighted score of the above. [more] 52 Learn more about scoring. We analyzed this package on Aug 18, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: • Dart: 2.4.0 • pana: 0.12.19 Platforms Detected platforms: Flutter, web, other No platform restriction found in primary library package:dva_dart/dva_dart.dart. Health issues and suggestions Document public APIs. (-0.42 points) 99 out of 101 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API. Fix lib/src/Model.dart. (-0.50 points) Analysis of lib/src/Model.dart reported 1 hint: line 127 col 30: Use ; instead of {} for empty constructor bodies. Maintenance issues and suggestions Support latest dependencies. 
(-10 points) The version constraint in pubspec.yaml does not support the latest published versions for 1 dependency (rxdart).

Dependencies

Direct dependencies:
  Dart SDK   >=1.8.0 <3.0.0
  meta       ^1.1.6   (resolved: 1.1.7)
  rxdart     ^0.20.0  (resolved: 0.20.0; available: 0.22.1+1)

Dev dependencies:
  test       ^1.3.2
Hi, can anyone help with the following? Given the language

L = \{\sigma 1^{3k} \in \{0,1\}^* \mid \sigma \text{ codes a Turing machine } M \text{ which on input } \sigma 1^{3k} \text{ halts within at most } 2^k \text{ steps without accepting}\},

show that L \notin P and find a function T(n) such that L \in Time(T(n)).

This is the first time I have been asked to show a language is not in some class, and I'm quite stumped.
Write a computational program that reads a file of station coordinates and observations and then:

(a) writes the data to a file in a formatted fashion.
(b) computes the J, K, and W matrices.
(c) writes the matrices to a file that is compatible with the MATRIX program.
(d) Demonstrate this program with Problem 16.6.

Problem 16.6

Using the program ADJUST, do a weighted least squares adjustment using the data given in Problem 15.7 with the additional distances given below.

(a) What is the reference standard deviation, S0?
(b) List the adjusted coordinates of the unknown stations and the standard deviations.
(c) Tabulate the adjusted observations, the residuals, and the standard deviations.
(d) List the inverted normal matrix used in the last iteration.
{"draft":"draft-ietf-dime-extended-naptr-09","doc_id":"RFC6408","title":"Diameter Straightforward-Naming Authority Pointer (S-NAPTR) Usage","authors":["M. Jones","J. Korhonen","L. Morand"],"format":["ASCII","HTML"],"page_count":"14","pub_status":"PROPOSED STANDARD","status":"PROPOSED STANDARD","source":"Diameter Maintenance and Extensions","abstract":"The Diameter base protocol specifies mechanisms whereby a given realm\r\nmay advertise Diameter nodes and the supported transport protocol.\r\nHowever, these mechanisms do not reveal the Diameter applications\r\nthat each node supports. A peer outside the realm would have to\r\nperform a Diameter capability exchange with every node until it\r\ndiscovers one that supports the required application. This document\r\nupdates RFC 3588, \"Diameter Base Protocol\", and describes an\r\nimprovement using an extended format for the Straightforward-Naming\r\nAuthority Pointer (S-NAPTR) application service tag that allows for\r\ndiscovery of the supported applications without doing Diameter\r\ncapability exchange beforehand. [STANDARDS-TRACK]","pub_date":"November 2011","keywords":["[--------]","Services Field","Peer Discovery"],"obsoletes":[],"obsoleted_by":[],"updates":["RFC3588"],"updated_by":[],"see_also":[],"doi":"10.17487\/RFC6408","errata_url":null}