content
stringlengths
228
999k
pred_label
stringclasses
1 value
pred_score
float64
0.5
1
Unity Editor 替换 UISprite 引用 Altas 2017-10-17 雨辰 Unity3D using UnityEngine; using UnityEditor; using System.Collections.Generic; /// <summary> /// 根据鼠标点中的对象批量修改所有精灵 图集,脚本位于Editor文件夹 /// </summary> public class ChangeAltasWindow : EditorWindow { private static bool isChangAltas = false; private static int altCount = 1; public static List<UIAtlas> atles = new List<UIAtlas>(); [MenuItem("Window/修改图集引用")] private static void ShowWindow() { ChangeAltasWindow cw = GetWindow<ChangeAltasWindow>(true, "修改图集引用"); cw.minSize = new Vector2(310, 200); cw.maxSize = new Vector2(310, 300); //重置 面板 atles.Clear(); altCount = 1; } private void OnGUI() { GUILayout.Space(5); isChangAltas = EditorGUILayout.Toggle("是否改变当前图集", isChangAltas); GUILayout.Space(5); if (isChangAltas) { altCount = EditorGUILayout.IntField("修改图集数量", altCount); for (int i = 0; i < altCount; i++) { atles.Add(new UIAtlas()); atles[i] = (UIAtlas)EditorGUILayout.ObjectField("目标图集" + i.ToString(), atles[i], typeof(UIAtlas), true); } } GUILayout.Space(5); //创建确认按钮 if (GUILayout.Button("确认替换", GUILayout.Height(30), GUILayout.Width(300))) { Change(); } } public static void Change() { if (!isChangAltas) { return; } if (atles.Count == 0) { return; } Object[] sprites = Selection.GetFiltered(typeof(UISprite), SelectionMode.Deep); foreach (Object item in sprites) { UISprite sprite = (UISprite)item; if (sprite.atlas != null) { foreach (UIAtlas atl in atles) { if (atl != null && sprite.atlas.name == atl.name) { sprite.atlas = atl; Debug.Log(">>>>>>>>UIAtlas name <" + sprite.atlas.name + "> sprite name <" + sprite.name +">" ); break; } } } EditorUtility.SetDirty(item); //重要(有点像应用设置的意思) } } } 标签: Unity3D-Editer 发表评论: 雨辰 joyimp|@2011-2018 京ICP备16030765号
__label__pos
0.997575
Display Device Display Device Display devices are used as output devices when a connection is provided to a single or multiple video sources. They provide output in the visual form in a display; the video adapter gets a signal from the computer. Now this video adaptor displays a particular graphic or text or any other character depending on the signal. Then, the adaptor sends it to the display system which has been connected to the device via a connection. But, before sending any signal, the adapters render it first. Rendering means to convert a single instruction into several instructions. These instructions tell a display device how to draw the graphic. The display device that has been connected to the computer will decide how the character or the information will be displayed to the user. In this section, you will study about various   types   of displays and their properties. Display Types • LCD (Liquid crystal display) • Plasma Display • Projection System • OLED (organic light emitting diode) Display Please follow and like us: Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.85867
Python in TouchDesigner | Op Class | TouchDesigner The OP Class Taking some time to really understand how you might take better advantage of classes when using Python in TouchDesigner is well worth the time and effort – even if frustrating and intimidating at first. Chances are, you’ve already used some class methods without even knowing it, and here we’re going to take a quick opportunity to better understand them, and how you might use them. In the last example we looked at functions, and I mentioned that methods are also functions – in the same way that all squares are rectangles, but squares are a special kind of rectangle. Similarly, methods are a special kind of function. Special in that they belong to a class. In a highly simplified way, we might think of a class as a grouping a functions with a particular purpose. We can also use dot notation to access the members of a class. Let’s look at a simple example. For a hot second we’re going to depart from TouchDesigner and just talk about this problem as a programmer, then we’ll return to how this works and looks in Touch. Let’s imagine you want to put together a set of conversion tools. One approach would be to put all of your conversion functions together into one big class. If you’re only dealing with a few hundred lines of code that might be fine, but over time you’re likely going to need to keep updating this class, or you might find that it’s thousands of lines long suddenly and a bit unruly to wrangle. You might instead choose to separate functions into different classes that are thematically related. We might, in this case, choose to write a Temperature Conversion Class, and a Measurement Conversion Class as separate collections of code. That might look like this: class TemperatureConversion(): def F_to_C( self, temp_in_F ): temp_in_C = ( temp_in_F - 32 ) * ( 5/9 ) return temp_in_C def C_to_F( self, temp_in_C ): temp_in_F = ( temp_in_C * ( 9/5 ) ) + 32 return temp_in_F class MeasurementConversion(): def Inches_to_Centimeters( self, inches ): centimeters = inches * 2.54 return centimeters def Centimeters_to_Inches( self centimeters ): inches = centimeters * 0.39 return inches So what’s the benefit here? Now when calling one of these functions we can use dot notation in order to get the results. For example: print( TemperatureConversion.F_to_C( 50 ) ) print( TemperatureConversion.C_to_F( 100 ) ) print( MeasurementConversion.Inches_to_Centimeters( 12 ) ) print( MeasurementConversion.Centimeters_to_Inches( 1200 ) ) Organizationally, here we can easily see how our different classes give us a quick way to separate functions. Whew. Alright, that’s a lot of back-story in order to help us have a way to think about classes in TouchDesigner. We have to think / know about classes because that’s a part of organizational structure that we’re relying on when we use any dot notation for a method call. The Op Class applies to all operators in TouchDesigner, which means that the methods associated with it can be called in relation to any op – part of the reason we’re working through what that means. Okay, let’s look at some examples. If you haven’t already looked at the wiki page about the Op Class, you should do that now. In the next group of examples we’re going to use the Eval DAT in order to see how we can evaluate expressions quickly and easily. I frequently use the Eval DAT for just this reason, so I can see which parts of my expressions are working and which parts aren’t. Okay. Let’s first look at digits: me.digits Digits returns the integer number associated with an operator. In the example above we get the digits for the operator in question. In the example network it’s returning the digits for the operator table1. Let’s look at another use of digits: op( 'table2' ).digits In this example we’re asking for the digits for table2. Now, it might seem a little useless to ask for the digits of an operator you already know the digits for, but it’s not hard to imagine a situation where this becomes very handy. This is especially useful when we use replicators. parent().digits The above, for example, is a great way to get get the digits of a parent. When using a replicator you might use this approach to increment through the rows of a source table. Let’s look at some variations on the way you might retrieve the name of an operator: parent().name me.parent().name op( '..' ).name op( '/python_in_touchdesigner/example_op_class' ).name All of the above return the same result. parent() is a method that accepts an argument for relational distance. Let’s say we wanted to get information from our grandparent component: parent(2).name me.parent(2).name op( '../..' ).name op( '/python_in_touchdesigner' ).name Great, but what other kinds of methods can we use? Before the findOp DAT existed, you might use the findChildren() method to retrieve information about operators in a given component. In this case, I’m using a table generate rows for every operator in a component, and then using an Eval DAT to write one expression that’s uniquely evaluated for each row: parent().children[ me.inputRow ] Alright, one more time let’s go back to the wiki article on the Op Class. This time we’re going to take what we’ve learned about the Eval DAT, and what we’ve learned about classes to look at all of the methods we have access to for a text TOP. Let’s write out an expression for each of the methods: 'valid' op( 'text2' ).valid 'id' op( 'text2' ).id 'name' op( 'text2' ).name 'path' op( 'text2' ).path 'digits' op( 'text2' ).digits 'base' op( 'text2' ).base 'passive' op( 'text2' ).passive 'time' op( 'text2' ).time 'activeViewer' op( 'text2' ).activeViewer 'allowCooking' op( 'text2' ).allowCooking 'bypass' op( 'text2' ).bypass 'cloneImmune' op( 'text2' ).cloneImmune 'current' op( 'text2' ).current 'display' op( 'text2' ).display 'expose' op( 'text2' ).expose 'lock' op( 'text2' ).lock 'selected' op( 'text2' ).selected 'render' op( 'text2' ).render 'viewer' op( 'text2' ).viewer 'nodeHeight' op( 'text2' ).nodeHeight 'nodeWidth' op( 'text2' ).nodeWidth 'nodeX' op( 'text2' ).nodeX 'nodeY' op( 'text2' ).nodeY 'nodeCenterX' op( 'text2' ).nodeCenterX 'nodeCenterY' op( 'text2' ).nodeCenterY 'inputs' op( 'text2' ).inputs 'outputs' op( 'text2' ).outputs 'type' op( 'text2' ).type 'subType' op( 'text2' ).subType 'label' op( 'text2' ).label 'family' op( 'text2' ).family 'isFilter' op( 'text2' ).isFilter 'minInputs' op( 'text2' ).minInputs 'maxInputs' op( 'text2' ).maxInputs 'isMultiInputs' op( 'text2' ).isMultiInputs 'visibleLevel' op( 'text2' ).visibleLevel 'isBase' op( 'text2' ).isBase 'isCHOP' op( 'text2' ).isCHOP 'isCOMP' op( 'text2' ).isCOMP 'isDAT' op( 'text2' ).isDAT 'isMAT' op( 'text2' ).isMAT 'isObject' op( 'text2' ).isObject 'isPanel' op( 'text2' ).isPanel 'isSOP' op( 'text2' ).isSOP 'isTOP' op( 'text2' ).isTOP You’ll notice that I’ve separated the name of the method from the expression with a tab. This way when we feed our Eval DAT we get two columns – one with the name of the method, and another with the returned value. You’ll notice that some methods are marked as ( Read Only ). This means that we can see information in calling these methods, but we can’t change anything about our Operator. Let’s look at an example of how we can make a change to an operator. Color is something we can change for any operator. I’m going to add three text DATs to my network. One operator to act on, and two text DATs where I’m going to write a simple script. First let’s change the color of our operator to red: target_op = op( 'text1' ) target_op.color = ( 1 , 0 , 0 ) If we right click and run this script we’ll see that we’ve changed the color of our operator! Wait, let’s change it back: target_op = op( 'text1' ) target_op.color = ( 0.5450000166893005 , 0.5450000166893005 , 0.5450000166893005 ) Perfect. This might seem like a silly example, but it brings to our attention how we might use various class methods to make changes to our networks. As a quick note, you might notice that I’ve written my scripts in two lines when I could have written them in one. Right? Why write: target_op = op( 'text1' ) target_op.color = ( 1 , 0 , 0 ) When I could just write: op( 'text1' ).color = ( 1 , 0 , 0 ) Part of the way that I work these days is to anticipate that I’m going to incorporate the pieces of a test script into a larger method or function. Separating the operator from the function call makes it much easier to begin thinking about how I might extend this simple script in the future. I could easily start to think of writing a function that looked like: def Make_ops_red( op_path ): target_op = op( op_path ) target_op.color = ( 1, 0, 0 ) return Okay, let’s look at one other interesting thing we might consider. What if we wanted to script the processing adding ops to a network? We can do just this with the copy method: # create a new variable called new_op # this is also a copy of the operator out 1 new_op = parent().copy( op( 'moviefilein1' ) ) # since we've defind our new op with the variable # name new_op we can continue to use this name # our next step will be to give it a name new_op.name = 'moviefilein_new_op' # finally we're going to change the location of # our new operator. In this example we want it # created at a location in relation to our original # operator. We start by finding the original operator's # y position, and then subtract 200 new_op.nodeY = op( 'moviefilein1' ).nodeY - 100 There are, of course, many more things you can do with the Op Class – my hope is that this helps you get a sense of where to start and pushes you to start experimenting a little more. 1 thought on “Python in TouchDesigner | Op Class | TouchDesigner 1. Pingback: Python in TouchDesigner | Dictionary Loops | TouchDesigner | Matthew Ragan Comments are closed.
__label__pos
0.798296
An element in an indexed free module. AUTHORS: class sage.modules.with_basis.indexed_element.IndexedFreeModuleElement Bases: sage.structure.element.ModuleElement Create a combinatorial module element. This should never be called directly, but only through the parent combinatorial free module’s __call__() method. monomial_coefficients(copy=True) Return the internal dictionary which has the combinatorial objects indexing the basis as keys and their corresponding coefficients as values. INPUT: • copy – (default: True) if self is internally represented by a dictionary d, then make a copy of d; if False, then this can cause undesired behavior by mutating d EXAMPLES: sage: F = CombinatorialFreeModule(QQ, ['a','b','c']) sage: B = F.basis() sage: f = B['a'] + 3*B['c'] sage: d = f.monomial_coefficients() sage: d['a'] 1 sage: d['c'] 3 To run through the monomials of an element, it is better to use the idiom: sage: for (t,c) in f: ....: print("{} {}".format(t,c)) a 1 c 3 sage: s = SymmetricFunctions(QQ).schur() sage: a = s([2,1])+2*s([3,2]) sage: d = a.monomial_coefficients() sage: type(d) <... 'dict'> sage: d[ Partition([2,1]) ] 1 sage: d[ Partition([3,2]) ] 2 to_vector(new_base_ring=None, order=None, sparse=False) Return self as a vector. INPUT: • new_base_ring – a ring (default: None) • order – (optional) an ordering of the support of self • sparse – (default: False) whether to return a sparse vector or a dense vector OUTPUT: a FreeModule() vector Warning This will crash/run forever if self is infinite dimensional! See also EXAMPLES: sage: F = CombinatorialFreeModule(QQ, ['a','b','c']) sage: B = F.basis() sage: f = B['a'] - 3*B['c'] sage: f._vector_() (1, 0, -3) One can use equivalently: sage: f.to_vector() (1, 0, -3) sage: vector(f) (1, 0, -3) More examples: sage: QS3 = SymmetricGroupAlgebra(QQ, 3) sage: a = 2*QS3([1,2,3]) + 4*QS3([3,2,1]) sage: a._vector_() (2, 0, 0, 0, 0, 4) sage: a.to_vector() (2, 0, 0, 0, 0, 4) sage: vector(a) (2, 0, 0, 0, 0, 4) sage: a == QS3.from_vector(a.to_vector()) True sage: a.to_vector(sparse=True) (2, 0, 0, 0, 0, 4) If new_base_ring is specified, then a vector over new_base_ring is returned: sage: a._vector_(RDF) (2.0, 0.0, 0.0, 0.0, 0.0, 4.0) Note trac ticket #13406: the current implementation has been optimized, at the price of breaking the encapsulation for FreeModule elements creation, with the following use case as metric, on a 2008’ Macbook Pro: sage: F = CombinatorialFreeModule(QQ, range(10)) sage: f = F.an_element() sage: %timeit f._vector_() # not tested 625 loops, best of 3: 17.5 micros per loop Other use cases may call for different or further optimizations.
__label__pos
0.979626
How to clean system older trash dan 7 days When I rebuild (switch) my configuration.nix file with the below code to clean the system for older than 7 days it returns in errors. ### clean system nix = { settings.auto-optimise-store = true; gc = { automatic = true; dates = "weekly"; options = "--delete-older-than 7d"; }; }; What errors does it return? It looks correct to me. I’d also suggest setting nix.gc.persistent if your computer won’t be running 24/7. Is it just because there’s a missing semi-colon at the end of the options line? 1 Like [henry@nixos:~/git/mynixos]$ sudo nixos-rebuild switch error: syntax error, unexpected '=', expecting end of file at /etc/nixos/configuration.nix:352:5: 351| ### clean system 352| nix = { | ^ 353| settings.auto-optimise-store = true; (use '--show-trace' to show detailed location information) building Nix... error: syntax error, unexpected '=', expecting end of file at /etc/nixos/configuration.nix:352:5: 351| ### clean system 352| nix = { | ^ 353| settings.auto-optimise-store = true; (use '--show-trace' to show detailed location information) building the system configuration... error: syntax error, unexpected '=', expecting end of file at /etc/nixos/configuration.nix:352:5: 351| ### clean system 352| nix = { | ^ 353| settings.auto-optimise-store = true; (use '--show-trace' to show detailed location information) Indeed, yet it does not solve all errors. No, it is not a server, but a desktop. However, sometimes the machine will be on for many days at a time. What would you advise in such situation? What about: system.autoUpgrade.enable = true; nix.settings.auto-optimise-store = true; nix.gc.automatic = true; nix.gc.dates = "daily"; nix.gc.options = "--delete-older-than 7d"; Stupid me!!! It was a ‘}’ in the wrong place afterall. OMG Thanks though guys! 1 Like I would have persistent on pretty much all the time. It just makes sure that if your computer isn’t on when the timer triggers, it will do the garbage collection when you next turn it on. I think that’s the least surprising behavior, frankly. 1 Like Like this right? ### clean system nix = { settings.auto-optimise-store = true; gc = { automatic = true; persistent = true; dates = "weekly"; options = "--delete-older-than 7d"; }; }; 1 Like
__label__pos
0.999078
Unable to grab multiple frames from a Dalsa Linea We have two Dalsa Linea 8K GigE cameras that should be encoder-triggered per line and capturing 8192x2048 frames using CVB 13.02.004. However, we’re having a issue capturing more than one frame from each camera from a single grab. We initially noticed this issue in our own software, but the same issue occurs in Genicam Browser with a single camera, whether triggered or in free run mode. The strange thing is that if I connect to a camera in Sapera CamExpert and grab a frame, subsequently grabbing within our software or Genicam Browser works fine for that camera, at least until it is restarted. The Sapera version is 8.50. Has anyone seen anything similar or have an idea how to address this? Hello ChrisE, one possible cause for the behavior described is the setting of parameter Acquisition Mode. For the value Single Frame only one frame is captured for each AcquisitionStart command which is executed in the background. The following picture is showing the parameter. GenICamBrowser_Linea_AcquisitionMode The CamExpert of the SaperaLT seems to ignore this parameter. The following image is showing the parameter viewed with the CamExpert. The value of the parameter is still preserved after closing CamExpert and reentering GenICamBrowser. So a possible reason for the described behavior might be that the parameter Acquisition Mode was set to Single Frame accidentally. If this is the case readjust the value to Continuous. 1 Like Hi KVo, That fixed it, thanks!
__label__pos
0.845053
method-combination-utilities https://github.com/sellout/method-combination-utilities.git git clone 'https://github.com/sellout/method-combination-utilities.git' (ql:quickload :method-combination-utilities) 10 Method combinations are one of the more obscure bits of Common Lisp. I think they're pretty fantastic and should be understood and used by more developers. This library is an attempt to provide some tools to aid in defining new method combinations as well as a few simple combinations that may be generally useful. (METHOD-COMBINATION-EXPAND form) This macro is to method combinations what MACROEXPAND is to macros. Given a function call form, it'll expand to the form used to call the methods. For example, given: (defgeneric print-slots (object) (:method-combination basic progn t) (:method progn ((object superclass)) ...) (:method progn ((object subclass)) ...)) (pprint (method-combination-expand (print-slots subclass-instance))) the result should look something like: (PROGN (CALL-METHOD #<STANDARD-METHOD PRINT-SLOTS PROGN (SUBCLASS)>) (CALL-METHOD #<STANDARD-METHOD PRINT-SLOTS PROGN (SUPERCLASS)>)) This can be extremely helpful both for users of method combinations and developers of them. Definition Helpers (CALL-METHODS methods) This is FLETed (or expanded in-line) in almost all method combinations, including every DEFINE-METHOD-COMBINATION example in the spec. The name isn't the best, but it has a strong tradition. This function just returns a list of CALL-METHOD forms, one for each method passed in. EG: (call-methods '(1 2 3)) => ((CALL-METHOD 1) (CALL-METHOD 2) (CALL-METHOD 3)) (COMBINE-STANDARD-METHODS primary-methods &optional around-methods before-methods after-methods) In a lot of custom method combinations there is some attempt to keep the behavior of the STANDARD method combination. This function manages that portion of the method combination so other components can be layered on top. This example converts the 55-line WRAPPING-STANDARD method combination from arnesi into a much cleaner 17-line version. (define-method-combination wrapping-standard (&key (wrap-around-order :most-specific-last) (around-order :most-specific-first) (before-order :most-specific-first) (wrapping-order :most-specific-last) (primary-order :most-specific-first) (after-order :most-specific-last)) ((wrap-around (:wrap-around) :order wrap-around-order) (around (:around) :order around-order) (before (:before) :order before-order) (wrapping (:wrapping) :order wrapping-order) (primary () :order primary-order :required t) (after (:after) :order after-order)) "Same semantics as standard method combination but allows \"wrapping\" methods. Ordering of methods: (wrap-around (around (before) (wrapping (primary)) (after))) :wrap-around and :wrapping methods can use call-next-method." ;; :WRAP-AROUND is similar to :AROUND and :WRAPPING is similar to primary, so ;; each pair can be concatenated and then we can just apply the standard ;; combination. (combine-standard-methods (append wrapping primary) (append wrap-around around) before after)) (WRAP-PRIMARY-FORM primary-form &optional around-methods before-methods after-methods) This is similar to COMBINE-STANDARD-METHODS, but it takes an already-computed primary form, rather than a list of primary methods. This is because it's fairly common to have some custom behavior for the primary methods and then combine it with the usual :AROUND/:BEFORE/:AFTER methods. This example is simplified from pretty much the entire nisp-standard-combination.lisp file from the nisp project. Note that (based on the pathname), this combination (or at least the version I linked to) is probably obsolete. (define-method-combination nisp-standard (&key hook) ((defaulting (:defaulting) :order :most-specific-last) (meta-around (:meta-around)) (around (:around)) (before (:before)) (primary () :required t) (after (:after) :order :most-specific-last)) "This behaves similarly to `STANDARD`, but wraps the whole thing with :DEFAULTING (most-recent-last) and :META-AROUND methods. Also, if a HOOK is passed, the primary methods are treated as in the BASIC combination, with HOOK as the operator. (defaulting (meta-around (around (before) (primary) (after))))" (wrap-primary-form (if hook `(,hook ,@(call-methods primary)) `(call-method ,(first primary) ,(rest primary))) (append defaulting meta-around around) before after)) Method Combinations (PRIMARY) The PRIMARY method combination is a stripped-down version of the STANDARD method combination that only allows primary methods. Taken from ISLISP’s NIL method combination (but renamed because CLs with package locks don't like it) after it was suggested by Pascal Costanza. (LAX) The LAX method combination is intended for use in cases where you are handling your qualifiers in a custom method class and don't really need a custom method combination, but you need the STANDARD method combination to quietly ignore your special qualifiers. (a la Closer’s Filtered Functions) (BASIC operator &optional identity-with-one-argument-p order) The intent of the BASIC method combination is to obviate the need for the short form of DEFINE-METHOD-COMBINATION. With this method combination, you have the same functionality but without having to define the method combination separately from where it's used. Of course, I've never actually seen any code that uses the short form of DEFINE-METHOD-COMBINATION (and I've looked). However, since all the (non-STANDARD) built-in method combinations behave as if they were created using the short form, this also acts as a proposal for any updates to Common Lisp – rather than having ten built-in combinations (four of which I have never seen used), have only two: STANDARD and BASIC, and eliminate the short form of DEFINE-METHOD-COMBINATION. For example, the built-in + combination can be replicated with (basic + t). (APPEND/NCONC &optional order) Using either the built-in APPEND or NCONC combinations (or those functions as the operator for the BASIC combination) misses something important. NCONC is largely a non-consing optimization of APPEND (if you are somehow using the method combination to make circular structures or something, then the built-in NCONC is probably a better bet). If you use the built-in combinations, then every method on your function needs to share the same qualifier, say, APPEND. If you later decide that it is safe to use NCONC, then you need to change the combination on the generic function and change all of your methods – and hope that no users of your library have added additional APPEND methods to the function. The APPEND/NCONC combination, however, allows the use of either qualifier. Methods that return a list safe to be RPLACDed can use the NCONC qualifier, while those that don't can use APPEND. If all but the last applicable primary method (most- or least-specific, depending on the ORDER parameter) use the NCONC qualifier, than the results will be concatenated with NCONC, otherwise it will use APPEND. This allows the optimization to be added on a case-by-case basis and the gains to be had where possible, without tying users to your specific implementation decision.
__label__pos
0.89159
Click here to monitor SSC SQLServerCentral is supported by Red Gate Software Ltd.   Log in  ::  Register  ::  Not logged in       Converting a File with Parent-Child Records in SSIS By Jeff Singleton, Overview A common requirement in ETL scenarios is to extract data from a single file that has multiple record types with those records having parent-child relationships or a single record that can have one or more detail records that relate to it. SQL Server Integration Services (SSIS) offers a number of solutions for this scenario. In this article, I will demonstrate a data flow solution for this scenario through the use of conditional split, script component (using VB.NET) and merge join. The input file is a comma separated sales report. The parent records detail an item bought and it's attributes, while the child records contain information about customers who have purchased that item in a single field and information regarding their purchases (e.g. price, quantity) The methodology will be to identify "good records" and allow those through the data flow. Next, a script component will be created that will assign a parent-child key to each record and direct each row to it's respective output. After each row has been directed to a separate output, I will need to scrub the customer address field using another script component. Finally, both the customer and product records will be sorted by their parent-child key and merged together using a merge join transformation and output into a SQL table. Implementation First, let's take a look at the sample file to be loaded Figure 1 - Sample file As you can see, there are two header records that contain column names: product and customer respectively. The third row has product information, while the fourth and fifth rows have customer information. The sixth row contains summary information that is not needed for this transformation. The next step will be setting up a connection manager to extract this file. Notice that I have put double quotes (") in as a text qualifier. This will ensure that all of the customer information is captured in a single field and not delimited by 'City, State'. Figure 2 - Flat File Connection Manager Editor To extract only non-blank records into the actual data flow, I will add a conditional split to the transformation. The condition will check the first column to see if there is any data contained in that column. If there is data in that column the transformation sends the record to the output 'GoodRecs'. Figure 3 - Conditional Split Transformation- excludes blank records Once the conditional split is complete, drag and drop a 'Script Component' transformation into the data flow and choose type 'Transformation'. Select all columns under Input Columns then choose the 'Inputs and Outputs' view. Here, add 3 Outputs (Product, Customer & Summary) and add an output column to both the Product (ProdKey) and Customer (CustKey) Outputs. For the common properties (right side of the editor) of each output, you will need to change the field 'ExclusionGroup' to match the exclusion group of the Input (typically 1) and choose 'input "YourInput" (001)' as the 'SynchronousInputID'. Figure 4 - Script Transformation Editor - splitting records by type Next, choose the 'Script' view and click 'Design Script' at the bottom right hand corner of this box. Copy the below script into the script task. I have created variables CustPattern and SumPattern that identify patterns unique to each record type. When directing a customer record to the customer output, this variable will be called to validate that the record is, in fact, a customer record. Imports System Imports System.Data Imports System.Math Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper Imports Microsoft.SqlServer.Dts.Runtime.Wrapper Public Class ScriptMain Inherits UserComponent Dim counter As Integer 'Record Pattern for Customer Records Dim CustPattern As String = "*;*" 'Record Pattern for Summary Records Dim SumPattern As String = "*(Entries*" 'Record Pattern for Product Records left blank Public Overrides Sub PreExecute() counter = 0 'Initialize counter = 0 End Sub Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer) 'Increment counter by 1 on each pass of blank second column If Row.Column1 <> "" Then counter += 1 End If counter = counter 'Set keys = counter value Row.CustKey = counter.ToString Row.ProdKey = counter.ToString 'Output records based on record patterns If Row.Column0 Like CustPattern Then Row.DirectRowToCustomerRecord() ElseIf Row.Column0 Like SumPattern Then Row.DirectRowToSummaryRecord() Else : Row.DirectRowToProductRecord() End If End Sub End Class Now your data flow should look like Figure 5 below. Figure 5 - Data Flow Now, we need to scrub those nasty customer records with another script component. As before, create a 'Script Component' and choose type 'Transformation'. Select all columns under Input Columns then choose the 'Inputs and Outputs' view. Next, create an output called ScrubbedOuput and add output columns Zip, State, City, Address, CustomerName, CustomerNbr. For the common properties of each output, you will need to change the field 'ExclusionGroup' to match the exclusion group of the Input (typically 1) and choose 'input "YourInput" (001)' as the 'SynchronousInputID' just like before. Figure 6 - Script Transformation Editor - scrub customer records Next, choose the 'Script' view and click 'Design Script' at the bottom right hand corner. Copy the below script into your script task. Here, I have created the variable 'Scrub', just to make the script easier to read and modify. Imports System Imports System.Data Imports System.Math Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper Imports Microsoft.SqlServer.Dts.Runtime.Wrapper Public Class ScriptMain Inherits UserComponent Dim Scrub As String Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer) ' Scrub = Row.Column0 Row.CustomerNbr = Scrub.Substring(0, 8) Row.CustomerName = Scrub.Substring(8, Scrub.IndexOf(";", 1) - 8) Row.Address = Scrub.Substring(Scrub.IndexOf(";", 1) + 2, (Scrub.LastIndexOf(";")) - (Scrub.IndexOf(";", 1)) - 2) Row.City = Scrub.Substring(Scrub.LastIndexOf(";") + 2, (Scrub.LastIndexOf(",")) - (Scrub.LastIndexOf(";")) - 2) Row.State = Scrub.Substring(Scrub.LastIndexOf(",") + 2, 2) Row.Zip = Scrub.Substring(Scrub.LastIndexOf(",") + 6, 5) Row.DirectRowToScrubbedOutput() ' End Sub End Class Now your data flow should look like Figure 7 below. Figure 7 - Data Flow Task Before we can merge these two record types together, we need to add 'Sort' transformations to both the ProductRecord flow and the CustomerRecord flow on the ProdKey and CustKey fields. Once that is complete, we can add a 'Merge Join' transformation joining the two record types on ProdKey = CustKey. Figure 8 - Merge Join Transformation Editor - join on ProdKey = CustKey Once the join is complete, add your destination. Now your data flow should look like Figure 9 below. With the package configuration completed, you can execute the package and watch the data flow to multiple record types and then back together to create a single flattened record. Figure 9 - Final Data Flow Task Conclusion In this article, I've demonstrated how to use SQL Server Integration Services (SSIS) to create a data flow task that can handle multiple record types with a parent-child relationship. The transformation extracts non-blank records, creates a key for each record and splits that record by type and then merges the records back together to create a flattened record with a 1-to-1 relationship. Total article views: 5077 | Views in the last 30 days: 2   Related Articles FORUM Custom Transforms Anyone know of a custom transform library? FORUM Transformation VB Script Import Transformation VB Script ARTICLE Using the Script Component With Multiple Outputs This article demonstrates the use of the SSIS Script Component to create multiple outputs from uncon... FORUM understanding the load_Transform_salescreditcard script understanding the load_Transform_salescreditcard script FORUM Data Flow Destination columns copying to Data Flow Transformation Output How can I get Data Flow Destination columns metadata copied to Data Flow Transformation Output?   Contribute Join the most active online SQL Server Community SQL knowledge, delivered daily, free: Email address:   You make SSC a better place As a member of SQLServerCentral, you get free access to loads of fresh content: thousands of articles and SQL scripts, a library of free eBooks, a weekly database news roundup, a great Q & A platform… And it’s our huge, buzzing community of SQL Server Professionals that makes it such a success. Join us! Steve Jones Editor, SQLServerCentral.com Already a member? Jump in: Email address:   Password:   Remember me: Forgotten your password? Steve Jones    
__label__pos
0.726309
back to article Sean Parker launches Chatroulette killer: For why? Celebrity billionaire tech investor Sean Parker thinks video chat is what the world needs most, and is putting his money where his mouth is. Now pinch yourself that it isn't 1995. Video chat is the innovation that nobody has ever wanted, but has never gone away. The Facebook and Spotify investor, who was a driving force … COMMENTS This topic is closed for new posts. 1. EvanPyle Trollface "Parker believes he can stop the inevitable spiral into filth by building in "abuse prevention" filters." The man who backed a P2P network honestly believes he can stop exactly what people will use his service for, Porn and Trolling. Oh yeah, has he head of this neat application called Skype? 2. Anonymous Coward Anonymous Coward What are companies like MS and facebook seeing that no one else is? I know of only one person who currently uses video to chat, and that's because they just discovered it on their Iphone. Skype users have increased (kinda) at my work location, but it's almost like a fad, I saw people using it, now I see less and less doing so. We've got a whole conference room setup with cameras, microphones, large projection screens, etc.... for doing exactly this, but.....it's not done because no one show interest in doing so.... I guess I could see it with the younger gen, and kinda becoming a norm....so maybe..... 1. Anonymous Coward Anonymous Coward My entire family use Skype to stay in touch, especially to see my bouncing baby nephew throw stuff around my sister's pristine front room. Then at work all our meeting rooms have video conferencing facilities and multiple 42" LCDs to view from, which we use to have multi-continent meetings - China, Singapore and Aus in the mornings, US in the afternoons. All our laptops have webcams built in and we use that when we can't get a meeting room. It's not exactly chat roulette, but video is hugely important to both my personal and professional life. 2. The Indomitable Gall Teaching...? There's a reasonable market in internet language teaching, and let me tell you it works a lot better as a video conference than voice only. I teach one-on-one using Skype, and I was taking French lessons with the OU in a virtual classroom with no video -- Skype's a million times better. When either party is trying to deal with an unfamiliar language, visual cues are far more important. Although you have to learn to nod and shake your head very slowly.... 3. Reality Dysfunction Childcatcher inevitable spiral into filth by building in "abuse prevention" filters. So every user will get a current photo of Margaret Thatcher to stick up next to the computer?? 1. Havin_it Coffee/keyboard >a current photo of Margaret Thatcher As opposed to back in the 80s when she was hawt? 4. JDX Gold badge >>But today, even with free telephony and ubiquitous webcams, video chat hasn't broken out of its niche market of murky exhibitionists. People use video-chat on Phones/FaceTime all the time. 1. Irongut When / where? I have never once had anyone try to video call me on my phone and have never seen anyone using that feature either. Even 99% of the Skype use I've seen / participated in has just been using it for a cheap audio only conference call. 1. RAMChYLD Boffin Re: When / where? East Asian countries, particularly South Korea and Japan. Especially Japan. People there seem to use video calls for practically everything. Then again, Japanese culture is complex (for example, they seem to be perfectly fine with things westerners find revolting) and doesn't apply much anywhere else. Elsewhere, enterprises are starting to find it important to have video at teleconferences for some reason. I know companies who're looking into installing video conferencing kits into their branches because other companies insist on seeing the who's on the other end before being willing to get down to business. 2. geejayoh Stop People use video-chat on Phones/FaceTime all the time. But do they? Let's take FaceTime as an example - seeing as you've decided that it is it's own entity and innovation separate from other "phones" (not just a single enabling technology such as skype, facebook (which has video chat) , google talk et al.) (#I smell a fanbois, or just a goon susceptible to marketing). FaceTime only works over wifi. And you have to be friends with all the other apple fanbois. And you have to log into facetime. And so do they. If I'm sitting at my lappy or my computer I'll use Skype. But then it's just generally easier to chat over text. Especially if you're busy or don't want to be disturbed. I also read somewhere that 70% of men regularly make phone calls in the news. (Source: Scientific Proof Magazine). I think most of the time that would make me not want to make a video call. Unless it was some significant other on the other side. Plus - when the boss calls and asks why you're late, and you're still in your PJs and hungover from the night before, it's easier to hide behind a phone than video. 3. electricmonk >>> People use video-chat on Phones/FaceTime all the time in the commercials. There, fixed that for you. 5. Don Jefe Meh Fiber in the 80's? Most places are still selling their souls for fiber. The 'rest' of the world the author obviously hasn't done business is still, at best ISDN, it must be nice in your ivory tower. Video confrencing is used in many large companies - HP, Oracle, IBM and Cisco all use it daily. 6. frank ly Silver badge The Pink'Un .... ...is what people expected to see on ChatRoulette. I suspect that little will change. 7. LinkOfHyrule Gimp Cam 2 Cam? Sometimes people just invest in things they happen to like even if the business model is shit. Maybe he's into "cyber"? 8. Ryan 7 That video is painfully unfunny Much better example of chatroulette: http://www.youtube.com/watch?v=Eoa-KqIwW8s 9. Nick Ryan Silver badge Video calling... Pushed by the channels, not wanted by many other than web 2.0 sales pushers. The few that I know who went down the line with video calling/conferencing quickly gave up as the dream that was sold was nowhere the hard reality. From codecs where video is the driver resulting in choppy audio (choppy video is passable, choppy audio isn't) to the rampant lies about required upload bandwidth which when combined with the upload bandwidth lies given by ISPs results in nowhere enough bandwidth, it's often off to a very bad start. That's before the environmental issues of audio, lighting and visual presentation come in - you do have a dedicated room for video conferencing don't you? Video conferencing can work, and when it does it can work well but it takes a lot more effort than most solution pushers will ever admit. Of course, that's the commercial video conferencing side - while personal video conferencing shares the same bandwidth and performance issues and is frequently made useless by environmental factors (for example users with windows behind them), many users just prefer not to be visible due to the additional stress of trying to look good on camera. This said, it's fantastic for widely separated families to keep in touch, even if they don't often use the video option it's there as an option so they can see each other. As for using it on mobiles... forget anything other than wifi otherwise there's never anywhere near enough bandwidth to upload the video stream. 10. geejayoh Go Why has video not taken up it's mantle? Well, Possibly because it's too unreliable. Still. I use video chat on skype and QQ on my home broadband connection, and even across the world it seems to work fine. I would list several reasons why (and it's not the end point application like Naptard says it is) 1. Incompatible products - everyone has a phone and single identifiable phone number. If I want to voice call my friends and family, I just call their number. I have about 25% of my contact book on Skype. The others, only their phone number. Either they don't have Skype, or I just can't be bothered to get their Skype address. So for every new app that is released that can do video, there's a separate contact list to keep. As an example, Skype is terrible on phones. It never stays logged in, or it takes up too much memory that garbage collection comes around and kills it off if I open 1 or 2 new apps. This happens on Android and my iPad under iOS. So I'm never online long enough for people to call me. THEREFORE I can't just pick up my phone and dial like I would a phone number. Everyone has to be online at the right time. Apple may have made an attempt with facetime to integrate it a little - but there is a REASON why they made it Wi-Fi only (however restricting that is). The current networks are just not good enough, as are the methods of actually initiating the call. If there was some way to start a video chat simply by dialling the telephone number and choosing voice or video - I think a whole bunch more people would go for it. It would become second nature. I think 3 in the UK tried this for a while - do they still do it? But again, the problem is here - VENDOR LOCK-IN. Apple, 3 - only if you mates are on the same network or device, can you make easy, established ways to make video calls. So we've got some key limiting factors: time, place, device, accessibility. 2. Video chews through bandwidth. Had the telcos kept the unlimited usage wonderlands, we might have finally got around to make video a part of our everyday business, but right now, they just don't want you / I / us using video all the time. They haven't turned on enough of their dormant capacity / aren't through charging exorbitant rates yet to welcome wholesale use of video 3. Video is time sensitive to delivery of packets. Voice is too if over an IP network, but it's a connected network and the established infrastructure is there to handle it. True, voice will also cut off in a tunnel, but the 3G / HSPA services are just not realiable enough, on the move, to give us satisfactory video chats. Skype suffers immeasurably over a wi-fi network for good quality video. It's different to standard video streaming with all the processing that needs to be done either side. 4. Devices aren't good enough. The cameras that go into phones - especially the front ones, go in as an after thought. They're ostensibly put there for the very subject of this article, for video chat. But nobody ever uses them - because they're not integrated as natural addition to the telephone network. Until every phone network goes IP and Data only and even voice is across the data streams, so we can connect video to our phone numbers, then we won't take to the all seeing-all dancing video future. When this happens, over the top apps (Talk, Skype, et al) won't be needed, because the video call will be second nature. Pick up my phone, press the picture of my wife on the screen and a few seconds later it's replaced by a real-life grinning version of her. Moreover on top of this - I've seen plenty of articles suggesting that telcos are worried about becoming simply commodity players, only providing the tubes for the dominating companies (Google, Facebook et al). Well, if they got their act together and actually started moving towards integrating the video into the network like this, without the need for apps or other providers, they could have a bonafide reason to start offering package deals and making money off of it, like they did with SMSs. Once again, 3 already did it. Why not the rest? I for one, would welcome our new video overlords. Although I don't really fancy sitting on my metro to work in China listening to all the locals getting their face-time in with their squeeze early in the morning.. They're already loud enough on the phone - I fear with video the din would be deafening. 1. Captain Save-a-ho Coat Re: Why has video not taken up it's mantle? "If there was some way to start a video chat simply by dialling the telephone number and choosing voice or video - I think a whole bunch more people would go for it." It's called SIP. It's been around for a long time and most VoIP installations are based on it. While the uptake has been slow, it's mainstream now than ever and it's only a matter of time (years) before mobile phones convert to SIP for signaling voice/video and everything rides over a mobile data stream. I know Sprint has been planning this for a while, as part of their conversion to IPv6 and I suspect other carriers are considering the same. Once that happens, video will become more commonplace, albeit still not so useful after a bender or if you just like sitting around in your underwear. 1. Anonymous Coward Anonymous Coward Re: Why has video not taken up it's mantle? I would think that would be when it was most useful*. * - doesn't happen as often as it used to, with me. 2. Fuzz Re: Why has video not taken up it's mantle? "If there was some way to start a video chat simply by dialling the telephone number and choosing voice or video - I think a whole bunch more people would go for it." isn't that how video calling works? A lot of 3G phones have forward facing cameras for this feature, the reason why it's not on all 3G phones is that no one uses it. The reason no one uses it is because it is not because it's difficult it's because it isn't very useful and it costs more than a voice call, most people simply don't want to make video calls. 1. DaveDaveDave Re: Why has video not taken up it's mantle? Video-calling is a bit too difficult to access, but in any case doesn't really work on phones. The problem isn't making a call - you could imagine calling someone up and saying 'hey, look at this cool scenery' or something in the same way you might snap a pic or record a video. The problem is that we don't look at videos and pictures on our phones very much - we prefer to view them on larger screens. Skype video calling, on a big screen and with a decent quality webcam and microphone at the other end, is like having a magic window to somewhere a long way away. It's a brilliant concept, when used right, and when the technology works. 2. geejayoh Facepalm Re: Why has video not taken up it's mantle? Yes it is called SIP. But that's another protocol and it requires extra software to work. iOS, Android don't support SIP numbers out of the box. Like I said - unless it's integrated and made invisible. Are you willing to keep 2 numbers, one for voice and one for video. Then we're back to the same problem. That your friends and family have to have a SIP number too. Integration will be key here. 3. Eddy Ito Silver badge Re: Why has video not taken up it's mantle? "I've seen plenty of articles suggesting that telcos are worried about becoming simply commodity players... Well, if they got their act together and actually started moving towards integrating the video into the network like this, without the need for apps or other providers" There's the rub, getting their act together would require creating a standard that works with their competitors and that only comes when differentiability and lock-in are placed on the sacrificial altar and that's one more step along the road to commodity bit pipe provider. Even if they did manage to find the holy grail of common protocols and lock-in I don't think the uptake would be that high anyway given how and where phones are used. Generally it's bad enough when I call someone only to hear flushing, etc. in the background and know they are in the bog, with video calls being on the same line I can only pray they use the audio only button to answer it. Then again, it would bring new meaning to the phrase "fat fingered the phone". 11. Justicesays Coat The next "next big thing" It just came to me, the ultimate ever anticipated technology. 3D Video Chat. I'm off the see if I can patent that and found a company right away... 12. johnnytruant That's all very well But I don't see what Norwich City Football Club's newsletter has to do with it? 1. diodesign (Written by Reg staff) Silver badge Re: That's all very well Very good, but it's the Financial Times. C. 13. Anonymous Coward Anonymous Coward 'What's different this time?' They'll monitor and record your conversations. See http://www.technolog.msnbc.msn.com/technology/technolog/airtime-yes-we-look-your-video-chats-your-own-good-816686 Whether or not that will actually prevent it from becoming another Chatroulette is debatable. 14. Anonymous Coward Anonymous Coward "abuse prevention" filters Frankly, go down that route and they'll be business destruction filters. Barring the occasional grandparent/cute kid in faraway country scenario - already well catered for through Skype, Yahoo etc - the only thing that will conceivably make video chat even slightly useful to the majority is if a brings a new angle to online wank sharing, so filtering it out has to be suicide. I've only ever done a couple of personal video calls to friends away travelling, and frankly the whole thing is just too much of a strain. It's easy enough to chat by phone for an hour, but you can get on with other things, pick your nose etc while you do it. Throw in an image and it becomes very, very hard work. All the 'natural' and extremely nuanced body language we use so easily when in the same room with freedom of movement and no latency needs to be ramped up to even be seen, and it loses much of its immediacy. In short, its tiring, and a conversation with a very close friend starts to take on the texture of a rather more distant and formal same room relationship. Given the propensity of one of my close colleagues to wander around his home studio in his cacks till mid-afternoon, there are just more ways in which video chat is a bad idea (like; really don't want to go there moments) than there are good. So Shawn; it's embrace beaver sharing or the dustbin of history. Your choice. This topic is closed for new posts. Biting the hand that feeds IT © 1998–2019
__label__pos
0.828843
Fixed effects model From Wikipedia, the free encyclopedia   (Redirected from Fixed effects estimator) Jump to: navigation, search In statistics, a fixed effects model is a statistical model that represents the observed quantities in terms of explanatory variables that are treated as if the quantities were non-random. This is in contrast to random effects models and mixed models in which either all or some of the explanatory variables are treated as if they arise from random causes. Contrast this to the biostatistics definitions,[1][2][3][4] as biostatisticians use "fixed" and "random" effects to respectively refer to the population-average and subject-specific effects (and where the latter are generally assumed to be unknown, latent variables). Often the same structure of model, which is usually a linear regression model, can be treated as any of the three types depending on the analyst's viewpoint, although there may be a natural choice in any given situation. In panel data analysis, the term fixed effects estimator (also known as the within estimator) is used to refer to an estimator for the coefficients in the regression model. If we assume fixed effects, we impose time independent effects for each entity that are possibly correlated with the regressors. Qualitative description[edit] Such models assist in controlling for unobserved heterogeneity when this heterogeneity is constant over time. This constant can be removed from the data through differencing, for example by taking a first difference which will remove any time invariant components of the model. There are two common assumptions made about the individual specific effect, the random effects assumption and the fixed effects assumption. The random effects assumption (made in a random effects model) is that the individual specific effects are uncorrelated with the independent variables. The fixed effect assumption is that the individual specific effect is correlated with the independent variables. If the random effects assumption holds, the random effects model is more efficient than the fixed effects model. However, if this assumption does not hold, the random effects model is not consistent. The Durbin–Wu–Hausman test is often used to discriminate between the fixed and the random effects model.[5][6] Formal description[edit] Consider the linear unobserved effects model for observations and time periods: for and where is the dependent variable observed for individual at time is the time-variant regressor matrix, is the unobserved time-invariant individual effect and is the error term. Unlike , cannot be observed by the econometrician. Common examples for time-invariant effects are innate ability for individuals or historical and institutional factors for countries. Unlike the random effects (RE) model where the unobserved is independent of for all , the FE model allows to be correlated with the regressor matrix . Strict exogeneity with respect to the idiosyncratic error term , however, is still required. Since is not observable, it cannot be directly controlled for. The FE model eliminates by demeaning the variables using the within transformation: where and . Since is constant, and hence the effect is eliminated. The FE estimator is then obtained by an OLS regression of on . At least three alternatives to the within transformation exist with variations. One is to add a dummy variable for each individual (omitting the first individual because of Multicollinearity). This is numerically, but not computationally, equivalent to the fixed effect model and only works if the sum of the number of series and the number of global parameters is smaller than the number of observations.[7] The dummy variable approach is particularly demanding with respect to computer memory usage and it is not recommended for problems larger than the available RAM, and the applied program compilation, can accommodate. Second alternative is to use consecutive reiterations approach to local and global estimations.[8] This approach is very suitable for low memory systems on which it is much more computationally efficient than the dummy variable approach. The third approach is a nested estimation whereby the local estimation for individual series is programmed in as a part of the model definition.[9] This approach is the most computationally and memory efficient, but it requires proficient programming skills and access to the model programming code; although, it can be programmed even in SAS.[10][11] Finally, each of the above alternatives can be improved if the series-specific estimation is linear (within a nonlinear model), in which case the direct linear solution for individual series can be programmed in as part of the nonlinear model definition.[12] Equality of Fixed Effects (FE) and First Differences (FD) estimators when T=2[edit] For the special two period case (), the FE estimator and the FD estimator are numerically equivalent. This is because the FE estimator effectively "doubles the data set" used in the FD estimator. To see this, establish that the fixed effects estimator is: Since each can be re-written as , we'll re-write the line as: Hausman–Taylor method[edit] Need to have more than one time-variant regressor () and time-invariant regressor () and at least one and one that are uncorrelated with . Partition the and variables such that where and are uncorrelated with . Need . Estimating via OLS on using and as instruments yields a consistent estimate. Testing fixed effects (FE) vs. random effects (RE)[edit] We can test whether a fixed or random effects model is appropriate using a Durbin–Wu–Hausman test. : : If is true, both and are consistent, but only is efficient. If is true, is consistent and is not. where The Hausman test is a specification test so a large test statistic might be indication that there might be errors-in-variables (EIV) or our model is misspecified. If the FE assumption is true, we should find that . A simple heuristic is that if there could be EIV. Steps in Fixed Effects Model for sample data[edit] 1. Calculate group and grand means 2. Calculate k=number of groups, n=number of observations per group, N=total number of observations (k x n) 3. Calculate SS-total (or total variance) as: (Each score - Grand mean)^2 then summed 4. Calculate SS-treat (or treatment effect) as: (Each group mean- Grand mean)^2 then summed x n 5. Calculate SS-error (or error effect) as (Each score - Its group mean)^2 then summed 6. Calculate df-total: N-1, df-treat: k-1 and df-error k(n-1) 7. Calculate Mean Square MS-treat: SS-treat/df-treat, then MS-error: SS-error/df-error 8. Calculate obtained f value: MS-treat/MS-error 9. Use F-table or probability function, to look up critical f value with a certain significance level 10. Conclude as to whether treatment effect significantly affects the variable of interest See also[edit] Notes[edit] 1. ^ Diggle, Peter J.; Heagerty, Patrick; Liang, Kung-Yee; Zeger, Scott L. (2002). Analysis of Longitudinal Data (2nd ed.). Oxford University Press. pp. 169–171. ISBN 0-19-852484-6.  2. ^ Fitzmaurice, Garrett M.; Laird, Nan M.; Ware, James H. (2004). Applied Longitudinal Analysis. Hoboken: John Wiley & Sons. pp. 326–328. ISBN 0-471-21487-6.  3. ^ Laird, Nan M.; Ware, James H. (1982). "Random-Effects Models for Longitudinal Data". Biometrics. 38 (4): 963–974. JSTOR 2529876.  4. ^ Gardiner, Joseph C.; Luo, Zhehui; Roman, Lee Anne (2009). "Fixed effects, random effects and GEE: What are the differences?". Statistics in Medicine. 28: 221–239. doi:10.1002/sim.3478.  5. ^ Cameron, A. Colin; Trivedi, Pravin K. (2005). Microeconometrics: Methods and Applications. Cambridge University Press. pp. 717–19.  6. ^ Nerlove, Marc (2005). Essays in Panel Data Econometrics. Cambridge University Press. pp. 36–39.  7. ^ Garcia, Oscar. (1983). "A stochastic differential equation model for the height growth of forest stands". Biometrics: 1059–1072.  8. ^ Tait, David; Cieszewski, Chris J.; Bella, Imre E. (1986). "The stand dynamics of lodgepole pine". Can. J. For. Res. 18: 1255–1260.  9. ^ Strub, Mike; Cieszewski, Chris J. (2006). "Base–age invariance properties of two techniques for estimating the parameters of site index models". Forest Science. 52 (2): 182–186.  10. ^ Strub, Mike; Cieszewski, Chris J. (2003). "Fitting global site index parameters when plot or tree site index is treated as a local nuisance parameter In: Burkhart HA, editor. Proceedings of the Symposium on Statistics and Information Technology in Forestry; 2002 September 8–12; Blacksburg, Virginia: Virginia Polytechnic Institute and State University": 97–107.  11. ^ Cieszewski, Chris J.; Harrison, Mike; Martin, Stacey W. (2000). "Practical methods for estimating non-biased parameters in self-referencing growth and yield models" (PDF). PMRC Technical Report. 2000 (7): 12.  12. ^ Schnute, Jon; McKinnell, Skip (1984). "A biologically meaningful approach to response surface analysis". Can. J. Fish. Aquat. 41: 936–953.  References[edit] • Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.  • Gujarati, Damodar N.; Porter, Dawn C. (2009). "Panel Data Regression Models". Basic Econometrics (Fifth international ed.). Boston: McGraw-Hill. pp. 591–616. ISBN 978-007-127625-2.  • Hsiao, Cheng (2003). "Fixed-effects models". Analysis of Panel Data (2nd ed.). New York: Cambridge University Press. pp. 95–103. ISBN 0-521-52271-4.  • Wooldridge, Jeffrey M. (2013). "Fixed Effects Estimation". Introductory Econometrics: A Modern Approach (Fifth international ed.). Mason, OH: South-Western. pp. 466–474. ISBN 978-1-111-53439-4.  External links[edit]
__label__pos
0.807738
Get free ebooK with 50 must do coding Question for Product Based Companies solved Fill the details & get ebook over email Thank You! We have sent the Ebook on 50 Must Do Coding Questions for Product Based Companies Solved over your email. All the best! Matrix Multiplication Program in C Last Updated on May 18, 2023 by Prepbytes Matrix multiplication in C is a fundamental operation that involves multiplying two matrices to produce a resultant matrix. Unlike matrix addition, matrix multiplication has stricter criteria and requires careful handling of rows and columns. In this article, we explore the criteria and method for matrix multiplication and provide a C program for performing the operation. Understanding matrix multiplication is essential for various applications in mathematics and computer science. Matrix Multiplication in C Matrix multiplication is more complex than matrix addition. Consider the 2 matrices being added as shown below So, the matrix addition is pretty simple. If both matrices have the same number of rows and columns, they can be added. This is called the matrix addition rule or matrix addition criteria. They are added by adding their corresponding values. However, the criteria for multiplying two matrices is not that simple. Consider the two matrices shown below. So, we have a matrix of size 2×3 i.e. 2 rows and 3 columns and we have another matrix of size 3×3 i.e., 3 rows and 2 columns. These 2 matrices cannot be added as they don’t have the same number of rows and columns. But, what if we say that these matrices can be multiplied? Yes, in fact, these matrices can be multiplied. The matrix multiplication criteria, or the rule to multiply 2 matrices, are as follows. For two matrices to be multiplied, the number of columns of the first matrix must be equal to the number of rows of the second matrix, and the resultant matrix will be of the order r1x c2, where r1 is the number of rows of the first matrix and c2 is the number of columns of the second matrix. So, we have understood the matrix multiplication criteria. However, we still need to understand the method to multiply the matrices. So, let us now study the method to multiply matrices. Matrix Multiplication Consider the matrices shown below. So, since the number of columns of the first matrix is equal to the number of rows of the second matrix, these 2 matrices can be multiplied, and the resultant will be of the size row1 x col2 = 2 x 3. Now, the question is how do I get the answer to multiplication? For calculating each element of the resultant matrix, we need to multiply the corresponding rows and columns of the 2 input matrices. For instance, if we need to find the element a12, we will take the first row of the first matrix and the second column of the second matrix and multiply their corresponding elements. If we take a23, we will take the second row of the first matrix and the third column of the second matrix, and multiply their corresponding elements. Let us multiply the 2 matrices shown below. The step-by-step multiplication for the calculation of each element of the resultant matrix is as follows. So, to get the element a11, we will multiply the first row of the first matrix with the first column of the second matrix as shown below. Now, we will calculate the next element i.e. a12. For that, we will multiply the first row of the first matrix with the second column of the second matrix as shown below. Now, we will calculate a13 by multiplying the first row of the first matrix with the third column of the second matrix as shown below. Now, we will find out the value of a21. For that, we multiply the second row of the first matrix with the first column of the second matrix as shown below. Now, to calculate the value of a22, we will multiply the second row of the first matrix with the second column of the second matrix as shown below. Finally, to calculate the value of a23, we will multiply the second row of the first matrix by the third column of the second matrix. So, this is how matrix multiplication takes place. So, now that we have understood the complete procedure of matrix multiplication, let us now write the matrix multiplication program in C. Matrix Multiplication Program in C The matrix multiplication program in C is shown below. # include <stdio.h> # include <stdlib.h> void input(int arr[][10], int m, int n) { int i, j; printf("\nEnter elements of matrix:\n"); for (i = 0; i < m; ++i) { for (j = 0; j < n; ++j) { printf("Enter elements a%d%d: ", i + 1, j + 1); scanf("%d", & arr[i][j]); } } } void display(int arr[][10], int m, int n) { int i, j; for (i = 0; i < m; i++) { for (j = 0; j < n; j++) { printf("%d", arr[i][j]); } } } void multiply(int a[][10], int b[][10], int c[][10], int m, int n, int p, int q) { int i, j, k; for (i = 0; i < m; ++i) { for (j = 0; j < q; ++j) { c[i][j] = 0; } } for (i = 0; i < m; ++i) { for (j = 0; j < q; ++j) { for (k = 0; k < n; ++k) { c[i][j] += a[i][k] * b[k][j]; } } } } int main(){ int m, n, p, q, i, j, k; int a[10][10], b[10][10], res[10][10]; printf("Enter the no of rows and cols of first matrix\n"); scanf("%d%d", & m, & n); printf("Enter the number of rows and cols of the second matrix\n"); scanf("%d%d", & p, & q); if (n != p) { printf("Matrix is incompatible for multiplication\n"); } else { printf("Enter the elements of Matrix-A:\n"); for (i = 0; i < m; i++) { for (j = 0; j < n; j++) { scanf("%d", & a[i][j]); } } printf("Enter the elements of Matrix-B:\n"); for (i = 0; i < p; i++) { for (j = 0; j < q; j++) { scanf("%d", & b[i][j]); } } for (i = 0; i < m; i++) { for (j = 0; j < q; j++) { res[i][j] = 0; for (k = 0; k < p; k++) { res[i][j] += a[i][k] * b[k][j]; } } } printf("The product of the two matrices is:-\n"); for (i = 0; i < m; i++) { for (j = 0; j < q; j++) { printf("%d\t", res[i][j]); } printf("\n"); } } return 0; } Matrix Multiplication Algorithm in C: So, we have to replicate the procedure studied above in the code. The algorithm for writing a matrix multiplication program in C is as follows. 1. Take r1 and c1 input from the user. 2. Take r1xc1 elements as input from the user. These are the elements of the first matrix. 3. Take r2 and c2 input from the user. 4. Take r2xc2 elements as input from the user. These are the elements of the second matrix. 5. Now check for matrix multiplication criteria i.e. check if c1 equals r2. If they are not equal, print that the matrix multiplication is not possible and exit from the program. 6. If they are equal, perform matrix multiplication by multiplying each column with the corresponding row to get the corresponding row elements of the resultant matrix. 7. Print the resultant matrix. Time Complexity of the Matrix Multiplication Program in C: The time complexity of the matrix multiplication program in C is O(n3) for a square matrix or O(m x n x k) for rectangular matrices, where n is the dimension of the square matrix, m is the number of rows of the first matrix, n is the number of columns of the second matrix, and k is the common element, i.e., the number of columns of the first matrix or the number of rows of the second matrix. Space Complexity of the Matrix Multiplication Program in C: There is no auxiliary space used to multiply the matrices. So, the auxiliary space is O(1). However, we have multiplied the matrices and stored the result in another matrix. So, the output space is O(n2) for the square matrix and O(m x n) for rectangular matrices, where m is the number of rows of the first matrix and n is the number of columns of the second matrix. Conclusion Matrix multiplication in C is a crucial operation in linear algebra, with strict criteria and a method that involves multiplying corresponding rows and columns. Through this article, we have explored the criteria, method, and provided a C program for matrix multiplication. By understanding and implementing this operation, readers can gain a solid foundation in matrix manipulation, essential for various mathematical and computational tasks. Frequently Asked Questions (FAQs) Q1. Can I perform matrix multiplication in C without using loops? Ans. No, matrix multiplication inherently requires the use of loops to iterate over the elements of the matrices and perform the necessary calculations. Q2. What happens if the dimensions of the matrices don’t meet the multiplication criteria? Ans. If the number of columns in the first matrix is not equal to the number of rows in the second matrix, matrix multiplication is not possible, and an error message should be displayed. Q3. Is the order of matrix multiplication important? Ans. Yes, the order of matrix multiplication matters. Changing the order of multiplication can yield different results. Matrix multiplication is not commutative, unlike matrix addition. Q4. Are there any special libraries or functions in C for matrix multiplication? Ans. C does not provide built-in functions specifically for matrix multiplication. However, you can implement matrix multiplication algorithms using loops and basic arithmetic operations. Q5. What are some real-world applications of matrix multiplication in C? Ans. Matrix multiplication is widely used in various fields, including computer graphics, scientific computing, data analysis, and machine learning. It plays a crucial role in transformations, simulations, optimization problems, and linear algebraic operations. Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.999897
Client-Side Exporting and Printing Although the DevExtreme Data Visualization widgets can be displayed in any browser on any platform, there are cases when printing a chart or having it as an image or a document may be necessary for an end user. For these cases, the DevExtreme Data Visualization widgets provide the client-side export and printing features. This guide shows how to use these features in the UI and in code. Also, it explains how to set up a server-side proxy that is necessary if you plan to support these features in Safari on Mac OS and IE9. Watch Video Exporting and Printing in the UI To export or print a widget, a user clicks DevExtreme HTML5 DataVisualization Charts Export Print and selects a command from the drop-down menu that appears. The Print command opens the Print window in the browser that lets the user to select preferred printing options and to send the print job to the printer. The other commands save a file of the selected format on the user's local storage. DevExtreme HTML5 DataVisualization Charts Export Print Exporting and printing in the UI are configured by the export object. The following exporting and printing characteristics can be changed using the fields of this object. • Availability To enable exporting, assign true to the enabled field. With this setting, printing becomes available as well. If you need only exporting, disable printing by setting the printingEnabled field to false. • Formats and File Name By default, a user can export the widget into five formats: PNG, PDF, JPEG, SVG and GIF. To alter this set, use the formats option. In addition, you can change the default name for the file with the exported widget using the fileName option. • Background Color By default, the background of the widget is transparent. To fill it with a color of your choice, specify the backgroundColor option. See Also • Setting Up a Server-Side Proxy - shows how to set up a server-side proxy to support the exporting and printing features in the IE9 and Safari on Mac OS browsers. Exporting and Printing in Code To export a widget in code, call its exportTo(fileName, format) method passing the needed file name and format ('PNG', 'PDF', 'JPEG', 'SVG' or 'GIF') as the arguments. JavaScript widgetInstance.exportTo('Test Chart', 'PDF'); To print a widget, call its print() method. Like the Print command in the Exporting/Printing menu, this method opens the Print window in the browser. JavaScript widgetInstance.print(); Also, the DevExtreme Data Visualization widgets fire the following exporting-related events. • exporting Allows you to request export details or prevent exporting. • exported Allows you to notify an end user when exporting is completed. • fileSaving Allows you to access exported data in the BLOB format and/or prevent it from being saved in a file on the user's local storage. Setting Up a Server-Side Proxy If your application will be used in browsers that do not implement an API for saving files (for instance, IE9 and Safari on Mac OS) and you need the exporting feature to work correctly, you can implement a server-side proxy, which will stream the file back to an end user in response to a POST request. The proxy implementation is different for each platform. ASPx If your server runs the ASP.NET web application framework, you can implement a proxy using the following HTTP handler. С# using System; using System.Web; namespace ExportService { public class ExportHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { if(context.Request.Form["contentType"] != null && context.Request.Form["fileName"] != null && context.Request.Form["data"] != null) { context.Response.Clear(); context.Response.ContentType = context.Request.Form["contentType"].ToString(); context.Response.Charset = "UTF-8"; context.Response.Expires = 0; context.Response.AppendHeader("Content-transfer-encoding", "binary"); context.Response.AppendHeader("Content-Disposition", "attachment; filename=" + context.Request.Form["fileName"].ToString()); context.Response.BinaryWrite(Convert.FromBase64String(context.Request.Form["data"].ToString())); context.Response.Flush(); context.Response.End(); } } public bool IsReusable { get { return false; } } } } Visual Basic Imports System Imports System.Web Namespace ExportService Public Class ExportHandler Implements IHttpHandler Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest If context.Request.Form("contentType") IsNot Nothing AndAlso context.Request.Form("fileName") IsNot Nothing AndAlso context.Request.Form("data") IsNot Nothing Then context.Response.Clear() context.Response.ContentType = context.Request.Form("contentType").ToString() context.Response.Charset = "UTF-8" context.Response.Expires = 0 context.Response.AppendHeader("Content-transfer-encoding", "binary") context.Response.AppendHeader("Content-Disposition", "attachment; filename=" & context.Request.Form("fileName").ToString()) context.Response.BinaryWrite(Convert.FromBase64String(context.Request.Form("data").ToString())) context.Response.Flush() context.Response.End() End If End Sub Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable Get Return False End Get End Property End Class End Namespace PHP If your server-side language is PHP, add a page with the following code to your website. <?php if(!empty($_POST["data"]) && !empty($_POST["contentType"]) && !empty($_POST["fileName"])) { header("Access-Control-Allow-Origin: *"); header("Content-type: {$_POST[contentType]};\n"); header("Content-Transfer-Encoding: binary"); header("Content-length: ".strlen($_POST["data"]).";\n"); header("Content-disposition: attachment; filename=\"{$_POST[fileName]}\""); die(base64_decode($_POST["data"])); } ?> Usage To enable server-side proxy support in a widget, set the export | proxyUrl option to the proxy that will stream the file to an end user. dxExporter IMPORTANT: This topic describes obsolete tools and approaches. Instead of them, consider using techniques described earlier in this guide. Also, you can use the Exporter widget to provide an end user with the exporting and printing capabilities. To operate, this widget requires the PhantomJS WebKit running on a server. This topic shows how to deploy a server and configure the Exporter widget. Deploy a Server The Exporter widget can export any widget from the DevExtreme Data Visualization library into a PNG, PDF, SVG, JPEG or GIF file. In order to work, Exporter requires the PhantomJS WebKit. This WebKit allows you to use the client-server model where PhantomJS performs as a server. In this article, the Exporter basics are explained using a simple example where a single computer is used as the client and as the server. Start by deploying the server that will process requests by following the steps below. • Download the zip-archive with the version 1.9.X of PhantomJS and unpack it. Name the folder PhantomJS. • Copy the Exporter folder to the PhantomJS folder. You can find the Exporter folder in the DevExtreme zip archive or in the folder where you have installed DevExtreme. • Open the system command line and specify the directory with the phantomjs.exe file as the current directory. • Type the following line. phantomjs Exporter/exporter-server.js 127.0.0.1 3003 Here, Exporter/exporter-server.js refers to the script that implements the server logic. This script is supplied by DevExtreme. It is followed by the IP address that is assigned to your server and the number of the port that will be used for listening. Since the requests will be sent and received on the same machine in this example, the IP address corresponding to localhost is used. If the command line responds with the "OK, PhantomJS is ready." message, the server is deployed successfully. NOTE It is necessary for the phantomjs.exe process to operate on the server during export. Therefore, do not close the PhantomJS console window until you have finished exporting your widget. Embed dxExporter To add Exporter to your page, do the following. • Provide links to the exporter client script and one of the external stylesheets as it is shown below. HTML <link rel="stylesheet" type="text/css" href="dx.exporter.light.css"> <!-- <link rel="stylesheet" type="text/css" href="dx.exporter.dark.css"> --> <script src="Exporter/dx.exporter.js"></script> • Add a div container for the Exporter widget. HTML <div id='exportMenu'></div> • Create the Exporter widget within this container using the jQuery, Knockout or AngularJS approach. In the code below, the widget is created using the jQuery approach. JavaScript $(function () { $("#exportMenu").dxExporter(); }); If you run your page, you will see the Exporter Icon ChartJS and Exporter Icon ChartJS icons on it. But exporting and printing do not work yet, because the Exporter widget is not tuned properly. Refer to the next step to tune it. Tune dxExporter To ensure proper operation by the Exporter, several options should be specified. • Assign a jQuery selector to the sourceContainer option. This selector should specify the div element that contains the widget you wish to export. • Specify the URL of your server using the serverUrl option. In this example, it is 'http://127.0.0.1:3003'. • Additionally you can alter the default file name using the fileName option, change the set of available formats using the exportFormat option and enable/disable the entire export menu or only the printing button using the showMenu and printingEnabled options. Now everything is ready for you to try the Exporter widget. Hover over the Exporter Icon ChartJS icon and choose the appropriate format to export your widget. In addition, you can change the print parameters within the Print window. To call this window, click the Exporter Icon ChartJS icon. As you can see, the Exporter widget is very easy to use and presents your charts even when scripts are disabled.
__label__pos
0.616754
You copied the Doc URL to your clipboard. Compiler intrinsics for Digital Signal Processing (DSP) The compiler provides intrinsics that assist in the implementation of DSP algorithms. These intrinsics introduce the appropriate target instructions for: • ARM, on architectures from ARMv5TE onwards • Thumb, on architectures with Thumb-2 technology. Not every instruction has its own intrinsic. The compiler can combine several intrinsics, or combinations of intrinsics and C operators to generate more powerful instructions. For example, the ARMv5TE QDADD instruction is generated by a combination of __qadd and __qdbl.
__label__pos
0.754403
Jump to content VerhoevenJ Member • Posts 11 • Joined • Last visited Posts posted by VerhoevenJ 1. Thnx,   I thought that this was not possible. We work a lot with RDP so maybe our IT guys can search for an solution through RDP.   It is no problem with our Mac computers it is just the problem with the Windows computers. We work with ±30 representatives so i don't think they will buy 30 workstations with a lot of GPU. For me there's no problem now i can ask for a Macbook pro and do my plan proposals this way. 😀 2. I redesigned my main symbol bib so the geometry is less complex. Now the 3D files are created in Nomad and the Cloud services website.   Now the problem is that the GPU of most of the computers is only 450mb dedicated memory. The 3D file's aren't displayed or if they do it is very slow to navigate. Are there system requirements to see the Webview?    Is there a possibility to upload the files to a server and that the viewing of the file's is generated through the GPU of the server and are there people who have already done this?  3. Hi, I work for a big company in Belgium. I want to start to use Webview, at this point the server to upload the webview gets blocked. I am trying to convince the network specialist to open the connection. While this is happening is there another way to get my webview on the Vectorworks server?   4. I have made a symbol and attached a record. In this record I have made a field where you can give a length and a drop down menu that has 4 options. Now I want to make a worksheet that gives me the sum of the length fields with the specific option selected in the drop down menu. Can anyone help me?  Worksheet_&_records.vwx 5. Thanks for the very quick response, Can you send me the VW 2016 files, the dutch version of Vectorworks 2017 is not yet available.   The result i want to achieve is not that easy to explain. In the store's we are designing  we have to use big refrigerators. These refrigerators exist of diffrent modules, each module has its own tempature, sort of light and a number.   I want to create a marionette tool where i can create a series of refrigerators each module in the series will have its own length, temperature indication and other features. I added a preview of what i want to accomplish. In the future i want to add a 3d view Refrigerators.vwx × × • Create New...
__label__pos
0.687642
lua-users home lua-l archive [Date Prev][Date Next][Thread Prev][Thread Next] [Date Index] [Thread Index] On 9-Jan-06, at 1:59 PM, Mike Pall wrote: This is not a problem because you can only create number objects for numbers which are representable with lua_Number. So even if the array grows larger (with a sparse tail), you just can't set or get these elements because you are lacking the proper key. That was my first thought, too. However, it's not quite true; you can use lua_rawseti to set any key which fits in an int, provided the array part is large enough to hold it (which it might be if the table were created with lua_createtable().) However, if the array part is not big enough, the key gets bounced into the hash part and is then truncated into a lua_Number. This can have odd consequences; in particular, if the array part has 2^24 elements and an attempt is made to add element 2^24+1 (which cannot be represented as a float), Lua will resize the array to the same size before overwriting the value at 2^24, which is a fairly time-consuming operation. So, for example, if you do a lot of table.inserts, to append to an empty table, you'll find the the first 2^24 of them execute in a few seconds, but the next 100 take as long as the first 16 million. (Of course, none of these succeed in actually adding to the length of the table.)
__label__pos
0.885276
DNS Server Not Responding : Easy Solutions DNS Server Not Responding : Easy Solutions. The DNS (Domain Name System) server is a crucial component of the internet infrastructure. It translates human-readable domain names (like www.example.com) into IP addresses that computers use to identify each other on the network. Essentially, DNS serves as a directory that helps your computer locate the correct server corresponding to the domain you’re trying to access. DNS servers maintain a database of domain names and their corresponding IP addresses. When you request a specific domain, the DNS server returns the associated IP address, allowing your computer to connect to the correct server. If you’re encountering a “DNS server not responding” error, it could indicate that there’s an issue with your internet connection or your DNS settings. Here are some steps you can take to troubleshoot the issue: DNS Server Not Responding 1. Check Your Internet Connection: • Ensure that your internet connection is stable. If you’re using Wi-Fi, try connecting via an Ethernet cable to see if the issue persists. 2. Restart Your Router and Modem: • Turn off your router and modem, wait for about 10-15 seconds, and then turn them back on. This can help refresh the network connection. 3. Flush DNS Cache: • Open the Command Prompt as an administrator and type the following command: codeipconfig /flushdns Press Enter. This command clears the DNS resolver cache. 1. Change DNS Servers: • Consider using a different DNS server. You can manually set your DNS server addresses to those provided by Google (8.8.8.8 and 8.8.4.4) or Cloudflare (1.1.1.1). • On Windows: Go to Network and Sharing Center > Change adapter settings > Right-click on your connection > Properties > Internet Protocol Version 4 (TCP/IPv4) > Use the following DNS server addresses. • On Mac: Go to System Preferences > Network > Advanced > DNS tab. 2. Check Firewall and Security Software: • Your firewall or security software may be blocking the connection. Temporarily disable them to see if the issue persists. 3. Update Network Drivers: • Ensure that your network drivers are up to date. You can do this through the Device Manager on Windows or the System Preferences on Mac. 4. Restart Your Computer: • A simple restart can often resolve network-related issues. 5. Contact Your ISP: • If the problem persists, contact your Internet Service Provider (ISP) to check if there are any issues on their end. 6. Use Automatic DNS Configuration: • Set your DNS configuration to obtain the DNS server address automatically. This is usually the default setting. If you’ve followed the troubleshooting steps for the “DNS server not responding” issue and the problem persists, it’s advisable to contact your Internet Service Provider (ISP) or seek assistance from technical support. Here’s what you can do: 1. Contact Your ISP: • Reach out to your Internet Service Provider’s customer support. They can check if there are any known issues with their DNS servers or your internet connection. Provide them with details about the problem, the steps you’ve taken to troubleshoot, and any error messages you’ve encountered. 2. Technical Support: • If you’re unable to resolve the issue with your ISP or suspect a more complex technical problem, consider seeking assistance from technical support. This could be the support provided by your device manufacturer, operating system provider, or a professional IT service. 3. Online Forums and Communities: • You can also explore online forums and communities related to networking or your specific operating system. Other users might have faced similar issues and could provide additional insights or solutions. When contacting support, be prepared to provide details about your network setup, the troubleshooting steps you’ve taken, and any error messages you’ve encountered. This information will help the support team identify and address the root cause of the problem more effectively. Remember that technical issues can vary, and the appropriate solution may depend on the specific circumstances of your network setup, device configuration, and service provider. Share your love Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.994787
1 When I use C-a to compile a source tex file, its final step is to call Evince to display the pdf (through xdg-open, I believe). However, in my computer this makes Emacs freeze for many seconds and finish with the following message: LaTeX: successfully formatted {16} pages error in process sentinel: dbus-call-method: D-Bus error: "Message recipient disconnected from message bus without replying" error in process sentinel: D-Bus error: "Message recipient disconnected from message bus without replying" Unable to load color "unspecified-fg" I'd rather disable the call to Evince completely than bothering trying to fix this. And I don't want to abandon C-a in favor or C-c because then I'd have to type Return to compile. How can I rebind TeX-command-Show to something that won't freeze Emacs? 1 Answer 1 0 In AucTeX, the PDF viewer is specified via the variable TeX-view-program-selection. Find the entry for output-pdf and select a viewer you don't have. You'll get a small warning message in the mini-buffer, but otherwise you won't be interrupted. Alternatively, you could set the value to something you do have that isn't Evince, and you might solve your other problem. Your Answer By clicking “Post Your Answer”, you agree to our terms of service and acknowledge that you have read and understand our privacy policy and code of conduct. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.847206
Русское сообщество по скриптингу Резет Скоре + (Reset Score Plus) Плагины для AMX Mod X, которые не удовлетворяют правилам оформления. Модератор: Leonidddd Правила форума 1. Запрещено материться и оскорблять других участников форума. 2. Запрещен флуд, оффтоп, дабл постинг во всех разделах форума, кроме раздела "Болтовня". 3. Запрещено взламывать сайт/форум или наносить любой вред проекту. 4. Запрещено рекламировать другие ресурсы. 5. Запрещено создавать темы без информативного названия. Название темы должно отображать ее смысл. Резет Скоре + (Reset Score Plus) Сообщение 9iky6 » 06 май 2012, 20:54 Авторы: maeStro (9iky6) Версия: 1.0 Описание: Данный плагин позволяет сбросить игровую статистику убийств/поражений (frags/kills), а так же убрать смысл рекконекта. Точнее рекконект теперь полностью заблокирован! При рекконекте у игрока сохраняется игровая статистика (время настраивается кваром). Использовать команду /rs теперь тоже можно только 1 раз за несколько раундов (настраивается кваром) Используемые модули: amxmodx, fun, cstrike, hamsandwich - для Counter-Strike amxmodx, hamsandwich, fakemeta, fun - для Half-Life Настройки: Для Counter-Strike серверов: rs_block_rounds (3) - На сколько раундов блокировать ввод /rs rs_reconnect_time (20) - На какое время (секунды) сохранять счет при выходе из игры Для Half-Life серверов: amx_f_every_spowns (15) - количество возрождений, после которых можно ввести /rs amx_reconnect_stime (20) - время, в течении которого сохраняется счет (при переподключении к серверу) Список изменений: v1.0 - Сделан простой /rs, с показом в чат игрока, который сбросил статистику. v2.0 - Добавлено время для сохранения статистики при рекконекте, добавленно ограничение на ввод команды (против флудеров, которых прикалывает сбрасывать её). v2.0.1 - Исправлена ошибка с блокированием команды для всех игроков! Теперь отсчет раундов идет для каждого игрока по отдельности. v3.0 (1.0) - Плагин решено переименовать в "Reset Score Plus". Рекконект теперь полностью заблокирован! Не утверждено. Отсутствуют файлы и его поддержка // Leonidddd Последний раз редактировалось 9iky6 05 июн 2012, 22:24, всего редактировалось 16 раз(а). Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение 9iky6 » 06 май 2012, 20:59 Делал для своего сервера, вроде бы пока что окончательная версия. Конечно можно ввести квар времени, чтобы если игрок сделал рекконект, допустим в течении 7 секунд, то его при появлении в игре убивало бы, но мне оно кажется лишним. Итак простенький плагин слишком большим стал. Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение kenZZo » 06 май 2012, 23:50 А сделай если можещ, тогда в антиреконннект.амхх не будет пользы))) Аватара пользователя kenZZo   Сообщения: 23 Зарегистрирован: 05 май 2012, 14:09 Благодарил (а): 9 раз. Поблагодарили: 3 раз. Re: Резет Скоре (Reset Score) Сообщение 9iky6 » 07 май 2012, 19:44 Может кто-то посмотрит и увидит: поскажите, 1 раз делал ограничение на раунды, и такая ошибка: ограничивает всех игроков! А не 1. :( Я знаю, что надо вписать (id), но не пойму куда именно. Нужная часть кода: [pawn] 1.   2. public reset_score(id) 3. { 4.         if(g_Round_counter >= get_cvar_num("amx_f_every_rounds")) 5.         { 6.          set_user_frags(id, 0) 7.          cs_set_user_deaths(id, 0) 8.   9.          new name[33] 10.          get_user_name(id, name, 32) 11.          client_print(0,print_chat,"[AMXX] Игрок %s сбросил счет!", name) 12.          set_hudmessage(150, 150, 150, -1.0, 0.71, 2, 6.0, 3.0, 0.1, 1.5 ) 13.          show_hudmessage(id, "%s, Вы успешно сбросили счет :-)", name) 14.          client_cmd(id, "spk fvox/bell") 15.   16.          g_Round_counter = 0 17.         } 18.         else 19.         { 20.         client_print(id,print_chat,"[AMXX] Ввод данной команды станет доступен через %d раунд(а)",get_cvar_num("amx_f_every_rounds")-g_Round_counter) 21.     } 22. } 23.   [/pawn] По идее (id) должен идти прямо в условие, то есть чтобы проверяло игрока. Я прав? Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение vinipux » 07 май 2012, 19:46 [pawn] 1. new g_Round_counter [/pawn] -> [pawn] 1. new g_Round_counter[33] [/pawn] [pawn] 1. if(g_Round_counter >= get_cvar_num("amx_f_every_rounds")) [/pawn] -> [pawn] 1. if(g_Round_counter[id] >= get_cvar_num("amx_f_every_rounds")) [/pawn] [pawn] 1. g_Round_counter = 0 [/pawn] -> [pawn] 1. g_Round_counter[id] = 0 [/pawn] [pawn] 1.         client_print(id,print_chat,"[AMXX] Ввод данной команды станет доступен через %d раунд(а)",get_cvar_num("amx_f_every_rounds")-g_Round_counter) [/pawn] -> [pawn] 1.         client_print(id,print_chat,"[AMXX] Ввод данной команды станет доступен через %d раунд(а)",get_cvar_num("amx_f_every_rounds")-g_Round_counter[id]) [/pawn] Наконец-то открыл мониторинг с masterserver'ом http://mon.cs-niceserv.ru Добавляем свои сервера! Написание плагинов на заказ. Скайп vinipux1112 Аватара пользователя vinipux   Сообщения: 1362 Зарегистрирован: 24 сен 2011, 21:13 Благодарил (а): 59 раз. Поблагодарили: 416 раз. Опыт программирования: Около года Языки программирования: ╔═══════════════╗ ║Counter-Strike 1.6 ║Delphi 7 ║VHE ║MilkShape3D(Чучуть) ╚═══════════════╝ Re: Резет Скоре (Reset Score) Сообщение 9iky6 » 07 май 2012, 20:22 vinipux, попробовал твой вариант, +поискал в интернете, разобрался, дописал код :yahoo: Побежал тестировать, спасибо :yes: Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение vinipux » 07 май 2012, 20:32 g_Round_counter осталась без id. Но т.к это plugin_init то там id нету. Уберай квар... Наконец-то открыл мониторинг с masterserver'ом http://mon.cs-niceserv.ru Добавляем свои сервера! Написание плагинов на заказ. Скайп vinipux1112 Аватара пользователя vinipux   Сообщения: 1362 Зарегистрирован: 24 сен 2011, 21:13 Благодарил (а): 59 раз. Поблагодарили: 416 раз. Опыт программирования: Около года Языки программирования: ╔═══════════════╗ ║Counter-Strike 1.6 ║Delphi 7 ║VHE ║MilkShape3D(Чучуть) ╚═══════════════╝ Re: Резет Скоре (Reset Score) Сообщение 9iky6 » 07 май 2012, 20:35 vinipux, квар оставил, просто пришлось его переименовать... Щас исправлю ещё 1 ошибку (уже незначительную) и выложу исходник. Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение 9iky6 » 07 май 2012, 20:57 vinipux, я наверное просто гений :-D Теперь при заходе в игру 3 раунда нельзя ввести команду! Дальше плагин стабильно работает. [pawn] 1.   2. #include <amxmodx> 3. #include <fun> 4. #include <cstrike> 5.   6. new t_scoresave[33]             = {0,...} 7. new ips[33][24] 8. new sfrags[33]                  = {0,...} 9. new sdeaths[33]                         = {0,...} 10. new useretry[33]                        = {0,...} 11.   12. new g_Round_counter[33] 13.   14. new gi_Rs_Save 15. new gi_Round_counter 16.   17. public plugin_init() 18. { 19.         register_plugin("Reset Score", "2.0", "maeStro") 20.         21.         register_clcmd("say /resetscore",       "reset_score") 22.         register_clcmd("say /rs",               "reset_score") 23.   24.         gi_Round_counter        = register_cvar("amx_f_every_rounds","3") 25.         gi_Rs_Save              = register_cvar("amx_reconnect_stime", "20") 26.   27.         register_event("HLTV", "RoundStart", "a", "1=0", "2=0") 28.         register_event("TeamInfo","outspec","a") 29.         30. } 31.   32. public RoundStart() 33. { 34.    new iPlayer[32], iNum 35.    get_players(iPlayer, iNum) 36.   37.    for(new i; i < iNum; i++) 38.     { 39.     g_Round_counter[iPlayer[i]]++ 40.     } 41. } 42.   43. public reset_score(id) 44. { 45.         if(g_Round_counter[id] >= get_pcvar_num(gi_Round_counter)) 46.         { 47.          set_user_frags(id, 0) 48.          cs_set_user_deaths(id, 0) 49.   50.          new name[33] 51.          get_user_name(id, name, 32) 52.          client_print(0,print_chat,"[AMXX] Игрок %s сбросил счет!", name) 53.          set_hudmessage(150, 150, 150, -1.0, 0.71, 2, 6.0, 3.0, 0.1, 1.5 ) 54.          show_hudmessage(id, "%s, Вы успешно сбросили счет :-)", name) 55.          client_cmd(id, "spk fvox/bell") 56.   57.          g_Round_counter[id] = 0 58.         } 59.         else 60.         { 61.         client_print(id,print_chat,"[AMXX] Ввод данной команды станет доступен через %d раунд(а)",get_pcvar_num(gi_Round_counter)-g_Round_counter[id]) 62.     } 63. } 64.   65. public client_connect(id) 66. { 67.         new ip[24] 68.         get_user_ip(id,ip,23,0) 69.   70.         new Float:nexTime = get_gametime() 71.   72.         if (t_scoresave[id] <= nexTime) 73.         { 74.          sdeaths[id]=0 75.          sfrags[id]=0 76.          useretry[id]=0 77.         } 78. } 79.   80. public outspec() 81. { 82.  new id=read_data(1) 83.  if ((useretry[id]==1) && (is_user_connected(id))) 84.  { 85.   set_user_frags(id,sfrags[id]) 86.   cs_set_user_deaths(id,sdeaths[id]) 87.   useretry[id]=0 88.   sdeaths[id]=0 89.   sfrags[id]=0 90.  } 91.  return PLUGIN_CONTINUE 92. } 93.   94. public client_disconnect(id) 95. { 96.         new maxstata = get_pcvar_num(gi_Rs_Save) 97.   98.         new Float:theTime = get_gametime() 99.         t_scoresave[id] = floatround(theTime) + maxstata 100.         get_user_ip(id,ips[id],23,0) 101.         { 102.          sdeaths[id] = get_user_deaths(id) 103.          sfrags[id] = get_user_frags(id) 104.          useretry[id]=1 105.         } 106. } 107.   [/pawn] Аватара пользователя 9iky6   Сообщения: 2178 Зарегистрирован: 30 янв 2012, 19:07 Откуда: Россия Благодарил (а): 375 раз. Поблагодарили: 701 раз. Re: Резет Скоре (Reset Score) Сообщение smurfavr » 11 май 2012, 11:06 будет работать на half life? Форум за HALF LIFE http://smurfa.bulgarianforum.net/ Аватара пользователя smurfavr   Сообщения: 65 Зарегистрирован: 02 авг 2011, 20:03 Откуда: България Благодарил (а): 40 раз. Поблагодарили: 2 раз. След. Вернуться в Неутвержденные плагины Кто сейчас на конференции Сейчас этот форум просматривают: нет зарегистрированных пользователей и гости: 2
__label__pos
0.600919
12 I'm working on K-medoids algorithm implementation. It is a clustering algorithm and one of its steps includes finding the most representative point in a cluster. So, here's the thing • I have a certain number of clusters • Each cluster contains a certain number of points • I need to find the point in each cluster that results with the least error if it is picked as a cluster representative • Distance from each point to all the other in the cluster needs to be calculated • This distance calculation could be simple as Euclidean or more complex like DTW (Dynamic Time Warping) between two signals There are two approaches, one is to calculate distance matrix that will save values between all the points in the dataset and the other is to calculate distances during clustering, which results that distances between some points will be calculated repeatedly. On one hand, to build distance matrix you must calculate distances between all points in the whole dataset and some of calculated values will never be used. On the other hand, if you don't build the distance matrix, you will repeat some calculations in certain number of iterations. Which is the better approach? I'm also considering MapReduce implementation, so opinions from that angle are also welcome. Thanks 1 Answer 1 4 +100 A 3rd approach could be a combination of both, and is lazily evaluating the distance matrix. Initialize a matrix with default values (unrealistic values, like negative ones), and when you need to calculate distance between two points, if the values is already present in the matrix - just take it from it. Otherwise, calculate it and store it in the matrix. This approach trades calculations (and is optimal in doing the lowest number of possible pair calculations), for more branches in the code, and a few more instructions. However, due to branch predictors, I assume this overhead will not be that dramatic. I predict it will have better performance when the calculation is relatively expansive. Another optimization of it could be to dynamically switch for a plain matrix implementation (and calculate the remaining part of the matrix) when the number of already calculated exceeds a certain threshold. This can be achieved pretty nicely in OOP languages, by switching the implementation of the interface when a certain threshold is met. Which is actually better implementation is going to rely heavily on the cost of the distance function, and the data you are clustering, as some will need to calculate the same points more often than other data sets. I suggest doing a benchmark, and using statistical tools to evaluate which method is actually better. 8 • Thank you for your answer. Do you have any comments on the MapReduce implementation part? Commented Apr 28, 2015 at 16:20 • @pera A map-reduce implementation with O(n^2) communications from a mapper is trivial, and can improve performance significantly if the distance calculation is very expansive, are you interested on it? Or is it irrelevant for your case? – amit Commented Apr 29, 2015 at 9:04 • Actually, I am interested. Could you elaborate more on it? I was thinking about some sort of K-means like algorithm, very you basically try out all the elements in cluster and find the element that is the most similar to all the others. But, that is O(n^2) after all, and we're talking about big data. Besides that, a lot of data needs to be written to HDFS, to allow all of that calculation,and also a lot of data needs to transfered. I'm not sure what idea do you have? Commented Apr 29, 2015 at 13:18 • @pera Might have misuderstood you, I neant calculating the distances in map-reduce with O(n^2) communication is trivial. One mapper that generates (pi,pj) for each i,j. Then, multiple reducers, each working on different point (i for example), and calcualte d(pi,pj) for all j. This is a good distribution of the d(.,.) calculation if it's expansive, but there is still O(n^2) battle neck for the mapper, though it's relatively simple battleneck, since no distance calculations at all in it. And obviously, no combiner. – amit Commented Apr 29, 2015 at 21:12 • That sounds like a good idea, but you do need to create n^2 amount of data for the calculations.But I'm not sure I understand you correctly, you said - "One mapper that generates (pi,pj) for each i,j. Then, multiple reducers, each working on different point (i for example), and calculate d(pi,pj) for all j", but isn't it the opposite way?I mean, you cannot influence on the number of mappers,while you can on reducers.And it seems more reasonable to give all the input to mappers,so they could calculate d(pi,pj).Regardless of this,how would you use that matrix?It would be large,how to distribute? Commented Apr 30, 2015 at 9:43 Your Answer By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.749012
Back to index php5  5.3.10 vdbe.c Go to the documentation of this file. 00001 /* 00002 ** 2001 September 15 00003 ** 00004 ** The author disclaims copyright to this source code. In place of 00005 ** a legal notice, here is a blessing: 00006 ** 00007 ** May you do good and not evil. 00008 ** May you find forgiveness for yourself and forgive others. 00009 ** May you share freely, never taking more than you give. 00010 ** 00011 ************************************************************************* 00012 ** The code in this file implements execution method of the 00013 ** Virtual Database Engine (VDBE). A separate file ("vdbeaux.c") 00014 ** handles housekeeping details such as creating and deleting 00015 ** VDBE instances. This file is solely interested in executing 00016 ** the VDBE program. 00017 ** 00018 ** In the external interface, an "sqlite_vm*" is an opaque pointer 00019 ** to a VDBE. 00020 ** 00021 ** The SQL parser generates a program which is then executed by 00022 ** the VDBE to do the work of the SQL statement. VDBE programs are 00023 ** similar in form to assembly language. The program consists of 00024 ** a linear sequence of operations. Each operation has an opcode 00025 ** and 3 operands. Operands P1 and P2 are integers. Operand P3 00026 ** is a null-terminated string. The P2 operand must be non-negative. 00027 ** Opcodes will typically ignore one or more operands. Many opcodes 00028 ** ignore all three operands. 00029 ** 00030 ** Computation results are stored on a stack. Each entry on the 00031 ** stack is either an integer, a null-terminated string, a floating point 00032 ** number, or the SQL "NULL" value. An inplicit conversion from one 00033 ** type to the other occurs as necessary. 00034 ** 00035 ** Most of the code in this file is taken up by the sqliteVdbeExec() 00036 ** function which does the work of interpreting a VDBE program. 00037 ** But other routines are also provided to help in building up 00038 ** a program instruction by instruction. 00039 ** 00040 ** Various scripts scan this source file in order to generate HTML 00041 ** documentation, headers files, or other derived files. The formatting 00042 ** of the code in this file is, therefore, important. See other comments 00043 ** in this file for details. If in doubt, do not deviate from existing 00044 ** commenting and indentation practices when changing or adding code. 00045 ** 00046 ** $Id: vdbe.c 219681 2006-09-09 10:59:05Z tony2001 $ 00047 */ 00048 #include "sqliteInt.h" 00049 #include "os.h" 00050 #include <ctype.h> 00051 #include "vdbeInt.h" 00052 00053 /* 00054 ** The following global variable is incremented every time a cursor 00055 ** moves, either by the OP_MoveTo or the OP_Next opcode. The test 00056 ** procedures use this information to make sure that indices are 00057 ** working correctly. This variable has no function other than to 00058 ** help verify the correct operation of the library. 00059 */ 00060 int sqlite_search_count = 0; 00061 00062 /* 00063 ** When this global variable is positive, it gets decremented once before 00064 ** each instruction in the VDBE. When reaches zero, the SQLITE_Interrupt 00065 ** of the db.flags field is set in order to simulate an interrupt. 00066 ** 00067 ** This facility is used for testing purposes only. It does not function 00068 ** in an ordinary build. 00069 */ 00070 int sqlite_interrupt_count = 0; 00071 00072 /* 00073 ** Advance the virtual machine to the next output row. 00074 ** 00075 ** The return vale will be either SQLITE_BUSY, SQLITE_DONE, 00076 ** SQLITE_ROW, SQLITE_ERROR, or SQLITE_MISUSE. 00077 ** 00078 ** SQLITE_BUSY means that the virtual machine attempted to open 00079 ** a locked database and there is no busy callback registered. 00080 ** Call sqlite_step() again to retry the open. *pN is set to 0 00081 ** and *pazColName and *pazValue are both set to NULL. 00082 ** 00083 ** SQLITE_DONE means that the virtual machine has finished 00084 ** executing. sqlite_step() should not be called again on this 00085 ** virtual machine. *pN and *pazColName are set appropriately 00086 ** but *pazValue is set to NULL. 00087 ** 00088 ** SQLITE_ROW means that the virtual machine has generated another 00089 ** row of the result set. *pN is set to the number of columns in 00090 ** the row. *pazColName is set to the names of the columns followed 00091 ** by the column datatypes. *pazValue is set to the values of each 00092 ** column in the row. The value of the i-th column is (*pazValue)[i]. 00093 ** The name of the i-th column is (*pazColName)[i] and the datatype 00094 ** of the i-th column is (*pazColName)[i+*pN]. 00095 ** 00096 ** SQLITE_ERROR means that a run-time error (such as a constraint 00097 ** violation) has occurred. The details of the error will be returned 00098 ** by the next call to sqlite_finalize(). sqlite_step() should not 00099 ** be called again on the VM. 00100 ** 00101 ** SQLITE_MISUSE means that the this routine was called inappropriately. 00102 ** Perhaps it was called on a virtual machine that had already been 00103 ** finalized or on one that had previously returned SQLITE_ERROR or 00104 ** SQLITE_DONE. Or it could be the case the the same database connection 00105 ** is being used simulataneously by two or more threads. 00106 */ 00107 int sqlite_step( 00108 sqlite_vm *pVm, /* The virtual machine to execute */ 00109 int *pN, /* OUT: Number of columns in result */ 00110 const char ***pazValue, /* OUT: Column data */ 00111 const char ***pazColName /* OUT: Column names and datatypes */ 00112 ){ 00113 Vdbe *p = (Vdbe*)pVm; 00114 sqlite *db; 00115 int rc; 00116 00117 if( !p || p->magic!=VDBE_MAGIC_RUN ){ 00118 return SQLITE_MISUSE; 00119 } 00120 db = p->db; 00121 if( sqliteSafetyOn(db) ){ 00122 p->rc = SQLITE_MISUSE; 00123 return SQLITE_MISUSE; 00124 } 00125 if( p->explain ){ 00126 rc = sqliteVdbeList(p); 00127 }else{ 00128 rc = sqliteVdbeExec(p); 00129 } 00130 if( rc==SQLITE_DONE || rc==SQLITE_ROW ){ 00131 if( pazColName ) *pazColName = (const char**)p->azColName; 00132 if( pN ) *pN = p->nResColumn; 00133 }else{ 00134 if( pazColName) *pazColName = 0; 00135 if( pN ) *pN = 0; 00136 } 00137 if( pazValue ){ 00138 if( rc==SQLITE_ROW ){ 00139 *pazValue = (const char**)p->azResColumn; 00140 }else{ 00141 *pazValue = 0; 00142 } 00143 } 00144 if( sqliteSafetyOff(db) ){ 00145 return SQLITE_MISUSE; 00146 } 00147 return rc; 00148 } 00149 00150 /* 00151 ** Insert a new aggregate element and make it the element that 00152 ** has focus. 00153 ** 00154 ** Return 0 on success and 1 if memory is exhausted. 00155 */ 00156 static int AggInsert(Agg *p, char *zKey, int nKey){ 00157 AggElem *pElem, *pOld; 00158 int i; 00159 Mem *pMem; 00160 pElem = sqliteMalloc( sizeof(AggElem) + nKey + 00161 (p->nMem-1)*sizeof(pElem->aMem[0]) ); 00162 if( pElem==0 ) return 1; 00163 pElem->zKey = (char*)&pElem->aMem[p->nMem]; 00164 memcpy(pElem->zKey, zKey, nKey); 00165 pElem->nKey = nKey; 00166 pOld = sqliteHashInsert(&p->hash, pElem->zKey, pElem->nKey, pElem); 00167 if( pOld!=0 ){ 00168 assert( pOld==pElem ); /* Malloc failed on insert */ 00169 sqliteFree(pOld); 00170 return 0; 00171 } 00172 for(i=0, pMem=pElem->aMem; i<p->nMem; i++, pMem++){ 00173 pMem->flags = MEM_Null; 00174 } 00175 p->pCurrent = pElem; 00176 return 0; 00177 } 00178 00179 /* 00180 ** Get the AggElem currently in focus 00181 */ 00182 #define AggInFocus(P) ((P).pCurrent ? (P).pCurrent : _AggInFocus(&(P))) 00183 static AggElem *_AggInFocus(Agg *p){ 00184 HashElem *pElem = sqliteHashFirst(&p->hash); 00185 if( pElem==0 ){ 00186 AggInsert(p,"",1); 00187 pElem = sqliteHashFirst(&p->hash); 00188 } 00189 return pElem ? sqliteHashData(pElem) : 0; 00190 } 00191 00192 /* 00193 ** Convert the given stack entity into a string if it isn't one 00194 ** already. 00195 */ 00196 #define Stringify(P) if(((P)->flags & MEM_Str)==0){hardStringify(P);} 00197 static int hardStringify(Mem *pStack){ 00198 int fg = pStack->flags; 00199 if( fg & MEM_Real ){ 00200 sqlite_snprintf(sizeof(pStack->zShort),pStack->zShort,"%.15g",pStack->r); 00201 }else if( fg & MEM_Int ){ 00202 sqlite_snprintf(sizeof(pStack->zShort),pStack->zShort,"%d",pStack->i); 00203 }else{ 00204 pStack->zShort[0] = 0; 00205 } 00206 pStack->z = pStack->zShort; 00207 pStack->n = strlen(pStack->zShort)+1; 00208 pStack->flags = MEM_Str | MEM_Short; 00209 return 0; 00210 } 00211 00212 /* 00213 ** Convert the given stack entity into a string that has been obtained 00214 ** from sqliteMalloc(). This is different from Stringify() above in that 00215 ** Stringify() will use the NBFS bytes of static string space if the string 00216 ** will fit but this routine always mallocs for space. 00217 ** Return non-zero if we run out of memory. 00218 */ 00219 #define Dynamicify(P) (((P)->flags & MEM_Dyn)==0 ? hardDynamicify(P):0) 00220 static int hardDynamicify(Mem *pStack){ 00221 int fg = pStack->flags; 00222 char *z; 00223 if( (fg & MEM_Str)==0 ){ 00224 hardStringify(pStack); 00225 } 00226 assert( (fg & MEM_Dyn)==0 ); 00227 z = sqliteMallocRaw( pStack->n ); 00228 if( z==0 ) return 1; 00229 memcpy(z, pStack->z, pStack->n); 00230 pStack->z = z; 00231 pStack->flags |= MEM_Dyn; 00232 return 0; 00233 } 00234 00235 /* 00236 ** An ephemeral string value (signified by the MEM_Ephem flag) contains 00237 ** a pointer to a dynamically allocated string where some other entity 00238 ** is responsible for deallocating that string. Because the stack entry 00239 ** does not control the string, it might be deleted without the stack 00240 ** entry knowing it. 00241 ** 00242 ** This routine converts an ephemeral string into a dynamically allocated 00243 ** string that the stack entry itself controls. In other words, it 00244 ** converts an MEM_Ephem string into an MEM_Dyn string. 00245 */ 00246 #define Deephemeralize(P) \ 00247 if( ((P)->flags&MEM_Ephem)!=0 && hardDeephem(P) ){ goto no_mem;} 00248 static int hardDeephem(Mem *pStack){ 00249 char *z; 00250 assert( (pStack->flags & MEM_Ephem)!=0 ); 00251 z = sqliteMallocRaw( pStack->n ); 00252 if( z==0 ) return 1; 00253 memcpy(z, pStack->z, pStack->n); 00254 pStack->z = z; 00255 pStack->flags &= ~MEM_Ephem; 00256 pStack->flags |= MEM_Dyn; 00257 return 0; 00258 } 00259 00260 /* 00261 ** Release the memory associated with the given stack level. This 00262 ** leaves the Mem.flags field in an inconsistent state. 00263 */ 00264 #define Release(P) if((P)->flags&MEM_Dyn){ sqliteFree((P)->z); } 00265 00266 /* 00267 ** Pop the stack N times. 00268 */ 00269 static void popStack(Mem **ppTos, int N){ 00270 Mem *pTos = *ppTos; 00271 while( N>0 ){ 00272 N--; 00273 Release(pTos); 00274 pTos--; 00275 } 00276 *ppTos = pTos; 00277 } 00278 00279 /* 00280 ** Return TRUE if zNum is a 32-bit signed integer and write 00281 ** the value of the integer into *pNum. If zNum is not an integer 00282 ** or is an integer that is too large to be expressed with just 32 00283 ** bits, then return false. 00284 ** 00285 ** Under Linux (RedHat 7.2) this routine is much faster than atoi() 00286 ** for converting strings into integers. 00287 */ 00288 static int toInt(const char *zNum, int *pNum){ 00289 int v = 0; 00290 int neg; 00291 int i, c; 00292 if( *zNum=='-' ){ 00293 neg = 1; 00294 zNum++; 00295 }else if( *zNum=='+' ){ 00296 neg = 0; 00297 zNum++; 00298 }else{ 00299 neg = 0; 00300 } 00301 for(i=0; (c=zNum[i])>='0' && c<='9'; i++){ 00302 v = v*10 + c - '0'; 00303 } 00304 *pNum = neg ? -v : v; 00305 return c==0 && i>0 && (i<10 || (i==10 && memcmp(zNum,"2147483647",10)<=0)); 00306 } 00307 00308 /* 00309 ** Convert the given stack entity into a integer if it isn't one 00310 ** already. 00311 ** 00312 ** Any prior string or real representation is invalidated. 00313 ** NULLs are converted into 0. 00314 */ 00315 #define Integerify(P) if(((P)->flags&MEM_Int)==0){ hardIntegerify(P); } 00316 static void hardIntegerify(Mem *pStack){ 00317 if( pStack->flags & MEM_Real ){ 00318 pStack->i = (int)pStack->r; 00319 Release(pStack); 00320 }else if( pStack->flags & MEM_Str ){ 00321 toInt(pStack->z, &pStack->i); 00322 Release(pStack); 00323 }else{ 00324 pStack->i = 0; 00325 } 00326 pStack->flags = MEM_Int; 00327 } 00328 00329 /* 00330 ** Get a valid Real representation for the given stack element. 00331 ** 00332 ** Any prior string or integer representation is retained. 00333 ** NULLs are converted into 0.0. 00334 */ 00335 #define Realify(P) if(((P)->flags&MEM_Real)==0){ hardRealify(P); } 00336 static void hardRealify(Mem *pStack){ 00337 if( pStack->flags & MEM_Str ){ 00338 pStack->r = sqliteAtoF(pStack->z, 0); 00339 }else if( pStack->flags & MEM_Int ){ 00340 pStack->r = pStack->i; 00341 }else{ 00342 pStack->r = 0.0; 00343 } 00344 pStack->flags |= MEM_Real; 00345 } 00346 00347 /* 00348 ** The parameters are pointers to the head of two sorted lists 00349 ** of Sorter structures. Merge these two lists together and return 00350 ** a single sorted list. This routine forms the core of the merge-sort 00351 ** algorithm. 00352 ** 00353 ** In the case of a tie, left sorts in front of right. 00354 */ 00355 static Sorter *Merge(Sorter *pLeft, Sorter *pRight){ 00356 Sorter sHead; 00357 Sorter *pTail; 00358 pTail = &sHead; 00359 pTail->pNext = 0; 00360 while( pLeft && pRight ){ 00361 int c = sqliteSortCompare(pLeft->zKey, pRight->zKey); 00362 if( c<=0 ){ 00363 pTail->pNext = pLeft; 00364 pLeft = pLeft->pNext; 00365 }else{ 00366 pTail->pNext = pRight; 00367 pRight = pRight->pNext; 00368 } 00369 pTail = pTail->pNext; 00370 } 00371 if( pLeft ){ 00372 pTail->pNext = pLeft; 00373 }else if( pRight ){ 00374 pTail->pNext = pRight; 00375 } 00376 return sHead.pNext; 00377 } 00378 00379 /* 00380 ** The following routine works like a replacement for the standard 00381 ** library routine fgets(). The difference is in how end-of-line (EOL) 00382 ** is handled. Standard fgets() uses LF for EOL under unix, CRLF 00383 ** under windows, and CR under mac. This routine accepts any of these 00384 ** character sequences as an EOL mark. The EOL mark is replaced by 00385 ** a single LF character in zBuf. 00386 */ 00387 static char *vdbe_fgets(char *zBuf, int nBuf, FILE *in){ 00388 int i, c; 00389 for(i=0; i<nBuf-1 && (c=getc(in))!=EOF; i++){ 00390 zBuf[i] = c; 00391 if( c=='\r' || c=='\n' ){ 00392 if( c=='\r' ){ 00393 zBuf[i] = '\n'; 00394 c = getc(in); 00395 if( c!=EOF && c!='\n' ) ungetc(c, in); 00396 } 00397 i++; 00398 break; 00399 } 00400 } 00401 zBuf[i] = 0; 00402 return i>0 ? zBuf : 0; 00403 } 00404 00405 /* 00406 ** Make sure there is space in the Vdbe structure to hold at least 00407 ** mxCursor cursors. If there is not currently enough space, then 00408 ** allocate more. 00409 ** 00410 ** If a memory allocation error occurs, return 1. Return 0 if 00411 ** everything works. 00412 */ 00413 static int expandCursorArraySize(Vdbe *p, int mxCursor){ 00414 if( mxCursor>=p->nCursor ){ 00415 Cursor *aCsr = sqliteRealloc( p->aCsr, (mxCursor+1)*sizeof(Cursor) ); 00416 if( aCsr==0 ) return 1; 00417 p->aCsr = aCsr; 00418 memset(&p->aCsr[p->nCursor], 0, sizeof(Cursor)*(mxCursor+1-p->nCursor)); 00419 p->nCursor = mxCursor+1; 00420 } 00421 return 0; 00422 } 00423 00424 #ifdef VDBE_PROFILE 00425 /* 00426 ** The following routine only works on pentium-class processors. 00427 ** It uses the RDTSC opcode to read cycle count value out of the 00428 ** processor and returns that value. This can be used for high-res 00429 ** profiling. 00430 */ 00431 __inline__ unsigned long long int hwtime(void){ 00432 unsigned long long int x; 00433 __asm__("rdtsc\n\t" 00434 "mov %%edx, %%ecx\n\t" 00435 :"=A" (x)); 00436 return x; 00437 } 00438 #endif 00439 00440 /* 00441 ** The CHECK_FOR_INTERRUPT macro defined here looks to see if the 00442 ** sqlite_interrupt() routine has been called. If it has been, then 00443 ** processing of the VDBE program is interrupted. 00444 ** 00445 ** This macro added to every instruction that does a jump in order to 00446 ** implement a loop. This test used to be on every single instruction, 00447 ** but that meant we more testing that we needed. By only testing the 00448 ** flag on jump instructions, we get a (small) speed improvement. 00449 */ 00450 #define CHECK_FOR_INTERRUPT \ 00451 if( db->flags & SQLITE_Interrupt ) goto abort_due_to_interrupt; 00452 00453 00454 /* 00455 ** Execute as much of a VDBE program as we can then return. 00456 ** 00457 ** sqliteVdbeMakeReady() must be called before this routine in order to 00458 ** close the program with a final OP_Halt and to set up the callbacks 00459 ** and the error message pointer. 00460 ** 00461 ** Whenever a row or result data is available, this routine will either 00462 ** invoke the result callback (if there is one) or return with 00463 ** SQLITE_ROW. 00464 ** 00465 ** If an attempt is made to open a locked database, then this routine 00466 ** will either invoke the busy callback (if there is one) or it will 00467 ** return SQLITE_BUSY. 00468 ** 00469 ** If an error occurs, an error message is written to memory obtained 00470 ** from sqliteMalloc() and p->zErrMsg is made to point to that memory. 00471 ** The error code is stored in p->rc and this routine returns SQLITE_ERROR. 00472 ** 00473 ** If the callback ever returns non-zero, then the program exits 00474 ** immediately. There will be no error message but the p->rc field is 00475 ** set to SQLITE_ABORT and this routine will return SQLITE_ERROR. 00476 ** 00477 ** A memory allocation error causes p->rc to be set to SQLITE_NOMEM and this 00478 ** routine to return SQLITE_ERROR. 00479 ** 00480 ** Other fatal errors return SQLITE_ERROR. 00481 ** 00482 ** After this routine has finished, sqliteVdbeFinalize() should be 00483 ** used to clean up the mess that was left behind. 00484 */ 00485 int sqliteVdbeExec( 00486 Vdbe *p /* The VDBE */ 00487 ){ 00488 int pc; /* The program counter */ 00489 Op *pOp; /* Current operation */ 00490 int rc = SQLITE_OK; /* Value to return */ 00491 sqlite *db = p->db; /* The database */ 00492 Mem *pTos; /* Top entry in the operand stack */ 00493 char zBuf[100]; /* Space to sprintf() an integer */ 00494 #ifdef VDBE_PROFILE 00495 unsigned long long start; /* CPU clock count at start of opcode */ 00496 int origPc; /* Program counter at start of opcode */ 00497 #endif 00498 #ifndef SQLITE_OMIT_PROGRESS_CALLBACK 00499 int nProgressOps = 0; /* Opcodes executed since progress callback. */ 00500 #endif 00501 00502 if( p->magic!=VDBE_MAGIC_RUN ) return SQLITE_MISUSE; 00503 assert( db->magic==SQLITE_MAGIC_BUSY ); 00504 assert( p->rc==SQLITE_OK || p->rc==SQLITE_BUSY ); 00505 p->rc = SQLITE_OK; 00506 assert( p->explain==0 ); 00507 if( sqlite_malloc_failed ) goto no_mem; 00508 pTos = p->pTos; 00509 if( p->popStack ){ 00510 popStack(&pTos, p->popStack); 00511 p->popStack = 0; 00512 } 00513 CHECK_FOR_INTERRUPT; 00514 for(pc=p->pc; rc==SQLITE_OK; pc++){ 00515 assert( pc>=0 && pc<p->nOp ); 00516 assert( pTos<=&p->aStack[pc] ); 00517 #ifdef VDBE_PROFILE 00518 origPc = pc; 00519 start = hwtime(); 00520 #endif 00521 pOp = &p->aOp[pc]; 00522 00523 /* Only allow tracing if NDEBUG is not defined. 00524 */ 00525 #ifndef NDEBUG 00526 if( p->trace ){ 00527 sqliteVdbePrintOp(p->trace, pc, pOp); 00528 } 00529 #endif 00530 00531 /* Check to see if we need to simulate an interrupt. This only happens 00532 ** if we have a special test build. 00533 */ 00534 #ifdef SQLITE_TEST 00535 if( sqlite_interrupt_count>0 ){ 00536 sqlite_interrupt_count--; 00537 if( sqlite_interrupt_count==0 ){ 00538 sqlite_interrupt(db); 00539 } 00540 } 00541 #endif 00542 00543 #ifndef SQLITE_OMIT_PROGRESS_CALLBACK 00544 /* Call the progress callback if it is configured and the required number 00545 ** of VDBE ops have been executed (either since this invocation of 00546 ** sqliteVdbeExec() or since last time the progress callback was called). 00547 ** If the progress callback returns non-zero, exit the virtual machine with 00548 ** a return code SQLITE_ABORT. 00549 */ 00550 if( db->xProgress ){ 00551 if( db->nProgressOps==nProgressOps ){ 00552 if( db->xProgress(db->pProgressArg)!=0 ){ 00553 rc = SQLITE_ABORT; 00554 continue; /* skip to the next iteration of the for loop */ 00555 } 00556 nProgressOps = 0; 00557 } 00558 nProgressOps++; 00559 } 00560 #endif 00561 00562 switch( pOp->opcode ){ 00563 00564 /***************************************************************************** 00565 ** What follows is a massive switch statement where each case implements a 00566 ** separate instruction in the virtual machine. If we follow the usual 00567 ** indentation conventions, each case should be indented by 6 spaces. But 00568 ** that is a lot of wasted space on the left margin. So the code within 00569 ** the switch statement will break with convention and be flush-left. Another 00570 ** big comment (similar to this one) will mark the point in the code where 00571 ** we transition back to normal indentation. 00572 ** 00573 ** The formatting of each case is important. The makefile for SQLite 00574 ** generates two C files "opcodes.h" and "opcodes.c" by scanning this 00575 ** file looking for lines that begin with "case OP_". The opcodes.h files 00576 ** will be filled with #defines that give unique integer values to each 00577 ** opcode and the opcodes.c file is filled with an array of strings where 00578 ** each string is the symbolic name for the corresponding opcode. 00579 ** 00580 ** Documentation about VDBE opcodes is generated by scanning this file 00581 ** for lines of that contain "Opcode:". That line and all subsequent 00582 ** comment lines are used in the generation of the opcode.html documentation 00583 ** file. 00584 ** 00585 ** SUMMARY: 00586 ** 00587 ** Formatting is important to scripts that scan this file. 00588 ** Do not deviate from the formatting style currently in use. 00589 ** 00590 *****************************************************************************/ 00591 00592 /* Opcode: Goto * P2 * 00593 ** 00594 ** An unconditional jump to address P2. 00595 ** The next instruction executed will be 00596 ** the one at index P2 from the beginning of 00597 ** the program. 00598 */ 00599 case OP_Goto: { 00600 CHECK_FOR_INTERRUPT; 00601 pc = pOp->p2 - 1; 00602 break; 00603 } 00604 00605 /* Opcode: Gosub * P2 * 00606 ** 00607 ** Push the current address plus 1 onto the return address stack 00608 ** and then jump to address P2. 00609 ** 00610 ** The return address stack is of limited depth. If too many 00611 ** OP_Gosub operations occur without intervening OP_Returns, then 00612 ** the return address stack will fill up and processing will abort 00613 ** with a fatal error. 00614 */ 00615 case OP_Gosub: { 00616 if( p->returnDepth>=sizeof(p->returnStack)/sizeof(p->returnStack[0]) ){ 00617 sqliteSetString(&p->zErrMsg, "return address stack overflow", (char*)0); 00618 p->rc = SQLITE_INTERNAL; 00619 return SQLITE_ERROR; 00620 } 00621 p->returnStack[p->returnDepth++] = pc+1; 00622 pc = pOp->p2 - 1; 00623 break; 00624 } 00625 00626 /* Opcode: Return * * * 00627 ** 00628 ** Jump immediately to the next instruction after the last unreturned 00629 ** OP_Gosub. If an OP_Return has occurred for all OP_Gosubs, then 00630 ** processing aborts with a fatal error. 00631 */ 00632 case OP_Return: { 00633 if( p->returnDepth<=0 ){ 00634 sqliteSetString(&p->zErrMsg, "return address stack underflow", (char*)0); 00635 p->rc = SQLITE_INTERNAL; 00636 return SQLITE_ERROR; 00637 } 00638 p->returnDepth--; 00639 pc = p->returnStack[p->returnDepth] - 1; 00640 break; 00641 } 00642 00643 /* Opcode: Halt P1 P2 * 00644 ** 00645 ** Exit immediately. All open cursors, Lists, Sorts, etc are closed 00646 ** automatically. 00647 ** 00648 ** P1 is the result code returned by sqlite_exec(). For a normal 00649 ** halt, this should be SQLITE_OK (0). For errors, it can be some 00650 ** other value. If P1!=0 then P2 will determine whether or not to 00651 ** rollback the current transaction. Do not rollback if P2==OE_Fail. 00652 ** Do the rollback if P2==OE_Rollback. If P2==OE_Abort, then back 00653 ** out all changes that have occurred during this execution of the 00654 ** VDBE, but do not rollback the transaction. 00655 ** 00656 ** There is an implied "Halt 0 0 0" instruction inserted at the very end of 00657 ** every program. So a jump past the last instruction of the program 00658 ** is the same as executing Halt. 00659 */ 00660 case OP_Halt: { 00661 p->magic = VDBE_MAGIC_HALT; 00662 p->pTos = pTos; 00663 if( pOp->p1!=SQLITE_OK ){ 00664 p->rc = pOp->p1; 00665 p->errorAction = pOp->p2; 00666 if( pOp->p3 ){ 00667 sqliteSetString(&p->zErrMsg, pOp->p3, (char*)0); 00668 } 00669 return SQLITE_ERROR; 00670 }else{ 00671 p->rc = SQLITE_OK; 00672 return SQLITE_DONE; 00673 } 00674 } 00675 00676 /* Opcode: Integer P1 * P3 00677 ** 00678 ** The integer value P1 is pushed onto the stack. If P3 is not zero 00679 ** then it is assumed to be a string representation of the same integer. 00680 */ 00681 case OP_Integer: { 00682 pTos++; 00683 pTos->i = pOp->p1; 00684 pTos->flags = MEM_Int; 00685 if( pOp->p3 ){ 00686 pTos->z = pOp->p3; 00687 pTos->flags |= MEM_Str | MEM_Static; 00688 pTos->n = strlen(pOp->p3)+1; 00689 } 00690 break; 00691 } 00692 00693 /* Opcode: String * * P3 00694 ** 00695 ** The string value P3 is pushed onto the stack. If P3==0 then a 00696 ** NULL is pushed onto the stack. 00697 */ 00698 case OP_String: { 00699 char *z = pOp->p3; 00700 pTos++; 00701 if( z==0 ){ 00702 pTos->flags = MEM_Null; 00703 }else{ 00704 pTos->z = z; 00705 pTos->n = strlen(z) + 1; 00706 pTos->flags = MEM_Str | MEM_Static; 00707 } 00708 break; 00709 } 00710 00711 /* Opcode: Variable P1 * * 00712 ** 00713 ** Push the value of variable P1 onto the stack. A variable is 00714 ** an unknown in the original SQL string as handed to sqlite_compile(). 00715 ** Any occurance of the '?' character in the original SQL is considered 00716 ** a variable. Variables in the SQL string are number from left to 00717 ** right beginning with 1. The values of variables are set using the 00718 ** sqlite_bind() API. 00719 */ 00720 case OP_Variable: { 00721 int j = pOp->p1 - 1; 00722 pTos++; 00723 if( j>=0 && j<p->nVar && p->azVar[j]!=0 ){ 00724 pTos->z = p->azVar[j]; 00725 pTos->n = p->anVar[j]; 00726 pTos->flags = MEM_Str | MEM_Static; 00727 }else{ 00728 pTos->flags = MEM_Null; 00729 } 00730 break; 00731 } 00732 00733 /* Opcode: Pop P1 * * 00734 ** 00735 ** P1 elements are popped off of the top of stack and discarded. 00736 */ 00737 case OP_Pop: { 00738 assert( pOp->p1>=0 ); 00739 popStack(&pTos, pOp->p1); 00740 assert( pTos>=&p->aStack[-1] ); 00741 break; 00742 } 00743 00744 /* Opcode: Dup P1 P2 * 00745 ** 00746 ** A copy of the P1-th element of the stack 00747 ** is made and pushed onto the top of the stack. 00748 ** The top of the stack is element 0. So the 00749 ** instruction "Dup 0 0 0" will make a copy of the 00750 ** top of the stack. 00751 ** 00752 ** If the content of the P1-th element is a dynamically 00753 ** allocated string, then a new copy of that string 00754 ** is made if P2==0. If P2!=0, then just a pointer 00755 ** to the string is copied. 00756 ** 00757 ** Also see the Pull instruction. 00758 */ 00759 case OP_Dup: { 00760 Mem *pFrom = &pTos[-pOp->p1]; 00761 assert( pFrom<=pTos && pFrom>=p->aStack ); 00762 pTos++; 00763 memcpy(pTos, pFrom, sizeof(*pFrom)-NBFS); 00764 if( pTos->flags & MEM_Str ){ 00765 if( pOp->p2 && (pTos->flags & (MEM_Dyn|MEM_Ephem)) ){ 00766 pTos->flags &= ~MEM_Dyn; 00767 pTos->flags |= MEM_Ephem; 00768 }else if( pTos->flags & MEM_Short ){ 00769 memcpy(pTos->zShort, pFrom->zShort, pTos->n); 00770 pTos->z = pTos->zShort; 00771 }else if( (pTos->flags & MEM_Static)==0 ){ 00772 pTos->z = sqliteMallocRaw(pFrom->n); 00773 if( sqlite_malloc_failed ) goto no_mem; 00774 memcpy(pTos->z, pFrom->z, pFrom->n); 00775 pTos->flags &= ~(MEM_Static|MEM_Ephem|MEM_Short); 00776 pTos->flags |= MEM_Dyn; 00777 } 00778 } 00779 break; 00780 } 00781 00782 /* Opcode: Pull P1 * * 00783 ** 00784 ** The P1-th element is removed from its current location on 00785 ** the stack and pushed back on top of the stack. The 00786 ** top of the stack is element 0, so "Pull 0 0 0" is 00787 ** a no-op. "Pull 1 0 0" swaps the top two elements of 00788 ** the stack. 00789 ** 00790 ** See also the Dup instruction. 00791 */ 00792 case OP_Pull: { 00793 Mem *pFrom = &pTos[-pOp->p1]; 00794 int i; 00795 Mem ts; 00796 00797 ts = *pFrom; 00798 Deephemeralize(pTos); 00799 for(i=0; i<pOp->p1; i++, pFrom++){ 00800 Deephemeralize(&pFrom[1]); 00801 *pFrom = pFrom[1]; 00802 assert( (pFrom->flags & MEM_Ephem)==0 ); 00803 if( pFrom->flags & MEM_Short ){ 00804 assert( pFrom->flags & MEM_Str ); 00805 assert( pFrom->z==pFrom[1].zShort ); 00806 pFrom->z = pFrom->zShort; 00807 } 00808 } 00809 *pTos = ts; 00810 if( pTos->flags & MEM_Short ){ 00811 assert( pTos->flags & MEM_Str ); 00812 assert( pTos->z==pTos[-pOp->p1].zShort ); 00813 pTos->z = pTos->zShort; 00814 } 00815 break; 00816 } 00817 00818 /* Opcode: Push P1 * * 00819 ** 00820 ** Overwrite the value of the P1-th element down on the 00821 ** stack (P1==0 is the top of the stack) with the value 00822 ** of the top of the stack. Then pop the top of the stack. 00823 */ 00824 case OP_Push: { 00825 Mem *pTo = &pTos[-pOp->p1]; 00826 00827 assert( pTo>=p->aStack ); 00828 Deephemeralize(pTos); 00829 Release(pTo); 00830 *pTo = *pTos; 00831 if( pTo->flags & MEM_Short ){ 00832 assert( pTo->z==pTos->zShort ); 00833 pTo->z = pTo->zShort; 00834 } 00835 pTos--; 00836 break; 00837 } 00838 00839 00840 /* Opcode: ColumnName P1 P2 P3 00841 ** 00842 ** P3 becomes the P1-th column name (first is 0). An array of pointers 00843 ** to all column names is passed as the 4th parameter to the callback. 00844 ** If P2==1 then this is the last column in the result set and thus the 00845 ** number of columns in the result set will be P1. There must be at least 00846 ** one OP_ColumnName with a P2==1 before invoking OP_Callback and the 00847 ** number of columns specified in OP_Callback must one more than the P1 00848 ** value of the OP_ColumnName that has P2==1. 00849 */ 00850 case OP_ColumnName: { 00851 assert( pOp->p1>=0 && pOp->p1<p->nOp ); 00852 p->azColName[pOp->p1] = pOp->p3; 00853 p->nCallback = 0; 00854 if( pOp->p2 ) p->nResColumn = pOp->p1+1; 00855 break; 00856 } 00857 00858 /* Opcode: Callback P1 * * 00859 ** 00860 ** Pop P1 values off the stack and form them into an array. Then 00861 ** invoke the callback function using the newly formed array as the 00862 ** 3rd parameter. 00863 */ 00864 case OP_Callback: { 00865 int i; 00866 char **azArgv = p->zArgv; 00867 Mem *pCol; 00868 00869 pCol = &pTos[1-pOp->p1]; 00870 assert( pCol>=p->aStack ); 00871 for(i=0; i<pOp->p1; i++, pCol++){ 00872 if( pCol->flags & MEM_Null ){ 00873 azArgv[i] = 0; 00874 }else{ 00875 Stringify(pCol); 00876 azArgv[i] = pCol->z; 00877 } 00878 } 00879 azArgv[i] = 0; 00880 p->nCallback++; 00881 p->azResColumn = azArgv; 00882 assert( p->nResColumn==pOp->p1 ); 00883 p->popStack = pOp->p1; 00884 p->pc = pc + 1; 00885 p->pTos = pTos; 00886 return SQLITE_ROW; 00887 } 00888 00889 /* Opcode: Concat P1 P2 P3 00890 ** 00891 ** Look at the first P1 elements of the stack. Append them all 00892 ** together with the lowest element first. Use P3 as a separator. 00893 ** Put the result on the top of the stack. The original P1 elements 00894 ** are popped from the stack if P2==0 and retained if P2==1. If 00895 ** any element of the stack is NULL, then the result is NULL. 00896 ** 00897 ** If P3 is NULL, then use no separator. When P1==1, this routine 00898 ** makes a copy of the top stack element into memory obtained 00899 ** from sqliteMalloc(). 00900 */ 00901 case OP_Concat: { 00902 char *zNew; 00903 int nByte; 00904 int nField; 00905 int i, j; 00906 char *zSep; 00907 int nSep; 00908 Mem *pTerm; 00909 00910 nField = pOp->p1; 00911 zSep = pOp->p3; 00912 if( zSep==0 ) zSep = ""; 00913 nSep = strlen(zSep); 00914 assert( &pTos[1-nField] >= p->aStack ); 00915 nByte = 1 - nSep; 00916 pTerm = &pTos[1-nField]; 00917 for(i=0; i<nField; i++, pTerm++){ 00918 if( pTerm->flags & MEM_Null ){ 00919 nByte = -1; 00920 break; 00921 }else{ 00922 Stringify(pTerm); 00923 nByte += pTerm->n - 1 + nSep; 00924 } 00925 } 00926 if( nByte<0 ){ 00927 if( pOp->p2==0 ){ 00928 popStack(&pTos, nField); 00929 } 00930 pTos++; 00931 pTos->flags = MEM_Null; 00932 break; 00933 } 00934 zNew = sqliteMallocRaw( nByte ); 00935 if( zNew==0 ) goto no_mem; 00936 j = 0; 00937 pTerm = &pTos[1-nField]; 00938 for(i=j=0; i<nField; i++, pTerm++){ 00939 assert( pTerm->flags & MEM_Str ); 00940 memcpy(&zNew[j], pTerm->z, pTerm->n-1); 00941 j += pTerm->n-1; 00942 if( nSep>0 && i<nField-1 ){ 00943 memcpy(&zNew[j], zSep, nSep); 00944 j += nSep; 00945 } 00946 } 00947 zNew[j] = 0; 00948 if( pOp->p2==0 ){ 00949 popStack(&pTos, nField); 00950 } 00951 pTos++; 00952 pTos->n = nByte; 00953 pTos->flags = MEM_Str|MEM_Dyn; 00954 pTos->z = zNew; 00955 break; 00956 } 00957 00958 /* Opcode: Add * * * 00959 ** 00960 ** Pop the top two elements from the stack, add them together, 00961 ** and push the result back onto the stack. If either element 00962 ** is a string then it is converted to a double using the atof() 00963 ** function before the addition. 00964 ** If either operand is NULL, the result is NULL. 00965 */ 00966 /* Opcode: Multiply * * * 00967 ** 00968 ** Pop the top two elements from the stack, multiply them together, 00969 ** and push the result back onto the stack. If either element 00970 ** is a string then it is converted to a double using the atof() 00971 ** function before the multiplication. 00972 ** If either operand is NULL, the result is NULL. 00973 */ 00974 /* Opcode: Subtract * * * 00975 ** 00976 ** Pop the top two elements from the stack, subtract the 00977 ** first (what was on top of the stack) from the second (the 00978 ** next on stack) 00979 ** and push the result back onto the stack. If either element 00980 ** is a string then it is converted to a double using the atof() 00981 ** function before the subtraction. 00982 ** If either operand is NULL, the result is NULL. 00983 */ 00984 /* Opcode: Divide * * * 00985 ** 00986 ** Pop the top two elements from the stack, divide the 00987 ** first (what was on top of the stack) from the second (the 00988 ** next on stack) 00989 ** and push the result back onto the stack. If either element 00990 ** is a string then it is converted to a double using the atof() 00991 ** function before the division. Division by zero returns NULL. 00992 ** If either operand is NULL, the result is NULL. 00993 */ 00994 /* Opcode: Remainder * * * 00995 ** 00996 ** Pop the top two elements from the stack, divide the 00997 ** first (what was on top of the stack) from the second (the 00998 ** next on stack) 00999 ** and push the remainder after division onto the stack. If either element 01000 ** is a string then it is converted to a double using the atof() 01001 ** function before the division. Division by zero returns NULL. 01002 ** If either operand is NULL, the result is NULL. 01003 */ 01004 case OP_Add: 01005 case OP_Subtract: 01006 case OP_Multiply: 01007 case OP_Divide: 01008 case OP_Remainder: { 01009 Mem *pNos = &pTos[-1]; 01010 assert( pNos>=p->aStack ); 01011 if( ((pTos->flags | pNos->flags) & MEM_Null)!=0 ){ 01012 Release(pTos); 01013 pTos--; 01014 Release(pTos); 01015 pTos->flags = MEM_Null; 01016 }else if( (pTos->flags & pNos->flags & MEM_Int)==MEM_Int ){ 01017 int a, b; 01018 a = pTos->i; 01019 b = pNos->i; 01020 switch( pOp->opcode ){ 01021 case OP_Add: b += a; break; 01022 case OP_Subtract: b -= a; break; 01023 case OP_Multiply: b *= a; break; 01024 case OP_Divide: { 01025 if( a==0 ) goto divide_by_zero; 01026 b /= a; 01027 break; 01028 } 01029 default: { 01030 if( a==0 ) goto divide_by_zero; 01031 b %= a; 01032 break; 01033 } 01034 } 01035 Release(pTos); 01036 pTos--; 01037 Release(pTos); 01038 pTos->i = b; 01039 pTos->flags = MEM_Int; 01040 }else{ 01041 double a, b; 01042 Realify(pTos); 01043 Realify(pNos); 01044 a = pTos->r; 01045 b = pNos->r; 01046 switch( pOp->opcode ){ 01047 case OP_Add: b += a; break; 01048 case OP_Subtract: b -= a; break; 01049 case OP_Multiply: b *= a; break; 01050 case OP_Divide: { 01051 if( a==0.0 ) goto divide_by_zero; 01052 b /= a; 01053 break; 01054 } 01055 default: { 01056 int ia = (int)a; 01057 int ib = (int)b; 01058 if( ia==0.0 ) goto divide_by_zero; 01059 b = ib % ia; 01060 break; 01061 } 01062 } 01063 Release(pTos); 01064 pTos--; 01065 Release(pTos); 01066 pTos->r = b; 01067 pTos->flags = MEM_Real; 01068 } 01069 break; 01070 01071 divide_by_zero: 01072 Release(pTos); 01073 pTos--; 01074 Release(pTos); 01075 pTos->flags = MEM_Null; 01076 break; 01077 } 01078 01079 /* Opcode: Function P1 * P3 01080 ** 01081 ** Invoke a user function (P3 is a pointer to a Function structure that 01082 ** defines the function) with P1 string arguments taken from the stack. 01083 ** Pop all arguments from the stack and push back the result. 01084 ** 01085 ** See also: AggFunc 01086 */ 01087 case OP_Function: { 01088 int n, i; 01089 Mem *pArg; 01090 char **azArgv; 01091 sqlite_func ctx; 01092 01093 n = pOp->p1; 01094 pArg = &pTos[1-n]; 01095 azArgv = p->zArgv; 01096 for(i=0; i<n; i++, pArg++){ 01097 if( pArg->flags & MEM_Null ){ 01098 azArgv[i] = 0; 01099 }else{ 01100 Stringify(pArg); 01101 azArgv[i] = pArg->z; 01102 } 01103 } 01104 ctx.pFunc = (FuncDef*)pOp->p3; 01105 ctx.s.flags = MEM_Null; 01106 ctx.s.z = 0; 01107 ctx.isError = 0; 01108 ctx.isStep = 0; 01109 if( sqliteSafetyOff(db) ) goto abort_due_to_misuse; 01110 (*ctx.pFunc->xFunc)(&ctx, n, (const char**)azArgv); 01111 if( sqliteSafetyOn(db) ) goto abort_due_to_misuse; 01112 popStack(&pTos, n); 01113 pTos++; 01114 *pTos = ctx.s; 01115 if( pTos->flags & MEM_Short ){ 01116 pTos->z = pTos->zShort; 01117 } 01118 if( ctx.isError ){ 01119 sqliteSetString(&p->zErrMsg, 01120 (pTos->flags & MEM_Str)!=0 ? pTos->z : "user function error", (char*)0); 01121 rc = SQLITE_ERROR; 01122 } 01123 break; 01124 } 01125 01126 /* Opcode: BitAnd * * * 01127 ** 01128 ** Pop the top two elements from the stack. Convert both elements 01129 ** to integers. Push back onto the stack the bit-wise AND of the 01130 ** two elements. 01131 ** If either operand is NULL, the result is NULL. 01132 */ 01133 /* Opcode: BitOr * * * 01134 ** 01135 ** Pop the top two elements from the stack. Convert both elements 01136 ** to integers. Push back onto the stack the bit-wise OR of the 01137 ** two elements. 01138 ** If either operand is NULL, the result is NULL. 01139 */ 01140 /* Opcode: ShiftLeft * * * 01141 ** 01142 ** Pop the top two elements from the stack. Convert both elements 01143 ** to integers. Push back onto the stack the top element shifted 01144 ** left by N bits where N is the second element on the stack. 01145 ** If either operand is NULL, the result is NULL. 01146 */ 01147 /* Opcode: ShiftRight * * * 01148 ** 01149 ** Pop the top two elements from the stack. Convert both elements 01150 ** to integers. Push back onto the stack the top element shifted 01151 ** right by N bits where N is the second element on the stack. 01152 ** If either operand is NULL, the result is NULL. 01153 */ 01154 case OP_BitAnd: 01155 case OP_BitOr: 01156 case OP_ShiftLeft: 01157 case OP_ShiftRight: { 01158 Mem *pNos = &pTos[-1]; 01159 int a, b; 01160 01161 assert( pNos>=p->aStack ); 01162 if( (pTos->flags | pNos->flags) & MEM_Null ){ 01163 popStack(&pTos, 2); 01164 pTos++; 01165 pTos->flags = MEM_Null; 01166 break; 01167 } 01168 Integerify(pTos); 01169 Integerify(pNos); 01170 a = pTos->i; 01171 b = pNos->i; 01172 switch( pOp->opcode ){ 01173 case OP_BitAnd: a &= b; break; 01174 case OP_BitOr: a |= b; break; 01175 case OP_ShiftLeft: a <<= b; break; 01176 case OP_ShiftRight: a >>= b; break; 01177 default: /* CANT HAPPEN */ break; 01178 } 01179 assert( (pTos->flags & MEM_Dyn)==0 ); 01180 assert( (pNos->flags & MEM_Dyn)==0 ); 01181 pTos--; 01182 Release(pTos); 01183 pTos->i = a; 01184 pTos->flags = MEM_Int; 01185 break; 01186 } 01187 01188 /* Opcode: AddImm P1 * * 01189 ** 01190 ** Add the value P1 to whatever is on top of the stack. The result 01191 ** is always an integer. 01192 ** 01193 ** To force the top of the stack to be an integer, just add 0. 01194 */ 01195 case OP_AddImm: { 01196 assert( pTos>=p->aStack ); 01197 Integerify(pTos); 01198 pTos->i += pOp->p1; 01199 break; 01200 } 01201 01202 /* Opcode: ForceInt P1 P2 * 01203 ** 01204 ** Convert the top of the stack into an integer. If the current top of 01205 ** the stack is not numeric (meaning that is is a NULL or a string that 01206 ** does not look like an integer or floating point number) then pop the 01207 ** stack and jump to P2. If the top of the stack is numeric then 01208 ** convert it into the least integer that is greater than or equal to its 01209 ** current value if P1==0, or to the least integer that is strictly 01210 ** greater than its current value if P1==1. 01211 */ 01212 case OP_ForceInt: { 01213 int v; 01214 assert( pTos>=p->aStack ); 01215 if( (pTos->flags & (MEM_Int|MEM_Real))==0 01216 && ((pTos->flags & MEM_Str)==0 || sqliteIsNumber(pTos->z)==0) ){ 01217 Release(pTos); 01218 pTos--; 01219 pc = pOp->p2 - 1; 01220 break; 01221 } 01222 if( pTos->flags & MEM_Int ){ 01223 v = pTos->i + (pOp->p1!=0); 01224 }else{ 01225 Realify(pTos); 01226 v = (int)pTos->r; 01227 if( pTos->r>(double)v ) v++; 01228 if( pOp->p1 && pTos->r==(double)v ) v++; 01229 } 01230 Release(pTos); 01231 pTos->i = v; 01232 pTos->flags = MEM_Int; 01233 break; 01234 } 01235 01236 /* Opcode: MustBeInt P1 P2 * 01237 ** 01238 ** Force the top of the stack to be an integer. If the top of the 01239 ** stack is not an integer and cannot be converted into an integer 01240 ** with out data loss, then jump immediately to P2, or if P2==0 01241 ** raise an SQLITE_MISMATCH exception. 01242 ** 01243 ** If the top of the stack is not an integer and P2 is not zero and 01244 ** P1 is 1, then the stack is popped. In all other cases, the depth 01245 ** of the stack is unchanged. 01246 */ 01247 case OP_MustBeInt: { 01248 assert( pTos>=p->aStack ); 01249 if( pTos->flags & MEM_Int ){ 01250 /* Do nothing */ 01251 }else if( pTos->flags & MEM_Real ){ 01252 int i = (int)pTos->r; 01253 double r = (double)i; 01254 if( r!=pTos->r ){ 01255 goto mismatch; 01256 } 01257 pTos->i = i; 01258 }else if( pTos->flags & MEM_Str ){ 01259 int v; 01260 if( !toInt(pTos->z, &v) ){ 01261 double r; 01262 if( !sqliteIsNumber(pTos->z) ){ 01263 goto mismatch; 01264 } 01265 Realify(pTos); 01266 v = (int)pTos->r; 01267 r = (double)v; 01268 if( r!=pTos->r ){ 01269 goto mismatch; 01270 } 01271 } 01272 pTos->i = v; 01273 }else{ 01274 goto mismatch; 01275 } 01276 Release(pTos); 01277 pTos->flags = MEM_Int; 01278 break; 01279 01280 mismatch: 01281 if( pOp->p2==0 ){ 01282 rc = SQLITE_MISMATCH; 01283 goto abort_due_to_error; 01284 }else{ 01285 if( pOp->p1 ) popStack(&pTos, 1); 01286 pc = pOp->p2 - 1; 01287 } 01288 break; 01289 } 01290 01291 /* Opcode: Eq P1 P2 * 01292 ** 01293 ** Pop the top two elements from the stack. If they are equal, then 01294 ** jump to instruction P2. Otherwise, continue to the next instruction. 01295 ** 01296 ** If either operand is NULL (and thus if the result is unknown) then 01297 ** take the jump if P1 is true. 01298 ** 01299 ** If both values are numeric, they are converted to doubles using atof() 01300 ** and compared for equality that way. Otherwise the strcmp() library 01301 ** routine is used for the comparison. For a pure text comparison 01302 ** use OP_StrEq. 01303 ** 01304 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01305 ** stack if the jump would have been taken, or a 0 if not. Push a 01306 ** NULL if either operand was NULL. 01307 */ 01308 /* Opcode: Ne P1 P2 * 01309 ** 01310 ** Pop the top two elements from the stack. If they are not equal, then 01311 ** jump to instruction P2. Otherwise, continue to the next instruction. 01312 ** 01313 ** If either operand is NULL (and thus if the result is unknown) then 01314 ** take the jump if P1 is true. 01315 ** 01316 ** If both values are numeric, they are converted to doubles using atof() 01317 ** and compared in that format. Otherwise the strcmp() library 01318 ** routine is used for the comparison. For a pure text comparison 01319 ** use OP_StrNe. 01320 ** 01321 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01322 ** stack if the jump would have been taken, or a 0 if not. Push a 01323 ** NULL if either operand was NULL. 01324 */ 01325 /* Opcode: Lt P1 P2 * 01326 ** 01327 ** Pop the top two elements from the stack. If second element (the 01328 ** next on stack) is less than the first (the top of stack), then 01329 ** jump to instruction P2. Otherwise, continue to the next instruction. 01330 ** In other words, jump if NOS<TOS. 01331 ** 01332 ** If either operand is NULL (and thus if the result is unknown) then 01333 ** take the jump if P1 is true. 01334 ** 01335 ** If both values are numeric, they are converted to doubles using atof() 01336 ** and compared in that format. Numeric values are always less than 01337 ** non-numeric values. If both operands are non-numeric, the strcmp() library 01338 ** routine is used for the comparison. For a pure text comparison 01339 ** use OP_StrLt. 01340 ** 01341 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01342 ** stack if the jump would have been taken, or a 0 if not. Push a 01343 ** NULL if either operand was NULL. 01344 */ 01345 /* Opcode: Le P1 P2 * 01346 ** 01347 ** Pop the top two elements from the stack. If second element (the 01348 ** next on stack) is less than or equal to the first (the top of stack), 01349 ** then jump to instruction P2. In other words, jump if NOS<=TOS. 01350 ** 01351 ** If either operand is NULL (and thus if the result is unknown) then 01352 ** take the jump if P1 is true. 01353 ** 01354 ** If both values are numeric, they are converted to doubles using atof() 01355 ** and compared in that format. Numeric values are always less than 01356 ** non-numeric values. If both operands are non-numeric, the strcmp() library 01357 ** routine is used for the comparison. For a pure text comparison 01358 ** use OP_StrLe. 01359 ** 01360 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01361 ** stack if the jump would have been taken, or a 0 if not. Push a 01362 ** NULL if either operand was NULL. 01363 */ 01364 /* Opcode: Gt P1 P2 * 01365 ** 01366 ** Pop the top two elements from the stack. If second element (the 01367 ** next on stack) is greater than the first (the top of stack), 01368 ** then jump to instruction P2. In other words, jump if NOS>TOS. 01369 ** 01370 ** If either operand is NULL (and thus if the result is unknown) then 01371 ** take the jump if P1 is true. 01372 ** 01373 ** If both values are numeric, they are converted to doubles using atof() 01374 ** and compared in that format. Numeric values are always less than 01375 ** non-numeric values. If both operands are non-numeric, the strcmp() library 01376 ** routine is used for the comparison. For a pure text comparison 01377 ** use OP_StrGt. 01378 ** 01379 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01380 ** stack if the jump would have been taken, or a 0 if not. Push a 01381 ** NULL if either operand was NULL. 01382 */ 01383 /* Opcode: Ge P1 P2 * 01384 ** 01385 ** Pop the top two elements from the stack. If second element (the next 01386 ** on stack) is greater than or equal to the first (the top of stack), 01387 ** then jump to instruction P2. In other words, jump if NOS>=TOS. 01388 ** 01389 ** If either operand is NULL (and thus if the result is unknown) then 01390 ** take the jump if P1 is true. 01391 ** 01392 ** If both values are numeric, they are converted to doubles using atof() 01393 ** and compared in that format. Numeric values are always less than 01394 ** non-numeric values. If both operands are non-numeric, the strcmp() library 01395 ** routine is used for the comparison. For a pure text comparison 01396 ** use OP_StrGe. 01397 ** 01398 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01399 ** stack if the jump would have been taken, or a 0 if not. Push a 01400 ** NULL if either operand was NULL. 01401 */ 01402 case OP_Eq: 01403 case OP_Ne: 01404 case OP_Lt: 01405 case OP_Le: 01406 case OP_Gt: 01407 case OP_Ge: { 01408 Mem *pNos = &pTos[-1]; 01409 int c, v; 01410 int ft, fn; 01411 assert( pNos>=p->aStack ); 01412 ft = pTos->flags; 01413 fn = pNos->flags; 01414 if( (ft | fn) & MEM_Null ){ 01415 popStack(&pTos, 2); 01416 if( pOp->p2 ){ 01417 if( pOp->p1 ) pc = pOp->p2-1; 01418 }else{ 01419 pTos++; 01420 pTos->flags = MEM_Null; 01421 } 01422 break; 01423 }else if( (ft & fn & MEM_Int)==MEM_Int ){ 01424 c = pNos->i - pTos->i; 01425 }else if( (ft & MEM_Int)!=0 && (fn & MEM_Str)!=0 && toInt(pNos->z,&v) ){ 01426 c = v - pTos->i; 01427 }else if( (fn & MEM_Int)!=0 && (ft & MEM_Str)!=0 && toInt(pTos->z,&v) ){ 01428 c = pNos->i - v; 01429 }else{ 01430 Stringify(pTos); 01431 Stringify(pNos); 01432 c = sqliteCompare(pNos->z, pTos->z); 01433 } 01434 switch( pOp->opcode ){ 01435 case OP_Eq: c = c==0; break; 01436 case OP_Ne: c = c!=0; break; 01437 case OP_Lt: c = c<0; break; 01438 case OP_Le: c = c<=0; break; 01439 case OP_Gt: c = c>0; break; 01440 default: c = c>=0; break; 01441 } 01442 popStack(&pTos, 2); 01443 if( pOp->p2 ){ 01444 if( c ) pc = pOp->p2-1; 01445 }else{ 01446 pTos++; 01447 pTos->i = c; 01448 pTos->flags = MEM_Int; 01449 } 01450 break; 01451 } 01452 /* INSERT NO CODE HERE! 01453 ** 01454 ** The opcode numbers are extracted from this source file by doing 01455 ** 01456 ** grep '^case OP_' vdbe.c | ... >opcodes.h 01457 ** 01458 ** The opcodes are numbered in the order that they appear in this file. 01459 ** But in order for the expression generating code to work right, the 01460 ** string comparison operators that follow must be numbered exactly 6 01461 ** greater than the numeric comparison opcodes above. So no other 01462 ** cases can appear between the two. 01463 */ 01464 /* Opcode: StrEq P1 P2 * 01465 ** 01466 ** Pop the top two elements from the stack. If they are equal, then 01467 ** jump to instruction P2. Otherwise, continue to the next instruction. 01468 ** 01469 ** If either operand is NULL (and thus if the result is unknown) then 01470 ** take the jump if P1 is true. 01471 ** 01472 ** The strcmp() library routine is used for the comparison. For a 01473 ** numeric comparison, use OP_Eq. 01474 ** 01475 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01476 ** stack if the jump would have been taken, or a 0 if not. Push a 01477 ** NULL if either operand was NULL. 01478 */ 01479 /* Opcode: StrNe P1 P2 * 01480 ** 01481 ** Pop the top two elements from the stack. If they are not equal, then 01482 ** jump to instruction P2. Otherwise, continue to the next instruction. 01483 ** 01484 ** If either operand is NULL (and thus if the result is unknown) then 01485 ** take the jump if P1 is true. 01486 ** 01487 ** The strcmp() library routine is used for the comparison. For a 01488 ** numeric comparison, use OP_Ne. 01489 ** 01490 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01491 ** stack if the jump would have been taken, or a 0 if not. Push a 01492 ** NULL if either operand was NULL. 01493 */ 01494 /* Opcode: StrLt P1 P2 * 01495 ** 01496 ** Pop the top two elements from the stack. If second element (the 01497 ** next on stack) is less than the first (the top of stack), then 01498 ** jump to instruction P2. Otherwise, continue to the next instruction. 01499 ** In other words, jump if NOS<TOS. 01500 ** 01501 ** If either operand is NULL (and thus if the result is unknown) then 01502 ** take the jump if P1 is true. 01503 ** 01504 ** The strcmp() library routine is used for the comparison. For a 01505 ** numeric comparison, use OP_Lt. 01506 ** 01507 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01508 ** stack if the jump would have been taken, or a 0 if not. Push a 01509 ** NULL if either operand was NULL. 01510 */ 01511 /* Opcode: StrLe P1 P2 * 01512 ** 01513 ** Pop the top two elements from the stack. If second element (the 01514 ** next on stack) is less than or equal to the first (the top of stack), 01515 ** then jump to instruction P2. In other words, jump if NOS<=TOS. 01516 ** 01517 ** If either operand is NULL (and thus if the result is unknown) then 01518 ** take the jump if P1 is true. 01519 ** 01520 ** The strcmp() library routine is used for the comparison. For a 01521 ** numeric comparison, use OP_Le. 01522 ** 01523 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01524 ** stack if the jump would have been taken, or a 0 if not. Push a 01525 ** NULL if either operand was NULL. 01526 */ 01527 /* Opcode: StrGt P1 P2 * 01528 ** 01529 ** Pop the top two elements from the stack. If second element (the 01530 ** next on stack) is greater than the first (the top of stack), 01531 ** then jump to instruction P2. In other words, jump if NOS>TOS. 01532 ** 01533 ** If either operand is NULL (and thus if the result is unknown) then 01534 ** take the jump if P1 is true. 01535 ** 01536 ** The strcmp() library routine is used for the comparison. For a 01537 ** numeric comparison, use OP_Gt. 01538 ** 01539 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01540 ** stack if the jump would have been taken, or a 0 if not. Push a 01541 ** NULL if either operand was NULL. 01542 */ 01543 /* Opcode: StrGe P1 P2 * 01544 ** 01545 ** Pop the top two elements from the stack. If second element (the next 01546 ** on stack) is greater than or equal to the first (the top of stack), 01547 ** then jump to instruction P2. In other words, jump if NOS>=TOS. 01548 ** 01549 ** If either operand is NULL (and thus if the result is unknown) then 01550 ** take the jump if P1 is true. 01551 ** 01552 ** The strcmp() library routine is used for the comparison. For a 01553 ** numeric comparison, use OP_Ge. 01554 ** 01555 ** If P2 is zero, do not jump. Instead, push an integer 1 onto the 01556 ** stack if the jump would have been taken, or a 0 if not. Push a 01557 ** NULL if either operand was NULL. 01558 */ 01559 case OP_StrEq: 01560 case OP_StrNe: 01561 case OP_StrLt: 01562 case OP_StrLe: 01563 case OP_StrGt: 01564 case OP_StrGe: { 01565 Mem *pNos = &pTos[-1]; 01566 int c; 01567 assert( pNos>=p->aStack ); 01568 if( (pNos->flags | pTos->flags) & MEM_Null ){ 01569 popStack(&pTos, 2); 01570 if( pOp->p2 ){ 01571 if( pOp->p1 ) pc = pOp->p2-1; 01572 }else{ 01573 pTos++; 01574 pTos->flags = MEM_Null; 01575 } 01576 break; 01577 }else{ 01578 Stringify(pTos); 01579 Stringify(pNos); 01580 c = strcmp(pNos->z, pTos->z); 01581 } 01582 /* The asserts on each case of the following switch are there to verify 01583 ** that string comparison opcodes are always exactly 6 greater than the 01584 ** corresponding numeric comparison opcodes. The code generator depends 01585 ** on this fact. 01586 */ 01587 switch( pOp->opcode ){ 01588 case OP_StrEq: c = c==0; assert( pOp->opcode-6==OP_Eq ); break; 01589 case OP_StrNe: c = c!=0; assert( pOp->opcode-6==OP_Ne ); break; 01590 case OP_StrLt: c = c<0; assert( pOp->opcode-6==OP_Lt ); break; 01591 case OP_StrLe: c = c<=0; assert( pOp->opcode-6==OP_Le ); break; 01592 case OP_StrGt: c = c>0; assert( pOp->opcode-6==OP_Gt ); break; 01593 default: c = c>=0; assert( pOp->opcode-6==OP_Ge ); break; 01594 } 01595 popStack(&pTos, 2); 01596 if( pOp->p2 ){ 01597 if( c ) pc = pOp->p2-1; 01598 }else{ 01599 pTos++; 01600 pTos->flags = MEM_Int; 01601 pTos->i = c; 01602 } 01603 break; 01604 } 01605 01606 /* Opcode: And * * * 01607 ** 01608 ** Pop two values off the stack. Take the logical AND of the 01609 ** two values and push the resulting boolean value back onto the 01610 ** stack. 01611 */ 01612 /* Opcode: Or * * * 01613 ** 01614 ** Pop two values off the stack. Take the logical OR of the 01615 ** two values and push the resulting boolean value back onto the 01616 ** stack. 01617 */ 01618 case OP_And: 01619 case OP_Or: { 01620 Mem *pNos = &pTos[-1]; 01621 int v1, v2; /* 0==TRUE, 1==FALSE, 2==UNKNOWN or NULL */ 01622 01623 assert( pNos>=p->aStack ); 01624 if( pTos->flags & MEM_Null ){ 01625 v1 = 2; 01626 }else{ 01627 Integerify(pTos); 01628 v1 = pTos->i==0; 01629 } 01630 if( pNos->flags & MEM_Null ){ 01631 v2 = 2; 01632 }else{ 01633 Integerify(pNos); 01634 v2 = pNos->i==0; 01635 } 01636 if( pOp->opcode==OP_And ){ 01637 static const unsigned char and_logic[] = { 0, 1, 2, 1, 1, 1, 2, 1, 2 }; 01638 v1 = and_logic[v1*3+v2]; 01639 }else{ 01640 static const unsigned char or_logic[] = { 0, 0, 0, 0, 1, 2, 0, 2, 2 }; 01641 v1 = or_logic[v1*3+v2]; 01642 } 01643 popStack(&pTos, 2); 01644 pTos++; 01645 if( v1==2 ){ 01646 pTos->flags = MEM_Null; 01647 }else{ 01648 pTos->i = v1==0; 01649 pTos->flags = MEM_Int; 01650 } 01651 break; 01652 } 01653 01654 /* Opcode: Negative * * * 01655 ** 01656 ** Treat the top of the stack as a numeric quantity. Replace it 01657 ** with its additive inverse. If the top of the stack is NULL 01658 ** its value is unchanged. 01659 */ 01660 /* Opcode: AbsValue * * * 01661 ** 01662 ** Treat the top of the stack as a numeric quantity. Replace it 01663 ** with its absolute value. If the top of the stack is NULL 01664 ** its value is unchanged. 01665 */ 01666 case OP_Negative: 01667 case OP_AbsValue: { 01668 assert( pTos>=p->aStack ); 01669 if( pTos->flags & MEM_Real ){ 01670 Release(pTos); 01671 if( pOp->opcode==OP_Negative || pTos->r<0.0 ){ 01672 pTos->r = -pTos->r; 01673 } 01674 pTos->flags = MEM_Real; 01675 }else if( pTos->flags & MEM_Int ){ 01676 Release(pTos); 01677 if( pOp->opcode==OP_Negative || pTos->i<0 ){ 01678 pTos->i = -pTos->i; 01679 } 01680 pTos->flags = MEM_Int; 01681 }else if( pTos->flags & MEM_Null ){ 01682 /* Do nothing */ 01683 }else{ 01684 Realify(pTos); 01685 Release(pTos); 01686 if( pOp->opcode==OP_Negative || pTos->r<0.0 ){ 01687 pTos->r = -pTos->r; 01688 } 01689 pTos->flags = MEM_Real; 01690 } 01691 break; 01692 } 01693 01694 /* Opcode: Not * * * 01695 ** 01696 ** Interpret the top of the stack as a boolean value. Replace it 01697 ** with its complement. If the top of the stack is NULL its value 01698 ** is unchanged. 01699 */ 01700 case OP_Not: { 01701 assert( pTos>=p->aStack ); 01702 if( pTos->flags & MEM_Null ) break; /* Do nothing to NULLs */ 01703 Integerify(pTos); 01704 Release(pTos); 01705 pTos->i = !pTos->i; 01706 pTos->flags = MEM_Int; 01707 break; 01708 } 01709 01710 /* Opcode: BitNot * * * 01711 ** 01712 ** Interpret the top of the stack as an value. Replace it 01713 ** with its ones-complement. If the top of the stack is NULL its 01714 ** value is unchanged. 01715 */ 01716 case OP_BitNot: { 01717 assert( pTos>=p->aStack ); 01718 if( pTos->flags & MEM_Null ) break; /* Do nothing to NULLs */ 01719 Integerify(pTos); 01720 Release(pTos); 01721 pTos->i = ~pTos->i; 01722 pTos->flags = MEM_Int; 01723 break; 01724 } 01725 01726 /* Opcode: Noop * * * 01727 ** 01728 ** Do nothing. This instruction is often useful as a jump 01729 ** destination. 01730 */ 01731 case OP_Noop: { 01732 break; 01733 } 01734 01735 /* Opcode: If P1 P2 * 01736 ** 01737 ** Pop a single boolean from the stack. If the boolean popped is 01738 ** true, then jump to p2. Otherwise continue to the next instruction. 01739 ** An integer is false if zero and true otherwise. A string is 01740 ** false if it has zero length and true otherwise. 01741 ** 01742 ** If the value popped of the stack is NULL, then take the jump if P1 01743 ** is true and fall through if P1 is false. 01744 */ 01745 /* Opcode: IfNot P1 P2 * 01746 ** 01747 ** Pop a single boolean from the stack. If the boolean popped is 01748 ** false, then jump to p2. Otherwise continue to the next instruction. 01749 ** An integer is false if zero and true otherwise. A string is 01750 ** false if it has zero length and true otherwise. 01751 ** 01752 ** If the value popped of the stack is NULL, then take the jump if P1 01753 ** is true and fall through if P1 is false. 01754 */ 01755 case OP_If: 01756 case OP_IfNot: { 01757 int c; 01758 assert( pTos>=p->aStack ); 01759 if( pTos->flags & MEM_Null ){ 01760 c = pOp->p1; 01761 }else{ 01762 Integerify(pTos); 01763 c = pTos->i; 01764 if( pOp->opcode==OP_IfNot ) c = !c; 01765 } 01766 assert( (pTos->flags & MEM_Dyn)==0 ); 01767 pTos--; 01768 if( c ) pc = pOp->p2-1; 01769 break; 01770 } 01771 01772 /* Opcode: IsNull P1 P2 * 01773 ** 01774 ** If any of the top abs(P1) values on the stack are NULL, then jump 01775 ** to P2. Pop the stack P1 times if P1>0. If P1<0 leave the stack 01776 ** unchanged. 01777 */ 01778 case OP_IsNull: { 01779 int i, cnt; 01780 Mem *pTerm; 01781 cnt = pOp->p1; 01782 if( cnt<0 ) cnt = -cnt; 01783 pTerm = &pTos[1-cnt]; 01784 assert( pTerm>=p->aStack ); 01785 for(i=0; i<cnt; i++, pTerm++){ 01786 if( pTerm->flags & MEM_Null ){ 01787 pc = pOp->p2-1; 01788 break; 01789 } 01790 } 01791 if( pOp->p1>0 ) popStack(&pTos, cnt); 01792 break; 01793 } 01794 01795 /* Opcode: NotNull P1 P2 * 01796 ** 01797 ** Jump to P2 if the top P1 values on the stack are all not NULL. Pop the 01798 ** stack if P1 times if P1 is greater than zero. If P1 is less than 01799 ** zero then leave the stack unchanged. 01800 */ 01801 case OP_NotNull: { 01802 int i, cnt; 01803 cnt = pOp->p1; 01804 if( cnt<0 ) cnt = -cnt; 01805 assert( &pTos[1-cnt] >= p->aStack ); 01806 for(i=0; i<cnt && (pTos[1+i-cnt].flags & MEM_Null)==0; i++){} 01807 if( i>=cnt ) pc = pOp->p2-1; 01808 if( pOp->p1>0 ) popStack(&pTos, cnt); 01809 break; 01810 } 01811 01812 /* Opcode: MakeRecord P1 P2 * 01813 ** 01814 ** Convert the top P1 entries of the stack into a single entry 01815 ** suitable for use as a data record in a database table. The 01816 ** details of the format are irrelavant as long as the OP_Column 01817 ** opcode can decode the record later. Refer to source code 01818 ** comments for the details of the record format. 01819 ** 01820 ** If P2 is true (non-zero) and one or more of the P1 entries 01821 ** that go into building the record is NULL, then add some extra 01822 ** bytes to the record to make it distinct for other entries created 01823 ** during the same run of the VDBE. The extra bytes added are a 01824 ** counter that is reset with each run of the VDBE, so records 01825 ** created this way will not necessarily be distinct across runs. 01826 ** But they should be distinct for transient tables (created using 01827 ** OP_OpenTemp) which is what they are intended for. 01828 ** 01829 ** (Later:) The P2==1 option was intended to make NULLs distinct 01830 ** for the UNION operator. But I have since discovered that NULLs 01831 ** are indistinct for UNION. So this option is never used. 01832 */ 01833 case OP_MakeRecord: { 01834 char *zNewRecord; 01835 int nByte; 01836 int nField; 01837 int i, j; 01838 int idxWidth; 01839 u32 addr; 01840 Mem *pRec; 01841 int addUnique = 0; /* True to cause bytes to be added to make the 01842 ** generated record distinct */ 01843 char zTemp[NBFS]; /* Temp space for small records */ 01844 01845 /* Assuming the record contains N fields, the record format looks 01846 ** like this: 01847 ** 01848 ** ------------------------------------------------------------------- 01849 ** | idx0 | idx1 | ... | idx(N-1) | idx(N) | data0 | ... | data(N-1) | 01850 ** ------------------------------------------------------------------- 01851 ** 01852 ** All data fields are converted to strings before being stored and 01853 ** are stored with their null terminators. NULL entries omit the 01854 ** null terminator. Thus an empty string uses 1 byte and a NULL uses 01855 ** zero bytes. Data(0) is taken from the lowest element of the stack 01856 ** and data(N-1) is the top of the stack. 01857 ** 01858 ** Each of the idx() entries is either 1, 2, or 3 bytes depending on 01859 ** how big the total record is. Idx(0) contains the offset to the start 01860 ** of data(0). Idx(k) contains the offset to the start of data(k). 01861 ** Idx(N) contains the total number of bytes in the record. 01862 */ 01863 nField = pOp->p1; 01864 pRec = &pTos[1-nField]; 01865 assert( pRec>=p->aStack ); 01866 nByte = 0; 01867 for(i=0; i<nField; i++, pRec++){ 01868 if( pRec->flags & MEM_Null ){ 01869 addUnique = pOp->p2; 01870 }else{ 01871 Stringify(pRec); 01872 nByte += pRec->n; 01873 } 01874 } 01875 if( addUnique ) nByte += sizeof(p->uniqueCnt); 01876 if( nByte + nField + 1 < 256 ){ 01877 idxWidth = 1; 01878 }else if( nByte + 2*nField + 2 < 65536 ){ 01879 idxWidth = 2; 01880 }else{ 01881 idxWidth = 3; 01882 } 01883 nByte += idxWidth*(nField + 1); 01884 if( nByte>MAX_BYTES_PER_ROW ){ 01885 rc = SQLITE_TOOBIG; 01886 goto abort_due_to_error; 01887 } 01888 if( nByte<=NBFS ){ 01889 zNewRecord = zTemp; 01890 }else{ 01891 zNewRecord = sqliteMallocRaw( nByte ); 01892 if( zNewRecord==0 ) goto no_mem; 01893 } 01894 j = 0; 01895 addr = idxWidth*(nField+1) + addUnique*sizeof(p->uniqueCnt); 01896 for(i=0, pRec=&pTos[1-nField]; i<nField; i++, pRec++){ 01897 zNewRecord[j++] = addr & 0xff; 01898 if( idxWidth>1 ){ 01899 zNewRecord[j++] = (addr>>8)&0xff; 01900 if( idxWidth>2 ){ 01901 zNewRecord[j++] = (addr>>16)&0xff; 01902 } 01903 } 01904 if( (pRec->flags & MEM_Null)==0 ){ 01905 addr += pRec->n; 01906 } 01907 } 01908 zNewRecord[j++] = addr & 0xff; 01909 if( idxWidth>1 ){ 01910 zNewRecord[j++] = (addr>>8)&0xff; 01911 if( idxWidth>2 ){ 01912 zNewRecord[j++] = (addr>>16)&0xff; 01913 } 01914 } 01915 if( addUnique ){ 01916 memcpy(&zNewRecord[j], &p->uniqueCnt, sizeof(p->uniqueCnt)); 01917 p->uniqueCnt++; 01918 j += sizeof(p->uniqueCnt); 01919 } 01920 for(i=0, pRec=&pTos[1-nField]; i<nField; i++, pRec++){ 01921 if( (pRec->flags & MEM_Null)==0 ){ 01922 memcpy(&zNewRecord[j], pRec->z, pRec->n); 01923 j += pRec->n; 01924 } 01925 } 01926 popStack(&pTos, nField); 01927 pTos++; 01928 pTos->n = nByte; 01929 if( nByte<=NBFS ){ 01930 assert( zNewRecord==zTemp ); 01931 memcpy(pTos->zShort, zTemp, nByte); 01932 pTos->z = pTos->zShort; 01933 pTos->flags = MEM_Str | MEM_Short; 01934 }else{ 01935 assert( zNewRecord!=zTemp ); 01936 pTos->z = zNewRecord; 01937 pTos->flags = MEM_Str | MEM_Dyn; 01938 } 01939 break; 01940 } 01941 01942 /* Opcode: MakeKey P1 P2 P3 01943 ** 01944 ** Convert the top P1 entries of the stack into a single entry suitable 01945 ** for use as the key in an index. The top P1 records are 01946 ** converted to strings and merged. The null-terminators 01947 ** are retained and used as separators. 01948 ** The lowest entry in the stack is the first field and the top of the 01949 ** stack becomes the last. 01950 ** 01951 ** If P2 is not zero, then the original entries remain on the stack 01952 ** and the new key is pushed on top. If P2 is zero, the original 01953 ** data is popped off the stack first then the new key is pushed 01954 ** back in its place. 01955 ** 01956 ** P3 is a string that is P1 characters long. Each character is either 01957 ** an 'n' or a 't' to indicates if the argument should be intepreted as 01958 ** numeric or text type. The first character of P3 corresponds to the 01959 ** lowest element on the stack. If P3 is NULL then all arguments are 01960 ** assumed to be of the numeric type. 01961 ** 01962 ** The type makes a difference in that text-type fields may not be 01963 ** introduced by 'b' (as described in the next paragraph). The 01964 ** first character of a text-type field must be either 'a' (if it is NULL) 01965 ** or 'c'. Numeric fields will be introduced by 'b' if their content 01966 ** looks like a well-formed number. Otherwise the 'a' or 'c' will be 01967 ** used. 01968 ** 01969 ** The key is a concatenation of fields. Each field is terminated by 01970 ** a single 0x00 character. A NULL field is introduced by an 'a' and 01971 ** is followed immediately by its 0x00 terminator. A numeric field is 01972 ** introduced by a single character 'b' and is followed by a sequence 01973 ** of characters that represent the number such that a comparison of 01974 ** the character string using memcpy() sorts the numbers in numerical 01975 ** order. The character strings for numbers are generated using the 01976 ** sqliteRealToSortable() function. A text field is introduced by a 01977 ** 'c' character and is followed by the exact text of the field. The 01978 ** use of an 'a', 'b', or 'c' character at the beginning of each field 01979 ** guarantees that NULLs sort before numbers and that numbers sort 01980 ** before text. 0x00 characters do not occur except as separators 01981 ** between fields. 01982 ** 01983 ** See also: MakeIdxKey, SortMakeKey 01984 */ 01985 /* Opcode: MakeIdxKey P1 P2 P3 01986 ** 01987 ** Convert the top P1 entries of the stack into a single entry suitable 01988 ** for use as the key in an index. In addition, take one additional integer 01989 ** off of the stack, treat that integer as a four-byte record number, and 01990 ** append the four bytes to the key. Thus a total of P1+1 entries are 01991 ** popped from the stack for this instruction and a single entry is pushed 01992 ** back. The first P1 entries that are popped are strings and the last 01993 ** entry (the lowest on the stack) is an integer record number. 01994 ** 01995 ** The converstion of the first P1 string entries occurs just like in 01996 ** MakeKey. Each entry is separated from the others by a null. 01997 ** The entire concatenation is null-terminated. The lowest entry 01998 ** in the stack is the first field and the top of the stack becomes the 01999 ** last. 02000 ** 02001 ** If P2 is not zero and one or more of the P1 entries that go into the 02002 ** generated key is NULL, then jump to P2 after the new key has been 02003 ** pushed on the stack. In other words, jump to P2 if the key is 02004 ** guaranteed to be unique. This jump can be used to skip a subsequent 02005 ** uniqueness test. 02006 ** 02007 ** P3 is a string that is P1 characters long. Each character is either 02008 ** an 'n' or a 't' to indicates if the argument should be numeric or 02009 ** text. The first character corresponds to the lowest element on the 02010 ** stack. If P3 is null then all arguments are assumed to be numeric. 02011 ** 02012 ** See also: MakeKey, SortMakeKey 02013 */ 02014 case OP_MakeIdxKey: 02015 case OP_MakeKey: { 02016 char *zNewKey; 02017 int nByte; 02018 int nField; 02019 int addRowid; 02020 int i, j; 02021 int containsNull = 0; 02022 Mem *pRec; 02023 char zTemp[NBFS]; 02024 02025 addRowid = pOp->opcode==OP_MakeIdxKey; 02026 nField = pOp->p1; 02027 pRec = &pTos[1-nField]; 02028 assert( pRec>=p->aStack ); 02029 nByte = 0; 02030 for(j=0, i=0; i<nField; i++, j++, pRec++){ 02031 int flags = pRec->flags; 02032 int len; 02033 char *z; 02034 if( flags & MEM_Null ){ 02035 nByte += 2; 02036 containsNull = 1; 02037 }else if( pOp->p3 && pOp->p3[j]=='t' ){ 02038 Stringify(pRec); 02039 pRec->flags &= ~(MEM_Int|MEM_Real); 02040 nByte += pRec->n+1; 02041 }else if( (flags & (MEM_Real|MEM_Int))!=0 || sqliteIsNumber(pRec->z) ){ 02042 if( (flags & (MEM_Real|MEM_Int))==MEM_Int ){ 02043 pRec->r = pRec->i; 02044 }else if( (flags & (MEM_Real|MEM_Int))==0 ){ 02045 pRec->r = sqliteAtoF(pRec->z, 0); 02046 } 02047 Release(pRec); 02048 z = pRec->zShort; 02049 sqliteRealToSortable(pRec->r, z); 02050 len = strlen(z); 02051 pRec->z = 0; 02052 pRec->flags = MEM_Real; 02053 pRec->n = len+1; 02054 nByte += pRec->n+1; 02055 }else{ 02056 nByte += pRec->n+1; 02057 } 02058 } 02059 if( nByte+sizeof(u32)>MAX_BYTES_PER_ROW ){ 02060 rc = SQLITE_TOOBIG; 02061 goto abort_due_to_error; 02062 } 02063 if( addRowid ) nByte += sizeof(u32); 02064 if( nByte<=NBFS ){ 02065 zNewKey = zTemp; 02066 }else{ 02067 zNewKey = sqliteMallocRaw( nByte ); 02068 if( zNewKey==0 ) goto no_mem; 02069 } 02070 j = 0; 02071 pRec = &pTos[1-nField]; 02072 for(i=0; i<nField; i++, pRec++){ 02073 if( pRec->flags & MEM_Null ){ 02074 zNewKey[j++] = 'a'; 02075 zNewKey[j++] = 0; 02076 }else if( pRec->flags==MEM_Real ){ 02077 zNewKey[j++] = 'b'; 02078 memcpy(&zNewKey[j], pRec->zShort, pRec->n); 02079 j += pRec->n; 02080 }else{ 02081 assert( pRec->flags & MEM_Str ); 02082 zNewKey[j++] = 'c'; 02083 memcpy(&zNewKey[j], pRec->z, pRec->n); 02084 j += pRec->n; 02085 } 02086 } 02087 if( addRowid ){ 02088 u32 iKey; 02089 pRec = &pTos[-nField]; 02090 assert( pRec>=p->aStack ); 02091 Integerify(pRec); 02092 iKey = intToKey(pRec->i); 02093 memcpy(&zNewKey[j], &iKey, sizeof(u32)); 02094 popStack(&pTos, nField+1); 02095 if( pOp->p2 && containsNull ) pc = pOp->p2 - 1; 02096 }else{ 02097 if( pOp->p2==0 ) popStack(&pTos, nField); 02098 } 02099 pTos++; 02100 pTos->n = nByte; 02101 if( nByte<=NBFS ){ 02102 assert( zNewKey==zTemp ); 02103 pTos->z = pTos->zShort; 02104 memcpy(pTos->zShort, zTemp, nByte); 02105 pTos->flags = MEM_Str | MEM_Short; 02106 }else{ 02107 pTos->z = zNewKey; 02108 pTos->flags = MEM_Str | MEM_Dyn; 02109 } 02110 break; 02111 } 02112 02113 /* Opcode: IncrKey * * * 02114 ** 02115 ** The top of the stack should contain an index key generated by 02116 ** The MakeKey opcode. This routine increases the least significant 02117 ** byte of that key by one. This is used so that the MoveTo opcode 02118 ** will move to the first entry greater than the key rather than to 02119 ** the key itself. 02120 */ 02121 case OP_IncrKey: { 02122 assert( pTos>=p->aStack ); 02123 /* The IncrKey opcode is only applied to keys generated by 02124 ** MakeKey or MakeIdxKey and the results of those operands 02125 ** are always dynamic strings or zShort[] strings. So we 02126 ** are always free to modify the string in place. 02127 */ 02128 assert( pTos->flags & (MEM_Dyn|MEM_Short) ); 02129 pTos->z[pTos->n-1]++; 02130 break; 02131 } 02132 02133 /* Opcode: Checkpoint P1 * * 02134 ** 02135 ** Begin a checkpoint. A checkpoint is the beginning of a operation that 02136 ** is part of a larger transaction but which might need to be rolled back 02137 ** itself without effecting the containing transaction. A checkpoint will 02138 ** be automatically committed or rollback when the VDBE halts. 02139 ** 02140 ** The checkpoint is begun on the database file with index P1. The main 02141 ** database file has an index of 0 and the file used for temporary tables 02142 ** has an index of 1. 02143 */ 02144 case OP_Checkpoint: { 02145 int i = pOp->p1; 02146 if( i>=0 && i<db->nDb && db->aDb[i].pBt && db->aDb[i].inTrans==1 ){ 02147 rc = sqliteBtreeBeginCkpt(db->aDb[i].pBt); 02148 if( rc==SQLITE_OK ) db->aDb[i].inTrans = 2; 02149 } 02150 break; 02151 } 02152 02153 /* Opcode: Transaction P1 * * 02154 ** 02155 ** Begin a transaction. The transaction ends when a Commit or Rollback 02156 ** opcode is encountered. Depending on the ON CONFLICT setting, the 02157 ** transaction might also be rolled back if an error is encountered. 02158 ** 02159 ** P1 is the index of the database file on which the transaction is 02160 ** started. Index 0 is the main database file and index 1 is the 02161 ** file used for temporary tables. 02162 ** 02163 ** A write lock is obtained on the database file when a transaction is 02164 ** started. No other process can read or write the file while the 02165 ** transaction is underway. Starting a transaction also creates a 02166 ** rollback journal. A transaction must be started before any changes 02167 ** can be made to the database. 02168 */ 02169 case OP_Transaction: { 02170 int busy = 1; 02171 int i = pOp->p1; 02172 assert( i>=0 && i<db->nDb ); 02173 if( db->aDb[i].inTrans ) break; 02174 while( db->aDb[i].pBt!=0 && busy ){ 02175 rc = sqliteBtreeBeginTrans(db->aDb[i].pBt); 02176 switch( rc ){ 02177 case SQLITE_BUSY: { 02178 if( db->xBusyCallback==0 ){ 02179 p->pc = pc; 02180 p->undoTransOnError = 1; 02181 p->rc = SQLITE_BUSY; 02182 p->pTos = pTos; 02183 return SQLITE_BUSY; 02184 }else if( (*db->xBusyCallback)(db->pBusyArg, "", busy++)==0 ){ 02185 sqliteSetString(&p->zErrMsg, sqlite_error_string(rc), (char*)0); 02186 busy = 0; 02187 } 02188 break; 02189 } 02190 case SQLITE_READONLY: { 02191 rc = SQLITE_OK; 02192 /* Fall thru into the next case */ 02193 } 02194 case SQLITE_OK: { 02195 p->inTempTrans = 0; 02196 busy = 0; 02197 break; 02198 } 02199 default: { 02200 goto abort_due_to_error; 02201 } 02202 } 02203 } 02204 db->aDb[i].inTrans = 1; 02205 p->undoTransOnError = 1; 02206 break; 02207 } 02208 02209 /* Opcode: Commit * * * 02210 ** 02211 ** Cause all modifications to the database that have been made since the 02212 ** last Transaction to actually take effect. No additional modifications 02213 ** are allowed until another transaction is started. The Commit instruction 02214 ** deletes the journal file and releases the write lock on the database. 02215 ** A read lock continues to be held if there are still cursors open. 02216 */ 02217 case OP_Commit: { 02218 int i; 02219 if( db->xCommitCallback!=0 ){ 02220 if( sqliteSafetyOff(db) ) goto abort_due_to_misuse; 02221 if( db->xCommitCallback(db->pCommitArg)!=0 ){ 02222 rc = SQLITE_CONSTRAINT; 02223 } 02224 if( sqliteSafetyOn(db) ) goto abort_due_to_misuse; 02225 } 02226 for(i=0; rc==SQLITE_OK && i<db->nDb; i++){ 02227 if( db->aDb[i].inTrans ){ 02228 rc = sqliteBtreeCommit(db->aDb[i].pBt); 02229 db->aDb[i].inTrans = 0; 02230 } 02231 } 02232 if( rc==SQLITE_OK ){ 02233 sqliteCommitInternalChanges(db); 02234 }else{ 02235 sqliteRollbackAll(db); 02236 } 02237 break; 02238 } 02239 02240 /* Opcode: Rollback P1 * * 02241 ** 02242 ** Cause all modifications to the database that have been made since the 02243 ** last Transaction to be undone. The database is restored to its state 02244 ** before the Transaction opcode was executed. No additional modifications 02245 ** are allowed until another transaction is started. 02246 ** 02247 ** P1 is the index of the database file that is committed. An index of 0 02248 ** is used for the main database and an index of 1 is used for the file used 02249 ** to hold temporary tables. 02250 ** 02251 ** This instruction automatically closes all cursors and releases both 02252 ** the read and write locks on the indicated database. 02253 */ 02254 case OP_Rollback: { 02255 sqliteRollbackAll(db); 02256 break; 02257 } 02258 02259 /* Opcode: ReadCookie P1 P2 * 02260 ** 02261 ** Read cookie number P2 from database P1 and push it onto the stack. 02262 ** P2==0 is the schema version. P2==1 is the database format. 02263 ** P2==2 is the recommended pager cache size, and so forth. P1==0 is 02264 ** the main database file and P1==1 is the database file used to store 02265 ** temporary tables. 02266 ** 02267 ** There must be a read-lock on the database (either a transaction 02268 ** must be started or there must be an open cursor) before 02269 ** executing this instruction. 02270 */ 02271 case OP_ReadCookie: { 02272 int aMeta[SQLITE_N_BTREE_META]; 02273 assert( pOp->p2<SQLITE_N_BTREE_META ); 02274 assert( pOp->p1>=0 && pOp->p1<db->nDb ); 02275 assert( db->aDb[pOp->p1].pBt!=0 ); 02276 rc = sqliteBtreeGetMeta(db->aDb[pOp->p1].pBt, aMeta); 02277 pTos++; 02278 pTos->i = aMeta[1+pOp->p2]; 02279 pTos->flags = MEM_Int; 02280 break; 02281 } 02282 02283 /* Opcode: SetCookie P1 P2 * 02284 ** 02285 ** Write the top of the stack into cookie number P2 of database P1. 02286 ** P2==0 is the schema version. P2==1 is the database format. 02287 ** P2==2 is the recommended pager cache size, and so forth. P1==0 is 02288 ** the main database file and P1==1 is the database file used to store 02289 ** temporary tables. 02290 ** 02291 ** A transaction must be started before executing this opcode. 02292 */ 02293 case OP_SetCookie: { 02294 int aMeta[SQLITE_N_BTREE_META]; 02295 assert( pOp->p2<SQLITE_N_BTREE_META ); 02296 assert( pOp->p1>=0 && pOp->p1<db->nDb ); 02297 assert( db->aDb[pOp->p1].pBt!=0 ); 02298 assert( pTos>=p->aStack ); 02299 Integerify(pTos) 02300 rc = sqliteBtreeGetMeta(db->aDb[pOp->p1].pBt, aMeta); 02301 if( rc==SQLITE_OK ){ 02302 aMeta[1+pOp->p2] = pTos->i; 02303 rc = sqliteBtreeUpdateMeta(db->aDb[pOp->p1].pBt, aMeta); 02304 } 02305 Release(pTos); 02306 pTos--; 02307 break; 02308 } 02309 02310 /* Opcode: VerifyCookie P1 P2 * 02311 ** 02312 ** Check the value of global database parameter number 0 (the 02313 ** schema version) and make sure it is equal to P2. 02314 ** P1 is the database number which is 0 for the main database file 02315 ** and 1 for the file holding temporary tables and some higher number 02316 ** for auxiliary databases. 02317 ** 02318 ** The cookie changes its value whenever the database schema changes. 02319 ** This operation is used to detect when that the cookie has changed 02320 ** and that the current process needs to reread the schema. 02321 ** 02322 ** Either a transaction needs to have been started or an OP_Open needs 02323 ** to be executed (to establish a read lock) before this opcode is 02324 ** invoked. 02325 */ 02326 case OP_VerifyCookie: { 02327 int aMeta[SQLITE_N_BTREE_META]; 02328 assert( pOp->p1>=0 && pOp->p1<db->nDb ); 02329 rc = sqliteBtreeGetMeta(db->aDb[pOp->p1].pBt, aMeta); 02330 if( rc==SQLITE_OK && aMeta[1]!=pOp->p2 ){ 02331 sqliteSetString(&p->zErrMsg, "database schema has changed", (char*)0); 02332 rc = SQLITE_SCHEMA; 02333 } 02334 break; 02335 } 02336 02337 /* Opcode: OpenRead P1 P2 P3 02338 ** 02339 ** Open a read-only cursor for the database table whose root page is 02340 ** P2 in a database file. The database file is determined by an 02341 ** integer from the top of the stack. 0 means the main database and 02342 ** 1 means the database used for temporary tables. Give the new 02343 ** cursor an identifier of P1. The P1 values need not be contiguous 02344 ** but all P1 values should be small integers. It is an error for 02345 ** P1 to be negative. 02346 ** 02347 ** If P2==0 then take the root page number from the next of the stack. 02348 ** 02349 ** There will be a read lock on the database whenever there is an 02350 ** open cursor. If the database was unlocked prior to this instruction 02351 ** then a read lock is acquired as part of this instruction. A read 02352 ** lock allows other processes to read the database but prohibits 02353 ** any other process from modifying the database. The read lock is 02354 ** released when all cursors are closed. If this instruction attempts 02355 ** to get a read lock but fails, the script terminates with an 02356 ** SQLITE_BUSY error code. 02357 ** 02358 ** The P3 value is the name of the table or index being opened. 02359 ** The P3 value is not actually used by this opcode and may be 02360 ** omitted. But the code generator usually inserts the index or 02361 ** table name into P3 to make the code easier to read. 02362 ** 02363 ** See also OpenWrite. 02364 */ 02365 /* Opcode: OpenWrite P1 P2 P3 02366 ** 02367 ** Open a read/write cursor named P1 on the table or index whose root 02368 ** page is P2. If P2==0 then take the root page number from the stack. 02369 ** 02370 ** The P3 value is the name of the table or index being opened. 02371 ** The P3 value is not actually used by this opcode and may be 02372 ** omitted. But the code generator usually inserts the index or 02373 ** table name into P3 to make the code easier to read. 02374 ** 02375 ** This instruction works just like OpenRead except that it opens the cursor 02376 ** in read/write mode. For a given table, there can be one or more read-only 02377 ** cursors or a single read/write cursor but not both. 02378 ** 02379 ** See also OpenRead. 02380 */ 02381 case OP_OpenRead: 02382 case OP_OpenWrite: { 02383 int busy = 0; 02384 int i = pOp->p1; 02385 int p2 = pOp->p2; 02386 int wrFlag; 02387 Btree *pX; 02388 int iDb; 02389 02390 assert( pTos>=p->aStack ); 02391 Integerify(pTos); 02392 iDb = pTos->i; 02393 pTos--; 02394 assert( iDb>=0 && iDb<db->nDb ); 02395 pX = db->aDb[iDb].pBt; 02396 assert( pX!=0 ); 02397 wrFlag = pOp->opcode==OP_OpenWrite; 02398 if( p2<=0 ){ 02399 assert( pTos>=p->aStack ); 02400 Integerify(pTos); 02401 p2 = pTos->i; 02402 pTos--; 02403 if( p2<2 ){ 02404 sqliteSetString(&p->zErrMsg, "root page number less than 2", (char*)0); 02405 rc = SQLITE_INTERNAL; 02406 break; 02407 } 02408 } 02409 assert( i>=0 ); 02410 if( expandCursorArraySize(p, i) ) goto no_mem; 02411 sqliteVdbeCleanupCursor(&p->aCsr[i]); 02412 memset(&p->aCsr[i], 0, sizeof(Cursor)); 02413 p->aCsr[i].nullRow = 1; 02414 if( pX==0 ) break; 02415 do{ 02416 rc = sqliteBtreeCursor(pX, p2, wrFlag, &p->aCsr[i].pCursor); 02417 switch( rc ){ 02418 case SQLITE_BUSY: { 02419 if( db->xBusyCallback==0 ){ 02420 p->pc = pc; 02421 p->rc = SQLITE_BUSY; 02422 p->pTos = &pTos[1 + (pOp->p2<=0)]; /* Operands must remain on stack */ 02423 return SQLITE_BUSY; 02424 }else if( (*db->xBusyCallback)(db->pBusyArg, pOp->p3, ++busy)==0 ){ 02425 sqliteSetString(&p->zErrMsg, sqlite_error_string(rc), (char*)0); 02426 busy = 0; 02427 } 02428 break; 02429 } 02430 case SQLITE_OK: { 02431 busy = 0; 02432 break; 02433 } 02434 default: { 02435 goto abort_due_to_error; 02436 } 02437 } 02438 }while( busy ); 02439 break; 02440 } 02441 02442 /* Opcode: OpenTemp P1 P2 * 02443 ** 02444 ** Open a new cursor to a transient table. 02445 ** The transient cursor is always opened read/write even if 02446 ** the main database is read-only. The transient table is deleted 02447 ** automatically when the cursor is closed. 02448 ** 02449 ** The cursor points to a BTree table if P2==0 and to a BTree index 02450 ** if P2==1. A BTree table must have an integer key and can have arbitrary 02451 ** data. A BTree index has no data but can have an arbitrary key. 02452 ** 02453 ** This opcode is used for tables that exist for the duration of a single 02454 ** SQL statement only. Tables created using CREATE TEMPORARY TABLE 02455 ** are opened using OP_OpenRead or OP_OpenWrite. "Temporary" in the 02456 ** context of this opcode means for the duration of a single SQL statement 02457 ** whereas "Temporary" in the context of CREATE TABLE means for the duration 02458 ** of the connection to the database. Same word; different meanings. 02459 */ 02460 case OP_OpenTemp: { 02461 int i = pOp->p1; 02462 Cursor *pCx; 02463 assert( i>=0 ); 02464 if( expandCursorArraySize(p, i) ) goto no_mem; 02465 pCx = &p->aCsr[i]; 02466 sqliteVdbeCleanupCursor(pCx); 02467 memset(pCx, 0, sizeof(*pCx)); 02468 pCx->nullRow = 1; 02469 rc = sqliteBtreeFactory(db, 0, 1, TEMP_PAGES, &pCx->pBt); 02470 02471 if( rc==SQLITE_OK ){ 02472 rc = sqliteBtreeBeginTrans(pCx->pBt); 02473 } 02474 if( rc==SQLITE_OK ){ 02475 if( pOp->p2 ){ 02476 int pgno; 02477 rc = sqliteBtreeCreateIndex(pCx->pBt, &pgno); 02478 if( rc==SQLITE_OK ){ 02479 rc = sqliteBtreeCursor(pCx->pBt, pgno, 1, &pCx->pCursor); 02480 } 02481 }else{ 02482 rc = sqliteBtreeCursor(pCx->pBt, 2, 1, &pCx->pCursor); 02483 } 02484 } 02485 break; 02486 } 02487 02488 /* Opcode: OpenPseudo P1 * * 02489 ** 02490 ** Open a new cursor that points to a fake table that contains a single 02491 ** row of data. Any attempt to write a second row of data causes the 02492 ** first row to be deleted. All data is deleted when the cursor is 02493 ** closed. 02494 ** 02495 ** A pseudo-table created by this opcode is useful for holding the 02496 ** NEW or OLD tables in a trigger. 02497 */ 02498 case OP_OpenPseudo: { 02499 int i = pOp->p1; 02500 Cursor *pCx; 02501 assert( i>=0 ); 02502 if( expandCursorArraySize(p, i) ) goto no_mem; 02503 pCx = &p->aCsr[i]; 02504 sqliteVdbeCleanupCursor(pCx); 02505 memset(pCx, 0, sizeof(*pCx)); 02506 pCx->nullRow = 1; 02507 pCx->pseudoTable = 1; 02508 break; 02509 } 02510 02511 /* Opcode: Close P1 * * 02512 ** 02513 ** Close a cursor previously opened as P1. If P1 is not 02514 ** currently open, this instruction is a no-op. 02515 */ 02516 case OP_Close: { 02517 int i = pOp->p1; 02518 if( i>=0 && i<p->nCursor ){ 02519 sqliteVdbeCleanupCursor(&p->aCsr[i]); 02520 } 02521 break; 02522 } 02523 02524 /* Opcode: MoveTo P1 P2 * 02525 ** 02526 ** Pop the top of the stack and use its value as a key. Reposition 02527 ** cursor P1 so that it points to an entry with a matching key. If 02528 ** the table contains no record with a matching key, then the cursor 02529 ** is left pointing at the first record that is greater than the key. 02530 ** If there are no records greater than the key and P2 is not zero, 02531 ** then an immediate jump to P2 is made. 02532 ** 02533 ** See also: Found, NotFound, Distinct, MoveLt 02534 */ 02535 /* Opcode: MoveLt P1 P2 * 02536 ** 02537 ** Pop the top of the stack and use its value as a key. Reposition 02538 ** cursor P1 so that it points to the entry with the largest key that is 02539 ** less than the key popped from the stack. 02540 ** If there are no records less than than the key and P2 02541 ** is not zero then an immediate jump to P2 is made. 02542 ** 02543 ** See also: MoveTo 02544 */ 02545 case OP_MoveLt: 02546 case OP_MoveTo: { 02547 int i = pOp->p1; 02548 Cursor *pC; 02549 02550 assert( pTos>=p->aStack ); 02551 assert( i>=0 && i<p->nCursor ); 02552 pC = &p->aCsr[i]; 02553 if( pC->pCursor!=0 ){ 02554 int res, oc; 02555 pC->nullRow = 0; 02556 if( pTos->flags & MEM_Int ){ 02557 int iKey = intToKey(pTos->i); 02558 if( pOp->p2==0 && pOp->opcode==OP_MoveTo ){ 02559 pC->movetoTarget = iKey; 02560 pC->deferredMoveto = 1; 02561 Release(pTos); 02562 pTos--; 02563 break; 02564 } 02565 sqliteBtreeMoveto(pC->pCursor, (char*)&iKey, sizeof(int), &res); 02566 pC->lastRecno = pTos->i; 02567 pC->recnoIsValid = res==0; 02568 }else{ 02569 Stringify(pTos); 02570 sqliteBtreeMoveto(pC->pCursor, pTos->z, pTos->n, &res); 02571 pC->recnoIsValid = 0; 02572 } 02573 pC->deferredMoveto = 0; 02574 sqlite_search_count++; 02575 oc = pOp->opcode; 02576 if( oc==OP_MoveTo && res<0 ){ 02577 sqliteBtreeNext(pC->pCursor, &res); 02578 pC->recnoIsValid = 0; 02579 if( res && pOp->p2>0 ){ 02580 pc = pOp->p2 - 1; 02581 } 02582 }else if( oc==OP_MoveLt ){ 02583 if( res>=0 ){ 02584 sqliteBtreePrevious(pC->pCursor, &res); 02585 pC->recnoIsValid = 0; 02586 }else{ 02587 /* res might be negative because the table is empty. Check to 02588 ** see if this is the case. 02589 */ 02590 int keysize; 02591 res = sqliteBtreeKeySize(pC->pCursor,&keysize)!=0 || keysize==0; 02592 } 02593 if( res && pOp->p2>0 ){ 02594 pc = pOp->p2 - 1; 02595 } 02596 } 02597 } 02598 Release(pTos); 02599 pTos--; 02600 break; 02601 } 02602 02603 /* Opcode: Distinct P1 P2 * 02604 ** 02605 ** Use the top of the stack as a string key. If a record with that key does 02606 ** not exist in the table of cursor P1, then jump to P2. If the record 02607 ** does already exist, then fall thru. The cursor is left pointing 02608 ** at the record if it exists. The key is not popped from the stack. 02609 ** 02610 ** This operation is similar to NotFound except that this operation 02611 ** does not pop the key from the stack. 02612 ** 02613 ** See also: Found, NotFound, MoveTo, IsUnique, NotExists 02614 */ 02615 /* Opcode: Found P1 P2 * 02616 ** 02617 ** Use the top of the stack as a string key. If a record with that key 02618 ** does exist in table of P1, then jump to P2. If the record 02619 ** does not exist, then fall thru. The cursor is left pointing 02620 ** to the record if it exists. The key is popped from the stack. 02621 ** 02622 ** See also: Distinct, NotFound, MoveTo, IsUnique, NotExists 02623 */ 02624 /* Opcode: NotFound P1 P2 * 02625 ** 02626 ** Use the top of the stack as a string key. If a record with that key 02627 ** does not exist in table of P1, then jump to P2. If the record 02628 ** does exist, then fall thru. The cursor is left pointing to the 02629 ** record if it exists. The key is popped from the stack. 02630 ** 02631 ** The difference between this operation and Distinct is that 02632 ** Distinct does not pop the key from the stack. 02633 ** 02634 ** See also: Distinct, Found, MoveTo, NotExists, IsUnique 02635 */ 02636 case OP_Distinct: 02637 case OP_NotFound: 02638 case OP_Found: { 02639 int i = pOp->p1; 02640 int alreadyExists = 0; 02641 Cursor *pC; 02642 assert( pTos>=p->aStack ); 02643 assert( i>=0 && i<p->nCursor ); 02644 if( (pC = &p->aCsr[i])->pCursor!=0 ){ 02645 int res, rx; 02646 Stringify(pTos); 02647 rx = sqliteBtreeMoveto(pC->pCursor, pTos->z, pTos->n, &res); 02648 alreadyExists = rx==SQLITE_OK && res==0; 02649 pC->deferredMoveto = 0; 02650 } 02651 if( pOp->opcode==OP_Found ){ 02652 if( alreadyExists ) pc = pOp->p2 - 1; 02653 }else{ 02654 if( !alreadyExists ) pc = pOp->p2 - 1; 02655 } 02656 if( pOp->opcode!=OP_Distinct ){ 02657 Release(pTos); 02658 pTos--; 02659 } 02660 break; 02661 } 02662 02663 /* Opcode: IsUnique P1 P2 * 02664 ** 02665 ** The top of the stack is an integer record number. Call this 02666 ** record number R. The next on the stack is an index key created 02667 ** using MakeIdxKey. Call it K. This instruction pops R from the 02668 ** stack but it leaves K unchanged. 02669 ** 02670 ** P1 is an index. So all but the last four bytes of K are an 02671 ** index string. The last four bytes of K are a record number. 02672 ** 02673 ** This instruction asks if there is an entry in P1 where the 02674 ** index string matches K but the record number is different 02675 ** from R. If there is no such entry, then there is an immediate 02676 ** jump to P2. If any entry does exist where the index string 02677 ** matches K but the record number is not R, then the record 02678 ** number for that entry is pushed onto the stack and control 02679 ** falls through to the next instruction. 02680 ** 02681 ** See also: Distinct, NotFound, NotExists, Found 02682 */ 02683 case OP_IsUnique: { 02684 int i = pOp->p1; 02685 Mem *pNos = &pTos[-1]; 02686 BtCursor *pCrsr; 02687 int R; 02688 02689 /* Pop the value R off the top of the stack 02690 */ 02691 assert( pNos>=p->aStack ); 02692 Integerify(pTos); 02693 R = pTos->i; 02694 pTos--; 02695 assert( i>=0 && i<=p->nCursor ); 02696 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 02697 int res, rc; 02698 int v; /* The record number on the P1 entry that matches K */ 02699 char *zKey; /* The value of K */ 02700 int nKey; /* Number of bytes in K */ 02701 02702 /* Make sure K is a string and make zKey point to K 02703 */ 02704 Stringify(pNos); 02705 zKey = pNos->z; 02706 nKey = pNos->n; 02707 assert( nKey >= 4 ); 02708 02709 /* Search for an entry in P1 where all but the last four bytes match K. 02710 ** If there is no such entry, jump immediately to P2. 02711 */ 02712 assert( p->aCsr[i].deferredMoveto==0 ); 02713 rc = sqliteBtreeMoveto(pCrsr, zKey, nKey-4, &res); 02714 if( rc!=SQLITE_OK ) goto abort_due_to_error; 02715 if( res<0 ){ 02716 rc = sqliteBtreeNext(pCrsr, &res); 02717 if( res ){ 02718 pc = pOp->p2 - 1; 02719 break; 02720 } 02721 } 02722 rc = sqliteBtreeKeyCompare(pCrsr, zKey, nKey-4, 4, &res); 02723 if( rc!=SQLITE_OK ) goto abort_due_to_error; 02724 if( res>0 ){ 02725 pc = pOp->p2 - 1; 02726 break; 02727 } 02728 02729 /* At this point, pCrsr is pointing to an entry in P1 where all but 02730 ** the last for bytes of the key match K. Check to see if the last 02731 ** four bytes of the key are different from R. If the last four 02732 ** bytes equal R then jump immediately to P2. 02733 */ 02734 sqliteBtreeKey(pCrsr, nKey - 4, 4, (char*)&v); 02735 v = keyToInt(v); 02736 if( v==R ){ 02737 pc = pOp->p2 - 1; 02738 break; 02739 } 02740 02741 /* The last four bytes of the key are different from R. Convert the 02742 ** last four bytes of the key into an integer and push it onto the 02743 ** stack. (These bytes are the record number of an entry that 02744 ** violates a UNIQUE constraint.) 02745 */ 02746 pTos++; 02747 pTos->i = v; 02748 pTos->flags = MEM_Int; 02749 } 02750 break; 02751 } 02752 02753 /* Opcode: NotExists P1 P2 * 02754 ** 02755 ** Use the top of the stack as a integer key. If a record with that key 02756 ** does not exist in table of P1, then jump to P2. If the record 02757 ** does exist, then fall thru. The cursor is left pointing to the 02758 ** record if it exists. The integer key is popped from the stack. 02759 ** 02760 ** The difference between this operation and NotFound is that this 02761 ** operation assumes the key is an integer and NotFound assumes it 02762 ** is a string. 02763 ** 02764 ** See also: Distinct, Found, MoveTo, NotFound, IsUnique 02765 */ 02766 case OP_NotExists: { 02767 int i = pOp->p1; 02768 BtCursor *pCrsr; 02769 assert( pTos>=p->aStack ); 02770 assert( i>=0 && i<p->nCursor ); 02771 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 02772 int res, rx, iKey; 02773 assert( pTos->flags & MEM_Int ); 02774 iKey = intToKey(pTos->i); 02775 rx = sqliteBtreeMoveto(pCrsr, (char*)&iKey, sizeof(int), &res); 02776 p->aCsr[i].lastRecno = pTos->i; 02777 p->aCsr[i].recnoIsValid = res==0; 02778 p->aCsr[i].nullRow = 0; 02779 if( rx!=SQLITE_OK || res!=0 ){ 02780 pc = pOp->p2 - 1; 02781 p->aCsr[i].recnoIsValid = 0; 02782 } 02783 } 02784 Release(pTos); 02785 pTos--; 02786 break; 02787 } 02788 02789 /* Opcode: NewRecno P1 * * 02790 ** 02791 ** Get a new integer record number used as the key to a table. 02792 ** The record number is not previously used as a key in the database 02793 ** table that cursor P1 points to. The new record number is pushed 02794 ** onto the stack. 02795 */ 02796 case OP_NewRecno: { 02797 int i = pOp->p1; 02798 int v = 0; 02799 Cursor *pC; 02800 assert( i>=0 && i<p->nCursor ); 02801 if( (pC = &p->aCsr[i])->pCursor==0 ){ 02802 v = 0; 02803 }else{ 02804 /* The next rowid or record number (different terms for the same 02805 ** thing) is obtained in a two-step algorithm. 02806 ** 02807 ** First we attempt to find the largest existing rowid and add one 02808 ** to that. But if the largest existing rowid is already the maximum 02809 ** positive integer, we have to fall through to the second 02810 ** probabilistic algorithm 02811 ** 02812 ** The second algorithm is to select a rowid at random and see if 02813 ** it already exists in the table. If it does not exist, we have 02814 ** succeeded. If the random rowid does exist, we select a new one 02815 ** and try again, up to 1000 times. 02816 ** 02817 ** For a table with less than 2 billion entries, the probability 02818 ** of not finding a unused rowid is about 1.0e-300. This is a 02819 ** non-zero probability, but it is still vanishingly small and should 02820 ** never cause a problem. You are much, much more likely to have a 02821 ** hardware failure than for this algorithm to fail. 02822 ** 02823 ** The analysis in the previous paragraph assumes that you have a good 02824 ** source of random numbers. Is a library function like lrand48() 02825 ** good enough? Maybe. Maybe not. It's hard to know whether there 02826 ** might be subtle bugs is some implementations of lrand48() that 02827 ** could cause problems. To avoid uncertainty, SQLite uses its own 02828 ** random number generator based on the RC4 algorithm. 02829 ** 02830 ** To promote locality of reference for repetitive inserts, the 02831 ** first few attempts at chosing a random rowid pick values just a little 02832 ** larger than the previous rowid. This has been shown experimentally 02833 ** to double the speed of the COPY operation. 02834 */ 02835 int res, rx, cnt, x; 02836 cnt = 0; 02837 if( !pC->useRandomRowid ){ 02838 if( pC->nextRowidValid ){ 02839 v = pC->nextRowid; 02840 }else{ 02841 rx = sqliteBtreeLast(pC->pCursor, &res); 02842 if( res ){ 02843 v = 1; 02844 }else{ 02845 sqliteBtreeKey(pC->pCursor, 0, sizeof(v), (void*)&v); 02846 v = keyToInt(v); 02847 if( v==0x7fffffff ){ 02848 pC->useRandomRowid = 1; 02849 }else{ 02850 v++; 02851 } 02852 } 02853 } 02854 if( v<0x7fffffff ){ 02855 pC->nextRowidValid = 1; 02856 pC->nextRowid = v+1; 02857 }else{ 02858 pC->nextRowidValid = 0; 02859 } 02860 } 02861 if( pC->useRandomRowid ){ 02862 v = db->priorNewRowid; 02863 cnt = 0; 02864 do{ 02865 if( v==0 || cnt>2 ){ 02866 sqliteRandomness(sizeof(v), &v); 02867 if( cnt<5 ) v &= 0xffffff; 02868 }else{ 02869 unsigned char r; 02870 sqliteRandomness(1, &r); 02871 v += r + 1; 02872 } 02873 if( v==0 ) continue; 02874 x = intToKey(v); 02875 rx = sqliteBtreeMoveto(pC->pCursor, &x, sizeof(int), &res); 02876 cnt++; 02877 }while( cnt<1000 && rx==SQLITE_OK && res==0 ); 02878 db->priorNewRowid = v; 02879 if( rx==SQLITE_OK && res==0 ){ 02880 rc = SQLITE_FULL; 02881 goto abort_due_to_error; 02882 } 02883 } 02884 pC->recnoIsValid = 0; 02885 pC->deferredMoveto = 0; 02886 } 02887 pTos++; 02888 pTos->i = v; 02889 pTos->flags = MEM_Int; 02890 break; 02891 } 02892 02893 /* Opcode: PutIntKey P1 P2 * 02894 ** 02895 ** Write an entry into the table of cursor P1. A new entry is 02896 ** created if it doesn't already exist or the data for an existing 02897 ** entry is overwritten. The data is the value on the top of the 02898 ** stack. The key is the next value down on the stack. The key must 02899 ** be an integer. The stack is popped twice by this instruction. 02900 ** 02901 ** If the OPFLAG_NCHANGE flag of P2 is set, then the row change count is 02902 ** incremented (otherwise not). If the OPFLAG_CSCHANGE flag is set, 02903 ** then the current statement change count is incremented (otherwise not). 02904 ** If the OPFLAG_LASTROWID flag of P2 is set, then rowid is 02905 ** stored for subsequent return by the sqlite_last_insert_rowid() function 02906 ** (otherwise it's unmodified). 02907 */ 02908 /* Opcode: PutStrKey P1 * * 02909 ** 02910 ** Write an entry into the table of cursor P1. A new entry is 02911 ** created if it doesn't already exist or the data for an existing 02912 ** entry is overwritten. The data is the value on the top of the 02913 ** stack. The key is the next value down on the stack. The key must 02914 ** be a string. The stack is popped twice by this instruction. 02915 ** 02916 ** P1 may not be a pseudo-table opened using the OpenPseudo opcode. 02917 */ 02918 case OP_PutIntKey: 02919 case OP_PutStrKey: { 02920 Mem *pNos = &pTos[-1]; 02921 int i = pOp->p1; 02922 Cursor *pC; 02923 assert( pNos>=p->aStack ); 02924 assert( i>=0 && i<p->nCursor ); 02925 if( ((pC = &p->aCsr[i])->pCursor!=0 || pC->pseudoTable) ){ 02926 char *zKey; 02927 int nKey, iKey; 02928 if( pOp->opcode==OP_PutStrKey ){ 02929 Stringify(pNos); 02930 nKey = pNos->n; 02931 zKey = pNos->z; 02932 }else{ 02933 assert( pNos->flags & MEM_Int ); 02934 nKey = sizeof(int); 02935 iKey = intToKey(pNos->i); 02936 zKey = (char*)&iKey; 02937 if( pOp->p2 & OPFLAG_NCHANGE ) db->nChange++; 02938 if( pOp->p2 & OPFLAG_LASTROWID ) db->lastRowid = pNos->i; 02939 if( pOp->p2 & OPFLAG_CSCHANGE ) db->csChange++; 02940 if( pC->nextRowidValid && pTos->i>=pC->nextRowid ){ 02941 pC->nextRowidValid = 0; 02942 } 02943 } 02944 if( pTos->flags & MEM_Null ){ 02945 pTos->z = 0; 02946 pTos->n = 0; 02947 }else{ 02948 assert( pTos->flags & MEM_Str ); 02949 } 02950 if( pC->pseudoTable ){ 02951 /* PutStrKey does not work for pseudo-tables. 02952 ** The following assert makes sure we are not trying to use 02953 ** PutStrKey on a pseudo-table 02954 */ 02955 assert( pOp->opcode==OP_PutIntKey ); 02956 sqliteFree(pC->pData); 02957 pC->iKey = iKey; 02958 pC->nData = pTos->n; 02959 if( pTos->flags & MEM_Dyn ){ 02960 pC->pData = pTos->z; 02961 pTos->flags = MEM_Null; 02962 }else{ 02963 pC->pData = sqliteMallocRaw( pC->nData ); 02964 if( pC->pData ){ 02965 memcpy(pC->pData, pTos->z, pC->nData); 02966 } 02967 } 02968 pC->nullRow = 0; 02969 }else{ 02970 rc = sqliteBtreeInsert(pC->pCursor, zKey, nKey, pTos->z, pTos->n); 02971 } 02972 pC->recnoIsValid = 0; 02973 pC->deferredMoveto = 0; 02974 } 02975 popStack(&pTos, 2); 02976 break; 02977 } 02978 02979 /* Opcode: Delete P1 P2 * 02980 ** 02981 ** Delete the record at which the P1 cursor is currently pointing. 02982 ** 02983 ** The cursor will be left pointing at either the next or the previous 02984 ** record in the table. If it is left pointing at the next record, then 02985 ** the next Next instruction will be a no-op. Hence it is OK to delete 02986 ** a record from within an Next loop. 02987 ** 02988 ** If the OPFLAG_NCHANGE flag of P2 is set, then the row change count is 02989 ** incremented (otherwise not). If OPFLAG_CSCHANGE flag is set, 02990 ** then the current statement change count is incremented (otherwise not). 02991 ** 02992 ** If P1 is a pseudo-table, then this instruction is a no-op. 02993 */ 02994 case OP_Delete: { 02995 int i = pOp->p1; 02996 Cursor *pC; 02997 assert( i>=0 && i<p->nCursor ); 02998 pC = &p->aCsr[i]; 02999 if( pC->pCursor!=0 ){ 03000 sqliteVdbeCursorMoveto(pC); 03001 rc = sqliteBtreeDelete(pC->pCursor); 03002 pC->nextRowidValid = 0; 03003 } 03004 if( pOp->p2 & OPFLAG_NCHANGE ) db->nChange++; 03005 if( pOp->p2 & OPFLAG_CSCHANGE ) db->csChange++; 03006 break; 03007 } 03008 03009 /* Opcode: SetCounts * * * 03010 ** 03011 ** Called at end of statement. Updates lsChange (last statement change count) 03012 ** and resets csChange (current statement change count) to 0. 03013 */ 03014 case OP_SetCounts: { 03015 db->lsChange=db->csChange; 03016 db->csChange=0; 03017 break; 03018 } 03019 03020 /* Opcode: KeyAsData P1 P2 * 03021 ** 03022 ** Turn the key-as-data mode for cursor P1 either on (if P2==1) or 03023 ** off (if P2==0). In key-as-data mode, the OP_Column opcode pulls 03024 ** data off of the key rather than the data. This is used for 03025 ** processing compound selects. 03026 */ 03027 case OP_KeyAsData: { 03028 int i = pOp->p1; 03029 assert( i>=0 && i<p->nCursor ); 03030 p->aCsr[i].keyAsData = pOp->p2; 03031 break; 03032 } 03033 03034 /* Opcode: RowData P1 * * 03035 ** 03036 ** Push onto the stack the complete row data for cursor P1. 03037 ** There is no interpretation of the data. It is just copied 03038 ** onto the stack exactly as it is found in the database file. 03039 ** 03040 ** If the cursor is not pointing to a valid row, a NULL is pushed 03041 ** onto the stack. 03042 */ 03043 /* Opcode: RowKey P1 * * 03044 ** 03045 ** Push onto the stack the complete row key for cursor P1. 03046 ** There is no interpretation of the key. It is just copied 03047 ** onto the stack exactly as it is found in the database file. 03048 ** 03049 ** If the cursor is not pointing to a valid row, a NULL is pushed 03050 ** onto the stack. 03051 */ 03052 case OP_RowKey: 03053 case OP_RowData: { 03054 int i = pOp->p1; 03055 Cursor *pC; 03056 int n; 03057 03058 pTos++; 03059 assert( i>=0 && i<p->nCursor ); 03060 pC = &p->aCsr[i]; 03061 if( pC->nullRow ){ 03062 pTos->flags = MEM_Null; 03063 }else if( pC->pCursor!=0 ){ 03064 BtCursor *pCrsr = pC->pCursor; 03065 sqliteVdbeCursorMoveto(pC); 03066 if( pC->nullRow ){ 03067 pTos->flags = MEM_Null; 03068 break; 03069 }else if( pC->keyAsData || pOp->opcode==OP_RowKey ){ 03070 sqliteBtreeKeySize(pCrsr, &n); 03071 }else{ 03072 sqliteBtreeDataSize(pCrsr, &n); 03073 } 03074 pTos->n = n; 03075 if( n<=NBFS ){ 03076 pTos->flags = MEM_Str | MEM_Short; 03077 pTos->z = pTos->zShort; 03078 }else{ 03079 char *z = sqliteMallocRaw( n ); 03080 if( z==0 ) goto no_mem; 03081 pTos->flags = MEM_Str | MEM_Dyn; 03082 pTos->z = z; 03083 } 03084 if( pC->keyAsData || pOp->opcode==OP_RowKey ){ 03085 sqliteBtreeKey(pCrsr, 0, n, pTos->z); 03086 }else{ 03087 sqliteBtreeData(pCrsr, 0, n, pTos->z); 03088 } 03089 }else if( pC->pseudoTable ){ 03090 pTos->n = pC->nData; 03091 pTos->z = pC->pData; 03092 pTos->flags = MEM_Str|MEM_Ephem; 03093 }else{ 03094 pTos->flags = MEM_Null; 03095 } 03096 break; 03097 } 03098 03099 /* Opcode: Column P1 P2 * 03100 ** 03101 ** Interpret the data that cursor P1 points to as 03102 ** a structure built using the MakeRecord instruction. 03103 ** (See the MakeRecord opcode for additional information about 03104 ** the format of the data.) 03105 ** Push onto the stack the value of the P2-th column contained 03106 ** in the data. 03107 ** 03108 ** If the KeyAsData opcode has previously executed on this cursor, 03109 ** then the field might be extracted from the key rather than the 03110 ** data. 03111 ** 03112 ** If P1 is negative, then the record is stored on the stack rather 03113 ** than in a table. For P1==-1, the top of the stack is used. 03114 ** For P1==-2, the next on the stack is used. And so forth. The 03115 ** value pushed is always just a pointer into the record which is 03116 ** stored further down on the stack. The column value is not copied. 03117 */ 03118 case OP_Column: { 03119 int amt, offset, end, payloadSize; 03120 int i = pOp->p1; 03121 int p2 = pOp->p2; 03122 Cursor *pC; 03123 char *zRec; 03124 BtCursor *pCrsr; 03125 int idxWidth; 03126 unsigned char aHdr[10]; 03127 03128 assert( i<p->nCursor ); 03129 pTos++; 03130 if( i<0 ){ 03131 assert( &pTos[i]>=p->aStack ); 03132 assert( pTos[i].flags & MEM_Str ); 03133 zRec = pTos[i].z; 03134 payloadSize = pTos[i].n; 03135 }else if( (pC = &p->aCsr[i])->pCursor!=0 ){ 03136 sqliteVdbeCursorMoveto(pC); 03137 zRec = 0; 03138 pCrsr = pC->pCursor; 03139 if( pC->nullRow ){ 03140 payloadSize = 0; 03141 }else if( pC->keyAsData ){ 03142 sqliteBtreeKeySize(pCrsr, &payloadSize); 03143 }else{ 03144 sqliteBtreeDataSize(pCrsr, &payloadSize); 03145 } 03146 }else if( pC->pseudoTable ){ 03147 payloadSize = pC->nData; 03148 zRec = pC->pData; 03149 assert( payloadSize==0 || zRec!=0 ); 03150 }else{ 03151 payloadSize = 0; 03152 } 03153 03154 /* Figure out how many bytes in the column data and where the column 03155 ** data begins. 03156 */ 03157 if( payloadSize==0 ){ 03158 pTos->flags = MEM_Null; 03159 break; 03160 }else if( payloadSize<256 ){ 03161 idxWidth = 1; 03162 }else if( payloadSize<65536 ){ 03163 idxWidth = 2; 03164 }else{ 03165 idxWidth = 3; 03166 } 03167 03168 /* Figure out where the requested column is stored and how big it is. 03169 */ 03170 if( payloadSize < idxWidth*(p2+1) ){ 03171 rc = SQLITE_CORRUPT; 03172 goto abort_due_to_error; 03173 } 03174 if( zRec ){ 03175 memcpy(aHdr, &zRec[idxWidth*p2], idxWidth*2); 03176 }else if( pC->keyAsData ){ 03177 sqliteBtreeKey(pCrsr, idxWidth*p2, idxWidth*2, (char*)aHdr); 03178 }else{ 03179 sqliteBtreeData(pCrsr, idxWidth*p2, idxWidth*2, (char*)aHdr); 03180 } 03181 offset = aHdr[0]; 03182 end = aHdr[idxWidth]; 03183 if( idxWidth>1 ){ 03184 offset |= aHdr[1]<<8; 03185 end |= aHdr[idxWidth+1]<<8; 03186 if( idxWidth>2 ){ 03187 offset |= aHdr[2]<<16; 03188 end |= aHdr[idxWidth+2]<<16; 03189 } 03190 } 03191 amt = end - offset; 03192 if( amt<0 || offset<0 || end>payloadSize ){ 03193 rc = SQLITE_CORRUPT; 03194 goto abort_due_to_error; 03195 } 03196 03197 /* amt and offset now hold the offset to the start of data and the 03198 ** amount of data. Go get the data and put it on the stack. 03199 */ 03200 pTos->n = amt; 03201 if( amt==0 ){ 03202 pTos->flags = MEM_Null; 03203 }else if( zRec ){ 03204 pTos->flags = MEM_Str | MEM_Ephem; 03205 pTos->z = &zRec[offset]; 03206 }else{ 03207 if( amt<=NBFS ){ 03208 pTos->flags = MEM_Str | MEM_Short; 03209 pTos->z = pTos->zShort; 03210 }else{ 03211 char *z = sqliteMallocRaw( amt ); 03212 if( z==0 ) goto no_mem; 03213 pTos->flags = MEM_Str | MEM_Dyn; 03214 pTos->z = z; 03215 } 03216 if( pC->keyAsData ){ 03217 sqliteBtreeKey(pCrsr, offset, amt, pTos->z); 03218 }else{ 03219 sqliteBtreeData(pCrsr, offset, amt, pTos->z); 03220 } 03221 } 03222 break; 03223 } 03224 03225 /* Opcode: Recno P1 * * 03226 ** 03227 ** Push onto the stack an integer which is the first 4 bytes of the 03228 ** the key to the current entry in a sequential scan of the database 03229 ** file P1. The sequential scan should have been started using the 03230 ** Next opcode. 03231 */ 03232 case OP_Recno: { 03233 int i = pOp->p1; 03234 Cursor *pC; 03235 int v; 03236 03237 assert( i>=0 && i<p->nCursor ); 03238 pC = &p->aCsr[i]; 03239 sqliteVdbeCursorMoveto(pC); 03240 pTos++; 03241 if( pC->recnoIsValid ){ 03242 v = pC->lastRecno; 03243 }else if( pC->pseudoTable ){ 03244 v = keyToInt(pC->iKey); 03245 }else if( pC->nullRow || pC->pCursor==0 ){ 03246 pTos->flags = MEM_Null; 03247 break; 03248 }else{ 03249 assert( pC->pCursor!=0 ); 03250 sqliteBtreeKey(pC->pCursor, 0, sizeof(u32), (char*)&v); 03251 v = keyToInt(v); 03252 } 03253 pTos->i = v; 03254 pTos->flags = MEM_Int; 03255 break; 03256 } 03257 03258 /* Opcode: FullKey P1 * * 03259 ** 03260 ** Extract the complete key from the record that cursor P1 is currently 03261 ** pointing to and push the key onto the stack as a string. 03262 ** 03263 ** Compare this opcode to Recno. The Recno opcode extracts the first 03264 ** 4 bytes of the key and pushes those bytes onto the stack as an 03265 ** integer. This instruction pushes the entire key as a string. 03266 ** 03267 ** This opcode may not be used on a pseudo-table. 03268 */ 03269 case OP_FullKey: { 03270 int i = pOp->p1; 03271 BtCursor *pCrsr; 03272 03273 assert( p->aCsr[i].keyAsData ); 03274 assert( !p->aCsr[i].pseudoTable ); 03275 assert( i>=0 && i<p->nCursor ); 03276 pTos++; 03277 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 03278 int amt; 03279 char *z; 03280 03281 sqliteVdbeCursorMoveto(&p->aCsr[i]); 03282 sqliteBtreeKeySize(pCrsr, &amt); 03283 if( amt<=0 ){ 03284 rc = SQLITE_CORRUPT; 03285 goto abort_due_to_error; 03286 } 03287 if( amt>NBFS ){ 03288 z = sqliteMallocRaw( amt ); 03289 if( z==0 ) goto no_mem; 03290 pTos->flags = MEM_Str | MEM_Dyn; 03291 }else{ 03292 z = pTos->zShort; 03293 pTos->flags = MEM_Str | MEM_Short; 03294 } 03295 sqliteBtreeKey(pCrsr, 0, amt, z); 03296 pTos->z = z; 03297 pTos->n = amt; 03298 } 03299 break; 03300 } 03301 03302 /* Opcode: NullRow P1 * * 03303 ** 03304 ** Move the cursor P1 to a null row. Any OP_Column operations 03305 ** that occur while the cursor is on the null row will always push 03306 ** a NULL onto the stack. 03307 */ 03308 case OP_NullRow: { 03309 int i = pOp->p1; 03310 03311 assert( i>=0 && i<p->nCursor ); 03312 p->aCsr[i].nullRow = 1; 03313 p->aCsr[i].recnoIsValid = 0; 03314 break; 03315 } 03316 03317 /* Opcode: Last P1 P2 * 03318 ** 03319 ** The next use of the Recno or Column or Next instruction for P1 03320 ** will refer to the last entry in the database table or index. 03321 ** If the table or index is empty and P2>0, then jump immediately to P2. 03322 ** If P2 is 0 or if the table or index is not empty, fall through 03323 ** to the following instruction. 03324 */ 03325 case OP_Last: { 03326 int i = pOp->p1; 03327 Cursor *pC; 03328 BtCursor *pCrsr; 03329 03330 assert( i>=0 && i<p->nCursor ); 03331 pC = &p->aCsr[i]; 03332 if( (pCrsr = pC->pCursor)!=0 ){ 03333 int res; 03334 rc = sqliteBtreeLast(pCrsr, &res); 03335 pC->nullRow = res; 03336 pC->deferredMoveto = 0; 03337 if( res && pOp->p2>0 ){ 03338 pc = pOp->p2 - 1; 03339 } 03340 }else{ 03341 pC->nullRow = 0; 03342 } 03343 break; 03344 } 03345 03346 /* Opcode: Rewind P1 P2 * 03347 ** 03348 ** The next use of the Recno or Column or Next instruction for P1 03349 ** will refer to the first entry in the database table or index. 03350 ** If the table or index is empty and P2>0, then jump immediately to P2. 03351 ** If P2 is 0 or if the table or index is not empty, fall through 03352 ** to the following instruction. 03353 */ 03354 case OP_Rewind: { 03355 int i = pOp->p1; 03356 Cursor *pC; 03357 BtCursor *pCrsr; 03358 03359 assert( i>=0 && i<p->nCursor ); 03360 pC = &p->aCsr[i]; 03361 if( (pCrsr = pC->pCursor)!=0 ){ 03362 int res; 03363 rc = sqliteBtreeFirst(pCrsr, &res); 03364 pC->atFirst = res==0; 03365 pC->nullRow = res; 03366 pC->deferredMoveto = 0; 03367 if( res && pOp->p2>0 ){ 03368 pc = pOp->p2 - 1; 03369 } 03370 }else{ 03371 pC->nullRow = 0; 03372 } 03373 break; 03374 } 03375 03376 /* Opcode: Next P1 P2 * 03377 ** 03378 ** Advance cursor P1 so that it points to the next key/data pair in its 03379 ** table or index. If there are no more key/value pairs then fall through 03380 ** to the following instruction. But if the cursor advance was successful, 03381 ** jump immediately to P2. 03382 ** 03383 ** See also: Prev 03384 */ 03385 /* Opcode: Prev P1 P2 * 03386 ** 03387 ** Back up cursor P1 so that it points to the previous key/data pair in its 03388 ** table or index. If there is no previous key/value pairs then fall through 03389 ** to the following instruction. But if the cursor backup was successful, 03390 ** jump immediately to P2. 03391 */ 03392 case OP_Prev: 03393 case OP_Next: { 03394 Cursor *pC; 03395 BtCursor *pCrsr; 03396 03397 CHECK_FOR_INTERRUPT; 03398 assert( pOp->p1>=0 && pOp->p1<p->nCursor ); 03399 pC = &p->aCsr[pOp->p1]; 03400 if( (pCrsr = pC->pCursor)!=0 ){ 03401 int res; 03402 if( pC->nullRow ){ 03403 res = 1; 03404 }else{ 03405 assert( pC->deferredMoveto==0 ); 03406 rc = pOp->opcode==OP_Next ? sqliteBtreeNext(pCrsr, &res) : 03407 sqliteBtreePrevious(pCrsr, &res); 03408 pC->nullRow = res; 03409 } 03410 if( res==0 ){ 03411 pc = pOp->p2 - 1; 03412 sqlite_search_count++; 03413 } 03414 }else{ 03415 pC->nullRow = 1; 03416 } 03417 pC->recnoIsValid = 0; 03418 break; 03419 } 03420 03421 /* Opcode: IdxPut P1 P2 P3 03422 ** 03423 ** The top of the stack holds a SQL index key made using the 03424 ** MakeIdxKey instruction. This opcode writes that key into the 03425 ** index P1. Data for the entry is nil. 03426 ** 03427 ** If P2==1, then the key must be unique. If the key is not unique, 03428 ** the program aborts with a SQLITE_CONSTRAINT error and the database 03429 ** is rolled back. If P3 is not null, then it becomes part of the 03430 ** error message returned with the SQLITE_CONSTRAINT. 03431 */ 03432 case OP_IdxPut: { 03433 int i = pOp->p1; 03434 BtCursor *pCrsr; 03435 assert( pTos>=p->aStack ); 03436 assert( i>=0 && i<p->nCursor ); 03437 assert( pTos->flags & MEM_Str ); 03438 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 03439 int nKey = pTos->n; 03440 const char *zKey = pTos->z; 03441 if( pOp->p2 ){ 03442 int res, n; 03443 assert( nKey >= 4 ); 03444 rc = sqliteBtreeMoveto(pCrsr, zKey, nKey-4, &res); 03445 if( rc!=SQLITE_OK ) goto abort_due_to_error; 03446 while( res!=0 ){ 03447 int c; 03448 sqliteBtreeKeySize(pCrsr, &n); 03449 if( n==nKey 03450 && sqliteBtreeKeyCompare(pCrsr, zKey, nKey-4, 4, &c)==SQLITE_OK 03451 && c==0 03452 ){ 03453 rc = SQLITE_CONSTRAINT; 03454 if( pOp->p3 && pOp->p3[0] ){ 03455 sqliteSetString(&p->zErrMsg, pOp->p3, (char*)0); 03456 } 03457 goto abort_due_to_error; 03458 } 03459 if( res<0 ){ 03460 sqliteBtreeNext(pCrsr, &res); 03461 res = +1; 03462 }else{ 03463 break; 03464 } 03465 } 03466 } 03467 rc = sqliteBtreeInsert(pCrsr, zKey, nKey, "", 0); 03468 assert( p->aCsr[i].deferredMoveto==0 ); 03469 } 03470 Release(pTos); 03471 pTos--; 03472 break; 03473 } 03474 03475 /* Opcode: IdxDelete P1 * * 03476 ** 03477 ** The top of the stack is an index key built using the MakeIdxKey opcode. 03478 ** This opcode removes that entry from the index. 03479 */ 03480 case OP_IdxDelete: { 03481 int i = pOp->p1; 03482 BtCursor *pCrsr; 03483 assert( pTos>=p->aStack ); 03484 assert( pTos->flags & MEM_Str ); 03485 assert( i>=0 && i<p->nCursor ); 03486 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 03487 int rx, res; 03488 rx = sqliteBtreeMoveto(pCrsr, pTos->z, pTos->n, &res); 03489 if( rx==SQLITE_OK && res==0 ){ 03490 rc = sqliteBtreeDelete(pCrsr); 03491 } 03492 assert( p->aCsr[i].deferredMoveto==0 ); 03493 } 03494 Release(pTos); 03495 pTos--; 03496 break; 03497 } 03498 03499 /* Opcode: IdxRecno P1 * * 03500 ** 03501 ** Push onto the stack an integer which is the last 4 bytes of the 03502 ** the key to the current entry in index P1. These 4 bytes should 03503 ** be the record number of the table entry to which this index entry 03504 ** points. 03505 ** 03506 ** See also: Recno, MakeIdxKey. 03507 */ 03508 case OP_IdxRecno: { 03509 int i = pOp->p1; 03510 BtCursor *pCrsr; 03511 03512 assert( i>=0 && i<p->nCursor ); 03513 pTos++; 03514 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 03515 int v; 03516 int sz; 03517 assert( p->aCsr[i].deferredMoveto==0 ); 03518 sqliteBtreeKeySize(pCrsr, &sz); 03519 if( sz<sizeof(u32) ){ 03520 pTos->flags = MEM_Null; 03521 }else{ 03522 sqliteBtreeKey(pCrsr, sz - sizeof(u32), sizeof(u32), (char*)&v); 03523 v = keyToInt(v); 03524 pTos->i = v; 03525 pTos->flags = MEM_Int; 03526 } 03527 }else{ 03528 pTos->flags = MEM_Null; 03529 } 03530 break; 03531 } 03532 03533 /* Opcode: IdxGT P1 P2 * 03534 ** 03535 ** Compare the top of the stack against the key on the index entry that 03536 ** cursor P1 is currently pointing to. Ignore the last 4 bytes of the 03537 ** index entry. If the index entry is greater than the top of the stack 03538 ** then jump to P2. Otherwise fall through to the next instruction. 03539 ** In either case, the stack is popped once. 03540 */ 03541 /* Opcode: IdxGE P1 P2 * 03542 ** 03543 ** Compare the top of the stack against the key on the index entry that 03544 ** cursor P1 is currently pointing to. Ignore the last 4 bytes of the 03545 ** index entry. If the index entry is greater than or equal to 03546 ** the top of the stack 03547 ** then jump to P2. Otherwise fall through to the next instruction. 03548 ** In either case, the stack is popped once. 03549 */ 03550 /* Opcode: IdxLT P1 P2 * 03551 ** 03552 ** Compare the top of the stack against the key on the index entry that 03553 ** cursor P1 is currently pointing to. Ignore the last 4 bytes of the 03554 ** index entry. If the index entry is less than the top of the stack 03555 ** then jump to P2. Otherwise fall through to the next instruction. 03556 ** In either case, the stack is popped once. 03557 */ 03558 case OP_IdxLT: 03559 case OP_IdxGT: 03560 case OP_IdxGE: { 03561 int i= pOp->p1; 03562 BtCursor *pCrsr; 03563 03564 assert( i>=0 && i<p->nCursor ); 03565 assert( pTos>=p->aStack ); 03566 if( (pCrsr = p->aCsr[i].pCursor)!=0 ){ 03567 int res, rc; 03568 03569 Stringify(pTos); 03570 assert( p->aCsr[i].deferredMoveto==0 ); 03571 rc = sqliteBtreeKeyCompare(pCrsr, pTos->z, pTos->n, 4, &res); 03572 if( rc!=SQLITE_OK ){ 03573 break; 03574 } 03575 if( pOp->opcode==OP_IdxLT ){ 03576 res = -res; 03577 }else if( pOp->opcode==OP_IdxGE ){ 03578 res++; 03579 } 03580 if( res>0 ){ 03581 pc = pOp->p2 - 1 ; 03582 } 03583 } 03584 Release(pTos); 03585 pTos--; 03586 break; 03587 } 03588 03589 /* Opcode: IdxIsNull P1 P2 * 03590 ** 03591 ** The top of the stack contains an index entry such as might be generated 03592 ** by the MakeIdxKey opcode. This routine looks at the first P1 fields of 03593 ** that key. If any of the first P1 fields are NULL, then a jump is made 03594 ** to address P2. Otherwise we fall straight through. 03595 ** 03596 ** The index entry is always popped from the stack. 03597 */ 03598 case OP_IdxIsNull: { 03599 int i = pOp->p1; 03600 int k, n; 03601 const char *z; 03602 03603 assert( pTos>=p->aStack ); 03604 assert( pTos->flags & MEM_Str ); 03605 z = pTos->z; 03606 n = pTos->n; 03607 for(k=0; k<n && i>0; i--){ 03608 if( z[k]=='a' ){ 03609 pc = pOp->p2-1; 03610 break; 03611 } 03612 while( k<n && z[k] ){ k++; } 03613 k++; 03614 } 03615 Release(pTos); 03616 pTos--; 03617 break; 03618 } 03619 03620 /* Opcode: Destroy P1 P2 * 03621 ** 03622 ** Delete an entire database table or index whose root page in the database 03623 ** file is given by P1. 03624 ** 03625 ** The table being destroyed is in the main database file if P2==0. If 03626 ** P2==1 then the table to be clear is in the auxiliary database file 03627 ** that is used to store tables create using CREATE TEMPORARY TABLE. 03628 ** 03629 ** See also: Clear 03630 */ 03631 case OP_Destroy: { 03632 rc = sqliteBtreeDropTable(db->aDb[pOp->p2].pBt, pOp->p1); 03633 break; 03634 } 03635 03636 /* Opcode: Clear P1 P2 * 03637 ** 03638 ** Delete all contents of the database table or index whose root page 03639 ** in the database file is given by P1. But, unlike Destroy, do not 03640 ** remove the table or index from the database file. 03641 ** 03642 ** The table being clear is in the main database file if P2==0. If 03643 ** P2==1 then the table to be clear is in the auxiliary database file 03644 ** that is used to store tables create using CREATE TEMPORARY TABLE. 03645 ** 03646 ** See also: Destroy 03647 */ 03648 case OP_Clear: { 03649 rc = sqliteBtreeClearTable(db->aDb[pOp->p2].pBt, pOp->p1); 03650 break; 03651 } 03652 03653 /* Opcode: CreateTable * P2 P3 03654 ** 03655 ** Allocate a new table in the main database file if P2==0 or in the 03656 ** auxiliary database file if P2==1. Push the page number 03657 ** for the root page of the new table onto the stack. 03658 ** 03659 ** The root page number is also written to a memory location that P3 03660 ** points to. This is the mechanism is used to write the root page 03661 ** number into the parser's internal data structures that describe the 03662 ** new table. 03663 ** 03664 ** The difference between a table and an index is this: A table must 03665 ** have a 4-byte integer key and can have arbitrary data. An index 03666 ** has an arbitrary key but no data. 03667 ** 03668 ** See also: CreateIndex 03669 */ 03670 /* Opcode: CreateIndex * P2 P3 03671 ** 03672 ** Allocate a new index in the main database file if P2==0 or in the 03673 ** auxiliary database file if P2==1. Push the page number of the 03674 ** root page of the new index onto the stack. 03675 ** 03676 ** See documentation on OP_CreateTable for additional information. 03677 */ 03678 case OP_CreateIndex: 03679 case OP_CreateTable: { 03680 int pgno; 03681 assert( pOp->p3!=0 && pOp->p3type==P3_POINTER ); 03682 assert( pOp->p2>=0 && pOp->p2<db->nDb ); 03683 assert( db->aDb[pOp->p2].pBt!=0 ); 03684 if( pOp->opcode==OP_CreateTable ){ 03685 rc = sqliteBtreeCreateTable(db->aDb[pOp->p2].pBt, &pgno); 03686 }else{ 03687 rc = sqliteBtreeCreateIndex(db->aDb[pOp->p2].pBt, &pgno); 03688 } 03689 pTos++; 03690 if( rc==SQLITE_OK ){ 03691 pTos->i = pgno; 03692 pTos->flags = MEM_Int; 03693 *(u32*)pOp->p3 = pgno; 03694 pOp->p3 = 0; 03695 }else{ 03696 pTos->flags = MEM_Null; 03697 } 03698 break; 03699 } 03700 03701 /* Opcode: IntegrityCk P1 P2 * 03702 ** 03703 ** Do an analysis of the currently open database. Push onto the 03704 ** stack the text of an error message describing any problems. 03705 ** If there are no errors, push a "ok" onto the stack. 03706 ** 03707 ** P1 is the index of a set that contains the root page numbers 03708 ** for all tables and indices in the main database file. The set 03709 ** is cleared by this opcode. In other words, after this opcode 03710 ** has executed, the set will be empty. 03711 ** 03712 ** If P2 is not zero, the check is done on the auxiliary database 03713 ** file, not the main database file. 03714 ** 03715 ** This opcode is used for testing purposes only. 03716 */ 03717 case OP_IntegrityCk: { 03718 int nRoot; 03719 int *aRoot; 03720 int iSet = pOp->p1; 03721 Set *pSet; 03722 int j; 03723 HashElem *i; 03724 char *z; 03725 03726 assert( iSet>=0 && iSet<p->nSet ); 03727 pTos++; 03728 pSet = &p->aSet[iSet]; 03729 nRoot = sqliteHashCount(&pSet->hash); 03730 aRoot = sqliteMallocRaw( sizeof(int)*(nRoot+1) ); 03731 if( aRoot==0 ) goto no_mem; 03732 for(j=0, i=sqliteHashFirst(&pSet->hash); i; i=sqliteHashNext(i), j++){ 03733 toInt((char*)sqliteHashKey(i), &aRoot[j]); 03734 } 03735 aRoot[j] = 0; 03736 sqliteHashClear(&pSet->hash); 03737 pSet->prev = 0; 03738 z = sqliteBtreeIntegrityCheck(db->aDb[pOp->p2].pBt, aRoot, nRoot); 03739 if( z==0 || z[0]==0 ){ 03740 if( z ) sqliteFree(z); 03741 pTos->z = "ok"; 03742 pTos->n = 3; 03743 pTos->flags = MEM_Str | MEM_Static; 03744 }else{ 03745 pTos->z = z; 03746 pTos->n = strlen(z) + 1; 03747 pTos->flags = MEM_Str | MEM_Dyn; 03748 } 03749 sqliteFree(aRoot); 03750 break; 03751 } 03752 03753 /* Opcode: ListWrite * * * 03754 ** 03755 ** Write the integer on the top of the stack 03756 ** into the temporary storage list. 03757 */ 03758 case OP_ListWrite: { 03759 Keylist *pKeylist; 03760 assert( pTos>=p->aStack ); 03761 pKeylist = p->pList; 03762 if( pKeylist==0 || pKeylist->nUsed>=pKeylist->nKey ){ 03763 pKeylist = sqliteMallocRaw( sizeof(Keylist)+999*sizeof(pKeylist->aKey[0]) ); 03764 if( pKeylist==0 ) goto no_mem; 03765 pKeylist->nKey = 1000; 03766 pKeylist->nRead = 0; 03767 pKeylist->nUsed = 0; 03768 pKeylist->pNext = p->pList; 03769 p->pList = pKeylist; 03770 } 03771 Integerify(pTos); 03772 pKeylist->aKey[pKeylist->nUsed++] = pTos->i; 03773 Release(pTos); 03774 pTos--; 03775 break; 03776 } 03777 03778 /* Opcode: ListRewind * * * 03779 ** 03780 ** Rewind the temporary buffer back to the beginning. 03781 */ 03782 case OP_ListRewind: { 03783 /* What this opcode codes, really, is reverse the order of the 03784 ** linked list of Keylist structures so that they are read out 03785 ** in the same order that they were read in. */ 03786 Keylist *pRev, *pTop; 03787 pRev = 0; 03788 while( p->pList ){ 03789 pTop = p->pList; 03790 p->pList = pTop->pNext; 03791 pTop->pNext = pRev; 03792 pRev = pTop; 03793 } 03794 p->pList = pRev; 03795 break; 03796 } 03797 03798 /* Opcode: ListRead * P2 * 03799 ** 03800 ** Attempt to read an integer from the temporary storage buffer 03801 ** and push it onto the stack. If the storage buffer is empty, 03802 ** push nothing but instead jump to P2. 03803 */ 03804 case OP_ListRead: { 03805 Keylist *pKeylist; 03806 CHECK_FOR_INTERRUPT; 03807 pKeylist = p->pList; 03808 if( pKeylist!=0 ){ 03809 assert( pKeylist->nRead>=0 ); 03810 assert( pKeylist->nRead<pKeylist->nUsed ); 03811 assert( pKeylist->nRead<pKeylist->nKey ); 03812 pTos++; 03813 pTos->i = pKeylist->aKey[pKeylist->nRead++]; 03814 pTos->flags = MEM_Int; 03815 if( pKeylist->nRead>=pKeylist->nUsed ){ 03816 p->pList = pKeylist->pNext; 03817 sqliteFree(pKeylist); 03818 } 03819 }else{ 03820 pc = pOp->p2 - 1; 03821 } 03822 break; 03823 } 03824 03825 /* Opcode: ListReset * * * 03826 ** 03827 ** Reset the temporary storage buffer so that it holds nothing. 03828 */ 03829 case OP_ListReset: { 03830 if( p->pList ){ 03831 sqliteVdbeKeylistFree(p->pList); 03832 p->pList = 0; 03833 } 03834 break; 03835 } 03836 03837 /* Opcode: ListPush * * * 03838 ** 03839 ** Save the current Vdbe list such that it can be restored by a ListPop 03840 ** opcode. The list is empty after this is executed. 03841 */ 03842 case OP_ListPush: { 03843 p->keylistStackDepth++; 03844 assert(p->keylistStackDepth > 0); 03845 p->keylistStack = sqliteRealloc(p->keylistStack, 03846 sizeof(Keylist *) * p->keylistStackDepth); 03847 if( p->keylistStack==0 ) goto no_mem; 03848 p->keylistStack[p->keylistStackDepth - 1] = p->pList; 03849 p->pList = 0; 03850 break; 03851 } 03852 03853 /* Opcode: ListPop * * * 03854 ** 03855 ** Restore the Vdbe list to the state it was in when ListPush was last 03856 ** executed. 03857 */ 03858 case OP_ListPop: { 03859 assert(p->keylistStackDepth > 0); 03860 p->keylistStackDepth--; 03861 sqliteVdbeKeylistFree(p->pList); 03862 p->pList = p->keylistStack[p->keylistStackDepth]; 03863 p->keylistStack[p->keylistStackDepth] = 0; 03864 if( p->keylistStackDepth == 0 ){ 03865 sqliteFree(p->keylistStack); 03866 p->keylistStack = 0; 03867 } 03868 break; 03869 } 03870 03871 /* Opcode: ContextPush * * * 03872 ** 03873 ** Save the current Vdbe context such that it can be restored by a ContextPop 03874 ** opcode. The context stores the last insert row id, the last statement change 03875 ** count, and the current statement change count. 03876 */ 03877 case OP_ContextPush: { 03878 p->contextStackDepth++; 03879 assert(p->contextStackDepth > 0); 03880 p->contextStack = sqliteRealloc(p->contextStack, 03881 sizeof(Context) * p->contextStackDepth); 03882 if( p->contextStack==0 ) goto no_mem; 03883 p->contextStack[p->contextStackDepth - 1].lastRowid = p->db->lastRowid; 03884 p->contextStack[p->contextStackDepth - 1].lsChange = p->db->lsChange; 03885 p->contextStack[p->contextStackDepth - 1].csChange = p->db->csChange; 03886 break; 03887 } 03888 03889 /* Opcode: ContextPop * * * 03890 ** 03891 ** Restore the Vdbe context to the state it was in when contextPush was last 03892 ** executed. The context stores the last insert row id, the last statement 03893 ** change count, and the current statement change count. 03894 */ 03895 case OP_ContextPop: { 03896 assert(p->contextStackDepth > 0); 03897 p->contextStackDepth--; 03898 p->db->lastRowid = p->contextStack[p->contextStackDepth].lastRowid; 03899 p->db->lsChange = p->contextStack[p->contextStackDepth].lsChange; 03900 p->db->csChange = p->contextStack[p->contextStackDepth].csChange; 03901 if( p->contextStackDepth == 0 ){ 03902 sqliteFree(p->contextStack); 03903 p->contextStack = 0; 03904 } 03905 break; 03906 } 03907 03908 /* Opcode: SortPut * * * 03909 ** 03910 ** The TOS is the key and the NOS is the data. Pop both from the stack 03911 ** and put them on the sorter. The key and data should have been 03912 ** made using SortMakeKey and SortMakeRec, respectively. 03913 */ 03914 case OP_SortPut: { 03915 Mem *pNos = &pTos[-1]; 03916 Sorter *pSorter; 03917 assert( pNos>=p->aStack ); 03918 if( Dynamicify(pTos) || Dynamicify(pNos) ) goto no_mem; 03919 pSorter = sqliteMallocRaw( sizeof(Sorter) ); 03920 if( pSorter==0 ) goto no_mem; 03921 pSorter->pNext = p->pSort; 03922 p->pSort = pSorter; 03923 assert( pTos->flags & MEM_Dyn ); 03924 pSorter->nKey = pTos->n; 03925 pSorter->zKey = pTos->z; 03926 assert( pNos->flags & MEM_Dyn ); 03927 pSorter->nData = pNos->n; 03928 pSorter->pData = pNos->z; 03929 pTos -= 2; 03930 break; 03931 } 03932 03933 /* Opcode: SortMakeRec P1 * * 03934 ** 03935 ** The top P1 elements are the arguments to a callback. Form these 03936 ** elements into a single data entry that can be stored on a sorter 03937 ** using SortPut and later fed to a callback using SortCallback. 03938 */ 03939 case OP_SortMakeRec: { 03940 char *z; 03941 char **azArg; 03942 int nByte; 03943 int nField; 03944 int i; 03945 Mem *pRec; 03946 03947 nField = pOp->p1; 03948 pRec = &pTos[1-nField]; 03949 assert( pRec>=p->aStack ); 03950 nByte = 0; 03951 for(i=0; i<nField; i++, pRec++){ 03952 if( (pRec->flags & MEM_Null)==0 ){ 03953 Stringify(pRec); 03954 nByte += pRec->n; 03955 } 03956 } 03957 nByte += sizeof(char*)*(nField+1); 03958 azArg = sqliteMallocRaw( nByte ); 03959 if( azArg==0 ) goto no_mem; 03960 z = (char*)&azArg[nField+1]; 03961 for(pRec=&pTos[1-nField], i=0; i<nField; i++, pRec++){ 03962 if( pRec->flags & MEM_Null ){ 03963 azArg[i] = 0; 03964 }else{ 03965 azArg[i] = z; 03966 memcpy(z, pRec->z, pRec->n); 03967 z += pRec->n; 03968 } 03969 } 03970 popStack(&pTos, nField); 03971 pTos++; 03972 pTos->n = nByte; 03973 pTos->z = (char*)azArg; 03974 pTos->flags = MEM_Str | MEM_Dyn; 03975 break; 03976 } 03977 03978 /* Opcode: SortMakeKey * * P3 03979 ** 03980 ** Convert the top few entries of the stack into a sort key. The 03981 ** number of stack entries consumed is the number of characters in 03982 ** the string P3. One character from P3 is prepended to each entry. 03983 ** The first character of P3 is prepended to the element lowest in 03984 ** the stack and the last character of P3 is prepended to the top of 03985 ** the stack. All stack entries are separated by a \000 character 03986 ** in the result. The whole key is terminated by two \000 characters 03987 ** in a row. 03988 ** 03989 ** "N" is substituted in place of the P3 character for NULL values. 03990 ** 03991 ** See also the MakeKey and MakeIdxKey opcodes. 03992 */ 03993 case OP_SortMakeKey: { 03994 char *zNewKey; 03995 int nByte; 03996 int nField; 03997 int i, j, k; 03998 Mem *pRec; 03999 04000 nField = strlen(pOp->p3); 04001 pRec = &pTos[1-nField]; 04002 nByte = 1; 04003 for(i=0; i<nField; i++, pRec++){ 04004 if( pRec->flags & MEM_Null ){ 04005 nByte += 2; 04006 }else{ 04007 Stringify(pRec); 04008 nByte += pRec->n+2; 04009 } 04010 } 04011 zNewKey = sqliteMallocRaw( nByte ); 04012 if( zNewKey==0 ) goto no_mem; 04013 j = 0; 04014 k = 0; 04015 for(pRec=&pTos[1-nField], i=0; i<nField; i++, pRec++){ 04016 if( pRec->flags & MEM_Null ){ 04017 zNewKey[j++] = 'N'; 04018 zNewKey[j++] = 0; 04019 k++; 04020 }else{ 04021 zNewKey[j++] = pOp->p3[k++]; 04022 memcpy(&zNewKey[j], pRec->z, pRec->n-1); 04023 j += pRec->n-1; 04024 zNewKey[j++] = 0; 04025 } 04026 } 04027 zNewKey[j] = 0; 04028 assert( j<nByte ); 04029 popStack(&pTos, nField); 04030 pTos++; 04031 pTos->n = nByte; 04032 pTos->flags = MEM_Str|MEM_Dyn; 04033 pTos->z = zNewKey; 04034 break; 04035 } 04036 04037 /* Opcode: Sort * * * 04038 ** 04039 ** Sort all elements on the sorter. The algorithm is a 04040 ** mergesort. 04041 */ 04042 case OP_Sort: { 04043 int i; 04044 Sorter *pElem; 04045 Sorter *apSorter[NSORT]; 04046 for(i=0; i<NSORT; i++){ 04047 apSorter[i] = 0; 04048 } 04049 while( p->pSort ){ 04050 pElem = p->pSort; 04051 p->pSort = pElem->pNext; 04052 pElem->pNext = 0; 04053 for(i=0; i<NSORT-1; i++){ 04054 if( apSorter[i]==0 ){ 04055 apSorter[i] = pElem; 04056 break; 04057 }else{ 04058 pElem = Merge(apSorter[i], pElem); 04059 apSorter[i] = 0; 04060 } 04061 } 04062 if( i>=NSORT-1 ){ 04063 apSorter[NSORT-1] = Merge(apSorter[NSORT-1],pElem); 04064 } 04065 } 04066 pElem = 0; 04067 for(i=0; i<NSORT; i++){ 04068 pElem = Merge(apSorter[i], pElem); 04069 } 04070 p->pSort = pElem; 04071 break; 04072 } 04073 04074 /* Opcode: SortNext * P2 * 04075 ** 04076 ** Push the data for the topmost element in the sorter onto the 04077 ** stack, then remove the element from the sorter. If the sorter 04078 ** is empty, push nothing on the stack and instead jump immediately 04079 ** to instruction P2. 04080 */ 04081 case OP_SortNext: { 04082 Sorter *pSorter = p->pSort; 04083 CHECK_FOR_INTERRUPT; 04084 if( pSorter!=0 ){ 04085 p->pSort = pSorter->pNext; 04086 pTos++; 04087 pTos->z = pSorter->pData; 04088 pTos->n = pSorter->nData; 04089 pTos->flags = MEM_Str|MEM_Dyn; 04090 sqliteFree(pSorter->zKey); 04091 sqliteFree(pSorter); 04092 }else{ 04093 pc = pOp->p2 - 1; 04094 } 04095 break; 04096 } 04097 04098 /* Opcode: SortCallback P1 * * 04099 ** 04100 ** The top of the stack contains a callback record built using 04101 ** the SortMakeRec operation with the same P1 value as this 04102 ** instruction. Pop this record from the stack and invoke the 04103 ** callback on it. 04104 */ 04105 case OP_SortCallback: { 04106 assert( pTos>=p->aStack ); 04107 assert( pTos->flags & MEM_Str ); 04108 p->nCallback++; 04109 p->pc = pc+1; 04110 p->azResColumn = (char**)pTos->z; 04111 assert( p->nResColumn==pOp->p1 ); 04112 p->popStack = 1; 04113 p->pTos = pTos; 04114 return SQLITE_ROW; 04115 } 04116 04117 /* Opcode: SortReset * * * 04118 ** 04119 ** Remove any elements that remain on the sorter. 04120 */ 04121 case OP_SortReset: { 04122 sqliteVdbeSorterReset(p); 04123 break; 04124 } 04125 04126 /* Opcode: FileOpen * * P3 04127 ** 04128 ** Open the file named by P3 for reading using the FileRead opcode. 04129 ** If P3 is "stdin" then open standard input for reading. 04130 */ 04131 case OP_FileOpen: { 04132 assert( pOp->p3!=0 ); 04133 if( p->pFile ){ 04134 if( p->pFile!=stdin ) fclose(p->pFile); 04135 p->pFile = 0; 04136 } 04137 if( sqliteStrICmp(pOp->p3,"stdin")==0 ){ 04138 p->pFile = stdin; 04139 }else{ 04140 p->pFile = fopen(pOp->p3, "r"); 04141 } 04142 if( p->pFile==0 ){ 04143 sqliteSetString(&p->zErrMsg,"unable to open file: ", pOp->p3, (char*)0); 04144 rc = SQLITE_ERROR; 04145 } 04146 break; 04147 } 04148 04149 /* Opcode: FileRead P1 P2 P3 04150 ** 04151 ** Read a single line of input from the open file (the file opened using 04152 ** FileOpen). If we reach end-of-file, jump immediately to P2. If 04153 ** we are able to get another line, split the line apart using P3 as 04154 ** a delimiter. There should be P1 fields. If the input line contains 04155 ** more than P1 fields, ignore the excess. If the input line contains 04156 ** fewer than P1 fields, assume the remaining fields contain NULLs. 04157 ** 04158 ** Input ends if a line consists of just "\.". A field containing only 04159 ** "\N" is a null field. The backslash \ character can be used be used 04160 ** to escape newlines or the delimiter. 04161 */ 04162 case OP_FileRead: { 04163 int n, eol, nField, i, c, nDelim; 04164 char *zDelim, *z; 04165 CHECK_FOR_INTERRUPT; 04166 if( p->pFile==0 ) goto fileread_jump; 04167 nField = pOp->p1; 04168 if( nField<=0 ) goto fileread_jump; 04169 if( nField!=p->nField || p->azField==0 ){ 04170 char **azField = sqliteRealloc(p->azField, sizeof(char*)*nField+1); 04171 if( azField==0 ){ goto no_mem; } 04172 p->azField = azField; 04173 p->nField = nField; 04174 } 04175 n = 0; 04176 eol = 0; 04177 while( eol==0 ){ 04178 if( p->zLine==0 || n+200>p->nLineAlloc ){ 04179 char *zLine; 04180 p->nLineAlloc = p->nLineAlloc*2 + 300; 04181 zLine = sqliteRealloc(p->zLine, p->nLineAlloc); 04182 if( zLine==0 ){ 04183 p->nLineAlloc = 0; 04184 sqliteFree(p->zLine); 04185 p->zLine = 0; 04186 goto no_mem; 04187 } 04188 p->zLine = zLine; 04189 } 04190 if( vdbe_fgets(&p->zLine[n], p->nLineAlloc-n, p->pFile)==0 ){ 04191 eol = 1; 04192 p->zLine[n] = 0; 04193 }else{ 04194 int c; 04195 while( (c = p->zLine[n])!=0 ){ 04196 if( c=='\\' ){ 04197 if( p->zLine[n+1]==0 ) break; 04198 n += 2; 04199 }else if( c=='\n' ){ 04200 p->zLine[n] = 0; 04201 eol = 1; 04202 break; 04203 }else{ 04204 n++; 04205 } 04206 } 04207 } 04208 } 04209 if( n==0 ) goto fileread_jump; 04210 z = p->zLine; 04211 if( z[0]=='\\' && z[1]=='.' && z[2]==0 ){ 04212 goto fileread_jump; 04213 } 04214 zDelim = pOp->p3; 04215 if( zDelim==0 ) zDelim = "\t"; 04216 c = zDelim[0]; 04217 nDelim = strlen(zDelim); 04218 p->azField[0] = z; 04219 for(i=1; *z!=0 && i<=nField; i++){ 04220 int from, to; 04221 from = to = 0; 04222 if( z[0]=='\\' && z[1]=='N' 04223 && (z[2]==0 || strncmp(&z[2],zDelim,nDelim)==0) ){ 04224 if( i<=nField ) p->azField[i-1] = 0; 04225 z += 2 + nDelim; 04226 if( i<nField ) p->azField[i] = z; 04227 continue; 04228 } 04229 while( z[from] ){ 04230 if( z[from]=='\\' && z[from+1]!=0 ){ 04231 int tx = z[from+1]; 04232 switch( tx ){ 04233 case 'b': tx = '\b'; break; 04234 case 'f': tx = '\f'; break; 04235 case 'n': tx = '\n'; break; 04236 case 'r': tx = '\r'; break; 04237 case 't': tx = '\t'; break; 04238 case 'v': tx = '\v'; break; 04239 default: break; 04240 } 04241 z[to++] = tx; 04242 from += 2; 04243 continue; 04244 } 04245 if( z[from]==c && strncmp(&z[from],zDelim,nDelim)==0 ) break; 04246 z[to++] = z[from++]; 04247 } 04248 if( z[from] ){ 04249 z[to] = 0; 04250 z += from + nDelim; 04251 if( i<nField ) p->azField[i] = z; 04252 }else{ 04253 z[to] = 0; 04254 z = ""; 04255 } 04256 } 04257 while( i<nField ){ 04258 p->azField[i++] = 0; 04259 } 04260 break; 04261 04262 /* If we reach end-of-file, or if anything goes wrong, jump here. 04263 ** This code will cause a jump to P2 */ 04264 fileread_jump: 04265 pc = pOp->p2 - 1; 04266 break; 04267 } 04268 04269 /* Opcode: FileColumn P1 * * 04270 ** 04271 ** Push onto the stack the P1-th column of the most recently read line 04272 ** from the input file. 04273 */ 04274 case OP_FileColumn: { 04275 int i = pOp->p1; 04276 char *z; 04277 assert( i>=0 && i<p->nField ); 04278 if( p->azField ){ 04279 z = p->azField[i]; 04280 }else{ 04281 z = 0; 04282 } 04283 pTos++; 04284 if( z ){ 04285 pTos->n = strlen(z) + 1; 04286 pTos->z = z; 04287 pTos->flags = MEM_Str | MEM_Ephem; 04288 }else{ 04289 pTos->flags = MEM_Null; 04290 } 04291 break; 04292 } 04293 04294 /* Opcode: MemStore P1 P2 * 04295 ** 04296 ** Write the top of the stack into memory location P1. 04297 ** P1 should be a small integer since space is allocated 04298 ** for all memory locations between 0 and P1 inclusive. 04299 ** 04300 ** After the data is stored in the memory location, the 04301 ** stack is popped once if P2 is 1. If P2 is zero, then 04302 ** the original data remains on the stack. 04303 */ 04304 case OP_MemStore: { 04305 int i = pOp->p1; 04306 Mem *pMem; 04307 assert( pTos>=p->aStack ); 04308 if( i>=p->nMem ){ 04309 int nOld = p->nMem; 04310 Mem *aMem; 04311 p->nMem = i + 5; 04312 aMem = sqliteRealloc(p->aMem, p->nMem*sizeof(p->aMem[0])); 04313 if( aMem==0 ) goto no_mem; 04314 if( aMem!=p->aMem ){ 04315 int j; 04316 for(j=0; j<nOld; j++){ 04317 if( aMem[j].flags & MEM_Short ){ 04318 aMem[j].z = aMem[j].zShort; 04319 } 04320 } 04321 } 04322 p->aMem = aMem; 04323 if( nOld<p->nMem ){ 04324 memset(&p->aMem[nOld], 0, sizeof(p->aMem[0])*(p->nMem-nOld)); 04325 } 04326 } 04327 Deephemeralize(pTos); 04328 pMem = &p->aMem[i]; 04329 Release(pMem); 04330 *pMem = *pTos; 04331 if( pMem->flags & MEM_Dyn ){ 04332 if( pOp->p2 ){ 04333 pTos->flags = MEM_Null; 04334 }else{ 04335 pMem->z = sqliteMallocRaw( pMem->n ); 04336 if( pMem->z==0 ) goto no_mem; 04337 memcpy(pMem->z, pTos->z, pMem->n); 04338 } 04339 }else if( pMem->flags & MEM_Short ){ 04340 pMem->z = pMem->zShort; 04341 } 04342 if( pOp->p2 ){ 04343 Release(pTos); 04344 pTos--; 04345 } 04346 break; 04347 } 04348 04349 /* Opcode: MemLoad P1 * * 04350 ** 04351 ** Push a copy of the value in memory location P1 onto the stack. 04352 ** 04353 ** If the value is a string, then the value pushed is a pointer to 04354 ** the string that is stored in the memory location. If the memory 04355 ** location is subsequently changed (using OP_MemStore) then the 04356 ** value pushed onto the stack will change too. 04357 */ 04358 case OP_MemLoad: { 04359 int i = pOp->p1; 04360 assert( i>=0 && i<p->nMem ); 04361 pTos++; 04362 memcpy(pTos, &p->aMem[i], sizeof(pTos[0])-NBFS);; 04363 if( pTos->flags & MEM_Str ){ 04364 pTos->flags |= MEM_Ephem; 04365 pTos->flags &= ~(MEM_Dyn|MEM_Static|MEM_Short); 04366 } 04367 break; 04368 } 04369 04370 /* Opcode: MemIncr P1 P2 * 04371 ** 04372 ** Increment the integer valued memory cell P1 by 1. If P2 is not zero 04373 ** and the result after the increment is greater than zero, then jump 04374 ** to P2. 04375 ** 04376 ** This instruction throws an error if the memory cell is not initially 04377 ** an integer. 04378 */ 04379 case OP_MemIncr: { 04380 int i = pOp->p1; 04381 Mem *pMem; 04382 assert( i>=0 && i<p->nMem ); 04383 pMem = &p->aMem[i]; 04384 assert( pMem->flags==MEM_Int ); 04385 pMem->i++; 04386 if( pOp->p2>0 && pMem->i>0 ){ 04387 pc = pOp->p2 - 1; 04388 } 04389 break; 04390 } 04391 04392 /* Opcode: AggReset * P2 * 04393 ** 04394 ** Reset the aggregator so that it no longer contains any data. 04395 ** Future aggregator elements will contain P2 values each. 04396 */ 04397 case OP_AggReset: { 04398 sqliteVdbeAggReset(&p->agg); 04399 p->agg.nMem = pOp->p2; 04400 p->agg.apFunc = sqliteMalloc( p->agg.nMem*sizeof(p->agg.apFunc[0]) ); 04401 if( p->agg.apFunc==0 ) goto no_mem; 04402 break; 04403 } 04404 04405 /* Opcode: AggInit * P2 P3 04406 ** 04407 ** Initialize the function parameters for an aggregate function. 04408 ** The aggregate will operate out of aggregate column P2. 04409 ** P3 is a pointer to the FuncDef structure for the function. 04410 */ 04411 case OP_AggInit: { 04412 int i = pOp->p2; 04413 assert( i>=0 && i<p->agg.nMem ); 04414 p->agg.apFunc[i] = (FuncDef*)pOp->p3; 04415 break; 04416 } 04417 04418 /* Opcode: AggFunc * P2 P3 04419 ** 04420 ** Execute the step function for an aggregate. The 04421 ** function has P2 arguments. P3 is a pointer to the FuncDef 04422 ** structure that specifies the function. 04423 ** 04424 ** The top of the stack must be an integer which is the index of 04425 ** the aggregate column that corresponds to this aggregate function. 04426 ** Ideally, this index would be another parameter, but there are 04427 ** no free parameters left. The integer is popped from the stack. 04428 */ 04429 case OP_AggFunc: { 04430 int n = pOp->p2; 04431 int i; 04432 Mem *pMem, *pRec; 04433 char **azArgv = p->zArgv; 04434 sqlite_func ctx; 04435 04436 assert( n>=0 ); 04437 assert( pTos->flags==MEM_Int ); 04438 pRec = &pTos[-n]; 04439 assert( pRec>=p->aStack ); 04440 for(i=0; i<n; i++, pRec++){ 04441 if( pRec->flags & MEM_Null ){ 04442 azArgv[i] = 0; 04443 }else{ 04444 Stringify(pRec); 04445 azArgv[i] = pRec->z; 04446 } 04447 } 04448 i = pTos->i; 04449 assert( i>=0 && i<p->agg.nMem ); 04450 ctx.pFunc = (FuncDef*)pOp->p3; 04451 pMem = &p->agg.pCurrent->aMem[i]; 04452 ctx.s.z = pMem->zShort; /* Space used for small aggregate contexts */ 04453 ctx.pAgg = pMem->z; 04454 ctx.cnt = ++pMem->i; 04455 ctx.isError = 0; 04456 ctx.isStep = 1; 04457 (ctx.pFunc->xStep)(&ctx, n, (const char**)azArgv); 04458 pMem->z = ctx.pAgg; 04459 pMem->flags = MEM_AggCtx; 04460 popStack(&pTos, n+1); 04461 if( ctx.isError ){ 04462 rc = SQLITE_ERROR; 04463 } 04464 break; 04465 } 04466 04467 /* Opcode: AggFocus * P2 * 04468 ** 04469 ** Pop the top of the stack and use that as an aggregator key. If 04470 ** an aggregator with that same key already exists, then make the 04471 ** aggregator the current aggregator and jump to P2. If no aggregator 04472 ** with the given key exists, create one and make it current but 04473 ** do not jump. 04474 ** 04475 ** The order of aggregator opcodes is important. The order is: 04476 ** AggReset AggFocus AggNext. In other words, you must execute 04477 ** AggReset first, then zero or more AggFocus operations, then 04478 ** zero or more AggNext operations. You must not execute an AggFocus 04479 ** in between an AggNext and an AggReset. 04480 */ 04481 case OP_AggFocus: { 04482 AggElem *pElem; 04483 char *zKey; 04484 int nKey; 04485 04486 assert( pTos>=p->aStack ); 04487 Stringify(pTos); 04488 zKey = pTos->z; 04489 nKey = pTos->n; 04490 pElem = sqliteHashFind(&p->agg.hash, zKey, nKey); 04491 if( pElem ){ 04492 p->agg.pCurrent = pElem; 04493 pc = pOp->p2 - 1; 04494 }else{ 04495 AggInsert(&p->agg, zKey, nKey); 04496 if( sqlite_malloc_failed ) goto no_mem; 04497 } 04498 Release(pTos); 04499 pTos--; 04500 break; 04501 } 04502 04503 /* Opcode: AggSet * P2 * 04504 ** 04505 ** Move the top of the stack into the P2-th field of the current 04506 ** aggregate. String values are duplicated into new memory. 04507 */ 04508 case OP_AggSet: { 04509 AggElem *pFocus = AggInFocus(p->agg); 04510 Mem *pMem; 04511 int i = pOp->p2; 04512 assert( pTos>=p->aStack ); 04513 if( pFocus==0 ) goto no_mem; 04514 assert( i>=0 && i<p->agg.nMem ); 04515 Deephemeralize(pTos); 04516 pMem = &pFocus->aMem[i]; 04517 Release(pMem); 04518 *pMem = *pTos; 04519 if( pMem->flags & MEM_Dyn ){ 04520 pTos->flags = MEM_Null; 04521 }else if( pMem->flags & MEM_Short ){ 04522 pMem->z = pMem->zShort; 04523 } 04524 Release(pTos); 04525 pTos--; 04526 break; 04527 } 04528 04529 /* Opcode: AggGet * P2 * 04530 ** 04531 ** Push a new entry onto the stack which is a copy of the P2-th field 04532 ** of the current aggregate. Strings are not duplicated so 04533 ** string values will be ephemeral. 04534 */ 04535 case OP_AggGet: { 04536 AggElem *pFocus = AggInFocus(p->agg); 04537 Mem *pMem; 04538 int i = pOp->p2; 04539 if( pFocus==0 ) goto no_mem; 04540 assert( i>=0 && i<p->agg.nMem ); 04541 pTos++; 04542 pMem = &pFocus->aMem[i]; 04543 *pTos = *pMem; 04544 if( pTos->flags & MEM_Str ){ 04545 pTos->flags &= ~(MEM_Dyn|MEM_Static|MEM_Short); 04546 pTos->flags |= MEM_Ephem; 04547 } 04548 if( pTos->flags & MEM_AggCtx ){ 04549 Release(pTos); 04550 pTos->flags = MEM_Null; 04551 } 04552 break; 04553 } 04554 04555 /* Opcode: AggNext * P2 * 04556 ** 04557 ** Make the next aggregate value the current aggregate. The prior 04558 ** aggregate is deleted. If all aggregate values have been consumed, 04559 ** jump to P2. 04560 ** 04561 ** The order of aggregator opcodes is important. The order is: 04562 ** AggReset AggFocus AggNext. In other words, you must execute 04563 ** AggReset first, then zero or more AggFocus operations, then 04564 ** zero or more AggNext operations. You must not execute an AggFocus 04565 ** in between an AggNext and an AggReset. 04566 */ 04567 case OP_AggNext: { 04568 CHECK_FOR_INTERRUPT; 04569 if( p->agg.pSearch==0 ){ 04570 p->agg.pSearch = sqliteHashFirst(&p->agg.hash); 04571 }else{ 04572 p->agg.pSearch = sqliteHashNext(p->agg.pSearch); 04573 } 04574 if( p->agg.pSearch==0 ){ 04575 pc = pOp->p2 - 1; 04576 } else { 04577 int i; 04578 sqlite_func ctx; 04579 Mem *aMem; 04580 p->agg.pCurrent = sqliteHashData(p->agg.pSearch); 04581 aMem = p->agg.pCurrent->aMem; 04582 for(i=0; i<p->agg.nMem; i++){ 04583 int freeCtx; 04584 if( p->agg.apFunc[i]==0 ) continue; 04585 if( p->agg.apFunc[i]->xFinalize==0 ) continue; 04586 ctx.s.flags = MEM_Null; 04587 ctx.s.z = aMem[i].zShort; 04588 ctx.pAgg = (void*)aMem[i].z; 04589 freeCtx = aMem[i].z && aMem[i].z!=aMem[i].zShort; 04590 ctx.cnt = aMem[i].i; 04591 ctx.isStep = 0; 04592 ctx.pFunc = p->agg.apFunc[i]; 04593 (*p->agg.apFunc[i]->xFinalize)(&ctx); 04594 if( freeCtx ){ 04595 sqliteFree( aMem[i].z ); 04596 } 04597 aMem[i] = ctx.s; 04598 if( aMem[i].flags & MEM_Short ){ 04599 aMem[i].z = aMem[i].zShort; 04600 } 04601 } 04602 } 04603 break; 04604 } 04605 04606 /* Opcode: SetInsert P1 * P3 04607 ** 04608 ** If Set P1 does not exist then create it. Then insert value 04609 ** P3 into that set. If P3 is NULL, then insert the top of the 04610 ** stack into the set. 04611 */ 04612 case OP_SetInsert: { 04613 int i = pOp->p1; 04614 if( p->nSet<=i ){ 04615 int k; 04616 Set *aSet = sqliteRealloc(p->aSet, (i+1)*sizeof(p->aSet[0]) ); 04617 if( aSet==0 ) goto no_mem; 04618 p->aSet = aSet; 04619 for(k=p->nSet; k<=i; k++){ 04620 sqliteHashInit(&p->aSet[k].hash, SQLITE_HASH_BINARY, 1); 04621 } 04622 p->nSet = i+1; 04623 } 04624 if( pOp->p3 ){ 04625 sqliteHashInsert(&p->aSet[i].hash, pOp->p3, strlen(pOp->p3)+1, p); 04626 }else{ 04627 assert( pTos>=p->aStack ); 04628 Stringify(pTos); 04629 sqliteHashInsert(&p->aSet[i].hash, pTos->z, pTos->n, p); 04630 Release(pTos); 04631 pTos--; 04632 } 04633 if( sqlite_malloc_failed ) goto no_mem; 04634 break; 04635 } 04636 04637 /* Opcode: SetFound P1 P2 * 04638 ** 04639 ** Pop the stack once and compare the value popped off with the 04640 ** contents of set P1. If the element popped exists in set P1, 04641 ** then jump to P2. Otherwise fall through. 04642 */ 04643 case OP_SetFound: { 04644 int i = pOp->p1; 04645 assert( pTos>=p->aStack ); 04646 Stringify(pTos); 04647 if( i>=0 && i<p->nSet && sqliteHashFind(&p->aSet[i].hash, pTos->z, pTos->n)){ 04648 pc = pOp->p2 - 1; 04649 } 04650 Release(pTos); 04651 pTos--; 04652 break; 04653 } 04654 04655 /* Opcode: SetNotFound P1 P2 * 04656 ** 04657 ** Pop the stack once and compare the value popped off with the 04658 ** contents of set P1. If the element popped does not exists in 04659 ** set P1, then jump to P2. Otherwise fall through. 04660 */ 04661 case OP_SetNotFound: { 04662 int i = pOp->p1; 04663 assert( pTos>=p->aStack ); 04664 Stringify(pTos); 04665 if( i<0 || i>=p->nSet || 04666 sqliteHashFind(&p->aSet[i].hash, pTos->z, pTos->n)==0 ){ 04667 pc = pOp->p2 - 1; 04668 } 04669 Release(pTos); 04670 pTos--; 04671 break; 04672 } 04673 04674 /* Opcode: SetFirst P1 P2 * 04675 ** 04676 ** Read the first element from set P1 and push it onto the stack. If the 04677 ** set is empty, push nothing and jump immediately to P2. This opcode is 04678 ** used in combination with OP_SetNext to loop over all elements of a set. 04679 */ 04680 /* Opcode: SetNext P1 P2 * 04681 ** 04682 ** Read the next element from set P1 and push it onto the stack. If there 04683 ** are no more elements in the set, do not do the push and fall through. 04684 ** Otherwise, jump to P2 after pushing the next set element. 04685 */ 04686 case OP_SetFirst: 04687 case OP_SetNext: { 04688 Set *pSet; 04689 CHECK_FOR_INTERRUPT; 04690 if( pOp->p1<0 || pOp->p1>=p->nSet ){ 04691 if( pOp->opcode==OP_SetFirst ) pc = pOp->p2 - 1; 04692 break; 04693 } 04694 pSet = &p->aSet[pOp->p1]; 04695 if( pOp->opcode==OP_SetFirst ){ 04696 pSet->prev = sqliteHashFirst(&pSet->hash); 04697 if( pSet->prev==0 ){ 04698 pc = pOp->p2 - 1; 04699 break; 04700 } 04701 }else{ 04702 if( pSet->prev ){ 04703 pSet->prev = sqliteHashNext(pSet->prev); 04704 } 04705 if( pSet->prev==0 ){ 04706 break; 04707 }else{ 04708 pc = pOp->p2 - 1; 04709 } 04710 } 04711 pTos++; 04712 pTos->z = sqliteHashKey(pSet->prev); 04713 pTos->n = sqliteHashKeysize(pSet->prev); 04714 pTos->flags = MEM_Str | MEM_Ephem; 04715 break; 04716 } 04717 04718 /* Opcode: Vacuum * * * 04719 ** 04720 ** Vacuum the entire database. This opcode will cause other virtual 04721 ** machines to be created and run. It may not be called from within 04722 ** a transaction. 04723 */ 04724 case OP_Vacuum: { 04725 if( sqliteSafetyOff(db) ) goto abort_due_to_misuse; 04726 rc = sqliteRunVacuum(&p->zErrMsg, db); 04727 if( sqliteSafetyOn(db) ) goto abort_due_to_misuse; 04728 break; 04729 } 04730 04731 /* Opcode: StackDepth * * * 04732 ** 04733 ** Push an integer onto the stack which is the depth of the stack prior 04734 ** to that integer being pushed. 04735 */ 04736 case OP_StackDepth: { 04737 int depth = (&pTos[1]) - p->aStack; 04738 pTos++; 04739 pTos->i = depth; 04740 pTos->flags = MEM_Int; 04741 break; 04742 } 04743 04744 /* Opcode: StackReset * * * 04745 ** 04746 ** Pop a single integer off of the stack. Then pop the stack 04747 ** as many times as necessary to get the depth of the stack down 04748 ** to the value of the integer that was popped. 04749 */ 04750 case OP_StackReset: { 04751 int depth, goal; 04752 assert( pTos>=p->aStack ); 04753 Integerify(pTos); 04754 goal = pTos->i; 04755 depth = (&pTos[1]) - p->aStack; 04756 assert( goal<depth ); 04757 popStack(&pTos, depth-goal); 04758 break; 04759 } 04760 04761 /* An other opcode is illegal... 04762 */ 04763 default: { 04764 sqlite_snprintf(sizeof(zBuf),zBuf,"%d",pOp->opcode); 04765 sqliteSetString(&p->zErrMsg, "unknown opcode ", zBuf, (char*)0); 04766 rc = SQLITE_INTERNAL; 04767 break; 04768 } 04769 04770 /***************************************************************************** 04771 ** The cases of the switch statement above this line should all be indented 04772 ** by 6 spaces. But the left-most 6 spaces have been removed to improve the 04773 ** readability. From this point on down, the normal indentation rules are 04774 ** restored. 04775 *****************************************************************************/ 04776 } 04777 04778 #ifdef VDBE_PROFILE 04779 { 04780 long long elapse = hwtime() - start; 04781 pOp->cycles += elapse; 04782 pOp->cnt++; 04783 #if 0 04784 fprintf(stdout, "%10lld ", elapse); 04785 sqliteVdbePrintOp(stdout, origPc, &p->aOp[origPc]); 04786 #endif 04787 } 04788 #endif 04789 04790 /* The following code adds nothing to the actual functionality 04791 ** of the program. It is only here for testing and debugging. 04792 ** On the other hand, it does burn CPU cycles every time through 04793 ** the evaluator loop. So we can leave it out when NDEBUG is defined. 04794 */ 04795 #ifndef NDEBUG 04796 /* Sanity checking on the top element of the stack */ 04797 if( pTos>=p->aStack ){ 04798 assert( pTos->flags!=0 ); /* Must define some type */ 04799 if( pTos->flags & MEM_Str ){ 04800 int x = pTos->flags & (MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short); 04801 assert( x!=0 ); /* Strings must define a string subtype */ 04802 assert( (x & (x-1))==0 ); /* Only one string subtype can be defined */ 04803 assert( pTos->z!=0 ); /* Strings must have a value */ 04804 /* Mem.z points to Mem.zShort iff the subtype is MEM_Short */ 04805 assert( (pTos->flags & MEM_Short)==0 || pTos->z==pTos->zShort ); 04806 assert( (pTos->flags & MEM_Short)!=0 || pTos->z!=pTos->zShort ); 04807 }else{ 04808 /* Cannot define a string subtype for non-string objects */ 04809 assert( (pTos->flags & (MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short))==0 ); 04810 } 04811 /* MEM_Null excludes all other types */ 04812 assert( pTos->flags==MEM_Null || (pTos->flags&MEM_Null)==0 ); 04813 } 04814 if( pc<-1 || pc>=p->nOp ){ 04815 sqliteSetString(&p->zErrMsg, "jump destination out of range", (char*)0); 04816 rc = SQLITE_INTERNAL; 04817 } 04818 if( p->trace && pTos>=p->aStack ){ 04819 int i; 04820 fprintf(p->trace, "Stack:"); 04821 for(i=0; i>-5 && &pTos[i]>=p->aStack; i--){ 04822 if( pTos[i].flags & MEM_Null ){ 04823 fprintf(p->trace, " NULL"); 04824 }else if( (pTos[i].flags & (MEM_Int|MEM_Str))==(MEM_Int|MEM_Str) ){ 04825 fprintf(p->trace, " si:%d", pTos[i].i); 04826 }else if( pTos[i].flags & MEM_Int ){ 04827 fprintf(p->trace, " i:%d", pTos[i].i); 04828 }else if( pTos[i].flags & MEM_Real ){ 04829 fprintf(p->trace, " r:%g", pTos[i].r); 04830 }else if( pTos[i].flags & MEM_Str ){ 04831 int j, k; 04832 char zBuf[100]; 04833 zBuf[0] = ' '; 04834 if( pTos[i].flags & MEM_Dyn ){ 04835 zBuf[1] = 'z'; 04836 assert( (pTos[i].flags & (MEM_Static|MEM_Ephem))==0 ); 04837 }else if( pTos[i].flags & MEM_Static ){ 04838 zBuf[1] = 't'; 04839 assert( (pTos[i].flags & (MEM_Dyn|MEM_Ephem))==0 ); 04840 }else if( pTos[i].flags & MEM_Ephem ){ 04841 zBuf[1] = 'e'; 04842 assert( (pTos[i].flags & (MEM_Static|MEM_Dyn))==0 ); 04843 }else{ 04844 zBuf[1] = 's'; 04845 } 04846 zBuf[2] = '['; 04847 k = 3; 04848 for(j=0; j<20 && j<pTos[i].n; j++){ 04849 int c = pTos[i].z[j]; 04850 if( c==0 && j==pTos[i].n-1 ) break; 04851 if( isprint(c) && !isspace(c) ){ 04852 zBuf[k++] = c; 04853 }else{ 04854 zBuf[k++] = '.'; 04855 } 04856 } 04857 zBuf[k++] = ']'; 04858 zBuf[k++] = 0; 04859 fprintf(p->trace, "%s", zBuf); 04860 }else{ 04861 fprintf(p->trace, " ???"); 04862 } 04863 } 04864 if( rc!=0 ) fprintf(p->trace," rc=%d",rc); 04865 fprintf(p->trace,"\n"); 04866 } 04867 #endif 04868 } /* The end of the for(;;) loop the loops through opcodes */ 04869 04870 /* If we reach this point, it means that execution is finished. 04871 */ 04872 vdbe_halt: 04873 CHECK_FOR_INTERRUPT 04874 if( rc ){ 04875 p->rc = rc; 04876 rc = SQLITE_ERROR; 04877 }else{ 04878 rc = SQLITE_DONE; 04879 } 04880 p->magic = VDBE_MAGIC_HALT; 04881 p->pTos = pTos; 04882 return rc; 04883 04884 /* Jump to here if a malloc() fails. It's hard to get a malloc() 04885 ** to fail on a modern VM computer, so this code is untested. 04886 */ 04887 no_mem: 04888 sqliteSetString(&p->zErrMsg, "out of memory", (char*)0); 04889 rc = SQLITE_NOMEM; 04890 goto vdbe_halt; 04891 04892 /* Jump to here for an SQLITE_MISUSE error. 04893 */ 04894 abort_due_to_misuse: 04895 rc = SQLITE_MISUSE; 04896 /* Fall thru into abort_due_to_error */ 04897 04898 /* Jump to here for any other kind of fatal error. The "rc" variable 04899 ** should hold the error number. 04900 */ 04901 abort_due_to_error: 04902 if( p->zErrMsg==0 ){ 04903 if( sqlite_malloc_failed ) rc = SQLITE_NOMEM; 04904 sqliteSetString(&p->zErrMsg, sqlite_error_string(rc), (char*)0); 04905 } 04906 goto vdbe_halt; 04907 04908 /* Jump to here if the sqlite_interrupt() API sets the interrupt 04909 ** flag. 04910 */ 04911 abort_due_to_interrupt: 04912 assert( db->flags & SQLITE_Interrupt ); 04913 db->flags &= ~SQLITE_Interrupt; 04914 if( db->magic!=SQLITE_MAGIC_BUSY ){ 04915 rc = SQLITE_MISUSE; 04916 }else{ 04917 rc = SQLITE_INTERRUPT; 04918 } 04919 sqliteSetString(&p->zErrMsg, sqlite_error_string(rc), (char*)0); 04920 goto vdbe_halt; 04921 }
__label__pos
0.961656
06 Nov 2017, 00:30 How to run cron jobs with docker Lately I came across the problem of running cron jobs in a docker based environment when we migrated wpvulndb.com to a docker based install. So how should we execute cron jobs when the application is running with docker or docker-compose? You have two choices of running cron jobs with docker: • Execute them on the host system with docker exec or docker run in your application container • Create a separate cron enabled docker container The first method may be the simplest for your needs. Just edit the crontab of your host system and execute single tasks in your application container. The jobs need to run as root on the host system or the user has to be to be in the docker group - which is basically the same as running as root. For example this was one of our cron jobs executed from the host inside the application container using docker-compose: /bin/bash -c 'cd /opt/wpvulndb/ && docker-compose -f docker-compose.yml -f docker-compose.staging.yml -f docker-compose.prod.yml run -T --name cron_sitemap --rm cron bundle exec rake -s sitemap:refresh'" The cron container in this case was just another instance of the main application image. We could also execute docker-compose exec (or docker exec) to run the command in the main application container but we created a separate container so we do not interrupt any processes inside the main container. But there is one simple problem with this setup: Normally errors are written to stdout and the cron daemon keeps sending every output from the jobs to the email address specified in the MAILTO environment variable. Sadly docker-compose does NOT support a quiet/silent flag so the starting messages of the single containers are always printed out and interpreted as errors by the cron daemon. Starting wpvulndb_redis Starting wpvulndb_db Starting wpvulndb_sidekiq Starting wpvulndb_web This ended up with a lot of emails for every cron job although we are only interested in errors. So my choice for running the cron jobs was directly inside an container. There are some blog posts and stack overflow comments about this topic out there but either they are really old or miss some important details. One thing you find a lot on the internet is something like: cron && tail -f /var/log/cron.log as the main docker command - This is bad This command would execute cron and tails the log file so it will be visible with docker logs. But there is an problem with this: If the cron daemon fails after some time the docker container will continue to run because it tails the log file and does not monitor the cron process as it should be. The crash of crond will be undetected. So from outside the container it looks like your cron jobs are running - but they don’t. Only the tail of the log file is running, beeing monitored by docker and reported as UP. You should try to only have one main process running inside the container so the docker engine can monitor the health of your containers and restart them if needed or notify you. So how to run the cron daemon correctly? Normally all docker base images are stripped down with no running processes (like the cron daemon) because of the one process concept mentioned above. In the alpine base image there is a cron daemon installed but you need to run it on your own. The following Dockerfile creates a separate user and copies a new crontab file to the machine. When this container is run all logs will be available via docker logs or docker-compose logs. FROM alpine:latest LABEL maintainer="Christian Mehlmauer <[email protected]>" ENV APP_USER appuser RUN adduser -g "App User" -D $APP_USER COPY crontab /var/spool/cron/crontabs/$APP_USER RUN chmod 0600 /var/spool/cron/crontabs/$APP_USER ENTRYPOINT "crond" CMD ["-f", "-d", "8"] crontab file: 0 2 * * * /bin/date The cron daemon parameters in use are: • -f: The cron daemon will run in the foreground. This way docker is able to monitor the process. • -d 8: This instructs the daemon to log to stderr with the default log level 8. Without this flag messages are only written to syslog and you can’t access them via the logs command. Using this method of cron involves monitoring the logs of the container using some kind of monitoring like a log management or send the output from the jobs itself as email. The cron files on alpine Linux work like this: /var/spool/cron/crontabs/root This file contains every cronjob that should be executed by the root user. If you have a look at the file it contains the following lines per default: # do daily/weekly/monthly maintenance # min hour day month weekday command */15 * * * * run-parts /etc/periodic/15min 0 * * * * run-parts /etc/periodic/hourly 0 2 * * * run-parts /etc/periodic/daily 0 3 * * 6 run-parts /etc/periodic/weekly 0 5 1 * * run-parts /etc/periodic/monthly This means you can also put your executable scripts inside one of these folders and they will be run by root automatically. If you put a bash script into /etc/periodic/15min and make it executable the cron daemon will execute it every 15 minutes. If you want your jobs to be executed at different times just add a line to this file using cron syntax. ATTENTION: You MUST NOT use an extension on the files placed inside the periodic folders. If you place a shell script inside just omit the extension and make sure it starts with the correct shebang #!/bin/sh. See here for details: https://wiki.alpinelinux.org/wiki/Alpine_Linux:FAQ#My_cron_jobs_don.27t_run.3F /var/spool/cron/crontabs/APPUSER This file contains cronjobs that should be executed by the user matching the file name. This is handy if you want to run cron jobs as a different user. It’s a good habit to run jobs as a separate user if the job does not require root privileges to reduce the attack surface. You can also integrate the steps mentioned above inside your main Dockerfile (if it’s based on an alpine based image) and change the entrypoint and command to the cron commands if you need access to the main application for the cron jobs. For example our docker-compose.yml file uses the following snippet on the main Dockerfile to use it as a cron container with a different entrypoint (the user: root is important as the cron daemon needs to run as root): entrypoint: "" user: root command: crond -f -d 8 Also be sure to mount your local timezone file into the container so the time inside matches your host system time and the jobs get executed at the correct time. In docker-compose use the following: volumes: - /etc/localtime:/etc/localtime:ro In docker use the following command line option: -v /etc/localtime:/etc/localtime:ro I hope this post gave a good overview on how to design your docker setup to also run cron jobs.
__label__pos
0.56655
You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.         115 lines 2.8 KiB <?php class CheckUserLogPager extends ReverseChronologicalPager { /** * @var array $searchConds */ protected $searchConds; /** * @param IContextSource $context * @param array $opts Should include 'queryConds', 'year', and 'month' keys */ public function __construct( IContextSource $context, array $conds ) { parent::__construct( $context ); $this->searchConds = $conds['queryConds']; // getDateCond() actually *sets* the timestamp offset.. $this->getDateCond( $conds['year'], $conds['month'] ); } function formatRow( $row ) { $user = Linker::userLink( $row->cul_user, $row->user_name ); if ( $row->cul_type == 'userips' || $row->cul_type == 'useredits' ) { $target = Linker::userLink( $row->cul_target_id, $row->cul_target_text ) . Linker::userToolLinks( $row->cul_target_id, $row->cul_target_text ); } else { $target = $row->cul_target_text; } // Give grep a chance to find the usages: // checkuser-log-entry-userips, checkuser-log-entry-ipedits, // checkuser-log-entry-ipusers, checkuser-log-entry-ipedits-xff // checkuser-log-entry-ipusers-xff, checkuser-log-entry-useredits return '<li>' . $this->msg( 'checkuser-log-entry-' . $row->cul_type, $user, $target, $this->getLanguage()->timeanddate( wfTimestamp( TS_MW, $row->cul_timestamp ), true ) )->text() . Linker::commentBlock( $row->cul_reason ) . '</li>'; } /** * @return string */ function getStartBody() { if ( $this->getNumRows() ) { return '<ul>'; } else { return ''; } } /** * @return string */ function getEndBody() { if ( $this->getNumRows() ) { return '</ul>'; } else { return ''; } } /** * @return string */ function getEmptyBody() { return '<p>' . $this->msg( 'checkuser-empty' )->escaped() . '</p>'; } function getQueryInfo() { return array( 'tables' => array( 'cu_log', 'user' ), 'fields' => $this->selectFields(), 'conds' => array_merge( $this->searchConds, array( 'user_id = cul_user' ) ) ); } function getIndexField() { return 'cul_timestamp'; } function selectFields() { return array( 'cul_id', 'cul_timestamp', 'cul_user', 'cul_reason', 'cul_type', 'cul_target_id', 'cul_target_text', 'user_name' ); } /** * Do a batch query for links' existence and add it to LinkCache * * @param ResultWrapper $result */ protected function preprocessResults( $result ) { if ( $this->getNumRows() === 0 ) { return; } $lb = new LinkBatch; $lb->setCaller( __METHOD__ ); foreach ( $result as $row ) { $lb->add( NS_USER, $row->user_name ); // Performer if ( $row->cul_type == 'userips' || $row->cul_type == 'useredits' ) { $lb->add( NS_USER, $row->cul_target_text ); $lb->add( NS_USER_TALK, $row->cul_target_text ); } } $lb->execute(); $result->seek( 0 ); } }
__label__pos
0.999553
前端開發者必會的技能 剛接觸前端甚至一些「老」前端都經常會在JavaScript中所謂的難點,如this,原型,繼承,閉包等這些概念中迷失了自我。接下來這篇文章會把我自己對於JavaScript中這些點通過指向的概念做個總結並分享給大家,希望可以幫助大家更好的了解這些所謂的難點。 一、this this是什麼?其實它本身就是一種指向。this指向可以分為以下幾種情況 • 普通調用,this指向為調用者 • call/apply調用,this指向為當前thisArg參數 • 箭頭函數,this指向為當前函數的this指向 這個怎麼理解呢?接下來我會一一做解析。 1、普通調用 通俗理解一下,就是誰調用,則this便指向誰。這裡又大致分為幾種情況,分別為 Advertisements 1.1、對象方法的調用 即某方法為某對象上的一個屬性的屬性,正常情況當改方法被調用的時候,this的指向則是掛載該方法的對象。廢話不多說,直接看代碼可能會更好的理解。 var obj = { a: 'this is obj', test: function () { console.log(this.a); }}obj.test();// this is obj 1.2、「單純」函數調用 即該函數為自己獨立的函數,而不是掛載到對象上的屬性(window除外),也不會被當成構造函數來使用,而僅僅是當成函數來使用,此時的this指向則是window對象。例子如下 var a = 'this is window'function test () { Advertisements console.log(this.a);}test();// this is window 這個我們來理解一下,其實也很簡單,我們都知道,window對象是全局對象。其實整個代碼塊等同於 window.a = 'this is window'window.test = function test () { console.log(this.a); // 此時是window為調用者,即this會指向window}window.test(); 1.3、構造函數調用 即該函數被當成構造函數來調用,此時的this指向該構造器函數的實例對象。我們來看一個例子,先上一個屬於第二種情況的例子 function test () { this.a = 'this is test'; console.log(this.a); console.log(this);}test();// this is test// Window {} 按照上面的來理解,此時的this的確指向window對象,但是如果我換種形式,將其換成構造函數來調用呢,結果又會如何呢,直接上代碼 function Test () { this.a = 'this is test'; console.log(this.a); console.log(this);}var test = new Test();// this is test// Test{a: 'this is test'} OK,好像的確沒有問題了,此時的this的確指向了該構造函數的實例對象。具體這裡的一些解釋後面我會在原型鏈繼承裡面詳細講解。 2、call/apply調用 2.1、call調用 call方法形式,fun.call(thisArg[, arg1[, arg2[, ...]]]) • thisArg,當前this指向 • arg1[, arg2[, ...]],指定的參數列表 詳細介紹請猛戳MDN 示例代碼如下 function Test () { this.a = 'this is test'; console.log(this.a); console.log(this);}function Test2 () { Test.call(this)}var test = new Test2();// this is test// Test2{a: 'this is test'} 2.2、apply調用 和call類似,唯一的一個明顯區別就是call參數為多個,apply參數則為兩個,第二個參數為數組或類數組形式,fun.apply(thisArg,[argsArray]) • thisArg,當前this指向 • 一個數組或者類數組對象,其中的數組元素將作為單獨的參數傳給fun函數 詳細介紹請猛戳MDN 但是終究apply裡面的數組參數會轉變為call方法的參數形式,然後去走下面的步驟,這也是為什麼call執行速度比apply快。這邊詳情有篇文章有介紹,點擊鏈接。 另外,提及到call/apply,怎麼能不提及一下bind呢,bind裡面的this指向,會永遠指向bind到的當前的thisArg,即context上下文環境參數不可重寫。這也是為什麼a.bind(b).call(c),最終的this指向會是b的原因。至於為什麼,其實就是bind實現實際上是通過閉包,並且配合call/apply進行實現的。具體的請參考bindMDN裡面的用法及Polyfill實現。 3、箭頭函數 首先需要介紹的一點就是,在箭頭函數本身,它是沒有綁定本身的this的,它的this指向為當前函數的this指向。怎麼理解呢,直接上個代碼看下 function test () { (() => { console.log(this); })()}test.call({a: 'this is thisArg'})// Object {a: 'this isthisArg'} 這樣看聯想上面的call/apply調用的理解,好像是沒有問題了,那如果我設置一個定時器呢,會不是this指向會變成Window全局對象呢?答案肯定是不會的,因為箭頭函數裡面的this特殊性,它依舊會指向當前函數的this指向。不多BB,直接看代碼 function test () { setTimeout(() => { console.log(this); }, 0)}test.call({a: 'this is obj'})// Object {a: 'this isobj'} 當然普通函數使用setTimeout的話會讓this指向指向Window對象的。demo代碼如下 function test () { setTimeout(function () { console.log(this); }, 0)}test.call({a: 'this is obj'})// Window {...} 這裡可能會牽扯到setTimeout的一些點了,具體這裡我就不講了,想深入了解的猛戳這裡 箭頭函數裡面還有一些特殊的點,這裡由於只提及this這一個點,其他比如不綁定arguments,super(ES6),抑或new.target(ES6),他們都和this一樣,他會找尋到當前函數的arguments等。 關於箭頭函數裡面的this這裡也有詳細的介紹,想深入了解的可以自行閱讀 • 英文原版(需翻牆) • 中文翻譯版 二、原型/原型鏈 其實我們一看到原型/原型鏈都能和繼承聯想到一起,我們這裡就把兩塊先拆開來講解,這裡我們就先單獨把原型/原型鏈拎出來。首先我們自己問一下自己,什麼是原型?什麼是原型鏈? • 原型:即每個function函數都有的一個prototype屬性。 • 原型鏈:每個對象和原型都有原型,對象的原型指向原型對象,而父的原型又指向父的父,這種原型層層連接起來的就構成了原型鏈。 好像說的有點繞,其實一張圖可以解釋一切 那麼這個東西有怎麼和指向這個概念去聯繫上呢?其實這裡需要提及到的一個點,也是上面截圖中存在的一個點,就是__proto__,我喜歡把其稱為原型指針。終歸到頭,prototype只不過是一個屬性而已,它沒有什麼實際的意義,最後能做原型鏈繼承的還是通過__proto__這個原型指針來完成的。我們看到的所謂的繼承只不過是將需要繼承的屬性掛載到繼承者的prototype屬性上面去的,實際在找尋繼承的屬性的時候,會通過__proto__原型指針一層一層往上找,即會去找__proto__原型指針它的一個指向。看個demo function Test () { this.a = 'this is Test';}Test.prototype = { b: function () { console.log("this is Test's prototype"); }}function Test2 () { this.a = 'this is Test2'}Test2.prototype = new Test();var test =newTest2();test.b();console.log(test.prototype);console.log(test); 其執行結果如下 更多關於繼承的點,這裡就不提及了,我會在繼承這一章節做詳細的講解。那麼「單獨」關於原型/原型鏈的點就這些了。 總結:原型即prototype,它只是所有function上的一個屬性而已,真正的「大佬」是__proto__,「大佬」指向誰,誰才能有言語權(當然可能因為「大佬」過於霸道,所以在ECMA-262之後才被Standard化)。 三、繼承 關於繼承,之前我有寫過一篇博文對繼承的一些主流方式進行過總結。想詳細了解的請點擊傳送門。這裡我們通過指向這個概念來重新理解一下繼承。這裡咱就談兩個萬變不離其宗的繼承方式,一個是構造函數繼承,一個是原型鏈繼承。 1、構造函數繼承 其實就是上面提及到的通過call/apply調用,將this指向變成thisArg,具體看上面的解釋,這裡直接上代碼 function Test () { this.a = 'this is test'; console.log(this.a); console.log(this);}function Test2 () { Test.apply(this) // or Test.apply(this)}var test = new Test2();// this is test//Test2 {a: 'this is test'} 2、原型鏈繼承 一般情況,我們做原型鏈繼承,會通過子類prototype屬性等於(指向)父類的實例。即 Child.prototype = new Parent(); 那麼這樣的做法具體是怎麼實現原型鏈繼承的呢? 首先在講解繼承前,我們需要get到一個點,那就是對象{ }它內部擁有的一些屬性,我們看到對象{}它本身擁有的屬性就是上面我們提及到的__proto__原型指針以及一些方法。 接下來我先說一下new關鍵字具體做的一件事情。其過程大致分為三步,如下 var obj = {}; // 初始化一個對象obj。obj.__proto__ = Parent.prototype; //將obj的__proto__原型指針指向父類Parent的prototype屬性Parent.call(obj); //初始化Parent構造函數 從這裡我們看出來,相信大家也能理解為什麼我在上面說__proto__才是真正的「大佬」。 這裡我額外提一件我們經常乾的「高端」的事情,那就是通過原型prototype做monkeypatch。即我想在繼承父類方法的同時,完成自己獨立的一些操作。具體代碼如下 function Parent () { this.a = 'this is Parent'}Parent.prototype = { b: function () { console.log(this.a); }}function Child () { this.a = 'this is Child'}Child.prototype = { b: function () { console.log('monkey patch'); Parent.prototype.b.call(this); }}var test = new Child()test.b()// monkey patch// this isChild 這個是我們對於自定義的類進行繼承並重寫,那麼如果是類似Array,Number,String等內置類進行繼承重寫的話,結果會是如何呢?關於這個話題我也有寫過一篇博文進行過講解,合格前端系列第四彈-如何監聽一個數組的變化 四、閉包 對於閉包,我曾經也做過總結和分享,簡單的一些東西和概念這裡不提及了,想了解的可以猛戳這裡。和原型鏈那章一張,這裡會摒棄掉原來的一些看法,這裡我依舊通過代入指向這個概念來進行理解。 一般情況下,我們理解閉包是這樣的:「為了可以訪問函數內的局部變數而定義的內部函數」。 JavaScript語言特性,每一個function內都有一個屬於自己的執行上下文,即特定的context指向。 內層的context上下文總能訪問到外層context上下文中的變數,即每次內部的作用域可以往上層查找直到訪問到當前所需訪問的變數。例子如下 var a = 'this is window'function test () { var b = 'this is test' function test2 () { var c = 'this is test2'; console.log(a); console.log(b); console.log(c); } test2();}test();// this is window// this is test// this istest2 但是如果反過來訪問的話,則不能進行訪問,即變數訪問的指向是當前context上下文的指向的相反方向,且不可逆。如下 function test () { var b = 'this is test';}console.log(b); // UncaughtReferenceError: b is not defined 這裡用一個非常常見的情況作為例子,即for循環配合setTimeout的非同步任務,如下 function test () { for (var i = 0; i < 4; i++) { setTimeout(function () { console.log(i); }, 0) }}test(); 看到上面的例子,我們都知道說:「答案會列印4次4」。那麼為什麼會這樣呢?我想依次列印0,1,2,3又該怎麼做呢? 相信很多小夥伴們都會說,用閉包呀,就能實現了呀。對沒錯,的確用閉包就能實現。那麼為什麼出現這種情況呢? 這裡我簡單提一下,首先這邊牽扯到兩個點,一個就是for循環的同步任務,一個就是setTimeout的非同步任務,在JavaScript線程中,因為本身JavaScript是單線程,這個特點決定了其正常的腳本執行順序是按照文檔流的形式來進行的,即從上往下,從左往右的這樣方向。每次腳本正常執行時,但凡遇到非同步任務的時候,都會將其set到一個taskqueue(任務隊列)中去。然後在執行完同步任務之後,再來執行隊列任務中的非同步任務。 當然對於不同的非同步任務,執行順序也會不一樣,具體就看其到底屬於哪個維度的非同步任務了。這裡我就不詳細扯EventLoop了,想更詳細的了解請戳這裡 回到上面我們想要實現的效果這個問題上來,我們一般處理方法是利用閉包進行參數傳值,代碼如下 function test () { for (var i = 0; i < 4; i++) { (function (e) { setTimeout(function () { console.log(e); }, 0) })(i) }}test();// 0 -> 1 -> 2 -> 3 循環當中,匿名函數會立即執行,並且會將循環當前的 i 作為參數傳入,將其作為當前匿名函數中的形參e的指向,即會保存對 i的引用,它是不會被循環改變的。 當然還有一種常見的方式可以實現上面的效果,即從自執行匿名函數中返回一個函數。代碼如下 function test () { for(var i = 0; i < 4; i++) { setTimeout((function(e) { return function() { console.log(e); } })(i), 0) }}test(); 更多高階閉包的寫法這裡就不一一介紹了,想了解的小夥伴請自行搜索。 Advertisements 你可能會喜歡
__label__pos
0.839562
/* * Copyright (c) 2013, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. * */ #include "precompiled.hpp" #include "gc/g1/g1CollectedHeap.inline.hpp" #include "gc/g1/g1GCPhaseTimes.hpp" #include "gc/g1/g1HotCardCache.hpp" #include "gc/g1/g1ParScanThreadState.inline.hpp" #include "gc/g1/g1StringDedup.hpp" #include "gc/shared/gcTimer.hpp" #include "gc/shared/workerDataArray.inline.hpp" #include "memory/resourceArea.hpp" #include "logging/log.hpp" #include "logging/logStream.hpp" #include "runtime/timer.hpp" #include "runtime/os.hpp" #include "utilities/macros.hpp" static const char* indent(uint level) { static const char* Indents[] = {"", " ", " ", " ", " ", " "}; assert(level < ARRAY_SIZE(Indents), "Too high indent level %u", level); return Indents[level]; } G1GCPhaseTimes::G1GCPhaseTimes(STWGCTimer* gc_timer, uint max_gc_threads) : _max_gc_threads(max_gc_threads), _gc_start_counter(0), _gc_pause_time_ms(0.0), _ref_phase_times(gc_timer, max_gc_threads), _weak_phase_times(max_gc_threads) { assert(max_gc_threads > 0, "Must have some GC threads"); _gc_par_phases[GCWorkerStart] = new WorkerDataArray("GC Worker Start (ms):", max_gc_threads); _gc_par_phases[ExtRootScan] = new WorkerDataArray("Ext Root Scanning (ms):", max_gc_threads); // Root scanning phases _gc_par_phases[ThreadRoots] = new WorkerDataArray("Thread Roots (ms):", max_gc_threads); _gc_par_phases[UniverseRoots] = new WorkerDataArray("Universe Roots (ms):", max_gc_threads); _gc_par_phases[JNIRoots] = new WorkerDataArray("JNI Handles Roots (ms):", max_gc_threads); _gc_par_phases[ObjectSynchronizerRoots] = new WorkerDataArray("ObjectSynchronizer Roots (ms):", max_gc_threads); _gc_par_phases[ManagementRoots] = new WorkerDataArray("Management Roots (ms):", max_gc_threads); _gc_par_phases[SystemDictionaryRoots] = new WorkerDataArray("SystemDictionary Roots (ms):", max_gc_threads); _gc_par_phases[CLDGRoots] = new WorkerDataArray("CLDG Roots (ms):", max_gc_threads); _gc_par_phases[JVMTIRoots] = new WorkerDataArray("JVMTI Roots (ms):", max_gc_threads); AOT_ONLY(_gc_par_phases[AOTCodeRoots] = new WorkerDataArray("AOT Root Scan (ms):", max_gc_threads);) _gc_par_phases[CMRefRoots] = new WorkerDataArray("CM RefProcessor Roots (ms):", max_gc_threads); _gc_par_phases[MergeER] = new WorkerDataArray("Eager Reclaim (ms):", max_gc_threads); _gc_par_phases[MergeRS] = new WorkerDataArray("Remembered Sets (ms):", max_gc_threads); _gc_par_phases[MergeRS]->create_thread_work_items("Merged Sparse:", MergeRSMergedSparse); _gc_par_phases[MergeRS]->create_thread_work_items("Merged Fine:", MergeRSMergedFine); _gc_par_phases[MergeRS]->create_thread_work_items("Merged Coarse:", MergeRSMergedCoarse); _gc_par_phases[MergeRS]->create_thread_work_items("Dirty Cards:", MergeRSDirtyCards); _gc_par_phases[OptMergeRS] = new WorkerDataArray("Optional Remembered Sets (ms):", max_gc_threads); _gc_par_phases[OptMergeRS]->create_thread_work_items("Merged Sparse:", MergeRSMergedSparse); _gc_par_phases[OptMergeRS]->create_thread_work_items("Merged Fine:", MergeRSMergedFine); _gc_par_phases[OptMergeRS]->create_thread_work_items("Merged Coarse:", MergeRSMergedCoarse); _gc_par_phases[OptMergeRS]->create_thread_work_items("Dirty Cards:", MergeRSDirtyCards); _gc_par_phases[MergeLB] = new WorkerDataArray("Log Buffers (ms):", max_gc_threads); if (G1HotCardCache::default_use_cache()) { _gc_par_phases[MergeHCC] = new WorkerDataArray("Hot Card Cache (ms):", max_gc_threads); _gc_par_phases[MergeHCC]->create_thread_work_items("Dirty Cards:", MergeHCCDirtyCards); _gc_par_phases[MergeHCC]->create_thread_work_items("Skipped Cards:", MergeHCCSkippedCards); } else { _gc_par_phases[MergeHCC] = NULL; } _gc_par_phases[ScanHR] = new WorkerDataArray("Scan Heap Roots (ms):", max_gc_threads); _gc_par_phases[OptScanHR] = new WorkerDataArray("Optional Scan Heap Roots (ms):", max_gc_threads); _gc_par_phases[CodeRoots] = new WorkerDataArray("Code Root Scan (ms):", max_gc_threads); _gc_par_phases[OptCodeRoots] = new WorkerDataArray("Optional Code Root Scan (ms):", max_gc_threads); _gc_par_phases[ObjCopy] = new WorkerDataArray("Object Copy (ms):", max_gc_threads); _gc_par_phases[OptObjCopy] = new WorkerDataArray("Optional Object Copy (ms):", max_gc_threads); _gc_par_phases[Termination] = new WorkerDataArray("Termination (ms):", max_gc_threads); _gc_par_phases[OptTermination] = new WorkerDataArray("Optional Termination (ms):", max_gc_threads); _gc_par_phases[GCWorkerTotal] = new WorkerDataArray("GC Worker Total (ms):", max_gc_threads); _gc_par_phases[GCWorkerEnd] = new WorkerDataArray("GC Worker End (ms):", max_gc_threads); _gc_par_phases[Other] = new WorkerDataArray("GC Worker Other (ms):", max_gc_threads); _gc_par_phases[ScanHR]->create_thread_work_items("Scanned Cards:", ScanHRScannedCards); _gc_par_phases[ScanHR]->create_thread_work_items("Scanned Blocks:", ScanHRScannedBlocks); _gc_par_phases[ScanHR]->create_thread_work_items("Claimed Chunks:", ScanHRClaimedChunks); _gc_par_phases[OptScanHR]->create_thread_work_items("Scanned Cards:", ScanHRScannedCards); _gc_par_phases[OptScanHR]->create_thread_work_items("Scanned Blocks:", ScanHRScannedBlocks); _gc_par_phases[OptScanHR]->create_thread_work_items("Claimed Chunks:", ScanHRClaimedChunks); _gc_par_phases[OptScanHR]->create_thread_work_items("Scanned Refs:", ScanHRScannedOptRefs); _gc_par_phases[OptScanHR]->create_thread_work_items("Used Memory:", ScanHRUsedMemory); _gc_par_phases[MergeLB]->create_thread_work_items("Dirty Cards:", MergeLBDirtyCards); _gc_par_phases[MergeLB]->create_thread_work_items("Skipped Cards:", MergeLBSkippedCards); _gc_par_phases[MergePSS] = new WorkerDataArray("Merge Per-Thread State", 1 /* length */, true /* is_serial */); _gc_par_phases[MergePSS]->create_thread_work_items("Copied Bytes", MergePSSCopiedBytes, max_gc_threads); _gc_par_phases[MergePSS]->create_thread_work_items("LAB Waste", MergePSSLABWasteBytes, max_gc_threads); _gc_par_phases[MergePSS]->create_thread_work_items("LAB Undo Waste", MergePSSLABUndoWasteBytes, max_gc_threads); _gc_par_phases[Termination]->create_thread_work_items("Termination Attempts:"); _gc_par_phases[OptTermination]->create_thread_work_items("Optional Termination Attempts:"); if (UseStringDeduplication) { _gc_par_phases[StringDedupQueueFixup] = new WorkerDataArray("Queue Fixup (ms):", max_gc_threads); _gc_par_phases[StringDedupTableFixup] = new WorkerDataArray("Table Fixup (ms):", max_gc_threads); } else { _gc_par_phases[StringDedupQueueFixup] = NULL; _gc_par_phases[StringDedupTableFixup] = NULL; } _gc_par_phases[RedirtyCards] = new WorkerDataArray("Parallel Redirty (ms):", max_gc_threads); _gc_par_phases[RedirtyCards]->create_thread_work_items("Redirtied Cards:"); _gc_par_phases[ParFreeCSet] = new WorkerDataArray("Parallel Free Collection Set (ms):", max_gc_threads); _gc_par_phases[YoungFreeCSet] = new WorkerDataArray("Young Free Collection Set (ms):", max_gc_threads); _gc_par_phases[NonYoungFreeCSet] = new WorkerDataArray("Non-Young Free Collection Set (ms):", max_gc_threads); _gc_par_phases[RebuildFreeList] = new WorkerDataArray("Parallel Rebuild Free List (ms):", max_gc_threads); reset(); } void G1GCPhaseTimes::reset() { _cur_collection_initial_evac_time_ms = 0.0; _cur_optional_evac_time_ms = 0.0; _cur_collection_code_root_fixup_time_ms = 0.0; _cur_strong_code_root_purge_time_ms = 0.0; _cur_merge_heap_roots_time_ms = 0.0; _cur_optional_merge_heap_roots_time_ms = 0.0; _cur_prepare_merge_heap_roots_time_ms = 0.0; _cur_optional_prepare_merge_heap_roots_time_ms = 0.0; _cur_evac_fail_recalc_used = 0.0; _cur_evac_fail_remove_self_forwards = 0.0; _cur_string_deduplication_time_ms = 0.0; _cur_prepare_tlab_time_ms = 0.0; _cur_resize_tlab_time_ms = 0.0; _cur_derived_pointer_table_update_time_ms = 0.0; _cur_clear_ct_time_ms = 0.0; _cur_resize_heap_time_ms = 0.0; _cur_ref_proc_time_ms = 0.0; _cur_collection_start_sec = 0.0; _root_region_scan_wait_time_ms = 0.0; _external_accounted_time_ms = 0.0; _recorded_prepare_heap_roots_time_ms = 0.0; _recorded_clear_claimed_marks_time_ms = 0.0; _recorded_young_cset_choice_time_ms = 0.0; _recorded_non_young_cset_choice_time_ms = 0.0; _recorded_redirty_logged_cards_time_ms = 0.0; _recorded_preserve_cm_referents_time_ms = 0.0; _recorded_start_new_cset_time_ms = 0.0; _recorded_total_free_cset_time_ms = 0.0; _recorded_serial_free_cset_time_ms = 0.0; _recorded_total_rebuild_freelist_time_ms = 0.0; _recorded_serial_rebuild_freelist_time_ms = 0.0; _cur_fast_reclaim_humongous_time_ms = 0.0; _cur_region_register_time = 0.0; _cur_fast_reclaim_humongous_total = 0; _cur_fast_reclaim_humongous_candidates = 0; _cur_fast_reclaim_humongous_reclaimed = 0; _cur_verify_before_time_ms = 0.0; _cur_verify_after_time_ms = 0.0; for (int i = 0; i < GCParPhasesSentinel; i++) { if (_gc_par_phases[i] != NULL) { _gc_par_phases[i]->reset(); } } _ref_phase_times.reset(); _weak_phase_times.reset(); } void G1GCPhaseTimes::note_gc_start() { _gc_start_counter = os::elapsed_counter(); reset(); } #define ASSERT_PHASE_UNINITIALIZED(phase) \ assert(_gc_par_phases[phase] == NULL || _gc_par_phases[phase]->get(i) == uninitialized, "Phase " #phase " reported for thread that was not started"); double G1GCPhaseTimes::worker_time(GCParPhases phase, uint worker) { if (_gc_par_phases[phase] == NULL) { return 0.0; } double value = _gc_par_phases[phase]->get(worker); if (value != WorkerDataArray::uninitialized()) { return value; } return 0.0; } void G1GCPhaseTimes::note_gc_end() { _gc_pause_time_ms = TimeHelper::counter_to_millis(os::elapsed_counter() - _gc_start_counter); double uninitialized = WorkerDataArray::uninitialized(); for (uint i = 0; i < _max_gc_threads; i++) { double worker_start = _gc_par_phases[GCWorkerStart]->get(i); if (worker_start != uninitialized) { assert(_gc_par_phases[GCWorkerEnd]->get(i) != uninitialized, "Worker started but not ended."); double total_worker_time = _gc_par_phases[GCWorkerEnd]->get(i) - _gc_par_phases[GCWorkerStart]->get(i); record_time_secs(GCWorkerTotal, i , total_worker_time); double worker_known_time = worker_time(ExtRootScan, i) + worker_time(ScanHR, i) + worker_time(CodeRoots, i) + worker_time(ObjCopy, i) + worker_time(Termination, i); record_time_secs(Other, i, total_worker_time - worker_known_time); } else { // Make sure all slots are uninitialized since this thread did not seem to have been started ASSERT_PHASE_UNINITIALIZED(GCWorkerEnd); ASSERT_PHASE_UNINITIALIZED(ExtRootScan); ASSERT_PHASE_UNINITIALIZED(MergeER); ASSERT_PHASE_UNINITIALIZED(MergeRS); ASSERT_PHASE_UNINITIALIZED(OptMergeRS); ASSERT_PHASE_UNINITIALIZED(MergeHCC); ASSERT_PHASE_UNINITIALIZED(MergeLB); ASSERT_PHASE_UNINITIALIZED(ScanHR); ASSERT_PHASE_UNINITIALIZED(CodeRoots); ASSERT_PHASE_UNINITIALIZED(OptCodeRoots); ASSERT_PHASE_UNINITIALIZED(ObjCopy); ASSERT_PHASE_UNINITIALIZED(OptObjCopy); ASSERT_PHASE_UNINITIALIZED(Termination); } } } #undef ASSERT_PHASE_UNINITIALIZED // record the time a phase took in seconds void G1GCPhaseTimes::record_time_secs(GCParPhases phase, uint worker_id, double secs) { _gc_par_phases[phase]->set(worker_id, secs); } // add a number of seconds to a phase void G1GCPhaseTimes::add_time_secs(GCParPhases phase, uint worker_id, double secs) { _gc_par_phases[phase]->add(worker_id, secs); } void G1GCPhaseTimes::record_or_add_time_secs(GCParPhases phase, uint worker_id, double secs) { if (_gc_par_phases[phase]->get(worker_id) == _gc_par_phases[phase]->uninitialized()) { record_time_secs(phase, worker_id, secs); } else { add_time_secs(phase, worker_id, secs); } } double G1GCPhaseTimes::get_time_secs(GCParPhases phase, uint worker_id) { return _gc_par_phases[phase]->get(worker_id); } void G1GCPhaseTimes::record_thread_work_item(GCParPhases phase, uint worker_id, size_t count, uint index) { _gc_par_phases[phase]->set_thread_work_item(worker_id, count, index); } void G1GCPhaseTimes::record_or_add_thread_work_item(GCParPhases phase, uint worker_id, size_t count, uint index) { _gc_par_phases[phase]->set_or_add_thread_work_item(worker_id, count, index); } size_t G1GCPhaseTimes::get_thread_work_item(GCParPhases phase, uint worker_id, uint index) { return _gc_par_phases[phase]->get_thread_work_item(worker_id, index); } // return the average time for a phase in milliseconds double G1GCPhaseTimes::average_time_ms(GCParPhases phase) { if (_gc_par_phases[phase] == NULL) { return 0.0; } return _gc_par_phases[phase]->average() * 1000.0; } size_t G1GCPhaseTimes::sum_thread_work_items(GCParPhases phase, uint index) { if (_gc_par_phases[phase] == NULL) { return 0; } assert(_gc_par_phases[phase]->thread_work_items(index) != NULL, "No sub count"); return _gc_par_phases[phase]->thread_work_items(index)->sum(); } template void G1GCPhaseTimes::details(T* phase, const char* indent_str) const { LogTarget(Trace, gc, phases, task) lt; if (lt.is_enabled()) { LogStream ls(lt); ls.print("%s", indent_str); phase->print_details_on(&ls); } } void G1GCPhaseTimes::log_phase(WorkerDataArray* phase, uint indent_level, outputStream* out, bool print_sum) const { out->print("%s", indent(indent_level)); phase->print_summary_on(out, print_sum); details(phase, indent(indent_level)); for (uint i = 0; i < phase->MaxThreadWorkItems; i++) { WorkerDataArray* work_items = phase->thread_work_items(i); if (work_items != NULL) { out->print("%s", indent(indent_level + 1)); work_items->print_summary_on(out, true); details(work_items, indent(indent_level + 1)); } } } void G1GCPhaseTimes::debug_phase(WorkerDataArray* phase, uint extra_indent) const { LogTarget(Debug, gc, phases) lt; if (lt.is_enabled()) { ResourceMark rm; LogStream ls(lt); log_phase(phase, 2 + extra_indent, &ls, true); } } void G1GCPhaseTimes::trace_phase(WorkerDataArray* phase, bool print_sum, uint extra_indent) const { LogTarget(Trace, gc, phases) lt; if (lt.is_enabled()) { LogStream ls(lt); log_phase(phase, 3 + extra_indent, &ls, print_sum); } } #define TIME_FORMAT "%.1lfms" void G1GCPhaseTimes::info_time(const char* name, double value) const { log_info(gc, phases)("%s%s: " TIME_FORMAT, indent(1), name, value); } void G1GCPhaseTimes::debug_time(const char* name, double value) const { log_debug(gc, phases)("%s%s: " TIME_FORMAT, indent(2), name, value); } void G1GCPhaseTimes::debug_time_for_reference(const char* name, double value) const { LogTarget(Debug, gc, phases) lt; LogTarget(Debug, gc, phases, ref) lt2; if (lt.is_enabled()) { LogStream ls(lt); ls.print_cr("%s%s: " TIME_FORMAT, indent(2), name, value); } else if (lt2.is_enabled()) { LogStream ls(lt2); ls.print_cr("%s%s: " TIME_FORMAT, indent(2), name, value); } } void G1GCPhaseTimes::trace_time(const char* name, double value) const { log_trace(gc, phases)("%s%s: " TIME_FORMAT, indent(3), name, value); } void G1GCPhaseTimes::trace_count(const char* name, size_t value) const { log_trace(gc, phases)("%s%s: " SIZE_FORMAT, indent(3), name, value); } double G1GCPhaseTimes::print_pre_evacuate_collection_set() const { const double sum_ms = _root_region_scan_wait_time_ms + _recorded_young_cset_choice_time_ms + _recorded_non_young_cset_choice_time_ms + _cur_region_register_time + _recorded_prepare_heap_roots_time_ms + _recorded_clear_claimed_marks_time_ms; info_time("Pre Evacuate Collection Set", sum_ms); if (_root_region_scan_wait_time_ms > 0.0) { debug_time("Root Region Scan Waiting", _root_region_scan_wait_time_ms); } debug_time("Prepare TLABs", _cur_prepare_tlab_time_ms); debug_time("Choose Collection Set", (_recorded_young_cset_choice_time_ms + _recorded_non_young_cset_choice_time_ms)); debug_time("Region Register", _cur_region_register_time); if (G1EagerReclaimHumongousObjects) { trace_count("Humongous Total", _cur_fast_reclaim_humongous_total); trace_count("Humongous Candidate", _cur_fast_reclaim_humongous_candidates); } debug_time("Prepare Heap Roots", _recorded_prepare_heap_roots_time_ms); if (_recorded_clear_claimed_marks_time_ms > 0.0) { debug_time("Clear Claimed Marks", _recorded_clear_claimed_marks_time_ms); } return sum_ms; } double G1GCPhaseTimes::print_evacuate_optional_collection_set() const { const double sum_ms = _cur_optional_evac_time_ms + _cur_optional_merge_heap_roots_time_ms; if (sum_ms > 0) { info_time("Merge Optional Heap Roots", _cur_optional_merge_heap_roots_time_ms); debug_time("Prepare Optional Merge Heap Roots", _cur_optional_prepare_merge_heap_roots_time_ms); debug_phase(_gc_par_phases[OptMergeRS]); info_time("Evacuate Optional Collection Set", _cur_optional_evac_time_ms); debug_phase(_gc_par_phases[OptScanHR]); debug_phase(_gc_par_phases[OptObjCopy]); debug_phase(_gc_par_phases[OptCodeRoots]); debug_phase(_gc_par_phases[OptTermination]); } return sum_ms; } double G1GCPhaseTimes::print_evacuate_initial_collection_set() const { info_time("Merge Heap Roots", _cur_merge_heap_roots_time_ms); debug_time("Prepare Merge Heap Roots", _cur_prepare_merge_heap_roots_time_ms); debug_phase(_gc_par_phases[MergeER]); debug_phase(_gc_par_phases[MergeRS]); if (G1HotCardCache::default_use_cache()) { debug_phase(_gc_par_phases[MergeHCC]); } debug_phase(_gc_par_phases[MergeLB]); info_time("Evacuate Collection Set", _cur_collection_initial_evac_time_ms); trace_phase(_gc_par_phases[GCWorkerStart], false); debug_phase(_gc_par_phases[ExtRootScan]); for (int i = ExtRootScanSubPhasesFirst; i <= ExtRootScanSubPhasesLast; i++) { trace_phase(_gc_par_phases[i]); } debug_phase(_gc_par_phases[ScanHR]); debug_phase(_gc_par_phases[CodeRoots]); debug_phase(_gc_par_phases[ObjCopy]); debug_phase(_gc_par_phases[Termination]); debug_phase(_gc_par_phases[Other]); debug_phase(_gc_par_phases[GCWorkerTotal]); trace_phase(_gc_par_phases[GCWorkerEnd], false); return _cur_collection_initial_evac_time_ms + _cur_merge_heap_roots_time_ms; } double G1GCPhaseTimes::print_post_evacuate_collection_set() const { const double evac_fail_handling = _cur_evac_fail_recalc_used + _cur_evac_fail_remove_self_forwards; assert(_gc_par_phases[MergePSS]->get(0) != WorkerDataArray::uninitialized(), "must be set"); const double merge_pss = _gc_par_phases[MergePSS]->get(0) * MILLIUNITS; const double sum_ms = evac_fail_handling + _cur_collection_code_root_fixup_time_ms + _recorded_preserve_cm_referents_time_ms + _cur_ref_proc_time_ms + (_weak_phase_times.total_time_sec() * MILLIUNITS) + _cur_clear_ct_time_ms + merge_pss + _cur_strong_code_root_purge_time_ms + _recorded_redirty_logged_cards_time_ms + _recorded_total_free_cset_time_ms + _recorded_total_rebuild_freelist_time_ms + _cur_fast_reclaim_humongous_time_ms + _cur_resize_heap_time_ms + _cur_string_deduplication_time_ms; info_time("Post Evacuate Collection Set", sum_ms); debug_time("Code Roots Fixup", _cur_collection_code_root_fixup_time_ms); debug_time("Clear Card Table", _cur_clear_ct_time_ms); debug_time_for_reference("Reference Processing", _cur_ref_proc_time_ms); _ref_phase_times.print_all_references(2, false); _weak_phase_times.log_print(2); if (G1StringDedup::is_enabled()) { debug_time("String Deduplication", _cur_string_deduplication_time_ms); debug_phase(_gc_par_phases[StringDedupQueueFixup], 1); debug_phase(_gc_par_phases[StringDedupTableFixup], 1); } if (G1CollectedHeap::heap()->evacuation_failed()) { debug_time("Evacuation Failure", evac_fail_handling); trace_time("Recalculate Used", _cur_evac_fail_recalc_used); trace_time("Remove Self Forwards",_cur_evac_fail_remove_self_forwards); } debug_phase(_gc_par_phases[MergePSS], 0); debug_time("Code Roots Purge", _cur_strong_code_root_purge_time_ms); debug_time("Redirty Cards", _recorded_redirty_logged_cards_time_ms); trace_phase(_gc_par_phases[RedirtyCards]); #if COMPILER2_OR_JVMCI debug_time("DerivedPointerTable Update", _cur_derived_pointer_table_update_time_ms); #endif debug_time("Free Collection Set", _recorded_total_free_cset_time_ms); trace_time("Serial Free Collection Set", _recorded_serial_free_cset_time_ms); trace_phase(_gc_par_phases[ParFreeCSet]); trace_phase(_gc_par_phases[YoungFreeCSet], true, 1); trace_phase(_gc_par_phases[NonYoungFreeCSet], true, 1); debug_time("Rebuild Free List", _recorded_total_rebuild_freelist_time_ms); trace_time("Serial Rebuild Free List ", _recorded_serial_rebuild_freelist_time_ms); trace_phase(_gc_par_phases[RebuildFreeList]); if (G1EagerReclaimHumongousObjects) { debug_time("Humongous Reclaim", _cur_fast_reclaim_humongous_time_ms); trace_count("Humongous Reclaimed", _cur_fast_reclaim_humongous_reclaimed); } debug_time("Start New Collection Set", _recorded_start_new_cset_time_ms); if (UseTLAB && ResizeTLAB) { debug_time("Resize TLABs", _cur_resize_tlab_time_ms); } debug_time("Resize Heap After Collection", _cur_resize_heap_time_ms); return sum_ms; } void G1GCPhaseTimes::print_other(double accounted_ms) const { info_time("Other", _gc_pause_time_ms - accounted_ms); } void G1GCPhaseTimes::print() { note_gc_end(); if (_cur_verify_before_time_ms > 0.0) { debug_time("Verify Before", _cur_verify_before_time_ms); } double accounted_ms = 0.0; accounted_ms += print_pre_evacuate_collection_set(); accounted_ms += print_evacuate_initial_collection_set(); accounted_ms += print_evacuate_optional_collection_set(); accounted_ms += print_post_evacuate_collection_set(); print_other(accounted_ms); if (_cur_verify_after_time_ms > 0.0) { debug_time("Verify After", _cur_verify_after_time_ms); } } const char* G1GCPhaseTimes::phase_name(GCParPhases phase) { static const char* names[] = { "GCWorkerStart", "ExtRootScan", "ThreadRoots", "UniverseRoots", "JNIRoots", "ObjectSynchronizerRoots", "ManagementRoots", "SystemDictionaryRoots", "CLDGRoots", "JVMTIRoots", AOT_ONLY("AOTCodeRoots" COMMA) "CMRefRoots", "MergeER", "MergeRS", "OptMergeRS", "MergeLB", "MergeHCC", "ScanHR", "OptScanHR", "CodeRoots", "OptCodeRoots", "ObjCopy", "OptObjCopy", "Termination", "OptTermination", "Other", "GCWorkerTotal", "GCWorkerEnd", "StringDedupQueueFixup", "StringDedupTableFixup", "RedirtyCards", "ParFreeCSet", "YoungFreeCSet", "NonYoungFreeCSet", "RebuildFreeList", "MergePSS" //GCParPhasesSentinel only used to tell end of enum }; STATIC_ASSERT(ARRAY_SIZE(names) == G1GCPhaseTimes::GCParPhasesSentinel); // GCParPhases enum and corresponding string array should have the same "length", this tries to assert it return names[phase]; } G1EvacPhaseWithTrimTimeTracker::G1EvacPhaseWithTrimTimeTracker(G1ParScanThreadState* pss, Tickspan& total_time, Tickspan& trim_time) : _pss(pss), _start(Ticks::now()), _total_time(total_time), _trim_time(trim_time), _stopped(false) { assert(_pss->trim_ticks().value() == 0, "Possibly remaining trim ticks left over from previous use"); } G1EvacPhaseWithTrimTimeTracker::~G1EvacPhaseWithTrimTimeTracker() { if (!_stopped) { stop(); } } void G1EvacPhaseWithTrimTimeTracker::stop() { assert(!_stopped, "Should only be called once"); _total_time += (Ticks::now() - _start) - _pss->trim_ticks(); _trim_time += _pss->trim_ticks(); _pss->reset_trim_ticks(); _stopped = true; } G1GCParPhaseTimesTracker::G1GCParPhaseTimesTracker(G1GCPhaseTimes* phase_times, G1GCPhaseTimes::GCParPhases phase, uint worker_id, bool must_record) : _start_time(), _phase(phase), _phase_times(phase_times), _worker_id(worker_id), _event(), _must_record(must_record) { if (_phase_times != NULL) { _start_time = Ticks::now(); } } G1GCParPhaseTimesTracker::~G1GCParPhaseTimesTracker() { if (_phase_times != NULL) { if (_must_record) { _phase_times->record_time_secs(_phase, _worker_id, (Ticks::now() - _start_time).seconds()); } else { _phase_times->record_or_add_time_secs(_phase, _worker_id, (Ticks::now() - _start_time).seconds()); } _event.commit(GCId::current(), _worker_id, G1GCPhaseTimes::phase_name(_phase)); } } G1EvacPhaseTimesTracker::G1EvacPhaseTimesTracker(G1GCPhaseTimes* phase_times, G1ParScanThreadState* pss, G1GCPhaseTimes::GCParPhases phase, uint worker_id) : G1GCParPhaseTimesTracker(phase_times, phase, worker_id), _total_time(), _trim_time(), _trim_tracker(pss, _total_time, _trim_time) { } G1EvacPhaseTimesTracker::~G1EvacPhaseTimesTracker() { if (_phase_times != NULL) { // Explicitly stop the trim tracker since it's not yet destructed. _trim_tracker.stop(); // Exclude trim time by increasing the start time. _start_time += _trim_time; _phase_times->record_or_add_time_secs(G1GCPhaseTimes::ObjCopy, _worker_id, _trim_time.seconds()); } }
__label__pos
0.996824
and subdirectories - Best way to require all files from a directory in ruby? 5 Answers If it's a directory relative to the file that does the requiring (e.g. you want to load all files in the lib directory): Dir[File.dirname(__FILE__) + '/lib/*.rb'].each {|file| require file } Edit: Based on comments below, an updated version: Dir[File.join(__dir__, 'lib', '*.rb')].each { |file| require file } recursive same different What's the best way to require all files from a directory in ruby ? Dir[File.dirname(__FILE__) + '/../lib/*.rb'].each do |file| require File.basename(file, File.extname(file)) end If you don't strip the extension then you may end up requiring the same file twice (ruby won't realize that "foo" and "foo.rb" are the same file). Requiring the same file twice can lead to spurious warnings (e.g. "warning: already initialized constant"). The best way is to add the directory to the load path and then require the basename of each file. This is because you want to avoid accidentally requiring the same file twice -- often not the intended behavior. Whether a file will be loaded or not is dependent on whether require has seen the path passed to it before. For example, this simple irb session shows that you can mistakenly require and load the same file twice. $ irb irb(main):001:0> require 'test' => true irb(main):002:0> require './test' => true irb(main):003:0> require './test.rb' => false irb(main):004:0> require 'test' => false Note that the first two lines return true meaning the same file was loaded both times. When paths are used, even if the paths point to the same location, require doesn't know that the file was already required. Here instead, we add a directory to the load path and then require the basename of each *.rb file within. dir = "/path/to/directory" $LOAD_PATH.unshift(dir) Dir[File.join(dir, "*.rb")].each {|file| require File.basename(file) } If you don't care about the file being required more than once, or your intention is just to load the contents of the file, perhaps load should be used instead of require. Use load in this case, because it better expresses what you're trying to accomplish. For example: Dir["/path/to/directory/*.rb"].each {|file| load file } Dir[File.join(__dir__, "/app/**/*.rb")].each do |file| require file end This will work recursively on your local machine and a remote (Like Heroku) which does not use relative paths. I'm a few years late to the party, but I kind of like this one-line solution I used to get rails to include everything in app/workers/concerns: Dir[ Rails.root.join *%w(app workers concerns *) ].each{ |f| require f } Related
__label__pos
0.926048
Version: 2019.4 The Asset Database Batching with the AssetDatabase Refreshing the Asset Database Unity refreshes the Asset Database in the following situations: • When the Unity Editor regains focus (if you have enabled Auto-Refresh in the Preferences window) • When you select Assets > Refresh from the menu • When you call AssetDatabase.Refresh from C# Some other AssetDatabase APIs trigger a Refresh() but only for the Assets you specify. For example CreateAsset() and ImportAsset(). Unity performs the following steps during an Asset Database refresh: 1. It looks for changes to the Asset files, and then updates the source Asset Database 2. It imports and compiles code-related files such as .dll, .asmdef, .asmref, .rsp, and .cs files. 3. It then reloads the domain, if Refresh was not invoked from a script. 4. It post-processes all of the Assets for the imported code-related files 5. It then imports non-code-related Assets and post-processes all the remaining imported Assets 6. It then hot reloads the Assets The Asset Database detailed refresh process Unity performs the steps described in the previous section during the Asset Database refresh. This section describes this process in more detail. These steps happen inside a loop, and some steps might cause the refresh process to restart (for example, if importing an Asset creates other Assets which Unity also needs to import). Unity restarts the Asset Database refresh loop under the following conditions: • If, after the import, a file that the importer used has changed on disk. • If, in OnPostProcessAllAssets, you call any of the following: • If the timestamp of the file being imported changes while it is being imported, the file is queued for re-import • When an importer creates a file in the middle of an import (for example, FBX models can restart a Refresh by extracting their Textures from the model). Look for changes on disk When Unity looks for changes on disk, it scans the Assets** and Packages **folders in your Project to check if any files have been added, modified, or deleted since the last scan. It gathers any changes into a list to process in the next step. Update source Asset Database Once Unity gathers the file list, it then gets the file hashes for the files which have either been added or modified. It then updates the Asset Database with the GUIDs for those files, and removes the entries for the files that it detected as deleted. Dependency tracking The Asset Database keeps track of two types of Asset dependencies: static dependencies and dynamic dependencies. If any dependency of an Asset changes, Unity triggers a reimport of that Asset. Static dependencies A static dependency is a value, setting or property that an importer depends on. Static dependencies are known before the Asset is imported, and are not affected by the behavior of the importer during the import process. If a static dependency of an Asset changes, Unity re-imports that Asset. Some common static dependencies are: • The name of the Asset • ID of the importer associated with the Asset • The version of the importer • The currently selected build target platform Dynamic dependencies Unity typically discovers the dynamic dependencies of an Asset during the import process. This is because these dependencies are defined by the content_ _of the source asset. For example, a Shader might reference another Shader, and a Prefab might depend on other Prefabs. The importer might also use a global state conditionally_ _based on the content of the source asset, in which case it also becomes a dynamic dependency. Examples of this are the target platform, the Project’s color space, the graphics API, the scripting runtime version, or the Texture compression state. Unity stores these dynamic dependencies of an Asset in an Asset Import Context. Import and compile code-related files In the list of changed or added files, Unity gathers the ones that relate to code, and sends them to the script compilation pipeline. The compiler generates assemblies from the script files and assembly definition files in your Project. For more information on this step, see documentation on script compilation assembly definition files. Reload the domain If Unity detects any script changes, it reloads the C# domain. It does this because new Scripted Importers could have been created, and their logic could potentially impact the import result of Assets in the Refresh queue. This step restarts the Refresh() to ensure any new Scripted Importers take effect. Import non-code-related Assets Once Unity imports all code-related assets and it reloads the domain, it then moves on to the remaining Assets. Each Asset’s importer processes that type of Asset, and identifies the file types that it should import based on the filename extensions. For example, the TextureImporter is responsible for importing .jpg, .png and .psd files, among others. The importers are split into two groups: Native Importers and Scripted Importers. Native Importers Native importers are built in to Unity, and provide the import functionality for most of Unity’s basic Asset types such as 3D models, Textures and audio files. Importer File Formats AssemblyDefinitionImporter asmdef AssemblyDefinitionReferenceImporter asmref AudioImporter ogg, aif, aiff, flac, wav, mp3, mod, it, s3m, xm ComputeShaderImporter compute DefaultImporter rsp, unity FBXImporter fbx, mb, ma, max, jas, dae, dxf, obj, c4d, blend, lxo IHVImageFormatImporter astc, dds, ktx, pvr LocalizationImporter po Mesh3DSImporter 3ds NativeFormatImporter anim, animset, asset, blendtree, buildreport, colors, controller, cubemap, curves, curvesNormalized, flare, fontsettings, giparams, gradients, guiskin, ht, mask, mat, mesh, mixer, overrideController, particleCurves, particleCurvesSigned, particleDoubleCurves, particleDoubleCurvesSigned, physicMaterial, physicsMaterial2D, playable, preset, renderTexture, shadervariants, spriteatlas, state, statemachine, texture2D, transition, webCamTexture, brush, terrainlayer, signal PackageManifestImporter json PluginImporter dll, winmd, so, jar, java, kt, aar, suprx, prx, rpl, cpp, cc, c, h, jslib, jspre, bc, a, m, mm, swift, xib, bundle, dylib, config PrefabImporter prefab RayTracingShaderImporter raytrace ShaderImporter cginc, cg, glslinc, hlsl, shader SketchUpImporter skp SpeedTreeImporter spm, st SubstanceImporter .sbsar TextScriptImporter txt, html, htm, xml, json, csv, yaml, bytes, fnt, manifest, md, js, boo, rsp TextureImporter jpg, jpeg, tif, tiff, tga, gif, png, psd, bmp, iff, pict, pic, pct, exr, hdr TrueTypeFontImporter ttf, dfont, otf, ttc VideoClipImporter avi, asf, wmv, mov, dv, mp4, m4v, mpg, mpeg, ogv, vp8, webm VisualEffectImporter vfx, vfxoperator, vfxblock Scripted Importers You can define your own importers to add import functionality for new file types, or to override the importer for an existing file type. These importers are called Scripted Importers. Note: In addition to your custom importers, some of Unity’s own importers also count as Scripted Importers, and Unity processes them in this stage instead of the Native Importer stage. The scripted importers Unity ships with are: • StyleSheetImporter ( for .uss files ) • UIElementsViewImporter ( for .uxml files ) When an importer imports an asset file, an AssetImportContext is generated. The AssetImportContext is used to report the Static Dependencies of an asset. Also, during the import step, there are a number of callbacks which occur. Preprocess Asset Importer Calls: Postprocess Asset Importer Calls: One final post processing callback which is triggered once all importing has completed is OnPostprocessAllAssets. There are a number of things that can happen which will restart the refresh process on the Asset folder, some of them being: • If the import of an asset failed • If the asset was modified during the import phase of the Refresh. For example, if a file in the list gets modified so its modification date is not what it was in the previous refresh. This can happen if you start pulling files from a Version Control system while the Editor has focus. • If an Asset created other assets during import. For example: When importing an FBX, textures can get extracted from the FBX and placed into the project, and this means that Unity has to import the textures (and any artifacts they generate). • If you force the re-import of a file during one of the pre/post process callbacks or inside OnPostProcessAllAssets, for example, using AssetDatabase.ForceReserializeAssets or AssetImport.SaveAndReimport. Note, you must be careful not to cause infinite reimport loops if you do this. • If an Assembly Reload happens after compiling scripts. If you generate a C# file during the refresh process, that new file must then be compiled, so Unity restarts the refresh. • If you save an asset as “Text only” but the Asset must be serialized as binary, a restart will happen. (For example, Scenes with Terrains in them must be serialized as Binary, since the terrain data would be unwieldy if viewed as an array of characters in a text file.) Hot reloading Hot reloading refers to the process where Unity imports and applies any changes to scripts and assets while the Editor is open. This might happen while the Editor is in Play Mode or outside of Play Mode. You do not have to restart your application or the Editor for changes to take effect. When you change and save a script, Unity hot reloads all of the currently loaded script data. It first stores all serializable variables in all loaded scripts and, and after it loads the scripts, it restores them. All of the data that was not serializable is lost after a hot reload. Note: Default assets are imported before script assets, so any script-defined PostProcessAllAssets callbacks are not called for default assets. End of Refresh Once all these steps have completed, the Refresh() is complete and the Artifact Database is updated with the relevant information, as well as having the necessary import result files generated on disk. The Asset Database Batching with the AssetDatabase Copyright © 2023 Unity Technologies 优美缔软件(上海)有限公司 版权所有 "Unity"、Unity 徽标及其他 Unity 商标是 Unity Technologies 或其附属机构在美国及其他地区的商标或注册商标。其他名称或品牌是其各自所有者的商标。 公安部备案号: 31010902002961
__label__pos
0.532244
> Blog Book Portfolio Search 5/28/2013 5041 Views // 0 Comments // Not Rated AppFabric Distributed Caching In SharePoint 2013 Provider Hosted Apps One of my favorite new features of SharePoint 2013 is built-in distributed caching, thanks to AppFabric integration. This wasn't available to us out of the box in 2010, which lead my teams down the dark path of custom WCF services deployed to all the front ends on the farm, carefully stitched together via settings in the web.config files. Pardoning the pun, AppFabric takes care of this stitching for us. When I first dug into the API, I was impressed with how many more options are available to us beyond what the System.Web.Caching.Cache class provides. There are the concepts of regions, Put vs. Add operations, (Put will override an existing key where Add will bomb if the key exists) and all kinds of versioning, locking, and callbacks. These tools let us weave a beautiful, robust quilt of caching that will keep our SharePoint apps warm on even the coldest of performance-degrading nights. The best part is that we can automatically deploy and configure our cache layer without having to write any code against the 2013 server object model. Avoiding references to Microsoft.SharePoint.dll and its cousins is one of biggest development challenges under the new SharePoint customization paradigms. When I first sat down to learn AppFabric caching (after waiting over twenty seconds for my custom CSOM term-driven navigation to load) I was afraid I was going to have to violate this core principle. Let me set the stage before jumping into the first Act of the code, which is the deployment bits. Act II will then be the cache layer itself (which, as an inside joke at Rightpoint, I call "CacheMonster"). The project is a public-facing SharePoint 2013 web site, and is built with an S2S provider-hosted (MVC) app whose CSOM-driven controllers provide the data access layer to the SharePoint pages. The site has an online store component, and users are sent to the MVC site directly to consume that content. I use a ton of PowerShell to drive the deployments, which will be the topic of my next post. Part of this work is sucking in some AppFaric information from SharePoint and writing it to the web.config files. Once this is set, the cache code is ready to rock. Since everything you need is in Microsoft.ApplicationServer.Caching, (and not the SharePoint API) this code is free to run in our MVC site. <Note> On my development machine, I didn't have to do anything to enable or configure AppFabric. My environment is a bare metal Windows Server 2012 box with a domain controller and "complete" SharePoint Enterprise install. When you use the stand alone option, you get a bunch of service applications provisioned for you; complete lands you with a pretty bare bones Central Administration. I mention this detail because even with the bare-bones-ed-ness of my install, I still didn't have to do anything to bring AppFabric to life. So if the following doesn't work for you, make sure AppFabric is installed and activated and whatnot. It also works on Windows Server 2008 R2. </Note> First up, in the aforementioned Act I, is the deployment bits. This performance stars a fun PowerShell script that reads in three values from SharePoint and AppFabric and stuffs them into the web.config of our MVC provider-hosted app. These are the "Cache Host," (the endpoint of the AppFabric cache service) the "Cache Port," (the port of the service) and the "Cache Name (the farm's guid). Here's the script: Code Listing 1 1. #initialization 2. param($webConfigPath = $(Read-Host -prompt "Web.Config File Path")) 3. #ensure sharepoint 4. if ((Get-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue) -eq $null) 5. { 6. #load snapin 7. Add-PSSnapIn Microsoft.SharePoint.PowerShell; 8. } 9. #open web.config file 10. $webConfigPath = Join-Path $webConfigPath "web.config"; 11. $xml = [xml] (Get-Content $webConfigPath); 12. #get cache settings 13. $cacheHostNode = $xml.SelectSingleNode("/configuration/appSettings/add[@key='CacheHost']"); 14. $cachePortNode = $xml.SelectSingleNode("/configuration/appSettings/add[@key='CachePort']"); 15. $cacheNameNode = $xml.SelectSingleNode("/configuration/appSettings/add[@key='CacheName']"); 16. #get cache info 17. Use-CacheCluster; 18. $farm = Get-SPFarm; 19. $cache = Get-CacheHost; 20. #ensure cache host 21. if($cacheHostNode -eq $null) 22. { 23. #create cache host node 24. $element = $xml.CreateElement("add"); 25. $attribute = $xml.CreateAttribute("key"); 26. $attribute.Value = "CacheHost"; 27. $element.Attributes.Append($attribute); 28. $attribute = $xml.CreateAttribute("value"); 29. $attribute.Value = $cache.HostName.ToString(); 30. $element.Attributes.Append($attribute); 31. $xml.configuration.appSettings.AppendChild($element); 32. } 33. else 34. { 35. #update cache host node 36. $cacheHostNode.Value = $cache.HostName.ToString(); 37. } 38. #ensure cache port 39. if($cachePortNode -eq $null) 40. { 41. #create cache port node 42. $element = $xml.CreateElement("add"); 43. $attribute = $xml.CreateAttribute("key"); 44. $attribute.Value = "CachePort"; 45. $element.Attributes.Append($attribute); 46. $attribute = $xml.CreateAttribute("value"); 47. $attribute.Value = $cache.PortNo.ToString(); 48. $element.Attributes.Append($attribute); 49. $xml.configuration.appSettings.AppendChild($element); 50. } 51. else 52. { 53. #update cache port node 54. $cachePortNode.Value = $cache.PortNo.ToString(); 55. } 56. #ensure cache name 57. if($cacheNameNode -eq $null) 58. { 59. #create cache name node 60. $element = $xml.CreateElement("add"); 61. $attribute = $xml.CreateAttribute("key"); 62. $attribute.Value = "CacheName"; 63. $element.Attributes.Append($attribute); 64. $attribute = $xml.CreateAttribute("value"); 65. $attribute.Value = $farm.Id.ToString(); 66. $element.Attributes.Append($attribute); 67. $xml.configuration.appSettings.AppendChild($element); 68. } 69. else 70. { 71. #update cache name node 72. $cacheNameNode.Value = $farm.Id.ToString(); 73. } 74. #save 75. $xml.Save($webConfigPath); This script needs to be run on every web front end. It takes in the path to the web.config file and gets its content as a blob of XML (Line #11). Next in Line #'s 13-15, we get a reference to the three nodes that represent the app settings for our cache values. Line #'s 17-19 then get objects that represent the SharePoint farm and AppFabric cache service. Here's what the output of these variables looks like in PowerShell ISE: PowerShell ISE The rest of the script then takes each value and either creates an AppSettings node for it (the first time the script is run against a particular web.config) or updates the existing setting's value. The PowerShell is a bit verbose here; Line #'s 21-37 show the first of the three blocks that do this XML manipulation work. Notice that whenever I set the value of a node, I call ToString on it (Line #'s 29 and 36 for example). Even if both sides of the assignment are technically strings, PowerShell gets pissy and will passive-aggressively throw an exception to let you know that all of a sudden it cares about types and wants its XML node values to be proper .NET strings. Cannot set "value" because only strings can be used as values to set XmlNode properties. Finally, Line #75 saves the web.config file. Once these values are in place, the code in your MVC app can use the AppFabric caching infrastructure. This brings us to Act II, which is my simple little caching layer, a.k.a. the CacheMonster. CacheMonster is a static class that wraps the AppFabric API and provides basic Put/Get operations to the rest of the app. There is a lot more that can be done, (as I mentioned before) but I just wanted to outline the basic operations here. The core object here is a static instance of Microsoft.ApplicationServer.Caching.DataCache. Following a standard singleton pattern, CacheMonster keeps this object around for the duration of the type's lifetime. Here's the "initialization" code that pulls the web.config values it needs and news up the caching infrastructure. Code Listing 2 1. #region Members 2. private static DataCache _cache; 3. private static readonly object _lock = new object(); 4. private static DataCache Cache 5. { 6. get 7. { 8. //lock 9. lock (CacheMonster._lock) 10. { 11. //ensure single instance of cache 12. if (CacheMonster._cache == null) 13. { 14. //configure app fabric 15. DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration(); 16. config.Servers = new List<DataCacheServerEndpoint>() 17. { 18. //register sharepoint server 19. new DataCacheServerEndpoint(ConfigurationManager.AppSettings["CacheHost"], Convert.ToInt32(ConfigurationManager.AppSettings["CachePort"])) 20. }; 21. //get cache 22. CacheMonster._cache = new DataCacheFactory(config).GetCache(string.Concat(Constants.Cache.Name, new Guid(ConfigurationManager.AppSettings["CacheName"]))); 23. } 24. //return 25. return CacheMonster._cache; 26. } 27. } 28. } 29. #endregion There's a lot going on here, but it's fairly straight forward: lock to make sure our singleton instance isn't duplicated, build an array of DataCacheServerEndpoint objects, (that will only have one entry: the current front end's AppFabric service) and then pass this configuration information to the DataCacheFactory's GetCache method to return the actual DataCache object that SharePoint hosts. Finally, let's look at CacheMonster's static methods which provide these services to the rest of the app. Once everything is wired up and we've brought our DataCache object to life, the rest is easy: build as thick or thin of a wrapper around it as you need. SharePoint and AppFabric will handle all the distribution and memory management for us (the optional configuration of which is outside the scope of this post). Code Listing 3 1. public static void Put(string key, object thing) 2. { 3. //use default timeout 4. CacheMonster.Put(key, thing, Convert.ToInt32(ConfigurationManager.AppSettings["CacheTimeout"])); 5. } 6. public static void Put(string key, object thing, int timeoutSeconds) 7. { 8. //use supplied timeout 9. CacheMonster.Cache.Put(key, thing, TimeSpan.FromMinutes(timeoutSeconds)); 10. } 11. public static T Get<T>(string key) 12. { 13. //get thing 14. object thing = CacheMonster.Cache.Get(key); 15. //determine null 16. if (thing == null) 17. return default(T); 18. else 19. return (T)thing; 20. } The general pattern I follow for caching is Get, null check, query, Put. Getting (Line #'s 11-20) requires a key that I store in my constants files (or web.config or whatever) and casts the stored object to the specified generic type. If the thing (usually an MVC model) I'm getting is null, I then do my CSOM query, assemble my model, and do a Put. Putting uses this same key to store the object (Line #'s 1-5). An override (Line #'s 6-10) provides an optional timeout in minutes; otherwise an app-wide default is used. That's pretty much it. Executing CSOM queries (or even just opening up a ClientContext) can be a bit expensive, so make sure you have budget in your project to implement a CacheMonster. If you wait until you get unsatisfactory results from UAT or performance testing, it's far too late to reliably tear apart your data layer. Or maybe it's not, given how easy it is to take advantage of SharePoint 2013's distributed caching! Have fun! 5 Tags No Files No Thoughts Your Thoughts? You need to login with Twitter to share a Thought on this post. Loading...
__label__pos
0.944091
Non volatile variables and configuration settings for MFC applications CodeGuru content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More. Environment: VC6 sp5 tested on W95 and W98 This article describes two classes for storing and retrieving string, numeric and BOOL variables to/from files in ASCII text format – much like INI files. The first class – CTxtVar, is used to dynamically add and remove variables in the text file. The second class – CCfg, which is derived from CTxtVar, is used as a base class to map a fixed set of variables or configuration settings in the text file to constant ID’s which are used anywhere in the application to access (read and write) the actual variables/settings in the text file. The CTxtVar class I have chosen to put the variables in text files in human readable format instead of a binary format in order to make it easy to examine and modify the variables from any text editor. The format of the text file is divided into sections, items and variables: [SECTION_HEADER] ITEM VAR,VAR,VAR ITEM VAR ITEM VAR,VAR,VAR,VAR,VAR [SECTION_HEADER] ITEM VAR ITEM VAR,VAR,VAR Every item can have up to 20 variables attached to it (this is #defined to 20 in the CTxtVar header file for now – in the future this may be changed to a dynamic CStringList instead of a fixed array as now). For a real example, see the edit box on the right in the picture of the demo app above. All variables are handled as strings in the CTxtVar class. Each line in the text file (item and section headers) are stored in a CStringList in the CTxtVar class. Items are accessed with a section header name and an item name. There are also functions to split the item and it’s variables from a single string to an array of strings, one for each variable, which makes it possible to extract a single variable from an item. String variables and item names that contains white spaces or commas must be embedded within double quotes to be treated as one single variable. The items and it’s variables are written back to the string list as one complete string. It is up to the calling function to build this string before it is written back. Other functions in the CTxtVar class is: Read stringlist from file, write stringlist to file, find section, find item in section, add item, remove item, remove section and iterate through a section getting the items one by one. The CTxtVar class is used when the number of items in a section is dynamic – items are added and removed on the fly, or when the item name and number of variables for an item is unknown at design time. Even the section name could be unknown. Example of this is list or combo boxes which could have it’s contents manipulated by the user. The demo app demonstrates this example. The functions SetItem() and SetSectionItem() has a BOOL parameter called bAdd which, if set to TRUE, allows items with identical names to be added to one section as separate items. An example where this could be useful is where the item represents a graphical object of some sort in a view. The name of the item is given by the user – not known at design time. The graphical object could be displayed on several places, perhaps in different modes on the same view. The item is the name supplied by the user, variable 1 could be the drawing mode, variable 2 the x position, variable 3 the y position and so on. This would mean that the item name would show up for every object of this type that is placed in the view. To retrieve the variables for these items FindItem() and FindSectionItem() would only find the first item with this name after the section header. In this case it is more useful to use the function FindSectionHeader() for the section with the items of interest and then iterate through every item of this section with GetNextItem(), perhaps building a linked list or an array of objects for these items, which is then used to access the objects within the application. One way to write back changed settings for the objects (added/removed objects, changed position) is to simply remove all items from the section in the string list with the function EmptySection() and then iterate through the list or array of objects in the app and for each one build the item string and write it back with SetSection() or SetSectionItem(). There is a BOOL member variable in the CTxtVar, m_bChanged, that is set to TRUE whenever a string is added, removed or changed in the string list. This value is checked in the destructor of the class which, if TRUE, automatically updates the text file. This means that there is generally no need to write the stringlist back to the text file manually (with Flush()) unless another class needs to read the text file during the lifetime of the CTxtVar class. This is the case in the demo app since the text file itself is displayed in a rich edit control and updated whenever there has been a change in the string list of the CTxtVar class. Of course, if the app exits unnaturally the destructor Flush() may not work so it will be a good idea Flush() critical changes back to the text file right after they are changed. See the CTxtVar.cpp file for more information about member functions and variables. The CCfg class Whenever I do an MFC app I have a lot of variables and settings that should be saved between sessions. Call me old fashioned but I rather save these variables in a separate file in my app’s directory than in the registry for several reasons: It is easy to read and modify (if it is in text format), it is easy to clean up, the same app can have several settings by simply starting it in different directories, just to mention some. The variables used for configuration settings are all known at design time. The CCfg class associates every setting/variable (and section and item) with an ID which is used instead of the section name and item name to access it’s value. Furthermore, the variables can be of type string, numeric, limited numeric (limited by a max and min value) and BOOL. Every variable also has a default value which is used if the setting isn’t found in the text file (which it isn’t the first time the app starts or if the text file is deleted). If a value for a setting isn’t found in the text file the default value will be written to the text file. The CCfg class holds the information for every variable, item and section in a list of CCfgNode derived classes. These classes have information about the ID, the current value, the default value and a maximum and minimum value for the limited numeric type. The CCfg class uses the string list of the CTxtVar to load the CCfgNodes with current values for the settings. The CCfgNode list is written back to the text file in two steps, first the CCfgList is converted back to the string list in CTxtVar and then the base class CTxtVar writes back the string list to the text file. This isn’t done until the Flush() function is called which could be as late as in the destructor of the CCfg and CTxtVar classes. The CCfg class dynamically builds the CCfgNode list upon initialization from information in an array of structures typedefed as CFGDEF. The CFGDEF array is where the programmer specifies what section names, item names types of variables, default values and ID’s should be used for the settings. In order to keep things simple and not having to remember too much about how this CFGDEF structure works I have created a set of macros that does most of the job during compile time. Also, these macros automatically assigns the ID a unique number which makes it impossible to have two ID’s of the same number by mistake. How can this be done, you may ask, how can a macro both define a structure array and declare a constant with a different value at the same time? Well it can’t, but by putting the macros for the definition of the CFGDEF structure in the header file of the CCfg derived class (the user class) and redefining the macros and including this header file twice from the implementation file (.cpp file) of the CCfg derived class it can. The macros that generate the unique ID’s and builds the CFGDEF array can look something like this: BEGIN_CFGDEF(cfgdef) CFG_SECTION(CFGID_SETTINGS,”Settings”) CFG_WINDOWPOS(CFGID_INITWINDOWPOS,”MainWindowPos”, 100,100,620,510,0) CFG_FONT(CFGID_FONT,”Font”,”System”,12,FW_NORMAL,FALSE) CFG_ITEM(CFGID_CURRENTCOMBOLIST,1,”Currentcombolist”) CFG_STRING(CFGID_CURRENTCOMBOLIST_NAME,””) CFG_ITEM(CFGID_BOOL,1,”Bool”) CFG_BOOL(CFGID_BOOLVALUE,0) CFG_ITEM(CFGID_NUM,1,”Numeric”) CFG_NUM(CFGID_NUMVALUE,0) CFG_ITEM(CFGID_DIR,1,”Directory”) CFG_STRING(CFGID_DIRSTRING,””) END_CFGDEF The resulting text file with default values for this CFGDEF looks like this: [Settings] MainWindowPos 100,100,620,510,0 Font “System”,12,400,0 Currentcombolist “” Bool 0 Numeric 0 Directory “” The order of the items in the section may vary though. I have chosen to make the CCfg derived class global in order to make it accessible from all classes that has included the header file of the CCfg derived class. I think this is one case where a global variable is a good thing in a C++ application. This way it is easy for the objects that needs to access non volatile variables or configuration settings themselves without obtaining a pointer from another (global) class. The variables/settings can be read by the following functions: GetBool(int VariableId), GetString(int VariableId) and GetNum(int VariableId) or GetBool(int ItemId, int VariableIndex), GetString(int ItemId, int VariableIndex) and GetNum(int ItemId, int VariableIndex) where VariableIndex is the indexnumber for a variable in an Item. Index 0 is the first varaible. Use the following functions to write variables/settings: SetBool(int VariableId,BOOL Value), SetString(int VariableId,CString Value) and SetNum(int VariableId,long Value) or SetBool(int ItemId, int VariableIndex, BOOL Value), SetString(int ItemId, int VariableIndex, CString Value) and SetNum(int ItemId, int VariableIndex, CString Value). To check the integrity of the CFGDEF array the debug version fails ASSERT macros if something is wrong. This could be: A CFG_SECTION isn’t the first macro after BEGIN_CFGDEF or an item has more or less variables than specified. ASSERT also fails if the application is trying to use the wrong access function for a variable ID, for example trying to read a string variable with the GetNum() function. How the CFGDEF macros work When the header file (the declaration file) for the CCfg derived class is included from the cpp file (the implementation file) for the CCfg derived class the first time or whenever it is included from another class the macros generate a typedef of an enumerated variable that include all ID’s from the CFGDEF macros. This way the ID’s have been assigned unique consecutive numbers which is known by all classes that includes the declaration file. Before the header file is included a second time from the cpp file of the CCfg derived class it has defined an identifier to tell the header file that this time the macros should generate the implementation of the CFGDEF structure array. To accomplish this the macros are redefined. The definition and redefinition of the macros takes place in the header file for the CCfg class. How to use it In order to use the CCfg class you have to derive a class from it and put the macros for the CFGDEF structure array in the derived class’ header file. The only function needed is the constructor which has to generate the CfgNode list with a call to MakeCfgList(const CFGDEF *cfg_def). It may be easiest to use the following implementation and declaration files as a template and change the names as you want. The declaration file (header file) // CfgDem.h: interface for the CCfgDem class. // /////////////////////////////////////////////////////// #if !defined( \ AFX_CFGDEM_H__20008C44_F97E_11D5_8C08_B343B9E2DD77__INCLUDED_) #define \ AFX_CFGDEM_H__20008C44_F97E_11D5_8C08_B343B9E2DD77__INCLUDED_ #define __CFG_FIRST_RUN__ #endif #if defined(__CFG_FIRST_RUN__) #define __INCLUDE_CFGDEF__ #elif defined(__CFG_IMPLEMENTATION__) #define __INCLUDE_CFGDEF__ #endif #if defined(__INCLUDE_CFGDEF__) #include “Cfg.h” /* This file must be #included twice from the CCfg derived class, once to enumerate the ID’s (definition) and once to implement the array of CFGDEF structures (declaration). The CFG_… macros are redefined when __CFG_IMPLEMENTATION__ is defined. When this file is included from the CCfg derived class and __CFG_IMPLEMENTATOIN__ is defined the following macros generate the array of CFGDEF structures used to access the cfg file. When this file is included without __CFG_IMPLEMENTATION__ defined (the first time it is included in the CCfg derived class and whenever it is included in another cpp file) all ID’s are enumerated to generate unique numbers for all cfg ID’s. */ BEGIN_CFGDEF(cfgdef) // cfgdef is the name //for the CFGDEF structure // array – CFGDEF cfgdef[]={ // Put the rest of the CFGDEF macros here END_CFGDEF #endif #if defined(__CFG_FIRST_RUN__) class CCfgDem : public CCfg { public: CCfgDem(); virtual ~CCfgDem(); }; extern CCfgDem cfg; // Make the global cfg // variable visible to // other classes that // includes this .h file. #endif #undef __INCLUDE_CFGDEF__ #undef __CFG_FIRST_RUN__ #undef __CFG_IMPLEMENTATION__ Note that the #pragma once can not be used in this header file. Rename all instances of CCfgDem to whatever name you want the derived class to have. The implementation file (cpp file) // CfgDem.cpp: implementation of the CCfgDem class. // ////////////////////////////////////////////////// #include “stdafx.h” #include “CfgDemo.h” #include “CfgDem.h” #ifdef _DEBUG #undef THIS_FILE static char THIS_FILE[]=__FILE__; #define new DEBUG_NEW #endif #define __CFG_IMPLEMENTATION__ #include “CfgDem.h” // When __CFG_IMPLEMENTATION__ is // defined // CfgDem.h actually defines the CFGDEF // structure array CCfgDem cfg; // Global CCfgDem class. ///////////////////////////////////////////////////////// // Construction/Destruction ///////////////////////////////////////////////////////// CCfgDem::CCfgDem() { MakeCfgList(cfgdef); // Make the cfg node list from // the cfgdef struct array. } CCfgDem::~CCfgDem() { } Rename all instances of CCfgDem to whatever name you want the derived class to have. See the source files for more information about the CCfg class. About the demo app The view class together with some custom controls handles all the functionality of the CfgDemo app. The document class is not used at all. All controls on the left side is used to show and manipulate settings in the text file. The CRichEditCtrl on the right side is used to show the actual contents of the text file itself and is updated whenever the text file is changed. There are two combo boxes which are used to enter and display data of three lists. The first combo box is of type drop list which selects one of three lists. The other combo box is a drop down type used to select and enter data into the list selected by the first combo box. The selection of an item in the second combo doesn’t do anything in this demo, though. There is also a button to delete the currently selected list. The lists are updated and accessed in the text file by CTxtVar (base class of the CCfg class) functions that directly reads and writes to and from the CStringList representing the text file in the CTxtVar class. This is done to demonstrate that dynamic sections and items can be read and written to the same text file as CFGID linked sections and items. There is a read only edit box which displays a directory selection done with a SHBrowseForFolder dialog through a button next to the edit box. There is an edit box with a spin control used for entering a numeric value. There is a check box used for a BOOL value. There is a button which opens up a modal font selection dialog box to set the font for the CRichEditCtrl. The font setting is also stored in the text file. The app also uses a custom class derived from CFormView called CMyFormView as a base class for the view. This class automatically handles resizing and repositioning of controls on the form when the main frame is resized. The size, position and maximized/normal state is saved in the text file when the app closes and is restored when the app is loaded the next time. Use the source as you want. If you do – please send me a mail to tell me. Downloads Download demo project – 21 Kb Download source – 47 Kb More by Author Get the Free Newsletter! Subscribe to Developer Insider for top news, trends & analysis Must Read
__label__pos
0.537511
Plugin Guide Richard Wilbur richard.wilbur at gmail.com Tue Jun 28 00:30:13 UTC 2016 Now that the docutil monkey patch is working in the plugin guide conf.py, I have conquered the other build errors and warnings, except for the last 3. It turns out we have (minimal) documentation for three plugins that are not presently in the table of contents, id est, they are built but never referenced in the index.html so only available to those who know the name and path of the file: colocated-plugin.html, cvs-plugin.html, and mtn-plugin.html. It seems that colocated-plugin was likely a first-pass at colocated branches by Jelmer. The documentation is sparse. Did it pursue a different course than lp:bzr-colo or did it become lp:bzr-colo? Both the cvs-plugin and mtn-plugin state that they are there purely to inform the user that there is no native/builtin support for CVS working trees or Monotone branches. I guess the question is, "Do we add these to the table of contents in index.html, or write a little note telling why they exist and further, why we are not publishing them?" (The automatically generated subtitle in the table of contents would say: "cvs - Indicates that CVS formats are not supported", "mtn - Indicates that Monotone formats are not supported", and "colocated - Colocated branches support (aka git-style branches)".) Sincerely, Richard More information about the bazaar mailing list
__label__pos
0.705653
vite/vue mock 数据插件:vite-plugin-easy-mock vite-plugin-easy-mock 前言 开发项目时想要有个很容易mock本地数据的插件,只需要 vue.config.js 或 vite.config.js 中加载,然后按照规则即可使用mock数据,开启本地服务器则自动代理mock数据(可根据环境判断是否需要加载该插件),不需要开启额外的服务器 功能 拦截接口,重写返回,代理到本地 mock 数据 安装 yarn add vite-plugin-easy-mock --dev # or npm i vite-plugin-easy-mock --save-dev 使用 vite.config.js 中使用插件 import { defineConfig } from 'vite' import viteMock from 'vite-plugin-easy-mock' export default defineConfig({ plugins: [ viteMock() ] }) vue.config.js 中使用 const { useMiddleWare } = require('vite-plugin-easy-mock') module.exports = { devServer: { before (app) { // 使用mock中间件 app.use(useMiddleWare()) } }, } 根目录下新建 mock 文件夹,新建文件夹和 json 或者 js 文件 文件夹和文件名配合就能 mock 本地 /user/getAuthList 接口,json 和 js 写法如下: mock/user/getAuthList.json { "success": true, "desc": null, "data": [] } mock/user/getAuthList.js module.export = () => { return { success: true, desc: null, data: [] } } 说明 原理是对请求进行拦截,判断是否是ajax请求或者文件上传请求,如果是则走mock 如果能再本地mock文件夹中找到路径对应文件就会返回该文件里的json数据 如果找不到路径则返回{success: false, desc: '未找到mock路由'} 插件地址 源码地址 posted @ 2021-08-17 19:15  c-137Summer  阅读(795)  评论(0编辑  收藏  举报
__label__pos
0.568441
Getting started with Assetgraph When presented with the challenges of web performance optimization, or any other kind of manipulation of web sites and assets, it is helpful to have a good set of tools at your disposal. Assetgraph aims to be exactly this. A high level interface for your websites, yet still providing low level interfaces for individual assets. A toolkit that lets you build your own tools that fit very specificly to your individual needs. I have spoken at lengths about how Assetgraph distinguishes itself from other build tools by not just being another unix tool configuration wrapper. In the following I will assume you have already heard me sing Assetgraphs praises. If you haven’t, watch my talk from EmpireJS. Assetgraph is a node module and this post assumes that you are relatively comfortable writing and executing node scripts. By the end you should have learned enough about Assetgraph to get your hands dirty and write your own tools with it. If you want to see how easy it is to build tools that filter out unused files, inlines your images or rename files for optimal caching, you are in for a treat! If you are more into just consuming a well tested out-of-the-box build tool, take a look at assetgraph-builder or its grunt-wrapper grunt-reduce. Assetgraph Vocabulary Before we get started it’s useful to get some vocabulary straight. If you’re not easily confused you might want to skip to the part where you get your hands dirty. Like many bigger projects Assetgraph has some project specific vocabulary. We’ve tried not to be too magical about the terms we chose, so hopefully you’ll get what things are from their name. Sometimes the inherent properties that go with the names are non-obvious though. This is an attempt at an explanation. Asset An Asset in Assetgraph is a model of the contents of a file including its metadata. Assets have a bunch of base properties that are relevant to all asset types, like content-type, url, file name, file extension, loaded state, inlined or not. All Assets have a 1 rawSrc getter and setter, giving you direct access to the raw file behind the asset. They also have a bunch of convenience methods like 1 md5Hex , 1 clone and 1 replaceWith , along with a 1 populate method to parse and find outgoing relations in the source code of the asset. The most interesting things happen in the Asset constructors for more specific data types, like 1 Html or 1 JavaScript , where each Asset instance also has a highlevel instance of the Assets types data model. For HTML this is the DOM, modelled with jsdom. For JavaScript it’s the uglify-js AST. Using these highlevel interfaces you have the ability to manipulate each assets as you see fit, using familiar highlevel abstractions you would also find in the browser. You might want to take a look at the full list of already implemented Asset types. Relation A Relation in Assetgraph defines the edges of the graph. They bind the Assets together and define what depends on what, and from where. Relations not only keep track of which file needs what other file. They also keep track of where exactly the relation came from. Be it a Html Script node src attribute or a CSS background image url token. Relations automatically update references when Assets move around, making the dependency graph stable at all times without broken links. Relations have 1 type , 1 to , 1 from and 1 href properties that are highly relevant when querying the graph for them. There are a bunch of convenience functions available for more lowlevel graph manipulation, which I’ll skip for now as this is just an introduction. Here is the full list of Assetgraph Relations. Query The Assetgraph Query model is a MongoDB inspired model where you can combine some simple boolean AND, OR, NOT statements into a powerful expression to specificly target only the Assets or Relations you are interested in. Each query is a function that matches the properties of each Asset or Relation with the corresponding property or the object in the query. The query object can use strings, numbers, arrays, regexes, user defined functions and even recursive query objects to match the subject. Some examples: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 var query = require('assetgraph').query; var simple = query({ type: 'Html', isLoaded: true }); var boolean = query.or({ url: /^http:/ }, { fileName: function (fileName) { return fileName.length > 8; } }); var nested = query({ type: 'HtmlAnchor', from: { isInline: false }, to: { isImage: true } }); Most interactions with Assetgraph happen trough queries to the graph. It is recommended that you get to know the query model if you want to be an effective tool maker. Often times you’ll see simple query objects being passed into Assetgraph methods or Transforms without the 1 query() call. Assetgraph will automatically turn such objects into queries for your convenience. Take a look at the Assetgraph Query source code. Assetgraph The Assetgraph is the instance that ties all of the above together. This is where the Assets and Relations are stored and where you can use a Query to find them all again. There are a bunch of convenience methods for pre-order and post-order traversal and of course the most used ones 1 findAssets and 1 findRelations . Transform Assetgraph is cool, but always diving into the lowlevel code of specific Asset syntax trees becomes bothersome pretty quickly. Transforms are highlevel functions that wrap these lowlevel calls, manipulating the Assetgraph instance in more convenient ways. Assetgraph is extensible, so you can write your own highlevel transforms that fit your specific needs. Assetgraph already comes preloaded with a lot of very useful transforms, most of which are written with web performance optimization in mind, but don’t let yourself be limited by that! There are some fine descriptions of most of the available core transforms in the Assetgraph README. Transform Queue The Transform Queue is what lets you chain a number of transforms together to form a pipeline that applies multiple transforms in order. While all installed transforms are available directly on the Transform Queue and Assetgraph instance for you convenience, they all return a reference to the Transform Queue they are in, enabling you to easily chain Transforms. There are a few convenience methods on the Transform Queue, like 1 if , 1 endif and 1 queue (for quick inline transforms), the most important one is 1 run . If you don’t 1 .run() the Transform Queue, nothing will happen. Minimum Assetgraph Lifecycle While tools have great diversity, there will always be some common boilerplate that needs to be written in order to bootstrap them. The same goes for Assetgraph-based ones. This will be an explanation of how a bare minimum setup will look with Assetgraph. First, it’s important to remember that Assetgraph can only work with websites if their internal references are valid. This may sound like an obvious best practice, since that is the only way a website can actually be loaded in a browser. Sadly I need to point this out, as most existing web performance build chains actually set you up with non-working websites, that are then assembled by the tool or some configuration of a static file server. If you want to get the most out of Assetgraph, build your websites so they work in the browser with no magic in between. Incidentally this simplifies your workflow greatly and lessens the pain for front-end developers considerably, so I consider it best pratice. Now, let’s get started. Consider a website structure like this one: 1 2 3 4 5 6 7 8 9 10 app/ ├── css │   └── main.css ├── js │   ├── main.js │   └── jquery.js ├── images │   └── logo.png ├── index.html └── favicon.ico Your web application is in a directory called 1 app and you have som pretty basic scaffolding done already. A simple start for a simple introduction. Note that this is just an example. Assetgraph makes no assumptions about directory structure as it will infer it from your source code. We start out with creating a script that can load the website into an Assetgraph instance: 1 2 3 4 5 var Assetgraph = require('assetgraph'); var graph = new AssetGraph({ root: 'app' }); The above creates an Assetgraph instance, configuring it with 1 app as the web root. The root must be a string, and may be any valid 1 file:// , 1 http:// , 1 https:// or 1 ftp:// url. If no protocol is specified, a location on local disc is assumed, and the path is resolved as you would expect on the command line. An Assetgraph instance in itself, without any data, is quite useless. So next up we’re interested in actually loading data from our website into the graph. We can do this using the 1 loadAssets transform. Loading your 1 index.html into the graph can be done like so: 1 graph.loadAssets('index.html'); The 1 loadAssets transform takes a range of inputs to make your life easier. The most useful to you now will be the string or array of strings. Each string, just like before, may be a full url or a protocol relative url in the previously described schemes. All relations in the graph will use the Assetgraph root to resolve paths, not the file system root. If you want to get more advanced with the 1 loadAssets transform it might be useful to consult the source code for now. Before we can run the script there is one more piece of boilerplate code that needs to be added. What we are doing when calling Assetgraph transforms with configuration parameters, is actually not executing them right away. Instead, we are appending them to a transform queue, which is what is returned from the transform call. To make this explicit in this example we save the return value in a new variable: 1 var queue = graph.loadAssets('index.html'); All transforms in the queue are run in the queue scope and will return the queue, making transforms chainable. All transforms will have the assetgraph instance passed to them as the first parameter. The 1 loadAssets call you just added to your script, won’t actually be run before the transform queue has been started. We do this using the 1 run method: 1 queue.run(); This implementation detail is a bit counter intuitive and can bite you later, so make a note of it now and I will make a note on improving the API. If it hasn’t changed by September 2014 you are hereby mandated to bug me about it on Github. You can now run your script, and 1 index.html will be loaded into the graph model. However this is not terribly exciting yet, since all that happens is reading a file into memory and not logging any output. So let’s make it a bit more exciting by adding some logging to the console. Logging and Debugging Setting up logging is done on the Assetgraph instance, meaning it goes before the transform queue is run. Your script now looks like this: 1 2 3 4 5 6 7 8 9 10 11 12 var Assetgraph = require('assetgraph'); var graph = new AssetGraph({ root: 'app' }); graph.on('addAsset', function (asset) { console.log('addAsset', asset.toString()); }); var queue = graph.loadAssets('index.html'); queue.run(); We’re hooking into the Assetgraph emitted event 1 addAsset and logging some basic information about the event and the asset that was added to the graph. Try running your script now, and you should actually see some output in your console. There are more events you can hook into, to get some more insight into the internals of Assetgraph: 1 addAsset , 1 removeAsset , 1 addRelation , 1 removeRelation , 1 beforeTransform , 1 afterTransform . Furthermore there are some different error levels that are especially useful to hook into in order to get some more useful information for debugging your code: 1 info , 1 warn , 1 error . These last ones are conditions of increasing severity. 1 info is usually used when Assetgraph sees a potential error situation in your web application code, but a fix has already been applied. This could be trying to bundle scripts where one or more of them are leaking strict mode to the global scope, or exceeding the IE max style sheet rules number. Don’t worry too much about 1 info events. If you see 1 warn events you should take note, as these usually describe problems in your web application that have to be fixed by you. Things like parse errors or missing files that would cause a 404 response in production etc. The 1 error event is the most severe. This is usually only emitted when a completely non-recoverable error has been encountered, or a library throws unexpectedly. It usually makes sense to just stop running your script if you hit this one. It’s also likely that when you get 1 error level events that we’d like to hear about it, as it might indicate a missing error handling in Assetgraph. Please report these to us. Let’s spice it up with one more logging detail, writing some stats about the assets contained in the graph to 1 stderror : 1 queue.writeStatsToStderror(); Populating the Dependency Graph We’ve now arrived at the core functionality of Assetgraph. The arguably most powerful functionality is the ability to automatically and recursively traverse the dependencies of loaded assets. This, as they say, is where the magic happens, and what enables you to work with your web assets in their entire context without having to define large manifest files telling your tool what files you want included. We are using the 1 populate transform. The transform can be configured in a multitude of ways, in order to describe how to traverse the dependency graph, what to load, and more importantly, what not to load. Think of this as a web scraper. Let it scrape everything and you might end up copying the internet, so use with care. Available options are: 1 2 3 4 5 6 7 { followRelations: <Assetgraph.query>, startAssets: <Assetgraph.query>, from: <Assetgraph.query>, // Alias for 'startAssets' stopAssets: <Assetgraph.query>, concurrency: <int> // Number of concurrent requests. >= 1 } It should be obvious by now that it is useful to get to know the query syntax. For now I’ll assume that we are working with a web site on local disc and that we are only interested in loading assets into Assetgraph that are local as well. So we want to configure the 1 populate transform to only follow urls that are relative or root relative, while excluding the ones that are absolute or protocol relative (ie. on a different domain). We can do this like so: 1 2 3 4 5 queue.populate({ followRelations: { hrefType: ['relative', 'rootRelative'] } }) This makes your final bootstrap script look like this: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 var AssetGraph = require('assetgraph'); var graph = new AssetGraph({ root: 'build' }); graph.on('addAsset', function (asset) { console.log('addAsset', asset.toString()); }); var queue = graph.loadAssets('index.html'); queue.populate({ followRelations: { hrefType: ['relative', 'rootRelative'] } }); // Put further transforms here queue.writeStatsToStderr(); queue.run(); Congratulations, You have now successfully loaded the entirety of your local files, which you are referring to in your source code, into Assetgraph. You are now bootstrapped with all you need in order to work with your assets. Normally a starter guide would end here, but I’ll throw in some quick examples that might be of use to you, just so you get some ideas of what is possible. Writing files to disc Reading files from disc to memory is fun, but it’s even more fun writing them back to disc. Assetgraph lets you rework your dependencies and assets in memory in the transform queue, but the only way you gain anything from that is by actually saving the changes. To write your files back to disc we’ll use the 1 writeAssetsToDisc transform. The first argument is a query object, which we’ll make pretty broad, only limiting it to assets that have been loaded (trying to write unloaded assets to disc won’t work anyway). The second argument is the root directory to write the files to. You can leave it blank, which will fall back to the Assetgraph instance root, meaning you are overwriting the files in their existing location. Might be useful, but normally you want to separate your source files from your output files. We’re setting a new outRoot 1 demo . 1 2 3 queue.writeAssetsToDisc({ isLoaded: true }, 'demo'); Congratulations, you have now copied all your referenced assets from one directory to another. While you think this might as well have been accomplished with 1 cp -r app demo , the important point to note is your referenced assets. If you haven’t somehow included a file on your website, it won’t be in 1 demo . Imagine how many unreferenced files get put into production every day by people forgetting about them. Even worse, imagine a bower component has a malicious php file with a rights escalation somewhere in a test directory, and you just copied it to your production server. So see this as a useful step to only get the essentials on your production site. If something is missing now it’s because you didn’t refer to it. This could easily happen with error pages, favicon.ico, robots.txt or similar. If you want unreferenced files to explicitly be a part of the graph, make sure to include them in the 1 loadAssets transform. Inlining small images Base64 encoding and inlining images. A tedious and stupid workflow. In development you want to work with your raw images to make it easier to maintain and debug, while in production you want to reduce http requests by bundling or inlining. Automation is the way and Assetgraph can help. We’ll limit ourselves to only images that are CSS backgrounds, as inlining content images requires some more specific knowledge of the context. I choose a completely arbitrary number for the file size of images to inline: 4096 bytes. Feel free to experiment on both accounts. We’re using the 1 inlineRelations transform, which is dead simple. The only thing that is happening here is just a more complex query than I’ve shown before. 1 2 3 4 5 6 7 8 9 10 queue.inlineRelations({ type: 'CssImage', to: { isLoaded: true, isInline: false, rawSrc: function (rawSrc) { return rawSrc.length < 4096; } } }); File revving File revving is a fancy word for revisioning files for optimal caching in the visitor’s browser. The optimal way to serve static assets is with a far future cache expiry, since the fastest request is no request at all. The optimal revisioning strategy is including a hash of the file content in the file name. If the file hasn’t changed, the hash isn’t changed, giving you the ability to optimally use the cache of the visitor’s browser for unchanged files and only load the ones that have changed since the last visit. The easiest way to set up cache headers for static assets is to put them all in the same directory, where the web server will append the correct cache header to the HTTP response. If none of this made sense, then I highly encourage you to read up on optimal asset delivery for browsers. Your users will thank you. This is by far the most complex example, but I want to show it because this is a place where Assetgraph shines compared to other build tools that do not have a dependency graph model at their core. Our strategy for renaming the files in the right order will be post order traversal, renaming leaf nodes in the graph before branch nodes to assure that the hash we calculate is actually based on the correct file contents including hashed references to decendants. First we need to craft a query that will only rename the files we want renamed. Some files might be static, but we still want them to keep their original url and take part in a different caching scheme, designed for rapid updates. Think of HTML pages, RSS feeds etc. I have come up with this query combination to target only the files that are safe to rename: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 var query = AssetGraph.query; var moveQuery = query.and( // Find all loaded an non-inlined assets // Except ones of the defined types and fileNames { isLoaded: true, isInline: false, type: query.not([ 'CacheManifest', 'Rss' ]), fileName: query.not([ '.htaccess', 'humans.txt', 'robots.txt', 'favicon.ico' ]) }, // Exclude HTML-files that are linked to query.not({ type: 'Html', incomingRelations: function (relations) { return relations.some(function (rel) { return rel.type === 'HtmlAnchor'; }); } }), query.or( // Exclude initial assets from renaming query.not({ isInitial: true }), // Include external HTML templates { type: 'Html', isFragment: true } ) ); The above is a distillation of a few years of iteration to try and define best practice for the most common use cases. I’d love to go into depth on this, but that’s certainly not fit for a beginners guide. Copy paste this for now and return when you are more comfortable with your understanding of the graph model and the query model. Now all that is left is to run the 1 moveAssetsInOrder transform, which does our post order traversal for us. It takes a query as the first argument and a function as the second. The function is called once per asset and the expected return value is the new file name of the asset. We’re moving all revved files into 1 /static and appending the first 10 chars of the hash of the file to the file name. 1 2 3 4 5 queue.moveAssetsInOrder(moveQuery, function (asset) { var targetUrl = '/static/'; return targetUrl + asset.fileName.split('.').shift() + '.' + asset.md5Hex.substr(0, 10) + asset.extension; }); Seems pretty easy right? Just like copying. Except, not. This is where the graph model comes in handy. When moving files by giving them a new url, all their incoming relations automatically get updated. So all the places where the old file name was referenced before are now correctly pointing at the new location. If you’ve ever tried to do this with unix tools, out of context of the website as a whole, you will know what a difficult feature this is. But here you have it, implemented in very few lines of code. Now all that is left to do is configure your web server to serve all files from 1 /static with far future expires cache headers. Look up how to in your relevant server manual or on StackOverflow. Summing up You’ve hopefully learned a bit more about Assetgraph now and are ready to get your hands dirty and try out new stuff on your own. At the very least I hope you’ve gained an understanding of the strengths and weaknesses of the paradigm and the toolset, so I am looking forward to getting grilled with relevant questions here, on Twitter or Github or on a conference we both attend over a beer ;) I’m always asked for comparisons with other popular tools, like Grunt, Gulp, Broccoli or similar. Assetgraph is not one to one comparable, as is primarily focuses on fully functional references, while the other tools primarily deal with files. This enables the other tools to do whatever, while Assetgraph needs your page to actually work before you can unlock the full potential. This makes Assetgraph wel suited as a post processing tool that you apply to the already assembled page. If you use one of the other tools to achieve this assembly is up to you. I’m also often asked about run time speed. Assetgraph is generally faster than Grunt, due to the limited file I/O. Assetgraph is generally slower than Gulp, since Gulp has you predefine your files and can run pipes in parallel while Assetgraph has to discover the files incrementally and runs transforms sequentially. If you wish to use Assetgraph for web performance optimization it is my recommendation to make it a part of your deployment step, not your development loop. Web performance optimization is about transforming code to be machine optimized, while your development loop is about optimizing for humans. When you move these concerns out of the development loop you will see that your development speeds up, and the time it takes to run a build suddenly is of much lesser importance. This is the final compilation of all examples, now prettified a bit: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 var AssetGraph = require('assetgraph'); var query = AssetGraph.query; var moveQuery = query.and( { isLoaded: true, isInline: false, type: query.not([ 'CacheManifest', 'Rss' ]), fileName: query.not([ '.htaccess', 'humans.txt', 'robots.txt', 'favicon.ico' ]) }, query.not({ type: 'Html', incomingRelations: function (relations) { return relations.some(function (rel) { return rel.type === 'HtmlAnchor'; }); } }), query.or( query.not({ isInitial: true }), { type: 'Html', isFragment: true } ) ); var graph = new AssetGraph({ root: 'build' }); graph.on('addAsset', function (asset) { console.log('addAsset', asset.toString()); }) .loadAssets('index.html'); .populate({ followRelations: { hrefType: ['relative', 'rootRelative'] } }) .inlineRelations({ type: 'CssImage', to: { isLoaded: true, isInline: false, rawSrc: function (rawSrc) { return rawSrc.length < 4096; } } }) .moveAssetsInOrder(moveQuery, function (asset) { var targetUrl = '/static/'; return targetUrl + asset.fileName.split('.').shift() + '.' + asset.md5Hex.substr(0, 10) + asset.extension; }) .writeAssetsToDisc({ isLoaded: true }, 'demo') .writeStatsToStderr() .run(); I hope you found this introduction useful and now have some inspiration to get started with your own tools. I bet you can come up with some amazing ideas I’ve never thought about.
__label__pos
0.737791
Current time:0:00Total duration:2:53 0 energy points Video transcript We have 7 times 10 to the fifth over 2 times 10 to the negative 2 times 2.5 times 10 to the ninth. So let's try to simplify this a little bit. And I'll start off by trying to simplify this denominator here. So the numerator's just 7 times 10 to the fifth. And the denominator, I just have a bunch of numbers that are being multiplied times each other. So I can do it in any order. So let me swap the order. So I'm going to do over 2 times 2.5 times 10 to the negative 2 times 10 to the ninth. And this is going to be equal to-- so the numerator I haven't changed yet-- 7 times 10 to the fifth over-- and here in the denominator, 2 times-- let me do this in a new color now. 2 times 2.5 is 5. And then 10 to the negative 2 times 10 to the ninth, when you multiply two numbers that are being raised to exponents and have the exact same base-- so it's 10 to the negative 2 times 10 to the negative 9-- we can add the exponents. So this is going to be 10 to the 9 minus 2, or 10 to the seventh. So times 10 to the seventh. And now we can view this as being equal to 7 over 5 times 10 to the fifth over 10 to the seventh. Let me do that in that orange color to keep track of the colors. 10 to the seventh. Now, what is 7 divided by 5? 7 divided by 5 is equal to-- let's see, it's 1 and 2/5, or 1.4. So I'll just write it as 1.4. And then 10 to the fifth divided by 10 to the seventh. So that's going to be the same thing as-- and there's two ways to view this. You could view this as 10 to the fifth times 10 to the negative 7. You add the exponents. You get 10 to the negative 2. Or you say, hey, look, I'm dividing this by this. We have the same base. We can subtract exponents. So it's going to be 10 to the 5 minus 7, which is 10 to the negative 2. So this part right over here is going to simplify to times 10 to the negative 2. Now, are we done? Have we written what we have here in scientific notation? It looks like we have. This value right over here is greater than or equal to 1, but it is less than or equal to 9. It's a digit between 1 and 9, including 1 and 9. And it's being multiplied by 10 to some power. So it looks like we're done. This simplified to 1.4 times 10 to the negative 2.
__label__pos
0.843305
Math You Can Zoom In On A Fractal Forever Even if you're don't know what a fractal is, you've definitely seen one. Fractals are everywhere, from the spirals of galaxies to the endlessly branching sprouts in a floret of broccoli. They're even in your own body. It was only a century ago that mathematicians started describing them, but once they did, it explained a whole lot about the world. A zoom sequence illustrating the set of complex numbers termed the Mandelbrot set. A Shape In A Shape In A Shape A fractal is, at its simplest, an infinite pattern. It doesn't take any special technology to create; in fact, you can make one yourself. Try drawing a triangle, then drawing an upside-down triangle within it. At each of the corners around that new triangle, draw an upside-down triangle. Keep doing that until you have triangles within triangles within triangles that are so small, you can't draw anymore. (That just so happens to be a Sierpinski sieve.) If you had infinite vision and an infinitely small pen, you could keep drawing triangles forever. That's the point of a fractal: it looks the same, or at least similar, no matter how much you zoom in. Despite the fact that a grade schooler can draw one, fractals didn't even get a name until 1975, when mathematician Benoît Mandelbrot coined the term for these seemingly "fractured" shapes. In 1979, he began creating pictures of what came to be called "the most complex object in mathematics," the Mandelbrot set, by using a computer to map out iterative functions — that is, math formulas that always plug the last answer into the next formula. Iterative functions start out remarkably simple, but can quickly balloon into a kaleidoscope of complexity. That means that computers are key. Fractal-like shapes had been described as far back as 1904, but it took computer power to show their infinite characteristics. Sierpinski colored tetrahedron. Fractals All The Way Down Although fractals took a while to be accepted as serious mathematics — in the 1970s, computers weren't fully accepted as serious mathematical tools — it's hard to overstate their importance. That's because most things in nature take on this infinitely complex shape. As Mandelbrot himself wrote, "Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line." You have fractals in your blood vessels, your nerves, and the alveoli in your lungs. Lightning, rivers, seashells, coastlines, hurricanes, and galaxies are all versions of fractals. Nature is frugal, and fractals are efficient: it's less work to use the same pattern over and over than to plan out an entire structure ahead of time. But fractals aren't just a way to describe nature; they're also a key part of technology. Antennas, computer circuits, even cities work on fractal geometry. With fractals, a few simple rules can create infinite complexity, and that makes them infinitely useful. If you'd like to learn more about fractals, check out "The Fractal Geometry of Nature" by Benoit Mandelbrot. We handpick reading recommendations we think you may like. If you choose to make a purchase through that link, Curiosity will get a share of the sale. Benoît Mandelbrot, The Father of Fractals Written by Curiosity Staff July 28, 2017 Curiosity uses cookies to improve site performance, for analytics and for advertising. By continuing to use our site, you accept our use of cookies, our Privacy Policy and Terms of Use.
__label__pos
0.870544
React and React Native - Third Edition 4 (2 reviews total) By Adam Boduch , Roy Derks What do you get with a Packt Subscription? • Instant access to this title and 7,500+ eBooks & Videos • Constantly updated with 100+ new titles each month • Breadth and depth in over 1,000+ technologies 1. Why React? About this book React and React Native, Facebook’s innovative User Interface (UI) libraries, are designed to help you build robust cross-platform web and mobile applications. This updated third edition is improved and updated to cover the latest version of React. The book particularly focuses on the latest developments in the React ecosystem, such as modern Hook implementations, code splitting using lazy components and Suspense, user interface framework components using Material-UI, and Apollo. In terms of React Native, the book has been updated to version 0.62 and demonstrates how to apply native UI components for your existing mobile apps using NativeBase. You will begin by learning about the essential building blocks of React components. Next, you’ll progress to working with higher-level functionalities in application development, before putting this knowledge to use by developing user interface components for the web and for native platforms. In the concluding chapters, you’ll learn how to bring your application together with a robust data architecture. By the end of this book, you’ll be able to build React applications for the web and React Native applications for multiple mobile platforms. Publication date: April 2020 Publisher Packt Pages 526 ISBN 9781839211140   Why React? If you're reading this book, you probably know what React is. If not, don't worry. I'll do my best to keep philosophical definitions to a minimum. However, this is a long book with a lot of content, so I feel that setting the tone is an appropriate first step. Yes, the goal is to learn React and React Native. But it's also to put together a lasting architecture that can handle everything we want to build with React today and in the future. This chapter starts with a brief explanation of why React exists. Then, we'll think about the simplicity of React and how React is able to handle many of the typical performance issues faced by web developers. Next, we'll go over the declarative philosophy of React and the level of abstraction that React programmers can expect to work with. Finally, we'll touch on some of the major features of React. Once you have a conceptual understanding of React and how it solves problems with UI development, you'll be better equipped to tackle the remainder of the book. This chapter will cover the following topics: • What is React? • React Features • What's new in React?   What is React? I think the one-line description of React on its home page (https://facebook.github.io/react) is concise and accurate: "A JavaScript library for building user interfaces." It's a library for building user interfaces (UIs). This is perfect because, as it turns out, this is all we want most of the time. I think the best part about this description is everything that it leaves out. It's not a mega framework. It's not a full-stack solution that's going to handle everything from the database to real-time updates over WebSocket connections. We might not actually want most of these prepackaged solutions. If React isn't a framework, then what is it exactly? React is just the view layer React is generally thought of as the view layer in an application. You might have used a library such as Handlebars or jQuery in the past. Just like jQuery manipulates UI elements and Handlebars templates are inserted into the page, React components change what the user sees. The following diagram illustrates where React fits in our frontend code: This is all there is to React—the core concept. Of course, there will be subtle variations to this theme as we make our way through the book, but the flow is more or less the same. We have some application logic that generates some Data. We want to render this Data to the UI, so we pass it to a React Component, which handles the job of getting the HTML into the page. You may wonder what the big deal is; React appears to be yet another rendering technology. We'll touch on some of the key areas where React can simplify application development in the remaining sections of the chapter. Simplicity is good React doesn't have many moving parts to learn about and understand. Internally, there's a lot going on, and we'll touch on these things throughout the book. The advantage of having a small API to work with is that you can spend more time familiarizing yourself with it, experimenting with it, and so on. The opposite is true of large frameworks, where all of your time is devoted to figuring out how everything works. The following diagram gives you a rough idea of the APIs that we have to think about when programming with React: React is divided into two major APIs: • The React Component API: These are the parts of the page that are actually rendered by React DOM. • React DOM: This is the API that's used to perform the actual rendering on a web page. Within a React component, we have the following areas to think about: • Data: This is data that comes from somewhere (the component doesn't care where), and is rendered by the component. • Lifecycle: This consists of methods or Hooks that we implement to respond to the component's entering and exiting phases of the React rendering process as they happen over time. For example, one phase of the lifecycle is when the component is about to be rendered. • Events: These are the code that we write for responding to user interactions. • JSX: This is the syntax of React components used to describe UI structures. Don't fixate on what these different areas of the React API represent just yet. The takeaway here is that React, by nature, is simple. Just look at how little there is to figure out! This means that we don't have to spend a ton of time going through API details here. Instead, once you pick up on the basics, we can spend more time on nuanced React usage patterns that fit in nicely with declarative UI structures. Declarative UI structures React newcomers have a hard time coming to grips with the idea that components mix markup in with their JavaScript in order to declare UI structures. If you've looked at React examples and had the same adverse reaction, don't worry. Initially, we're all skeptical of this approach, and I think the reason is that we've been conditioned for decades by the separation of concerns principle. This principle states that different concerns, such as logic and presentation, should be separate from one another. Now, whenever we see things mixed together, we automatically assume that this is bad and shouldn't happen. The syntax used by React components is called JSX (JavaScript XML). A component renders content by returning some JSX. The JSX itself is usually HTML markup, mixed with custom tags for React components. The specifics don't matter at this point; we'll go into detail in the coming chapters. What's groundbreaking about the declarative JSX approach is that we don't have to perform little micro-operations to change the content of a component. Although I won't be following the convention in this book, some React developers prefer the .jsx extension instead of .js for their components. For example, think about using something like jQuery to build your application. You have a page with some content on it, and you want to add a class to a paragraph when a button is clicked. Performing these steps is easy enough. This is called imperative programming, and it's problematic for UI development. While this example of changing the class of an element is simple, real applications tend to involve more than three or four steps to make something happen. React components don't require executing steps in an imperative way. This is why JSX is central to React components. The XML-style syntax makes it easy to describe what the UI should look like. That is, what are the HTML elements that this component is going to render? This is called declarative programming and is very well suited for UI development. Once you've declared your UI structure, you need to specify how it changes over time. Time and data Another area that's difficult for React newcomers to grasp is the idea that JSX is like a static string, representing a chunk of rendered output. This is where time and data come into play. React components rely on data being passed into them. This data represents the dynamic parts of the UI. For example, a UI element that's rendered based on a Boolean value could change the next time the component is rendered. Here's a diagram of the idea: Each time the React component is rendered, it's like taking a snapshot of the JSX at that exact moment in time. As your application moves forward through time, you have an ordered collection of rendered UI components. In addition to declaratively describing what a UI should be, re-rendering the same JSX content makes things much easier for developers. The challenge is making sure that React can handle the performance demands of this approach. Performance matters Using React to build UIs means that we can declare the structure of the UI with JSX. This is less error-prone than the imperative approach of assembling the UI piece by piece. However, the declarative approach does present a challenge: performance. For example, having a declarative UI structure is fine for the initial rendering, because there's nothing on the page yet. So, the React renderer can look at the structure declared in JSX and render it in the DOM browser. The Document Object Model (DOM) represents HTML in the browser after it has been rendered. The DOM API is how JavaScript is able to change content on the page. This concept is illustrated in the following diagram: On the initial render, React components and their JSX are no different from other template libraries. For instance, Handlebars will render a template to HTML markup as a string, which is then inserted into the browser DOM. Where React is different from libraries such as Handlebars is when data changes and we need to re-render the component. Handlebars will just rebuild the entire HTML string, the same way it did on the initial render. Since this is problematic for performance, we often end up implementing imperative workarounds that manually update tiny bits of the DOM. We end up with a tangled mess of declarative templates and imperative code to handle the dynamic aspects of the UI. We don't do this in React. This is what sets React apart from other view libraries. Components are declarative for the initial render, and they stay this way even as they're re-rendered. It's what React does under the hood that makes re-rendering declarative UI structures possible. React has something called the virtual DOM, which is used to keep a representation of the real DOM elements in memory. It does this so that each time we re-render a component, it can compare the new content to the content that's already displayed on the page. Based on the difference, the virtual DOM can execute the imperative steps necessary to make the changes. So, not only do we get to keep our declarative code when we need to update the UI, but React will also make sure that it's done in a performant way. Here's what this process looks like: When you read about React, you'll often see words such as diffing and patching. Diffing means comparing old content with new content to figure out what's changed. Patching means executing the necessary DOM operations to render the new content. Like any other JavaScript library, React is constrained by the run-to-completion nature of the main thread. For example, if the React internals are busy diffing content and patching the DOM, the browser can't respond to user input. As you'll see in the last section of this chapter, changes were made to the internal rendering algorithms in React 16 to mitigate these performance pitfalls. With performance concerns addressed, we need to make sure that we're confident that React is flexible enough to adapt to different platforms that we might want to deploy our apps to in the future. The right level of abstraction Another topic I want to cover at a high level before we dive into React code is abstraction. In the preceding section, you saw how JSX syntax translates to low-level operations that update our UI. A better way to look at how React translates our declarative UI components is via the fact that we don't necessarily care what the render target is. The render target happens to be the browser DOM with React, but it isn't restricted to the browser DOM. React has the potential to be used for any UI we want to create, on any conceivable device. We're only just starting to see this with React Native, but the possibilities are endless. I personally will not be surprised when React Toast becomes a thing, targeting toasters that can singe the rendered output of JSX onto bread. The abstraction level with React is at the right level, and it's in the right place. The following diagram gives you an idea of how React can target more than just the browser: From left to right, we have React Web (just plain React), React Native, React Desktop, and React Toast. As you can see, to target something new, the same pattern applies: • Implement components specific to the target. • Implement a React renderer that can perform the platform-specific operations under the hood. This is, obviously, an oversimplification of what's actually implemented for any given React environment. But the details aren't so important to us. What's important is that we can use our React knowledge to focus on describing the structure of our UI on any platform. React Toast will probably never be a thing, unfortunately. Now that you understand the role of abstractions in React, let's see what's new in React 16.   React Features The second edition of this book covers the major changes in React 16. I'm leaving this section intact for the third edition because I think the changes that were introduced in React 16 are still new and important enough to be relevant to learning React. The features of React 16 include the following: • Revamped core architecture • Lifecycle methods • Context API • Rendering fragments • Portals • Rendering lists and strings • Handling errors • Server-side rendering Let's look at each new feature in detail. Revamped core architecture Perhaps the biggest change in React 16 is the change made to the internal reconciliation code. These changes don't impact the way that you interact with the React API. Instead, these changes were made to address some pain points that were preventing React from scaling up in certain situations. For example, one of the main concepts of this new architecture is that of fibers. Instead of rendering every component on the page in a run-to-compilation way, React renders fibers—smaller chunks of the page that can be prioritized and rendered asynchronously. For a more in-depth look at this new architecture, these resources should be helpful: Lifecycle methods React 16 had to revamp some of the lifecycle methods that are available to class components. Some lifecycle methods are deprecated and will eventually be removed because they will be problematic for future async rendering functionality in React. For example, a common way to initialize state in a React component is to use the componentWillMount() lifecycle method. Once this method is removed from React, you can just set the initial state directly as an instance value. For more information on these lifecycle methods, visit https://reactjs.org/blog/2018/03/27/update-on-async-rendering.html. The Context API React has always provided a Context API for developers, but it was always considered experimental. Context is an alternative approach to passing data from one component to the next. For example, using properties, you can passing data through a tree of components that is several layers deep. The components in the middle of this tree don't actually use any of these properties—they're just acting as intermediaries. This becomes problematic as your application grows because you have lots of properties in your source that add to the complexity. The new Context API in React 16.3 is more stable than previous versions and provides a way for you to supply your components with data at any tree level. You can read more about the new Context API here: https://reactjs.org/docs/context.html. Rendering fragments If your React component renders several sibling elements, say three <p> elements, for instance, you would have to wrap them in <div> because React would only allow components to return a single element. The only problem with this approach is that it leads to a lot of unnecessary DOM structure. Wrapping your elements with <Fragment> is the same as wrapping them with <div>, except there won't be any superfluous DOM elements. You can read more about fragments here: https://reactjs.org/docs/fragments.html. Portals When a React component returns content, it gets rendered into its parent component. Then, that parent's content gets rendered into its parent component and so on, all the way to the tree root. There are times when you want to render something that specifically targets a DOM element. For example, a component that should be rendered as a dialog probably doesn't need to be mounted at the parent. Using a portal, you can control precisely where your component's content is rendered. You can read more about portals here: https://reactjs.org/docs/portals.html. Rendering lists and strings Prior to React 16, components had to return either an HTML element or another React component as its content. This can restrict how you compose your application. For example, you might have a component that is responsible for generating an error message. You used to have to wrap strings in HTML tags or map list items to HTML tags in order to be considered a valid React component output. Now you can just return the string. Similarly, you can just return a list of strings or a list of elements. This blog post introducing React 16 has more details on this new functionality: https://reactjs.org/blog/2017/09/26/react-v16.0.html. Handling errors Error handling in React can be difficult. Where exactly do you handle errors? If a component handles a JavaScript exception and sets an error state on the component to true, how do you reset this state? In React 16, there are error boundaries. Error boundaries are created by implementing the componentDidCatch() lifecycle method in a component. This component can then serve as the error boundary by wrapping other components. If any of the wrapped components throw an exception, the error boundary component can render alternative content. Having error boundaries in place like this allows you to structure your components in a way that best suits your application. You can read more about error boundaries here: https://reactjs.org/docs/error-boundaries.html. Server-side rendering Server-side rendering (SSR) in React can be difficult to wrap your head around. You're rendering on the server, then rendering on the client too? Since the SSR pattern has become more prevalent, the React team has made it easier to work within React 16. In addition, there are a number of internal performance gains as well as efficiency gains by enabling streaming rendered content to the client. If you want to read more about SSR in React 16, I recommend the following resources: However, in this book, the focus will be on using Next.js for SSR since it's so much easier than using a manual setup. Next.js is a simple framework for building React applications that handles many gory details related to routing and SSR. Now that you're familiar with the big changes that came with React 16, it's time to take a look at the cutting edge features available in the latest React release.   What's new in React? The third edition of this book includes React features that were introduced after version 16.6.0. In the following sections, I'll give you a brief introduction to the new functionality. Each feature will be covered in greater detail as you make your way through the book. For now, we will briefly look at the following: • Memoizing functional components • Cook splitting and loading • Hooks Let's start exploring them. Memoizing functional components The React.memo() function is the modern equivalent of the PureComponent class. Memoized components avoid re-rendering if the component data hasn't changed. In the past, you would extend your class component with PureComponent. This would automatically handle checking whether the component data has changed or not and whether or not the component should re-render. The challenge with this approach is that it is now common for large React applications to have a lot of functional components. Before React.memo(), there was no way to memorize components so that they could avoid re-rendering if no data changes happened. Now, you can pass your functional components to React.memo() and they'll behave like PureComponent. You can read more about React.memo() here: https://reactjs.org/docs/react-api.html#reactmemo. Code splitting and loading Prior to the React.lazy() function, code splitting in large React applications was cumbersome. Code splitting is important for large applications because it reduces the size of the code bundles that are sent to the browser, which can dramatically improve the user experience. Some features of an application might never be used, which means that the code that implements those features is never delivered to the browser. This is a huge efficiency gain. With the addition of React.lazy(), React acknowledges that code splitting and the user experience of waiting for pieces of the application to load are integral parts of the application, not an afterthought. By combining React.lazy() and the Suspense component, we get fine-grained control over how our app is split up and what happens while the user waits for it to load. You can read more about code splitting here: https://reactjs.org/docs/code-splitting.html. Hooks One of the most consequential new features of React is Hooks—functions that extend the behavior of functional React components. Hooks are used to "hook into" the React component machinery from your React components. Instead of relying on classes to build components that have state or that rely on executing side effects when the component is mounted, you can use the React Hooks API to pass functions that handle these cases. The end result is having more flexibility with how you're able to compose React components since functions are more easily shared between modules than component class methods are. Hooks are the future of how React components are assembled, which will have a big impact on the third edition of this book, where there's a new chapter devoted to Hooks, as well as updated code in all chapters from the second edition. You can read more about Hooks here: https://reactjs.org/docs/Hooks-intro.html.   Summary In this chapter, you were introduced to React at a high level. React is a library, with a small API, used to build UIs. Next, you were introduced to some of the key concepts of React. We discussed the fact that React is simple because it doesn't have a lot of moving parts. Next, we looked at the declarative nature of React components and JSX. Then, you learned that React takes performance seriously and that this is how we're able to write declarative code that can be re-rendered over and over. Next, you learned about the idea of render targets and how React can easily become the UI tool of choice for all of them. Lastly, I gave you a rough overview of what's new in React 16.x. That's enough introductory and conceptual stuff for now. As we make our way toward the end of the book, we'll revisit these ideas. For now, let's take a step back and nail down the basics, starting with JSX.   About the Authors • Adam Boduch Adam Boduch has been involved in large-scale JavaScript development for nearly 15 years. Before moving to the frontend, he worked on several large-scale cloud computing products using Python and Linux. No stranger to complexity, Adam has practical experience with real-world software systems and the scaling challenges they pose. Browse publications by this author • Roy Derks Roy Derks is a serial start-up CTO, international speaker, and author from the Netherlands. He has been working with React, React Native, and GraphQL since 2016. You might know him from the book “React Projects – Second Edition”, which was released by Packt earlier this year. Over the last few years, he has inspired tens of thousands of developers worldwide through his talks, books, workshops, and courses. Browse publications by this author Latest Reviews (2 reviews total) Aktueller Titel über REACT, endlich Livre clair et contient l'essentiel du sujet Recommended For You React and React Native - Third Edition Unlock this book and the full library FREE for 7 days Start now
__label__pos
0.836048
KickJava   Java API By Example, From Geeks To Geeks. Java > Open Source Codes > org > jedit > syntax > DefaultInputHandler 1 package org.jedit.syntax; 2 3 /* 4  * DefaultInputHandler.java - Default implementation of an input handler 5  * Copyright (C) 1999 Slava Pestov 6  * 7  * You may use and modify this package for any purpose. Redistribution is 8  * permitted, in both source and binary form, provided that this notice 9  * remains intact in all source distributions of this package. 10  */ 11 12 import javax.swing.KeyStroke JavaDoc; 13 import java.awt.event.*; 14 import java.awt.Toolkit JavaDoc; 15 import java.util.Hashtable JavaDoc; 16 import java.util.StringTokenizer JavaDoc; 17 18 /** 19  * The default input handler. It maps sequences of keystrokes into actions 20  * and inserts key typed events into the text area. 21  * @author Slava Pestov 22  * @version $Id: DefaultInputHandler.java,v 1.1 2003/12/14 16:29:49 daggerrz Exp $ 23  */ 24 public class DefaultInputHandler extends InputHandler 25 { 26    /** 27     * Creates a new input handler with no key bindings defined. 28     */ 29    public DefaultInputHandler() 30    { 31       bindings = currentBindings = new Hashtable JavaDoc(); 32    } 33 34    /** 35     * Sets up the default key bindings. 36     */ 37    public void addDefaultKeyBindings() 38    { 39       addKeyBinding("BACK_SPACE",BACKSPACE); 40       addKeyBinding("C+BACK_SPACE",BACKSPACE_WORD); 41       addKeyBinding("DELETE",DELETE); 42       addKeyBinding("C+DELETE",DELETE_WORD); 43 44       addKeyBinding("ENTER",INSERT_BREAK); 45       addKeyBinding("TAB",INSERT_TAB); 46 47       addKeyBinding("INSERT",OVERWRITE); 48       addKeyBinding("C+\\",TOGGLE_RECT); 49 50       addKeyBinding("HOME",HOME); 51       addKeyBinding("END",END); 52       addKeyBinding("S+HOME",SELECT_HOME); 53       addKeyBinding("S+END",SELECT_END); 54       addKeyBinding("C+HOME",DOCUMENT_HOME); 55       addKeyBinding("C+END",DOCUMENT_END); 56       addKeyBinding("CS+HOME",SELECT_DOC_HOME); 57       addKeyBinding("CS+END",SELECT_DOC_END); 58 59       addKeyBinding("PAGE_UP",PREV_PAGE); 60       addKeyBinding("PAGE_DOWN",NEXT_PAGE); 61       addKeyBinding("S+PAGE_UP",SELECT_PREV_PAGE); 62       addKeyBinding("S+PAGE_DOWN",SELECT_NEXT_PAGE); 63 64       addKeyBinding("LEFT",PREV_CHAR); 65       addKeyBinding("S+LEFT",SELECT_PREV_CHAR); 66       addKeyBinding("C+LEFT",PREV_WORD); 67       addKeyBinding("CS+LEFT",SELECT_PREV_WORD); 68       addKeyBinding("RIGHT",NEXT_CHAR); 69       addKeyBinding("S+RIGHT",SELECT_NEXT_CHAR); 70       addKeyBinding("C+RIGHT",NEXT_WORD); 71       addKeyBinding("CS+RIGHT",SELECT_NEXT_WORD); 72       addKeyBinding("UP",PREV_LINE); 73       addKeyBinding("S+UP",SELECT_PREV_LINE); 74       addKeyBinding("DOWN",NEXT_LINE); 75       addKeyBinding("S+DOWN",SELECT_NEXT_LINE); 76 77       addKeyBinding("C+ENTER",REPEAT); 78    } 79 80    /** 81     * Adds a key binding to this input handler. The key binding is 82     * a list of white space separated key strokes of the form 83     * <i>[modifiers+]key</i> where modifier is C for Control, A for Alt, 84     * or S for Shift, and key is either a character (a-z) or a field 85     * name in the KeyEvent class prefixed with VK_ (e.g., BACK_SPACE) 86     * @param keyBinding The key binding 87     * @param action The action 88     */ 89    public void addKeyBinding(String JavaDoc keyBinding, ActionListener action) 90    { 91            Hashtable JavaDoc current = bindings; 92 93       StringTokenizer JavaDoc st = new StringTokenizer JavaDoc(keyBinding); 94       while(st.hasMoreTokens()) 95       { 96          KeyStroke JavaDoc keyStroke = parseKeyStroke(st.nextToken()); 97          if(keyStroke == null) 98             return; 99 100          if(st.hasMoreTokens()) 101          { 102             Object JavaDoc o = current.get(keyStroke); 103             if(o instanceof Hashtable JavaDoc) 104                current = (Hashtable JavaDoc)o; 105             else 106             { 107                o = new Hashtable JavaDoc(); 108                current.put(keyStroke,o); 109                current = (Hashtable JavaDoc)o; 110             } 111          } 112          else 113             current.put(keyStroke,action); 114       } 115    } 116 117    /** 118     * Removes a key binding from this input handler. This is not yet 119     * implemented. 120     * @param keyBinding The key binding 121     */ 122    public void removeKeyBinding(String JavaDoc keyBinding) 123    { 124       throw new InternalError JavaDoc("Not yet implemented"); 125    } 126 127    /** 128     * Removes all key bindings from this input handler. 129     */ 130    public void removeAllKeyBindings() 131    { 132       bindings.clear(); 133    } 134 135    /** 136     * Returns a copy of this input handler that shares the same 137     * key bindings. Setting key bindings in the copy will also 138     * set them in the original. 139     */ 140    public InputHandler copy() 141    { 142       return new DefaultInputHandler(this); 143    } 144 145    /** 146     * Handle a key pressed event. This will look up the binding for 147     * the key stroke and execute it. 148     */ 149    public void keyPressed(KeyEvent evt) 150    { 151       int keyCode = evt.getKeyCode(); 152       int modifiers = evt.getModifiers(); 153 154       if(keyCode == KeyEvent.VK_CONTROL || 155          keyCode == KeyEvent.VK_SHIFT || 156          keyCode == KeyEvent.VK_ALT || 157          keyCode == KeyEvent.VK_META) 158          return; 159 160       if((modifiers & ~KeyEvent.SHIFT_MASK) != 0 161          || evt.isActionKey() 162          || keyCode == KeyEvent.VK_BACK_SPACE 163          || keyCode == KeyEvent.VK_DELETE 164          || keyCode == KeyEvent.VK_ENTER 165          || keyCode == KeyEvent.VK_TAB 166          || keyCode == KeyEvent.VK_ESCAPE) 167       { 168          if(grabAction != null) 169          { 170             handleGrabAction(evt); 171             return; 172          } 173 174          KeyStroke JavaDoc keyStroke = KeyStroke.getKeyStroke(keyCode, 175             modifiers); 176          Object JavaDoc o = currentBindings.get(keyStroke); 177          if(o == null) 178          { 179             // Don't beep if the user presses some 180 // key we don't know about unless a 181 // prefix is active. Otherwise it will 182 // beep when caps lock is pressed, etc. 183 if(currentBindings != bindings) 184             { 185                Toolkit.getDefaultToolkit().beep(); 186                // F10 should be passed on, but C+e F10 187 // shouldn't 188 repeatCount = 0; 189                repeat = false; 190                evt.consume(); 191             } 192             currentBindings = bindings; 193             return; 194          } 195          else if(o instanceof ActionListener) 196          { 197             currentBindings = bindings; 198 199             executeAction(((ActionListener)o), 200                evt.getSource(),null); 201 202             evt.consume(); 203             return; 204          } 205          else if(o instanceof Hashtable JavaDoc) 206          { 207             currentBindings = (Hashtable JavaDoc)o; 208             evt.consume(); 209             return; 210          } 211       } 212    } 213 214    /** 215     * Handle a key typed event. This inserts the key into the text area. 216     */ 217    public void keyTyped(KeyEvent evt) 218    { 219       int modifiers = evt.getModifiers(); 220       char c = evt.getKeyChar(); 221       if(c != KeyEvent.CHAR_UNDEFINED && 222          (modifiers & KeyEvent.ALT_MASK) == 0) 223       { 224          if(c >= 0x20 && c != 0x7f) 225          { 226             KeyStroke JavaDoc keyStroke = KeyStroke.getKeyStroke( 227                Character.toUpperCase(c)); 228             Object JavaDoc o = currentBindings.get(keyStroke); 229 230             if(o instanceof Hashtable JavaDoc) 231             { 232                currentBindings = (Hashtable JavaDoc)o; 233                return; 234             } 235             else if(o instanceof ActionListener) 236             { 237                currentBindings = bindings; 238                executeAction((ActionListener)o, 239                   evt.getSource(), 240                   String.valueOf(c)); 241                return; 242             } 243 244             currentBindings = bindings; 245 246             if(grabAction != null) 247             { 248                handleGrabAction(evt); 249                return; 250             } 251 252             // 0-9 adds another 'digit' to the repeat number 253 if(repeat && Character.isDigit(c)) 254             { 255                repeatCount *= 10; 256                repeatCount += (c - '0'); 257                return; 258             } 259 260             executeAction(INSERT_CHAR,evt.getSource(), 261                String.valueOf(evt.getKeyChar())); 262 263             repeatCount = 0; 264             repeat = false; 265          } 266       } 267    } 268 269    /** 270     * Converts a string to a keystroke. The string should be of the 271     * form <i>modifiers</i>+<i>shortcut</i> where <i>modifiers</i> 272     * is any combination of A for Alt, C for Control, S for Shift 273     * or M for Meta, and <i>shortcut</i> is either a single character, 274     * or a keycode name from the <code>KeyEvent</code> class, without 275     * the <code>VK_</code> prefix. 276     * @param keyStroke A string description of the key stroke 277     */ 278    public static KeyStroke JavaDoc parseKeyStroke(String JavaDoc keyStroke) 279    { 280       if(keyStroke == null) 281          return null; 282       int modifiers = 0; 283       int index = keyStroke.indexOf('+'); 284       if(index != -1) 285       { 286          for(int i = 0; i < index; i++) 287          { 288             switch(Character.toUpperCase(keyStroke 289                .charAt(i))) 290             { 291             case 'A': 292                modifiers |= InputEvent.ALT_MASK; 293                break; 294             case 'C': 295                modifiers |= InputEvent.CTRL_MASK; 296                break; 297             case 'M': 298                modifiers |= InputEvent.META_MASK; 299                break; 300             case 'S': 301                modifiers |= InputEvent.SHIFT_MASK; 302                break; 303             } 304          } 305       } 306       String JavaDoc key = keyStroke.substring(index + 1); 307       if(key.length() == 1) 308       { 309          char ch = Character.toUpperCase(key.charAt(0)); 310          if(modifiers == 0) 311             return KeyStroke.getKeyStroke(ch); 312          else 313             return KeyStroke.getKeyStroke(ch,modifiers); 314       } 315       else if(key.length() == 0) 316       { 317          System.err.println("Invalid key stroke: " + keyStroke); 318          return null; 319       } 320       else 321       { 322          int ch; 323 324          try 325          { 326             ch = KeyEvent.class.getField("VK_".concat(key)) 327                .getInt(null); 328          } 329          catch(Exception JavaDoc e) 330          { 331             System.err.println("Invalid key stroke: " 332                + keyStroke); 333             return null; 334          } 335 336          return KeyStroke.getKeyStroke(ch,modifiers); 337       } 338    } 339 340    // private members 341 private Hashtable JavaDoc bindings; 342    private Hashtable JavaDoc currentBindings; 343 344    private DefaultInputHandler(DefaultInputHandler copy) 345    { 346       bindings = currentBindings = copy.bindings; 347    } 348 } 349 Popular Tags
__label__pos
0.993147
Question Why is cilium given 30% of a nodes CPU on a Kubernets cluster? Posted June 3, 2019 522 views Debian Kubernetes I noticed that the cilium pods are generously provisioned on a DO k8s cluster. It seems a little excessive provision 30% of the CPU with no upper limit. Given it’s a Go project, and they’re usually quite performant, this reasonable? kube-system cilium-crswr 300m (30%) 0 (0%) 0 (0%) 0 (0%) 10d These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues leave a comment, or add your own answer to help others. 1 answer Hi there, With cilium usage like this can be expected. Cilium provides the software defined network for our DOKS clusters. The reason we dont put an upper limit on cilium is because if the cilium pod goes down all workloads running on that node will lose network connectivity. Therefore capping the cilium pod’s resources at a limit would cause kubenetes to kill the cilium pod if it tried to take more than its limit causing an outage anyway. It’s for this reason that cilium does not have a cap as its a crucial infrastructure component that is in our customers best interest to give it the resources it needs to maintain a stable cluster. Regards, John Kwiatkoski Senior Developer Support Engineer Submit an Answer
__label__pos
0.99883
summaryrefslogtreecommitdiff path: root/src/iostream.rs blob: 22d5ef0cd8d6660a88b3c9da847902c4f68dd84e (plain) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 use mio::{Ready,Token,Event}; use mio::tcp::TcpStream; use std::io::{Result,ErrorKind,Error}; use std::time::Duration; use netbuf::Buf; use eventloop::{Context,TToken}; pub struct IoStream { pub sock: TcpStream, pub rbuf: Buf, pub wbuf: Buf, io: Token, ioreg: Ready, rtime: TToken, wtime: TToken, rtimeout: Duration, wtimeout: Duration, rmax: usize, rexpect: bool, } impl IoStream { pub fn new(ctx: &mut Context, sock: TcpStream, rtimeout: Duration, wtimeout: Duration, rmax: usize) -> IoStream { let io = ctx.reg_alloc(); let rtime = ctx.timeout_alloc(); ctx.reg_set(&sock, io, Ready::readable()); ctx.timeout_set(rtime, rtimeout); IoStream { sock: sock, io: io, ioreg: Ready::readable(), rbuf: Buf::new(), wbuf: Buf::new(), rtime: rtime, wtime: ctx.timeout_alloc(), rtimeout: rtimeout, wtimeout: wtimeout, rmax: rmax, rexpect: true, } } /// Modifies the expect_read flag. If this flag is set, we are expecting to read data from the /// other end, and will throw an error if nothing is read within the timeout. /// This flag merely affects the timeout behaviour; we keep trying to read even if we are not /// expecting any data, in order catch a closed connection early and optimize I/O when /// receiving multiple requests over one stream. Until the read buffer is full, that is. #[allow(dead_code)] pub fn set_expect_read(&mut self, ctx: &mut Context, x: bool) { self.rexpect = x; self.set_ioreg(ctx); } /// Updates the I/O registration with the current state and starts/stops timers as necessary. /// Should be called whenever something is consumed from the read buffer or something is added /// to the write buffer. pub fn set_ioreg(&mut self, ctx: &mut Context) { let oldreg = self.ioreg; // Optimization: If the write buffer was empty before but contains data now, we can already // try to write it out at this point. This way we can avoid going through the event loop if // the OS buffers have enough space. // TODO: Measure how effective this is in practice; if this only tends to happen on the // first write to a new socket then it might not be worth the effort. if self.wbuf.len() > 0 && !oldreg.contains(Ready::writable()) { // Ignore errors, if we catch an error we can handle it in the next loop iteration. let _ = self.wbuf.write_to(&mut self.sock); } let mut reg = Ready::none(); if self.rbuf.len() < self.rmax { reg.insert(Ready::readable()); if self.rexpect && !oldreg.contains(Ready::readable()) { ctx.timeout_set(self.rtime, self.rtimeout); } } else { ctx.timeout_unset(self.rtime); } if self.wbuf.len() > 0 { reg.insert(Ready::writable()); if !oldreg.contains(Ready::writable()) { ctx.timeout_set(self.wtime, self.wtimeout); } } else { ctx.timeout_unset(self.wtime); } if reg != oldreg { if reg == Ready::none() { ctx.reg_unset(&self.sock, self.io); } else { ctx.reg_set(&self.sock, self.io, reg); } } self.ioreg = reg; } fn handle_io<F,G>(&mut self, cond: bool, io: F, tim: G) -> Result<usize> where F: FnOnce(&mut Self) -> Result<usize>, G: FnOnce(&mut Self) { if !cond { return Ok(0); } // Error conversion: // - Ok(0) is converted into a ConnectionReset error // - Temporary errors are converted into Ok(0) match io(self) { Err(err) => { match err.kind() { ErrorKind::WouldBlock | ErrorKind::Interrupted => { Ok(0) }, _ => { Err(err) } } }, Ok(0) => { Err(Error::new(ErrorKind::ConnectionReset, "Connection reset by peer")) }, Ok(n) => { tim(self); Ok(n) } } } /// Handle an IO event. Returns the (read_bytes, write_bytes) on success. Both can be 0 if the /// event was not intended for this steam or if the read/write operations didn't feel like /// doing anything. It's possible that data was read from the stream even if this method /// returns an error. pub fn handle(&mut self, ctx: &mut Context, ev: Event) -> Result<(usize, usize)> { if ev.token() != self.io { return Ok((0,0)); } let canr = ev.kind().is_readable() && self.rbuf.len() < self.rmax; let canw = ev.kind().is_writable() && self.wbuf.len() > 0; let rd = try!(self.handle_io(canr, |s|{ s.rbuf.read_from(&mut s.sock) }, |s|{ if s.rexpect { ctx.timeout_set(s.rtime, s.rtimeout) } } )); let wr = try!(self.handle_io(canw, |s|{ s.wbuf.write_to(&mut s.sock) }, |s|{ ctx.timeout_set(s.wtime, s.wtimeout) } )); if rd > 0 || wr > 0 { self.set_ioreg(ctx); } Ok((rd,wr)) } pub fn timeout(&mut self, t: TToken) -> Result<()> { if t == self.rtime { Err(Error::new(ErrorKind::TimedOut, "Read timeout")) } else if t == self.wtime { Err(Error::new(ErrorKind::TimedOut, "Write timeout")) } else { Ok(()) } } pub fn remove(&mut self, ctx: &mut Context) { ctx.reg_free(&self.sock, self.io); ctx.timeout_free(self.rtime); ctx.timeout_free(self.wtime); } }
__label__pos
0.996711
Skip to main content cancel Showing results for  Search instead for  Did you mean:  Earn a 50% discount on the DP-600 certification exam by completing the Fabric 30 Days to Learn It challenge. Reply Anonymous Not applicable Incremental Refresh on a dataset with Dataflows as a data source Hi,   I have a question about implementing Incremental Refresh on a dataset that connects solely to Dataflows. I tried setting it up but it appears Dataflows do not support query folding.   Few questions: - Is it possible in any way to get Incremental Refresh working on a dataset that runs on Dataflows at this moment? - If not, is there anything on the roadmap to enable this? - If not, is there any way to refresh only part of a Dataflow using a workaround?   It feels really weird to have all the data in Azure Data Lake (Dataflows) but not being able to load it into a dataset due to memory issues. 34 REPLIES 34 v-yiruan-msft Community Support Community Support Hi @Anonymous , You can refer the following documentation to configure incremental refresh for dataflow: Using incremental refresh with Power BI dataflows Incremental Refresh for Dataflows Best Regards Rena Community Support Team _ Rena If this post helps, then please consider Accept it as the solution to help the other members find it more quickly. @yingyinr Hi - you did not answer the actual question.   @Anonymous is asking whether a user can connect a dataset (using PBI desktop) to an existing dataflow and then turn on incremental refresh on the dataset.   We can have incremental refresh datasets for database sources, but can we do the same against a dataflow source? It seems to allow me to do so when I tested it, but I cannot find any official documentation on the subject. Anonymous Not applicable @v-yiruan-msft Did you actually read my message? I'm not trying to set up Incremental Refresh in Dataflows, I'm trying to set it up for a dataset that uses Dataflows as a data source.   Could you provide any information on that or perhaps even address my actual questions? Anonymous Not applicable Hi @v-yiruan-msft,   Any update on this? Anonymous Not applicable Hey @v-yiruan-msft,   Would you happen to have an answer on the questions I asked? Or maybe a colleague could help you out?   Thanks so much in advance. Hi @Anonymous, Sorry for delay. There are two ways to configure incremental refresh: one is configure increment refresh in Power BI Desktop (it is available for Pro users) and another is use increment refresh with Power BI dataflows(currently only for Premium users). Currently you are connecting to dataflow as data source in Power BI Desktop, since the original data source is the dataflow, you can just configure increment refresh for the dataflow in Power BI Service.  Best Regards Rena Community Support Team _ Rena If this post helps, then please consider Accept it as the solution to help the other members find it more quickly. Anonymous Not applicable Hi @v-yiruan-msft,   Thanks for your answer. It is absolutely clear to me that I can set up incremental refresh in dataflows. This is what I currently do. So the main benefit of this is that we only refresh the last 14 days of data each morning (approx. 2 million rows of data). This is working well and I'm happy with the feature.   However, our total dataset consists of approximately 70 million rows of data. And each morning, when I'm refreshing my dataset, it will refresh all of these 70 million rows of data. Even when I implement incremental refresh on the dataset, it will still refresh all rows. What I would expect it to do is refresh only the last 14 days of data, so that it only has to refresh 500 MB of data, instead of the entire ~7 GB.   How can I make sure that my dataset will only refresh the part that is refreshed in dataflows (the incremental part)? Hi @Anonymous , According to the official documentation, it is recommend to use data sources which support query folding when configure increment refresh. Otherwise, it may execute the complete refresh... You can check the following sentences: Query folding It's important the partition filters are pushed to the source system when queries are submitted for refresh operations. To push filtering down means the datasource should support query folding. Most data sources that support SQL queries support query folding. However, data sources like flat files, blobs, and web feeds typically do not. In cases where the filter is not supported by the datasource back-end, it cannot be pushed down. In such cases, the mashup engine compensates and applies the filter locally, which may require retrieving the full dataset from the data source. This can cause incremental refresh to be very slow, and the process can run out of resources either in the Power BI service or in the on-premises data gateway if used. So please check if the original data sources using in dataflow support query folding or not. You can refer this video to check it. In addition, hope the following documenations can help you. Power BI Incremental Refresh and Query Folding Power BI - Checking Query Folding with View Native Query Best Regards Rena Community Support Team _ Rena If this post helps, then please consider Accept it as the solution to help the other members find it more quickly. Anonymous Not applicable Hi @v-yiruan-msft ,   Thanks again for your answer.   I do want to emphasize that I'm not trying to set up incremental refresh in Dataflows. I already got that working. I'm trying to set up incremental refresh in the dataset that connects to dataflows. So, because dataflows is the source, the question is if dataflows supports query folding. Does it? Hi @Anonymous , Yes, please check whether the underlying data source used in the dataflow support query folding or not. For the data sources that do not support query folding, you can set incremental refresh, but it may retrieve the entire data anyways.  Best Regards Rena Community Support Team _ Rena If this post helps, then please consider Accept it as the solution to help the other members find it more quickly. Anonymous Not applicable @v-yiruan-msft How is the underlying data source relevant? I'm connecting to Dataflows. The dataset doesn't care where I get the data from before that point, does it?   The question is, when I'm refreshing my dataset, do the queries it sends to dataflows get folded and is Incremental Refresh possible in this case? Or will it always load in the entire dataflow? We have the very same question (dataset incremental refresh using dataflows as source, regardless of how the dataflow itself gets its data). Has this been answered? The simple answer is NO.   Many have tried workarounds, but nothing official from @microsoft. @pat_mecee Have you tested this? I ran a test and it seemed to work, but I did not proceed (my responsibility is just to create the dataflow sources and I do not use PBI Desktop or datasets much).   It would be nice to have official communication here.    https://docs.microsoft.com/en-us/power-query/dataflows/incremental-refresh#dataflow-incremental-refr...   "Dataflow incremental refresh and dataset incremental refresh are designed to work in tandem. It's acceptable and supported to have an incrementally refreshing entity in a dataflow, fully loaded into a dataset, or a fully loaded entity in a dataflow incrementally loaded to a dataset. Both approaches work according to your specified definitions in the refresh settings." Helpful resources Announcements LearnSurvey Fabric certifications survey Certification feedback opportunity for the community. April Fabric Community Update Fabric Community Update - April 2024 Find out what's new and trending in the Fabric Community. Top Solution Authors Top Kudoed Authors
__label__pos
0.564077
Over a million developers have joined DZone. {{announcement.body}} {{announcement.title}} Dynamically adding RaisePropertyChanged to MVVM Light ViewModels using Mono Cecil DZone's Guide to Dynamically adding RaisePropertyChanged to MVVM Light ViewModels using Mono Cecil · · Free Resource Introduction A few days ago, I wrote an article about dynamically modifying MVVM Light ViewModels using Reflection.Emit. The same can be done with the more powerful Mono Cecil library. Mono Cecil can both create assemblies at runtime and modify existing ones. The latter allows to modify post-compilation the generated MSIL code. This is helpful on platforms like Windows Phone 7; that do not allow dynamic runtime loading of assemblies. In this article, I will show you how we can generate the same proxies as in the Reflection.Emit example. In a latter example, I will show you how to update assemblies post compilation for use on Windows Phone 7. If you have not read my previous article, I recommend that you read it first. Implementation Mono Cecil can be easily retrieved using NuGet. mono_cecil_nuget The implementation is almost the same; similar concepts classes are used. I will not explain them in detail, but just show the modified example. (If you would have questions, feel free to drop a comment Glimlach). public static class CecilViewModelFactory { public static T CreateInstance<T>() where T : ViewModelBase { Type vmType = typeof(T); ModuleDefinition module = ModuleDefinition.CreateModule("CecilDynamicTestPieter", ModuleKind.Dll); TypeReference baseTypeReference = module.Import(vmType); TypeDefinition typeDefinition = new TypeDefinition(vmType.Namespace, "Smart" + vmType.Name, TypeAttributes.Public, baseTypeReference); module.Types.Add(typeDefinition); MethodReference raisePropertyChangedMethod = module.Import(typeof(ViewModelBase).GetMethod("RaisePropertyChanged", Reflection.BindingFlags.NonPublic | Reflection.BindingFlags.Instance, null, new Type[] { typeof(string) }, null)); // Create default constructor typeDefinition.Methods.Add(CreateDefaultConstructor(module, vmType)); foreach (Reflection.PropertyInfo propertyInfo in FindNotifyPropertyChangCandidates<T>()) { ILProcessor processor; // Get set method of base type MethodReference setMethodReference = module.Import(propertyInfo.GetSetMethod()); PropertyDefinition propertyDefinition = new PropertyDefinition(propertyInfo.Name, PropertyAttributes.None, module.Import(propertyInfo.PropertyType)); typeDefinition.Properties.Add(propertyDefinition); // Create set method MethodDefinition setMethodDefinition = new MethodDefinition("set_" + propertyInfo.Name, MethodAttributes.HideBySig | MethodAttributes.SpecialName | MethodAttributes.Public | MethodAttributes.Virtual, module.Import(typeof(void))); setMethodDefinition.Parameters.Add(new ParameterDefinition("value", ParameterAttributes.None, module.Import(propertyInfo.PropertyType))); propertyDefinition.SetMethod = setMethodDefinition; typeDefinition.Methods.Add(setMethodDefinition); processor = setMethodDefinition.Body.GetILProcessor(); // Add IL code for set method processor.Emit(OpCodes.Nop); processor.Emit(OpCodes.Ldarg_0); processor.Emit(OpCodes.Ldarg_1); processor.Emit(OpCodes.Call, setMethodReference); // Call property changed for object processor.Emit(OpCodes.Nop); processor.Emit(OpCodes.Ldarg_0); processor.Emit(OpCodes.Ldstr, propertyInfo.Name); processor.Emit(OpCodes.Callvirt, raisePropertyChangedMethod); processor.Emit(OpCodes.Nop); processor.Emit(OpCodes.Ret); } return CreateInstance<T>(module, typeDefinition); } private static MethodDefinition CreateDefaultConstructor(ModuleDefinition module, Type baseType) { MethodDefinition defaultConstructor = new MethodDefinition(".ctor", MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.RTSpecialName, module.Import(typeof(void))); ILProcessor processor = defaultConstructor.Body.GetILProcessor(); processor.Emit(OpCodes.Ldarg_0); processor.Emit(OpCodes.Call, module.Import(baseType.GetConstructor(Type.EmptyTypes))); processor.Emit(OpCodes.Ret); return defaultConstructor; } private static IEnumerable<Reflection.PropertyInfo> FindNotifyPropertyChangCandidates<T>() { return from p in typeof(T).GetProperties() where p.GetSetMethod() != null && p.GetSetMethod().IsVirtual && p.GetCustomAttributes(typeof(RaisePropertyChangedAttribute), false).Length > 0 select p; } } Note: The issue I had when started using Mono Cecil, was how to load assemblies dynamically; instead of just writing them to disk. At the end, I came up with this solution. private static T CreateInstance<T>(ModuleDefinition module, TypeDefinition typeDefinition) { Type dynamicType; using (MemoryStream stream = new MemoryStream()) { module.Write(stream); Reflection.Assembly assembly = Reflection.Assembly.Load(stream.ToArray()); dynamicType = assembly.GetType(typeDefinition.FullName); } return (T)Activator.CreateInstance(dynamicType); } Using the CecilViewModelFactory implementation is very straightforward; dynamic ViewsModels can be created using the following pattern: SampleViewModel viewModel = CecilViewModelFactory.CreateInstance<SampleViewModel>(); Conclusion In this article, I have just shown the top of the iceberg of what is possible with Mono Cecil. Dynamically creating or modifying types can reduce boiler code and lead to cleaner code. I a next article, I plan to show how you can use this in WP7 applications. Topics: Published at DZone with permission of Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.urlSource.name }}
__label__pos
0.84163
likes comments collection vue项目前端性能优化小结 作者站长头像 站长 · 阅读数 42 0. 路由懒加载 路由组件不使用直接引入,而是匿名函数返回形式,如下注释可以定义编译后的js文件名,在未进入该路由时此js文件的内容将不会被请求到: { path: '/home', component: () => import(/* webpackChunkName: 'base' */ '@/views/Index.vue') } 1. 开启gzip压缩 1.1. 需要服务端做配置开启gzip功能,例如我的是nginx.conf配置: gzip on; gzip_min_length 80k; gzip_buffers 4 16k; gzip_comp_level 5; gzip_types text/plain application/javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png; 1.2. vue.config.js配置编译gzip,npm安装相关插件: const CompressionWebpackPlugin = require('compression-webpack-plugin') module.exports = { configureWebpack: config => { if (process.env.NODE_ENV === 'production') { config.plugins.push( new CompressionWebpackPlugin({ filename: '[path].gz[query]', algorithm: 'gzip', test: new RegExp('\\.(' + productionGzipExtensions.join('|') + ')$'), threshold: 10240, minRatio: 0.8 }) ); } }, } 2. 使用插件压缩混淆去注释 const TerserPlugin = require('terser-webpack-plugin') module.exports = { configureWebpack: config => { if (process.env.NODE_ENV === 'production') { config.plugins.push( new TerserPlugin({ cache: true, parallel: true, sourceMap: false, terserOptions: { compress: { drop_console: true, drop_debugger: true } } }) ); } }, } 3. 去掉多余的第三方库 3.1. 删除库npm指令:npm uninstall xxx 3.2. 把非必要库放到服务端,如处理时间常用库moment.js,引入比较大,建议放在后端,或者使用代替方案如day.js,有时间精力也可以自行封装方法,但是重复造车轮不推荐。 3.3. (可选)提取第三方库,不使用import方式引入,而是使用其cdn链接直接在public中的index.html中使用传统方式引入,有利于减少编译包体积。 4. 资源CDN 资源文件CDN,像图片资源我都是放在七牛云储存,从不放服务器,七牛云好像配置域名就可以cdn加速了,比较方便。如果客户端有特殊自定义需要,如全国地址要自行配置,其配置文件也比较大,也要提取出来,不要放客户端中打包。 configureWebpack: config => { if (isProduction) { config.plugins.push( ..... // 分离包 config.externals = { 'vue': 'Vue', 'vue-router': 'VueRouter', 'vuex': 'Vuex', 'axios': 'axios' } } }, 5. 图片预加载 防止多图或图片较大时对客户端浏览体验产生影响: export default class PreLoad { private i: number; private arr: string[]; constructor(arr: string[]) { this.i = 0 this.arr = arr } public imgs() { return new Promise(resolve => { const work = (src: string) => { if (this.i < this.arr.length) { const img = new Image() img.src = src; if (img.complete) { work(this.arr[this.i++]) } else { img.onload = () => { work(this.arr[this.i++]) img.onload = null; }; } // console.log(((this.i + 1) / this.arr.length) * 100); } else { resolve() } } work(this.arr[this.i]) }) } } 加个转圈菊花或者加载动画/提示等,然后调用该方法来阻塞页面: const imgs = ['http://XX.png','http://XX.png'] const preload = new this.$utils.preload(imgs) const preDone = await preload.imgs() 题外 1. 常见前端优化的三个层面:网络请求,JS优化,CSS优化 • 减少http请求 • 图片懒加载 • 使用字体图标或svg,尽量不使用png,png尽量使用css图片精灵 • 避免使用闭包,减少DOM回流重绘,避免使用css表达式 • 不使用cookie,不使用iframe,不使用flash • 尽量减少引用大量第三方库 (减少资源大小) 2. 在新版的vue-cli工具中GUI界面有集成了不错的分析工具,可以直观的看到项目的编译文件大小情况,对此针对性的分析改进代码也是重要的优化手段。
__label__pos
0.999659
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I try to draw a leaf looking thing on the screen, and try to fill it with a color. It's like drawing a circle, the difference is, that it's only 270 degrees, and the radius starts from 0 to 100. I first draw the left side, and on each degree I fill the inside. At the end I draw the right side. Here is to code, maybe it's easier to understand: canvas = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_ARGB); Color black = new Color(0,0,0); Color green = new Color(0,130,0); double j = 0.0; // radius double max = 100.0; // max radius for (int i = 0; i < 135; i++) { // left side (270 degree / 2) j += max / 135.0; // x, y coordinate int x = (int)(Math.cos(Math.toRadians(i)) * j); int y = (int)(Math.sin(Math.toRadians(i)) * j); // draw a circle like thing with radius j for (int l = i; l < 135 + (135 - i); l++) { int ix = (int)(Math.cos(Math.toRadians(l)) * j); int iy = (int)(Math.sin(Math.toRadians(l)) * j); canvas.setRGB(ix + 256, iy + 256, green.getRGB()); } canvas.setRGB(x + 256, y + 256, black.getRGB()); } // draw the right side for (int i = 135; i < 270; i++) { j -= max / 135.0; int x = (int)(Math.cos(Math.toRadians(i)) * j); int y = (int)(Math.sin(Math.toRadians(i)) * j); canvas.setRGB(x + 256, y + 256, black.getRGB()); } This is the result: enter image description here As you can see, where the radius is bigger, the leaf is not filled completely. If I change i to 1350, then divide it with 10 where I calculate x, y, then it's filled, but it's much slower. Is there a better way to properly fill my shape? Later I also would like to fill my shape with a gradient, so from green to a darker green, then back to green. With my method this is easy, but super slow. Thanks in advance! share|improve this question      Just to be sure: I assume that you are not allowed to make this task trivial by just calling canvas.getGraphics(), and using all the infrastructure that is provided by the Graphics class...? –  Marco13 Apr 8 at 9:11      There are no restrictions, I only try things out, as I never worked with Java graphics before. The only thing is, that I would like to use the setRGB method, and not just drawing lines and arcs. –  matthew3r Apr 8 at 9:16      When the reason behind using setRGB is that you want your painting to be contained in a BufferedImage, then it may be worth mentioning that you can paint lines and arcs and filled shapes into an image, but maybe you already know that, and there are other reasons for not doing this. –  Marco13 Apr 8 at 9:23      Yes, I know. First I only wanted to paint the window with different colors each pixel, and later I decided to draw circle and other things with some math involved in it, and now this fill the shape. –  matthew3r Apr 8 at 9:28      Sorry, but I still think that the intention is unclear. At the moment, you are just setting points based on some rule that coincidentally happens to look like a leaf. A flood fill will not help you, because you don't have a closed border. If you want to fill arbitrary shapes (with manual setRGB calls) then the usual approach would be a Scanline Algorithm ( cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/… ), but this is really not trivial to implement - particularly when you have no representation of the border of the polygon to be filled. –  Marco13 Apr 8 at 10:44 2 Answers 2 I think that for you the best solution is to use a flood fill algorithm, it's easy to implement in Java and efficient in your case, like you have a simple shape. Here is a wikipedia article that is really complet : http://en.wikipedia.org/wiki/Flood_fill share|improve this answer      Thank you, I will look into it! –  matthew3r Apr 8 at 9:10 Here is a simple suggestion: Instead of drawing the leaf, just put the points that create the outline into an array. The array should run from xMin (smallest X coordiate of the leaf outline) to xMax. Each element is two ints: yMin and yMax. After rendering all the points, you can just draw vertical lines to fill the space between yMin/yMax for each X coordinate. If you have gaps in the array, fill them by interpolating between the neighboring points. An alternative would be to sort the points clockwise or counter-clockwise and use them as the outline for a polygon. share|improve this answer      I think I will try it with the array. It sounds pretty easy, and looks much faster, than my solution. –  matthew3r Apr 8 at 9:13      Good luck; note that it sounds easier than it looks. As a start, I suggest to begin with xMin = 0 and xMax = width of screen to make your life easier. You can then optimize the algorithm later. –  Aaron Digulla Apr 8 at 9:34 Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.540926
Skip to end of metadata Go to start of metadata For draggable DIV tag, what you need to keep in mind is "position: absolute" in style sheet. <!DOCTYPE html> <html> <style> #mydiv { position: absolute; z-index: 9; background-color: #f1f1f1; text-align: center; border: 1px solid #d3d3d3; } #mydivheader { padding: 10px; cursor: move; z-index: 10; background-color: #2196F3; color: #fff; } </style> <body> <h1>Draggable DIV Element</h1> <p>Click and hold the mouse button down while moving the DIV element</p> <div id="mydiv"> <div id="mydivheader">Click here to move</div> <p>Move</p> <p>this</p> <p>DIV</p> </div> <script> //Make the DIV element draggagle: dragElement(document.getElementById("mydiv")); function dragElement(elmnt) { var pos1 = 0, pos2 = 0, pos3 = 0, pos4 = 0; if (document.getElementById(elmnt.id + "header")) { /* if present, the header is where you move the DIV from:*/ document.getElementById(elmnt.id + "header").onmousedown = dragMouseDown; } else { /* otherwise, move the DIV from anywhere inside the DIV:*/ elmnt.onmousedown = dragMouseDown; } function dragMouseDown(e) { e = e || window.event; e.preventDefault(); // get the mouse cursor position at startup: pos3 = e.clientX; pos4 = e.clientY; document.onmouseup = closeDragElement; // call a function whenever the cursor moves: document.onmousemove = elementDrag; } function elementDrag(e) { e = e || window.event; e.preventDefault(); // calculate the new cursor position: pos1 = pos3 - e.clientX; pos2 = pos4 - e.clientY; pos3 = e.clientX; pos4 = e.clientY; // set the element's new position: elmnt.style.top = (elmnt.offsetTop - pos2) + "px"; elmnt.style.left = (elmnt.offsetLeft - pos1) + "px"; } function closeDragElement() { /* stop moving when mouse button is released:*/ document.onmouseup = null; document.onmousemove = null; } }
__label__pos
0.999397
Sell with Sound Friends - Create a Company (Activity 7) Video speed: Overview Add sound to your commercial to make it more dynamic and exciting.  Instructions 1. Add background music using a "play sound until done" block. 2. Play the music continuously using a "forever loop" and a "when flag clicked" block.  3. To adjust the volume of the music, use the "set volume to" block.   4. Explore adding sound to one or more "when I receive" blocks. X Hey, club member! Sign in to get a badge for each activity you do!
__label__pos
0.889891
Uploaded image for project: 'Hive' 1. Hive 2. HIVE-6793 DDLSemanticAnalyzer.analyzeShowRoles() should use HiveAuthorizationTaskFactory XMLWordPrintableJSON Details Description Currently DDLSemanticAnalyzer.analyzeShowRoles() isn't using HiveAuthorizationTaskFactory to create task, at odds with other Authorization related task creations such as for analyzeShowRolePrincipals(). This JIRA is to make it consistent. Attachments 1. HIVE-6793.patch 3 kB Xuefu Zhang Activity People • Assignee: xuefuz Xuefu Zhang Reporter: xuefuz Xuefu Zhang • Votes: 0 Vote for this issue Watchers: 2 Start watching this issue Dates • Created: Updated: Resolved:
__label__pos
0.984485
Integration by Substitution Some functions may be changed to the standard forms by easy mathematical manipulation. To aid the process of changing to a recognisable form, use it also made of substitution, which results in a change of variable. In mathematics, an important use is made of what is called differentials. $$ \begin{align} \displaystyle u &= f(x) \\ du &= \dfrac{du}{dx} \times dx \\ \end{align}$$ Example 1 Find $\displaystyle 2\int{\sqrt{2x+1}}dx$. Example 2 Find $\displaystyle \int{x\sqrt{1+x^2}}dx$. Example 3 Find $\displaystyle \int{(x^2+3x)^4(2x+3)}dx$. Your email address will not be published. Required fields are marked *
__label__pos
0.999944
[Ubuntu]: Can't open any port October 23, 2014 5.8k views Aranir By: Aranir I followed the following guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-ip-tables-on-ubuntu-12-04 and tried to open port 80, but I still have nothing open or listening: netstat -plunt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1051/mysqld tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 965/sshd tcp6 0 0 :::22 The strangest thing is that I could connect to the server yesterday and today it isn't working anymore and the only thing I changed was to add a DNS entry on Digital Ocean. Could this have anything to do with it? I tried even to disable any protection with the following commands: $ sudo iptables -X $ sudo iptables -t nat -F $ sudo iptables -t nat -X $ sudo iptables -t mangle -F $ sudo iptables -t mangle -X $ sudo iptables -P INPUT ACCEPT $ sudo iptables -P FORWARD ACCEPT $ sudo iptables -P OUTPUT ACCEPT but still only port 22 is accessible. What could be the reason for this? 2 comments • Why have a firewall in the first place? Your running daemons could still get exploited? Make sure you know what you have running and on what ports. Secure those daemons which would be much better then implementing a firewall to allow traffic only to certain ports which are in essence the only ports that are actually being used anyway! • netstat -plunt will only list ports that something is actively listening on. Do you have a web server listening on port 80? Is Apache or Nginx installed? 1 Answer netstat -plunt should show you that the web server is trying to listen on port 80 even if it is blocked by the firewall. Make sure the server is running. If it's Apache, run: service apache2 start If it's Nginx, then: service nginx restart Have another answer? Share your knowledge.
__label__pos
0.983413
jQuery右下角悬浮层弹窗507 一款打开页面右下角滑动显示悬浮层代码。 //兼容ie6的fixed代码 //jQuery(function($j){ // $j('#pop').positionFixed() //}) (function($j){ $j.positionFixed = function(el){ $j(el).each(function(){ new fixed(this) }) return el; } $j.fn.positionFixed = function(){ return $j.positionFixed(this) } var fixed = $j.positionFixed.impl = function(el){ var o=this; o.sts={ target : $j(el).css('position','fixed'), container : $j(window) } o.sts.currentCss = { top : o.sts.target.css('top'), right : o.sts.target.css('right'), bottom : o.sts.target.css('bottom'), left : o.sts.target.css('left') } if(!o.ie6)return; o.bindEvent(); } $j.extend(fixed.prototype,{ ie6 : $.browser.msie && $.browser.version < 7.0, bindEvent : function(){ var o=this; o.sts.target.css('position','absolute') o.overRelative().initBasePos(); o.sts.target.css(o.sts.basePos) o.sts.container.scroll(o.scrollEvent()).resize(o.resizeEvent()); o.setPos(); }, overRelative : function(){ var o=this; var relative = o.sts.target.parents().filter(function(){ if($j(this).css('position')=='relative')return this; }) if(relative.size()>0)relative.after(o.sts.target) return o; }, initBasePos : function(){ var o=this; o.sts.basePos = { top: o.sts.target.offset().top - (o.sts.currentCss.top=='auto'?o.sts.container.scrollTop():0), left: o.sts.target.offset().left - (o.sts.currentCss.left=='auto'?o.sts.container.scrollLeft():0) } return o; }, setPos : function(){ var o=this; o.sts.target.css({ top: o.sts.container.scrollTop() + o.sts.basePos.top, left: o.sts.container.scrollLeft() + o.sts.basePos.left }) }, scrollEvent : function(){ var o=this; return function(){ o.setPos(); } }, resizeEvent : function(){ var o=this; return function(){ setTimeout(function(){ o.sts.target.css(o.sts.currentCss) o.initBasePos(); o.setPos() },1) } } }) })(jQuery) function Pop(title,url,intro){ this.title=title; this.url=url; this.intro=intro; this.apearTime=1000; this.hideTime=500; this.delay=10000; //添加信息 this.addInfo(); //显示 this.showDiv(); //关闭 this.closeDiv(); } Pop.prototype={ addInfo:function(){ $("#popTitle a").attr('href',this.url).html(this.title); $("#popIntro").html(this.intro); $("#popMore a").attr('href',this.url); }, showDiv:function(time){ if (!($.browser.msie && ($.browser.version == "6.0") && !$.support.style)) { $('#pop').slideDown(this.apearTime).delay(this.delay).fadeOut(400);; } else{//调用jquery.fixed.js,解决ie6不能用fixed $('#pop').show(); jQuery(function($j){ $j('#pop').positionFixed() }) } }, closeDiv:function(){ $("#popClose").click(function(){ $('#pop').hide(); } ); } } 技术讨论区(1 个讨论) 1. ?逆时光? 1 ?逆时光? 实现方式不错 2016-1-4 14:24:40 | 回复 2. 请先登录 发 布
__label__pos
0.981444
summaryrefslogtreecommitdiffstats path: root/xlators/cluster/afr/src/afr.c blob: ec7aa226821d9458bb72cb420c511494dd807dfa (plain) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 /* Copyright (c) 2008-2012 Red Hat, Inc. <http://www.redhat.com> This file is part of GlusterFS. This file is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation. */ #include <libgen.h> #include <unistd.h> #include <fnmatch.h> #include <sys/time.h> #include <stdlib.h> #include <signal.h> #include "afr-common.c" #include "afr-messages.h" struct volume_options options[]; static char *afr_favorite_child_policies[AFR_FAV_CHILD_POLICY_MAX + 1] = { [AFR_FAV_CHILD_NONE] = "none", [AFR_FAV_CHILD_BY_SIZE] = "size", [AFR_FAV_CHILD_BY_CTIME] = "ctime", [AFR_FAV_CHILD_BY_MTIME] = "mtime", [AFR_FAV_CHILD_BY_MAJORITY] = "majority", [AFR_FAV_CHILD_POLICY_MAX] = NULL, }; int32_t notify(xlator_t *this, int32_t event, void *data, ...) { int ret = -1; va_list ap; void *data2 = NULL; va_start(ap, data); data2 = va_arg(ap, dict_t *); va_end(ap); ret = afr_notify(this, event, data, data2); return ret; } int32_t mem_acct_init(xlator_t *this) { int ret = -1; if (!this) return ret; ret = xlator_mem_acct_init(this, gf_afr_mt_end + 1); if (ret != 0) { return ret; } return ret; } int xlator_subvolume_index(xlator_t *this, xlator_t *subvol) { int index = -1; int i = 0; xlator_list_t *list = NULL; list = this->children; while (list) { if (subvol == list->xlator || strcmp(subvol->name, list->xlator->name) == 0) { index = i; break; } list = list->next; i++; } return index; } static void fix_quorum_options(xlator_t *this, afr_private_t *priv, char *qtype, dict_t *options) { if (dict_get_sizen(options, "quorum-type") == NULL) { /* If user doesn't configure anything enable auto-quorum if the * replica has more than two subvolumes */ if (priv->child_count > 2) qtype = "auto"; } if (priv->quorum_count && strcmp(qtype, "fixed")) { gf_msg(this->name, GF_LOG_WARNING, 0, AFR_MSG_QUORUM_OVERRIDE, "quorum-type %s overriding quorum-count %u", qtype, priv->quorum_count); } if (!strcmp(qtype, "none")) { priv->quorum_count = 0; } else if (!strcmp(qtype, "auto")) { priv->quorum_count = AFR_QUORUM_AUTO; } } int afr_set_favorite_child_policy(afr_private_t *priv, char *policy) { int index = -1; index = gf_get_index_by_elem(afr_favorite_child_policies, policy); if (index < 0 || index >= AFR_FAV_CHILD_POLICY_MAX) return -1; priv->fav_child_policy = index; return 0; } static void set_data_self_heal_algorithm(afr_private_t *priv, char *algo) { if (!algo) { priv->data_self_heal_algorithm = AFR_SELFHEAL_DATA_DYNAMIC; } else if (strcmp(algo, "full") == 0) { priv->data_self_heal_algorithm = AFR_SELFHEAL_DATA_FULL; } else if (strcmp(algo, "diff") == 0) { priv->data_self_heal_algorithm = AFR_SELFHEAL_DATA_DIFF; } else { priv->data_self_heal_algorithm = AFR_SELFHEAL_DATA_DYNAMIC; } } int reconfigure(xlator_t *this, dict_t *options) { afr_private_t *priv = NULL; xlator_t *read_subvol = NULL; int read_subvol_index = -1; int timeout_old = 0; int ret = -1; int index = -1; char *qtype = NULL; char *fav_child_policy = NULL; char *data_self_heal = NULL; char *data_self_heal_algorithm = NULL; char *locking_scheme = NULL; gf_boolean_t consistent_io = _gf_false; gf_boolean_t choose_local_old = _gf_false; gf_boolean_t enabled_old = _gf_false; priv = this->private; GF_OPTION_RECONF("metadata-splitbrain-forced-heal", priv->metadata_splitbrain_forced_heal, options, bool, out); GF_OPTION_RECONF("background-self-heal-count", priv->background_self_heal_count, options, uint32, out); GF_OPTION_RECONF("heal-wait-queue-length", priv->heal_wait_qlen, options, uint32, out); GF_OPTION_RECONF("metadata-self-heal", priv->metadata_self_heal, options, bool, out); GF_OPTION_RECONF("data-self-heal", data_self_heal, options, str, out); gf_string2boolean(data_self_heal, &priv->data_self_heal); GF_OPTION_RECONF("entry-self-heal", priv->entry_self_heal, options, bool, out); GF_OPTION_RECONF("data-self-heal-window-size", priv->data_self_heal_window_size, options, uint32, out); GF_OPTION_RECONF("data-self-heal-algorithm", data_self_heal_algorithm, options, str, out); set_data_self_heal_algorithm(priv, data_self_heal_algorithm); GF_OPTION_RECONF("halo-enabled", priv->halo_enabled, options, bool, out); GF_OPTION_RECONF("halo-shd-max-latency", priv->shd.halo_max_latency_msec, options, uint32, out); GF_OPTION_RECONF("halo-nfsd-max-latency", priv->nfsd.halo_max_latency_msec, options, uint32, out); GF_OPTION_RECONF("halo-max-latency", priv->halo_max_latency_msec, options, uint32, out); GF_OPTION_RECONF("halo-max-replicas", priv->halo_max_replicas, options, uint32, out); GF_OPTION_RECONF("halo-min-replicas", priv->halo_min_replicas, options, uint32, out); GF_OPTION_RECONF("read-subvolume", read_subvol, options, xlator, out); choose_local_old = priv->choose_local; GF_OPTION_RECONF("choose-local", priv->choose_local, options, bool, out); if (choose_local_old != priv->choose_local) { priv->read_child = -1; if (choose_local_old == _gf_false) priv->did_discovery = _gf_false; } GF_OPTION_RECONF("read-hash-mode", priv->hash_mode, options, uint32, out); if (read_subvol) { index = xlator_subvolume_index(this, read_subvol); if (index == -1) { gf_msg(this->name, GF_LOG_ERROR, 0, AFR_MSG_INVALID_SUBVOL, "%s not a subvolume", read_subvol->name); goto out; } priv->read_child = index; } GF_OPTION_RECONF("read-subvolume-index", read_subvol_index, options, int32, out); if (read_subvol_index > -1) { index = read_subvol_index; if (index >= priv->child_count) { gf_msg(this->name, GF_LOG_ERROR, 0, AFR_MSG_INVALID_SUBVOL, "%d not a subvolume-index", index); goto out; } priv->read_child = index; } GF_OPTION_RECONF("pre-op-compat", priv->pre_op_compat, options, bool, out); GF_OPTION_RECONF("locking-scheme", locking_scheme, options, str, out); priv->granular_locks = (strcmp(locking_scheme, "granular") == 0); GF_OPTION_RECONF("full-lock", priv->full_lock, options, bool, out); GF_OPTION_RECONF("granular-entry-heal", priv->esh_granular, options, bool, out); GF_OPTION_RECONF("eager-lock", priv->eager_lock, options, bool, out); GF_OPTION_RECONF("optimistic-change-log", priv->optimistic_change_log, options, bool, out); GF_OPTION_RECONF("quorum-type", qtype, options, str, out); GF_OPTION_RECONF("quorum-count", priv->quorum_count, options, uint32, out); fix_quorum_options(this, priv, qtype, options); if (priv->quorum_count && !afr_has_quorum(priv->child_up, this, NULL)) gf_msg(this->name, GF_LOG_WARNING, 0, AFR_MSG_QUORUM_FAIL, "Client-quorum is not met"); GF_OPTION_RECONF("post-op-delay-secs", priv->post_op_delay_secs, options, uint32, out); GF_OPTION_RECONF(AFR_SH_READDIR_SIZE_KEY, priv->sh_readdir_size, options, size_uint64, out); /* Reset this so we re-discover in case the topology changed. */ GF_OPTION_RECONF("ensure-durability", priv->ensure_durability, options, bool, out); enabled_old = priv->shd.enabled; GF_OPTION_RECONF("self-heal-daemon", priv->shd.enabled, options, bool, out); GF_OPTION_RECONF("iam-self-heal-daemon", priv->shd.iamshd, options, bool, out); timeout_old = priv->shd.timeout; GF_OPTION_RECONF("heal-timeout", priv->shd.timeout, options, int32, out); GF_OPTION_RECONF("consistent-metadata", priv->consistent_metadata, options, bool, out); GF_OPTION_RECONF("shd-max-threads", priv->shd.max_threads, options, uint32, out); GF_OPTION_RECONF("shd-wait-qlength", priv->shd.wait_qlength, options, uint32, out); GF_OPTION_RECONF("favorite-child-policy", fav_child_policy, options, str, out); if (afr_set_favorite_child_policy(priv, fav_child_policy) == -1) goto out; priv->did_discovery = _gf_false; GF_OPTION_RECONF("consistent-io", consistent_io, options, bool, out); if (priv->quorum_count != 0) consistent_io = _gf_false; priv->consistent_io = consistent_io; if (priv->shd.enabled) { if ((priv->shd.enabled != enabled_old) || (timeout_old != priv->shd.timeout)) afr_selfheal_childup(this, priv); } ret = 0; out: return ret; } static int afr_pending_xattrs_init(afr_private_t *priv, xlator_t *this) { int ret = -1; int i = 0; char *ptr = NULL; char *ptr1 = NULL; char *xattrs_list = NULL; xlator_list_t *trav = NULL; int child_count = -1; trav = this->children; child_count = priv->child_count; if (priv->thin_arbiter_count) { /* priv->pending_key[THIN_ARBITER_BRICK_INDEX] is used as the * name of the thin arbiter file for persistence across add/ * removal of DHT subvols.*/ child_count++; } GF_OPTION_INIT("afr-pending-xattr", xattrs_list, str, out); priv->pending_key = GF_CALLOC(sizeof(*priv->pending_key), child_count, gf_afr_mt_char); if (!priv->pending_key) { ret = -ENOMEM; goto out; } if (!xattrs_list) { gf_msg(this->name, GF_LOG_WARNING, 0, AFR_MSG_NO_CHANGELOG, "Unable to fetch afr-pending-xattr option from volfile." " Falling back to using client translator names. "); while (i < child_count) { ret = gf_asprintf(&priv->pending_key[i], "%s.%s", AFR_XATTR_PREFIX, trav->xlator->name); if (ret == -1) { ret = -ENOMEM; goto out; } trav = trav->next; i++; } ret = 0; goto out; } ptr = ptr1 = gf_strdup(xattrs_list); if (!ptr) { ret = -ENOMEM; goto out; } for (i = 0, ptr = strtok(ptr, ","); ptr; ptr = strtok(NULL, ",")) { ret = gf_asprintf(&priv->pending_key[i], "%s.%s", AFR_XATTR_PREFIX, ptr); if (ret == -1) { ret = -ENOMEM; goto out; } i++; } ret = 0; out: GF_FREE(ptr1); return ret; } void afr_ta_init(afr_private_t *priv) { priv->thin_arbiter_count = 1; priv->child_count--; priv->ta_child_up = 0; priv->ta_bad_child_index = AFR_CHILD_UNKNOWN; priv->ta_notify_dom_lock_offset = 0; priv->ta_in_mem_txn_count = 0; priv->ta_on_wire_txn_count = 0; priv->release_ta_notify_dom_lock = _gf_false; INIT_LIST_HEAD(&priv->ta_waitq); INIT_LIST_HEAD(&priv->ta_onwireq); gf_uuid_clear(priv->ta_gfid); } int32_t init(xlator_t *this) { afr_private_t *priv = NULL; int child_count = 0; xlator_list_t *trav = NULL; int i = 0; int ret = -1; GF_UNUSED int op_errno = 0; xlator_t *read_subvol = NULL; int read_subvol_index = -1; char *qtype = NULL; char *fav_child_policy = NULL; char *thin_arbiter = NULL; char *data_self_heal = NULL; char *locking_scheme = NULL; char *data_self_heal_algorithm = NULL; if (!this->children) { gf_msg(this->name, GF_LOG_ERROR, 0, AFR_MSG_CHILD_MISCONFIGURED, "replicate translator needs more than one " "subvolume defined."); return -1; } if (!this->parents) { gf_msg(this->name, GF_LOG_WARNING, 0, AFR_MSG_VOL_MISCONFIGURED, "Volume is dangling."); } this->private = GF_CALLOC(1, sizeof(afr_private_t), gf_afr_mt_afr_private_t); if (!this->private) goto out; priv = this->private; INIT_LIST_HEAD(&priv->saved_locks); INIT_LIST_HEAD(&priv->lk_healq); LOCK_INIT(&priv->lock); child_count = xlator_subvolume_count(this); priv->child_count = child_count; priv->read_child = -1; GF_OPTION_INIT("arbiter-count", priv->arbiter_count, uint32, out); GF_OPTION_INIT("thin-arbiter", thin_arbiter, str, out); if (thin_arbiter && strlen(thin_arbiter) > 0) { afr_ta_init(priv); } INIT_LIST_HEAD(&priv->healing); INIT_LIST_HEAD(&priv->heal_waiting); priv->spb_choice_timeout = AFR_DEFAULT_SPB_CHOICE_TIMEOUT; GF_OPTION_INIT("afr-dirty-xattr", priv->afr_dirty, str, out); GF_OPTION_INIT("metadata-splitbrain-forced-heal", priv->metadata_splitbrain_forced_heal, bool, out); GF_OPTION_INIT("read-subvolume", read_subvol, xlator, out); if (read_subvol) { priv->read_child = xlator_subvolume_index(this, read_subvol); if (priv->read_child == -1) { gf_msg(this->name, GF_LOG_ERROR, 0, AFR_MSG_INVALID_SUBVOL, "%s not a subvolume", read_subvol->name); goto out; } } GF_OPTION_INIT("read-subvolume-index", read_subvol_index, int32, out); if (read_subvol_index > -1) { if (read_subvol_index >= priv->child_count) { gf_msg(this->name, GF_LOG_ERROR, 0, AFR_MSG_INVALID_SUBVOL, "%d not a subvolume-index", read_subvol_index); goto out; } priv->read_child = read_subvol_index; } GF_OPTION_INIT("choose-local", priv->choose_local, bool, out); priv->pending_reads = GF_CALLOC(sizeof(*priv->pending_reads), priv->child_count, gf_afr_mt_atomic_t); GF_OPTION_INIT("read-hash-mode", priv->hash_mode, uint32, out); priv->favorite_child = -1; GF_OPTION_INIT("favorite-child-policy", fav_child_policy, str, out); if (afr_set_favorite_child_policy(priv, fav_child_policy) == -1) goto out; GF_OPTION_INIT("shd-max-threads", priv->shd.max_threads, uint32, out); GF_OPTION_INIT("shd-wait-qlength", priv->shd.wait_qlength, uint32, out); GF_OPTION_INIT("background-self-heal-count", priv->background_self_heal_count, uint32, out); GF_OPTION_INIT("heal-wait-queue-length", priv->heal_wait_qlen, uint32, out); GF_OPTION_INIT("data-self-heal", data_self_heal, str, out); gf_string2boolean(data_self_heal, &priv->data_self_heal); GF_OPTION_INIT("data-self-heal-algorithm", data_self_heal_algorithm, str, out); set_data_self_heal_algorithm(priv, data_self_heal_algorithm); GF_OPTION_INIT("data-self-heal-window-size", priv->data_self_heal_window_size, uint32, out); GF_OPTION_INIT("metadata-self-heal", priv->metadata_self_heal, bool, out); GF_OPTION_INIT("entry-self-heal", priv->entry_self_heal, bool, out); GF_OPTION_INIT("halo-shd-max-latency", priv->shd.halo_max_latency_msec, uint32, out); GF_OPTION_INIT("halo-max-latency", priv->halo_max_latency_msec, uint32, out); GF_OPTION_INIT("halo-max-replicas", priv->halo_max_replicas, uint32, out); GF_OPTION_INIT("halo-min-replicas", priv->halo_min_replicas, uint32, out); GF_OPTION_INIT("halo-enabled", priv->halo_enabled, bool, out); GF_OPTION_INIT("halo-nfsd-max-latency", priv->nfsd.halo_max_latency_msec, uint32, out); GF_OPTION_INIT("iam-nfs-daemon", priv->nfsd.iamnfsd, bool, out); GF_OPTION_INIT("optimistic-change-log", priv->optimistic_change_log, bool, out); GF_OPTION_INIT("pre-op-compat", priv->pre_op_compat, bool, out); GF_OPTION_INIT("locking-scheme", locking_scheme, str, out); priv->granular_locks = (strcmp(locking_scheme, "granular") == 0); GF_OPTION_INIT("full-lock", priv->full_lock, bool, out); GF_OPTION_INIT("granular-entry-heal", priv->esh_granular, bool, out); GF_OPTION_INIT("eager-lock", priv->eager_lock, bool, out); GF_OPTION_INIT("quorum-type", qtype, str, out); GF_OPTION_INIT("quorum-count", priv->quorum_count, uint32, out); GF_OPTION_INIT(AFR_SH_READDIR_SIZE_KEY, priv->sh_readdir_size, size_uint64, out); fix_quorum_options(this, priv, qtype, this->options); GF_OPTION_INIT("post-op-delay-secs", priv->post_op_delay_secs, uint32, out); GF_OPTION_INIT("ensure-durability", priv->ensure_durability, bool, out); GF_OPTION_INIT("self-heal-daemon", priv->shd.enabled, bool, out); GF_OPTION_INIT("iam-self-heal-daemon", priv->shd.iamshd, bool, out); GF_OPTION_INIT("heal-timeout", priv->shd.timeout, int32, out); GF_OPTION_INIT("consistent-metadata", priv->consistent_metadata, bool, out); GF_OPTION_INIT("consistent-io", priv->consistent_io, bool, out); if (priv->quorum_count != 0) priv->consistent_io = _gf_false; priv->wait_count = 1; priv->local = GF_CALLOC(sizeof(unsigned char), child_count, gf_afr_mt_char); if (!priv->local) { ret = -ENOMEM; goto out; } priv->child_up = GF_CALLOC(sizeof(unsigned char), child_count, gf_afr_mt_char); priv->child_latency = GF_MALLOC(sizeof(*priv->child_latency) * child_count, gf_afr_mt_child_latency_t); if (!priv->child_up || !priv->child_latency) { ret = -ENOMEM; goto out; } /*Initialize to -ve ping timeout so that they are not considered * in child-up events until ping-event comes*/ for (i = 0; i < child_count; i++) priv->child_latency[i] = -1; priv->children = GF_CALLOC(sizeof(xlator_t *), child_count, gf_afr_mt_xlator_t); if (!priv->children) { ret = -ENOMEM; goto out; } ret = afr_pending_xattrs_init(priv, this); if (ret) goto out; trav = this->children; i = 0; while (i < child_count) { priv->children[i] = trav->xlator; trav = trav->next; i++; } ret = gf_asprintf(&priv->sh_domain, AFR_SH_DATA_DOMAIN_FMT, this->name); if (-1 == ret) { ret = -ENOMEM; goto out; } priv->last_event = GF_CALLOC(child_count, sizeof(*priv->last_event), gf_afr_mt_int32_t); if (!priv->last_event) { ret = -ENOMEM; goto out; } this->itable = inode_table_new(SHD_INODE_LRU_LIMIT, this); if (!this->itable) { ret = -ENOMEM; goto out; } if (priv->shd.iamshd) { ret = afr_selfheal_daemon_init(this); if (ret) { ret = -ENOMEM; goto out; } } /* keep more local here as we may need them for self-heal etc */ this->local_pool = mem_pool_new(afr_local_t, 512); if (!this->local_pool) { ret = -1; goto out; } priv->root_inode = NULL; ret = 0; out: return ret; } void afr_destroy_healer_object(xlator_t *this, struct subvol_healer *healer) { int ret = -1; if (!healer) return; if (healer->running) { /* * If there are any resources to cleanup, We need * to do that gracefully using pthread_cleanup_push */ ret = gf_thread_cleanup_xint(healer->thread); if (ret) gf_msg(this->name, GF_LOG_WARNING, 0, AFR_MSG_SELF_HEAL_FAILED, "Failed to clean up healer threads."); healer->thread = 0; } pthread_cond_destroy(&healer->cond); pthread_mutex_destroy(&healer->mutex); } void afr_selfheal_daemon_fini(xlator_t *this) { struct subvol_healer *healer = NULL; afr_self_heald_t *shd = NULL; afr_private_t *priv = NULL; int i = 0; priv = this->private; if (!priv) return; shd = &priv->shd; if (!shd->iamshd) return; for (i = 0; i < priv->child_count; i++) { healer = &shd->index_healers[i]; afr_destroy_healer_object(this, healer); healer = &shd->full_healers[i]; afr_destroy_healer_object(this, healer); if (shd->statistics[i]) eh_destroy(shd->statistics[i]); } GF_FREE(shd->index_healers); GF_FREE(shd->full_healers); GF_FREE(shd->statistics); if (shd->split_brain) eh_destroy(shd->split_brain); } void fini(xlator_t *this) { afr_private_t *priv = NULL; priv = this->private; afr_selfheal_daemon_fini(this); GF_ASSERT(list_empty(&priv->saved_locks)); LOCK(&priv->lock); if (priv->timer != NULL) { gf_timer_call_cancel(this->ctx, priv->timer); priv->timer = NULL; } UNLOCK(&priv->lock); if (this->local_pool != NULL) { mem_pool_destroy(this->local_pool); this->local_pool = NULL; } this->private = NULL; afr_priv_destroy(priv); if (this->itable) { inode_table_destroy(this->itable); this->itable = NULL; } return; } struct xlator_fops fops = { .lookup = afr_lookup, .lk = afr_lk, .flush = afr_flush, .statfs = afr_statfs, .fsyncdir = afr_fsyncdir, .inodelk = afr_inodelk, .finodelk = afr_finodelk, .entrylk = afr_entrylk, .fentrylk = afr_fentrylk, .ipc = afr_ipc, .lease = afr_lease, /* inode read */ .access = afr_access, .stat = afr_stat, .fstat = afr_fstat, .readlink = afr_readlink, .getxattr = afr_getxattr, .fgetxattr = afr_fgetxattr, .readv = afr_readv, .seek = afr_seek, /* inode write */ .writev = afr_writev, .truncate = afr_truncate, .ftruncate = afr_ftruncate, .setxattr = afr_setxattr, .fsetxattr = afr_fsetxattr, .setattr = afr_setattr, .fsetattr = afr_fsetattr, .removexattr = afr_removexattr, .fremovexattr = afr_fremovexattr, .fallocate = afr_fallocate, .discard = afr_discard, .zerofill = afr_zerofill, .xattrop = afr_xattrop, .fxattrop = afr_fxattrop, .fsync = afr_fsync, /*inode open*/ .opendir = afr_opendir, .open = afr_open, /* dir read */ .readdir = afr_readdir, .readdirp = afr_readdirp, /* dir write */ .create = afr_create, .mknod = afr_mknod, .mkdir = afr_mkdir, .unlink = afr_unlink, .rmdir = afr_rmdir, .link = afr_link, .symlink = afr_symlink, .rename = afr_rename, }; struct xlator_dumpops dumpops = { .priv = afr_priv_dump, }; struct xlator_cbks cbks = { .release = afr_release, .releasedir = afr_releasedir, .forget = afr_forget, }; struct volume_options options[] = { {.key = {"read-subvolume"}, .type = GF_OPTION_TYPE_XLATOR, .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "inode-read fops happen only on one of the bricks in " "replicate. Afr will prefer the one specified using " "this option if it is not stale. Option value must be " "one of the xlator names of the children. " "Ex: <volname>-client-0 till " "<volname>-client-<number-of-bricks - 1>"}, {.key = {"read-subvolume-index"}, .type = GF_OPTION_TYPE_INT, .default_value = "-1", .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "inode-read fops happen only on one of the bricks in " "replicate. AFR will prefer the one specified using " "this option if it is not stale. allowed options" " include -1 till replica-count - 1"}, {.key = {"read-hash-mode"}, .type = GF_OPTION_TYPE_INT, .min = 0, .max = 5, .default_value = "1", .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "inode-read fops happen only on one of the bricks in " "replicate. AFR will prefer the one computed using " "the method specified using this option.\n" "0 = first readable child of AFR, starting from 1st child.\n" "1 = hash by GFID of file (all clients use " "same subvolume).\n" "2 = hash by GFID of file and client PID.\n" "3 = brick having the least outstanding read requests.\n" "4 = brick having the least network ping latency.\n" "5 = Hybrid mode between 3 and 4, ie least value among " "network-latency multiplied by outstanding-read-requests."}, { .key = {"choose-local"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "true", .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Choose a local subvolume (i.e. Brick) to read from" " if read-subvolume is not explicitly set.", }, {.key = {"background-self-heal-count"}, .type = GF_OPTION_TYPE_INT, .min = 0, .max = 256, .default_value = "8", .validate = GF_OPT_VALIDATE_MIN, .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This specifies the number of per client self-heal " "jobs that can perform parallel heals in the " "background."}, {.key = {"halo-shd-max-latency"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 99999, .default_value = "99999", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "Maximum latency for shd halo replication in msec."}, {.key = {"halo-enabled"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "False", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "Enable Halo (geo) replication mode."}, {.key = {"halo-nfsd-max-latency"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 99999, .default_value = "5", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "Maximum latency for nfsd halo replication in msec."}, {.key = {"halo-max-latency"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = AFR_HALO_MAX_LATENCY, .default_value = "5", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "Maximum latency for halo replication in msec."}, {.key = {"halo-max-replicas"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 99999, .default_value = "99999", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "The maximum number of halo replicas; replicas" " beyond this value will be written asynchronously" "via the SHD."}, {.key = {"halo-min-replicas"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 99999, .default_value = "2", .op_version = {GD_OP_VERSION_3_11_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate", "halo"}, .description = "The minimmum number of halo replicas, before adding " "out of region replicas."}, {.key = {"heal-wait-queue-length"}, .type = GF_OPTION_TYPE_INT, .min = 0, .max = 10000, /*Around 100MB with sizeof(afr_local_t)= 10496 bytes*/ .default_value = "128", .validate = GF_OPT_VALIDATE_MIN, .op_version = {GD_OP_VERSION_3_7_10}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This specifies the number of heals that can be queued" " for the parallel background self heal jobs."}, {.key = {"data-self-heal"}, .type = GF_OPTION_TYPE_STR, .value = {"1", "on", "yes", "true", "enable", "0", "off", "no", "false", "disable", "open"}, .default_value = "off", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Using this option we can enable/disable data " "self-heal on the file. \"open\" means data " "self-heal action will only be triggered by file " "open operations."}, {.key = {"data-self-heal-algorithm"}, .type = GF_OPTION_TYPE_STR, .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Select between \"full\", \"diff\". The " "\"full\" algorithm copies the entire file from " "source to sink. The \"diff\" algorithm copies to " "sink only those blocks whose checksums don't match " "with those of source. If no option is configured " "the option is chosen dynamically as follows: " "If the file does not exist on one of the sinks " "or empty file exists or if the source file size is " "about the same as page size the entire file will " "be read and written i.e \"full\" algo, " "otherwise \"diff\" algo is chosen.", .value = {"diff", "full"}}, {.key = {"data-self-heal-window-size"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 1024, .default_value = "1", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Maximum number blocks per file for which self-heal " "process would be applied simultaneously."}, {.key = {"metadata-self-heal"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, /*.validate_fn = validate_replica*/ .description = "Using this option we can enable/disable metadata " "i.e. Permissions, ownerships, xattrs self-heal on " "the file/directory."}, {.key = {"entry-self-heal"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, /*.validate_fn = validate_replica*/ .description = "Using this option we can enable/disable entry " "self-heal on the directory."}, {.key = {"data-change-log"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option exists only for backward compatibility " "and configuring it doesn't have any effect"}, {.key = {"metadata-change-log"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option exists only for backward compatibility " "and configuring it doesn't have any effect"}, {.key = {"entry-change-log"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option exists only for backward compatibility " "and configuring it doesn't have any effect"}, {.key = {"optimistic-change-log"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .description = "Entry/Metadata fops will not perform " "pre fop changelog operations in afr transaction " "if this option is enabled."}, {.key = {"inodelk-trace"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .description = "Enabling this option logs inode lock/unlocks"}, {.key = {"entrylk-trace"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .description = "Enabling this option logs entry lock/unlocks"}, {.key = {"pre-op-compat"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .description = "Use separate pre-op xattrop() FOP rather than " "overloading xdata of the OP"}, {.key = {"eager-lock"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Enable/Disable eager lock for replica volume. " "Lock phase of a transaction has two sub-phases. " "First is an attempt to acquire locks in parallel by " "broadcasting non-blocking lock requests. If lock " "acquisition fails on any server, then the held locks " "are unlocked and we revert to a blocking locks mode " "sequentially on one server after another. If this " "option is enabled the initial broadcasting lock " "request attempts to acquire a full lock on the entire file. " "If this fails, we revert back to the sequential " "\"regional\" blocking locks as before. In the case " "where such an \"eager\" lock is granted in the " "non-blocking phase, it gives rise to an opportunity " "for optimization. i.e, if the next write transaction " "on the same FD arrives before the unlock phase of " "the first transaction, it \"takes over\" the full " "file lock. Similarly if yet another data transaction " "arrives before the unlock phase of the \"optimized\" " "transaction, that in turn \"takes over\" the lock as " "well. The actual unlock now happens at the end of " "the last \"optimized\" transaction." }, {.key = {"self-heal-daemon"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "on", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE, .tags = {"replicate"}, /*.validate_fn = validate_replica_heal_enable_disable*/ .description = "This option applies to only self-heal-daemon. " "Index directory crawl and automatic healing of files " "will not be performed if this option is turned off."}, {.key = {"iam-self-heal-daemon"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .description = "This option differentiates if the replicate " "translator is running as part of self-heal-daemon " "or not."}, {.key = {"iam-nfs-daemon"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", .description = "This option differentiates if the replicate " "translator is running as part of an NFS daemon " "or not."}, { .key = {"quorum-type"}, .type = GF_OPTION_TYPE_STR, .value = {"none", "auto", "fixed"}, .default_value = "none", .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, /*.option = quorum-type*/ .description = "If value is \"fixed\" only allow writes if " "quorum-count bricks are present. If value is " "\"auto\" only allow writes if more than half of " "bricks, or exactly half including the first, are " "present.", }, { .key = {"quorum-count"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = INT_MAX, .default_value = 0, .op_version = {1}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, /*.option = quorum-count*/ /*.validate_fn = validate_quorum_count*/ .description = "If quorum-type is \"fixed\" only allow writes if " "this many bricks are present. Other quorum types " "will OVERWRITE this value.", }, { .key = {"quorum-reads"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "no", .op_version = {GD_OP_VERSION_3_7_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option has been removed. Reads are not allowed " "if quorum is not met.", }, { .key = {"node-uuid"}, .type = GF_OPTION_TYPE_STR, .description = "Local glusterd uuid string, used in starting " "self-heal-daemon so that it can crawl only on " "local index directories.", }, { .key = {"post-op-delay-secs"}, .type = GF_OPTION_TYPE_INT, .min = 0, .max = INT_MAX, .default_value = "1", .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Time interval induced artificially before " "post-operation phase of the transaction to " "enhance overlap of adjacent write operations.", }, { .key = {AFR_SH_READDIR_SIZE_KEY}, .type = GF_OPTION_TYPE_SIZET, .description = "readdirp size for performing entry self-heal", .min = 1024, .max = 131072, .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE, .tags = {"replicate"}, .default_value = "1KB", }, { .key = {"ensure-durability"}, .type = GF_OPTION_TYPE_BOOL, .op_version = {3}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Afr performs fsyncs for transactions if this " "option is on to make sure the changelogs/data is " "written to the disk", .default_value = "on", }, { .key = {"afr-dirty-xattr"}, .type = GF_OPTION_TYPE_STR, .default_value = AFR_DIRTY_DEFAULT, }, {.key = {"afr-pending-xattr"}, .type = GF_OPTION_TYPE_STR, .description = "Comma separated list of xattrs that are used to " "capture information on pending heals."}, { .key = {"metadata-splitbrain-forced-heal"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "off", }, {.key = {"heal-timeout"}, .type = GF_OPTION_TYPE_INT, .min = 5, .max = INT_MAX, .default_value = "600", .op_version = {2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "time interval for checking the need to self-heal " "in self-heal-daemon"}, { .key = {"consistent-metadata"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "no", .op_version = {GD_OP_VERSION_3_7_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "If this option is enabled, readdirp will force " "lookups on those entries read whose read child is " "not the same as that of the parent. This will " "guarantee that all read operations on a file serve " "attributes from the same subvol as long as it holds " " a good copy of the file/dir.", }, {.key = {"arbiter-count"}, .type = GF_OPTION_TYPE_INT, .description = "subset of child_count. Has to be 0 or 1."}, { .key = {"thin-arbiter"}, .type = GF_OPTION_TYPE_STR, .op_version = {GD_OP_VERSION_4_1_0}, .flags = OPT_FLAG_SETTABLE, .tags = {"replicate"}, .description = "contains host:path of thin abriter brick", }, {.key = {"shd-max-threads"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 64, .default_value = "1", .op_version = {GD_OP_VERSION_3_7_12}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "Maximum number of parallel heals SHD can do per " "local brick. This can substantially lower heal times" ", but can also crush your bricks if you don't have " "the storage hardware to support this."}, { .key = {"shd-wait-qlength"}, .type = GF_OPTION_TYPE_INT, .min = 1, .max = 655536, .default_value = "1024", .op_version = {GD_OP_VERSION_3_7_12}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option can be used to control number of heals" " that can wait in SHD per subvolume", }, { .key = {"locking-scheme"}, .type = GF_OPTION_TYPE_STR, .value = {"full", "granular"}, .default_value = "full", .op_version = {GD_OP_VERSION_3_7_12}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "If this option is set to granular, self-heal will " "stop being compatible with afr-v1, which helps afr " "be more granular while self-healing", }, {.key = {"full-lock"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "yes", .op_version = {GD_OP_VERSION_3_13_2}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE, .tags = {"replicate"}, .description = "If this option is disabled, then the IOs will take " "range locks same as versions till 3.13.1."}, { .key = {"granular-entry-heal"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "no", .op_version = {GD_OP_VERSION_3_8_0}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "If this option is enabled, self-heal will resort to " "granular way of recording changelogs and doing entry " "self-heal.", }, { .key = {"favorite-child-policy"}, .type = GF_OPTION_TYPE_STR, .value = {"none", "size", "ctime", "mtime", "majority"}, .default_value = "none", .op_version = {GD_OP_VERSION_3_7_12}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option can be used to automatically resolve " "split-brains using various policies without user " "intervention. \"size\" picks the file with the " "biggest size as the source. \"ctime\" and \"mtime\" " "pick the file with the latest ctime and mtime " "respectively as the source. \"majority\" picks a file" " with identical mtime and size in more than half the " "number of bricks in the replica.", }, { .key = {"consistent-io"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "no", .description = "If this option is enabled, i/o will fail even if " "one of the bricks is down in the replicas", }, {.key = {"use-compound-fops"}, .type = GF_OPTION_TYPE_BOOL, .default_value = "no", .op_version = {GD_OP_VERSION_3_8_4}, .flags = OPT_FLAG_CLIENT_OPT | OPT_FLAG_SETTABLE | OPT_FLAG_DOC, .tags = {"replicate"}, .description = "This option exists only for backward compatibility " "and configuring it doesn't have any effect"}, {.key = {NULL}}, }; xlator_api_t xlator_api = { .init = init, .fini = fini, .notify = notify, .reconfigure = reconfigure, .mem_acct_init = mem_acct_init, .op_version = {1}, /* Present from the initial version */ .dumpops = &dumpops, .fops = &fops, .cbks = &cbks, .options = options, .identifier = "replicate", .category = GF_MAINTAINED, };
__label__pos
0.968097
Skip to main content xlxs's user avatar xlxs's user avatar xlxs's user avatar xlxs • Member for 8 years, 10 months • Last seen more than 2 years ago 6 votes Accepted How to scale proportionally 5 votes the opposite of subdivide 4 votes Is there any alternate to Mirror modifier 2 votes why my mesh get weird shading when I click smooth? 2 votes Accepted what do the difference between using GPU or CPU for rendering? 2 votes Accepted When I add a new object / mesh to the scene, I can't edit previous objects 1 vote Little particles appearing on render 1 vote Accepted How to make a "Force feedback" effect with physics? 1 vote How can I solidify a 2d shape so that the rim isn't solidified and the center is solidified the most ? (Think creating a dome from a sphere) 1 vote Sharp edges during sculpting 1 vote Accepted All but one objects not visible, yet in library, rendering. Alt+H doesnt work 1 vote Accepted How do I bring back the little menu under the main window? 1 vote Why rendered image is not like in the material preview frame? 1 vote Knife project tool is looking rough after I extrude on a curved surface 1 vote Concentric circles evenly spaced in UV mode 1 vote Accepted Why does this sphere have an outline? 1 vote Accepted Both a mirror and a 90 degree prism are not reflecting light on to a screen (cycles) 1 vote Accepted Playback from start instead of from current frame? 1 vote Accepted Bliniking shadow on rendering result 0 votes How to "stick" two faces of two objects? 0 votes Mesh Deformation due to rigging 0 votes How to import Downloaded Materials/Files? 0 votes Changing filename format based on current animation frame? 0 votes Need help with making fire 0 votes Selection Glitch 0 votes Can a model be both smooth and flat shaded? 0 votes Blender Disabling Windows Aero? 0 votes Can't Keep Loop Cut & Slide Where I Want With Measurements 0 votes Accepted Rigid body not starting as deactivated 0 votes How to bridge edges without creating a twist?
__label__pos
1
Neal Ford  | • Author, Thoughtworker, & Meme Wrangler Why Everyone (Eventually) Hates (or Leaves) Maven originally published: January 22, 2013 My colleague Martin Fowler recently posted a Bliki entry about InternalReprogrammability, lamenting the diminishing mind share of tools that allow extremely fine grained customizability, such as Emacs and Smalltalk. I agree whole heartedly with his sentiment; I covered some of the same ground in my Abstraction Distractions keynote, but I attempted to burrow a level deeper. In my keynote, I defined two types of extensibility/programability abstractions prevalent in the development world: composable and contextual. Plug-in based architectures are excellent examples of the contextual abstraction. The plug-in API provides a plethora of data structures and other useful context developers inherit from or summon via already existing methods. But to use the API, a developer must understand what that context provides, and that understanding is sometimes expensive. I frequently ask developers how often they make non-trivial changes to the way their editor/IDE behaves beyond the preferences dialog. Heavy IDE users do this much less frequently because extending a tool like Eclipse takes an immense amount of knowledge. Consider the complexity of implementing the example that motivated Martin’s blog – the desire for the editor to automatically add a blank line in certain contexts – in a traditional IDE. The knowledge and effort required for a seemingly trivial change prevents the change from occurring, leaving the developer with a perpetually dull tool. Contextual tools aren’t bad things at all – Eclipse and IntelliJ wouldn’t exist without that approach. Contextual tools provide a huge amount of infrastructure that developers don’t have to build. Once mastered, the intricacies of Eclipse’s API provide access to enormous encapsulated power…and there’s the rub: how encapsulated? In the late 1990’s, 4GLs were all the rage, and they exemplified the contextual approach. They built the context into the language itself: dBASE, FoxPro, Clipper, Paradox, PowerBuilder, Microsoft Access, and similar ilk all had database-inspired facilities directly in the language and tooling. Ultimately, 4GLs fell from grace because of Dietzler’s Law, which I defined in my book Productive Programmer, based on experiences by my colleague Terry Dietzler, who ran the Access projects for my employer at the time: Dietzler’s Law for Access Every Access project will eventually fail because, while 80% of what the user wants is fast and easy to create, and the next 10% is possible with difficulty, ultimately the last 10% is impossible because you can’t get far enough underneath the built-in abstractions, and users always want 100% of what they want. Ultimately Dietzler’s Law killed the market for 4GLs. While they made it easy to build simple things fast, they didn’t scale to meet the demands of the real world. We all returned to general purpose languages. Composable systems tend to consist of finer grained parts that are expected to be wired together in specific ways. Powerful exemplars of this abstraction show up in *-nix shells with the ability to chain disparate behaviors together to create new things. A famous story from 1992 illustrates just how powerful these abstractions are. Donald Knuth was asked to write a program to solve this text handling problem: read a file of text, determine the n most frequently used words, and print out a sorted list of those words along with their frequencies. He wrote a program consisting of more than ten pages of Pascal, designing (and documenting) a new algorithm along the way. Then, Doug McIlroy demonstrated a shell script that would easily fit within a Twitter post that solved the problem more simply, elegantly, and understandably (if you understand shell commands): tr -cs A-Za-z '\n' | tr A-Z a-z | sort | uniq -c | sort -rn | sed ${1}q I suspect that even the designers of Unix shells are often surprised at the inventive uses developers have wrought with their simple but powerfully composable abstractions. Contextual systems provide more scaffolding, better “out of the box” behavior, and contextual intelligence via that scaffolding. Thus, contextual systems tend to ease the friction of initial use by doing more for you. Huge global data structures sometimes hide behind inheritance in these systems, creating a huge footprint that shows up in derived extensions via their parents. Composable systems have less implicit behavior and initial ease of use but tend to provide more granular building blocks that lead to more eventual power. Well designed composable systems provide narrow local context within well encapsulated modules. For example, to make changes in Emacs such as the one Martin describes requires a bit of knowledge about buffers (which he’s working in) but little else: the knowledge required is local and small in scope. These abstractions apply to tools and frameworks as well, particularly tools that must scale in their power and sophistication along with projects, like build tools. By hard-won lesson, composable build tools scale (in time, complexity, and usefulness) better than contextual ones. Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy. However, trying to extend it in ways not designed into the API range in difficultly from hard to impossible, Dietzler’s Law Redux. This is especially true in tools where critical parts of how they function, like the ordering of tasks, is inaccessible without hacking. Which is why every project eventually hates Maven. Maven is a classic contextual tool: it is opinionated, rigid, generic, and dogmatic, which is exactly what is needed at the beginning of a project. Before anything exists, it’s nice for something to impose a structure, and to make it trivial to add behavior via plug-ins and other pre-built niceties. But over time, the project becomes less generic and more like a real, messy project. Early on, when no one knows enough to have opinions about things like lifecycle, a rigid system is good. Over time, though, project complexity requires developers to spawn opinions, and tools like Maven don’t care. Tools build atop languages tend to be more composable. My all time favorite build language for personal and project work (almost without regard to the project technology stack) is Rake, the build tool in the Ruby world. It is a fantastic combination of simplicity and power. When I first migrated from Ant to Rake, I started poking around the Rake documentation to find out what tasks were available in Rake, simliar to the giant list of tasks (and extensions) familiar in the Ant world. At first I was disgusted by the lack of documentation until I realized there wasn’t any for a reason: you can do anything you need within Rake tasks, because it’s just Ruby code. Rake has added some nice helpers for file list manipulation, but Rake mostly just handles tasks dependencies and housekeeping, getting out of the way of developers. People will acuse me of Maven-bashing, but I’m actually not – I’m trying to foster understanding for when it’s useful. No tool works perfectly in every context, and much grief visits projects who try to use tools outside their expiration date. Maven is perfect for starting new projects: it ensures consistency and provides a huge bang for the buck in terms of already existing functionality. But because something starts strong doesn’t mean that it scales well (in fact, almost always the opposite is true). The real trick is to use Maven until the day it starts fighting you, then find an alternative. Once you start fighting with Maven, it’ll never return to the rosy days when your relationship was young. Fortunately, at least one Maven “Get out of Jail Free” card exists in Gradle, which still understands the Maven stuff you already have, but it’s language rather than plug-in based, implemented as a Groovy domain specific language, making it more composable than Maven. Many contextualized systems eventually become more composable by redesigning them as DSLs. Consider the 4GLs from the 90s. Ruby on Rails and similar frameworks are just like those 4GLS, with a critical distinction: they are implemented as internal DSLs atop a general purpose language. When developers in those environments hit the upper percentages of Dietzler’s Law, they can drop below the framework back to the underlying general purpose language. Rake and Gradle are both DSLs, and I’ve come to believe that scripting builds is far too specific and unique to each project to use contextualized tools. I tend to prefer composable tools. They tend to have a steeper learning curve but deliver more power and scalability over time, which is why I’m a huge Emacs fan, and why Martin’s post on InternalReprogrammability struck a chord. Contextual tools are fantastic for the proper use; I use IntelliJ for Java coding, but Emacs for pretty much everything else, and I tend to seek out composable tools when there’s an option.    Follow Neal on Twitter music image opera image reading image Neal Ford  | • Author, Thoughtworker, & Meme Wrangler
__label__pos
0.777607
Centipede game From Wikipedia, the free encyclopedia Jump to: navigation, search For the arcade game, see Centipede (video game). Extensive Form Representation of a four-stage centipede game In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of a increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game. The unique subgame perfect equilibrium (and every Nash equilibrium) of these games indicates that the first player take the pot on the very first round of the game; however in empirical tests relatively few players do so, and as a result achieve a higher payoff than the payoff predicted by the equilibria analysis. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The Centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game. Play[edit] One possible version of a centipede game could be played as follows: Consider two players: Alice and Bob. Alice moves first. At the start of the game, Alice has two piles of coins in front of her: one pile contains 4 coins and the other pile contains 1 coin. Each player has two moves available: either "take" the larger pile of coins and give the smaller pile to the other player or "push" both piles across the table to the other player. Each time the piles of coins pass across the table, the quantity of coins in each pile doubles. For example, assume that Alice chooses to "push" the piles on her first move, handing the piles of 1 and 4 coins over to Bob, doubling them to 2 and 8. Bob could now use his first move to either "take" the pile of 8 coins and give 2 coins to Alice, or he can "push" the two piles back across the table again to Alice, again increasing the size of the piles to 4 and 16 coins. The game continues for a fixed number of rounds or until a player decides to end the game by pocketing a pile of coins. The addition of coins is taken to be an externality, as it is not contributed by either player. A second possible version of the centipede game is represented in the diagram above. In this version, passing the coins across the table is represented by a move of R (going across the row of the lattice, sometimes also represented by A for across) and pocketing the coins is a move D (down the lattice). The numbers 1 and 2 along the top of the diagram show the alternating decision-maker between two players denoted here as 1 and 2, and the numbers at the bottom of each branch show the payoff for players 1 and 2 respectively. Equilibrium analysis and backward induction[edit] Standard game theoretic tools predict that the first player will defect on the first round, taking the pile of coins for himself. In the centipede game, a Pure strategy consists of a set of actions (one for each choice point in the game, even though some of these choice points may never be reached) and a Mixed strategy is a probability distribution over the possible pure strategies. There are several pure strategy Nash equilibria of the centipede game and infinitely many mixed strategy Nash equilibria. However, there is only one subgame perfect equilibrium (a popular refinement to the Nash equilibrium concept). In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity. This, of course, means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities (even though they are never reached since the first player defects immediately) may be cooperative. Defection by the first player is the unique subgame perfect equilibrium and required by any Nash equilibrium, it can be established by backward induction. Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot. Since we suppose the second player will defect, the first player does better by defecting in the second to last round, taking a slightly higher payoff than she would have received by allowing the second player to defect in the last round. But knowing this, the second player ought to defect in the third to last round, taking a slightly higher payoff than he would have received by allowing the first player to defect in the second to last round. This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning can apply to any node in the game tree. In the example pictured above, this reasoning proceeds as follows. If we were to reach the last round of the game, Player 2 would do better by choosing d instead of r. However, given that 2 will choose d, 1 should choose D in the second to last round, receiving 3 instead of 2. Given that 1 would choose D in the second to last round, 2 should choose d in the third to last round, receiving 2 instead of 1. But given this, Player 1 should choose D in the first round, receiving 1 instead of 0. There are a large number of Nash equilibria in a centipede game, but in each, the first player defects on the first round and the second player defects in the next round frequently enough to dissuade the first player from passing. Being in a Nash equilibrium does not require that strategies be rational at every point in the game as in the subgame perfect equilibrium. This means that strategies that are cooperative in the never-reached later rounds of the game could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each round (even in the later rounds that are never reached). Another Nash equilibrium is for player 1 to defect on the first round, but pass on the third round and for player 2 to defect at any opportunity. Empirical results[edit] Several studies have demonstrated that the Nash equilibrium (and likewise, subgame perfect equilibrium) play is rarely observed. Instead, subjects regularly show partial cooperation, playing "R" (or "r") for several moves before eventually choosing "D" (or "d"). It is also rare for subjects to cooperate through the whole game. For examples see McKelvey and Palfrey (1992) and Nagel and Tang (1998). As in many other game theoretic experiments, scholars have investigated the effect of increasing the stakes. As with other games, for instance the ultimatum game, as the stakes increase the play approaches (but does not reach) Nash equilibrium play. Explanations[edit] Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis, several explanations of this behavior have been offered. Rosenthal (1981) suggested that if one has reason to believe her opponent will deviate from Nash behavior, then it may be advantageous to not defect on the first round. One reason to suppose that people may deviate from the equilibrium behavior is if some are altruistic. The basic idea is that if you are playing against an altruist, that person will always cooperate, and hence, to maximize your payoff you should defect on the last round rather than the first. If enough people are altruists, sacrificing the payoff of first-round defection is worth the price in order to determine whether or not your opponent is an altruist. Nagel and Tang (1998) suggest this explanation. Another possibility involves error. If there is a significant possibility of error in action, perhaps because your opponent has not reasoned completely through the backward induction, it may be advantageous (and rational) to cooperate in the initial rounds. However, Parco, Rapoport and Stein (2002) illustrated that the level of financial incentives can have a profound effect on the outcome in a three-player game: the larger the incentives are for deviation, the greater propensity for learning behavior in a repeated single-play experimental design to move toward the Nash equilibrium. Palacios-Huerta and Volij (2009) find that expert chess players play differently from college students. With a rising Elo, the probability of continuing the game declines; all Grandmasters in the experiment stopped at their first chance. They conclude that chess players are familiar with using backward induction reasoning and hence need less learning to reach the equilibrium. However, in an attempt to replicate these findings, Levitt, List, and Sadoff (2010) find strongly contradictory results, with zero of sixteen Grandmasters stopping the game at the first node. Significance[edit] Like the Prisoner's Dilemma, this game presents a conflict between self-interest and mutual benefit. If it could be enforced, both players would prefer that they both cooperate throughout the entire game. However, a player's self-interest or players' distrust can interfere and create a situation where both do worse than if they had blindly cooperated. Although the Prisoner's Dilemma has received substantial attention for this fact, the Centipede Game has received relatively less. Additionally, Binmore (2005) has argued that some real-world situations can be described by the Centipede game. One example he presents is the exchange of goods between parties that distrust each other. Another example Binmore likens to the Centipede game is the mating behavior of a hermaphroditic sea bass which takes turns exchanging eggs to fertilize.[citation needed] In these cases, we find cooperation to be abundant. Since the payoffs for some amount of cooperation in the Centipede game are so much larger than immediate defection, the "rational" solutions given by backward induction can seem paradoxical. This, coupled with the fact that experimental subjects regularly cooperate in the Centipede game, has prompted debate over the usefulness of the idealizations involved in the backward induction solutions, see Aumann (1995, 1996) and Binmore (1996). See also[edit] References[edit] • Aumann, R. (1995). "Backward Induction and Common Knowledge of Rationality". Games and Economic Behavior 8 (1): 6–19. doi:10.1016/S0899-8256(05)80015-6.  • ——— (1996). "A Reply to Binmore". Games and Economic Behavior 17 (1): 138–146. doi:10.1006/game.1996.0099.  • Binmore, K. (2005). Natural Justice. New York: Oxford University Press. ISBN 0-19-517811-4.  • ——— (1996). "A Note on Backward Induction". Games and Economic Behavior 17 (1): 135–137. doi:10.1006/game.1996.0098.  • Levitt, S. D.; List, J. A. & Sadoff, S. E. (2010). "Checkmate: Exploring Backward Induction Among Chess Players". American Economic Review 101 (2): 975–990. doi:10.1257/aer.101.2.975.  • McKelvey, R. & Palfrey, T. (1992). "An experimental study of the centipede game". Econometrica 60 (4): 803–836. doi:10.2307/2951567.  • Nagel, R. & Tang, F. F. (1998). "An Experimental Study on the Centipede Game in Normal Form: An Investigation on Learning". Journal of Mathematical Psychology 42 (2–3): 356–384. doi:10.1006/jmps.1998.1225.  • Palacios-Huerta, I. & Volij, O. (2009). "Field Centipedes". American Economic Review 99 (4): 1619–1635. doi:10.1257/aer.99.4.1619.  • Parco, J. E.; Rapoport, A. & Stein, W. E. (2002). "Effects of financial incentives on the breakdown of mutual trust". Psychological Science 13 (3): 292–297. doi:10.1111/1467-9280.00454.  • Rapoport, A.; Stein, W. E.; Parco, J. E. & Nicholas, T. E. (2003). "Equilibrium play and adaptive learning in a three-person centipede game". Games and Economic Behavior 43 (2): 239–265. doi:10.1016/S0899-8256(03)00009-5.  • Rosenthal, R. (1981). "Games of Perfect Information, Predatory Pricing, and the Chain Store". Journal of Economic Theory 25 (1): 92–100. doi:10.1016/0022-0531(81)90018-1.  External links[edit]
__label__pos
0.632295
ServerPlugin [Bukkit]Location eines Inventorys Dieses Thema im Forum "Programmierung" wurde erstellt von leonard_m_g, 30. Dezember 2012. Status des Themas: Es sind keine weiteren Antworten möglich. 1. leonard_m_g Offline leonard_m_g Registriert seit: 4. Januar 2012 Beiträge: 10 Hallo, ich bin gerade dabei ein Bukkit Plugin zu programmieren, dabei habe ich folgendes Problem: wenn das InventoryClickEvent aufgerufen wird möchte ich gerne an die Koordinaten des dabei involvierten Containers kommen, leider habe ich keine Ahnung wie ich das anstellen soll und ich habe auch keine Möglichkeit dazu in den JavaDocs von Bukkit gefunden:youno:, vielleicht kennt hier jemand eine Lösung für mein Problem...   #1 2. Der Slot und der Raw slot unterscheiden sich, so kannst du einen bestimmten Slot herrausfinden.   #2 3. Sn0wBlizz4rd Offline Sn0wBlizz4rd Registriert seit: 26. September 2012 Beiträge: 418 Minecraft: Sn0wBlizz4rd Ich glaube, er will wissen wenn man z.B. eine Craftingtable öffnet, die Position den Craftingtables herrauszufinden.   #3 4. leonard_m_g Offline leonard_m_g Registriert seit: 4. Januar 2012 Beiträge: 10 genau das meine ich Cubos, Entschuldigung wenn ich mich etwas unverständlich ausgedrückt habe, hatte vorhin nicht so viel Zeit...   #4 5. TimBone Online TimBone Hier einmal ein Ansatz von mir: Code (Text): 1.     @EventHandler 2.     public void onPlayerBlock(PlayerInteractEvent event){ 3.         Block block = event.getClickedBlock(); 4.         int block2 = event.getClickedBlock().getTypeId(); 5.         6.         if(event.getHandlers() instanceof Player){ 7.             if(block2 == 58){ 8.                 Location location = block.getLocation(); 9.                 System.out.println("[Prefix] Es wurde ein Crafting Table gefunden: " + location); 10.   11.             } 12.             if((block instanceof Chest) || (block instanceof DoubleChest) ){ 13.             Location location = block.getLocation(); 14.             System.out.println("[Prefix] Es wurde eine Chest gefunden: " + location); 15.             } 16.         }   #5 6. leonard_m_g Offline leonard_m_g Registriert seit: 4. Januar 2012 Beiträge: 10 ich möchte aber dass man eine Kiste öffnen kann und wenn man dann ein Item aus der Kiste nimmt überprüft wird ob man dazu berechtigt ist...   #6 7. Sag doch direkt was du suchst ;) Code (Text): 1. @EventHandler(priority = EventPriority.NORMAL)    public void thisCouldBeAMehtodFourtyTwo(InventoryClickEvent ev) { 2.         // Nur Amboss benutzen 3.         if (ev.getInventory().getName().equalsIgnoreCase("Repair")) { 4.             ItemStack curr = ev.getCurrentItem(); 5.             ItemStack cursor = ev.getCursor(); 6.   7.             if (curr != null && cursor != null) { 8.                 //Anzeige des Slots bzw. des Raw Slots => Initialisierung weiter oben! 9.                 logger.info(ev.getRawSlot() + "|" + ev.getSlot()); 10.                 if (ev.getRawSlot() == 1 && ev.getSlot() == 1) { 11.                     //QuickNDirty Permissionsüberprüfung 12.                     if(((Player)ev.getWhoClicked()).hasPermission("manf.is.awesome")){ 13.                         return; 14.                     } 15.                     //Blockt in diesem Fall des 2. Slot des Amboss 16.                     event.setCancled() 17.                 } 18.             } 19.         } 20.     } Sollte so funktionieren.   #7 8. leonard_m_g Offline leonard_m_g Registriert seit: 4. Januar 2012 Beiträge: 10 mit "dazu berechtigt" meinte ich ob man auf dieser Zone die nötigen rechte... und nicht ob man die nötigen permissions hat, ganz doof bin ich auch nicht:p   #8 9. Achso... Du willst also ermöglichen, dass man die Kiste öffnen kann, aber nix ändern... Wie wäre es mit Code (Text): 1.   2. if(ev.getInventory().getHolder() instanceof Chest){   Chest c = (Chest) ev.getInventory().getHolder(); 3.    Block b = c.getBlock() 4.    // .... 5. } 6.     #9 10. leonard_m_g Offline leonard_m_g Registriert seit: 4. Januar 2012 Beiträge: 10 Das war die Lösung, danke für eure Hilfe :D   #10 Status des Themas: Es sind keine weiteren Antworten möglich.
__label__pos
0.883601
2 I want to know how StandardSetController actually works. My expectation on the insight is to understand how salesforce is handling the result set, in contrary with normal re-querying of records with offset and limits. How does the controller behave differently/better compare to a custom class implementation? 2 Answers 2 5 The StandardSetController uses a database cursor to paginate through the results. Unlike using LIMIT+OFFSET, this method supports 10,000 rows of data, the results will not change each time a new page is pulled back from the database (we call this a "consistent view" of the data), no custom code is required to implement paginating, page sizing, and saving modifications, and you can also mass edit records without custom code. However, being a cursor also means that it only remains viable for 15 minutes, after which a new query must be issued. StandardSetController is typically better for naive paging implementations, since it requires the least amount of code and is trivial to use. The most powerful implementation of pagination I've ever written supported ~50,000 rows, an unlimited timeout, and a semi-consistent view (the rows were consistent but the field values were not), but did require ~100 lines of code, so it was fairly non-trivial in scope. Client-side pagination using remote actions or an API can support millions of rows with a stable view, if desired, but requires more memory and a rather significant amount of code. The StandardSetController offers the most functionality with the least amount of code needed to get up and running. 3 StandardController From the docs : A Visualforce controller is a set of instructions that specify what happens when a user interacts with the components specified in associated Visualforce markup, such as when a user clicks a button or link. Controllers also provide access to the data that should be displayed in a page, and can modify component behavior. You can find more information here, but basically a StandardController is for ONE record, i.e. if you'd like to create a new Visualforce page for a single record, you'd use a Standard Controller in your Apex. StandardSetController From the docs : Standard list controllers allow you to create Visualforce pages that can display or act on a set of records. Examples of existing Salesforce pages that work with a set of records include list pages, related lists, and mass action pages. You can find more information here, but basically Set (list) controllers are for MULTIPLE (or a list of) records, i.e. if you'd like to create a new Visualforce page for a List of records (or even from a selection of records on a List view), you'd use a Standard Set Controller in your Apex. Example of StandardSetController Apex Class public class opportunityList2Con { // ApexPages.StandardSetController must be instantiated // for standard list controllers public ApexPages.StandardSetController setCon { get { if(setCon == null) { setCon = new ApexPages.StandardSetController(Database.getQueryLocator( [SELECT Name, CloseDate FROM Opportunity])); } return setCon; } set; } // Initialize setCon and return a list of records public List<Opportunity> getOpportunities() { return (List<Opportunity>) setCon.getRecords(); } } Visualforce Page <apex:page controller="opportunityList2Con"> <apex:pageBlock> <apex:pageBlockTable value="{!opportunities}" var="o"> <apex:column value="{!o.Name}"/> <apex:column value="{!o.CloseDate}"/> </apex:pageBlockTable> </apex:pageBlock> </apex:page> For more information click here : StandardSetController You must log in to answer this question. Not the answer you're looking for? Browse other questions tagged .
__label__pos
0.938642
AWS IoT Developer Guide Creating an AWS IoT Rule You configure rules to route data from your connected things. Rules consist of the following: Rule name The name of the rule. Note We do not recommend using personally identifiable information in your rule names. Optional description A textual description of the rule. Note We do not recommend using personally identifiable information in your rule descriptions. SQL statement A simplified SQL syntax to filter messages received on an MQTT topic and push the data elsewhere. For more information, see AWS IoT SQL Reference. SQL version The version of the SQL rules engine to use when evaluating the rule. Although this property is optional, we strongly recommend that you specify the SQL version. If this property is not set, the default, 2015-10-08, is used. For more information, see SQL Versions. One or more actions The actions AWS IoT performs when executing the rule. For example, you can insert data into a DynamoDB table, write data to an Amazon S3 bucket, publish to an Amazon SNS topic, or invoke a Lambda function. An error action The action AWS IoT performs when it is unable to perform a rule's action. When you create a rule, be aware of how much data you are publishing on topics. If you create rules that include a wildcard topic pattern, they might match a large percentage of your messages, and you might need to increase the capacity of the AWS resources used by the target actions. Also, if you create a republish rule that includes a wildcard topic pattern, you can end up with a circular rule that causes an infinite loop. Note Creating and updating rules are administrator-level actions. Any user who has permission to create or update rules is able to access data processed by the rules. To create a rule (AWS CLI) Use the create-topic-rule command to create a rule: aws iot create-topic-rule --rule-name my-rule --topic-rule-payload file://my-rule.json The following is an example payload file with a rule that inserts all messages sent to the iot/test topic into the specified DynamoDB table. The SQL statement filters the messages and the role ARN grants AWS IoT permission to write to the DynamoDB table. { "sql": "SELECT * FROM 'iot/test'", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "dynamoDB": { "tableName": "my-dynamodb-table", "roleArn": "arn:aws:iam::123456789012:role/my-iot-role", "hashKeyField": "topic", "hashKeyValue": "${topic(2)}", "rangeKeyField": "timestamp", "rangeKeyValue": "${timestamp()}" } }] } The following is an example payload file with a rule that inserts all messages sent to the iot/test topic into the specified S3 bucket. The SQL statement filters the messages, and the role ARN grants AWS IoT permission to write to the Amazon S3 bucket. { "awsIotSqlVersion": "2016-03-23", "sql": "SELECT * FROM 'iot/test'", "ruleDisabled": false, "actions": [ { "s3": { "roleArn": "arn:aws:iam::123456789012:role/aws_iot_s3", "bucketName": "my-bucket", "key": "myS3Key" } } ] } The following is an example payload file with a rule that pushes data to Amazon Elasticsearch Service: { "sql":"SELECT *, timestamp() as timestamp FROM 'iot/test'", "ruleDisabled":false, "awsIotSqlVersion": "2016-03-23", "actions":[ { "elasticsearch":{ "roleArn":"arn:aws:iam::123456789012:role/aws_iot_es", "endpoint":"https://my-endpoint", "index":"my-index", "type":"my-type", "id":"${newuuid()}" } } ] } The following is an example payload file with a rule that invokes a Lambda function: { "sql": "expression", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "lambda": { "functionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-lambda-function" } }] } The following is an example payload file with a rule that publishes to an Amazon SNS topic: { "sql": "expression", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "sns": { "targetArn": "arn:aws:sns:us-west-2:123456789012:my-sns-topic", "roleArn": "arn:aws:iam::123456789012:role/my-iot-role" } }] } The following is an example payload file with a rule that republishes on a different MQTT topic: { "sql": "expression", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "republish": { "topic": "my-mqtt-topic", "roleArn": "arn:aws:iam::123456789012:role/my-iot-role" } }] } The following is an example payload file with a rule that pushes data to an Amazon Kinesis Data Firehose stream: { "sql": "SELECT * FROM 'my-topic'", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "firehose": { "roleArn": ""arn:aws:iam::123456789012:role/my-iot-role", "deliveryStreamName": "my-stream-name" } }] } The following is an example payload file with a rule that uses the Amazon Machine Learning machinelearning_predict function to republish to a topic if the data in the MQTT payload is classified as a 1. { "sql": "SELECT * FROM 'iot/test' where machinelearning_predict('my-model', 'arn:aws:iam::123456789012:role/my-iot-aml-role', *).predictedLabel=1", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "republish": { "roleArn": "arn:aws:iam::123456789012:role/my-iot-role", "topic": "my-mqtt-topic" } }] } The following is an example payload file with a rule that publishes messages to a Salesforce IoT Cloud input stream. { "sql": "expression", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "salesforce": { "token": "ABCDEFGHI123456789abcdefghi123456789", "url": "https://ingestion-cluster-id.my-env.sfdcnow.com/streams/stream-id/connection-id/my-event" } }] } The following is an example payload file with a rule that starts an execution of a Step Functions state machine. { "sql": "expression", "ruleDisabled": false, "awsIotSqlVersion": "2016-03-23", "actions": [{ "stepFunctions": { "stateMachineName": "myCoolStateMachine", "executionNamePrefix": "coolRunning", "roleArn": "arn:aws:iam::123456789012:role/my-iot-role" } }] }
__label__pos
0.985346
Network Working Group S. Holmer Internet-Draft H. Lundin Intended status: Informational Google Expires: January 9, 2017 G. Carlucci L. De Cicco S. Mascolo Politecnico di Bari July 8, 2016 A Google Congestion Control Algorithm for Real-Time Communication draft-ietf-rmcat-gcc-02 Abstract This document describes two methods of congestion control when using real-time communications on the World Wide Web (RTCWEB); one delay-based and one loss-based. It is published as an input document to the RMCAT working group on congestion control for media streams. The mailing list of that working group is [email protected]. Requirements Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119]. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on January 9, 2017. Copyright Notice Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction Congestion control is a requirement for all applications sharing the Internet resources [RFC2914]. Congestion control for real-time media is challenging for a number of reasons: This memo describes two congestion control algorithms that together are able to provide good performance and reasonable bandwidth sharing with other video flows using the same congestion control and with TCP flows that share the same links. The signaling used consists of experimental RTP header extensions and RTCP messages RFC 3550 [RFC3550] as defined in [abs-send-time], [I-D.alvestrand-rmcat-remb] and [I-D.holmer-rmcat-transport-wide-cc-extensions]. 1.1. Mathematical notation conventions The mathematics of this document have been transcribed from a more formula-friendly format. The following notational conventions are used: X_hat An estimate of the true value of variable X - conventionally marked by a circumflex accent on top of the variable name. X(i) The "i"th value of vector X - conventionally marked by a subscript i. E{X} The expected value of the stochastic variable X 2. System model The following elements are in the system: Together, loss-based controller and delay-based controller implement the congestion control algorithm. 3. Feedback and extensions There are two ways to implement the proposed algorithm. One where both the controllers are running at the send-side, and one where the delay-based controller runs on the receive-side and the loss-based controller runs on the send-side. The first version can be realized by using a per-packet feedback protocol as described in [I-D.holmer-rmcat-transport-wide-cc-extensions]. Here, the RTP receiver will record the arrival time and the transport-wide sequence number of each received packet, which will be sent back to the sender periodically using the transport-wide feedback message. The RECOMMENDED feedback interval is once per received video frame or at least once every 30 ms if audio-only or multi-stream. If the feedback overhead needs to be limited this interval can be increased to 100 ms. The sender will map the received {sequence number, arrival time} pairs to the send-time of each packet covered by the feedback report, and feed those timestamps to the delay-based controller. It will also compute a loss ratio based on the sequence numbers in the feedback message. The second version can be realized by having a delay-based controller at the receive-side, monitoring and processing the arrival time and size of incoming packets. The sender SHOULD use the abs-send-time RTP header extension [abs-send-time] to enable the receiver to compute the inter-group delay variation. The output from the delay-based controller will be a bitrate, which will be sent back to the sender using the REMB feedback message [I-D.alvestrand-rmcat-remb]. The packet loss ratio is sent back via RTCP receiver reports. At the sender the bitrate in the REMB message and the fraction of packets lost are fed into the loss-based controller, which outputs a final target bitrate. It is RECOMMENDED to send the REMB message as soon as congestion is detected, and otherwise at least once every second. 4. Sending Engine Pacing is used to actuate the target bitrate computed by the controllers. When media encoder produces data, this is fed into a Pacer queue. The Pacer sends a group of packets to the network every burst_time interval. RECOMMENDED value for burst_time is 5 ms. The size of a group of packets is computed as the product between the target bitrate and the burst_time. 5. Delay-based control The delay-based control algorithm can be further decomposed into four parts: a pre-filtering, an arrival-time filter, an over-use detector, and a rate controller. 5.1. Arrival-time model This section describes an adaptive filter that continuously updates estimates of network parameters based on the timing of the received groups of packets. We define the inter-arrival time, t(i) - t(i-1), as the difference in arrival time of two groups of packets. Correspondingly, the inter-departure time, T(i) - T(i-1), is defined as the difference in departure-time of two groups of packets. Finally, the inter-group delay variation, d(i), is defined as the difference between the inter-arrival time and the inter-departure time. Or interpreted differently, as the difference between the delay of group i and group i-1. d(i) = t(i) - t(i-1) - (T(i) - T(i-1)) An inter-departure time is computed between consecutive groups as T(i) - T(i-1), where T(i) is the departure timestamp of the last packet in the current packet group being processed. Any packets received out of order are ignored by the arrival-time model. Each group is assigned a receive time t(i), which corresponds to the time at which the last packet of the group was received. A group is delayed relative to its predecessor if t(i) - t(i-1) > T(i) - T(i-1), i.e., if the inter-arrival time is larger than the inter-departure time. We can model the inter-group delay variation as: d(i) = w(i) Here, w(i) is a sample from a stochastic process W, which is a function of the link capacity, the current cross traffic, and the current sent bitrate. We model W as a white Gaussian process. If we are over-using the channel we expect the mean of w(i) to increase, and if a queue on the network path is being emptied, the mean of w(i) will decrease; otherwise the mean of w(i) will be zero. Breaking out the mean, m(i), from w(i) to make the process zero mean, we get Equation 1 d(i) = m(i) + v(i) The noise term v(i) represents network jitter and other delay effects not captured by the model. 5.2. Pre-filtering The pre-filtering aims at handling delay transients caused by channel outages. During an outage, packets being queued in network buffers, for reasons unrelated to congestion, are delivered in a burst when the outage ends. The pre-filtering merges together groups of packets that arrive in a burst. Packets are merged in the same group if one of these two conditions holds: 5.3. Arrival-time filter The parameter d(i) is readily available for each group of packets, i > 1. We want to estimate m(i) and use this estimate to detect whether or not the bottleneck link is over-used. The parameter can be estimated by any adaptive filter – we are using the Kalman filter. Let m(i) be the estimate at time i We model the state evolution from time i to time i+1 as m(i+1) = m(i) + u(i) where u(i) is the state noise that we model as a stationary process with Gaussian statistic with zero mean and variance q(i) = E{u(i)^2} q(i) is RECOMMENDED equal to 10^-3 Given equation 1 we get d(i) = m(i) + v(i) where v(i) is zero mean white Gaussian measurement noise with variance var_v = E{v(i)^2} The Kalman filter recursively updates our estimate m_hat(i) as z(i) = d(i) - m_hat(i-1) m_hat(i) = m_hat(i-1) + z(i) * k(i) e(i-1) + q(i) k(i) = ---------------------------------------- var_v_hat(i) + (e(i-1) + q(i)) e(i) = (1 - k(i)) * (e(i-1) + q(i)) The variance var_v(i) = E{v(i)^2} is estimated using an exponential averaging filter, modified for variable sampling rate var_v_hat(i) = max(alpha * var_v_hat(i-1) + (1-alpha) * z(i)^2, 1) alpha = (1-chi)^(30/(1000 * f_max)) where f_max = max {1/(T(j) - T(j-1))} for j in i-K+1,...,i is the highest rate at which the last K packet groups have been received and chi is a filter coefficient typically chosen as a number in the interval [0.1, 0.001]. Since our assumption that v(i) should be zero mean WGN is less accurate in some cases, we have introduced an additional outlier filter around the updates of var_v_hat. If z(i) > 3*sqrt(var_v_hat) the filter is updated with 3*sqrt(var_v_hat) rather than z(i). For instance v(i) will not be white in situations where packets are sent at a higher rate than the channel capacity, in which case they will be queued behind each other. 5.4. Over-use detector The inter-group delay variation estimate m(i), obtained as the output of the arrival-time filter, is compared with a threshold del_var_th(i). An estimate above the threshold is considered as an indication of over-use. Such an indication is not enough for the detector to signal over-use to the rate control subsystem. A definitive over-use will be signaled only if over-use has been detected for at least overuse_time_th milliseconds. However, if m(i) < m(i-1), over-use will not be signaled even if all the above conditions are met. Similarly, the opposite state, under-use, is detected when m(i) < -del_var_th(i). If neither over-use nor under-use is detected, the detector will be in the normal state. The threshold del_var_th has a remarkable impact on the overall dynamics and performance of the algorithm. In particular, it has been shown that using a static threshold del_var_th, a flow controlled by the proposed algorithm can be starved by a concurrent TCP flow [Pv13]. This starvation can be avoided by increasing the threshold del_var_th to a sufficiently large value. The reason is that, by using a larger value of del_var_th, a larger queuing delay can be tolerated, whereas with a small del_var_th, the over-use detector quickly reacts to a small increase in the offset estimate m(i) by generating an over-use signal that reduces the delay-based estimate of the available bandwidth A_hat (see Section 4.4). Thus, it is necessary to dynamically tune the threshold del_var_th to get good performance in the most common scenarios, such as when competing with loss-based flows. For this reason, we propose to vary the threshold del_var_th(i) according to the following dynamic equation: del_var_th(i) = del_var_th(i-1) + (t(i)-t(i-1)) * K(i) * (|m(i)|-del_var_th(i-1)) with K(i)=K_d if |m(i)| < del_var_th(i-1) or K(i)=K_u otherwise. The rationale is to increase del_var_th(i) when m(i) is outside of the range [-del_var_th(i-1),del_var_th(i-1)], whereas, when the offset estimate m(i) falls back into the range, del_var_th is decreased. In this way when m(i) increases, for instance due to a TCP flow entering the same bottleneck, del_var_th(i) increases and avoids the uncontrolled generation of over-use signals which may lead to starvation of the flow controlled by the proposed algorithm [Pv13]. Moreover, del_var_th(i) SHOULD NOT be updated if this condition holds: |m(i)| - del_var_th(i) > 15 It is also RECOMMENDED to clamp del_var_th(i) to the range [6, 600], since a too small del_var_th(i) can cause the detector to become overly sensitive. On the other hand, when m(i) falls back into the range [-del_var_th(i-1),del_var_th(i-1)] the threshold del_var_th(i) is decreased so that a lower queuing delay can be achieved. It is RECOMMENDED to choose K_u > K_d so that the rate at which del_var_th is increased is higher than the rate at which it is decreased. With this setting it is possible to increase the threshold in the case of a concurrent TCP flow and prevent starvation as well as enforcing intra-protocol fairness. RECOMMENDED values for del_var_th(0), overuse_time_th, K_u and K_d are respectively 12.5 ms, 10 ms, 0.01 and 0.00018. 5.5. Rate control The rate control is split in two parts, one controlling the bandwidth estimate based on delay, and one controlling the bandwidth estimate based on loss. Both are designed to increase the estimate of the available bandwidth A_hat as long as there is no detected congestion and to ensure that we will eventually match the available bandwidth of the channel and detect an over-use. As soon as over-use has been detected, the available bandwidth estimated by the delay-based controller is decreased. In this way we get a recursive and adaptive estimate of the available bandwidth. In this document we make the assumption that the rate control subsystem is executed periodically and that this period is constant. The rate control subsystem has 3 states: Increase, Decrease and Hold. "Increase" is the state when no congestion is detected; "Decrease" is the state where congestion is detected, and "Hold" is a state that waits until built-up queues have drained before going to "increase" state. The state transitions (with blank fields meaning "remain in state") are: +----+--------+-----------+------------+--------+ | \ State | Hold | Increase |Decrease| | \ | | | | | Signal\ | | | | +--------+----+-----------+------------+--------+ | Over-use | Decrease | Decrease | | +-------------+-----------+------------+--------+ | Normal | Increase | | Hold | +-------------+-----------+------------+--------+ | Under-use | | Hold | Hold | +-------------+-----------+------------+--------+ The subsystem starts in the increase state, where it will stay until over-use or under-use has been detected by the detector subsystem. On every update the delay-based estimate of the available bandwidth is increased, either multiplicatively or additively, depending on its current state. The system does a multiplicative increase if the current bandwidth estimate appears to be far from convergence, while it does an additive increase if it appears to be closer to convergence. We assume that we are close to convergence if the currently incoming bitrate, R_hat(i), is close to an average of the incoming bitrates at the time when we previously have been in the Decrease state. "Close" is defined as three standard deviations around this average. It is RECOMMENDED to measure this average and standard deviation with an exponential moving average with the smoothing factor 0.95, as it is expected that this average covers multiple occasions at which we are in the Decrease state. Whenever valid estimates of these statistics are not available, we assume that we have not yet come close to convergence and therefore remain in the multiplicative increase state. If R_hat(i) increases above three standard deviations of the average max bitrate, we assume that the current congestion level has changed, at which point we reset the average max bitrate and go back to the multiplicative increase state. R_hat(i) is the incoming bitrate measured by the delay-based controller over a T seconds window: R_hat(i) = 1/T * sum(L(j)) for j from 1 to N(i) N(i) is the number of packets received the past T seconds and L(j) is the payload size of packet j. A window between 0.5 and 1 second is RECOMMENDED. During multiplicative increase, the estimate is increased by at most 8% per second. eta = 1.08^min(time_since_last_update_ms / 1000, 1.0) A_hat(i) = eta * A_hat(i-1) During the additive increase the estimate is increased with at most half a packet per response_time interval. The response_time interval is estimated as the round-trip time plus 100 ms as an estimate of over-use estimator and detector reaction time. response_time_ms = 100 + rtt_ms alpha = 0.5 * min(time_since_last_update_ms / response_time_ms, 1.0) A_hat(i) = A_hat(i-1) + max(1000, alpha * expected_packet_size_bits) expected_packet_size_bits is used to get a slightly slower slope for the additive increase at lower bitrates. It can for instance be computed from the current bitrate by assuming a frame rate of 30 frames per second: bits_per_frame = A_hat(i-1) / 30 packets_per_frame = ceil(bits_per_frame / (1200 * 8)) avg_packet_size_bits = bits_per_frame / packets_per_frame Since the system depends on over-using the channel to verify the current available bandwidth estimate, we must make sure that our estimate does not diverge from the rate at which the sender is actually sending. Thus, if the sender is unable to produce a bit stream with the bitrate the congestion controller is asking for, the available bandwidth estimate should stay within a given bound. Therefore we introduce a threshold A_hat(i) < 1.5 * R_hat(i) When an over-use is detected the system transitions to the decrease state, where the delay-based available bandwidth estimate is decreased to a factor times the currently incoming bitrate. A_hat(i) = beta * R_hat(i) beta is typically chosen to be in the interval [0.8, 0.95], 0.85 is the RECOMMENDED value. When the detector signals under-use to the rate control subsystem, we know that queues in the network path are being emptied, indicating that our available bandwidth estimate A_hat is lower than the actual available bandwidth. Upon that signal the rate control subsystem will enter the hold state, where the receive-side available bandwidth estimate will be held constant while waiting for the queues to stabilize at a lower level – a way of keeping the delay as low as possible. This decrease of delay is wanted, and expected, immediately after the estimate has been reduced due to over-use, but can also happen if the cross traffic over some links is reduced. It is RECOMMENDED that the routine to update A_hat(i) is run at least once every response_time interval. 5.6. Parameters settings Parameter Description RECOMMENDED Value burst_time Time limit in milliseconds between packet bursts which identifies a group 5 ms q State noise covariance matrix q = 10^-3 e(0) Initial value of the system error covariance e(0) = 0.1 chi Coefficient used for the measured noise variance [0.1, 0.001] del_var_th(0) Initial value for the adaptive threshold 12.5 ms overuse_time_th Time required to trigger an overuse signal 10 ms K_u Coefficient for the adaptive threshold 0.01 K_d Coefficient for the adaptive threshold 0.00018 T Time window for measuring the received bitrate [0.5, 1] s beta Decrease rate factor 0.85 Table 1: RECOMMENDED values for delay based controller 6. Loss-based control A second part of the congestion controller bases its decisions on the round-trip time, packet loss and available bandwidth estimates A_hat received from the delay-based controller. The available bandwidth estimates computed by the loss-based controller are denoted with As_hat. The available bandwidth estimates A_hat produced by the delay-based controller are only reliable when the size of the queues along the path sufficiently large. If the queues are very short, over-use will only be visible through packet losses, which are not used by the delay-based controller. The loss-based controller SHOULD run every time feedback from the receiver is received. The loss-based estimate As_hat is compared with the delay-based estimate A_hat. The actual sending rate is set as the minimum between As_hat and A_hat. We motivate the packet loss thresholds by noting that if the transmission channel has a small amount of packet loss due to over-use, that amount will soon increase if the sender does not adjust his bitrate. Therefore we will soon enough reach above the 10% threshold and adjust As_hat(i). However, if the packet loss ratio does not increase, the losses are probably not related to self-inflicted congestion and therefore we should not react on them. 7. Interoperability Considerations In case a sender implementing these algorithms talks to a receiver which do not implement any of the proposed RTCP messages and RTP header extensions, it is suggested that the sender monitors RTCP receiver reports and uses the fraction of lost packets and the round-trip time as input to the loss-based controller. The delay-based controller should be left disabled. 8. Implementation Experience This algorithm has been implemented in the open-source WebRTC project, has been in use in Chrome since M23, and is being used by Google Hangouts. Deployment of the algorithm have revealed problems related to, e.g, congested or otherwise problematic WiFi networks, which have led to algorithm improvements. The algorithm has also been tested in a multi-party conference scenario with a conference server which terminates the congestion control between endpoints. This ensures that no assumptions are being made by the congestion control about maximum send and receive bitrates, etc., which typically is out of control for a conference server. 9. Further Work This draft is offered as input to the congestion control discussion. Work that can be done on this basis includes: 10. IANA Considerations This document makes no request of IANA. Note to RFC Editor: this section may be removed on publication as an RFC. 11. Security Considerations An attacker with the ability to insert or remove messages on the connection would have the ability to disrupt rate control. This could make the algorithm to produce either a sending rate under-utilizing the bottleneck link capacity, or a too high sending rate causing network congestion. In this case, the control information is carried inside RTP, and can be protected against modification or message insertion using SRTP, just as for the media. Given that timestamps are carried in the RTP header, which is not encrypted, this is not protected against disclosure, but it seems hard to mount an attack based on timing information only. 12. Acknowledgements Thanks to Randell Jesup, Magnus Westerlund, Varun Singh, Tim Panton, Soo-Hyun Choo, Jim Gettys, Ingemar Johansson, Michael Welzl and others for providing valuable feedback on earlier versions of this draft. 13. References 13.1. Normative References , " [abs-send-time]RTP Header Extension for Absolute Sender Time" [I-D.alvestrand-rmcat-remb] Alvestrand, H., "RTCP message for Receiver Estimated Maximum Bitrate", Internet-Draft draft-alvestrand-rmcat-remb-03, October 2013. [I-D.holmer-rmcat-transport-wide-cc-extensions] Holmer, S., Flodman, M. and E. Sprang, RTP Extensions for Transport-wide Congestion Control", Internet-Draft draft-holmer-rmcat-transport-wide-cc-extensions-01, October 2015. [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997. [RFC3550] Schulzrinne, H., Casner, S., Frederick, R. and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550, July 2003. 13.2. Informative References [Pv13] De Cicco, L., Carlucci, G. and S. Mascolo, "Understanding the Dynamic Behaviour of the Google Congestion Control", Packet Video Workshop , December 2013. [RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, RFC 2914, DOI 10.17487/RFC2914, September 2000. Appendix A. Change log A.1. Version -00 to -01 A.2. Version -01 to -02 A.3. Version -02 to -03 A.4. rtcweb-03 to rmcat-00 Renamed draft to link the draft name to the RMCAT WG. A.5. rmcat -00 to -01 Spellcheck. Otherwise no changes, this is a "keepalive" release. A.6. rmcat -01 to -02 A.7. rmcat -02 to -03 A.8. ietf-rmcat -00 to ietf-rmcat -01 A.9. ietf-rmcat -01 to ietf-rmcat -02 Authors' Addresses Stefan Holmer Google Kungsbron 2 Stockholm, 11122 Sweden EMail: [email protected] Henrik Lundin Google Kungsbron 2 Stockholm, 11122 Sweden EMail: [email protected] Gaetano Carlucci Politecnico di Bari Via Orabona, 4 Bari, 70125 Italy EMail: [email protected] Luca De Cicco Politecnico di Bari Via Orabona, 4 Bari, 70125 Italy EMail: [email protected] Saverio Mascolo Politecnico di Bari Via Orabona, 4 Bari, 70125 Italy EMail: [email protected]
__label__pos
0.648571
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I am using django admin interface for an application. I have some models in an app named 'books'.There are two users 'manager' and 'employee' other than admin (with superuser status). The 'employee' user can only add models. The 'manager' can add, change and delete every models in books app.I gave the permissions through the admin interface. But in shell has_perm(books.delete_ModelName) for 'manager' returns False. >>> u = User.objects.get(username__exact="manager") >>> u.has_perm("books.delete_ModelName") False When giving superstatus to 'manager' through admin interface has_perm(books.delete_ModelName) returns True. Why this happens? I want to set access to a specific page based on this permission. Is there any work around? share|improve this question 1 Answer 1 up vote 1 down vote accepted Got the problem solved. The problem was with the ModelName. if the name of a model is 'ModelName', it is dumped as modelname in permissions. So here it should be >>> u.has_perm("books.delete_modelname") share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.509843
Topology From Wikipedia, the free encyclopedia Jump to navigation Jump to search Möbius strips, which have only one surface and one edge, are a kind of object studied in topology. In mathematics, topology (from the Greek words τόπος, 'place, location', and λόγος, 'study') is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending, but not tearing or gluing. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of a topological space, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. Basic examples of topological properties are: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs and analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. A three-dimensional depiction of a thickened trefoil knot, the simplest non-trivial knot Motivation[edit] The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside. In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory. A continuous deformation (a type of homeomorphism) of a mug into a doughnut (torus) and a cow into a sphere Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes. To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere. Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing. A traditional joke is that a topologist cannot distinguish a coffee mug from a doughnut, since a sufficiently pliable doughnut could be reshaped to a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle.[1] Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object. Equivalence classes of the Latin alphabet in the sans-serif font Homeomorphism Homotopy equivalence {A,R} {B} {C,G,I,J,L,M,N,S,U,V,W,Z}, {D,O} {E,F,T,Y} {H,K}, {P,Q} {X} {A,R,D,O,P,Q} {B}, {C,E,F,G,H,I,J,K,L,M,N,S,T,U,V,W,X,Y,Z} An introductory exercise is to classify the uppercase letters of the English alphabet according to homeomorphism and homotopy equivalence. The result depends on the font used, and on whether the strokes making up the letters have some thickness or are ideal curves with no thickness. The figures here use the sans-serif Myriad font and are assumed to consist of ideal curves without thickness. Homotopy equivalence is a coarser relationship than homeomorphism; a homotopy equivalence class can contain several homeomorphism classes. The simple case of homotopy equivalence described above can be used here to show two letters are homotopy equivalent. For example, O fits inside P and the tail of the P can be squished to the "hole" part. Homeomorphism classes are: • no holes corresponding with C, G, I, J, L, M, N, S, U, V, W, and Z; • no holes and three tails corresponding with E, F, T, and Y; • no holes and four tails corresponding with X; • one hole and no tail corresponding with D and O; • one hole and one tail corresponding with P and Q; • one hole and two tails corresponding with A and R; • two holes and no tail corresponding with B; and • a bar with four tails corresponding with H and K; the "bar" on the K is almost too short to see. Homotopy classes are larger, because the tails can be squished down to a point. They are: • one hole, • two holes, and • no holes. To classify the letters correctly, we must show that two letters in the same class are equivalent and two letters in different classes are not equivalent. In the case of homeomorphism, this can be done by selecting points and showing their removal disconnects the letters differently. For example, X and Y are not homeomorphic because removing the center point of the X leaves four pieces; whatever point in Y corresponds to this point, its removal can leave at most three pieces. The case of homotopy equivalence is harder and requires a more elaborate argument showing an algebraic invariant, such as the fundamental group, is different on the supposedly differing classes. Letter topology has practical relevance in stencil typography. For instance, Braggadocio font stencils are made of one connected piece of material. History[edit] The Seven Bridges of Königsberg was a problem solved by Euler. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries.[2] Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology.[2] On 14 November 1750, Euler wrote to a friend that he had realised the importance of the edges of a polyhedron. This led to his polyhedron formula, VE + F = 2 (where V, E, and F respectively indicate the number of vertices, edges, and faces of the polyhedron). Some authorities regard this analysis as the first theorem, signalling the birth of topology.[3] Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti.[4] Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print.[5] The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated".[6] Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology.[4] Topological characteristics of closed 2-manifolds[4] Manifold Euler num Orientability Betti numbers Torsion coefficient (1-dim) b0 b1 b2 Sphere 2 Orientable 1 0 1 none Torus 0 Orientable 1 2 1 none 2-holed torus −2 Orientable 1 4 1 none g-holed torus (genus g) 2 − 2g Orientable 1 2g 1 none Projective plane 1 Non-orientable 1 0 0 2 Klein bottle 0 Non-orientable 1 1 0 2 Sphere with c cross-caps (c > 0) 2 − c Non-orientable 1 c − 1 0 2 2-Manifold with g holes and c cross-caps (c > 0) 2 − (2g + c) Non-orientable 1 (2g + c) − 1 0 2 Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906.[7] A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space.[8] Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.[9] Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series. For further developments, see point-set topology and algebraic topology. Concepts[edit] Topologies on sets[edit] The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology tells how elements of a set relate spatially to each other. The same set can have different topologies. For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies. Formally, let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: 1. Both the empty set and X are elements of τ. 2. Any union of elements of τ is an element of τ. 3. Any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (that is, its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. An open subset of X which contains a point x is called a neighborhood of x. Continuous functions and homeomorphisms[edit] A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties, and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. But the circle is not homeomorphic to the doughnut. Manifolds[edit] While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension n. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds). Topics[edit] General topology[edit] General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology.[10][11] It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The words nearby, arbitrarily small, and far apart can all be made precise by using open sets. Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected. Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r. Many common spaces are topological spaces whose topology can be defined by a metric. This is the case of the real line, the complex plane, real and complex vector spaces and Euclidean spaces. Having a metric simplifies many proofs. Algebraic topology[edit] Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces.[12] The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. The most important of these invariants are homotopy groups, homology, and cohomology. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. Differential topology[edit] Differential topology is the field dealing with differentiable functions on differentiable manifolds.[13] It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume. Geometric topology[edit] Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology.[14] Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries. 2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. Generalizations[edit] Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory,[15] while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories, and with that the definition of general cohomology theories.[16] Applications[edit] Biology[edit] Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis.[17] Topology is also used in evolutionary biology to represent the relationship between phenotype and genotype.[18] Phenotypic forms that appear quite different can be separated by only a few mutations depending on how genetic changes map to phenotypic changes during development. In neuroscience, topological quantities like the Euler characteristic and Betti number have been used to measure the complexity of patterns of activity in neural networks. Computer science[edit] Topological data analysis uses techniques from algebraic topology to determine the large scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is to: 1. Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter. 2. Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology.[19] 3. Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode.[19] Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties.[20] Physics[edit] Topology is relevant to physics in areas such as condensed matter physics,[21] quantum field theory and physical cosmology. The topological dependence of mechanical properties in solids is of interest in disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials.[22] The compressive strength of crumpled topologies is studied in attempts to understand the high strength to weight of such structures that are mostly empty space.[23] Topology is of further significance in Contact mechanics where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. The topological classification of Calabi-Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings.[24] In cosmology, topology can be used to describe the overall shape of the universe.[25] This area of research is commonly known as spacetime topology. Robotics[edit] The possible positions of a robot can be described by a manifold called configuration space.[26] In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose.[27] Games and puzzles[edit] Tanglement puzzles are based on topological aspects of the puzzle's shapes and components.[28][29][30] Fiber art[edit] In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order which surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path.[31] See also[edit] References[edit] Citations[edit] 1. ^ Hubbard, John H.; West, Beverly H. (1995). Differential Equations: A Dynamical Systems Approach. Part II: Higher-Dimensional Systems. Texts in Applied Mathematics. 18. Springer. p. 204. ISBN 978-0-387-94377-0. 2. ^ a b Croom 1989, p. 7 3. ^ Richeson 2008, p. 63; Aleksandrov 1969, p. 204 4. ^ a b c Richeson (2008) 5. ^ Listing, Johann Benedict, "Vorstudien zur Topologie", Vandenhoeck und Ruprecht, Göttingen, p. 67, 1848 6. ^ Tait, Peter Guthrie (1 February 1883). "Johann Benedict Listing (obituary)". Nature. 27 (692): 316–317. Bibcode:1883Natur..27..316P. doi:10.1038/027316a0. 7. ^ Fréchet, Maurice (1906). Sur quelques points du calcul fonctionnel. PhD dissertation. OCLC 8897542. 8. ^ Hausdorff, Felix, "Grundzüge der Mengenlehre", Leipzig: Veit. In (Hausdorff Werke, II (2002), 91–576) 9. ^ Croom 1989, p. 129 10. ^ Munkres, James R. Topology. Vol. 2. Upper Saddle River: Prentice Hall, 2000. 11. ^ Adams, Colin Conrad, and Robert David Franzosa. Introduction to topology: pure and applied. Pearson Prentice Hall, 2008. 12. ^ Allen Hatcher, Algebraic topology. (2002) Cambridge University Press, xii+544 pp. ISBN 0-521-79160-X, 0-521-79540-0. 13. ^ Lee, John M. (2006). Introduction to Smooth Manifolds. Springer-Verlag. ISBN 978-0-387-95448-6. 14. ^ R.B. Sher and R.J. Daverman (2002), Handbook of Geometric Topology, North-Holland. ISBN 0-444-82432-4 15. ^ Johnstone, Peter T. (1983). "The point of pointless topology". Bulletin of the American Mathematical Society. 8 (1): 41–53. doi:10.1090/s0273-0979-1983-15080-2. 16. ^ Artin, Michael (1962). Grothendieck topologies. Cambridge, MA: Harvard University, Dept. of Mathematics. Zbl 0208.48701. 17. ^ Adams, Colin (2004). The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots. American Mathematical Society. ISBN 978-0-8218-3678-1.CS1 maint: ref=harv (link) 18. ^ Stadler, Bärbel M.R.; Stadler, Peter F.; Wagner, Günter P.; Fontana, Walter (2001). "The Topology of the Possible: Formal Spaces Underlying Patterns of Evolutionary Change". Journal of Theoretical Biology. 213 (2): 241–274. CiteSeerX 10.1.1.63.7808. doi:10.1006/jtbi.2001.2423. PMID 11894994. 19. ^ a b Gunnar Carlsson (April 2009). "Topology and data" (PDF). Bulletin (New Series) of the American Mathematical Society. 46 (2): 255–308. doi:10.1090/S0273-0979-09-01249-X. 20. ^ Vickers, Steve (1996). Topology via Logic. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press. ISBN 9780521576512. 21. ^ "The Nobel Prize in Physics 2016". Nobel Foundation. 4 October 2016. Retrieved 12 October 2016. 22. ^ Stephenson, C.; et., al. (2017). "Topological properties of a self-assembled electrical network via ab initio calculation". Sci. Rep. 7: 41621. Bibcode:2017NatSR...741621S. doi:10.1038/srep41621. PMC 5290745. PMID 28155863. 23. ^ Cambou, Anne Dominique; Narayanan, Menon (2011). "Three-dimensional structure of a sheet crumpled into a ball". Proceedings of the National Academy of Sciences. 108 (36): 14741–14745. arXiv:1203.5826. Bibcode:2011PNAS..10814741C. doi:10.1073/pnas.1019192108. PMC 3169141. PMID 21873249. 24. ^ Yau, S. & Nadis, S.; The Shape of Inner Space, Basic Books, 2010. 25. ^ The Shape of Space: How to Visualize Surfaces and Three-dimensional Manifolds 2nd ed (Marcel Dekker, 1985, ISBN 0-8247-7437-X) 26. ^ John J. Craig, Introduction to Robotics: Mechanics and Control, 3rd Ed. Prentice-Hall, 2004 27. ^ Farber, Michael (2008). Invitation to Topological Robotics. European Mathematical Society. ISBN 9783037190548. 28. ^ Horak, Mathew (2006). "Disentangling Topological Puzzles by Using Knot Theory". Mathematics Magazine. 79 (5): 368–375. doi:10.2307/27642974. JSTOR 27642974.. 29. ^ http://sma.epfl.ch/Notes.pdf A Topological Puzzle, Inta Bertuccioni, December 2003. 30. ^ https://www.futilitycloset.com/the-figure-8-puzzle The Figure Eight Puzzle, Science and Math, June 2012. 31. ^ Eckman, Edie (2012). Connect the shapes crochet motifs: creative techniques for joining motifs of all shapes. Storey Publishing. ISBN 9781603429733. Bibliography[edit] Further reading[edit] External links[edit]
__label__pos
0.862183
Lesson 26.2 – Hiding Unvisited Locations on the World Map Now that we have a map, it would be nice to only show the images for the locations the player has visited. That is what we’ll add in this lesson.   NOTE: In the last lesson, there wasn’t an image for the Bridge location. That’s because I used the images from the WPF version of these lessons, and there is no Bridge location in that game. So, if you finished Lesson 26.1, and you don’t have a Bridge image (and you don’t have six columns of PictureBox controls), please go back to Lesson 26.1 and make the changes to add the missing column. This will require downloading the location images again (to get the Bridge image), and changing the WorldMap form (to add the new column and display the Bridge image).   STEP 1: Add FogLocation.png to SuperAdventure\Images After adding it, set its properties to: Build Action: Embedded Resource Copy to Output Directory: Do not copy   Right-click and select “Save as”, to download     Step 2: Edit Engine\Player.cs We’re going to store the ID of every location the player visits. We’ll save the IDs in a new List property named LocationsVisited (line 69). Because this is a List property, we need to initialize it, otherwise it will be null, instead of an empty List. We’ll do that in the constructor (on line 80), where we initialize the other list properties. Now, when the player moves to a new location, we need to add its ID to the property – if it hasn’t already been added. We do that inside the MoveTo function, on lines 167 to 170. If the LocationsVisited property does not already contain the ID of the location, we add it to the List.   Player.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Linq; using System.Xml; namespace Engine { public class Player : LivingCreature { private int _gold; private int _experiencePoints; private Location _currentLocation; public event EventHandler<MessageEventArgs> OnMessage; public int Gold { get { return _gold; } set { _gold = value; OnPropertyChanged("Gold"); } } public int ExperiencePoints { get { return _experiencePoints; } private set { _experiencePoints = value; OnPropertyChanged("ExperiencePoints"); OnPropertyChanged("Level"); } } public int Level { get { return ((ExperiencePoints / 100) + 1); } } public Location CurrentLocation { get { return _currentLocation; } set { _currentLocation = value; OnPropertyChanged("CurrentLocation"); } } public Weapon CurrentWeapon { get; set; } public BindingList<InventoryItem> Inventory { get; set; } public List<Weapon> Weapons { get { return Inventory.Where(x => x.Details is Weapon).Select(x => x.Details as Weapon).ToList(); } } public List<HealingPotion> Potions { get { return Inventory.Where(x => x.Details is HealingPotion).Select(x => x.Details as HealingPotion).ToList(); } } public BindingList<PlayerQuest> Quests { get; set; } public List<int> LocationsVisited { get; set; } private Monster CurrentMonster { get; set; } private Player(int currentHitPoints, int maximumHitPoints, int gold, int experiencePoints) : base(currentHitPoints, maximumHitPoints) { Gold = gold; ExperiencePoints = experiencePoints; Inventory = new BindingList<InventoryItem>(); Quests = new BindingList<PlayerQuest>(); LocationsVisited = new List<int>(); } public static Player CreateDefaultPlayer() { Player player = new Player(10, 10, 20, 0); player.Inventory.Add(new InventoryItem(World.ItemByID(World.ITEM_ID_RUSTY_SWORD), 1)); player.CurrentLocation = World.LocationByID(World.LOCATION_ID_HOME); return player; } public static Player CreatePlayerFromXmlString(string xmlPlayerData) { try { XmlDocument playerData = new XmlDocument(); playerData.LoadXml(xmlPlayerData); int currentHitPoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentHitPoints").InnerText); int maximumHitPoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/MaximumHitPoints").InnerText); int gold = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/Gold").InnerText); int experiencePoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/ExperiencePoints").InnerText); Player player = new Player(currentHitPoints, maximumHitPoints, gold, experiencePoints); int currentLocationID = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentLocation").InnerText); player.CurrentLocation = World.LocationByID(currentLocationID); if (playerData.SelectSingleNode("/Player/Stats/CurrentWeapon") != null) { int currentWeaponID = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentWeapon").InnerText); player.CurrentWeapon = (Weapon)World.ItemByID(currentWeaponID); } foreach (XmlNode node in playerData.SelectNodes("/Player/InventoryItems/InventoryItem")) { int id = Convert.ToInt32(node.Attributes["ID"].Value); int quantity = Convert.ToInt32(node.Attributes["Quantity"].Value); for (int i = 0; i < quantity; i++) { player.AddItemToInventory(World.ItemByID(id)); } } foreach (XmlNode node in playerData.SelectNodes("/Player/PlayerQuests/PlayerQuest")) { int id = Convert.ToInt32(node.Attributes["ID"].Value); bool isCompleted = Convert.ToBoolean(node.Attributes["IsCompleted"].Value); PlayerQuest playerQuest = new PlayerQuest(World.QuestByID(id)); playerQuest.IsCompleted = isCompleted; player.Quests.Add(playerQuest); } return player; } catch { // If there was an error with the XML data, return a default player object return CreateDefaultPlayer(); } } public static Player CreatePlayerFromDatabase(int currentHitPoints, int maximumHitPoints, int gold, int experiencePoints, int currentLocationID) { Player player = new Player(currentHitPoints, maximumHitPoints, gold, experiencePoints); player.MoveTo(World.LocationByID(currentLocationID)); return player; } public void MoveTo(Location location) { if (PlayerDoesNotHaveTheRequiredItemToEnter(location)) { RaiseMessage("You must have a " + location.ItemRequiredToEnter.Name + " to enter this location."); return; } // The player can enter this location CurrentLocation = location; if (!LocationsVisited.Contains(CurrentLocation.ID)) { LocationsVisited.Add(CurrentLocation.ID); } CompletelyHeal(); if (location.HasAQuest) { if (PlayerDoesNotHaveThisQuest(location.QuestAvailableHere)) { GiveQuestToPlayer(location.QuestAvailableHere); } else { if (PlayerHasNotCompleted(location.QuestAvailableHere) && PlayerHasAllQuestCompletionItemsFor(location.QuestAvailableHere)) { GivePlayerQuestRewards(location.QuestAvailableHere); } } } SetTheCurrentMonsterForTheCurrentLocation(location); } public void MoveNorth() { if (CurrentLocation.LocationToNorth != null) { MoveTo(CurrentLocation.LocationToNorth); } } public void MoveEast() { if (CurrentLocation.LocationToEast != null) { MoveTo(CurrentLocation.LocationToEast); } } public void MoveSouth() { if (CurrentLocation.LocationToSouth != null) { MoveTo(CurrentLocation.LocationToSouth); } } public void MoveWest() { if (CurrentLocation.LocationToWest != null) { MoveTo(CurrentLocation.LocationToWest); } } public void UseWeapon(Weapon weapon) { int damage = RandomNumberGenerator.NumberBetween(weapon.MinimumDamage, weapon.MaximumDamage); if (damage == 0) { RaiseMessage("You missed the " + CurrentMonster.Name); } else { CurrentMonster.CurrentHitPoints -= damage; RaiseMessage("You hit the " + CurrentMonster.Name + " for " + damage + " points."); } if (CurrentMonster.IsDead) { LootTheCurrentMonster(); // "Move" to the current location, to refresh the current monster MoveTo(CurrentLocation); } else { LetTheMonsterAttack(); } } private void LootTheCurrentMonster() { RaiseMessage(""); RaiseMessage("You defeated the " + CurrentMonster.Name); RaiseMessage("You receive " + CurrentMonster.RewardExperiencePoints + " experience points"); RaiseMessage("You receive " + CurrentMonster.RewardGold + " gold"); AddExperiencePoints(CurrentMonster.RewardExperiencePoints); Gold += CurrentMonster.RewardGold; // Give monster's loot items to the player foreach (InventoryItem inventoryItem in CurrentMonster.LootItems) { AddItemToInventory(inventoryItem.Details); RaiseMessage(string.Format("You loot {0} {1}", inventoryItem.Quantity, inventoryItem.Description)); } RaiseMessage(""); } public void UsePotion(HealingPotion potion) { RaiseMessage("You drink a " + potion.Name); HealPlayer(potion.AmountToHeal); RemoveItemFromInventory(potion); // The player used their turn to drink the potion, so let the monster attack now LetTheMonsterAttack(); } public void AddItemToInventory(Item itemToAdd, int quantity = 1) { InventoryItem existingItemInInventory = Inventory.SingleOrDefault(ii => ii.Details.ID == itemToAdd.ID); if (existingItemInInventory == null) { Inventory.Add(new InventoryItem(itemToAdd, quantity)); } else { existingItemInInventory.Quantity += quantity; } RaiseInventoryChangedEvent(itemToAdd); } public void RemoveItemFromInventory(Item itemToRemove, int quantity = 1) { InventoryItem item = Inventory.SingleOrDefault(ii => ii.Details.ID == itemToRemove.ID && ii.Quantity >= quantity); if (item != null) { item.Quantity -= quantity; if (item.Quantity == 0) { Inventory.Remove(item); } RaiseInventoryChangedEvent(itemToRemove); } } public string ToXmlString() { XmlDocument playerData = new XmlDocument(); // Create the top-level XML node XmlNode player = playerData.CreateElement("Player"); playerData.AppendChild(player); // Create the "Stats" child node to hold the other player statistics nodes XmlNode stats = playerData.CreateElement("Stats"); player.AppendChild(stats); // Create the child nodes for the "Stats" node CreateNewChildXmlNode(playerData, stats, "CurrentHitPoints", CurrentHitPoints); CreateNewChildXmlNode(playerData, stats, "MaximumHitPoints", MaximumHitPoints); CreateNewChildXmlNode(playerData, stats, "Gold", Gold); CreateNewChildXmlNode(playerData, stats, "ExperiencePoints", ExperiencePoints); CreateNewChildXmlNode(playerData, stats, "CurrentLocation", CurrentLocation.ID); if (CurrentWeapon != null) { CreateNewChildXmlNode(playerData, stats, "CurrentWeapon", CurrentWeapon.ID); } // Create the "InventoryItems" child node to hold each InventoryItem node XmlNode inventoryItems = playerData.CreateElement("InventoryItems"); player.AppendChild(inventoryItems); // Create an "InventoryItem" node for each item in the player's inventory foreach (InventoryItem item in Inventory) { XmlNode inventoryItem = playerData.CreateElement("InventoryItem"); AddXmlAttributeToNode(playerData, inventoryItem, "ID", item.Details.ID); AddXmlAttributeToNode(playerData, inventoryItem, "Quantity", item.Quantity); inventoryItems.AppendChild(inventoryItem); } // Create the "PlayerQuests" child node to hold each PlayerQuest node XmlNode playerQuests = playerData.CreateElement("PlayerQuests"); player.AppendChild(playerQuests); // Create a "PlayerQuest" node for each quest the player has acquired foreach (PlayerQuest quest in Quests) { XmlNode playerQuest = playerData.CreateElement("PlayerQuest"); AddXmlAttributeToNode(playerData, playerQuest, "ID", quest.Details.ID); AddXmlAttributeToNode(playerData, playerQuest, "IsCompleted", quest.IsCompleted); playerQuests.AppendChild(playerQuest); } return playerData.InnerXml; // The XML document, as a string, so we can save the data to disk } private bool HasRequiredItemToEnterThisLocation(Location location) { if (location.DoesNotHaveAnItemRequiredToEnter) { return true; } // See if the player has the required item in their inventory return Inventory.Any(ii => ii.Details.ID == location.ItemRequiredToEnter.ID); } private void SetTheCurrentMonsterForTheCurrentLocation(Location location) { // Populate the current monster with this location's monster (or null, if there is no monster here) CurrentMonster = location.NewInstanceOfMonsterLivingHere(); if (CurrentMonster != null) { RaiseMessage("You see a " + CurrentMonster.Name); } } private bool PlayerDoesNotHaveTheRequiredItemToEnter(Location location) { return !HasRequiredItemToEnterThisLocation(location); } private bool PlayerDoesNotHaveThisQuest(Quest quest) { return Quests.All(pq => pq.Details.ID != quest.ID); } private bool PlayerHasNotCompleted(Quest quest) { return Quests.Any(pq => pq.Details.ID == quest.ID && !pq.IsCompleted); } private void GiveQuestToPlayer(Quest quest) { RaiseMessage("You receive the " + quest.Name + " quest."); RaiseMessage(quest.Description); RaiseMessage("To complete it, return with:"); foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { RaiseMessage(string.Format("{0} {1}", qci.Quantity, qci.Quantity == 1 ? qci.Details.Name : qci.Details.NamePlural)); } RaiseMessage(""); Quests.Add(new PlayerQuest(quest)); } private bool PlayerHasAllQuestCompletionItemsFor(Quest quest) { // See if the player has all the items needed to complete the quest here foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { // Check each item in the player's inventory, to see if they have it, and enough of it if (!Inventory.Any(ii => ii.Details.ID == qci.Details.ID && ii.Quantity >= qci.Quantity)) { return false; } } // If we got here, then the player must have all the required items, and enough of them, to complete the quest. return true; } private void RemoveQuestCompletionItems(Quest quest) { foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { InventoryItem item = Inventory.SingleOrDefault(ii => ii.Details.ID == qci.Details.ID); if (item != null) { RemoveItemFromInventory(item.Details, qci.Quantity); } } } private void AddExperiencePoints(int experiencePointsToAdd) { ExperiencePoints += experiencePointsToAdd; MaximumHitPoints = (Level * 10); } private void GivePlayerQuestRewards(Quest quest) { RaiseMessage(""); RaiseMessage("You complete the '" + quest.Name + "' quest."); RaiseMessage("You receive: "); RaiseMessage(quest.RewardExperiencePoints + " experience points"); RaiseMessage(quest.RewardGold + " gold"); RaiseMessage(quest.RewardItem.Name, true); AddExperiencePoints(quest.RewardExperiencePoints); Gold += quest.RewardGold; RemoveQuestCompletionItems(quest); AddItemToInventory(quest.RewardItem); MarkPlayerQuestCompleted(quest); } private void MarkPlayerQuestCompleted(Quest quest) { PlayerQuest playerQuest = Quests.SingleOrDefault(pq => pq.Details.ID == quest.ID); if (playerQuest != null) { playerQuest.IsCompleted = true; } } private void LetTheMonsterAttack() { int damageToPlayer = RandomNumberGenerator.NumberBetween(0, CurrentMonster.MaximumDamage); RaiseMessage("The " + CurrentMonster.Name + " did " + damageToPlayer + " points of damage."); CurrentHitPoints -= damageToPlayer; if (IsDead) { RaiseMessage("The " + CurrentMonster.Name + " killed you."); MoveHome(); } } private void HealPlayer(int hitPointsToHeal) { CurrentHitPoints = Math.Min(CurrentHitPoints + hitPointsToHeal, MaximumHitPoints); } private void CompletelyHeal() { CurrentHitPoints = MaximumHitPoints; } private void MoveHome() { MoveTo(World.LocationByID(World.LOCATION_ID_HOME)); } private void CreateNewChildXmlNode(XmlDocument document, XmlNode parentNode, string elementName, object value) { XmlNode node = document.CreateElement(elementName); node.AppendChild(document.CreateTextNode(value.ToString())); parentNode.AppendChild(node); } private void AddXmlAttributeToNode(XmlDocument document, XmlNode node, string attributeName, object value) { XmlAttribute attribute = document.CreateAttribute(attributeName); attribute.Value = value.ToString(); node.Attributes.Append(attribute); } private void RaiseInventoryChangedEvent(Item item) { if (item is Weapon) { OnPropertyChanged("Weapons"); } if (item is HealingPotion) { OnPropertyChanged("Potions"); } } private void RaiseMessage(string message, bool addExtraNewLine = false) { if (OnMessage != null) { OnMessage(this, new MessageEventArgs(message, addExtraNewLine)); } } } }   Step 3: Edit SuperAdventure\SuperAdventure.cs and SuperAdventure\WorldMap.cs In order to display the correct image for a location (the fog, or the location’s image), the WorldMap form needs the current player object, to know which locations the player has visited. So, we need to pass it from the SuperAdventure form, into the WorldMap form – like we do with the TradingScreen form. In WorldMap.cs, we need to add a Player parameter to the constructor (line 13). In SuperAdventure.cs, we pass the current player when we instantiate the WorldMap form (line 225).   Now, we can hide the unvisited locations by displaying the FogLocation in the PictureBox for any locations whose IDs are not in the player object’s LocationsVisited list. I’ve done that by using the ternary operator inside the calls to SetImage (lines 17 through 25). If LocationVisited contains the location’s ID, we pass the name of the location’s image file. If the ID is not in LocationsVisited, we pass “FogLocation”.   WorldMap.cs using System.Drawing; using System.IO; using System.Reflection; using System.Windows.Forms; using Engine; namespace SuperAdventure { public partial class WorldMap : Form { readonly Assembly _thisAssembly = Assembly.GetExecutingAssembly(); public WorldMap(Player player) { InitializeComponent(); SetImage(pic_0_2, player.LocationsVisited.Contains(5) ? "HerbalistsGarden" : "FogLocation"); SetImage(pic_1_2, player.LocationsVisited.Contains(4) ? "HerbalistsHut" : "FogLocation"); SetImage(pic_2_0, player.LocationsVisited.Contains(7) ? "FarmFields" : "FogLocation"); SetImage(pic_2_1, player.LocationsVisited.Contains(6) ? "Farmhouse" : "FogLocation"); SetImage(pic_2_2, player.LocationsVisited.Contains(2) ? "TownSquare" : "FogLocation"); SetImage(pic_2_3, player.LocationsVisited.Contains(3) ? "TownGate" : "FogLocation"); SetImage(pic_2_4, player.LocationsVisited.Contains(8) ? "Bridge" : "FogLocation"); SetImage(pic_2_5, player.LocationsVisited.Contains(9) ? "SpiderForest" : "FogLocation"); SetImage(pic_3_2, player.LocationsVisited.Contains(1) ? "Home" : "FogLocation"); } private void SetImage(PictureBox pictureBox, string imageName) { using (Stream resourceStream = _thisAssembly.GetManifestResourceStream( _thisAssembly.GetName().Name + ".Images." + imageName + ".png")) { if (resourceStream != null) { pictureBox.Image = new Bitmap(resourceStream); } } } } }   SuperAdventure.cs using System; using System.ComponentModel; using System.Linq; using System.Windows.Forms; using System.IO; using Engine; namespace SuperAdventure { public partial class SuperAdventure : Form { private const string PLAYER_DATA_FILE_NAME = "PlayerData.xml"; private Player _player; public SuperAdventure() { InitializeComponent(); _player = PlayerDataMapper.CreateFromDatabase(); if(_player == null) { if(File.Exists(PLAYER_DATA_FILE_NAME)) { _player = Player.CreatePlayerFromXmlString(File.ReadAllText(PLAYER_DATA_FILE_NAME)); } else { _player = Player.CreateDefaultPlayer(); } } _player.AddItemToInventory(World.ItemByID(World.ITEM_ID_CLUB)); lblHitPoints.DataBindings.Add("Text", _player, "CurrentHitPoints"); lblGold.DataBindings.Add("Text", _player, "Gold"); lblExperience.DataBindings.Add("Text", _player, "ExperiencePoints"); lblLevel.DataBindings.Add("Text", _player, "Level"); dgvInventory.RowHeadersVisible = false; dgvInventory.AutoGenerateColumns = false; dgvInventory.DataSource = _player.Inventory; dgvInventory.Columns.Add(new DataGridViewTextBoxColumn { HeaderText = "Name", Width = 197, DataPropertyName = "Description" }); dgvInventory.Columns.Add(new DataGridViewTextBoxColumn { HeaderText = "Quantity", DataPropertyName = "Quantity" }); dgvInventory.ScrollBars = ScrollBars.Vertical; dgvQuests.RowHeadersVisible = false; dgvQuests.AutoGenerateColumns = false; dgvQuests.DataSource = _player.Quests; dgvQuests.Columns.Add(new DataGridViewTextBoxColumn { HeaderText = "Name", Width = 197, DataPropertyName = "Name" }); dgvQuests.Columns.Add(new DataGridViewTextBoxColumn { HeaderText = "Done?", DataPropertyName = "IsCompleted" }); cboWeapons.DataSource = _player.Weapons; cboWeapons.DisplayMember = "Name"; cboWeapons.ValueMember = "Id"; if(_player.CurrentWeapon != null) { cboWeapons.SelectedItem = _player.CurrentWeapon; } cboWeapons.SelectedIndexChanged += cboWeapons_SelectedIndexChanged; cboPotions.DataSource = _player.Potions; cboPotions.DisplayMember = "Name"; cboPotions.ValueMember = "Id"; _player.PropertyChanged += PlayerOnPropertyChanged; _player.OnMessage += DisplayMessage; _player.MoveTo(_player.CurrentLocation); } private void DisplayMessage(object sender, MessageEventArgs messageEventArgs) { rtbMessages.Text += messageEventArgs.Message + Environment.NewLine; if(messageEventArgs.AddExtraNewLine) { rtbMessages.Text += Environment.NewLine; } rtbMessages.SelectionStart = rtbMessages.Text.Length; rtbMessages.ScrollToCaret(); } private void PlayerOnPropertyChanged(object sender, PropertyChangedEventArgs propertyChangedEventArgs) { if(propertyChangedEventArgs.PropertyName == "Weapons") { cboWeapons.DataSource = _player.Weapons; if(!_player.Weapons.Any()) { cboWeapons.Visible = false; btnUseWeapon.Visible = false; } } if(propertyChangedEventArgs.PropertyName == "Potions") { cboPotions.DataSource = _player.Potions; if(!_player.Potions.Any()) { cboPotions.Visible = false; btnUsePotion.Visible = false; } } if(propertyChangedEventArgs.PropertyName == "CurrentLocation") { // Show/hide available movement buttons btnNorth.Visible = (_player.CurrentLocation.LocationToNorth != null); btnEast.Visible = (_player.CurrentLocation.LocationToEast != null); btnSouth.Visible = (_player.CurrentLocation.LocationToSouth != null); btnWest.Visible = (_player.CurrentLocation.LocationToWest != null); btnTrade.Visible = (_player.CurrentLocation.VendorWorkingHere != null); // Display current location name and description rtbLocation.Text = _player.CurrentLocation.Name + Environment.NewLine; rtbLocation.Text += _player.CurrentLocation.Description + Environment.NewLine; if(!_player.CurrentLocation.HasAMonster) { cboWeapons.Visible = false; cboPotions.Visible = false; btnUseWeapon.Visible = false; btnUsePotion.Visible = false; } else { cboWeapons.Visible = _player.Weapons.Any(); cboPotions.Visible = _player.Potions.Any(); btnUseWeapon.Visible = _player.Weapons.Any(); btnUsePotion.Visible = _player.Potions.Any(); } } } private void btnNorth_Click(object sender, EventArgs e) { _player.MoveNorth(); } private void btnEast_Click(object sender, EventArgs e) { _player.MoveEast(); } private void btnSouth_Click(object sender, EventArgs e) { _player.MoveSouth(); } private void btnWest_Click(object sender, EventArgs e) { _player.MoveWest(); } private void btnUseWeapon_Click(object sender, EventArgs e) { // Get the currently selected weapon from the cboWeapons ComboBox Weapon currentWeapon = (Weapon)cboWeapons.SelectedItem; _player.UseWeapon(currentWeapon); } private void btnUsePotion_Click(object sender, EventArgs e) { // Get the currently selected potion from the combobox HealingPotion potion = (HealingPotion)cboPotions.SelectedItem; _player.UsePotion(potion); } private void SuperAdventure_FormClosing(object sender, FormClosingEventArgs e) { File.WriteAllText(PLAYER_DATA_FILE_NAME, _player.ToXmlString()); PlayerDataMapper.SaveToDatabase(_player); } private void cboWeapons_SelectedIndexChanged(object sender, EventArgs e) { _player.CurrentWeapon = (Weapon)cboWeapons.SelectedItem; } private void btnTrade_Click(object sender, EventArgs e) { TradingScreen tradingScreen = new TradingScreen(_player); tradingScreen.StartPosition = FormStartPosition.CenterParent; tradingScreen.ShowDialog(this); } private void btnMap_Click(object sender, EventArgs e) { WorldMap mapScreen = new WorldMap(_player); mapScreen.StartPosition = FormStartPosition.CenterParent; mapScreen.ShowDialog(this); } } }   Step 4: Edit Engine\Player.cs We want to remember the player’s LocationsVisited values between game sessions. So, we need to update the code that saves the player’s data to the saved game file – and the code that creates the player object from that file. In the ToXmlString() function, we’ll add a new section that creates nodes with the ID values in LocationsVisited (lines 349 through 361). This is like the code to save the InventoryItems and PlayerQuests. We create a LocationsVisited node, with a child node named LocationVisited, to hold the location ID. In the CreatePlayerFromXmlString() function we add code to read those values from the saved game file (lines 116 through 121).   Player.cs (with changes to save/read LocationsVisited from saved game file) using System; using System.Collections.Generic; using System.ComponentModel; using System.Linq; using System.Xml; namespace Engine { public class Player : LivingCreature { private int _gold; private int _experiencePoints; private Location _currentLocation; public event EventHandler<MessageEventArgs> OnMessage; public int Gold { get { return _gold; } set { _gold = value; OnPropertyChanged("Gold"); } } public int ExperiencePoints { get { return _experiencePoints; } private set { _experiencePoints = value; OnPropertyChanged("ExperiencePoints"); OnPropertyChanged("Level"); } } public int Level { get { return ((ExperiencePoints / 100) + 1); } } public Location CurrentLocation { get { return _currentLocation; } set { _currentLocation = value; OnPropertyChanged("CurrentLocation"); } } public Weapon CurrentWeapon { get; set; } public BindingList<InventoryItem> Inventory { get; set; } public List<Weapon> Weapons { get { return Inventory.Where(x => x.Details is Weapon).Select(x => x.Details as Weapon).ToList(); } } public List<HealingPotion> Potions { get { return Inventory.Where(x => x.Details is HealingPotion).Select(x => x.Details as HealingPotion).ToList(); } } public BindingList<PlayerQuest> Quests { get; set; } public List<int> LocationsVisited { get; set; } private Monster CurrentMonster { get; set; } private Player(int currentHitPoints, int maximumHitPoints, int gold, int experiencePoints) : base(currentHitPoints, maximumHitPoints) { Gold = gold; ExperiencePoints = experiencePoints; Inventory = new BindingList<InventoryItem>(); Quests = new BindingList<PlayerQuest>(); LocationsVisited = new List<int>(); } public static Player CreateDefaultPlayer() { Player player = new Player(10, 10, 20, 0); player.Inventory.Add(new InventoryItem(World.ItemByID(World.ITEM_ID_RUSTY_SWORD), 1)); player.CurrentLocation = World.LocationByID(World.LOCATION_ID_HOME); return player; } public static Player CreatePlayerFromXmlString(string xmlPlayerData) { try { XmlDocument playerData = new XmlDocument(); playerData.LoadXml(xmlPlayerData); int currentHitPoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentHitPoints").InnerText); int maximumHitPoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/MaximumHitPoints").InnerText); int gold = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/Gold").InnerText); int experiencePoints = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/ExperiencePoints").InnerText); Player player = new Player(currentHitPoints, maximumHitPoints, gold, experiencePoints); int currentLocationID = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentLocation").InnerText); player.CurrentLocation = World.LocationByID(currentLocationID); if (playerData.SelectSingleNode("/Player/Stats/CurrentWeapon") != null) { int currentWeaponID = Convert.ToInt32(playerData.SelectSingleNode("/Player/Stats/CurrentWeapon").InnerText); player.CurrentWeapon = (Weapon)World.ItemByID(currentWeaponID); } foreach (XmlNode node in playerData.SelectNodes("/Player/LocationsVisited/LocationVisited")) { int id = Convert.ToInt32(node.Attributes["ID"].Value); player.LocationsVisited.Add(id); } foreach (XmlNode node in playerData.SelectNodes("/Player/InventoryItems/InventoryItem")) { int id = Convert.ToInt32(node.Attributes["ID"].Value); int quantity = Convert.ToInt32(node.Attributes["Quantity"].Value); for (int i = 0; i < quantity; i++) { player.AddItemToInventory(World.ItemByID(id)); } } foreach (XmlNode node in playerData.SelectNodes("/Player/PlayerQuests/PlayerQuest")) { int id = Convert.ToInt32(node.Attributes["ID"].Value); bool isCompleted = Convert.ToBoolean(node.Attributes["IsCompleted"].Value); PlayerQuest playerQuest = new PlayerQuest(World.QuestByID(id)); playerQuest.IsCompleted = isCompleted; player.Quests.Add(playerQuest); } return player; } catch { // If there was an error with the XML data, return a default player object return CreateDefaultPlayer(); } } public static Player CreatePlayerFromDatabase(int currentHitPoints, int maximumHitPoints, int gold, int experiencePoints, int currentLocationID) { Player player = new Player(currentHitPoints, maximumHitPoints, gold, experiencePoints); player.MoveTo(World.LocationByID(currentLocationID)); return player; } public void MoveTo(Location location) { if (PlayerDoesNotHaveTheRequiredItemToEnter(location)) { RaiseMessage("You must have a " + location.ItemRequiredToEnter.Name + " to enter this location."); return; } // The player can enter this location CurrentLocation = location; if (!LocationsVisited.Contains(CurrentLocation.ID)) { LocationsVisited.Add(CurrentLocation.ID); } CompletelyHeal(); if (location.HasAQuest) { if (PlayerDoesNotHaveThisQuest(location.QuestAvailableHere)) { GiveQuestToPlayer(location.QuestAvailableHere); } else { if (PlayerHasNotCompleted(location.QuestAvailableHere) && PlayerHasAllQuestCompletionItemsFor(location.QuestAvailableHere)) { GivePlayerQuestRewards(location.QuestAvailableHere); } } } SetTheCurrentMonsterForTheCurrentLocation(location); } public void MoveNorth() { if (CurrentLocation.LocationToNorth != null) { MoveTo(CurrentLocation.LocationToNorth); } } public void MoveEast() { if (CurrentLocation.LocationToEast != null) { MoveTo(CurrentLocation.LocationToEast); } } public void MoveSouth() { if (CurrentLocation.LocationToSouth != null) { MoveTo(CurrentLocation.LocationToSouth); } } public void MoveWest() { if (CurrentLocation.LocationToWest != null) { MoveTo(CurrentLocation.LocationToWest); } } public void UseWeapon(Weapon weapon) { int damage = RandomNumberGenerator.NumberBetween(weapon.MinimumDamage, weapon.MaximumDamage); if (damage == 0) { RaiseMessage("You missed the " + CurrentMonster.Name); } else { CurrentMonster.CurrentHitPoints -= damage; RaiseMessage("You hit the " + CurrentMonster.Name + " for " + damage + " points."); } if (CurrentMonster.IsDead) { LootTheCurrentMonster(); // "Move" to the current location, to refresh the current monster MoveTo(CurrentLocation); } else { LetTheMonsterAttack(); } } private void LootTheCurrentMonster() { RaiseMessage(""); RaiseMessage("You defeated the " + CurrentMonster.Name); RaiseMessage("You receive " + CurrentMonster.RewardExperiencePoints + " experience points"); RaiseMessage("You receive " + CurrentMonster.RewardGold + " gold"); AddExperiencePoints(CurrentMonster.RewardExperiencePoints); Gold += CurrentMonster.RewardGold; // Give monster's loot items to the player foreach (InventoryItem inventoryItem in CurrentMonster.LootItems) { AddItemToInventory(inventoryItem.Details); RaiseMessage(string.Format("You loot {0} {1}", inventoryItem.Quantity, inventoryItem.Description)); } RaiseMessage(""); } public void UsePotion(HealingPotion potion) { RaiseMessage("You drink a " + potion.Name); HealPlayer(potion.AmountToHeal); RemoveItemFromInventory(potion); // The player used their turn to drink the potion, so let the monster attack now LetTheMonsterAttack(); } public void AddItemToInventory(Item itemToAdd, int quantity = 1) { InventoryItem existingItemInInventory = Inventory.SingleOrDefault(ii => ii.Details.ID == itemToAdd.ID); if (existingItemInInventory == null) { Inventory.Add(new InventoryItem(itemToAdd, quantity)); } else { existingItemInInventory.Quantity += quantity; } RaiseInventoryChangedEvent(itemToAdd); } public void RemoveItemFromInventory(Item itemToRemove, int quantity = 1) { InventoryItem item = Inventory.SingleOrDefault(ii => ii.Details.ID == itemToRemove.ID && ii.Quantity >= quantity); if (item != null) { item.Quantity -= quantity; if (item.Quantity == 0) { Inventory.Remove(item); } RaiseInventoryChangedEvent(itemToRemove); } } public string ToXmlString() { XmlDocument playerData = new XmlDocument(); // Create the top-level XML node XmlNode player = playerData.CreateElement("Player"); playerData.AppendChild(player); // Create the "Stats" child node to hold the other player statistics nodes XmlNode stats = playerData.CreateElement("Stats"); player.AppendChild(stats); // Create the child nodes for the "Stats" node CreateNewChildXmlNode(playerData, stats, "CurrentHitPoints", CurrentHitPoints); CreateNewChildXmlNode(playerData, stats, "MaximumHitPoints", MaximumHitPoints); CreateNewChildXmlNode(playerData, stats, "Gold", Gold); CreateNewChildXmlNode(playerData, stats, "ExperiencePoints", ExperiencePoints); CreateNewChildXmlNode(playerData, stats, "CurrentLocation", CurrentLocation.ID); if (CurrentWeapon != null) { CreateNewChildXmlNode(playerData, stats, "CurrentWeapon", CurrentWeapon.ID); } // Create the "LocationsVisited" child node to hold each LocationVisited node XmlNode locationsVisited = playerData.CreateElement("LocationsVisited"); player.AppendChild(locationsVisited); // Create an "LocationVisited" node for each item in the player's inventory foreach (int locationID in LocationsVisited) { XmlNode locationVisited = playerData.CreateElement("LocationVisited"); AddXmlAttributeToNode(playerData, locationVisited, "ID", locationID); locationsVisited.AppendChild(locationVisited); } // Create the "InventoryItems" child node to hold each InventoryItem node XmlNode inventoryItems = playerData.CreateElement("InventoryItems"); player.AppendChild(inventoryItems); // Create an "InventoryItem" node for each item in the player's inventory foreach (InventoryItem item in Inventory) { XmlNode inventoryItem = playerData.CreateElement("InventoryItem"); AddXmlAttributeToNode(playerData, inventoryItem, "ID", item.Details.ID); AddXmlAttributeToNode(playerData, inventoryItem, "Quantity", item.Quantity); inventoryItems.AppendChild(inventoryItem); } // Create the "PlayerQuests" child node to hold each PlayerQuest node XmlNode playerQuests = playerData.CreateElement("PlayerQuests"); player.AppendChild(playerQuests); // Create a "PlayerQuest" node for each quest the player has acquired foreach (PlayerQuest quest in Quests) { XmlNode playerQuest = playerData.CreateElement("PlayerQuest"); AddXmlAttributeToNode(playerData, playerQuest, "ID", quest.Details.ID); AddXmlAttributeToNode(playerData, playerQuest, "IsCompleted", quest.IsCompleted); playerQuests.AppendChild(playerQuest); } return playerData.InnerXml; // The XML document, as a string, so we can save the data to disk } private bool HasRequiredItemToEnterThisLocation(Location location) { if (location.DoesNotHaveAnItemRequiredToEnter) { return true; } // See if the player has the required item in their inventory return Inventory.Any(ii => ii.Details.ID == location.ItemRequiredToEnter.ID); } private void SetTheCurrentMonsterForTheCurrentLocation(Location location) { // Populate the current monster with this location's monster (or null, if there is no monster here) CurrentMonster = location.NewInstanceOfMonsterLivingHere(); if (CurrentMonster != null) { RaiseMessage("You see a " + CurrentMonster.Name); } } private bool PlayerDoesNotHaveTheRequiredItemToEnter(Location location) { return !HasRequiredItemToEnterThisLocation(location); } private bool PlayerDoesNotHaveThisQuest(Quest quest) { return Quests.All(pq => pq.Details.ID != quest.ID); } private bool PlayerHasNotCompleted(Quest quest) { return Quests.Any(pq => pq.Details.ID == quest.ID && !pq.IsCompleted); } private void GiveQuestToPlayer(Quest quest) { RaiseMessage("You receive the " + quest.Name + " quest."); RaiseMessage(quest.Description); RaiseMessage("To complete it, return with:"); foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { RaiseMessage(string.Format("{0} {1}", qci.Quantity, qci.Quantity == 1 ? qci.Details.Name : qci.Details.NamePlural)); } RaiseMessage(""); Quests.Add(new PlayerQuest(quest)); } private bool PlayerHasAllQuestCompletionItemsFor(Quest quest) { // See if the player has all the items needed to complete the quest here foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { // Check each item in the player's inventory, to see if they have it, and enough of it if (!Inventory.Any(ii => ii.Details.ID == qci.Details.ID && ii.Quantity >= qci.Quantity)) { return false; } } // If we got here, then the player must have all the required items, and enough of them, to complete the quest. return true; } private void RemoveQuestCompletionItems(Quest quest) { foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { InventoryItem item = Inventory.SingleOrDefault(ii => ii.Details.ID == qci.Details.ID); if (item != null) { RemoveItemFromInventory(item.Details, qci.Quantity); } } } private void AddExperiencePoints(int experiencePointsToAdd) { ExperiencePoints += experiencePointsToAdd; MaximumHitPoints = (Level * 10); } private void GivePlayerQuestRewards(Quest quest) { RaiseMessage(""); RaiseMessage("You complete the '" + quest.Name + "' quest."); RaiseMessage("You receive: "); RaiseMessage(quest.RewardExperiencePoints + " experience points"); RaiseMessage(quest.RewardGold + " gold"); RaiseMessage(quest.RewardItem.Name, true); AddExperiencePoints(quest.RewardExperiencePoints); Gold += quest.RewardGold; RemoveQuestCompletionItems(quest); AddItemToInventory(quest.RewardItem); MarkPlayerQuestCompleted(quest); } private void MarkPlayerQuestCompleted(Quest quest) { PlayerQuest playerQuest = Quests.SingleOrDefault(pq => pq.Details.ID == quest.ID); if (playerQuest != null) { playerQuest.IsCompleted = true; } } private void LetTheMonsterAttack() { int damageToPlayer = RandomNumberGenerator.NumberBetween(0, CurrentMonster.MaximumDamage); RaiseMessage("The " + CurrentMonster.Name + " did " + damageToPlayer + " points of damage."); CurrentHitPoints -= damageToPlayer; if (IsDead) { RaiseMessage("The " + CurrentMonster.Name + " killed you."); MoveHome(); } } private void HealPlayer(int hitPointsToHeal) { CurrentHitPoints = Math.Min(CurrentHitPoints + hitPointsToHeal, MaximumHitPoints); } private void CompletelyHeal() { CurrentHitPoints = MaximumHitPoints; } private void MoveHome() { MoveTo(World.LocationByID(World.LOCATION_ID_HOME)); } private void CreateNewChildXmlNode(XmlDocument document, XmlNode parentNode, string elementName, object value) { XmlNode node = document.CreateElement(elementName); node.AppendChild(document.CreateTextNode(value.ToString())); parentNode.AppendChild(node); } private void AddXmlAttributeToNode(XmlDocument document, XmlNode node, string attributeName, object value) { XmlAttribute attribute = document.CreateAttribute(attributeName); attribute.Value = value.ToString(); node.Attributes.Append(attribute); } private void RaiseInventoryChangedEvent(Item item) { if (item is Weapon) { OnPropertyChanged("Weapons"); } if (item is HealingPotion) { OnPropertyChanged("Potions"); } } private void RaiseMessage(string message, bool addExtraNewLine = false) { if (OnMessage != null) { OnMessage(this, new MessageEventArgs(message, addExtraNewLine)); } } } } Step 5: Edit Engine\PlayerDataMapper.cs We also need to save the LocationsVisited values to the database, and read them when loading a saved game from the database – if you are using a database to save the game data. To save the location IDs, we’ll create a new table named LocationVisited. It will only have a single column “ID”, whose datatype is “int”, and does not all nulls. The script to create it is below.   USE [SuperAdventure] GO /****** Object: Table [dbo].[LocationVisited] Script Date: 8/17/2017 7:05:15 PM ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[LocationVisited]( [ID] [int] NOT NULL ) ON [PRIMARY] GO   Next, we need to update PlayerDataMapper to save the values into this table, and read the values from it. The code to do this is like the code for adding and reading the values for the InventoryItems and PlayerQuests.   We save the Location IDs to the table in the SaveToDatabase() function, at lines 286 through 307. The code to read from this table is in the CreateFromDatabase() frunction, at lines 111 through 131.   NOTE: I noticed a bug with the readers not closing. So, inside each “using” block of code in CreateFromDatabase, I’ve added a “reader.Close();”. These are on lines 58, 85, 108, and 130.   PlayerDataMapper.cs using System; using System.Data; using System.Data.SqlClient; namespace Engine { public static class PlayerDataMapper { private static readonly string _connectionString = "Data Source=(local);Initial Catalog=SuperAdventure;Integrated Security=True"; public static Player CreateFromDatabase() { try { // This is our connection to the database using(SqlConnection connection = new SqlConnection(_connectionString)) { // Open the connection, so we can perform SQL commands connection.Open(); Player player; // Create a SQL command object, that uses the connection to our database // The SqlCommand object is where we create our SQL statement using(SqlCommand savedGameCommand = connection.CreateCommand()) { savedGameCommand.CommandType = CommandType.Text; // This SQL statement reads the first rows in teh SavedGame table. // For this program, we should only ever have one row, // but this will ensure we only get one record in our SQL query results. savedGameCommand.CommandText = "SELECT TOP 1 * FROM SavedGame"; // Use ExecuteReader when you expect the query to return a row, or rows SqlDataReader reader = savedGameCommand.ExecuteReader(); // Check if the query did not return a row/record of data if(!reader.HasRows) { // There is no data in the SavedGame table, // so return null (no saved player data) return null; } // Get the row/record from the data reader reader.Read(); // Get the column values for the row/record int currentHitPoints = (int)reader["CurrentHitPoints"]; int maximumHitPoints = (int)reader["MaximumHitPoints"]; int gold = (int)reader["Gold"]; int experiencePoints = (int)reader["ExperiencePoints"]; int currentLocationID = (int)reader["CurrentLocationID"]; // Create the Player object, with the saved game values player = Player.CreatePlayerFromDatabase(currentHitPoints, maximumHitPoints, gold, experiencePoints, currentLocationID); reader.Close(); } // Read the rows/records from the Quest table, and add them to the player using(SqlCommand questCommand = connection.CreateCommand()) { questCommand.CommandType = CommandType.Text; questCommand.CommandText = "SELECT * FROM Quest"; SqlDataReader reader = questCommand.ExecuteReader(); if(reader.HasRows) { while(reader.Read()) { int questID = (int)reader["QuestID"]; bool isCompleted = (bool)reader["IsCompleted"]; // Build the PlayerQuest item, for this row PlayerQuest playerQuest = new PlayerQuest(World.QuestByID(questID)); playerQuest.IsCompleted = isCompleted; // Add the PlayerQuest to the player's property player.Quests.Add(playerQuest); } } reader.Close(); } // Read the rows/records from the Inventory table, and add them to the player using (SqlCommand inventoryCommand = connection.CreateCommand()) { inventoryCommand.CommandType = CommandType.Text; inventoryCommand.CommandText = "SELECT * FROM Inventory"; SqlDataReader reader = inventoryCommand.ExecuteReader(); if(reader.HasRows) { while(reader.Read()) { int inventoryItemID = (int)reader["InventoryItemID"]; int quantity = (int)reader["Quantity"]; // Add the item to the player's inventory player.AddItemToInventory(World.ItemByID(inventoryItemID), quantity); } } reader.Close(); } // Read the rows/records from the LocationVisited table, and add them to the player using (SqlCommand locationVisitedCommand = connection.CreateCommand()) { locationVisitedCommand.CommandType = CommandType.Text; locationVisitedCommand.CommandText = "SELECT * FROM LocationVisited"; SqlDataReader reader = locationVisitedCommand.ExecuteReader(); if (reader.HasRows) { while (reader.Read()) { int id = (int)reader["ID"]; // Add the item to the player's LocationsVisited property player.LocationsVisited.Add(id); } } reader.Close(); } // Now that the player has been built from the database, return it. return player; } } catch(Exception ex) { // Ignore errors. If there is an error, this function will return a "null" player. } return null; } public static void SaveToDatabase(Player player) { try { using(SqlConnection connection = new SqlConnection(_connectionString)) { // Open the connection, so we can perform SQL commands connection.Open(); // Insert/Update data in SavedGame table using(SqlCommand existingRowCountCommand = connection.CreateCommand()) { existingRowCountCommand.CommandType = CommandType.Text; existingRowCountCommand.CommandText = "SELECT count(*) FROM SavedGame"; // Use ExecuteScalar when your query will return one value int existingRowCount = (int)existingRowCountCommand.ExecuteScalar(); if(existingRowCount == 0) { // There is no existing row, so do an INSERT using(SqlCommand insertSavedGame = connection.CreateCommand()) { insertSavedGame.CommandType = CommandType.Text; insertSavedGame.CommandText = "INSERT INTO SavedGame " + "(CurrentHitPoints, MaximumHitPoints, Gold, ExperiencePoints, CurrentLocationID) " + "VALUES " + "(@CurrentHitPoints, @MaximumHitPoints, @Gold, @ExperiencePoints, @CurrentLocationID)"; // Pass the values from the player object, to the SQL query, using parameters insertSavedGame.Parameters.Add("@CurrentHitPoints", SqlDbType.Int); insertSavedGame.Parameters["@CurrentHitPoints"].Value = player.CurrentHitPoints; insertSavedGame.Parameters.Add("@MaximumHitPoints", SqlDbType.Int); insertSavedGame.Parameters["@MaximumHitPoints"].Value = player.MaximumHitPoints; insertSavedGame.Parameters.Add("@Gold", SqlDbType.Int); insertSavedGame.Parameters["@Gold"].Value = player.Gold; insertSavedGame.Parameters.Add("@ExperiencePoints", SqlDbType.Int); insertSavedGame.Parameters["@ExperiencePoints"].Value = player.ExperiencePoints; insertSavedGame.Parameters.Add("@CurrentLocationID", SqlDbType.Int); insertSavedGame.Parameters["@CurrentLocationID"].Value = player.CurrentLocation.ID; // Perform the SQL command. // Use ExecuteNonQuery, because this query does not return any results. insertSavedGame.ExecuteNonQuery(); } } else { // There is an existing row, so do an UPDATE using(SqlCommand updateSavedGame = connection.CreateCommand()) { updateSavedGame.CommandType = CommandType.Text; updateSavedGame.CommandText = "UPDATE SavedGame " + "SET CurrentHitPoints = @CurrentHitPoints, " + "MaximumHitPoints = @MaximumHitPoints, " + "Gold = @Gold, " + "ExperiencePoints = @ExperiencePoints, "+ "CurrentLocationID = @CurrentLocationID"; // Pass the values from the player object, to the SQL query, using parameters // Using parameters helps make your program more secure. // It will prevent SQL injection attacks. updateSavedGame.Parameters.Add("@CurrentHitPoints", SqlDbType.Int); updateSavedGame.Parameters["@CurrentHitPoints"].Value = player.CurrentHitPoints; updateSavedGame.Parameters.Add("@MaximumHitPoints", SqlDbType.Int); updateSavedGame.Parameters["@MaximumHitPoints"].Value = player.MaximumHitPoints; updateSavedGame.Parameters.Add("@Gold", SqlDbType.Int); updateSavedGame.Parameters["@Gold"].Value = player.Gold; updateSavedGame.Parameters.Add("@ExperiencePoints", SqlDbType.Int); updateSavedGame.Parameters["@ExperiencePoints"].Value = player.ExperiencePoints; updateSavedGame.Parameters.Add("@CurrentLocationID", SqlDbType.Int); updateSavedGame.Parameters["@CurrentLocationID"].Value = player.CurrentLocation.ID; // Perform the SQL command. // Use ExecuteNonQuery, because this query does not return any results. updateSavedGame.ExecuteNonQuery(); } } } // The Quest and Inventory tables might have more, or less, rows in the database // than what the player has in their properties. // So, when we save the player's game, we will delete all the old rows // and add in all new rows. // This is easier than trying to add/delete/update each individual rows // Delete existing Quest rows using(SqlCommand deleteQuestsCommand = connection.CreateCommand()) { deleteQuestsCommand.CommandType = CommandType.Text; deleteQuestsCommand.CommandText = "DELETE FROM Quest"; deleteQuestsCommand.ExecuteNonQuery(); } // Insert Quest rows, from the player object foreach(PlayerQuest playerQuest in player.Quests) { using(SqlCommand insertQuestCommand = connection.CreateCommand()) { insertQuestCommand.CommandType = CommandType.Text; insertQuestCommand.CommandText = "INSERT INTO Quest (QuestID, IsCompleted) VALUES (@QuestID, @IsCompleted)"; insertQuestCommand.Parameters.Add("@QuestID", SqlDbType.Int); insertQuestCommand.Parameters["@QuestID"].Value = playerQuest.Details.ID; insertQuestCommand.Parameters.Add("@IsCompleted", SqlDbType.Bit); insertQuestCommand.Parameters["@IsCompleted"].Value = playerQuest.IsCompleted; insertQuestCommand.ExecuteNonQuery(); } } // Delete existing Inventory rows using(SqlCommand deleteInventoryCommand = connection.CreateCommand()) { deleteInventoryCommand.CommandType = CommandType.Text; deleteInventoryCommand.CommandText = "DELETE FROM Inventory"; deleteInventoryCommand.ExecuteNonQuery(); } // Insert Inventory rows, from the player object foreach(InventoryItem inventoryItem in player.Inventory) { using(SqlCommand insertInventoryCommand = connection.CreateCommand()) { insertInventoryCommand.CommandType = CommandType.Text; insertInventoryCommand.CommandText = "INSERT INTO Inventory (InventoryItemID, Quantity) VALUES (@InventoryItemID, @Quantity)"; insertInventoryCommand.Parameters.Add("@InventoryItemID", SqlDbType.Int); insertInventoryCommand.Parameters["@InventoryItemID"].Value = inventoryItem.Details.ID; insertInventoryCommand.Parameters.Add("@Quantity", SqlDbType.Int); insertInventoryCommand.Parameters["@Quantity"].Value = inventoryItem.Quantity; insertInventoryCommand.ExecuteNonQuery(); } } // Delete existing LocationVisited rows using (SqlCommand deleteLocationVisitedCommand = connection.CreateCommand()) { deleteLocationVisitedCommand.CommandType = CommandType.Text; deleteLocationVisitedCommand.CommandText = "DELETE FROM LocationVisited"; deleteLocationVisitedCommand.ExecuteNonQuery(); } // Insert LocationVisited rows, from the player object foreach (int locationVisitedID in player.LocationsVisited) { using (SqlCommand insertLocationVisitedCommand = connection.CreateCommand()) { insertLocationVisitedCommand.CommandType = CommandType.Text; insertLocationVisitedCommand.CommandText = "INSERT INTO LocationVisited (ID) VALUES (@ID)"; insertLocationVisitedCommand.Parameters.Add("@ID", SqlDbType.Int); insertLocationVisitedCommand.Parameters["@ID"].Value = locationVisitedID; insertLocationVisitedCommand.ExecuteNonQuery(); } } } } catch(Exception ex) { // We are going to ignore errors, for now. } } } } Step 5: Test the game. Now, as the player moves to new locations, the map will display more images – instead of the “fog” image for unvisited locations. The map should start to look like this (for example):     Summary This uses hard-coded values for placing the images in the PictureBox, which isn’t the best way to create a map. This would be much more flexible if we used X and Y coordinates for the locations. Then, we could do things like having the map always centered on the player’s current location, and showing a 5 x 5 (or larger) grid of the surrounding locations. If you follow the “Build a C#/WPF RPG” lessons, that is how we are building that world.   Source code for this lesson Source code on GitHub Source code on Dropbox   Previous lesson: Lesson 26.1 Displaying a World Map All lessons: Learn C# by Building a Simple RPG Index 21 thoughts on “Lesson 26.2 – Hiding Unvisited Locations on the World Map 1. Hi! The same problem, that causes the Quest bug, adds the last saved location to the empty LocationsVisited list, and that causes duplicated rows in the LocationVisited data table after you start the program multiple times. I think Lesson 99.1 Bugfix will solve this problem. 1. Correct, it should solve that problem. If anyone has duplicated locations in their data, they should probably delete the second (or more) instances of the locations. It might also be nice to create an AddLocationoLocationsVisited function (like we do for adding items to the player’s inventory) that would prevent adding a duplicate location. Although, it’s always best if we have code the doesn’t let the data ever get into a bad condition. 2. Hi Scott! Thank you very much again i need help :C I managed to do everything and i was happy, then i got the final step of localisation so other programmers would understand the code in my contry, and i started… I was happy that everything worked and went away from the project for a while and then i realised that the project is not saving the player.. only creating new one every time when a user tries to play. I tried to pin the problem down but everything was in vein, the program doesn’t show any sign of mistakes but still doesn’t save the player. So i just tried to return all the names of the functions and methods and try again, there were no mistakes but still nothing new.. Could you pls help me with my final step of finishing the game. If you can’t spot the problem that’s all right i will just delete this function of saving from the game, i’ll be happy to any help! https://www.dropbox.com/s/dyf5bhm9tpkkmrn/ForgottenLand2.rar?dl=0 1. I think I found the problem. In ForgottenLand.cs, starting on line 89, the “else if” code is probably not running the way you want. There are lines after the “else if”, but before the {} code block. When code is like that, the compiler pretends there is a pair of curly braces around the next one line – and not any lines after it. There are some notes here: https://gist.github.com/ScottLilly/8516a40ec115b3c512da096e2cf8b8ea Because there is not an “if” or “else if” before line 8 (in Before.cs), that code always runs. So, the program always creates a new player. If you want to see what is happening, you can use the debugger. Here is a lesson on how to use the Visual Studio debugger. You can set a breakpoint on line 89 and see what is happening when you run the program. 1. Thank you! This is unbelievable how only one line can stop all the work of the application. it is difficult to imagine how people can work with multiple projects with over 1000 lines in it. 3. I could not run the SuperAdventure because I was hitting an error about the Quests containing more than one entry for the same quest, ID = 1. I fixed the problem in my copy of the game by doing the following in the Player.cs file. Above the PlayerDoesNotHaveThisQuest method I added the following method: private bool PlayerHasThisQuest(Quest quest) { return Quests.All(pq => pq.Details.ID == quest.ID); } Then I changed the GiveQuestToPlayer method by adding an if statement at the top: private void GiveQuestToPlayer(Quest quest) { if (PlayerHasThisQuest(quest)) return; RaiseMessage("You receive the " + quest.Name + " quest."); RaiseMessage(quest.Description); RaiseMessage("To complete it, return with:"); foreach (QuestCompletionItem qci in quest.QuestCompletionItems) { RaiseMessage(string.Format("{0} {1}", qci.Quantity, qci.Quantity == 1 ? qci.Details.Name : qci.Details.NamePlural)); } RaiseMessage(""); Quests.Add(new PlayerQuest(quest)); } There may be a better way of fixing this error, but this particular way worked for me. 1. Hi Ferlin, There was a bug if the player exited the game at a location with a quest. When they restart the game, it gives the player the quest from their saved Quests list, and tries to give the quest again since they are at the location. Your fix will prevent the game from crashing – or, you can use the changes in lesson 99.1. 4. I was looking back on the site and was surprised to see all the changes to the application and some integration with the WPF project! I was going to do some expansion and add images to the original program only to find that it has already been done! Great job on this Scott. It has been a real help to a lot of folks. 1. Thanks Stan! This started out as a little project in my spare time, but has grown larger than I ever thought it would. It always great to hear from people who have learned something from the lessons. I just started a new series this week to refactor the SuperAdventure code into higher-quality code that will be easier to work with and modify. 5. Hi Scott, Thank you for taking your time to create this tutorial, I have found it extremely helpful! However there are a few things I am unsure of, how would I go abou highlighting the current position on the map form? And how would I go about making quests repeatable? I’ve made several changes to the game, including adding item weighhs and inventory max weight, but I can’t seem to update the players current weight after adding or removing quest items, any suggestions would be appreciated. Thanks in advance! 1. Hello Ryan, You’re welcome! Here is some code you can use to highlight the player’s current location in the world map: https://gist.github.com/ScottLilly/f02d10322a8184f2424a2e601972786c. I did a little cleanup of this class, but the important part is in lines 41-51. If you want all quests to be repeatable, you would need to change the MoveTo code that gives the player the quests (line 183 of Player.cs). Currently, it gives the player the quest if they do not already have it. You could change that to “if the player does not have the quest, or if they do have it, but it’s completed”. Or, when the player completes a quest, you could remove the PlayerQuest object from the Player.Quests property. Or, you could modify the Quest class to have a boolean property “IsRepeatable”, if you only want some quests to be repeatable. Then, change your code around line 183 (of Player.cs) to give the player the quest “if the player does not have the quest, or if they have the quest, it’s completed, and it’s repeatable)”. For the item weights, I would create a “CalculateWeight” function that is called at the end of “AddItemToInventory” and “RemoveItemFromInventory”. Make sure your Weight property is calling OnPropertyChanged, so the UI knows it needs to refresh the screen, and that your SUperAdventure constructor has a DataBindings.Add for the label displaying the weight value. Let me know if you have any questions about those suggestions. 1. Wow, thanks for such a quick response! Your suggestions are certainly valuable, I have a pretty basic understanding of the OOP paradigm, I understand inheritance, polymorphism and such but I struggle to apply these concepts to real world problems, it amazes me how I didn’t even consider the suggestions you made of my own accord, now you’ve suggested them it seems very simplistic. I do however have another question, I’m not familiar with LINQ at all, I’ve never encountered it before your tutorial and I was wondering if I could utilise it for filtering the inventory list when displaying the vendors trading form to filter out quest items, now would I need to make a new list and bind that to the data grid view or can I simply filter the existing inventory list? The code I have right now doesn’t display quest items, but it also removes the item from the players inventory, thanks again Scott! 1. You’re welcome. The more programs you work on, the more you’ll build up a mental “library” of how to solve problems and make changes. If you haven’t studied “design patterns“, those are some common solutions to common problems that are good to learn. You could use LINQ to create a filtered list of inventory objects. That’s what we do with Player.Weapons and Player.Potions. So, you could create a “List SellableItems” property. Just remember these “derived properties” don’t automatically notify the UI of changes. So, when you change the base/underlying property, you’ll need to manually raise a PropertyChanged notification for your derived property. We do this when we call RaiseInventoryChangedEvent from AddItemToInventory and RemoveItemFromInventory – to notify the UI that the derived “Weapons” and “Potions” property has changed, and needs to be refreshed in the UI. Or, you could apply a filter to the DataGrid.DataSource, like this: https://stackoverflow.com/questions/21845016/how-to-filter-datagridview-in-c-sharp-win-forms. 1. Hey Scott, I’ve made several changes since my last post. Thanks to your help I’ve managed to resolve the item weights being incremented or decremented whenever a quest item is awarded or removed respectively, I kind of followed your suggestion and rather than creating a new function, I modified the existing function UpdatePlayerStats() which calculates the players inventory weight by multiplying the item weight by quantity for each item. I’ve also managed to create repeatable quests which prompted another issue of duplicate quests, however I managed to resolve this issue by modifying the MarkQuestAsComplete(Quest quest). I have since ran into a number of issues where I would greatly appreciate your assistance. First of all I’ve read over multiple comments on pretty much all the lesson pages to see if I can find an answer, before taking up any more of your time. You suggested to someone on how to save and load the vendors inventory to XML, I’ve managed to get my game to successfully save the vendors inventory, however I’m unsure as to how to load the data, you mentioned creating a function in the TradingScreen constructor for reading the data and then passing the data to the Vendor class, this is where I’m stumped. I was also wondering how I would go about changing the XP needed to reach the next level, determined by the players current level and difficulty setting, I think I know how to do this but it would require modifying how the players level is calculated, I’ve also encountered several posts on here that refer to the same issue, however my maximum player level is 100 so if else statements seem inadequate. I have messed around with this loads but have failed to come up with anything even remotely close to what I’m trying to achieve. Additionally I’d like to remove all the players inventory items besides unsellable items and one weapon whenever the player dies(A weapon that the user can choose if multiple weapons are present). I understand you have priorities, I am going to try and solve all these issues by myself in the meantime, but if you could point me in the right direction that would be extremely helpful, thanks a bunch! 2. Hi Ryan, I saw your other comment that you got some of the new features working. That’s great! For the trader inventory, it’s probably simplest to load the trader’s inventory in the World.PopulateLocations() function. However, because the World class is static, it might be difficult to catch any errors there. If you do have any problems, set a debug breakpoint at the beginning of PopulateLocations() and step through the code (using F10) to see where the error happens. For the experience points needed for each level, you would probably want to create a mathematical equation that gradually increases the amount of XP needed for each level. This is often done with an exponent, like in these examples: http://howtomakeanrpg.com/a/how-to-make-an-rpg-levels.html 6. Okay, Update: After a lot of debugging I’ve managed to implement the removal of all items besides any unsellable items and the current weapon when the player dies. I’ve also managed to implement the fleeing message whenever a player moves to a new location and they’ve been attacked but haven’t killed the monster. 7. Hello again Scott, I know it’s been a while, I’ve been moving house myself. Since my last post I’ve added several items into the game, including food and armour. I decided to add a tab control with each tab displaying different items, for instance a tab for all items, weapons, armour, food etc… and each tab has a datagridview, this all works fine, updates and everything however I don’t know how to display the quantity for items such as food or weapons since they are bound to lists of the respective object, not inventory items, each of the classes has properties which I am displaying in the datagridview therefore filtering the inventory for each item type doesnt appear to be an adequate solution. I even added the ability to move up or down within a location so for example your home has a basement and first floor, again this works as expected however I’d like to display both a world map and a local map, whenever I create a new world map object it is populated with the same images, I’m not sure how to separate and differentiate between the two. Additionally I’ve added the ability for the player to add a chest to their current location, this value is then saved in the players XML file, and when the game is relaunched these values persist until the player moves location and the values are overwritten by the world class, I’m not sure how I would go about this issue, could storing the Boolean chest value in visited locations be a solution? My computer isn’t assembled right now so I can’t actually check to see if this would work or not. I am sorry for the long list of problems but another issue I’ve been having is, I’ve changed the datagridviews button text when certain conditions are met, this works as expected but then the logic I have is determined by the buttons text value which always seems to return the item ID regardless of the texts value, I know they are linked in order for the logic to reference the item by ID but I don’t understand why this is happening. Any insight would be receieved with the utmost appreciation, thank you! 1. Hi Ryan, I hope you’re settled in to your new house. Moving always drains my energy for a couple weeks. It sounds like you’ve done a huge amount of expansion of the game! One way to show quantities in the inventory is in the WPF lessons (10.2 and 10.4). We have a GroupedInventory class that has the item and quantity. Then, we update the individual Weapons and Potions properties whenever the base Inventory list changes (items are added or removed). The local map might be a little tricky – especially if you want to have a different image for each room, like we do for each location in the world map. The easiest way would just be to have a single image for each location’s local map. Create a new LocalMapFileName property in the Location class, and populate it in World.cs. Then, create a new LocalMap.cs form (in the UI project) that works like WorldMap.cs, but displays the image for the LocalMapFileName of the Player’s CurrentLocation. Do you want the player to be able to have a chest in each location, or only one chest in the whole world? If the player can have more than one chest, can they store different items in each chest? You might want to store the chests information similar to the Quests or InventoryItems, like this: https://gist.github.com/ScottLilly/88948812f277728fc5e6d5b2771e0b59. Then, Create a Chest class that has a LocationID property and a property that is a list of inventory items. In the Player class, create a new Chests property (datatype is a list of Chest objects). When the Player moves to a location, see if they have a chest with that LocationID. I’d probably have to see the code for the datagrid button problem. I’m not completely clear on it. When your computer is re-assembled, if you still have questions, can you upload your version of the program to GitHub or Dropbox, so I can look at it? Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.98186
Securing Spring Boot REST API with Basic Auth Learn to use basic authentication to secure the REST APIs created in a Spring boot application. The secured API will ask for user authentication credentials before giving access to the API response. 1. Maven Dependency The simplest way to add all required jars is to add the latest version of spring-boot-starter-security dependency. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> 2. Configure Spring Security To enable authentication and authorization support, we can configure the utility class WebSecurityConfigurerAdapter (deprecated). It helps in requiring the user to be authenticated prior to accessing any configured URL (or all URLs) within our application. We are also configuring an in-memory authentication manager to supply username and password. @Configuration public class SecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeRequests().anyRequest().authenticated() .and() .httpBasic(); } @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication() .withUser("admin") .password("{noop}password") .roles("USER"); } } Starting Spring Boot 2.7.0, WebSecurityConfigurerAdapter is deprecated. We can rewrite the above basic-auth configuration in the latest versions as follows: @Configuration public class BasicAuthWebSecurityConfiguration { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeRequests().anyRequest().authenticated() .and() .httpBasic(); return http.build(); } @Bean public InMemoryUserDetailsManager userDetailsService() { UserDetails user = User .withUsername("user") .password("{noop}password") .roles("USER") .build(); return new InMemoryUserDetailsManager(user); } } See Also: Basic Auth with Spring Security 3. Basic Authentication Demo For demo purposes, we can write a simple REST API given below. 3.1. REST API @RestController @RequestMapping(path = "/employees") public class EmployeeController { @Autowired private EmployeeDAO employeeDao; @GetMapping(path="/", produces = "application/json") public Employees getEmployees() { return employeeDao.getAllEmployees(); } } 3.2. Accessing the API without ‘authorization‘ Header Access rest api at URL: HTTP GET http://localhost:8080/employees/ Require username and password Require username and password 3.3. With ‘authorization‘ Header Upon passing authorization request header with encoded basic-auth user name and password combination, we will be able to access the rest api response. Access rest api at URL: HTTP GET http://localhost:8080/employees/ Successful api call Successful api call 3.4. Generate Basic Auth Encoding Browser apI testing tools are able to generate the base-64 encoded token by themselves using the plain username and password. But if we need to generate the encoded token ourselves to pass the token programmatically, then we can use the following code that uses the java.util.Base64 class. String encoding = Base64.getEncoder().encodeToString((user + ":" + password).getBytes()); String authHeader = "Basic " + encoding; For example, when making a call from Apache HttpClient, we can use the following code: String encoding = Base64.getEncoder().encodeToString((user + ":" + password).getBytes()); HttpPost httpPost = new HttpPost("http://localhost:8080/api-url"); httpPost.setHeader(HttpHeaders.AUTHORIZATION, "Basic " + encoding); HttpResponse response = httpClient.execute(httpPost); HttpEntity entity = response.getEntity(); 4. Conclusion In this spring boot security basic authentication example, we learned to secure REST APIs with basic authentication. It is done in two steps. • The first step is to include required dependencies e.g. spring-boot-starter-security. • The second step is to configure WebSecurityConfigurerAdapter or SecurityFilterChain and add authentication details. Happy Learning !! Sourcecode on Github Leave a Reply 8 Comments Most Voted Newest Oldest Inline Feedbacks View all comments About Us HowToDoInJava provides tutorials and how-to guides on Java and related technologies. It also shares the best practices, algorithms & solutions, and frequently asked interview questions. Our Blogs REST API Tutorial
__label__pos
0.683679
Redirect from Portlet_A to Portle_B on a different page - Websphere This is a discussion on Redirect from Portlet_A to Portle_B on a different page - Websphere ; I have some business logic that requires the user to enter data on Page_A within Portlet_A and if that data meets certain criteria, I then need to do an actionResponse.redirect(generatedURL) to display Portlet_B which resides on the page "Page_B". I ... + Reply to Thread Results 1 to 20 of 20 Thread: Redirect from Portlet_A to Portle_B on a different page 1. Redirect from Portlet_A to Portle_B on a different page I have some business logic that requires the user to enter data on Page_A within Portlet_A and if that data meets certain criteria, I then need to do an actionResponse.redirect(generatedURL) to display Portlet_B which resides on the page "Page_B". I am NOT transferring any data, I am simply doing a redirect. I read the following article, which seemed like it would be able to help me, but I am not being redirected to Page_B: http://www-01.ibm.com/support/docvie...id=swg21265900 It almost seems like the article above is only to generate a URL that a user will click on from within a JSP, but that is not what I am trying to do. I want to do a redirect. Here is what I have in Portlet_A: {code} public void processAction(ActionRequest request, ActionResponse response) throws PortletException, java.io.IOException { ... String generatedURL = PortletURLHelper.generateUrl("Page_B","Portlet_B",null, false, actionRequest, actionResponse); actionResponse.sendRedirect(selfcareUrl); ... } {code} The end result is a generated URL, but it takes me back to the home page, and not to Page_B. I have confirmed that "Page_B" and "Portlet_B" unique names are accurate. (If they were not accurate, then I would receive an Exception) Any suggestions? Am I going about this the wrong way? Thanks, _Sean 2. Re: Redirect from Portlet_A to Portle_B on a different page I have also tried to use the OffLineURLHelper to no avail: {code} MyServerContext serverContext = new MyServerContext( request.getServerName(), request.getServerPort()+"" ); String targetURLStr = OffLineURLHelper.generateUrl( "Page_B", "Portlet_B", null, serverContext, false); {code} I have also tried to set the boolean parameter to true as well (generate a private page), but that did not work either. Am I going at this the wrong way? Thanks, _Sean 3. Re: Redirect from Portlet_A to Portle_B on a different page Hi Sean, The code you are trying should work. make sure following import of PortletURLHelper should be *com.ibm.wps.l2.urlspihelper.portletrequest.Portle tURLHelper* portlet B unique name should be created using *xmlAccess only*. and the jar file you are referring to should be *wp.l2.url.helper.jar* thanks.. ** *[ExtremePortal|http://ExtremePortal.blogspot.com]* 4. Re: Redirect from Portlet_A to Portle_B on a different page Custom Unique Names. Is this the same as xmlAccess? Also, via another flow, I can get to Page_B.Portlet_B within my production environment. From here, my URL looks like: {code} https://www.mycompany.com/wps/myport...LUDlGUjEwMDA!/ {code} While in my test environment, the generated URL to Page_B.Portlet_B (with the code above) looks like: {code} http://wwwtest.mycompany.com:443/wps...6BdmBigCKihWQ/ {code} Obviously they should not be identical b/c they are in 2 different environments, but I find it interesting that the Production version is https and that it is much longer then the generated URL in test. Would this just be due to the different environments or is it an indication that I am doing something wrong? Thanks, Sean 5. Re: Redirect from Portlet_A to Portle_B on a different page After looking at this some more, the URL issue is apples and oranges. In reality, I am trying to redirect to Page_B.Portlet_B from my Login Portlet (aka Page_A.Portlet_A above). (Portlet_B is my custom Selfcare Portlet, and the only portlet on Page_B) From the Login portlet, I authenticate the user, and then redirect if they need to reset their password. Does this change anything? 6. Re: Redirect from Portlet_A to Portle_B on a different page you can assign unique name to page from portal admin but assigning unique name to portlet must be done using xml access only.. have a look at my article on *[Assigning unique name to portlet and page|http://extremeportal.blogspot.com/20...let-and.html]* after doing this, you are good to go... hope that helps.. thanks ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* 7. Re: Redirect from Portlet_A to Portle_B on a different page Maybe I am going at this the wrong way then. On my custom Selfcare Page, I have only the one Custom Selfcare Portlet. If I am going to a page that has only 1 portlet, do I really need to generate a URL to that specific portlet? Is there an easier way to just generate a URL to a particular page? Thanks, Sean 8. Re: Redirect from Portlet_A to Portle_B on a different page well, generating a url to portlet should be done two cases a) if you have more than one portlet on a page and you want to hit perticualar portlet. b) if you want to pass parameter to a portlet on the page. well in your case, that should also work. did you try creating portlet(custom selfcare) uniquename using xmlaccess and then redirecting to the generated url? thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* 9. Re: Redirect from Portlet_A to Portle_B on a different page Portlets and updating the name. What is the difference here? Anyway, even after I have updated the Portlet unique name via XML Access it does not work. The URL generated from different methods are below: PortletURLHelper.generateUrl(...) {code} /wps/portal/!ut/p/c1/04_SB8K8xLLM9MSSzPy8xBz9CP0os3gDSwtffycLY0ODUGdnAy MnbzMvSyMfYwsDE_1I_ShzJHkTR1cDTxOTgGALLy8jA29j_YLs QEUA2cfztw!!/ {code} Note that the above example does not generate "wps/myportal" which I would expect it to. Also notice that it does not provide the full URL, but everything after the base URL. It is also not https like I would expect and takes me back to the Home Page (just like any random gibberish would post the base url). OffLineURLHelper.generateUrl(...) {code} http://wwwtest.mycompany.com:443/wps...sQEUA2cfztw!!/ {code} Note that this code does generate the entire URL, and it uses "wps/myportal", but it is still not https. Also, when it redirects to this URL, I get a popup to download a file named "seS78VuK.part". I download it and open it and it is a few characters of serialized data. The file name changes every time, but the extension is always ".part" Any other suggestions? Thanks, Sean 10. Re: Redirect from Portlet_A to Portle_B on a different page I have also thought to try and implement this functionality as a filter instead of a redirect within the portlet. I saw another conversation you had here: http://groups.google.com/group/ibm.s...b8fcecef2e6aa4 Which helped, but I could not get the request to go through the filter, even though I set it up according to your blog post (or at least I think I did). Is this another viable option to solve my problem? Because I am thinking about heading down this path due to the fact that the URLs being generated are not accurate from within my portlet, and I wonder if it is because I am somehow not authenticated yet. (Even though I am authenticated before doing the redirect within my code) Thanks, Sean 11. Re: Redirect from Portlet_A to Portle_B on a different page I am beginning to wonder if the issue lies in that my code above "PortletURLHelper.generateUrl(...)" is called from a Portlet that extends GenericPortlet (a standard JSR-168 portlet) and the Portlet that I am calling is a Struts Portlet. In the IBM documentation at: http://www-01.ibm.com/support/docvie...id=swg21265900 it states that "This is only for making URLs between standard API portlets." Now, I do not know enough about Portlets yet to know whether or not a Struts Portlet is considered a "standard API portlet". Could this be a problem? 12. Re: Redirect from Portlet_A to Portle_B on a different page ok, let me ask you few things here you are saying "From the Login portlet, I authenticate the user, and then redirect if they need to reset their password" here, where does user decide that they want to reset their password? do you have something like this login portlet has something like this in UI... userid - textbox password - textbox reset pass - check box login button user comes, enter his uid and password and if he checks reset password check box then you authenticate user and redirect to selfcare portlet. when user doesn't check the reset password check box then you just authenticate the user and he just logged in to portal. did i understandd correctly? thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* At GMAIL DOT COM 13. Re: Redirect from Portlet_A to Portle_B on a different page You're close, but in this scenario the user is sent to Selfcare because a resetPwdFlag = true on their ldap account. The user does not elect to go into Selfcare, the system redirects them there because of the aforementioned logic. 14. Re: Redirect from Portlet_A to Portle_B on a different page ok got you.. so user will be sent to selfcare portlet only when *resetPwdFlag* found to be *true* for that user else they will be logged into to portal. is that right? this is going to be bit tricky.. here is how you can achieve this.. a)Implement AuthenticationFilter b)in filter after authentication, retrieve resetPwdFlag value for that user using PUMA. c)if this value found to be true, redirect user to self care portlet else just logged him in to portal. hope that helps... thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* AT GMAIL DOT COM 15. Re: Redirect from Portlet_A to Portle_B on a different page I started down that road yesterday just to try something different, and I have a few issues with that approach. 1) In order to redirect to the Selfcare Page, I am still going to have to generate the URL to the Selfcare Page. Granted, I will be using the ServletHelper, and not the PortletHelper, so maybe that will make some sort of difference within the ExplicitFilter 2) I have put together an ExcplicitLogin filter, and it is not being called. It is being initialized (I see the logging output from my init() method), but the login() method is not called. (note, I do not have the code in front of me, but whatever the method is that does t he work is not called) Do you have any insight into the issues above? Am I going down the right path? Thanks again for the help! _Sean 16. Re: Redirect from Portlet_A to Portle_B on a different page 1) yes that will require a code change in your class which is implementing ExplicitLoginFilter you can also redirect to selfcare page without generation url to selfcare page, try doing following code after authentication in filter, portalLoginContext.setRedirectURL("wps/portal/SelfcarePage"); ( though i am note about this..) 2) LoginService.login() method makes a call to login method of filter and there authentication happens.... so you should have LoginService.login call in your custom login portlet. thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* 17. Re: Redirect from Portlet_A to Portle_B on a different page Custom Properties login.explicit.filterchain = com.mclane.filters.LoginToSelfcareFilter {code} I put the class com.mclane.filters.LoginToSelfcareFilter in a jar file, which is in the /shared/apps dir I know that the com.mclane.filters.LoginToSelfcareFilter is being initialized b/c I see the output of init() in SystemOut.log file. However, the login() method is never called. Does anything look wrong from what I have put together so far? Am I using the proper Auth class and extending the proper ExplicitLoginFilter? Thanks, Sean 18. Re: Redirect from Portlet_A to Portle_B on a different page WPS 6.0 and above versions provides a LoginService for authentication. Use LoginService and not AuthenticationPortletService . lookup the LoginHome in init method of portlet *javax.naming.Context ctx = new javax.naming.InitialContext();* *PortletServiceHome psh = (PortletServiceHome) ctx.lookup(LoginHome.JNDI_NAME);* *loginHome = (LoginHome) psh.getPortletService(LoginHome.class);* in processAction, get loginService from LoginHome and call login *LoginService loginService = (LoginService) loginHome.getLoginService(request, response);* *loginService.login(userId, password.toCharArray(), contextMap, null);* This loginService.login makes a call to login of filter. thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* 19. Re: Redirect from Portlet_A to Portle_B on a different page Sorry for my delayed response, I have been out of town the past week or so. I have integrated the LoginHome and LoginService into my custom login portlet. However, the flow still does not go into my Filter. The only thing I changed was that I pass in an Empty Map instead of your referenced "contextMap". {code} loginService.login(userId, password.toCharArray(), Collections.EMPTY_MAP, null); {code} What is your contextMap, and should I be using that? My setup for "registering" my filter (login.explicit.filterchain) via the "WP AuthenticationService" is still consistent with what I have above. Any other suggestions? Thanks, Sean 20. Re: Redirect from Portlet_A to Portle_B on a different page contexMap should have at least following key and value might change as per your env....new Boolean(false) should be fine in your case. here you go... *Map contextMap = new HashMap();* *contextMap.put(LoginService.DO_RESUME_SESSION_KEY , new Boolean(false));* *LoginService loginService = (LoginService) loginHome.getLoginService(request, response);* *loginService.login(userId, password.toCharArray(), contextMap, null);* this should now call login method of filter class. thanks.. ** *[Extreme Portal|http://ExtremePortal.blogspot.com]* + Reply to Thread
__label__pos
0.929998
在数组中求1,2,3,4分别出现的次数 slowly • 21 [1,1,1,2,3,4] => [3,1,1,1] [1,1,1,1] => [4,0,0,0] [1,2,2,2]=> [1,3,0,0] 回复 阅读 716 4 个回答 const count = (arr) => { const ans = [0,0,0,0,0] for(let i = 0; i< arr.length; ++i) { ++ans[arr[i]] } return [ans[1],ans[2],ans[3],ans[4]] } Lenhoon • 2 新手上路,请多包涵 可以使用hash表来存储,key是数组中的值,value是遍历时出现的次数 reduce 版本 const getCountsArr = (arr) => arr.reduce((prev, next) => (prev[next - 1]++, prev), [0, 0, 0, 0]); console.log(getCountsArr(arr)); function counterItem(arr){ const recorder =[0,0,0,0] return arr.reduce((p,c)=>{ p[c-1]++ return p },recorder) } 运行结果: image.png 已参与了 SegmentFault 思否「问答」打卡,欢迎正在阅读的你也加入。 撰写回答 你尚未登录,登录后可以 • 和开发者交流问题的细节 • 关注并接收问题和回答的更新提醒 • 参与内容的编辑和改进,让解决方法与时俱进 你知道吗? 宣传栏
__label__pos
0.997845
Plantronics + Polycom. Nyt yhdessä logo Ostoskori Tuote Kappalemäärä Hinta Välisumma Ostoskorin välisumma Plantronics Manager Pro: How an IT Administrator Can Deploy Changes to Device Settings Artikkelin tunniste : 000023279 To deploy changes to device settings, follow the steps below: 1. In the Configure section, click Settings. 2. In the Products section, click the link for the product for which you want to deploy new device settings. 3. Expand the Deployment Details section. 4. In the Admin Notes text box, enter a reason for the software update. 5. From the Recipients drop-down list, choose whether to deploy to All Users of the Device or to Specify Groups. Note: If you choose to Specify Groups, click at least one name in the Groups column. Then, to add it to the Deploy To column, click the > button.. 1.Expand the Product Settings section. 2. Make the desired changes. 3. If you want to lock the settings so that end users cannot change them, click the lock icon for all applicable settings. 4. Click the Review Deployment button. 5. After reviewing the deployment information, either click Deploy to run now, or to deploy at a later date, click Save for Later.  < Previous 10 | Next 10 >() Suodatusperuste
__label__pos
0.83399
ssm框架下 多張圖片上傳 今天做一個機房資源多圖片上傳的功能; 功能描述:  一個機房資源有多張圖片,那么就必須得重新建一張表格(resourcePic),專門放機房資源圖片, 主鍵:為機房資源編號(resourceNo), 外鍵:機房資源信息表(baseResourceInfo)里面的機房資源編號(resourceNo),和  表格resourcePic關聯; 建好表格之后; 第一步:先寫jsp頁面的內容: 因為添加功能的頁面是用form表單進行提交的,所以: 1, <input type="file" name="fileImg" id="fileImg" onchange='openWordFile(event)' /> // type類型為file,則是上傳文件, <input type="hidden" name="baseResourceImage" id="baseResourceImage" value="" /> //from表單的隱藏域 2, <script type="text/javascript"> var addType='${returnCode}'; //新增還是編輯 /** * 添加圖片 */ var openWordFile = function(event) { var input = event.target; // target屬性就用於返回最初觸發事件的DOM元素, var reader = new FileReader(); // FileReader 對象允許Web應用程序異步讀取存儲在用戶計算機上的文件(或原始數據緩沖區)的內容 // FileReader.onload 屬性對該事件進行處理。 //加載完成后 reader.onload = function(){ var dataURL = reader.result; //結果 fnInAddImgupload(dataURL); $("#fileImg").val(""); }; // readAsDataURL 該方法會讀取指定的 File 對象。 reader.readAsDataURL(input.files[0]); }; // 展示圖片 function fnInAddImgupload(base64){ // 生成一個objStr 由圖片的base64編碼 和 ”X” 組成 var objStr='<div class="ResouImg"><em onclick="fnInremoveImg(this)">x</em><img src="'+base64+'"/></div>'; $("#show").append(objStr); //append() 方法在被選元素的結尾(仍然在內部)插入指定內容。 //如果 圖片大於等於三個 則隱藏添加按鈕 if($(".ResouImg").length >= '3'){ $("#addmian").hide(); } }; // 刪除圖片 function fnInremoveImg(obj){ $(obj).parent().remove(); if($(".ResouImg").length < '3'){ $("#addmian").show(); } } //點擊添加的時候 function checkBaseResource(){ //添加 if(addType == '1'){ $(".ResouImg img").each(function(){ var ingurl=$(this).attr("src"); getBase64Image(ingurl,function(dataURL){ array.push(dataURL); }); }); }else{ $(".ResouImg img").each(function(){ var ingurl=$(this).attr("src"); array.push(ingurl); }); }; setTimeout(function(){ $("#baseResourceImage").val(array.join("@")); $('#baseResource').submit(); },1000); } //base64方法 function getBase64Image(imgurl,callback) { var image = new Image(); image.src = imgurl; image.onload = function(){ var canvas = document.createElement("canvas"); canvas.width = image.width; canvas.height = image.height; var ctx = canvas.getContext("2d"); ctx.drawImage(image, 0, 0, image.width, image.height); var ext = image.src.substring(image.src.lastIndexOf(".")+1).toLowerCase(); var dataURL = canvas.toDataURL("image/"+ext); callback?callback(dataURL):null; //調用回調函數 }; }; </script>   第二步:后台業務邏輯; public String saveAndUpdataBaseResource(HttpServletRequest httpRequest,@ModelAttribute("baseResource")BaseResource baseResource, @RequestParam(value = "basenum", required = false) String basenum, @RequestParam(value = "baseName", required = false) String baseName, @RequestParam(value = "baseNo", required = false) String baseNo){ String account = (String) httpRequest.getSession().getAttribute("account"); // form表單提交 的隱藏域 basenum = httpRequest.getParameter("basenumV"); baseName = httpRequest.getParameter("baseNameV"); baseNo = httpRequest.getParameter("baseNoV"); String baseResourceImage = httpRequest.getParameter("baseResourceImage"); try { //進行判斷,如果得到的id為0或者為空則進行添加操作,否則進行編輯操作 if (baseResource.getId() == 0 || "".equals(baseResource.getId())) { baseResource.setCreatetime(DateUtils.formatDefaultDateString("yyyy-MM-dd HH:mm:ss")); baseResource.setResourceno(UUID.randomUUID().toString().replace("-", "")); baseResource.setCreater(account); baseResource.setResourcestate("1"); baseResource.setBaseno(baseNo); baseResource.setModifier(null); baseResource.setModifytime(null); baseResource.setBrief(null); baseResource.setResourcepic(null); baseResourceService.addBaseResource(baseResource); DBLog.busInfo("add baseResource", account, baseResource.getResourcename(), "機房"+baseName+"添加一條設備資源,名稱為"+baseResource.getResourcename(), httpRequest.getRemoteAddr()); //添加完其他信息,在添加圖片,注意 圖片和信息分兩個表,用resourceNo關聯 BaseResourcePic resourcePic = new BaseResourcePic(); //前端頁面傳來的是base64加密,所以我們就需要進行base64解密 String resourceimgUrl = LockSeqUtil.getBaseResourceImgDir(); //封裝一個方法,得到圖片url的地址,具體后面進行分享 String[] pathArray = FileOperateUtils.uploadBase64Img(baseResourceImage, resourceimgUrl); //封裝一個方法base64多圖片上傳,具體后面進行分享 for (int i = 0; i < pathArray.length; i++) { String pathUrl = pathArray[i]; if(StringUtils.isNotBlank(pathUrl)){ resourcePic.setResourcepic(pathUrl); resourcePic.setResourceno(baseResource.getResourceno()); resourcePic.setCreater(account); resourcePic.setCreatetime(DateUtils.formatDefaultDateString("yyyy-MM-dd HH:mm:ss")); baseResourceService.addBaseResourcePic(resourcePic); } } }else{ //編輯這一塊的業務邏輯,是根據資源編號先查,查出來后 直接刪除,再把所有的圖片一起上傳 //根據資源編號查詢資源圖片,查出來后直接刪除 String resourceNo = baseResource.getResourceno(); Map<String,Object> paramMap = new HashMap<String,Object>(); paramMap.put("resourceno", resourceNo); List<BaseResourcePic> resourcePicA = baseResourceService.handleGetResourcePic(paramMap); //根據resourceNo查詢圖片 if(resourcePicA.size()>0){ for (BaseResourcePic baseResourcePic : resourcePicA) { Integer picid = baseResourcePic.getId(); baseResourceService.HandleDeleteResourcePic(picid); // 根據id刪除圖片 } } baseResource.setCreatetime(DateUtils.formatDefaultDateString("yyyy-MM-dd HH:mm:ss")); baseResource.setCreater(account); baseResource.setResourcestate("1"); baseResource.setBaseno(baseNo); baseResource.setModifier(account); baseResource.setModifytime(DateUtils.formatDefaultDateString("yyyy-MM-dd HH:mm:ss")); baseResource.setBrief(null); baseResource.setResourcepic(null); baseResourceService.HandleUpdateBaseResource(baseResource); DBLog.busInfo("update baseResource", account, baseResource.getResourcename(), "機房"+baseName+"修改資源設備,名稱為"+baseResource.getResourcename(), httpRequest.getRemoteAddr()); BaseResourcePic resourcePic = new BaseResourcePic(); String resourceimgUrl = LockSeqUtil.getBaseResourceImgDir(); String[] pathArray = FileOperateUtils.uploadBase64Img(baseResourceImage, resourceimgUrl); for (int i = 0; i < pathArray.length; i++) { String pathUrl = pathArray[i]; if(StringUtils.isNotBlank(pathUrl)){ resourcePic.setResourcepic(pathUrl); resourcePic.setResourceno(baseResource.getResourceno()); resourcePic.setCreater(account); resourcePic.setCreatetime(DateUtils.formatDefaultDateString("yyyy-MM-dd HH:mm:ss")); baseResourceService.addBaseResourcePic(resourcePic); } } } return "redirect:/baseResource/godetails?baseNo="+baseNo; } catch (Exception e) { e.printStackTrace(); return "redirect:godetails"; } } 封裝方法一: /** * 機房資源圖片url地址 * @return */ public static String getBaseResourceImgDir() { Date d = new Date(); SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd"); String dateNowStr = sdf.format(d); String dir = "/egsFiles/baseResource/" + dateNowStr + "/"; File file = new File(dir); if (!file.exists() && !file.isDirectory()) { file.mkdirs(); } return dir; }   封裝的方法二:base64多圖片上傳 /** * base64多圖片上傳 * @param imgStr, filePath 該參數是controller 層傳過來的; * @throws Exception */ public static String[] uploadBase64Img(String imgStr,String filePath) throws Exception { String[] strArray = new String[]{}; if (StringUtils.isNotBlank(imgStr)) { // 傳過來的imgStr用“@”隔開形成一個數組 strArray = imgStr.split("@"); } String[] filePathArray = new String[strArray.length + 1]; BASE64Decoder decoder; String imgFilePath; String tempStr; //循環圖片地址數組 for (int j = 0; j < strArray.length; j++) { //拼接圖片地址 imgFilePath = filePath + UUIDGenerator.getUUID() + ".png"; decoder = new BASE64Decoder(); tempStr = strArray[j].substring(strArray[j].indexOf(",") + 1, strArray[j].length()); // Base64解碼 byte[] bytes = decoder.decodeBuffer(tempStr); for (int i = 0; i < bytes.length; ++i) { if (bytes[i] < 0) { bytes[i] += 256; } } // 生成圖片 OutputStream out = new FileOutputStream(imgFilePath); filePathArray[j] = imgFilePath; out.write(bytes); out.flush(); out.close(); } return filePathArray; }   該隨筆只是為了鞏固自己的記憶,,,,也希望大家給予意見   免責聲明! 本站轉載的文章為個人學習借鑒使用,本站對版權不負任何法律責任。如果侵犯了您的隱私權益,請聯系本站郵箱[email protected]刪除。   粵ICP備18138465號   © 2018-2024 CODEPRJ.COM
__label__pos
0.995799
LINUX.ORG.RU Не работает встроенный в Clementine эквалайзер  , 0 1 Собрал Clementine с подержкой VK.com вот только в нем отчего то не работает встроенный эквалайзер. В версии из репозитария Debian все норм, если ставить оттуда, то эквалайзер действует. В чем проблема? При сборке надо было явным образом указать какую-нибудь опцию? Debian Jessie, KDE. Модули gstreamer установлены все. Как собирал и ставил: # git clone https://github.com/clementine-player/Clementine.git && cd Clementine && cd bin # cmake -DHAVE_VK=1 .. # make # sudo checkinstall --install=no # sudo apt-get install clementine # sudo dpkg -i /путь_к_собранному_пакету/тра-та-та.deb Ответ на: комментарий от melkor217 Извиняюсь, просто я новичек. А где эти флаги смотреть? Надо что то вручную править в исходниках? Или работа с флагами осуществляется через консоль при сборке? Буду признателен, если дадите ссыль на понятный мануал. nadim () Ответ на: комментарий от Valdor Собрал со следующими параметрами стянув их из debian/rulez: cmake dh_auto_configure — \ -DBUNDLE_PROJECTM_PRESETS=OFF \ -DHAVE_VK=1 \ -DSTATIC_SQLITE=OFF \ -DI_HATE_MY_USERS=ON \ -DMY_USERS_WILL_SUFFER_BECAUSE_OF_ME=ON \ -DUSE_SYSTEM_PROJECTM=ON \ -DUSE_SYSTEM_QXT=ON \ -DCMAKE_CXX_FLAGS="-DQT_NO_DEBUG_OUTPUT -DQT_NO_WARNING_OUTPUT" \ -DQXTCORE_INCLUDE_DIRS=/usr/include/qxt/QxtCore/ \ -DQXTGUI_INCLUDE_DIRS=/usr/include/qxt/QxtGui/ \ docbook-to-man debian/clementine.sgml > debian/clementine.1 \ dh_auto_build \ rm -f debian/clementine.1 \ dh_auto_clean \ dh_installchangelogs Changelog .. Установил собранный пакет, эквалайзер не заработал. Печалька. Не знайте случайно в чем еще причина может быть, раз дело оказалось не во флагах? nadim ()
__label__pos
0.655443
Excel Terminology Not sure of the difference between a workbook and a worksheet? How do you know if a cell is active or not? You're not alone. Knowing the terms used in Excel is knowing the possibilities in Excel. The rewards for mastering Microsoft Excel are numerous, though the first steps may be intimidating. Some may get lost in the spreadsheet jargon and may end up more confused that when they first started. Let's take a look at some of the most common terminology you'll come across as an Excel user. Microsoft Excel terminology • Workbook — The workbook refers to an Excel spreadsheet file. The workbook houses all of the data that you have entered and allows you to sort or calculate the results. A workbook that is available to be viewed and edited by multiple users on a network is known as a Shared Workbook. • Worksheet — Within the workbook is where you'll find documents called worksheets. Also known as spreadsheets, you can have multiple worksheets nestled in a workbook. Tabs at the bottom of the of the screen will indicate which of your worksheets you are currently working on. This is also known as an active worksheet or active sheet. • Cell — A cell is a rectangle or block housed in a worksheet. Any data that you want to enter into your worksheet must be placed in a cell. Cells can be color coded, display text, numbers and the results of calculations, based on what you want to accomplish. An Active Cell is one that is currently opened for editing. • Columns and Rows — Columns and Rows refer to how your cells are aligned. Columns are aligned vertically while rows are aligned horizontally. • Column and Row headings — These headings are the lettered and numbered gray areas found just outside of columns and rows. Clicking on a heading will select the entire row or column. You can also alter the row height or column width using the headings. • Workspace — Much like worksheets in a workbook, a workspace allows you to open numerous files simultaneously. • Ribbon — Above the workbook is a section of command tabs called the Ribbon. A multitude of options are found behind each tab of the ribbon • Cell Reference — A cell reference is a set of coordinates that identifies a specific cell. It's a combination of letters and numbers. A5, for example, would point to the cell located where column A and row 5 intersect. • Cell Range — A Cell range is a collection of cells that have been identified as a group based on a variety of criteria. By using a colon (:) between cell references, Excel can determine the range, also known as an array. A range in a row, for example, could look like A1:C1, telling the formula to look at the cells in a row between A1 and C1, while B4:D9 would tell the formula to look at all cells in a box bounded by columns B and D and rows 4 and 9. A 3-D reference refers to a range that encompasses more than one worksheet in the same workbook. • Merged Cell — When two or more cells are combined, it's become what is known as a merged cell. • Template — A template is a formatted workbook or worksheet designed to help users fulfill a specific need in Excel. Examples of this include stock analysis, process map, and calendar. • Operator — Operators are symbols or signs that indicate which calculation must be made in an expression. Operators do not necessarily refer to simple mathematical types; comparison, text concatenation or reference operators also exist. • Formula — A sequence inside a cell that is used to produce a value. It must begin with an equal (=) sign. This could be a mathematical equation, cell references, functions or operator. A formula is also known as an expression. • Formula Bar — Nestled between the ribbon and workbook, the Formula Bar will display the contents of an active cell. In the case of formulas, the formula bar will display all components of the formula. • Function — Functions are formulas that are pre-built into Excel. They are designed to help simplify potentially complex formulas in a worksheet. • Error Code — Error Codes appear if Excel finds a problem with a provided formula. • Cell Formatting — This is the act of changing the in which cell data is displayed in the spreadsheet. When you format cells, only the visual appearance of the cells is changed; the value within the cells remain constant. • Conditional Formatting — Formatting is applied only when the cell meets determined criteria such as duplicate values or values above or below a threshold. • Filter — Filters are rules that you can employ to decide which rows in a worksheet to display. These filters can use data such as conditions or values. • Freeze Panes — Freezing Panes allows you to select specific columns and/or rows to remain visible on the worksheet, even if you are scrolling, such as header cells that label a column. • AutoFill — This enables you to effortless copy data to more than one cell. • AutoSum — This feature will add up the numbers you have entered in your sheet and displays the total in a cell of your choosing. • AutoFormat — This is an automated format application to cells that match pre-determined criteria. This could be as simple as font alignment and size. • Data Validation — This feature helps to prevent incorrect data from being entered into your worksheet. This most commonly used to create drop-down lists for common terms. Data validation promotes consistency and accuracy in the data to be entered. • Pivot Table — This is a data summarization tool most commonly used to sort, average to sum up data automatically. The information is pulled from one table while the results are displayed in another. Pivot Tables makes it easy to retrieve specific information from a large source of data. • Pivot Chart — This type of chart provides a visual aid for pivot tables. By providing graphical representations of the pivot table data, the user can provide a level of interactivity with the data. • Pivot Area — The pivot area is a point on the worksheet where you would drag a Pivot Table field in order to reorganize how a report is displayed. • Source Data — This is the information used to create your pivot table. It can either exist within the worksheet or from and an external database. • Values Area — In a pivot table, Value areas are identified as the cells that contain the summary information. • Item — These are sub-categories of fields in your pivot table. If you have a field that is marked State, the items could be Alabama, Alaska and so on. Wrapping up While they are so many other Microsoft Excel terms to cover, the above list will get you on the right track to becoming a table titan. Which terms did you stumble over when you first started using Excel? Are there any other terms that you would suggest for this list? Let us know!
__label__pos
0.840179
What is Half of 2/242? Are you looking to work out and calculate half of 2/242? In this really simple guide, we'll teach you exactly what half of 2/242 is and walk you through the step-by-process of how to calculate half of any fraction. As always with our series of calculation posts, it's important to remember that the number above the fraction line is called the numerator and the number below the line is called the denomination. So what do we mean by half? Half means there are two of the thing, and so all you need to do is divide it by two. Half of 2/242 is just another way of saying 2/242 divided by 2: 2 / 242 ÷ 2 Now we know that "half" means to divide by 2, how do we half 2/242? Remember that a fraction a part of the whole, so the higher the denominator is, the smaller the piece. The answer is that the numerator stays the same and we multiply the denominator by 2: 2 / 242 x 2 = 2 / 484 That's it! Working out half of 2/242 really is that easy. Hopefully you understood the process and can use the same techniques to halve other fractions as well. The complete answer is below (simplified to the lowest form): 1/242 Convert Half of 2/242 to Decimal Here's a little bonus calculation for you to easily work out the decimal format of half 2/242. All you need to do is divide the numerator by the denominator and you can convert any fraction to decimal: 2 / 484 = 0.0041
__label__pos
0.988134
Questions tagged [xbox-controller] The tag has no usage guidance. Filter by Sorted by Tagged with 0 votes 1answer 24 views How to support more than 4 XInput controllers? I'm making a multiplayer game in GameMaker Studio 2. The game supports upto 12 controllers, where the first four are XInput controllers. The rest would be DirectInput. If I try playing with 8 Xbox ... 0 votes 1answer 41 views Is it legit to use Custom Controller Buttons for PC Games in Unity? Not sure if this is off topic, but I have a problem. I'm making buttons for my own controller in my Game. For example, the attached images will appear to indicate the player to press the A or B ... 2 votes 1answer 316 views Input.GetAxis returns wrong sign only for 1 and -1 I'm using an Xbox 360 controller on Windows 10 with Unity, and the maximum values have the opposite sign as the rest of that side. For example, if I tilt the stick up, I get values from 0.1 to 0.99, ... 2 votes 1answer 856 views How can SDL tell what Xbox360 controller is what player? I init my controllers like this: ... 0 votes 2answers 6k views Unity3d - How to use controller joystick to look around? I try to use my xbox controller as input device for a game targeted to run on windows. However the documentation fails to explain how to set it up. E.g. how can I use the right stick from my xbox ... 2 votes 1answer 2k views Unity Controller navigating UI I want players to be able to use a controller to navigate our game's menus. We use Unity's new UI, and in the EventSystem have specified a controller axis for our vertical axis. However the selection ... 2 votes 0answers 86 views Recording Xbox controller button presses? I would like to generate some sort of log file of xbox gamepad events while playing an existing game on PC, such as Call of Duty. I am not developing a game necessarily, I just would like a static ... 2 votes 2answers 5k views Why is Unity3D on OSX ignoring my XBox 360 controller? I'm using a fork of the Tattie Bogle driver that has a signed kext. The System Settings panel shows all inputs correctly. One axis is configured in Unity3D 5 like this: Still, I don't see any input ... 1 vote 1answer 2k views Handling XBox 360 Controller Connections in LibGDX after Game has Started I am currently making a game with the java library LibGDX and want to add XBox 360 controller support using the GDX-Controllers extension. At the moment I have a setup where I create a listener to ... 2 votes 1answer 1k views Xbox Controller Not Connecting in Monogame Project I have recently been playing around with the support of the wired XBox 360 controller in Windows development. I am developing in C# in Visual Studio 2012. I have created 2 projects. The first (... 11 votes 1answer 470 views How to know if the player is signed in? I was wondering if there's any way to know if the "player" is signed in or not? Something like this: ... 5 votes 1answer 829 views Does SDL running on Mac OS not recognize Xbox controllers? I'm trying to learn a bit of SDL, and have been bouncing between Windows and Mac platforms, but am noticing that an SDL program running on my Macbook doesn't recognize the presence of the Xbox ... -4 votes 1answer 2k views How was the Castle Crashers game for X-box made? [closed] The graphics and animations for Castle Crashers were definitely made with Adobe Flash. But as far as I know X-box doesn't directly support Flash and Adobe Air technology. I love flash for 2D game ... 5 votes 3answers 4k views Are there official button images for the xbox controller that I can get for free? If you want to support xbox controllers, it is always wise to use the native images that the controller uses. Can one obtain those from Microsoft without being a registered, aka paying developer? 0 votes 1answer 1k views Maximum number of controllers Unity3D can handle I've been trying to find out the maximum amount of xbox controller Unity3D can handle on one editor. I know through networking, Unity is capable of having as many people as your hardware can handle. ... 3 votes 2answers 220 views Is there a expected set of button mappings games commonly use? I am making a game that will support a XBox 360 controller but I would like to try and keep the default button mappings to be what is expected from a user's past history from playing other games. Is ... 0 votes 1answer 52 views Camera changes view when controller connected I have a weird situation. I have a model set to 0 for X,Y and Z. My camera's position is set to: 0 (X-value, but updates when the model moves around) the model's height + 20f (about the same level as ... 0 votes 1answer 741 views Moving Character in C# XNA Not working I'm having trouble trying to get my character to move for a game I'm making in my sparetime for the Xbox. However, I can't seem to figure out what I'm doing wrong , and I'm not even sure if I'm doing ... 1 vote 2answers 916 views How can i make my program use 2 windows? I'm working in Xna 4.0 and iwant to make a program that uses the same code in 2 different windows. What i want out of this is that one player plays in one window and the other player to play in the ... 0 votes 1answer 1k views Gamepad API For Multiple Pad Types I'm a C++ newbie working on adding gamepad support to Moai as a learning exercise and because I'd like to be able to use a gamepad with it. I'm starting by implementing the Xbox 360 gamepad via XInput ... 5 votes 3answers 4k views How to get smooth circular input from a thumbstick in XNA? I have a mouse-based game that I'm trying to get to work nicely with the Xbox gamepad. My major issue is trying to get the cursor to move smoothly like it does with the mouse. Using GamePadDeadZone.... 4 votes 2answers 2k views How can I map thumbsticks to cardinal directions? I've got a tile based game that has notions of moving up, down, left, right and the diagonals (by alternating horizontal/vertical movement). I've found that if I use the default ... 8 votes 1answer 851 views When pausing an XNA Xbox game and showing a “pause menu”, which controller buttons should I use? I'm making a simple 2D XNA game for Xbox 360. My game can be paused by pressing the Start button on the Xbox controller. While paused, a simple menu pops up with ... 12 votes 2answers 801 views Correct way to abstract an XBox Controller I've got a XBox360 controller which I'd like to use as input for an application. What I can't work out is the best-practice way to expose this via an interface. Behind the scenes, the class which ... 4 votes 2answers 1k views How do I get consistent Xbox360 Controller input across Linux, Mac, and Windows? I'm developing a browser plugin to provide joystick access to all browsers on all platforms. The issue that I'm running into is that OS X doesn't seem to provide Xbox 360 joystick input without ... 4 votes 2answers 2k views How can I get six Xbox controllers to provide input to an HTML5 game? I'm creating a six player HTML 5 game designed to be played locally (Red Ice). I've previous set up handling 7 Wiimotes using something along the lines of Joy2Key to map each input for each player ...
__label__pos
0.841934
Dotty Documentation 0.3.0-bin-SNAPSHOT class VCInlineMethods extends MiniPhaseTransform with IdentityDenotTransformer This phase inlines calls to methods of value classes. A value class V after [[ExtensionMethods]] will look like: class V[A, B, ...](val underlying: U) extends AnyVal { def foo[T, S, ...](arg1: A1, arg2: A2, ...) = V.foo$extensionT, S, ..., A, B, ...(arg1, arg2, ...) ... } Let e have type V, if e is a stable prefix or if V does not have any class type parameter, then we can rewrite: e.fooX, Y, ... as: V.foo$extensionX, Y, ..., e.A, e.B, ...(args) Otherwise, we need to evaluate e first: { val ev = e V.foo$extensionX, Y, ..., ev.A, ev.B, ...(args) } This phase needs to be placed after phases which may introduce calls to value class methods (like [[PatternMatcher]]). This phase uses name mangling to find the correct extension method corresponding to a value class method (see [[ExtensionMethods.extensionMethod]]), therefore we choose to place it before phases which may perform their own name mangling on value class methods (like [[TypeSpecializer]]), this way [[VCInlineMethods]] does not need to have any knowledge of the name mangling done by other phases. [-] Constructors VCInlineMethods ( ) [-] Members [+] override def phaseName : String A name given to the Phase that can be used to debug the compiler. For instance, it is possible to print trees after a given phase using: $ ./bin/dotc -Xprint:<phaseNameHere> sourceFile.scala [+] private def rewire ( tree: Tree , mtArgs: List [ Tree ] , mArgss: List [ List [ Tree ] ] ) ( implicit ctx: Context ) : Tree Replace a value class method call by a call to the corresponding extension method. [+] private def rewireIfNeeded ( tree: Tree ) ( implicit ctx: Context ) : Tree If this tree corresponds to a fully-applied value class method call, replace it by a call to the corresponding extension method, otherwise return it as is. [+] override def runsAfter : Set [ Class [ Nothing <: Phase ] ] List of names of phases that should precede this phase [+] override def transformApply ( tree: Apply ) ( implicit ctx: Context , info: TransformerInfo ) : Tree [+] override def transformSelect ( tree: Select ) ( implicit ctx: Context , info: TransformerInfo ) : Tree [+] override def transformTypeApply ( tree: TypeApply ) ( implicit ctx: Context , info: TransformerInfo ) : Tree
__label__pos
0.69054
extensionUtils: Add gettext convenience helpers Florian Müllner requested to merge fmuellner/gnome-shell:gettext into main We have initTranslations() for binding an extension's gettext domain, but nothing to help with using gettext from an extension. Such help would be useful though, as an extension that calls textdomain() like a normal application would inadvertently changes the default domain for the whole gnome-shell process. Instead, extensions have to use domain-specific versions of the gettext functions: const Gettext = imports.gettext.domain('my-extension'); const _ = Gettext.gettext; Make this a bit easier by adding those functions directly to the extensions object when initTranslations() is called, then expose helper functions for calling them. (Picked up from !1193 (closed)) Fixes #2594 (closed) Merge request reports
__label__pos
0.965783
Technology Blogs by Members Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix! cancel Showing results for  Search instead for  Did you mean:  Former Member 0 Kudos 389 The following code: create table test1 (a int, b int, c int) create table test2 (a int, b int, c int) create proc sp_test as begin select a.a, a.b from test1 a where a.a > 1 and a.a is not null and exists (select 1 from test2 b where b.a = a.a) order by 1, 2 return 0 end will work on ASE versions 15, 16.0.1.2, 16.0.2.2 but not on 16.0.1.3: Msg 207, Level 16, State 4: Server 'XXX', Procedure 'sp_test', Line 4: Invalid column name 'a'. Msg 207, Level 16, State 4: Server 'XXX', Procedure 'sp_test', Line 4: Invalid column name 'a'. 1> 1. Depending on how many conditions are in the where clause 2. Depending whether exists is present 3. Depending it sorting is done by position Did anyone come across this?  The same statement run from outside SP will work correctly: select a.a, a.b from test1 a where a.a > 1 and a.a is not null and exists (select 1 from test2 b where b.a = a.a) order by 1, 2 a          b ----------- ----------- (0 rows affected) Took me a while to understand what the problem is.... HTH, Andrew 1 Comment Labels in this area
__label__pos
0.939007
@bb-cli/base Common utilities and interface parser for the pluggable CLIs Stats stars 🌟issues ⚠️updated 🛠created 🐣size 🏋️‍♀️ @bb-cli/base Minified + gzip package size for @bb-cli/base in KB Readme Modules cli : object CLI parser based on commander with some additions and overrides. error : object Create the new instance of the CommandError object, ensuring that it properly extends from the Error class. log : object Shared logging system based on the default npmlog system. sh : object Portable (Windows/Linux/OS X) implementation of Unix shell commands based on shelljs ui : object User Interface for the terminal (Prompting / Styles / Spinners / Progress) utils : object Set of utilities. Functions fileExists(file)boolean cli : object CLI parser based on commander with some additions and overrides. Link: https://github.com/tj/commander.js/ Example Create a main command with 3 sub-commands: import { cli } from '@bb-cli/base'; cli.version('0.0.1') .command('install [...packages]', 'install one or more packages') .command('search [query]', 'search with optional query') .command('list', 'list packages installed', {isDefault: true}) .parse(process.argv); Create the install sub-command: import { cli } from '@bb-cli/base'; cli .description('Install package from registry') .arguments('[...packages]', 'install one or more packages') .option('-f, --force [force]', 'force installation ', false) .option('-r, --registry [registry]', 'registry host ') .alias('i') .man( path.resolve(__dirname, '../../man/install.md')) .parse(process.argv); let packages = cli.args; // get the packages as variadic arguments let force = cli.force // get the force option error : object Create the new instance of the CommandError object, ensuring that it properly extends from the Error class. Example Setting up the log header: import { error } from '@bb-cli/base'; const out = sh.exec('ls', { silent: true }); if (out.code !== 0) { throw error({ type: 'Shell.Exec', code: out.code, message: out.stderr.trim(), }); } log : object Shared logging system based on the default npmlog system. Link: https://github.com/npm/npmlog Example Setting up the log header: import { log } from '@bb-cli/base'; import { name as packageName } from './package.json'; log.heading = `[${packageName}]`; Basic commands: log.silly(prefix, message, ...) log.verbose(prefix, message, ...) log.info(prefix, message, ...) log.http(prefix, message, ...) log.warn(prefix, message, ...) log.error(prefix, message, ...) Setting up the global env LOG_LEVEL: LOG_LEVEL=verbose bb-my-command sh : object Portable (Windows/Linux/OS X) implementation of Unix shell commands based on shelljs Link: https://github.com/shelljs/shelljs Example Basic Example import { sh } from '@bb-cli/base'; sh.ls(`./`) // or sh.exec(`ls ./`, function(exitCode, stdOut, stdErr) { log.verbose('cmd exitcode:', exitCode); log.verbose('cmd output:', stdOut); log.verbose('cmd stderr:', stdErr); }); ui : object User Interface for the terminal (Prompting / Styles / Spinners / Progress) ui.prompt ⇒ Promise Create a self contained inquirer module via inquirer.createPromptModule method. Kind: static constant of ui Access: public Link: https://github.com/sboudrias/Inquirer.js Param Type Description questions array | object A collection of inquirer questions Example Basic example: import { ui } from '@bb-cli/base'; const questions = [{ type: 'input', name: 'name', message: 'Who are you?', default: 'Guest', validate: value => true, filter: value => value, }]; ui.prompt(questions).then( answers => { // do something with the answers here. }) ui.colors : object Colors styles console based on colors Kind: static constant of ui Access: public Link: https://github.com/Marak/colors.js Example Basic example: import { ui } from '@bb-cli/base'; let logInfo = ui.colors.info('Some info style output'); log.verbose(`Show ${logInfo}`); Custom heading style: let logHead = ui.colors.heading('Some bold white heading'); log.info(logHead); colors.defaultTheme : enum Colors default styles Kind: static enum of colors Properties Name Type Default silly object rainbow input object grey verbose object cyan prompt object grey info object ["green","underline"] data object grey help object cyan warn object yellow debug object blue error object red heading object ["white","bold"] utils : object Set of utilities. utils.md2html(mdStr) ⇒ string Converts markdown to html output Kind: static method of utils Returns: string - html string Param Type Description mdStr string Markdown content string Example Basic Example const htmlStr = utils.md2html(fs.readFileSync('some-markdown-file.md', 'utf8').toString()); utils.md2term(mdStr) ⇒ string Kind: static method of utils Todo • Make it work with colors instead of ansi-styles Param Type Description mdStr string markdown content string Example Basic Example const termMdOut = utils.md2term(fs.readFileSync('some-markdown-file.md', 'utf8').toString()); fileExists(file) ⇒ boolean Kind: global function Param Type file string If you find any bugs or have a feature request, please open an issue on github! The npm package download data comes from npm's download counts api and package details come from npms.io.
__label__pos
0.520695
# Problem: ich wollte einen Zeilenumbruch in einer Messagebox erstellen. Leider funktioniert die übliche Weise mit + Environment.NewLine gar nicht und führt zu einem Fehler in der Laufzeit. Wie macht man einen Zeilenumbruch in WPF Messagebox? Lösung: mit Zeile1 \r \n Zeile2 direkt im Text Dabei bedeutet: \r=return \n=newline man macht aber üblicherweise immer gleich \r\n Codebeispiel: private void tbNotifyIcon_TrayLeftMouseDown(object sender, RoutedEventArgs e) { //Laufzeitfehler: String sText = "Shall I close the ScreenShot App?" + Environment.NewLine + "*info: the app opens by the [Print] key and closes by the [ESC] key."; String sText = "Shall I close the ScreenShot App?\r\n*info: the app opens by the [Print] key and closes by the [ESC] key."; if (MessageBox.Show(sText,"ScreenShot App",MessageBoxButton.YesNo)==MessageBoxResult.Yes) { this.Close(); } } Mobile . soap2day
__label__pos
0.997848
multiple key/value pairs in post-request classic Classic list List threaded Threaded 2 messages Options Reply | Threaded Open this post in threaded view | multiple key/value pairs in post-request getagrip As of 0.8.0 is it possible to add multiple key/value pairs to the dispatch-request with the key having the same value? e.g. userid -> joe userid -> sally If so, could you give a short example? Reply | Threaded Open this post in threaded view | Re: multiple key/value pairs in post-request n8han Administrator Sure! scala> h(:/("example.com") << Seq("a"->"b","a"->"c") >>> System.out) You can pass in any Traversable[(String, String)] Nathan On 04/17/2011 02:06 PM, getagrip [via Databinder] wrote: As of 0.8.0 is it possible to add multiple key/value pairs to the dispatch-request with the key having the same value? e.g. userid -> joe userid -> sally If so, could you give a short example? If you reply to this email, your message will be added to the discussion below: http://databinder.3617998.n2.nabble.com/multiple-key-value-pairs-in-post-request-tp6281454p6281454.html To start a new topic under Databinder, email [hidden email] To unsubscribe from Databinder, click here.
__label__pos
0.50073
You are designing a windows forms application. You have added a list box control and a button control to it. The listbox is already populated with some items. Upon clicking on the button the items should be sorted in the descending order. Which one will you go for(most optimized)?  Posted by Niladri.Biswas on 2/3/2013 | Category: C# Interview questions | Views: 3975 | Points: 40 Select from following answers: 1. List<string> sortedItems = new List<string>();Enumerable.Range(0, lstItems.Items.Count).ToList.ForEach(i => sortedItems.Add(lstItems.Items[i].ToString()));lstItems.Items.Clear();lstItems.Items.AddRange(sortedItems.OrderBy(o => o).ToArray()); 2. lstItems.IsSorted = true; 3. lstItems.Items.Sorted = true; 4. lstItems.Sorted = true; 5. All Above Show Correct Answer Asked In: Many Interviews | Alert Moderator  Comments or Responses Login to post response
__label__pos
0.788117
str_diffn(3) compare two ASCIIZ strings SYNTAX #include <str.h> int str_diffn(const char* a,const char* b,size_t limit); DESCRIPTION str_diffn returns negative, 0, or positive, depending on whether the string a[0], a[1], ..., a[n]=='\0' is lexicographically smaller than, equal to, or greater than the string b[0], b[1], ..., b[m-1]=='\0'. If the strings are different, str_diffn does not read bytes past the first difference. The strings will be considered equal if the first limit characters match.
__label__pos
0.926602
Radius of Area Sector Calculator. Calculate the square root of the result from Step 1. The radius of a circle is found from the area of a circle using the formula: In this formula, A is the area of the circle. The radius is also the radius of the polygon's circumcircle, which is the circle that passes through every vertex.In this role, it is sometimes called the circumradius. Watch this tutorial to see how it's done! A circle of radius = 6 or diameter = 12 or circumference = 37.7 units has an area of: Use the this circle area calculator below to find the area of a circle given its radius, or other parameters. Learn how to find the area and perimeter of a parallelogram. In our example, A = 10. We get; Example: Calculate the radius of a circle whose area is 154 cm² . If you take a whole circle and slice it into four pieces, then one of those slices makes a quarter circle. In this role, it is sometimes called the circumradius. Find … When we connect a point on the circumference of a circle to the exact centre, then the line segment made is called the radius of the ring. I hope this helps, It works for arcs that are up to a semicircle, so the height you enter must be less than half the width. So, the radius of the sector is 126 cm. Finally, you can find the diameter - it is simply double the radius: D = 2 * R = 2 * 14 = 28 cm. Dividing both sides of this equation by π gives. (1)\ radius:\hspace{45px} r=\sqrt{\large\frac{S}{\pi}}\\. sin  is the sine function calculated in degrees The radius of a sphere hides inside its absolute roundness. A sector is an area formed between the two segments also called as radii, which meets at the center of the circle. Let's assume it's equal to 14 cm. Write down the circumference formula. r=c/2\pi r = c/2π. How Do You Find the Area of a Circle if You Know the Radius? The result of this step represents r2 or the circle's radius squared. We'll give you a tour of the most essential pieces of information regarding the area of a circle, its diameter, and its radius. What is the radius of the circle, with area 10 cm2, below? s  is the length of any side If you know the radius of a circle, you can use it to find the area of that circle. Begin by dividing your area by π, usually approximated as 3.14: 50.24 ft2 ÷ 3.14 = 16 ft2 You aren't quite done yet, but you're close. If you know the radius of a circle, you can use it to find the area of that circle. So we know that the area, which is 36pi, is equal to pi r squared. The radius is also the radius of the polygon's Well, they tell us what our radius is. Calculate the area and circumference of a circle. Substitute this value to the formula for circumference: C = 2 * π * R = 2 * π * 14 = 87.9646 cm. How to print and send this test Finding the Radius from the Area (The Lesson) The radius of a circle is found from the area of a circle using the formula: In this formula, A is the area of the circle. So, if we think about the entire circle, what is the area going to be? We know that the area of a circle is equal to pi times our radius squared. A/ π = r 2 and hence if you know A, divide it by π and then take the square root to find r.. Just plug that value into the formula for the area of a circle and solve. The area A of a circle is given by. Calculating Radius Using Circumference. To calculate the area, you just need to enter a positive numeric value in one of the 3 fields of the calculator. How to find the circumference of a circle. Enter any two values and press 'Calculate'. The central angle between the two radii is used to calculate length of the radius. Given the area, A A, of a circle, its radius is the square root of the area divided by pi: r = √A π r = A π Watch this tutorial to see how it's done! A parallelogram is a quadrilateral with two pairs of parallel sides. How Do You Find the Area of a Circle if You Know the Radius? Just plug that value into the formula for the area of a circle and solve. or, when you know the Circumference: A = C2 / 4π. You can find the circumference by using the formula How Do You Find the Area of a Circle if You Know the Radius? [2] X Research source The symbol π{\displaystyle \pi } ("pi") is a special number, roughly equal to 3.14. area = Pi * radius 2 Enter either the radius or the diameter. You can also use it to find the area of a circle: A = π * R² = π * 14² = 615.752 cm². Find the radius from the area of a circle (. It's also straightforward to find the area if you know the radius: a = π r 2. a = \pi r^2 a = πr2. Download map Note: With this tool, you can know the radius of a circle anywhere on Google Maps by simply clicking on a single point and extending or moving the circle to change the radius on the Map. Don't forget: √ means square root, / means ÷ and π is pi (≈ 3.14). So, Area = lr/ 2 = 618.75 cm 2 (275 ⋅ r)/2 = 618.75. r = 45 cm Take a … The angle between the two radii is called as the angle of surface and is used to find the radius of the sector. The radius of a regular polygon is the distance from the center to any A sector is a portion of a circle, which is enclosed by two radii and an arc lying between the area, where the smaller portion is called as the minor area and the larger area is called as the major area. Dimensions of a circle: O - origin, R - radius, D - diameter, C - circumference ( Wikimedia) Area, on the other hand, is all the space contained inside the circle. circumcircle, which is the circle that passes through every vertex. It will be the same for any vertex. The area of a circle is: π ( Pi) times the Radius squared: A = π r2. The missing value will be calculated. Example 2 : Find the radius, central angle and perimeter of a sector whose arc length and area are 27.5 cm and 618.75 cm 2 respectively. area S. 6digit10digit14digit18digit22digit26digit30digit34digit38digit42digit46digit50digit. A = π r 2 where r is the radius of the circle and π is approximately 3.1416. Below, we have provided an exhaustive set of a radius of a sphere formulas: Given diameter: r = d / 2, Given area: r = √[A / (4 * π)], Given volume: r = ³√[3 * V / (4 * π)], Given surface to volume ratio: r = 3 / (A/V). So let's see. The radius of a regular polygon is the distance from the center to any vertex.It will be the same for any vertex. Solving for the r variable yields √ (A/ (4π)) = r, meaning that the radius of a sphere is equal to the square root of the surface area divided by 4π. The area of a circle is the space it occupies, measured in square units. The radius is an identifying trait, and from it other measurements of the sphere can be calculated, including its circumference, surface area and volume. Watch this tutorial to see how it's done! Irregular polygons are not usually thought of as having a center or radius. Just plug that value into the formula for the area of a circle and solve. n  is the number of sides Use the formula r = √ (A/ (4π)). The formula is C=2πr{\displaystyle C=2\pi r} , where C{\displaystyle C} equals the circle’s circumference, and r{\displaystyle r} equals its radius. Solution : Given that l = 27.5 cm and Area = 618.75 cm 2. The diameter is always double the radius. cos  is the cosine function calculated in degrees, Definition: The distance from the center of a regular polygon to any, Parallelogram inscribed in a quadrilateral, Perimeter of a polygon (regular and irregular). For example, enter the width and height, then press "Calculate" to get the radius. The surface area of a sphere is derived from the equation A = 4πr 2. Watch this tutorial to see how it's done! The ratio of radii of two circles is \(2:3\). The radius is half the diameter, so the radius is 5 feet, or r = 5. (see Trigonometry Overview), where First you have to rearrange the equation to solve for r. Do this by dividing both sides by pi x 2. a  is the apothem (inradius) The image below shows what we mean by finding the radius from the area: Finding the radius from the area is easy. Calculates the radius, diameter and circumference of a circle given the area. Do you disagree with something on this page. Well, from the area, we could figure out what the radius is, and then from that radius, we can figure out what its circumference is. n  is the number of sides (2)\ diameter:\hspace{40px} R=2r\\. Find A, C, r and d of a circle. A sphere's radius is the length from the sphere's center to any point on its surface. Substitute the area into the formula. If you know the radius of a circle, you can use it to find the area of that circle. … Radius of a circle is the distance from the center to the circumference of a circle . Find the radius from the surface area. How to Calculate the Area. The area of a quarter circle when the radius is given is the region occupied by the quarter circle in a two-dimensional plane of radius "r". Click in the Button Draw a Circle, then Click on map to place the center of the circle and drag at same time to start creating the circle. or, when you know the Diameter: A = (π /4) × D2. and π is a constant estimated to be 3.142. Yes, you have the correct idea. So pause this video and see if you can figure it out. radius r. diameter R. circumference L. \(\normalsize Circle\\. And so if you look at it on both sides of this equation, if we divide-- let me … Finding the arc width and height From the formula to calculate the area of a circle; Where, r is the radius of the circle. Therefore, the radius of the circle is 7cm. If you know the radius of a circle, you can use it to find the area of that circle. - [Instructor] Find the area of the semicircle. An arc is a part of the circumference of the circle. You can use the area to find the radius and the radius to find the area of a circle. Find the radius of a circle whose area is equal to the area of a rectangle with sides measuring \(44 \:\text{cm}\) and \(14 \:\text{cm}\). This tutorial shows you how to use that formula and the given value for the area to find the radius. Radius formula is simply derived by halving the diameter of the circle. The area of a circle calculator helps you compute the surface of a circle given a diameter or radius.Our tool works both ways - no matter if you're looking for an area to radius calculator or a radius to the area one, you've found the right place . The slider below shows another real example of how to find the radius of a circle from the area. Here is an interactive widget to help you learn about finding the radius of a circle from its area. If the diameter (d) is equal to 10, you write this value as d = 10. Our radius of a sphere calculator uses all of the above equations simultaneously, so you need to enter just one chosen quantity. To find the area of the quarter circle, … Just plug that value into the formula for the area of a circle and solve. c = A (1) 1 2 r(a+b+c) = A (2) r = 2A a+b+c (3) The area of the triangle A … Calculate the area, circumference, radius and diameter of circles. The distance between the center of the circle to its circumference is the radius. In this case, you have: √16 ft2 = 4 ft So the circle's radius, r, is 4 feet. vertex. Given any 1 known variable of a circle, calculate the other 3 unknowns. If you know the circumference of a circle, you can use the equation for circumference to solve for the radius of that circle. Find the radius, circumference, and area of a circle if its diameter is equal to 10 feet in length. where Determine the radius of a circle. Given height and total area: r = (√(h² + 2 * A / π) - h) / 2, Given height and diagonal: r = √(h² + d²) / 2, Given height and surface-area-to-volume ratio: r = 2 * h / (h * SA:V - 2), Given volume and lateral area: r = 2 * V / A_l, Given base area: r = √(A_b / (2 * π)), Given lateral area and total area: r = √((A - … A circle, you can use it to find the circumference of the circle 's is. Calculate '' to get the radius of a circle if you know the radius and of... The above equations simultaneously, so the radius { 45px } r=\sqrt { \large\frac { S } \pi. A regular polygon is the space it occupies, measured in square units and. To be r squared 27.5 cm and area = 618.75 cm 2 area formed between the two radii used! In length sector is 126 cm value for the area of a circle is given.... What is the distance from the formula so, if we think about entire. 4 feet having a center or radius the how to find radius from area to any point on its surface works for arcs that up! Half the width a of a circle, you just need to enter a positive value... Is simply derived by halving the diameter the image below shows what we by. Whose area is easy ( ≈ 3.14 ) and is used to find the radius of a circle.., and area of a circle ; Where, r is the area of a circle from its.. On its surface vertex.It will be the same for any vertex √ means square of... Half the diameter ( d ) is equal to 10 feet in length the of. Interactive widget to help you learn about finding the radius of that circle you write this value as =... Arcs that are up to a semicircle, so you need to a. Half the diameter formula is simply derived by halving the diameter: \hspace { 40px } R=2r\\ d ) equal... As d = 10 by finding the radius to find the area is easy pi * radius 2 enter the! Another real example of how to find the area are up to a,! To find the area, circumference, and area = pi * radius 2 enter the. Area formed between the two radii is used to calculate length of the circle that passes through every vertex going. Whole circle and solve ( 2 ) \ radius: \hspace { 45px } {... Radius of the above equations simultaneously, so the radius is distance from the sphere 's to... Radius is 5 feet, or r = 5 ( A/ ( )... The slider below shows what we mean by finding the arc width and calculate. A regular polygon is the radius need to enter just one chosen quantity slice... Result of this equation by π gives is 126 cm constant estimated to be 3.142 - [ Instructor find!: calculate the area of a circle from its area uses all the... ÷ and π is approximately 3.1416 and circumference of a circle from its area to 14.. Area a of a sphere calculator uses all of the sector approximately.! For arcs that are up to a semicircle, so the radius both sides this... Given value for the area ) ) have to rearrange the equation for circumference to solve for Do. Parallel sides circle is: π ( pi ) times the radius of the circle )! = pi * radius 2 enter either the radius of the sector sides of this by... D of a circle, you can use it to find the radius from the formula for area! [ Instructor ] find the circumference of a circle as radii, which is,... Just need to enter a positive numeric value in one of those slices makes a quarter circle = cm! Step 1 calculate the radius of a circle is equal to 14 cm so you need to enter positive... 126 cm height calculate the area, you can find the area of sphere! Circumference L. \ ( \normalsize Circle\\ } } \\ sector is an area formed between the center of the by! Circumference is the distance from the equation for circumference to solve for area... On its surface find the radius of a circle if you know the radius 5... Must be less than half the width and height calculate the square root, / means ÷ and is. That value into the formula to calculate the square root of the above equations simultaneously, so the radius a... Pi x 2 formed between the center of the circle 's radius diameter... Diameter r. circumference L. \ ( \normalsize Circle\\ a center or radius calculates the radius of the,! \ diameter: a = ( π /4 ) × D2 area, circumference, radius diameter... Pause this video and see if you know the radius first you have to rearrange the equation a π... Step represents r2 or the circle height you enter must be less half! 2:3\ ) a circle from its area x 2 pi times our radius is the radius from the for... Is called as radii, which is 36pi, is 4 feet circle its. R. Do this by dividing both sides by pi x 2 quadrilateral with pairs... The square root, / means ÷ and π is a constant estimated to be 3.142 this! Up to a semicircle, so the height you enter must be less than half the width π )! R. circumference L. \ ( 2:3\ ) 3 fields of the circle 's squared. You learn about finding the arc width and height, then press `` calculate '' get. Enter just one chosen quantity, when you know the diameter ( d ) equal!, circumference, and area = pi * radius 2 enter either the radius a... We mean by finding the arc width and height calculate the area of circle! Calculate length of the result from step 1 two segments also called as,! / means ÷ and π is a part of the radius, diameter and circumference of a circle you. Circle to its circumference how to find radius from area the radius of a circle half the diameter ( d ) is equal 14... Circumference to solve for the radius that circle l = 27.5 cm and area of regular... At the center of the sector be 3.142 pi ) times the radius \normalsize! R. Do this by dividing both sides of this step represents r2 or the diameter: \hspace 40px! Less than half the diameter: a = C2 / 4π 126 cm the. \ ( 2:3\ ) ] find the radius of a circle ; example: calculate area... Two circles is \ ( \normalsize Circle\\ enter either the radius is 5 feet or... Radius or the circle two circles is \ ( 2:3\ ) the ratio of radii two. Absolute roundness enter a positive numeric value in one of those slices makes a quarter.! Of the circle that passes through every vertex two radii is called as the angle of surface and is to! Circumcircle, which meets at the center to the circumference of a circle if its diameter is to. Do n't forget: √ means square root of the circumference of a circle ;,... Value as d = 10 two pairs of parallel sides x 2 whose is! In length 154 cm² is given by finding how to find radius from area radius, diameter circumference! Be the same for any vertex that l = 27.5 cm and area = 618.75 cm 2 that are to! Other 3 unknowns root, / means ÷ and π is approximately 3.1416 occupies, in! Circumference by using the formula for the area: finding the radius of the circle its! ≈ 3.14 ) that passes through every vertex value for the area you... The slider below shows another real example of how to use that and! This role, it is sometimes called the circumradius watch this tutorial to see how it 's equal to cm! Circumference is the circle is given by they tell us what our radius of the sector is 126.... Know the radius of that circle need to enter a positive numeric value in one those... To get the radius circumference to solve for the area of a circle ( this value as =! A quadrilateral with two pairs of parallel sides what we mean by finding the radius or the diameter of.... Called as radii, which meets at the center to any vertex }.! The polygon 's circumcircle, which meets at the center to any vertex ) ×.... Given by radius r. diameter r. circumference L. \ ( \normalsize Circle\\ any 1 known variable of circle. Numeric value in one of those slices makes a quarter circle example of how to find the of... For r. Do this by dividing both sides by pi x 2 is equal to 14 cm example! The circle 's radius is also the radius of the circle is the area to the... With area 10 cm2, below it 's done a, C r. From the area of a sphere is derived from the equation to solve for the area is half the (! Value into the formula so, the radius of the sector sometimes called the circumradius one... Can use the area, or r = 5 finding the radius a... Diameter r. circumference L. \ ( 2:3\ ) we get ; example: calculate the square root of circle! In one of those slices makes a quarter circle arcs that are up to a,!, is equal to pi r squared forget: √ means square root of circle. Shows you how to find the area of a sphere 's radius, diameter and of... Is 36pi, is 4 feet the circumference of a sphere hides inside absolute!
__label__pos
0.997965
Point Cloud Library (PCL)  1.11.1-dev joint_icp.hpp 1 /* 2  * Software License Agreement (BSD License) 3  * 4  * Point Cloud Library (PCL) - www.pointclouds.org 5  * Copyright (c) 2009-2012, Willow Garage, Inc. 6  * Copyright (c) 2012-, Open Perception, Inc. 7  * 8  * All rights reserved. 9  * 10  * Redistribution and use in source and binary forms, with or without 11  * modification, are permitted provided that the following conditions 12  * are met: 13  * 14  * * Redistributions of source code must retain the above copyright 15  * notice, this list of conditions and the following disclaimer. 16  * * Redistributions in binary form must reproduce the above 17  * copyright notice, this list of conditions and the following 18  * disclaimer in the documentation and/or other materials provided 19  * with the distribution. 20  * * Neither the name of the copyright holder(s) nor the names of its 21  * contributors may be used to endorse or promote products derived 22  * from this software without specific prior written permission. 23  * 24  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 25  * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 26  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 27  * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 28  * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 29  * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 30  * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 31  * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 32  * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 33  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN 34  * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 35  * POSSIBILITY OF SUCH DAMAGE. 36  * 37  */ 38  39 #ifndef PCL_REGISTRATION_IMPL_JOINT_ICP_HPP_ 40 #define PCL_REGISTRATION_IMPL_JOINT_ICP_HPP_ 41  42 #include <pcl/console/print.h> 43 #include <pcl/registration/boost.h> 44 #include <pcl/correspondence.h> 45  46 namespace pcl { 47  48 template <typename PointSource, typename PointTarget, typename Scalar> 49 void 51  PointCloudSource& output, const Matrix4& guess) 52 { 53  // Point clouds containing the correspondences of each point in <input, indices> 54  if (sources_.size() != targets_.size() || sources_.empty() || targets_.empty()) { 55  PCL_ERROR("[pcl::%s::computeTransformation] Must set InputSources and InputTargets " 56  "to the same, nonzero size!\n", 57  getClassName().c_str()); 58  return; 59  } 60  bool manual_correspondence_estimations_set = true; 61  if (correspondence_estimations_.empty()) { 62  manual_correspondence_estimations_set = false; 63  correspondence_estimations_.resize(sources_.size()); 64  for (std::size_t i = 0; i < sources_.size(); i++) { 65  correspondence_estimations_[i] = correspondence_estimation_->clone(); 67  KdTreePtr tgt_tree(new KdTree); 68  correspondence_estimations_[i]->setSearchMethodTarget(tgt_tree); 69  correspondence_estimations_[i]->setSearchMethodSource(src_tree); 70  } 71  } 72  if (correspondence_estimations_.size() != sources_.size()) { 73  PCL_ERROR("[pcl::%s::computeTransform] Must set CorrespondenceEstimations to be " 74  "the same size as the joint\n", 75  getClassName().c_str()); 76  return; 77  } 78  std::vector<PointCloudSourcePtr> inputs_transformed(sources_.size()); 79  for (std::size_t i = 0; i < sources_.size(); i++) { 80  inputs_transformed[i].reset(new PointCloudSource); 81  } 82  83  nr_iterations_ = 0; 84  converged_ = false; 85  86  // Initialise final transformation to the guessed one 87  final_transformation_ = guess; 88  89  // Make a combined transformed input and output 90  std::vector<std::size_t> input_offsets(sources_.size()); 91  std::vector<std::size_t> target_offsets(targets_.size()); 92  PointCloudSourcePtr sources_combined(new PointCloudSource); 93  PointCloudSourcePtr inputs_transformed_combined(new PointCloudSource); 94  PointCloudTargetPtr targets_combined(new PointCloudTarget); 95  std::size_t input_offset = 0; 96  std::size_t target_offset = 0; 97  for (std::size_t i = 0; i < sources_.size(); i++) { 98  // If the guessed transformation is non identity 99  if (guess != Matrix4::Identity()) { 100  // Apply guessed transformation prior to search for neighbours 101  this->transformCloud(*sources_[i], *inputs_transformed[i], guess); 102  } 103  else { 104  *inputs_transformed[i] = *sources_[i]; 105  } 106  *sources_combined += *sources_[i]; 107  *inputs_transformed_combined += *inputs_transformed[i]; 108  *targets_combined += *targets_[i]; 109  input_offsets[i] = input_offset; 110  target_offsets[i] = target_offset; 111  input_offset += inputs_transformed[i]->size(); 112  target_offset += targets_[i]->size(); 113  } 114  115  transformation_ = Matrix4::Identity(); 116  // Make blobs if necessary 117  determineRequiredBlobData(); 118  // Pass in the default target for the Correspondence Estimation/Rejection code 119  for (std::size_t i = 0; i < sources_.size(); i++) { 120  correspondence_estimations_[i]->setInputTarget(targets_[i]); 121  if (correspondence_estimations_[i]->requiresTargetNormals()) { 122  PCLPointCloud2::Ptr target_blob(new PCLPointCloud2); 123  pcl::toPCLPointCloud2(*targets_[i], *target_blob); 124  correspondence_estimations_[i]->setTargetNormals(target_blob); 125  } 126  } 127  128  PCLPointCloud2::Ptr targets_combined_blob(new PCLPointCloud2); 129  if (!correspondence_rejectors_.empty() && need_target_blob_) 130  pcl::toPCLPointCloud2(*targets_combined, *targets_combined_blob); 131  132  for (std::size_t i = 0; i < correspondence_rejectors_.size(); ++i) { 133  registration::CorrespondenceRejector::Ptr& rej = correspondence_rejectors_[i]; 134  if (rej->requiresTargetPoints()) 135  rej->setTargetPoints(targets_combined_blob); 136  if (rej->requiresTargetNormals() && target_has_normals_) 137  rej->setTargetNormals(targets_combined_blob); 138  } 139  140  convergence_criteria_->setMaximumIterations(max_iterations_); 141  convergence_criteria_->setRelativeMSE(euclidean_fitness_epsilon_); 142  convergence_criteria_->setTranslationThreshold(transformation_epsilon_); 143  convergence_criteria_->setRotationThreshold(1.0 - transformation_epsilon_); 144  145  // Repeat until convergence 146  std::vector<CorrespondencesPtr> partial_correspondences_(sources_.size()); 147  for (std::size_t i = 0; i < sources_.size(); i++) { 148  partial_correspondences_[i].reset(new pcl::Correspondences); 149  } 150  151  do { 152  // Save the previously estimated transformation 153  previous_transformation_ = transformation_; 154  155  // Set the source each iteration, to ensure the dirty flag is updated 156  correspondences_->clear(); 157  for (std::size_t i = 0; i < correspondence_estimations_.size(); i++) { 158  correspondence_estimations_[i]->setInputSource(inputs_transformed[i]); 159  // Get blob data if needed 160  if (correspondence_estimations_[i]->requiresSourceNormals()) { 161  PCLPointCloud2::Ptr input_transformed_blob(new PCLPointCloud2); 162  toPCLPointCloud2(*inputs_transformed[i], *input_transformed_blob); 163  correspondence_estimations_[i]->setSourceNormals(input_transformed_blob); 164  } 165  166  // Estimate correspondences on each cloud pair separately 167  if (use_reciprocal_correspondence_) { 168  correspondence_estimations_[i]->determineReciprocalCorrespondences( 169  *partial_correspondences_[i], corr_dist_threshold_); 170  } 171  else { 172  correspondence_estimations_[i]->determineCorrespondences( 173  *partial_correspondences_[i], corr_dist_threshold_); 174  } 175  PCL_DEBUG("[pcl::%s::computeTransformation] Found %d partial correspondences for " 176  "cloud [%d]\n", 177  getClassName().c_str(), 178  partial_correspondences_[i]->size(), 179  i); 180  for (std::size_t j = 0; j < partial_correspondences_[i]->size(); j++) { 181  pcl::Correspondence corr = partial_correspondences_[i]->at(j); 182  // Update the offsets to be for the combined clouds 183  corr.index_query += input_offsets[i]; 184  corr.index_match += target_offsets[i]; 185  correspondences_->push_back(corr); 186  } 187  } 188  PCL_DEBUG("[pcl::%s::computeTransformation] Total correspondences: %d\n", 189  getClassName().c_str(), 190  correspondences_->size()); 191  192  PCLPointCloud2::Ptr inputs_transformed_combined_blob; 193  if (need_source_blob_) { 194  inputs_transformed_combined_blob.reset(new PCLPointCloud2); 195  toPCLPointCloud2(*inputs_transformed_combined, *inputs_transformed_combined_blob); 196  } 197  CorrespondencesPtr temp_correspondences(new Correspondences(*correspondences_)); 198  for (std::size_t i = 0; i < correspondence_rejectors_.size(); ++i) { 199  PCL_DEBUG("Applying a correspondence rejector method: %s.\n", 200  correspondence_rejectors_[i]->getClassName().c_str()); 201  registration::CorrespondenceRejector::Ptr& rej = correspondence_rejectors_[i]; 202  PCL_DEBUG("Applying a correspondence rejector method: %s.\n", 203  rej->getClassName().c_str()); 204  if (rej->requiresSourcePoints()) 205  rej->setSourcePoints(inputs_transformed_combined_blob); 206  if (rej->requiresSourceNormals() && source_has_normals_) 207  rej->setSourceNormals(inputs_transformed_combined_blob); 208  rej->setInputCorrespondences(temp_correspondences); 209  rej->getCorrespondences(*correspondences_); 210  // Modify input for the next iteration 211  if (i < correspondence_rejectors_.size() - 1) 212  *temp_correspondences = *correspondences_; 213  } 214  215  int cnt = correspondences_->size(); 216  // Check whether we have enough correspondences 217  if (cnt < min_number_correspondences_) { 218  PCL_ERROR("[pcl::%s::computeTransformation] Not enough correspondences found. " 219  "Relax your threshold parameters.\n", 220  getClassName().c_str()); 221  convergence_criteria_->setConvergenceState( 223  Scalar>::CONVERGENCE_CRITERIA_NO_CORRESPONDENCES); 224  converged_ = false; 225  break; 226  } 227  228  // Estimate the transform jointly, on a combined correspondence set 229  transformation_estimation_->estimateRigidTransformation( 230  *inputs_transformed_combined, 231  *targets_combined, 232  *correspondences_, 233  transformation_); 234  235  // Transform the combined data 236  this->transformCloud( 237  *inputs_transformed_combined, *inputs_transformed_combined, transformation_); 238  // And all its components 239  for (std::size_t i = 0; i < sources_.size(); i++) { 240  this->transformCloud( 241  *inputs_transformed[i], *inputs_transformed[i], transformation_); 242  } 243  244  // Obtain the final transformation 245  final_transformation_ = transformation_ * final_transformation_; 246  247  ++nr_iterations_; 248  249  // Update the vizualization of icp convergence 250  // if (update_visualizer_ != 0) 251  // update_visualizer_(output, source_indices_good, *target_, target_indices_good ); 252  253  converged_ = static_cast<bool>((*convergence_criteria_)); 254  } while (!converged_); 255  256  PCL_DEBUG("Transformation " 257  "is:\n\t%5f\t%5f\t%5f\t%5f\n\t%5f\t%5f\t%5f\t%5f\n\t%5f\t%5f\t%5f\t%5f\n\t%" 258  "5f\t%5f\t%5f\t%5f\n", 259  final_transformation_(0, 0), 260  final_transformation_(0, 1), 261  final_transformation_(0, 2), 262  final_transformation_(0, 3), 263  final_transformation_(1, 0), 264  final_transformation_(1, 1), 265  final_transformation_(1, 2), 266  final_transformation_(1, 3), 267  final_transformation_(2, 0), 268  final_transformation_(2, 1), 269  final_transformation_(2, 2), 270  final_transformation_(2, 3), 271  final_transformation_(3, 0), 272  final_transformation_(3, 1), 273  final_transformation_(3, 2), 274  final_transformation_(3, 3)); 275  276  // For fitness checks, etc, we'll use an aggregated cloud for now (should be 277  // evaluating independently for correctness, but this requires propagating a few 278  // virtual methods from Registration) 280  sources_combined); 282  targets_combined); 283  284  // If we automatically set the correspondence estimators, we should clear them now 285  if (!manual_correspondence_estimations_set) { 286  correspondence_estimations_.clear(); 287  } 288  289  // By definition, this method will return an empty cloud (for compliance with the ICP 290  // API). We can figure out a better solution, if necessary. 291  output = PointCloudSource(); 292 } 293  294 template <typename PointSource, typename PointTarget, typename Scalar> 295 void 298 { 299  need_source_blob_ = false; 300  need_target_blob_ = false; 301  // Check estimators 302  for (std::size_t i = 0; i < correspondence_estimations_.size(); i++) { 303  CorrespondenceEstimationPtr& ce = correspondence_estimations_[i]; 304  305  need_source_blob_ |= ce->requiresSourceNormals(); 306  need_target_blob_ |= ce->requiresTargetNormals(); 307  // Add warnings if necessary 308  if (ce->requiresSourceNormals() && !source_has_normals_) { 309  PCL_WARN("[pcl::%s::determineRequiredBlobData] Estimator expects source normals, " 310  "but we can't provide them.\n", 311  getClassName().c_str()); 312  } 313  if (ce->requiresTargetNormals() && !target_has_normals_) { 314  PCL_WARN("[pcl::%s::determineRequiredBlobData] Estimator expects target normals, " 315  "but we can't provide them.\n", 316  getClassName().c_str()); 317  } 318  } 319  // Check rejectors 320  for (std::size_t i = 0; i < correspondence_rejectors_.size(); i++) { 321  registration::CorrespondenceRejector::Ptr& rej = correspondence_rejectors_[i]; 322  need_source_blob_ |= rej->requiresSourcePoints(); 323  need_source_blob_ |= rej->requiresSourceNormals(); 324  need_target_blob_ |= rej->requiresTargetPoints(); 325  need_target_blob_ |= rej->requiresTargetNormals(); 326  if (rej->requiresSourceNormals() && !source_has_normals_) { 327  PCL_WARN("[pcl::%s::determineRequiredBlobData] Rejector %s expects source " 328  "normals, but we can't provide them.\n", 329  getClassName().c_str(), 330  rej->getClassName().c_str()); 331  } 332  if (rej->requiresTargetNormals() && !target_has_normals_) { 333  PCL_WARN("[pcl::%s::determineRequiredBlobData] Rejector %s expects target " 334  "normals, but we can't provide them.\n", 335  getClassName().c_str(), 336  rej->getClassName().c_str()); 337  } 338  } 339 } 340  341 } // namespace pcl 342  343 #endif /* PCL_REGISTRATION_IMPL_JOINT_ICP_HPP_ */ pcl::registration::CorrespondenceRejector::Ptr shared_ptr< CorrespondenceRejector > Ptr Definition: correspondence_rejection.h:56 pcl Definition: convolution.h:46 pcl::registration::DefaultConvergenceCriteria DefaultConvergenceCriteria represents an instantiation of ConvergenceCriteria, and implements the fol... Definition: default_convergence_criteria.h:65 pcl::PCLPointCloud2::Ptr shared_ptr< ::pcl::PCLPointCloud2 > Ptr Definition: PCLPointCloud2.h:35 pcl::IterativeClosestPoint::setInputSource void setInputSource(const PointCloudSourceConstPtr &cloud) override Provide a pointer to the input source (e.g., the point cloud that we want to align to the target) Definition: icp.h:206 pcl::JointIterativeClosestPoint::KdTreeReciprocalPtr typename KdTree::Ptr KdTreeReciprocalPtr Definition: joint_icp.h:72 pcl::IterativeClosestPoint::setInputTarget void setInputTarget(const PointCloudTargetConstPtr &cloud) override Provide a pointer to the input target (e.g., the point cloud that we want to align to the target) Definition: icp.h:239 pcl::JointIterativeClosestPoint::PointCloudTarget typename IterativeClosestPoint< PointSource, PointTarget, Scalar >::PointCloudTarget PointCloudTarget Definition: joint_icp.h:64 pcl::JointIterativeClosestPoint::computeTransformation void computeTransformation(PointCloudSource &output, const Matrix4 &guess) override Rigid transformation computation method with initial guess. Definition: joint_icp.hpp:50 pcl::JointIterativeClosestPoint::determineRequiredBlobData void determineRequiredBlobData() override Looks at the Estimators and Rejectors and determines whether their blob-setter methods need to be cal... Definition: joint_icp.hpp:297 pcl::search::KdTree search::KdTree is a wrapper class which inherits the pcl::KdTree class for performing search function... Definition: kdtree.h:61 pcl::JointIterativeClosestPoint::PointCloudSource typename IterativeClosestPoint< PointSource, PointTarget, Scalar >::PointCloudSource PointCloudSource Definition: joint_icp.h:58 pcl::PCLPointCloud2 Definition: PCLPointCloud2.h:16 pcl::JointIterativeClosestPoint::PointCloudSourcePtr typename PointCloudSource::Ptr PointCloudSourcePtr Definition: joint_icp.h:59 pcl::Correspondence::index_match index_t index_match Index of the matching (target) point. Definition: correspondence.h:65 pcl::JointIterativeClosestPoint::PointCloudTargetPtr typename PointCloudTarget::Ptr PointCloudTargetPtr Definition: joint_icp.h:65 pcl::toPCLPointCloud2 void toPCLPointCloud2(const pcl::PointCloud< PointT > &cloud, pcl::PCLPointCloud2 &msg) Convert a pcl::PointCloud<T> object to a PCLPointCloud2 binary data blob. Definition: conversions.h:240 pcl::Correspondences std::vector< pcl::Correspondence, Eigen::aligned_allocator< pcl::Correspondence > > Correspondences Definition: correspondence.h:89 pcl::Correspondence::index_query index_t index_query Index of the query (source) point. Definition: correspondence.h:63 pcl::CorrespondencesPtr shared_ptr< Correspondences > CorrespondencesPtr Definition: correspondence.h:90 pcl::Correspondence Correspondence represents a match between two entities (e.g., points, descriptors,... Definition: correspondence.h:60 pcl::JointIterativeClosestPoint::KdTreePtr typename KdTree::Ptr KdTreePtr Definition: joint_icp.h:69 pcl::JointIterativeClosestPoint::CorrespondenceEstimationPtr typename CorrespondenceEstimation::Ptr CorrespondenceEstimationPtr Definition: joint_icp.h:83 pcl::JointIterativeClosestPoint::Matrix4 typename IterativeClosestPoint< PointSource, PointTarget, Scalar >::Matrix4 Matrix4 Definition: joint_icp.h:126
__label__pos
0.975758
top of page the place of social media in the web3 world Updated: May 24, 2023 solution ivy live solution ivy live In recent years, the term "Web3" has been gaining momentum in the tech industry. Web3, also known as the decentralized web, refers to a new way of organizing and accessing information on the internet. Unlike Web2, which is dominated by a few large tech companies, Web3 is built on decentralized platforms that allow users to take control of their data and interact with each other in new and innovative ways. Social media has played a significant role in the growth of Web2, with platforms like Facebook, Twitter, and Instagram dominating the online landscape. However, with the rise of Web3, social media is undergoing a transformation that could fundamentally change the way we interact with each other online. One of the key features of Web3 is decentralization. Rather than relying on a centralized platform to manage data and interactions, Web3 platforms use blockchain technology to create decentralized networks where users have more control over their data. This has important implications for social media, where users often share sensitive information about themselves and their lives. With Web3, social media platforms can be built on decentralized networks, giving users more control over their data and more power to decide who can access it. This could help to address some of the privacy and security concerns that have plagued social media in recent years. By taking control of their data, users can decide who they want to share it with and ensure that it is protected from unauthorized access. Another important feature of Web3 is the use of smart contracts. Smart contracts are self-executing contracts that can be programmed to automatically execute when certain conditions are met. This technology has the potential to transform the way social media operates, by allowing users to create and enforce their own rules for engagement. For example, a social media platform built on a Web3 network could use smart contracts to enforce rules around the use of hate speech, fake news, or other harmful content. Users could create their own contracts that dictate the terms of engagement, such as how much personal data they are willing to share, or what kind of content they are willing to see. Web3 also has the potential to transform the way social media platforms are monetized. Currently, social media platforms generate revenue through advertising, often at the expense of user privacy. With Web3, social media platforms can be built on decentralized networks that allow users to control their own data and monetize it on their own terms. For example, a social media platform could use a token-based economy to incentivize users to create valuable content. Users could be rewarded with tokens for creating high-quality content or engaging with other users in meaningful ways. These tokens could then be traded on decentralized exchanges, allowing users to monetize their contributions to the platform. In conclusion, Web3 has the potential to transform the way we interact with each other online, including the way we use social media. By creating decentralized networks that give users more control over their data, Web3 can address some of the privacy and security concerns that have plagued social media in recent years. With the use of smart contracts and token-based economies, Web3 could also transform the way social media platforms are operated and monetized. While it remains to be seen how Web3 will evolve, it is clear that it has the potential to create a more decentralized, user-driven internet that empowers individuals and communities. Comments bottom of page
__label__pos
0.890309
[Bio] / Sprout / ERDBTypeShortString.pm Repository: ViewVC logotype View of /Sprout/ERDBTypeShortString.pm Parent Directory Parent Directory | Revision Log Revision Log Revision 1.1 - (download) (as text) (annotate) Sun Feb 22 03:28:31 2015 UTC (4 years, 1 month ago) by parrello Branch: MAIN CVS Tags: HEAD Removed Shrub stuff. Updated ERDB for compatibility with Shrub DBD. #!/usr/bin/perl -w # # Copyright (c) 2003-2006 University of Chicago and Fellowship # for Interpretations of Genomes. All Rights Reserved. # # This file is part of the SEED Toolkit. # # The SEED Toolkit is free software. You can redistribute # it and/or modify it under the terms of the SEED Toolkit # Public License. # # You should have received a copy of the SEED Toolkit Public License # along with this program; if not write to the University of Chicago # at [email protected] or the Fellowship for Interpretation of # Genomes at [email protected] or download a copy from # http://www.theseed.org/LICENSE.TXT. # package ERDBTypeShortString; use strict; use Tracer; use ERDB; use base qw(ERDBType); =head1 ERDB Short String Type Definition =head2 Introduction This object represents the data type for short strings of 32 characters or less with no odd control characters needing translation. Such strings are very limited, but more of them can be crowded into an index and they do not require encoding or decoding. =head3 new my $et = ERDBTypeShortString->new(); Construct a new ERDBTypeShortString descriptor. =cut sub new { # Get the parameters. my ($class) = @_; # Create the ERDBTypeShortString object. my $retVal = { }; # Bless and return it. bless $retVal, $class; return $retVal; } =head2 Virtual Methods =head3 averageLength my $value = $et->averageLength(); Return the average length of a data item of this field type when it is stored in the database. This value is used to compute the expected size of a database table. =cut sub averageLength { return 24; } =head3 prettySortValue my $value = $et->prettySortValue(); Number indicating where fields of this type should go in relation to other fields. The value should be somewhere between C<1> and C<5>. A value outside that range will make terrible things happen. =cut sub prettySortValue() { return 1; } =head3 validate my $okFlag = $et->validate($value); Return an error message if the specified value is invalid for this field type. The parameters are as follows. =over 4 =item value Value of this type, for validation. =item RETURN Returns an empty string if the specified field is valid, and an error message otherwise. =back =cut sub validate { # Get the parameters. my ($self, $value) = @_; # Assume it's valid until we prove otherwise. my $retVal = ""; if (length($value) > 32) { $retVal = "Invalid short string field."; } elsif ($value =~ /[\%\\\x00-\x1F\x80-\xFF]/) { $retVal = "Invalid character in short-string field."; } # Return the determination. return $retVal; } =head3 encode my $string = $et->encode($value, $mode); Encode a value of this field type for storage in the database (or in a database load file.) The parameters are as follows. =over 4 =item value Value of this type, for encoding. =item mode TRUE if the value is being encoding for placement in a load file, FALSE if it is being encoded for use as an SQL statement parameter. In most cases, the encoding is the same for both modes. =back =cut sub encode { # Get the parameters. my ($self, $value, $mode) = @_; # Return the input value. return $value; } =head3 decode my $value = $et->decode($string); Decode a string from the database into a value of this field type. The parameters are as follows. =over 4 =item string String from the database to be decoded. =item RETURN Returns a value of the desired type. =back =cut sub decode { # Get the parameters. my ($self, $string) = @_; # Return the input value. return $string; } =head3 sqlType my $typeString = $et->sqlType($dbh); Return the SQL data type for this field type. =over 4 =item dbh Open L<DBKernel> handle for the database in question. This is used when the datatype may be different depending on the DBMS used. =item RETURN Returns the datatype string to be used when creating a field of this type in an SQL table. =back =cut sub sqlType { return "VARCHAR(32)"; } =head3 indexMod my $length = $et->indexMod(); Return the index modifier for this field type. The index modifier is the number of characters to be indexed. If it is undefined, the field cannot be indexed. If it is an empty string, the entire field is indexed. The default is an empty string. =cut sub indexMod { return ''; } =head3 sortType my $letter = $et->sortType(); Return the sorting type for this field type. The sorting type is C<n> for integers, C<g> for floating-point numbers, and the empty string for character fields. The default is the empty string. =cut sub sortType { return ""; } =head3 documentation my $docText = $et->documentation(); Return the documentation text for this field type. This should be in TWiki markup format, though HTML will also work. =cut sub documentation() { return 'A short string of 32 or fewer chaaracters.'; } =head3 name my $name = $et->name(); Return the name of this type, as it will appear in the XML database definition. =cut sub name() { return "short-string"; } =head3 default my $defaultValue = $et->default(); Default value to be used for fields of this type if no default value is specified in the database definition or in an L<ERDBLoadGroup/Put> call during a loader operation. The default is undefined, which means an error will be thrown during the load. =cut sub default { return ''; } =head3 align my $alignment = $et->align(); Return the display alignment for fields of this type: either C<left>, C<right>, or C<center>. The default is C<left>. =cut sub align { return 'left'; } 1; MCS Webmaster ViewVC Help Powered by ViewVC 1.0.3  
__label__pos
0.887701
prev up next 9) Object behaviour change Here is a program which, like that in part 8 sets up a bank account into which money may be deposited and withdrawn. In this case, however, the customer had the option of upgrading the bank account to one with an overdraft facility. The command u causes an automatic upgrading. In this example, each request for a withdrawal that would lead to a negative balance is considered by the bank through a separate bank object. For simplicity, the bank object just prompts for a yes/no answer, and since we are using just simple text interface, the prompt appears alongside the interaction with the customer. This example also cleans up a loophole in the part 8 example which enabled an overdraft to be given by depositing a negative amount. #main == Screen<-aio\stdio(), Acc<-ordinaryAccount(Bank), Bank<-bank(Screen), customer(Screen,Acc); #ordinaryAccount(Bank,balance<-0)~ { deposit(n) | balance+=n; withdraw(n)- [ n<=balance |>=, balance-=n; : |># ] balance-|>balance; upgrade || <=overdraftAccount(Bank,balance); cancel | } #overdraftAccount(Bank,balance)~ { deposit(n) | balance+=n; withdraw(n)- [ n<=balance |>=, balance-=n; : | ?Bank.ask(n-balance) :>=,balance-=n; :># ] balance-|>balance; upgrade |; cancel || <=ordinaryAccount(Bank,balance<-balance); } #bank(Screen)~ { ask(overdraft)-| Screen .fwrite("Bank: grant overdraft of $">+overdraft>"? ") .readString->answer, <(answer) { answer="y" ||>=; answer="n" ||>#; : | Screen .fwrite("Invalid reply, re-enter: ") .readString->answer } } #customer(Screen,Acc)= { | Screen.fwrite("Customer: ").readString->comm <(comm) { comm="d" | Screen.readInt->amount <(amount) { amount=error | Screen.fwrite("Please enter a number only: ") .skip .readInt->amount; amount<0 | Screen.fwrite("Please enter a positive amount: ") .readInt->amount; : ||| Screen.fwrite("Deposited $">+amount).nl, Acc.deposit(amount); } comm="w" | Screen.readInt->amount, <(amount) { amount=error | Screen.fwrite("Please enter a number only: ") .skip .readInt->amount; : ||| <(amount) ?Acc.withdraw(amount) : Screen.fwrite("Withdrew $">+amount).nl; : Screen.fwrite("Insufficient funds\n"); } comm="b" || Screen.fwrite("Balance is $">+Acc.balance).nl; comm="u" || Screen.fwrite("Account upgraded\n"),Acc.upgrade; comm="c" || Screen.fwrite("Overdraft facility cancelled\n"),Acc.cancel; comm="q" |||; : || Screen.fwrite("Invalid command\n"); } } The main new concept in this example is the <= operator, used in ordinaryAccount and overdraftAccount. The following rule in ordinaryAccount: upgrade || <=overdraftAccount(Bank,balance); means that if an ordinaryAccount object receives an upgrade message, future messages to the object are directed to the new overdraftAccount object created by the call overdraftAccount(Bank,balance). The double bar in the rule means that the ordinaryAccount object does not remain in existence after this redirection. In effect, the ordinaryAccount object has become an overdraftAccount object, hence <= is referred to as the become operator. Note that a similar rule in overdraftAccount causes an overdraftAccount object to become a ordinaryAccount on receipt of a cancel message. The change of object type is not visible to other processes sharing it, except through observed change of behaviour. In fact the customer process here does not keep a record of whether the account it is sending messages to is an ordinaryAccount or an overdraftAccount. Because of this, the code for both is written to handle the full range of messages for either, so an overdraftAccount object can take an upgrade message, and an ordinaryAccount object can take a cancel message, in both cases doing nothing in response. An alternative way of getting the same effect would be to embed the code for overdraftAccount inside the code for ordinaryAccount, as below: #ordinaryAccount(Bank,balance<-0)~ { deposit(n) | balance+=n; withdraw(n)- [ n<=balance |>=, balance-=n; : |># ] balance-|>balance; upgrade | { deposit(n) | balance+=n; withdraw(n)- [ n<=balance |>=, balance-=n; : | ?Bank.ask(n-balance) :>=,balance-=n; :># ] balance-|>balance; upgrade |; cancel || } cancel | } In this case, although it is not possible to create an overdraftAccount directly, the same effect can be obtained by creating a ordinaryAccount and then sending it an upgrade message. The code with the embedded code for the upgraded account contains duplication of the rules for handling deposit and balance messages. These are unchanged in the upgraded account. Aldwych allows a more concise way of expressing this: #ordinaryAccount(Bank,balance<-0)~ { withdraw(n)- [ n<=balance |>=, balance-=n; : |># ] upgrade | { withdraw(n)- [ n<=balance |>=, balance-=n; : | ?Bank.ask(n-balance) :>=,balance-=n; :># ] upgrade |; cancel || } cancel |; = deposit(n) | balance+=n; balance-|>balance } If an = symbol follows the semiccolon at the end of a rule in a rule set, the rules following it apply not only to that rule set but also to any embedded rule sets in the rules before the = symbol. prev up next
__label__pos
0.995453
[Lua Help] Shorten Variables and error Hello, Facepunch! I’m coding a hud for darkrp and some players have really long names so how do i shorten their names? Then i use some of my variables i keep getting this error: [ERROR] lua/includes/modules/draw.lua:73: bad argument #1 to 'GetTextSize' (string expected, got nil) 1. GetTextSize - [C]:-1 2. SimpleText - lua/includes/modules/draw.lua:73 3. DrawPlayerInfo - gamemodes/darkrp/gamemode/client/hud.lua:304 4. DrawEntityDisplay - gamemodes/darkrp/gamemode/client/hud.lua:370 5. unknown - gamemodes/darkrp/gamemode/client/hud.lua:403 Code: if ply:GetNWString("usergroup") == "Owner" then local RankColor = "Color(156, 35, 35)" local RankText = "Owner" elseif ply:GetNWString("usergroup") == "Moderator" then local RankColor = "Color(155, 105, 35)" local RankText = "Moderator" elseif ply:GetNWString("usergroup") == "Vip" then local RankColor = "Color(45, 15, 155)" local RankText = "V.I.P" elseif ply:GetNWString("usergroup") == "Admin" then local RankColor = "Color(45, 160, 30)" local RankText = "Admin" elseif ply:GetNWString("usergroup") == "superadmin" then local RankColor = "Color(120, 30, 160)" local RankText = "Superadmin" else end draw.SimpleText(RankText, "DarkRPHUD2", pos.x, pos.y + 20, Color(255,255,255), TEXT_ALIGN_CENTER, TEXT_ALIGN_CENTER) Can someone please help? Thanks! I don’t understand why you would draw a players rank on a HUD… Also, I would suggest learning how variables actually work within scopes because currently the code you posted doesn’t make any sense at all. Here is an example of how a local variable works. [lua] local var = 1 local rank = “admin” if rank == “admin” then var = 2 end print(var) –returns 2 –returns 1 if rank != “admin” [/lua] The way you coded it makes it so the local variable can only be used within something like this: [lua] local var = 1 local rank = “admin” if rank == “admin” then local var = 2 end print(var) –returns 1 because print(var) is not in the scope where it equals 2 [/lua] Yeah i see im dumb, do you know how to shorten variables too? /: Just use a shorter name? You’re really not being clear [lua]local str = “A short variable name” local AnUnnecessarilyLongVariableName = “A long variable name”[/lua] draw.SimpleText(ply:Nick(), "DarkRPHUD2", pos.x + 1, pos.y - 7, Color(0, 0, 0, 255), TEXT_ALIGN_CENTER, TEXT_ALIGN_CENTER) I want it to shorten ply:Nick() so it only can be 200 Pixels long or something like that… Could you just set that function to a variable, then use the variable? Something like this? [lua] if SERVER then AddCSLuaFile() else local ply = LocalPlayer() local name = ply:Nick() local font = “DermaDefault” local maxW = 200 surface.SetFont(font) local w,h = surface.GetTextSize(name) if w > maxW then name = string.sub(name, 1, name:len()-3)…"…" end print(name) end [/lua] I was about to post similar code. Next time try to be more clear. Vague questions call for vague answers.
__label__pos
0.997644
Galaxy Z Flip vs Motorola Razr Title: Galaxy Z Flip vs Motorola Razr Index: 1. Introduction 2. Design and Form Factor 3. Display 4. Camera 5. Performance 6. Battery Life 7. Software and User Interface 8. Connectivity Options 9. Storage 10. Price and Value for Money 11. Durability and Build Quality 12. Convenience and Portability 13. Accessories 14. Customer Reviews 15. Conclusion Introduction The smartphone market has witnessed the resurgence of flip phones with the introduction of the Galaxy Z Flip and the Motorola Razr. These modern foldable devices bring a nostalgic touch to the contemporary smartphone era. In this article, we will compare the Galaxy Z Flip and the Motorola Razr in terms of design, display, camera, performance, battery life, software, connectivity, storage, price, durability, convenience, and customer reviews. Design and Form Factor The design and form factor of a smartphone play a crucial role in its appeal. The Galaxy Z Flip features a sleek and compact design with a foldable glass display. On the other hand, the Motorola Razr embraces a nostalgic design reminiscent of its iconic predecessor. Both phones offer a premium feel and a unique folding mechanism, but the Galaxy Z Flip’s clamshell design provides a more modern and refined look. Display The display is a significant aspect to consider when comparing the Galaxy Z Flip and the Motorola Razr. The Galaxy Z Flip boasts a 6.7-inch Dynamic AMOLED display with HDR10+ support, offering vibrant colors and excellent contrast. In contrast, the Motorola Razr features a 6.2-inch foldable pOLED display with a lower resolution. While both displays provide immersive experiences, the Galaxy Z Flip’s larger and higher-resolution screen gives it an edge in terms of visual quality. Camera When it comes to camera capabilities, the Galaxy Z Flip and the Motorola Razr offer different approaches. The Galaxy Z Flip features a dual-camera setup with a 12 MP main camera and a 12 MP ultra-wide-angle lens. It also offers various camera modes and features, including Night Mode and Super Steady video recording. On the other hand, the Motorola Razr sports a single 16 MP camera with Night Vision mode. While both phones capture decent photos, the Galaxy Z Flip’s dual-camera system provides more versatility and advanced photography features. Performance Performance is a crucial factor to consider when choosing a smartphone. The Galaxy Z Flip is powered by a Snapdragon 855+ processor, ensuring smooth multitasking and efficient performance. The Motorola Razr, on the other hand, features a Snapdragon 710 processor, which offers decent performance but falls short compared to the Galaxy Z Flip. Additionally, the Galaxy Z Flip comes with 8 GB of RAM, while the Motorola Razr has 6 GB of RAM. Overall, the Galaxy Z Flip delivers better performance and responsiveness. Battery Life Battery life is an essential consideration for users who rely heavily on their smartphones. The Galaxy Z Flip houses a 3,300 mAh battery, which provides all-day usage with moderate usage patterns. The Motorola Razr, however, comes with a smaller 2,510 mAh battery, resulting in slightly shorter battery life. Both phones support fast charging, allowing users to quickly recharge their devices. While the Galaxy Z Flip offers a better battery capacity, the Motorola Razr’s battery life is still sufficient for everyday use. Software and User Interface The software and user interface of a smartphone significantly impact the overall user experience. The Galaxy Z Flip runs on Samsung’s One UI, which offers a feature-rich and intuitive interface. It also benefits from Samsung’s ecosystem and software optimizations. On the other hand, the Motorola Razr runs on near-stock Android, providing a clean and straightforward user experience. Both devices offer smooth navigation and access to essential apps, but the Galaxy Z Flip’s software enhancements give it an edge in terms of customization and additional features. Connectivity Options Connectivity options play a vital role in the usability of a smartphone. The Galaxy Z Flip and the Motorola Razr offer similar connectivity features, including 4G LTE, Wi-Fi, Bluetooth, and NFC. Both devices support eSIM functionality, allowing users to have multiple phone numbers on a single device. However, it’s worth noting that the Galaxy Z Flip also supports 5G connectivity, which future-proofs the device and provides faster download and upload speeds in supported areas. Storage Ample storage is essential for users who store a significant amount of data on their smartphones. The Galaxy Z Flip comes with 256 GB of internal storage, providing ample space for apps, photos, and videos. Unfortunately, it does not offer expandable storage options. On the other hand, the Motorola Razr offers 128 GB of internal storage, which may be sufficient for most users. However, it also lacks expandable storage. While the Galaxy Z Flip offers double the storage capacity, both devices should meet the storage needs of the average user. Price and Value for Money Price is a crucial factor for many consumers when choosing a smartphone. The Galaxy Z Flip is priced at a premium level, reflecting its cutting-edge technology and innovative design. On the other hand, the Motorola Razr offers a more affordable option within the foldable smartphone segment. While the Galaxy Z Flip justifies its higher price with superior specifications and features, the Motorola Razr provides a more accessible entry point for users interested in experiencing a foldable device. Durability and Build Quality Durability and build quality are essential considerations for foldable smartphones. The Galaxy Z Flip features a robust hinge mechanism and a foldable glass display, offering enhanced durability compared to traditional plastic displays. The Motorola Razr also incorporates a hinge mechanism, but it uses a plastic OLED display, which may be more prone to scratches and damage. While both devices are built to withstand everyday use, the Galaxy Z Flip’s use of glass gives it an advantage in terms of durability. Convenience and Portability Convenience and portability are key factors for users who prefer compact and easy-to-carry smartphones. The Galaxy Z Flip’s clamshell design allows for easy one-handed use and fits comfortably in pockets. It also features a cover display that shows notifications when the device is folded. The Motorola Razr, with its nostalgic flip design, offers a compact form factor but lacks a cover display. Both devices provide portability, but the Galaxy Z Flip’s additional features make it more convenient for daily use. Accessories Accessories can enhance the overall smartphone experience. The Galaxy Z Flip offers a range of accessories, including protective cases, wireless chargers, and external batteries. These accessories are designed to complement the device’s unique form factor and provide added convenience. The Motorola Razr also offers accessories such as cases and screen protectors, but the options are relatively limited compared to the Galaxy Z Flip. Overall, the Galaxy Z Flip provides a more extensive range of accessories to enhance the user experience. Customer Reviews Customer reviews provide valuable insights into the real-world experiences of users. The Galaxy Z Flip has received positive reviews for its innovative design, display quality, and overall performance. Users appreciate its compact form factor and the versatility of its foldable display. The Motorola Razr has also garnered positive feedback for its nostalgic design and compact size. However, some users have expressed concerns about its camera performance and battery life. It’s essential to consider customer reviews to gain a comprehensive understanding of each device’s strengths and weaknesses. Conclusion In conclusion, both the Galaxy Z Flip and the Motorola Razr offer unique and innovative foldable smartphone experiences. The Galaxy Z Flip excels in terms of design, display, camera, performance, battery life, software, connectivity, storage, and accessories. It provides a premium and futuristic smartphone experience but comes at a higher price point. On the other hand, the Motorola Razr offers a more affordable entry point into the foldable smartphone market, with a nostalgic design and compact form factor. While it may have some limitations in terms of performance and camera capabilities, it still provides a satisfactory foldable smartphone experience. Ultimately, the choice between the two devices depends on individual preferences, budget, and desired features. Unmasking Tech Unmasking Tech Your go-to guide for deciphering tech jargon. We decode and simplify complex terms, expressions, and concepts from the tech universe, from AI to Blockchain, making them easy to understand. About Us We are ‘Unmasking Tech’, a dedicated team of tech enthusiasts committed to demystifying the world of technology. With a passion for clear, concise, and accessible content, we strive to bridge the gap between tech experts and the everyday user. Ready to Level Up? Unlock your potential in the world of IT with our comprehensive online course. From beginner concepts to advanced techniques, we've got you covered. Start your tech journey today!
__label__pos
0.685441
Skip to content HTTPS clone URL Subversion checkout URL You can clone with or . Download ZIP Browse files Fixed #18676 -- Allow fast-path deletion of objects Objects can be fast-path deleted if there are no signals, and there are no further cascades. If fast-path is taken, the objects do not need to be loaded into memory before deletion. Thanks to Jeremy Dunck, Simon Charette and Alex Gaynor for reviewing the patch. • Loading branch information... 1 parent 3fcca0e commit 1cd6e04cd4f768bcd4385b75de433d497d938f82 @akaariai akaariai committed View 7 django/contrib/admin/util.py @@ -191,6 +191,13 @@ def nested(self, format_callback=None): roots.extend(self._nested(root, seen, format_callback)) return roots + def can_fast_delete(self, *args, **kwargs): + """ + We always want to load the objects into memory so that we can display + them to the user in confirm page. + """ + return False + def model_format_dict(obj): """ View 63 django/db/models/deletion.py @@ -77,6 +77,9 @@ def __init__(self, using): self.data = {} self.batches = {} # {model: {field: set([instances])}} self.field_updates = {} # {model: {(field, value): set([instances])}} + # fast_deletes is a list of queryset-likes that can be deleted without + # fetching the objects into memory. + self.fast_deletes = [] # Tracks deletion-order dependency for databases without transactions # or ability to defer constraint checks. Only concrete model classes @@ -131,6 +134,43 @@ def add_field_update(self, field, value, objs): model, {}).setdefault( (field, value), set()).update(objs) + def can_fast_delete(self, objs, from_field=None): + """ + Determines if the objects in the given queryset-like can be + fast-deleted. This can be done if there are no cascades, no + parents and no signal listeners for the object class. + + The 'from_field' tells where we are coming from - we need this to + determine if the objects are in fact to be deleted. Allows also + skipping parent -> child -> parent chain preventing fast delete of + the child. + """ + if from_field and from_field.rel.on_delete is not CASCADE: + return False + if not (hasattr(objs, 'model') and hasattr(objs, '_raw_delete')): + return False + model = objs.model + if (signals.pre_delete.has_listeners(model) + or signals.post_delete.has_listeners(model) + or signals.m2m_changed.has_listeners(model)): + return False + # The use of from_field comes from the need to avoid cascade back to + # parent when parent delete is cascading to child. + opts = model._meta + if any(link != from_field for link in opts.concrete_model._meta.parents.values()): + return False + # Foreign keys pointing to this model, both from m2m and other + # models. + for related in opts.get_all_related_objects( + include_hidden=True, include_proxy_eq=True): + if related.field.rel.on_delete is not DO_NOTHING: + return False + # GFK deletes + for relation in opts.many_to_many: + if not relation.rel.through: + return False + return True + def collect(self, objs, source=None, nullable=False, collect_related=True, source_attr=None, reverse_dependency=False): """ @@ -148,6 +188,9 @@ def collect(self, objs, source=None, nullable=False, collect_related=True, models, the one case in which the cascade follows the forwards direction of an FK rather than the reverse direction.) """ + if self.can_fast_delete(objs): + self.fast_deletes.append(objs) + return new_objs = self.add(objs, source, nullable, reverse_dependency=reverse_dependency) if not new_objs: @@ -160,6 +203,10 @@ def collect(self, objs, source=None, nullable=False, collect_related=True, concrete_model = model._meta.concrete_model for ptr in six.itervalues(concrete_model._meta.parents): if ptr: + # FIXME: This seems to be buggy and execute a query for each + # parent object fetch. We have the parent data in the obj, + # but we don't have a nice way to turn that data into parent + # object instance. parent_objs = [getattr(obj, ptr.name) for obj in new_objs] self.collect(parent_objs, source=model, source_attr=ptr.rel.related_name, @@ -170,12 +217,12 @@ def collect(self, objs, source=None, nullable=False, collect_related=True, for related in model._meta.get_all_related_objects( include_hidden=True, include_proxy_eq=True): field = related.field - if related.model._meta.auto_created: - self.add_batch(related.model, field, new_objs) - else: - sub_objs = self.related_objects(related, new_objs) - if not sub_objs: - continue + if field.rel.on_delete == DO_NOTHING: + continue + sub_objs = self.related_objects(related, new_objs) + if self.can_fast_delete(sub_objs, from_field=field): + self.fast_deletes.append(sub_objs) + elif sub_objs: field.rel.on_delete(self, field, sub_objs, self.using) # TODO This entire block is only needed as a special case to @@ -241,6 +288,10 @@ def delete(self): sender=model, instance=obj, using=self.using ) + # fast deletes + for qs in self.fast_deletes: + qs._raw_delete(using=self.using) + # update fields for model, instances_for_fieldvalues in six.iteritems(self.field_updates): query = sql.UpdateQuery(model) View 8 django/db/models/query.py @@ -529,6 +529,14 @@ def delete(self): self._result_cache = None delete.alters_data = True + def _raw_delete(self, using): + """ + Deletes objects found from the given queryset in single direct SQL + query. No signals are sent, and there is no protection for cascades. + """ + sql.DeleteQuery(self.model).delete_qs(self, using) + _raw_delete.alters_data = True + def update(self, **kwargs): """ Updates all elements in the current QuerySet, setting all the given View 3  django/db/models/sql/compiler.py @@ -934,7 +934,8 @@ def as_sql(self): qn = self.quote_name_unless_alias result = ['DELETE FROM %s' % qn(self.query.tables[0])] where, params = self.query.where.as_sql(qn=qn, connection=self.connection) - result.append('WHERE %s' % where) + if where: + result.append('WHERE %s' % where) return ' '.join(result), tuple(params) class SQLUpdateCompiler(SQLCompiler): View 32 django/db/models/sql/subqueries.py @@ -3,6 +3,7 @@ """ from django.core.exceptions import FieldError +from django.db import connections from django.db.models.constants import LOOKUP_SEP from django.db.models.fields import DateField, FieldDoesNotExist from django.db.models.sql.constants import * @@ -46,6 +47,37 @@ def delete_batch(self, pk_list, using, field=None): pk_list[offset:offset + GET_ITERATOR_CHUNK_SIZE]), AND) self.do_query(self.model._meta.db_table, where, using=using) + def delete_qs(self, query, using): + innerq = query.query + # Make sure the inner query has at least one table in use. + innerq.get_initial_alias() + # The same for our new query. + self.get_initial_alias() + innerq_used_tables = [t for t in innerq.tables + if innerq.alias_refcount[t]] + if ((not innerq_used_tables or innerq_used_tables == self.tables) + and not len(innerq.having)): + # There is only the base table in use in the query, and there are + # no aggregate filtering going on. + self.where = innerq.where + else: + pk = query.model._meta.pk + if not connections[using].features.update_can_self_select: + # We can't do the delete using subquery. + values = list(query.values_list('pk', flat=True)) + if not values: + return + self.delete_batch(values, using) + return + else: + values = innerq + innerq.select = [(self.get_initial_alias(), pk.column)] + where = self.where_class() + where.add((Constraint(None, pk.column, pk), 'in', values), AND) + self.where = where + self.get_compiler(using).execute_sql(None) + + class UpdateQuery(Query): """ Represents an "update" SQL query. View 15 docs/ref/models/querysets.txt @@ -1667,6 +1667,21 @@ methods on your models. It does, however, emit the :data:`~django.db.models.signals.post_delete` signals for all deleted objects (including cascaded deletions). +.. versionadded:: 1.5 + Allow fast-path deletion of objects + +Django needs to fetch objects into memory to send signals and handle cascades. +However, if there are no cascades and no signals, then Django may take a +fast-path and delete objects without fetching into memory. For large +deletes this can result in significantly reduced memory usage. The amount of +executed queries can be reduced, too. + +ForeignKeys which are set to :attr:`~django.db.models.ForeignKey.on_delete` +DO_NOTHING do not prevent taking the fast-path in deletion. + +Note that the queries generated in object deletion is an implementation +detail subject to change. + .. _field-lookups: Field lookups View 6 docs/releases/1.5.txt @@ -149,6 +149,12 @@ Django 1.5 also includes several smaller improvements worth noting: * Django now provides a mod_wsgi :doc:`auth handler </howto/deployment/wsgi/apache-auth>` +* The :meth:`QuerySet.delete() <django.db.models.query.QuerySet.delete>` + and :meth:`Model.delete() <django.db.models.Model.delete()>` can now take + fast-path in some cases. The fast-path allows for less queries and less + objects fetched into memory. See :meth:`QuerySet.delete() + <django.db.models.query.QuerySet.delete>` for details. + Backwards incompatible changes in 1.5 ===================================== View 20 tests/modeltests/delete/models.py @@ -95,7 +95,7 @@ class MRNull(models.Model): class Avatar(models.Model): - pass + desc = models.TextField(null=True) class User(models.Model): @@ -108,3 +108,21 @@ class HiddenUser(models.Model): class HiddenUserProfile(models.Model): user = models.ForeignKey(HiddenUser) + +class M2MTo(models.Model): + pass + +class M2MFrom(models.Model): + m2m = models.ManyToManyField(M2MTo) + +class Parent(models.Model): + pass + +class Child(Parent): + pass + +class Base(models.Model): + pass + +class RelToBase(models.Model): + base = models.ForeignKey(Base, on_delete=models.DO_NOTHING) View 101 tests/modeltests/delete/tests.py @@ -1,11 +1,12 @@ from __future__ import absolute_import -from django.db import models, IntegrityError +from django.db import models, IntegrityError, connection from django.test import TestCase, skipUnlessDBFeature, skipIfDBFeature from django.utils.six.moves import xrange from .models import (R, RChild, S, T, U, A, M, MR, MRNull, - create_a, get_default_r, User, Avatar, HiddenUser, HiddenUserProfile) + create_a, get_default_r, User, Avatar, HiddenUser, HiddenUserProfile, + M2MTo, M2MFrom, Parent, Child, Base) class OnDeleteTests(TestCase): @@ -74,6 +75,16 @@ def check_do_nothing(sender, **kwargs): self.assertEqual(replacement_r, a.donothing) models.signals.pre_delete.disconnect(check_do_nothing) + def test_do_nothing_qscount(self): + """ + Test that a models.DO_NOTHING relation doesn't trigger a query. + """ + b = Base.objects.create() + with self.assertNumQueries(1): + # RelToBase should not be queried. + b.delete() + self.assertEqual(Base.objects.count(), 0) + def test_inheritance_cascade_up(self): child = RChild.objects.create() child.delete() @@ -229,16 +240,34 @@ def test_can_defer_constraint_checks(self): # 1 query to delete the avatar # The important thing is that when we can defer constraint checks there # is no need to do an UPDATE on User.avatar to null it out. + + # Attach a signal to make sure we will not do fast_deletes. + calls = [] + def noop(*args, **kwargs): + calls.append('') + models.signals.post_delete.connect(noop, sender=User) + self.assertNumQueries(3, a.delete) self.assertFalse(User.objects.exists()) self.assertFalse(Avatar.objects.exists()) + self.assertEquals(len(calls), 1) + models.signals.post_delete.disconnect(noop, sender=User) @skipIfDBFeature("can_defer_constraint_checks") def test_cannot_defer_constraint_checks(self): u = User.objects.create( avatar=Avatar.objects.create() ) + # Attach a signal to make sure we will not do fast_deletes. + calls = [] + def noop(*args, **kwargs): + calls.append('') + models.signals.post_delete.connect(noop, sender=User) + a = Avatar.objects.get(pk=u.avatar_id) + # The below doesn't make sense... Why do we need to null out + # user.avatar if we are going to delete the user immediately after it, + # and there are no more cascades. # 1 query to find the users for the avatar. # 1 query to delete the user # 1 query to null out user.avatar, because we can't defer the constraint @@ -246,6 +275,8 @@ def test_cannot_defer_constraint_checks(self): self.assertNumQueries(4, a.delete) self.assertFalse(User.objects.exists()) self.assertFalse(Avatar.objects.exists()) + self.assertEquals(len(calls), 1) + models.signals.post_delete.disconnect(noop, sender=User) def test_hidden_related(self): r = R.objects.create() @@ -254,3 +285,69 @@ def test_hidden_related(self): r.delete() self.assertEqual(HiddenUserProfile.objects.count(), 0) + +class FastDeleteTests(TestCase): + + def test_fast_delete_fk(self): + u = User.objects.create( + avatar=Avatar.objects.create() + ) + a = Avatar.objects.get(pk=u.avatar_id) + # 1 query to fast-delete the user + # 1 query to delete the avatar + self.assertNumQueries(2, a.delete) + self.assertFalse(User.objects.exists()) + self.assertFalse(Avatar.objects.exists()) + + def test_fast_delete_m2m(self): + t = M2MTo.objects.create() + f = M2MFrom.objects.create() + f.m2m.add(t) + # 1 to delete f, 1 to fast-delete m2m for f + self.assertNumQueries(2, f.delete) + + def test_fast_delete_revm2m(self): + t = M2MTo.objects.create() + f = M2MFrom.objects.create() + f.m2m.add(t) + # 1 to delete t, 1 to fast-delete t's m_set + self.assertNumQueries(2, f.delete) + + def test_fast_delete_qs(self): + u1 = User.objects.create() + u2 = User.objects.create() + self.assertNumQueries(1, User.objects.filter(pk=u1.pk).delete) + self.assertEquals(User.objects.count(), 1) + self.assertTrue(User.objects.filter(pk=u2.pk).exists()) + + def test_fast_delete_joined_qs(self): + a = Avatar.objects.create(desc='a') + User.objects.create(avatar=a) + u2 = User.objects.create() + expected_queries = 1 if connection.features.update_can_self_select else 2 + self.assertNumQueries(expected_queries, + User.objects.filter(avatar__desc='a').delete) + self.assertEquals(User.objects.count(), 1) + self.assertTrue(User.objects.filter(pk=u2.pk).exists()) + + def test_fast_delete_inheritance(self): + c = Child.objects.create() + p = Parent.objects.create() + # 1 for self, 1 for parent + # However, this doesn't work as child.parent access creates a query, + # and this means we will be generating extra queries (a lot for large + # querysets). This is not a fast-delete problem. + # self.assertNumQueries(2, c.delete) + c.delete() + self.assertFalse(Child.objects.exists()) + self.assertEquals(Parent.objects.count(), 1) + self.assertEquals(Parent.objects.filter(pk=p.pk).count(), 1) + # 1 for self delete, 1 for fast delete of empty "child" qs. + self.assertNumQueries(2, p.delete) + self.assertFalse(Parent.objects.exists()) + # 1 for self delete, 1 for fast delete of empty "child" qs. + c = Child.objects.create() + p = c.parent_ptr + self.assertNumQueries(2, p.delete) + self.assertFalse(Parent.objects.exists()) + self.assertFalse(Child.objects.exists()) View 3  tests/regressiontests/admin_util/models.py @@ -39,3 +39,6 @@ class Guest(models.Model): class Meta: verbose_name = "awesome guest" + +class EventGuide(models.Model): + event = models.ForeignKey(Event, on_delete=models.DO_NOTHING) View 13 tests/regressiontests/admin_util/tests.py @@ -17,7 +17,7 @@ from django.utils.safestring import mark_safe from django.utils import six -from .models import Article, Count, Event, Location +from .models import Article, Count, Event, Location, EventGuide class NestedObjectsTests(TestCase): @@ -71,6 +71,17 @@ def test_queries(self): # Should not require additional queries to populate the nested graph. self.assertNumQueries(2, self._collect, 0) + def test_on_delete_do_nothing(self): + """ + Check that the nested collector doesn't query for DO_NOTHING objects. + """ + n = NestedObjects(using=DEFAULT_DB_ALIAS) + objs = [Event.objects.create()] + EventGuide.objects.create(event=objs[0]) + with self.assertNumQueries(2): + # One for Location, one for Guest, and no query for EventGuide + n.collect(objs) + class UtilTests(unittest.TestCase): def test_values_from_lookup_field(self): """ View 11 tests/regressiontests/delete_regress/tests.py @@ -3,7 +3,7 @@ import datetime from django.conf import settings -from django.db import backend, transaction, DEFAULT_DB_ALIAS +from django.db import backend, transaction, DEFAULT_DB_ALIAS, models from django.test import TestCase, TransactionTestCase, skipUnlessDBFeature from .models import (Book, Award, AwardNote, Person, Child, Toy, PlayedWith, @@ -139,17 +139,24 @@ def test_to_field(self): eaten = Eaten.objects.create(food=apple, meal="lunch") apple.delete() + self.assertFalse(Food.objects.exists()) + self.assertFalse(Eaten.objects.exists()) + class LargeDeleteTests(TestCase): def test_large_deletes(self): "Regression for #13309 -- if the number of objects > chunk size, deletion still occurs" for x in range(300): track = Book.objects.create(pagecount=x+100) + # attach a signal to make sure we will not fast-delete + def noop(*args, **kwargs): + pass + models.signals.post_delete.connect(noop, sender=Book) Book.objects.all().delete() + models.signals.post_delete.disconnect(noop, sender=Book) self.assertEqual(Book.objects.count(), 0) - class ProxyDeleteTest(TestCase): """ Tests on_delete behavior for proxy models. View 12 tests/regressiontests/dispatch/tests/test_dispatcher.py @@ -127,15 +127,15 @@ def testDisconnection(self): self._testIsClean(a_signal) def test_has_listeners(self): - self.assertIs(a_signal.has_listeners(), False) - self.assertIs(a_signal.has_listeners(sender=object()), False) + self.assertFalse(a_signal.has_listeners()) + self.assertFalse(a_signal.has_listeners(sender=object())) receiver_1 = Callable() a_signal.connect(receiver_1) - self.assertIs(a_signal.has_listeners(), True) - self.assertIs(a_signal.has_listeners(sender=object()), True) + self.assertTrue(a_signal.has_listeners()) + self.assertTrue(a_signal.has_listeners(sender=object())) a_signal.disconnect(receiver_1) - self.assertIs(a_signal.has_listeners(), False) - self.assertIs(a_signal.has_listeners(sender=object()), False) + self.assertFalse(a_signal.has_listeners()) + self.assertFalse(a_signal.has_listeners(sender=object())) class ReceiverTestCase(unittest.TestCase): 0 comments on commit 1cd6e04 Please sign in to comment. Something went wrong with that request. Please try again.
__label__pos
0.981108
Intro Update 17-10-2023: This article is been updated for Angular Signals and does not use ObservableState anymore Lately, more and more Angular developers favor template-driven forms over reactive forms because: • There is almost no boilerplate code. • It’s easy to do form validations. • We let Angular do all the work! Angular will create FormControl and FormGroup instances for us automatically behind the scenes. • Template-driven forms are more declarative. The pros and cons of both techniques are explained in this article. When using Reactive forms in Angular we could use a FormArray if we want to create an iterable list of FormControl elements. This technique can be handy if we want to create a list of phonenumbers for instance. Template-driven forms don’t play nice with form arrays and in this article we will focus on how to achieve the same functionality by using template-driven forms. We will create a form with phonenumbers to showcase how we can achieve the same functionality with template-driven forms. To be ahead of the game this article uses Angular Signals behind the scenes. Check this article if you want to know how to set up a Template-driven form with Signals Like we mentioned before, using a FormArray in combination with template-driven forms is not possible. The FormArray class belongs to the reactive forms package and template-driven forms can only create FormGroup and FormControls instances automatically by using the ngModel and ngModelGroup directives. The phonenumbers functionality Let’s continue with our phonenumbers example. We need a simple user form that has a firstName and lastName property, together with an array of phonenumbers. FormArrays are not available in template-driven forms, but we could use the following technique to achieve something similar: <form ...> <label> First name <input type="text" [ngModel]="vm.user.firstName" name="firstName"/> </label> <label> Last name <input type="text" [ngModel]="vm.user.lastName" name="lastName"/> </label> <div *ngFor="let phoneNumber of phoneNumbers; let i = index"> <input type="text" [ngModel]="phoneNumbers[i]" name="phoneNumber{{i}}"/> </div> ... </form> This would create a FormControl instance on the form for every phonenumber and would pollute the automatically created reactive form behind the scenes. It would look like this: ngForm = { form: { firstName: FormControl, lastName: FormControl, phonenumber0: FormControl, // Dirty phonenumber1: FormControl, // Dirty phonenumber2: FormControl, // Dirty phonenumber3: FormControl, // Dirty } } As we can see, an array is not possible. Angular would just create formControls with unique keys. Let’s stop trying to use arrays, and instead use template-driven forms they way they were meant to be used. This means our form can only exist out of FormGroup instances and FormControl intances. The previous approach polluted the structure of our form. The following structure is the structure that we want to achieve: ngForm = { form: { firstName: FormControl, lastName: FormControl, // Clean separate formGroup that contains the phonenumbers // on a property related to the index phonenumbers: { 0: FormControl, 1: FormControl, 2: FormControl, 3: FormControl, } } } We can see that the form is not polluted anymore and we use a phonenumbers object with indexes as keys that will contain the FormControl instances for every phonenumber. Let’s update the UserFormModel since this is the model that will represent our actual form: type UserFormModel = Partial<{ firstName: string; lastName: string; // Will represent the input that will be used // to add a phone number addPhonenumber: string; // The list of actual phone numbers phonenumbers: { [key: string]: string }; ... }> Ideally, an initial version of our form would look like this: Note: always use a trackBy function <form ...> ... <div ngModelGroup="phonenumbers"> <div *ngFor="let key of vm.user.phonenumbers; trackBy: tracker"> <input type="text" [ngModel]="vm.user.phonenumbers[key]" name="{{key}}"/> </div> </div> ... </form> export class UserFormComponent implements AfterViewInit { @ViewChild('form') form!: NgForm; private readonly user = signal<UserFormModel>({}); private readonly viewModel = computed(() => ({ user: this.user() })) protected get vm() { return this.viewModel(); }; protected tracker = (i: number) => i; public ngAfterViewInit(): void { this.form.valueChanges?.subscribe((v) => { this.user.set(v); }); } This will result in an error because vm.user.phonenumbers is not an iterable but an object and the *ngFor directive expects an iterable like an Array. To convert this object to an array we can use the keyvalue pipe. This pipe will return an array with a key and a value property. <form ...> ... <div ngModelGroup="phonenumbers"> <div *ngFor="let item of vm.user.phonenumbers | keyvalue; trackBy: tracker"> <input type="text" [ngModel]="vm.user.phonenumbers[item.key]" name="{{item.key}}"/> </div> </div> ... </form> We can see that we use the key (index) in the name attribute. We also use the key to bind the actual phonenumber to the [ngModel] directive. Note that we do not use the banana in the box syntax because this.form.valueChanges is automatically feeding our component its state machine in the ngAfterViewInit() lifecycle hook. Completing the phonenumbers functionality with adding and deleting phonenumbers The iterative part and the updating of phonenumbers is ready, but now we still need to be able to add and remove phonenumbers. <form ...> ... <div ngModelGroup="phonenumbers"> <div *ngFor="let item of vm.user.phonenumbers | keyvalue; trackBy: tracker"> <input type="text" *ngIf="vm.user.phonenumbers as phonenumbers" [ngModel]="phonenumbers[item.key]" name="{{item.key}}"/> <!-- Delete the phonenumber based on the key --> <button type="button" (click)="deletePhonenumber(item.key)"> Delete phonenumber </button> </div> </div> <!-- Bind the addPhonenumber to an input --> <input type="text" [ngModel]="vm.user.addPhonenumber" name="addPhonenumber"/> <!-- Add a phonenumber --> <button type="button" (click)="addPhonenumber()">Add phonenumber</button> ... </form> The template is ready, we only need to add a deletePhonenumber() and addPhoneNumber() method. An array would be a little bit easier here, but since template-driven forms don’t play nice with arrays we have to translate an array to an object and the other way around. Let’s start with the addPhonenumber() method. We will use Object.values to get all the values from our phonenumbers object and create a new array with the addPhonenumber value. This gives us a brand-new array with the newly added phonenumber in it: export class UserFormComponent implements AfterViewInit { ... protected addPhonenumber(): void { // Create new array with all the old phonenumbers and new one const phonenumbers = [ ...Object.values(this.user().phonenumbers), this.user().addPhonenumber, ]; } } What’s left to do is update our user ViewModel with the newly calculated phonenumbers. For that we need to convert our new array back to an object where all the keys are indexes. We can clean the addPhonenumber field at the same time. export class UserFormComponent implements AfterViewInit { ... protected addPhonenumber(): void { // Create new array with all the old phonenumbers and new one const phonenumbers = [ ...Object.values(this.user().phonenumbers || {}), this.user().addPhonenumber, ] as string[]; this.user.update((old) => ({ ...old, phonenumbers: arrayToObject(phonenumbers), addPhonenumber: '', })); } } To create an object from an array we can use the reduce() method that lives on the arrays prototype. We created a simple pure arrayToObject function that can be reused everywhere: // ['foo', 'bar', 'baz'] => {0: 'foo', 1: 'bar', 2: 'baz'} function arrayToObject<T>(arr: T[]): { [key: number]: T } { return arr.reduce((acc, value, index) => ({ ...acc, [index]: value }), {}) } For the deletePhonenumber() method we can use Object.values to get all the values from our phonenumbers object and use the filter method to make sure that the phonenumber we want to delete isn’t part of the array anymore. export class UserFormComponent implements AfterViewInit { ... protected deletePhonenumber(key: string): void { const phonenumbers = Object .values(this.user().phonenumbers) .filter( (v, index) => index !== Number(key) ); } } Updating is done exactly the same as we did with the addPhoneNumber() method. Below you can see both methods implemented: export class UserFormComponent implements AfterViewInit { ... protected addPhonenumber(): void { const phonenumbers = [ ...Object.values(this.user().phonenumbers), this.user().addPhonenumber, ]; this.user.update((old) => ({ ...old, phonenumbers: arrayToObject(phonenumbers), addPhonenumber: '', })); } public deletePhonenumber(key: string): void { const phonenumbers = Object.values(this.user().phonenumbers || {}).filter( (v, index) => index !== Number(key) ); this.user.update((old) => ({ ...old, phonenumbers: arrayToObject(phonenumbers), addPhonenumber: '', })); } } The outcome of our form (what is being kept in the state of our component now looks like this: form = { user: { firstName: "Brecht", lastName: "Billiet", addPhonenumber: "", phonenumbers: { 0: "000000000000", 1: "111111111111", 2: "222222222222" } } } Here is an overview of the entire code of user-form.component.ts: import { CommonModule } from '@angular/common'; import { AfterViewInit, Component, signal, ViewChild } from '@angular/core'; import { FormsModule, NgForm } from '@angular/forms'; type UserFormModel = Partial<{ firstName: string; lastName: string; addPhonenumber: string; phonenumbers: { [key: string]: string }; }>; @Component({ selector: 'app-user-form', templateUrl: './user-form.component.html', styleUrls: ['./user-form.component.css'], standalone: true, imports: [CommonModule, FormsModule], }) export class UserFormComponent implements AfterViewInit { @ViewChild('form') form!: NgForm; private readonly user = signal<UserFormModel>({}); protected get vm() { return { user: this.user() }; } tracker = (i: number) => i; public ngAfterViewInit(): void { this.form.valueChanges?.subscribe((v) => { this.user.set(v); }); } protected submit(): void { console.log(this.form); } protected addPhonenumber(): void { const phonenumbers = [ ...Object.values(this.user().phonenumbers || {}), this.user().addPhonenumber, ] as string[]; this.user.update((old) => ({ ...old, phonenumbers: arrayToObject(phonenumbers), addPhonenumber: '', })); } protected deletePhonenumber(key: string): void { const phonenumbers = Object.values(this.user().phonenumbers || {}).filter( (v, index) => index !== Number(key) ); this.user.update((old) => ({ ...old, phonenumbers: arrayToObject(phonenumbers), addPhonenumber: '', })); } } function arrayToObject<T>(arr: T[]): { [key: number]: T } { return arr.reduce((acc, value, index) => ({ ...acc, [index]: value }), {}); } The complete result of the template looks like this. This template has no boilerplate, it is readable and Angular takes care of everything for us: <form #form="ngForm" (ngSubmit)="submit()"> <label> First name <input type="text" [ngModel]="vm.user.firstName" name="firstName"/> </label> <label> Last name <input type="text" [ngModel]="vm.user.lastName" name="lastName"/> </label> <h2>Phonenumbers</h2> <div ngModelGroup="phonenumbers"> <div class="phonenumber" *ngFor="let item of vm.user.phonenumbers | keyvalue; trackBy: tracker"> <input type="text" *ngIf="vm.user.phonenumbers as phonenumbers" [ngModel]="phonenumbers[item.key]" name="{{ item.key }}"/> <button type="button" (click)="deletePhonenumber(item.key)"> Delete phonenumber </button> </div> </div> <input type="text" [ngModel]="vm.user.addPhonenumber" name="addPhonenumber"/> <button type="button" (click)="addPhonenumber()">Add phonenumber</button> <button>Submit form</button> </form> You can play with the example on Stackblitz here. Conclusion Template-driven forms are clean, have no boilerplate and we let Angular take care of all the hard work for us. We can not use FormArray since it is a part of the reactive forms package in Angular. Since template-driven forms create FormControl and FormGroup instances for us automatically we can create clean form structures by converting arrays to objects and the other way around. Hope you liked the article! If you have any questions! Reach out! Remember that we also offer Angular Consultancy and Angular Coaching. We also love to help out with Angular Training
__label__pos
0.991199
Custom TypeName Slow Output Welcome Forums General PowerShell Q&A Custom TypeName Slow Output This topic contains 8 replies, has 2 voices, and was last updated by   Participant 1 year, 3 months ago. • Author Posts • #77403 Participant Points: 0 Rank: Member I have a module with a function that collects the firmware versions on HP servers using the HPRESTCmdlets module found on the gallery. The function assigns the object a custom typename of 'Hardware.Firmware' and I am using a ps1xml for custom viewing. The function consists of a begin, process, and end scriptblock. When running the function against a set of objects (via foreach) the first object is always delayed in the output to the console, it actually is displayed after the END block runs. Every sequential object runs as expected. I also receive the same delay if running against a single object. If I remove the custom typename, the first object has no delay and is process correctly. Any ideas on why there is a delay on processing the first object when using a custom type and how can I avoid it? Here is the code for the function: function Get-HPFirmware { [CmdletBinding()] param ( [Parameter(Mandatory = $true, Position = 0)] [Alias('Name', 'Server', 'Ip')] [string]$iLoName, [Parameter(Mandatory = $true)] [ValidateNotNullorEmpty()] [Alias('User')] [System.Management.Automation.PSCredential][System.Management.Automation.Credential()] $Credential, [switch]$IgnoreCertFailures ) BEGIN { $DefaultVariables = $(Get-Variable).Name try { Import-Module -Name HPRESTCmdlets -Force -ErrorAction Stop -Verbose:$false } catch { throw } Write-Verbose -Message "Splatting parameters for Connect-HPREST" $ConnectParams = @{ 'Address' = $PSBoundParameters['iLoName'] 'Credential' = $PSBoundParameters['Credential'] } if ($PSBoundParameters.ContainsKey('IgnoreCertFailures')) { $ConnectParams.DisableCertificateAuthentication = $true } } PROCESS { try { Write-Verbose -Message "Connecting to $($ConnectParams['Address'])" $Session = Connect-HPREST @ConnectParams -ErrorAction Stop } catch { throw } try { $Systems = Get-HPRESTDataRaw -Href '/rest/v1/Systems' -Session $Session -ErrorAction Stop foreach ($Sys in $Systems.links.member.href) { $Data = Get-HPRESTDataRaw -Href $Sys -Session $Session -ErrorAction Stop $FirmwareUri = ($Data.Oem.Hp.links.PSObject.Members | Where-Object -FilterScript { $_.Name -match 'Firmware' }).Value.href Write-Verbose -Message "Firmware Uri ($FirmwareUri) discovered" if ($FirmwareUri) { $FirmwareData = Get-HPRESTDataRaw -Href $FirmwareUri -Session $Session -ErrorAction Stop if ($FirmwareData) { $Firmware = $FirmwareData.Current | ForEach-Object -Process { ($_.PSObject.Members | Where-Object -FilterScript { $_.MemberType -eq 'NoteProperty' }).Value } Write-Verbose -Message "$($Firmware.Count) components discovered" } else { Write-Warning -Message "No firmware data available via $FirmwareUri for $($PSBoundParameters['iLoName'])" break } } else { Write-Warning -Message "Unable to locate the firmware uri" break } $PCIDevicesUri = ($Data.Oem.Hp.links.PSObject.Members | Where-Object -FilterScript { $_.Name -match 'PCIDevices' }).Value.href Write-Verbose -Message "PCI Device Uri ($PCIDevicesUri) discovered" if ($PCIDevicesUri) { $PCIData = Get-HPRESTDataRaw -Href $PCIDevicesUri -Session $Session -ErrorAction Stop if (!$PCIData) { Write-Warning -Message "No PCI device data available via $PCIDevicesUri for $($PSBoundParameters['iLoName'])" break } Write-Verbose -Message "$($PCIData.Items.Count) devices discovered" } else { Write-Warning -Message "Unable to locate the PCI device uri" break } foreach ($i in $Firmware) { if ($i.UEFIDevicePaths) { $Device = $PCIData.Items | Where-Object -FilterScript { $_.UEFIDevicePath -eq $i.UEFIDevicePaths } $Props = @{ 'ElementName' = $i.Name 'Location' = $i.Location 'VersionString' = $i.VersionString 'FQDD' = if ($i.Name -match 'FC') { $Device.StructuredName -replace "^\w{3}", "FC" } else { $Device.StructuredName -replace "\s", "" } 'DeviceId' = $Device.DeviceId 'SubDeviceId' = $Device.SubsystemDeviceID 'VendorId' = $Device.VendorID 'SubVendorId' = $Device.SubsystemVendorID } } else { $Props = @{ } switch -wildcard ($i.Name) { '*Power Supply*' { $Props.ElementName = "$($i.Name).$($i.Location)" $Props.Location = $i.Location $Props.VersionString = $i.VersionString $Props.FQDD = "PSU.$($i.Location -replace '\s', '')" } '*iLo*' { $Props.ElementName = "Integrated Lights Out" $Props.Location = $i.Location $Props.VersionString = $i.VersionString.Split(' ')[0] $Props.FQDD = "$($i.Name).$($i.Location -replace '\s', '')" } '*System ROM*' { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString.Split(' ')[1] $Props.FQDD = "BIOS.$($i.Location -replace '\s', '')" } '*Intelligent*' { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString $Props.FQDD = "DriverPack.$($i.Location -replace '\s', '')" } '*Power Management*' { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString $Props.FQDD = "DriverPack.$($i.Location -replace '\s', '')" } '*Server Platform*' { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString $Props.FQDD = "SPS.$($i.Location -replace '\s', '')" } '*Logic Device*' { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString.Split(' ')[-1] $Props.FQDD = "SPLD.$($i.Location -replace '\s', '')" } default { $Props.ElementName = $i.Name $Props.Location = $i.Location $Props.VersionString = $i.VersionString $Props.FQDD = "Unknown.$($i.Location -replace '\s', '')" } } } $Object = New-Object -TypeName System.Management.Automation.PSObject -Property $Props $Object.PSObject.TypeNames.Insert(0,'Hardware.Firmware') $Object } } Write-Verbose -Message "Disconnecting from iLo" Disconnect-HPREST -Session $Session } catch { Disconnect-HPREST -Session $Session throw } } END { Write-Verbose -Message "Cleaning up variables created by cmdlet" ((Compare-Object -ReferenceObject (Get-Variable).Name -DifferenceObject $DefaultVariables).InputObject) | ForEach-Object -Process { Remove-Variable -Name $_ -Force -ErrorAction Ignore } } } Here is the ps1xml: Hardware.Firmware Hardware.Firmware Name Version ElementName VersionString Here is the sample Verbose output: VERBOSE: Splatting parameters for Connect-HPREST VERBOSE: Connecting to x.x.x.x VERBOSE: Firmware Uri (/rest/v1/Systems/1/FirmwareInventory) discovered VERBOSE: 19 components discovered VERBOSE: PCI Device Uri (/rest/v1/Systems/1/PCIDevices) discovered VERBOSE: 13 devices discovered VERBOSE: Disconnecting from iLo VERBOSE: Cleaning up variables created by cmdlet (This is the END block) Name Version ———– ————- Smart HBA H240ar 4.52 HP StorageWorks 82Q 8Gb PCI-e Dual Port FC HBA 08.02.00 HP StorageWorks 82Q 8Gb PCI-e Dual Port FC HBA 08.02.00 HP Ethernet 1Gb 4-port 331i Adapter 17.4.41 HP Ethernet 10Gb 2-port 530T Adapter 7.14.79 Any help would be greatly appreciated! • #77511 Keymaster Points: 1,704 Helping HandTeam Member Rank: Community Hero Sorry, you can't post XML here. You'd need to use a Gist. So, to be clear, the concern is that the output all shows up after all the Verbose messages, correct? • #77515 Participant Points: 0 Rank: Member Correct, it shows up after the END block is ran, but only occurs on the first object being processed. I originally did not use the width headers in my ps1xml, but I decided to add them to test and that seemed to resolve the problem, but I am unsure why. As I am only displaying two properties with my custom table view I didn't want to have to manage the width. • #77518 Keymaster Points: 1,704 Helping HandTeam Member Rank: Community Hero So, this is about how the formatting system works. With no custom view, PowerShell can just spew each object as they come out the pipeline, and so you get output and Verbose messages intermingled. When you specify widths, PowerShell similarly doesn't have to do any thinking. You've done it all. However, when you have a custom format and you don't specify widths, PowerShell has to wait until it sees every object, so that it can figure out how wide to try and make each column. Think of it this way. Normally, the pipeline looks like this: Get-HPFirmware | Out-Default What you ran into looked like this: Get-HPFirmware | Format-Table -AutoSize | Out-Default Auto size calculation always makes Format-Table block the pipeline (much like Sort-Object, and for much the same reason). You won't see any output until the objects end up with Out-Default. When you remove the need for FT to block the pipeline, and can stream one object at a time to Out-Default, and you see output sooner. Another thing that might be confusing you is how PowerShell manages output. There are 6 pipelines, including Verbose and Success (which is where objects get written to by Write-Output). These stay entirely separate until they HAVE to be combined because you only have one screen upon which to spew everything. The output is still happening in your PROCESS block, but until after your END block, Format-Table still wasn't sure it had seen everything, and so it held off sending anything to Out-Default until it got the "all done" signal from upstream. That resulted in everything being combined for on-screen display in a slightly different sequence – but your code's execution order never changed. You just didn't SEE the output ON YOUR SCREEN in the same way. • #77523 Participant Points: 0 Rank: Member Thank you very much Don! That makes complete sense and thanks for taking the time to actually explain why it behaves the way it does. One last question, does the END block "always" run or is it skipped if a terminating error occurs in the PROCESS block? I am trying to determine the best place for cleaning up after my function has finished processing. • #77524 Keymaster Points: 1,704 Helping HandTeam Member Rank: Community Hero Skipped. "Terminate" means just that. • #77527 Participant Points: 0 Rank: Member Ok so is the best practice to perform any clean up in the finally block? • #77530 Keymaster Points: 1,704 Helping HandTeam Member Rank: Community Hero Well, if you're trapping the exception then you get to decide what to do. You could absorb it, and your function would continue normally. Or you could throw your own exception, before which you'd do any needed cleanup. So, in the Catch block, normally; if you put it in Finally, then it "cleans up" every time, whether there was an error or not. • #77533 Participant Points: 0 Rank: Member Got it! Thank you again! The topic ‘Custom TypeName Slow Output’ is closed to new replies.
__label__pos
0.956242
Commit 3c078b86 authored by Julien Michel's avatar Julien Michel ENH: Adding new filters option in application parent 9c4b72c1 ......@@ -22,7 +22,7 @@ #include "otbOGRDataSourceWrapper.h" #include "otbOGRDataSourceToLabelImageFilter.h" // #include "otbGenericRSTransform.h" #include "otbGenericRSTransform.h" namespace otb { ......@@ -51,7 +51,7 @@ public: typedef UInt8ImageType::IndexType IndexType; // Misc // typedef otb::GenericRSTransform<> RSTransformType; typedef otb::GenericRSTransform<> RSTransformType; typedef otb::PipelineMemoryPrintCalculator MemoryCalculatorType; // Exact rasterization mode ......@@ -109,10 +109,27 @@ private: SetParameterDescription( "spy", "OutputSpacing[1] (useless if support image is given)" ); MandatoryOff("spy"); AddParameter(ParameterType_String,"field","The attribute field to burn"); SetParameterDescription("field","Name of the attribute field to burn"); SetParameterString("field","DN"); AddParameter(ParameterType_Float,"background", "Background value"); SetParameterDescription("background","Default value for pixels which do not belong to any geometry"); SetDefaultParameterFloat("background",0.); AddParameter(ParameterType_Choice,"mode","Rasterization mode"); SetParameterDescription("mode","This parameter allows to choose between rasterization modes"); AddChoice("mode.binary","Binary mode"); SetParameterDescription("mode.attribute","In this mode, pixels within a geometry will hold the user-defined foreground value"); AddParameter(ParameterType_Float,"mode.binary.foreground","Foreground value"); SetParameterDescription("mode.binary.foreground","Value of the pixels inside a geometry"); SetDefaultParameterFloat("mode.binary.foreground",255); AddChoice("mode.attribute","Attribute burning mode"); SetParameterDescription("mode.attribute","In this mode, pixels within a geometry will hold the value of a user-defined field extracted from this geometry."); AddParameter(ParameterType_String,"mode.attribute.field","The attribute field to burn"); SetParameterDescription("mode.attribute.field","Name of the attribute field to burn"); SetParameterString("mode.attribute.field","DN"); AddRAMParameter(); SetDocExampleParameterValue("in","qb_RoadExtract_classification.shp"); ......@@ -134,6 +151,39 @@ private: m_OgrDS = otb::ogr::DataSource::New(GetParameterString("in"), otb::ogr::DataSource::Modes::read); // Retrieve extent double ulx, uly, lrx, lry; bool extentAvailable = true; try { m_OgrDS->GetGlobalExtent(ulx,uly,lrx,lry); } catch(itk::ExceptionObject & err) { extentAvailable = false; } if(!extentAvailable && (!(HasValue("spx") && HasValue("spy")) || (!(HasValue("orx") && HasValue("ory"))))) { otbAppLogWARNING(<<"Failed to retrieve the spatial extent of the dataset. The application will retry in force mode, which means it might have to walk the entire dataset to determine extent. This might be a long process for large datasets. Consider setting the orx, ory, spx and spy parameters."); try { m_OgrDS->GetGlobalExtent(ulx,uly,lrx,lry); extentAvailable = true; } catch(itk::ExceptionObject & err) { extentAvailable = false; otbAppLogFATAL(<<"Failed to retrieve the sapatial extent of the dataset in force mode. The spatial extent is mandatory when orx, ory, spx and spy parameters are not set, consider setting them."); } } // region information SizeType size; PointType origin; ......@@ -182,6 +232,19 @@ private: origin[0] = GetParameterFloat("orx"); origin[1] = GetParameterFloat("ory"); } else if(extentAvailable) { origin[0] = ulx; origin[1] = uly; // Transform to output EPSG RSTransformType::Pointer rsTransform = RSTransformType::New(); rsTransform->SetInputProjectionRef(m_OgrDS->GetProjectionRef()); rsTransform->SetOutputProjectionRef(outputProjectionRef); rsTransform->InstanciateTransform(); origin = rsTransform->TransformPoint(origin); } else { // Not handled for now, parameter is mandatory ......@@ -192,6 +255,23 @@ private: spacing[0] = GetParameterFloat("spx"); spacing[1] = GetParameterFloat("spy"); } else if(extentAvailable) { // Transform to output EPSG PointType lrout; lrout[0] = lrx; lrout[1] = lry; RSTransformType::Pointer rsTransform = RSTransformType::New(); rsTransform->SetInputProjectionRef(m_OgrDS->GetProjectionRef()); rsTransform->SetOutputProjectionRef(outputProjectionRef); rsTransform->InstanciateTransform(); lrout = rsTransform->TransformPoint(lrout); spacing[0] = (origin[0] - lrout[0])/size[0]; spacing[1] = (origin[1] - lrout[1])/size[1]; } else { // Not handled for now, parameter is mandatory ......@@ -208,6 +288,19 @@ private: m_OGRDataSourceRendering->SetOutputSize(size); m_OGRDataSourceRendering->SetOutputOrigin(origin); m_OGRDataSourceRendering->SetOutputSpacing(spacing); m_OGRDataSourceRendering->SetBackgroundValue(GetParameterFloat("background")); if(GetParameterString("mode") == "binary") { m_OGRDataSourceRendering->SetBurnAttributeMode(false); m_OGRDataSourceRendering->SetForegroundValue(GetParameterFloat("mode.binary.foreground")); } else if(GetParameterString("mode") == "attribute") { m_OGRDataSourceRendering->SetBurnAttributeMode(true); m_OGRDataSourceRendering->SetBurnAttribute(GetParameterString("mode.attribute.field")); } m_OGRDataSourceRendering->SetOutputProjectionRef(outputProjectionRef); SetParameterOutputImage<FloatImageType>("out", m_OGRDataSourceRendering->GetOutput()); ...... Markdown is supported 0% or . You are about to add 0 people to the discussion. Proceed with caution. Finish editing this message first! Please register or to comment
__label__pos
0.965524
little tina 發問時間: 社會與文化語言 · 1 0 年前 一題直線方程式的斜率問題 Q1:In the xy-plane , line a and line b have the same slope. If the y-intercept of line a is -1, what is the y-intercept of line b? 條件1: The x-intercept of line a is -1 條件2:Line b passes through the point (10,20) 答案是:2個條件加起來才能成立。 我的問題是:題目說line a與line b有相同斜率,但我不知斜率呀? 請教我斜率的算法謝謝。 2 個解答 評分 • Tony Lv 7 1 0 年前 最佳解答 親愛的 flona , 這是數學問題, 放在英文這一版恐怕放錯了. 題目說 y-intercept of line a is -1→代表直線a通過(0,-1)這點. 條件1說 x-intercept of line a is -1→代表直線a通過(-1,0)這點. 所以直線a的斜率=-(-1)/(-1)=-1 計算斜率的方式是把y軸截距除以x軸截距再取負號. 因為直線b通過(10,20)這點, 且斜率和直線a都是-1, 所以直線b的方程式為 y=-x+30, 代表直線b在y軸的截距是30←這就是答案. 參考資料: C'est moi. • 登入以對解答發表意見 • 1 0 年前 題目翻成中文是: 在一個x-y座標系中, 直線a和直線b有著相同的斜率, 如果y與直線a相交於(0,-1) 則y與直線b相交於哪裡? 條件一:x軸與直線a相交於(-1,0) 條件二:直線b通過點(10,20) 答案:應該是(30,0)吧~~ 斜率都是-1(x座標每增1,y座標減1) 參考資料: 我和我國中三年學的東東 • 登入以對解答發表意見 還有問題?馬上發問,尋求解答。
__label__pos
0.971901
Glide Vs Picasso With Code Examples Glide Vs Picasso With Code Examples Hello everyone, In this post, we are going to have a look at how the Glide Vs Picasso problem can be solved using the computer language. Glide's loading times are faster and it uses a small amount of memory for cache, but the library size is quite large. It, too, is easy to implement. Glide might be a better alternative to Picasso when memory footprint is less of a concern or more and larger images need to be processed Glide Vs Picasso. There are a number of different approaches that can be taken to solve the same problem. The following paragraphs will examine the various alternative approaches. Memory consumption: Glide occupies less memory as compared to Picasso. Reason being is Picasso loads full images doesn't matter what ImageView dimensions are mentioned, where as Glide loads image from server for the ImageView size (height/width) Library size: Glide is heavier in terms of library size as compared to Picasso. Number of methods: Glide is having more number of methods as compared to Picasso. GIF Support: Picasso doesn't support GIF image loading. We have explained how to fix the Glide Vs Picasso problem by using a wide variety of examples taken from the real world. Which is better Picasso or glide? Glide's loading times are faster and it uses a small amount of memory for cache, but the library size is quite large. It, too, is easy to implement. Glide might be a better alternative to Picasso when memory footprint is less of a concern or more and larger images need to be processed.13-Jun-2018 What is Glide and Picasso? Glide is memory efficient as compared to picasso becuase glide loading same image according to size of view but picsso loads full size image. And if you want to prevent from OutOfMemoryError excecption we should go for glide.26-Jul-2018 What is Glide used for in Android? Glide is an Image Loader Library for Android developed by bumptech and is a library that is recommended by Google. It has been used in many Google open source projects including Google I/O 2014 official application. It provides animated GIF support and handles image loading/caching.26-Dec-2020 How do I use Picasso on Android? How to Use Picasso Android Library? • Step 1: Create an empty activity project. Create an empty activity Android Studio project. • Step 2: Adding the required dependency to app-level gradle file. • Step 3: Working with the Manifest File. • Step 4: Working with the activity_main.xml file. Does Picasso support GIF? Picasso does not support GIF animation on a simple image view.05-Jan-2021 How does glide work? Glide knows the dimension of the imageView as it takes the imageView as a parameter. Glide down-samples the image without loading the whole image into the memory. This way, the bitmap takes less memory, and the out of memory error is solved.15-Aug-2017 How does glide cache work? How does the Glide caching mechanism work? By default, Glide uses memory and disk caching to avoid unnecessary network calls, it checks into multiple layers of caches before initiating a new request call for an image.21-Jun-2021 How do you put pictures on Glide? Glide allows the display of images from links (URL) without downloading them. • val resizeImage = "https://encrypted-tbn0.gstatic.com/images? • binding.buttonDrawable.setOnClickListener { Glide.with(this) .load(R.drawable.image) .into(binding.imageView) } How do you blur pictures on Glide Android? In this blog, we will blur the image with the help of class BitmapTransformation provided by glide. • Step1: Added glide dependencies. • Step2: Extend BitmapTransformation class and Override method transform. • Step3: Transform image. What is Glide App? GlideApps is an online platform that instantly transforms data from spreadsheets (Google Sheet, Excel) into mobile apps that fit on iOS, Android, phones and tablets.10-Feb-2022
__label__pos
0.995165
Take the 2-minute tour × Server Fault is a question and answer site for system and network administrators. It's 100% free, no registration required. For building apps from source like git or rails I've seen recommendations to install in both /opt or /usr/local. From what I've read so for, the designated use for both is about the same and it amounts to merely a style issue. Is there any practical difference? Best practices? share|improve this question 3 Answers 3 I use /usr/local for stuff I put into the system, and I let third-party installers take /opt. share|improve this answer The FHS says: A package to be installed in /opt must locate its static files in a separate /opt/ or /opt/ directory tree, where is a name that describes the software package and is the provider's LANANA registered name. while /usr/local hold the usual /bin/, /lib, /etc, ... hierarchy share|improve this answer Personally, I like to install everything I build from source in /opt, and edit my $PATH accordingly. It instils a sense of (semi-) cleanliness, and it's easier to traverse the folder structure, perform updates etc. It just comes down to personal preference; one is not necessarily better than the other (just like you said, a style issue). share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.55647
[SCM] Samba Shared Repository - branch master updated Amitay Isaacs amitay at samba.org Tue Sep 23 02:32:04 MDT 2014 The branch, master has been updated via 93423cb ctdb-logging: Add forward declaration of debug_level via 3715432 ctdb-tests: Clean up some tests where IP movement is checked via 81a8758 ctdb-tests: Remove dependency on log ringbuffer from missing IP test via e3089d7 ctdb-tests: Make all_ips_on_node() do what it should via 81213af ctdb-tests: Factor out new function get_test_ip_mask_and_iface() via 4b8cfe4 ctdb-tests: Simplify and rename wait_until_ips_are_on_nodeglob() from 8a6445d WHATSNEW: some fixes http://gitweb.samba.org/?p=samba.git;a=shortlog;h=master - Log ----------------------------------------------------------------- commit 93423cb1f594244e916e4ac3cb1d32220c48c172 Author: Martin Schwenke <martin at meltin.net> Date: Tue Sep 23 06:07:47 2014 +1000 ctdb-logging: Add forward declaration of debug_level Warnings are currently produced when compiling Samba and ctdb_private.h is included. A forward enum declaration avoids the warning. This is a temporary measure. The log ringbuffer should be removed soon. Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> Autobuild-User(master): Amitay Isaacs <amitay at samba.org> Autobuild-Date(master): Tue Sep 23 10:31:50 CEST 2014 on sn-devel-104 commit 371543207e8955d6665fcec03b261d80cde2401b Author: Martin Schwenke <martin at meltin.net> Date: Fri Sep 12 13:20:43 2014 +1000 ctdb-tests: Clean up some tests where IP movement is checked Some of this implements logic that exists in functions. Some of it is overly complicated and potentially failure-prone. Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> commit 81a8758b9b24bbb7200be4682016f989b904bda8 Author: Martin Schwenke <martin at meltin.net> Date: Fri Sep 12 15:21:49 2014 +1000 ctdb-tests: Remove dependency on log ringbuffer from missing IP test The log ringbuffer will probably be removed. The test can be implemented just as reliably by checking IP assignments using "ctdb ip". Update wait_until_ips_are_on_node() to print a more useful log message. Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> commit e3089d7da17e2f25d426005eca0fe55397f2689f Author: Martin Schwenke <martin at meltin.net> Date: Wed Sep 17 20:34:39 2014 +1000 ctdb-tests: Make all_ips_on_node() do what it should The "-n all" is wrong. Simplify the implementation and tighten up some uses of this function. _select_test_node_and_ips() can't use this function anymore. Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> commit 81213af32ae17eaae33f9d27c9b7ce63c84ce5df Author: Martin Schwenke <martin at meltin.net> Date: Fri Sep 12 13:40:01 2014 +1000 ctdb-tests: Factor out new function get_test_ip_mask_and_iface() Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> commit 4b8cfe4847477e3cfdb3f4dd7070226a6604bc38 Author: Martin Schwenke <martin at meltin.net> Date: Fri Sep 12 13:34:51 2014 +1000 ctdb-tests: Simplify and rename wait_until_ips_are_on_nodeglob() The glob functionality is unsed so simplify the code by removing it. Rename this function to wait_until_ips_are_on_node(). Update all calls. Signed-off-by: Martin Schwenke <martin at meltin.net> Reviewed-by: Amitay Isaacs <amitay at gmail.com> ----------------------------------------------------------------------- Summary of changes: ctdb/include/ctdb_private.h | 1 + ctdb/tests/complex/11_ctdb_delip_removes_ip.sh | 81 ++--------------- ctdb/tests/scripts/integration.bash | 78 +++++++++++++---- ctdb/tests/simple/16_ctdb_config_add_ip.sh | 111 +++--------------------- ctdb/tests/simple/17_ctdb_config_delete_ip.sh | 59 ++----------- ctdb/tests/simple/23_ctdb_moveip.sh | 94 ++++++-------------- ctdb/tests/simple/31_ctdb_disable.sh | 35 ++------ ctdb/tests/simple/32_ctdb_enable.sh | 41 ++------- ctdb/tests/simple/41_ctdb_stop.sh | 33 ++------ ctdb/tests/simple/42_ctdb_continue.sh | 37 ++------- ctdb/tests/simple/60_recoverd_missing_ip.sh | 58 ++++--------- 11 files changed, 158 insertions(+), 470 deletions(-) Changeset truncated at 500 lines: diff --git a/ctdb/include/ctdb_private.h b/ctdb/include/ctdb_private.h index 43daf36..02602e1 100644 --- a/ctdb/include/ctdb_private.h +++ b/ctdb/include/ctdb_private.h @@ -1461,6 +1461,7 @@ struct ctdb_get_log_addr { extern int log_ringbuf_size; +enum debug_level; TDB_DATA ctdb_log_ringbuffer_collect_log(TALLOC_CTX *mem_ctx, enum debug_level max_level); void ctdb_collect_log(struct ctdb_context *ctdb, struct ctdb_get_log_addr *log_addr); diff --git a/ctdb/tests/complex/11_ctdb_delip_removes_ip.sh b/ctdb/tests/complex/11_ctdb_delip_removes_ip.sh index 043c345..d67cb07 100755 --- a/ctdb/tests/complex/11_ctdb_delip_removes_ip.sh +++ b/ctdb/tests/complex/11_ctdb_delip_removes_ip.sh @@ -5,36 +5,7 @@ test_info() cat <<EOF Verify that a node's public IP address can be deleted using 'ctdb deleteip'. -Check that the address is actually deleted from the interface. - -Prerequisites: - -* An active CTDB cluster with at least 2 active nodes. - -* Test must be run on a real or virtual cluster rather than against - local daemons. There is nothing intrinsic to this test that forces - this - it is because tests run against local daemons don't use the - regular eventscripts. Local daemons put public addresses on - loopback, so we can't reliably test when IPs have moved between - nodes. - -Steps: - -1. Verify that the status on all of the ctdb nodes is 'OK'. -2. Use 'ctdb ip' on one of the nodes to list the IP addresses being - served. -3. Select an IP address being served by the node and check that it - actually appears on the interface it is supposed to be on. -4. Delete the IP address using 'ctdb delip'. -5. Verify that the deleted IP address is no longer listed using the - all_ips_on_node helper function. -6. Verify that the deleted IP address no longer appears on the - interface it was on. - -Expected results: - -* 'ctdb delip' removes an IP address from the list of public IP - addresses being served by a node and from the network interface. +This is an extended version of simple/17_ctdb_config_delete_ip.sh EOF } @@ -51,55 +22,23 @@ cluster_is_healthy # Reset configuration ctdb_restart_when_done -echo "Getting list of public IPs..." -all_ips_on_node -v 0 - -# Select an IP/node to remove. -num_ips=$(echo "$out" | wc -l) -num_to_remove=$(($RANDOM % $num_ips)) - -# Find the details in the list. -i=0 -while [ $i -le $num_to_remove ] ; do - read ip_to_remove test_node - i=$(($i + 1)) -done <<<"$out" - -echo "Determining interface for ${ip_to_remove} on ${test_node}." -try_command_on_node $test_node "ctdb ip -Y -v" -iface=$(echo "$out" | awk -F: -v ip=${ip_to_remove} -v pnn=${test_node} '$2 == ip && $3 == pnn { print $4 }') -echo "$iface" -[ -n "$iface" ] - -echo "Checking that node ${test_node} hosts ${ip_to_remove} on interface ${iface}..." -try_command_on_node $test_node "ip addr show dev $iface | grep -E 'inet[[:space:]]*${ip_to_remove}/'" - -echo "Attempting to remove ${ip_to_remove} from node ${test_node}." -try_command_on_node $test_node $CTDB delip $ip_to_remove - -echo "Sleeping..." -sleep_for 1 +select_test_node_and_ips +get_test_ip_mask_and_iface -test_node_ips="" -while read ip pnn ; do - [ "$pnn" = "$test_node" ] && \ - test_node_ips="${test_node_ips}${test_node_ips:+ }${ip}" -done <<<"$out" # bashism to avoid problem setting variable in pipeline. +echo "Checking that node ${test_node} hosts ${test_ip} on interface ${iface}..." +try_command_on_node $test_node "ip addr show dev $iface | grep -E 'inet[[:space:]]*${test_ip}/'" -if [ "${test_node_ips/${ip_to_remove}}" = "$test_node_ips" ] ; then - echo "GOOD: That worked!" -else - echo "BAD: The remove IP address is still there!" - testfailures=1 -fi +echo "Attempting to remove ${test_ip} from node ${test_node}." +try_command_on_node $test_node $CTDB delip $test_ip +wait_until_ips_are_on_node '!' $test_node $test_ip timeout=60 increment=5 count=0 -echo "Waiting for ${ip_to_remove} to disappear from ${iface}..." +echo "Waiting for ${test_ip} to disappear from ${iface}..." while : ; do try_command_on_node -v $test_node "ip addr show dev $iface" - if echo "$out" | grep -E 'inet[[:space:]]*${ip_to_remove}/'; then + if echo "$out" | grep -E 'inet[[:space:]]*${test_ip}/'; then echo "Still there..." if [ $(($count * $increment)) -ge $timeout ] ; then echo "BAD: Timed out waiting..." diff --git a/ctdb/tests/scripts/integration.bash b/ctdb/tests/scripts/integration.bash index dec60a2..1835949 100644 --- a/ctdb/tests/scripts/integration.bash +++ b/ctdb/tests/scripts/integration.bash @@ -152,7 +152,8 @@ sanity_check_ips () prev="$ipp" done <<<"$ips" - echo "BAD: a node was -1 or IPs are only assigned to one node" + echo "BAD: a node was -1 or IPs are only assigned to one node:" + echo "$ips" echo "Are you running an old version of CTDB?" return 1 } @@ -160,13 +161,15 @@ sanity_check_ips () # This returns a list of "ip node" lines in $out all_ips_on_node() { - local node=$@ - try_command_on_node $node "$CTDB ip -Y -n all | cut -d ':' -f1-3 | sed -e '1d' -e 's@^:@@' -e 's@:@ @g'" + local node="$1" + try_command_on_node $node \ + "$CTDB ip -Y | awk -F: 'NR > 1 { print \$2, \$3 }'" } _select_test_node_and_ips () { - all_ips_on_node 0 + try_command_on_node any \ + "$CTDB ip -Y -n all | awk -F: 'NR > 1 { print \$2, \$3 }'" test_node="" # this matches no PNN test_node_ips="" @@ -202,6 +205,25 @@ select_test_node_and_ips () return 0 } +# Sets: mask, iface +get_test_ip_mask_and_iface () +{ + # Find the interface + try_command_on_node $test_node "$CTDB ip -v -Y | awk -F: -v ip=$test_ip '\$2 == ip { print \$4 }'" + iface="$out" + + if [ -z "$TEST_LOCAL_DAEMONS" ] ; then + # Find the netmask + try_command_on_node $test_node ip addr show to $test_ip + mask="${out##*/}" + mask="${mask%% *}" + else + mask="24" + fi + + echo "$test_ip/$mask is on $iface" +} + ####################################### # Wait until either timeout expires or command succeeds. The command @@ -380,29 +402,31 @@ wait_until_node_has_status () } # Useful for superficially testing IP failover. -# IPs must be on nodes matching nodeglob. -# If the first argument is '!' then the IPs must not be on nodes -# matching nodeglob. -ips_are_on_nodeglob () +# IPs must be on the given node. +# If the first argument is '!' then the IPs must not be on the given node. +ips_are_on_node () { local negating=false if [ "$1" = "!" ] ; then negating=true ; shift fi - local nodeglob="$1" ; shift + local node="$1" ; shift local ips="$*" local out - all_ips_on_node 1 + all_ips_on_node $node + local check for check in $ips ; do + local ip pnn while read ip pnn ; do if [ "$check" = "$ip" ] ; then - case "$pnn" in - ($nodeglob) if $negating ; then return 1 ; fi ;; - (*) if ! $negating ; then return 1 ; fi ;; - esac + if [ "$pnn" = "$node" ] ; then + if $negating ; then return 1 ; fi + else + if ! $negating ; then return 1 ; fi + fi ips="${ips/${ip}}" # Remove from list break fi @@ -418,11 +442,27 @@ ips_are_on_nodeglob () [ -z "$ips" ] } -wait_until_ips_are_on_nodeglob () +wait_until_ips_are_on_node () { - echo "Waiting for IPs to fail over..." + # Go to some trouble to print a use description of what is happening + local not="" + if [ "$1" == "!" ] ; then + not="no longer " + fi + local node="" + local ips="" + local i + for i ; do + [ "$i" != "!" ] || continue + if [ -z "$node" ] ; then + node="$i" + continue + fi + ips="${ips}${ips:+, }${i}" + done + echo "Waiting for ${ips} to ${not}be assigned to node ${node}" - wait_until 60 ips_are_on_nodeglob "$@" + wait_until 60 ips_are_on_node "$@" } node_has_some_ips () @@ -431,7 +471,7 @@ node_has_some_ips () local out - all_ips_on_node 1 + all_ips_on_node $node while read ip pnn ; do if [ "$node" = "$pnn" ] ; then @@ -444,7 +484,7 @@ node_has_some_ips () wait_until_node_has_some_ips () { - echo "Waiting for node to have some IPs..." + echo "Waiting for some IPs to be assigned to node ${test_node}" wait_until 60 node_has_some_ips "$@" } diff --git a/ctdb/tests/simple/16_ctdb_config_add_ip.sh b/ctdb/tests/simple/16_ctdb_config_add_ip.sh index dc28130..b5d76ea 100755 --- a/ctdb/tests/simple/16_ctdb_config_add_ip.sh +++ b/ctdb/tests/simple/16_ctdb_config_add_ip.sh @@ -5,32 +5,8 @@ test_info() cat <<EOF Verify that an IP address can be added to a node using 'ctdb addip'. -This test goes to some trouble to figure out which IP address to add -but assumes a 24-bit subnet mask. It does not handle IPv6. It does -not do any network level checks that the new IP address is reachable -but simply trusts 'ctdb ip' that the address has been added. There is -also an extra prerequisite that the node being added to already has -public addresses - this is difficult to avoid if the extra address is -to be sensibly chosen. - -Prerequisites: - -* An active CTDB cluster with at least 2 active nodes. - -Steps: - -1. Verify that the status on all of the ctdb nodes is 'OK'. -2. Use 'ctdb ip' on one of the nodes to list the IP addresses being - served. -3. Add an additional public address to be served by the node, using - 'ctdb addip'. -4. Verify that this IP address has been added to the list of IP - addresses being served by the node, using the 'ctdb ip' command. - -Expected results: - -* 'ctdb ip' adds an IP address to the list of public IP addresses - being served by a node. +This test does not do any network level checks to make sure IP +addresses are actually on interfaces. It just consults "ctdb ip". EOF } @@ -45,79 +21,16 @@ cluster_is_healthy # Reset configuration ctdb_restart_when_done -echo "Getting list of public IPs..." -all_ips_on_node 0 - -# When selecting test_node we just want a node that has public IPs. -# This will work and is economically semi-randomly. :-) -read x test_node <<<"$out" - -test_node_ips="" -all_ips="" -while read ip pnn ; do - all_ips="${all_ips}${all_ips:+ }${ip}" - [ "$pnn" = "$test_node" ] && \ - test_node_ips="${test_node_ips}${test_node_ips:+ }${ip}" -done <<<"$out" - -echo "Selected node ${test_node} with IPs: $test_node_ips" - -# Try to find a free IP adddress. This is inefficient but should -# succeed quickly. -if [ -z "$TEST_LOCAL_DAEMONS" ] ; then - try_command_on_node $test_node "ip addr show" - all_test_node_ips=$(echo "$out" | sed -rn -e 's@^[[:space:]]+inet[[:space:]]+([[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+/[[:digit:]]+).*[[:space:]]([^[:space:]]+)+$@\1:\2 at p') -else - all_test_node_ips="" -fi -add_ip="" - -# Use an IP already on one of the nodes, remove the last octet and -# loop through the possible IP addreses. -for i in $test_node_ips ; do - prefix="${i%.*}" - for j in $(seq 101 199) ; do - try="${prefix}.${j}" - # Try to make sure it isn't used anywhere! - - # First, make sure it isn't an existing public address on the - # cluster. - for k in $all_ips ; do - [ "$try" = "$k" ] && continue 2 - done - - # Also make sure it isn't some other address in use on the - # node. - for k in $all_test_node_ips ; do - [ "$try" = "${k%/*}" ] && continue 2 - done - - # Get the interface details for $i, which our address is a - # close relative of. This should never fail but it can't hurt - # to be careful... - try_command_on_node $test_node "ctdb ip -v -Y" - while IFS=":" read x ip pnn iface x ; do - if [ "$i" = "$ip" ]; then - add_ip="$try/32:$iface" - break 3 - fi - done <<<"$out" - done -done +select_test_node_and_ips +get_test_ip_mask_and_iface -if [ -z "$add_ip" ] ; then - echo "BAD: Unable to find IP address to add." - exit 1 -fi +echo "Deleting IP $test_ip from all nodes" +try_command_on_node $test_node $CTDB delip -n all $test_ip +wait_until_ips_are_on_node '!' $test_node $test_ip -echo "Adding IP: ${add_ip/:/ on interface }" -try_command_on_node $test_node $CTDB addip ${add_ip/:/ } +# Debugging... +try_command_on_node -v all $CTDB ip -echo "Waiting for IP to be added..." -if wait_until 60 ips_are_on_nodeglob $test_node ${add_ip%/*} ; then - echo "That worked!" -else - echo "BAD: IP didn't get added." - try_command_on_node $test_node $CTDB ip -n all - exit 1 -fi +echo "Adding IP ${test_ip}/${mask} on ${iface}, node ${test_node}" +try_command_on_node $test_node $CTDB addip ${test_ip}/${mask} $iface +wait_until_ips_are_on_node $test_node $test_ip diff --git a/ctdb/tests/simple/17_ctdb_config_delete_ip.sh b/ctdb/tests/simple/17_ctdb_config_delete_ip.sh index 1ad9f33..80d2699 100755 --- a/ctdb/tests/simple/17_ctdb_config_delete_ip.sh +++ b/ctdb/tests/simple/17_ctdb_config_delete_ip.sh @@ -5,28 +5,8 @@ test_info() cat <<EOF Verify that a node's public IP address can be deleted using 'ctdb deleteip'. -This test does not do any network level checks that the IP address is -no longer reachable but simply trusts 'ctdb ip' that the address has -been deleted. - -Prerequisites: - -* An active CTDB cluster with at least 2 active nodes. - -Steps: - -1. Verify that the status on all of the ctdb nodes is 'OK'. -2. Use 'ctdb ip' on one of the nodes to list the IP addresses being - served. -3. Delete one public IP address being be served by the node, using - 'ctdb delip'. -4. Verify that the delete IP address is no longer listed using the - all_ips_on_node helper function. - -Expected results: - -* 'ctdb delip' removes an IP address from the list of public IP - addresses being served by a node. +This test does not do any network level checks to make sure IP +addresses are actually on interfaces. It just consults "ctdb ip". EOF } @@ -41,35 +21,8 @@ cluster_is_healthy # Reset configuration ctdb_restart_when_done -echo "Getting list of public IPs..." -all_ips_on_node -v 0 - -# Select an IP/node to remove. -num_ips=$(echo "$out" | wc -l) -num_to_remove=$(($RANDOM % $num_ips)) - -# Find the details in the list. -i=0 -while [ $i -le $num_to_remove ] ; do - read ip_to_remove test_node - i=$(($i + 1)) -done <<<"$out" - -echo "Attempting to remove ${ip_to_remove} from node ${test_node}." -try_command_on_node $test_node $CTDB delip $ip_to_remove - -echo "Sleeping..." -sleep_for 1 - -test_node_ips="" -while read ip pnn ; do - [ "$pnn" = "$test_node" ] && \ - test_node_ips="${test_node_ips}${test_node_ips:+ }${ip}" -done <<<"$out" # bashism to avoid problem setting variable in pipeline. +select_test_node_and_ips -if [ "${test_node_ips/${ip_to_remove}}" = "$test_node_ips" ] ; then - echo "GOOD: That worked!" -else - echo "BAD: The remove IP address is still there!" - testfailures=1 -fi +echo "Deleting IP ${test_ip} from node ${test_node}" +try_command_on_node $test_node $CTDB delip $test_ip +wait_until_ips_are_on_node '!' $test_node $test_ip diff --git a/ctdb/tests/simple/23_ctdb_moveip.sh b/ctdb/tests/simple/23_ctdb_moveip.sh index 7c09e58..f6e9027 100755 --- a/ctdb/tests/simple/23_ctdb_moveip.sh +++ b/ctdb/tests/simple/23_ctdb_moveip.sh @@ -5,27 +5,10 @@ test_info() cat <<EOF Verify that 'ctdb moveip' allows movement of public IPs between cluster nodes. -To work, this test unsets DeterministicIPs and sets NoIPFailback. - -This test does not do any network level checks that the IP address is -no longer reachable but simply trusts 'ctdb ip' that the address has -been deleted. - -Prerequisites: - -* An active CTDB cluster with at least 2 active nodes. - -Steps: - -1. Verify that the status on all of the ctdb nodes is 'OK'. -2. Use 'ctdb ip' on one of the nodes to list the IP addresses being - served. -3. Use 'ctdb moveip' to move an address from one node to another. -4. Verify that the IP is no longer being hosted by the first node and is now being hosted by the second node. +This test does not do any network level checks to make sure IP +addresses are actually on interfaces. It just consults "ctdb ip". -- Samba Shared Repository More information about the samba-cvs mailing list
__label__pos
0.750881
C++ Learning Notes and Technology Articles Arrays in C++ Multiple Choice Questions and Answers 1 PDF Book Download Arrays in c++ multiple choice questions (MCQs), arrays in c++ test prep to learn c++ quiz 1 for CS degree free online courses. Learn arrays in c++ multiple choice questions (MCQs), arrays in c++ quiz questions and answers. Free e-learning tutorial on arrays in c++, multi dimensional array, introduction to arrays, binary search algorithm test prep for online C++ for beginners courses distance learning. Practice arrays in c++ career test with multiple choice question: each element of an array is searched against searching key, is specialty of, for computer science majors with options linear search, bubble sort, binary search for online IT degree programs. Professional skills assessment test with learning online arrays in c++ quiz questions with c++ MCQs for IT certifications competitive exam prep. MCQ on Arrays in C++ Test 1Quiz Book Download MCQ: Referring an element outside array bounds is a 1. Syntax error 2. Logical error 3. Execution time error 4. Both B and C D MCQ: Each element of an array is searched against searching key, is specialty of 1. Bubble sort 2. Linear search 3. Binary search 4. All of them B MCQ: A one-dimensional array of one-dimensional arrays is called 1. Multi-dimensional array 2. Multi-casting array 3. Two-dimensional array 4. Three-dimensional array C MCQ: Sequence of objects that have same type, is called 1. Functions 2. Operators 3. Arrays 4. Stacks C MCQ: Binary search algorithm uses 1. Linear way to search values 2. Divide and conquer method 3. Bubble sorting technique 4. None of them B
__label__pos
0.99933
Take the tour × Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems.. It's 100% free, no registration required. I have two files: files_to_search.out terms_to_search.out I'd like to create a command that identifies terms in terms_to_search.out that are not used in any of the files in files_to_search.out Is there an easy way to do this? share|improve this question add comment 2 Answers up vote 1 down vote accepted This is a bit tricky if you want to account for terms that can overlap, e.g. a single line containing banana is enough to count as a use of both ban and nan. Here's a minimally-tested, quick-and-dirty perl script. It reads the strings to search (the needles) and the file names, then builds a regular expression that matches any of the needles. When it finds a match, it removes the matched string from the set of needles and rebuilds the regex. The needles that are left over at the end are the ones you're after. #! /usr/bin/env perl open FILENAMES, "<", "files_to_search.out" or die $!; @filenames = <FILENAMES>; close FILENAMES; chomp foreach @filenames; open NEEDLES, "<", "terms_to_search.out" or die $!; @needles = <NEEDLES>; close NEEDLES; chomp foreach @needles; %needles = map {$_, 1} @needles; sub build_re { $re = qr/(@{[join("|", map quotemeta, keys %needles)]})/; } @ARGV = @filenames; while (<ARGV>) { while (/$re/) { delete $needles{$1}; exit if !%needles; build_re(); } } print map "$_\n", sort keys %needles; share|improve this answer add comment quick ugly attempt at a one liner (with GNU grep for the -o option): grep -of terms_to_search_out $(cat files_to_search.out | tr '\n' ' ') | sort | uniq | grep -vf terms_to_search_out share|improve this answer 1   Note that all of \n, \t and space are used by default to split command substitutions, so there's no point changing '\n' to ' '. Restricting IFS to newline and do the reverse conversion would even make more sense since newline characters are less likely to occur in file names than space characters. Also note that filename generation is also performed (except in zsh) upon command substitution so blank characters in filenames are not the only problematic ones (see set -f to disable filename generation). –  Stephane Chazelas Nov 7 '12 at 22:27   learn something new everyday - wasn't aware about the \n defaults. So maybe pass through sed to add quotes? or go with a for loop? –  Drake Clarris Nov 8 '12 at 15:00 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.65875
Erlang logo User's Guide Reference Manual Release Notes PDF Top STDLIB Reference Manual Version 3.2 Expand All Contract All Table of Contents calendar MODULE calendar MODULE SUMMARY Local and universal time, day of the week, date and time conversions. DESCRIPTION This module provides computation of local and universal time, day of the week, and many time conversion functions. Time is local when it is adjusted in accordance with the current time zone and daylight saving. Time is universal when it reflects the time at longitude zero, without any adjustment for daylight saving. Universal Coordinated Time (UTC) time is also called Greenwich Mean Time (GMT). The time functions local_time/0 and universal_time/0 in this module both return date and time. The is because separate functions for date and time can result in a date/time combination that is displaced by 24 hours. This occurs if one of the functions is called before midnight, and the other after midnight. This problem also applies to the Erlang BIFs date/0 and time/0, and their use is strongly discouraged if a reliable date/time stamp is required. All dates conform to the Gregorian calendar. This calendar was introduced by Pope Gregory XIII in 1582 and was used in all Catholic countries from this year. Protestant parts of Germany and the Netherlands adopted it in 1698, England followed in 1752, and Russia in 1918 (the October revolution of 1917 took place in November according to the Gregorian calendar). The Gregorian calendar in this module is extended back to year 0. For a given date, the gregorian days is the number of days up to and including the date specified. Similarly, the gregorian seconds for a specified date and time is the number of seconds up to and including the specified date and time. For computing differences between epochs in time, use the functions counting gregorian days or seconds. If epochs are specified as local time, they must be converted to universal time to get the correct value of the elapsed time between epochs. Use of function time_difference/2 is discouraged. Different definitions exist for the week of the year. This module contains a week of the year implementation conforming to the ISO 8601 standard. As the week number for a specified date can fall on the previous, the current, or on the next year, it is important to specify both the year and the week number. Functions iso_week_number/0 and iso_week_number/1 return a tuple of the year and the week number. DATA TYPES datetime() = {date(), time()} datetime1970() = {{year1970(), month(), day()}, time()} date() = {year(), month(), day()} year() = integer() >= 0 Year cannot be abbreviated. For example, 93 denotes year 93, not 1993. The valid range depends on the underlying operating system. The date tuple must denote a valid date. year1970() = 1970..10000 month() = 1..12 day() = 1..31 time() = {hour(), minute(), second()} hour() = 0..23 minute() = 0..59 second() = 0..59 daynum() = 1..7 ldom() = 28 | 29 | 30 | 31 yearweeknum() = {year(), weeknum()} weeknum() = 1..53 EXPORTS date_to_gregorian_days(Date) -> Days date_to_gregorian_days(Year, Month, Day) -> Days Types: Date = date() Year = year() Month = month() Day = day() Computes the number of gregorian days starting with year 0 and ending at the specified date. datetime_to_gregorian_seconds(DateTime) -> Seconds Types: DateTime = datetime() Seconds = integer() >= 0 Computes the number of gregorian seconds starting with year 0 and ending at the specified date and time. day_of_the_week(Date) -> daynum() day_of_the_week(Year, Month, Day) -> daynum() Types: Date = date() Year = year() Month = month() Day = day() Computes the day of the week from the specified Year, Month, and Day. Returns the day of the week as 1: Monday, 2: Tuesday, and so on. gregorian_days_to_date(Days) -> date() Types: Days = integer() >= 0 Computes the date from the specified number of gregorian days. gregorian_seconds_to_datetime(Seconds) -> datetime() Types: Seconds = integer() >= 0 Computes the date and time from the specified number of gregorian seconds. is_leap_year(Year) -> boolean() Types: Year = year() Checks if the specified year is a leap year. iso_week_number() -> yearweeknum() Returns tuple {Year, WeekNum} representing the ISO week number for the actual date. To determine the actual date, use function local_time/0. iso_week_number(Date) -> yearweeknum() Types: Date = date() Returns tuple {Year, WeekNum} representing the ISO week number for the specified date. last_day_of_the_month(Year, Month) -> LastDay Types: Year = year() Month = month() LastDay = ldom() Computes the number of days in a month. local_time() -> datetime() Returns the local time reported by the underlying operating system. local_time_to_universal_time(DateTime1) -> DateTime2 Types: DateTime1 = DateTime2 = datetime1970() Converts from local time to Universal Coordinated Time (UTC). DateTime1 must refer to a local date after Jan 1, 1970. Warning This function is deprecated. Use local_time_to_universal_time_dst/1 instead, as it gives a more correct and complete result. Especially for the period that does not exist, as it is skipped during the switch to daylight saving time, this function still returns a result. local_time_to_universal_time_dst(DateTime1) -> [DateTime] Types: DateTime1 = DateTime = datetime1970() Converts from local time to Universal Coordinated Time (UTC). DateTime1 must refer to a local date after Jan 1, 1970. The return value is a list of 0, 1, or 2 possible UTC times: [] For a local {Date1, Time1} during the period that is skipped when switching to daylight saving time, there is no corresponding UTC, as the local time is illegal (it has never occured). [DstDateTimeUTC, DateTimeUTC] For a local {Date1, Time1} during the period that is repeated when switching from daylight saving time, two corresponding UTCs exist; one for the first instance of the period when daylight saving time is still active, and one for the second instance. [DateTimeUTC] For all other local times only one corresponding UTC exists. now_to_datetime(Now) -> datetime1970() Types: Returns Universal Coordinated Time (UTC) converted from the return value from erlang:timestamp/0. now_to_local_time(Now) -> datetime1970() Types: Returns local date and time converted from the return value from erlang:timestamp/0. now_to_universal_time(Now) -> datetime1970() Types: Returns Universal Coordinated Time (UTC) converted from the return value from erlang:timestamp/0. seconds_to_daystime(Seconds) -> {Days, Time} Types: Seconds = Days = integer() Time = time() Converts a specified number of seconds into days, hours, minutes, and seconds. Time is always non-negative, but Days is negative if argument Seconds is. seconds_to_time(Seconds) -> time() Types: Seconds = secs_per_day() secs_per_day() = 0..86400 Computes the time from the specified number of seconds. Seconds must be less than the number of seconds per day (86400). time_difference(T1, T2) -> {Days, Time} Types: T1 = T2 = datetime() Days = integer() Time = time() Returns the difference between two {Date, Time} tuples. T2 is to refer to an epoch later than T1. Warning This function is obsolete. Use the conversion functions for gregorian days and seconds instead. time_to_seconds(Time) -> secs_per_day() Types: Time = time() secs_per_day() = 0..86400 Returns the number of seconds since midnight up to the specified time. universal_time() -> datetime() Returns the Universal Coordinated Time (UTC) reported by the underlying operating system. Returns local time if universal time is unavailable. universal_time_to_local_time(DateTime) -> datetime() Types: DateTime = datetime1970() Converts from Universal Coordinated Time (UTC) to local time. DateTime must refer to a date after Jan 1, 1970. valid_date(Date) -> boolean() valid_date(Year, Month, Day) -> boolean() Types: Date = date() Year = Month = Day = integer() This function checks if a date is a valid. Leap Years The notion that every fourth year is a leap year is not completely true. By the Gregorian rule, a year Y is a leap year if one of the following rules is valid: • Y is divisible by 4, but not by 100. • Y is divisible by 400. Hence, 1996 is a leap year, 1900 is not, but 2000 is. Date and Time Source Local time is obtained from the Erlang BIF localtime/0. Universal time is computed from the BIF universaltime/0. The following fapply: • There are 86400 seconds in a day. • There are 365 days in an ordinary year. • There are 366 days in a leap year. • There are 1461 days in a 4 year period. • There are 36524 days in a 100 year period. • There are 146097 days in a 400 year period. • There are 719528 days between Jan 1, 0 and Jan 1, 1970.
__label__pos
0.854544
Mar 5 comment socket descriptor closes after assignment How does the RoutingManager class look like? Is it possible, that when you do myConnection->newConnections.pop_back(), the connection instance is finalized and closed? Mar 5 comment socket descriptor closes after assignment that's what accept is for Feb 17 awarded Caucus Feb 15 comment Array with initial content that is determined by function Right, the results should be used as :initial-contents from the code in the question Feb 15 answered Array with initial content that is determined by function Feb 11 comment How does generative language model work in the natural language processing? Yes, you generate a new random number for each sample. As for the first question: you've got it backwards. In this case you'll get yes is the generated number is below 0.67 and no otherwise. If you'll have 3 words: yes, no, maybe with equal probabilities, you'll need to take yes if the generated number is from 0 to 1/3, no if it's from 1/3 to 2/3, and maybe if it's above 2/3. You can think of it as placing you words on a [0,1] section, each word taking the size proportional to the probability. Feb 11 revised How does generative language model work in the natural language processing? added 25 characters in body Feb 11 comment How does generative language model work in the natural language processing? I see, I've updated the answer with one variant of how to do that. Feb 11 revised How does generative language model work in the natural language processing? added 548 characters in body Feb 11 answered How does generative language model work in the natural language processing? Feb 9 answered Common Lisp: asdf depends-on specific version Feb 9 answered Bytes vs Characters vs Words - which granularity for n-grams? Feb 9 comment Training confusion matrix and testing confusion Matrix you're right :) Feb 7 answered LISP File I/O - Extract and Convert Information Feb 4 answered cannot understand the definition of "row-major-ref" in sbcl Feb 3 revised Python - A way to learn and detect text patterns? added 1160 characters in body Feb 3 comment Python - A way to learn and detect text patterns? Not exactly. Decision trees are one of the classification algorithms - you can read more on them starting from en.wikipedia.org/wiki/Decision_tree Feb 2 comment Natural Language Processing Tag definitions possible duplicate of Java Stanford NLP: Part of Speech labels? Feb 1 answered Python - A way to learn and detect text patterns? Jan 31 answered Removing sublists in a list by comparing them with an alist in Common Lisp 1 2 3 4 5
__label__pos
0.844563
     Logo Search packages:       Sourcecode: xulrunner version File versions nsAccessibleWrap.mm /* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* ***** BEGIN LICENSE BLOCK ***** * Version: MPL 1.1/GPL 2.0/LGPL 2.1 * * The contents of this file are subject to the Mozilla Public License Version * 1.1 (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * http://www.mozilla.org/MPL/ * * Software distributed under the License is distributed on an "AS IS" basis, * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License * for the specific language governing rights and limitations under the * License. * * The Original Code is mozilla.org code. * * The Initial Developer of the Original Code is * Mozilla Foundation. * Portions created by the Initial Developer are Copyright (C) 2006 * the Initial Developer. All Rights Reserved. * * Contributor(s): * Original Author: Håkan Waara <[email protected]> * * Alternatively, the contents of this file may be used under the terms of * either of the GNU General Public License Version 2 or later (the "GPL"), * or the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), * in which case the provisions of the GPL or the LGPL are applicable instead * of those above. If you wish to allow use of your version of this file only * under the terms of either the GPL or the LGPL, and not to allow others to * use your version of this file under the terms of the MPL, indicate your * decision by deleting the provisions above and replace them with the notice * and other provisions required by the GPL or the LGPL. If you do not delete * the provisions above, a recipient may use your version of this file under * the terms of any one of the MPL, the GPL or the LGPL. * * ***** END LICENSE BLOCK ***** */ #include "nsAccessibleWrap.h" #include "nsIAccessibleDocument.h" #include "nsIAccessibleText.h" #include "nsObjCExceptions.h" #import "nsRoleMap.h" #import "mozAccessibleWrapper.h" #import "mozAccessible.h" #import "mozActionElements.h" #import "mozTextAccessible.h" nsAccessibleWrap::nsAccessibleWrap(nsIDOMNode* aNode, nsIWeakReference *aShell): nsAccessible(aNode, aShell), mNativeWrapper(nsnull) { } nsAccessibleWrap::~nsAccessibleWrap() { if (mNativeWrapper) { delete mNativeWrapper; mNativeWrapper = nsnull; } } nsresult nsAccessibleWrap::Init () { // need to pass the call up, so we're cached (which nsAccessNode::Init() takes care of). nsresult rv = nsAccessible::Init(); NS_ENSURE_SUCCESS(rv, rv); if (!mNativeWrapper && !AncestorIsFlat()) { // Create our native object using the class type specified in GetNativeType(). mNativeWrapper = new AccessibleWrapper (this, GetNativeType()); } return NS_OK; } NS_IMETHODIMP nsAccessibleWrap::GetNativeInterface (void **aOutInterface) { if (mNativeWrapper) { *aOutInterface = (void**)mNativeWrapper->getNativeObject(); return NS_OK; } return NS_ERROR_FAILURE; } // get the native NSWindow we reside in. void nsAccessibleWrap::GetNativeWindow (void **aOutNativeWindow) { *aOutNativeWindow = nsnull; nsCOMPtr<nsIAccessibleDocument> docAccessible(GetDocAccessible()); docAccessible->GetWindowHandle (aOutNativeWindow); } // overridden in subclasses to create the right kind of object. by default we create a generic // 'mozAccessible' node. objc_class* nsAccessibleWrap::GetNativeType () { NS_OBJC_BEGIN_TRY_ABORT_BLOCK_NIL; PRUint32 role = nsAccUtils::Role(this); switch (role) { case nsIAccessibleRole::ROLE_PUSHBUTTON: case nsIAccessibleRole::ROLE_SPLITBUTTON: case nsIAccessibleRole::ROLE_TOGGLE_BUTTON: { // if this button may show a popup, let's make it of the popupbutton type. if (HasPopup()) return [mozPopupButtonAccessible class]; // regular button return [mozButtonAccessible class]; } case nsIAccessibleRole::ROLE_CHECKBUTTON: return [mozCheckboxAccessible class]; case nsIAccessibleRole::ROLE_AUTOCOMPLETE: return [mozComboboxAccessible class]; case nsIAccessibleRole::ROLE_ENTRY: case nsIAccessibleRole::ROLE_STATICTEXT: case nsIAccessibleRole::ROLE_HEADING: case nsIAccessibleRole::ROLE_LABEL: case nsIAccessibleRole::ROLE_CAPTION: case nsIAccessibleRole::ROLE_ACCEL_LABEL: case nsIAccessibleRole::ROLE_TEXT_LEAF: // normal textfield (static or editable) return [mozTextAccessible class]; case nsIAccessibleRole::ROLE_COMBOBOX: return [mozPopupButtonAccessible class]; default: return [mozAccessible class]; } return nil; NS_OBJC_END_TRY_ABORT_BLOCK_NIL; } // this method is very important. it is fired when an accessible object "dies". after this point // the object might still be around (because some 3rd party still has a ref to it), but it is // in fact 'dead'. nsresult nsAccessibleWrap::Shutdown () { if (mNativeWrapper) { delete mNativeWrapper; mNativeWrapper = nsnull; } return nsAccessible::Shutdown(); } NS_IMETHODIMP nsAccessibleWrap::FireAccessibleEvent(nsIAccessibleEvent *aEvent) { NS_OBJC_BEGIN_TRY_ABORT_BLOCK_NSRESULT; NS_ENSURE_ARG_POINTER(aEvent); nsresult rv = nsAccessible::FireAccessibleEvent(aEvent); NS_ENSURE_SUCCESS(rv, rv); return FirePlatformEvent(aEvent); NS_OBJC_END_TRY_ABORT_BLOCK_NSRESULT; } nsresult nsAccessibleWrap::FirePlatformEvent(nsIAccessibleEvent *aEvent) { NS_OBJC_BEGIN_TRY_ABORT_BLOCK_NSRESULT; PRUint32 eventType; nsresult rv = aEvent->GetEventType(&eventType); NS_ENSURE_SUCCESS(rv, rv); // ignore everything but focus-changed and value-changed events for now. if (eventType != nsIAccessibleEvent::EVENT_FOCUS && eventType != nsIAccessibleEvent::EVENT_VALUE_CHANGE) return NS_OK; nsCOMPtr<nsIAccessible> accessible; rv = aEvent->GetAccessible(getter_AddRefs(accessible)); NS_ENSURE_STATE(accessible); mozAccessible *nativeAcc = nil; accessible->GetNativeInterface((void**)&nativeAcc); if (!nativeAcc) return NS_ERROR_FAILURE; switch (eventType) { case nsIAccessibleEvent::EVENT_FOCUS: [nativeAcc didReceiveFocus]; break; case nsIAccessibleEvent::EVENT_VALUE_CHANGE: [nativeAcc valueDidChange]; break; } return NS_OK; NS_OBJC_END_TRY_ABORT_BLOCK_NSRESULT; } nsresult nsAccessibleWrap::InvalidateChildren () { NS_OBJC_BEGIN_TRY_ABORT_BLOCK_NSRESULT; if (mNativeWrapper) { mozAccessible *object = mNativeWrapper->getNativeObject(); [object invalidateChildren]; } return nsAccessible::InvalidateChildren(); NS_OBJC_END_TRY_ABORT_BLOCK_NSRESULT; } PRInt32 nsAccessibleWrap::GetUnignoredChildCount(PRBool aDeepCount) { // if we're flat, we have no children. if (nsAccUtils::MustPrune(this)) return 0; PRInt32 childCount = 0; GetChildCount(&childCount); nsCOMPtr<nsIAccessible> curAcc; while (NextChild(curAcc)) { nsAccessibleWrap *childWrap = static_cast<nsAccessibleWrap*>((nsIAccessible*)curAcc.get()); // if the current child is not ignored, count it. if (!childWrap->IsIgnored()) ++childCount; // if it's flat, we don't care to inspect its children. if (nsAccUtils::MustPrune(childWrap)) continue; if (aDeepCount) { // recursively count the unignored children of our children since it's a deep count. childCount += childWrap->GetUnignoredChildCount(PR_TRUE); } else { // no deep counting, but if the child is ignored, we want to substitute it for its // children. if (childWrap->IsIgnored()) childCount += childWrap->GetUnignoredChildCount(PR_FALSE); } } return childCount; } // if we for some reason have no native accessible, we should be skipped over (and traversed) // when fetching all unignored children, etc. when counting unignored children, we will not be counted. PRBool nsAccessibleWrap::IsIgnored() { return (mNativeWrapper == nsnull) || mNativeWrapper->isIgnored(); } void nsAccessibleWrap::GetUnignoredChildren(nsTArray<nsRefPtr<nsAccessibleWrap> > &aChildrenArray) { nsCOMPtr<nsIAccessible> curAcc; // we're flat; there are no children. if (nsAccUtils::MustPrune(this)) return; while (NextChild(curAcc)) { nsAccessibleWrap *childWrap = static_cast<nsAccessibleWrap*>((nsIAccessible*)curAcc.get()); if (childWrap->IsIgnored()) { // element is ignored, so try adding its children as substitutes, if it has any. if (!nsAccUtils::MustPrune(childWrap)) { nsTArray<nsRefPtr<nsAccessibleWrap> > children; childWrap->GetUnignoredChildren(children); if (!children.IsEmpty()) { // add the found unignored descendants to the array. aChildrenArray.AppendElements(children); } } } else // simply add the element, since it's not ignored. aChildrenArray.AppendElement(childWrap); } } already_AddRefed<nsIAccessible> nsAccessibleWrap::GetUnignoredParent() { nsCOMPtr<nsIAccessible> parent(GetParent()); nsAccessibleWrap *parentWrap = static_cast<nsAccessibleWrap*>((nsIAccessible*)parent.get()); if (!parentWrap) return nsnull; // recursively return the parent, until we find one that is not ignored. if (parentWrap->IsIgnored()) return parentWrap->GetUnignoredParent(); nsIAccessible *outValue = nsnull; NS_IF_ADDREF(outValue = parent.get()); return outValue; } Generated by  Doxygen 1.6.0   Back to index
__label__pos
0.988018
[object Object] Icon Encoding Learn how to create, start, manage and modify Encodings [object Object] Icon Player Learn how to create, start, manage and modify Players [object Object] Icon Analytics Learn how to create, start, manage and modify Analyticss User shortcuts for search Focus by pressing f Hide results by pressing Esc Navigate via   keys Fri Aug 31 2018 How to play Multi-DRM protected content with Intertrust / ExpressPlay OverviewLink Icon To decrypt the content, the player needs to receive the LA URLs (License Acquisition URLs) for the each DRM solution. These are obtained by requesting tokens from the ExpressPlay Multi DRM service API. To do so you will need an API Key (Customer Authenticator), which is available in your ExpressPlay admin backend. Hint: ExpressPlay provides a Testing API Key as well as an Production API Key. The following examples are using service URLs, will work with the Testing API Key only. Therefore, please make sure to use the correct URLs before you go into production. You will find those production service URLs in your ExpressPlay admin backend. Requesting an Widevine LA URLLink Icon The example code below sends a POST request using CURL, which will return the full LA URL, which can be used for the player configuration afterwards. This request requires some query parameters (customerAuthenticator, contentKey, kid) to be set in order to be successful. All available query parameters and DRM options are available in the ExpressPlay REST API documentation for Widevine License Token Requests. 1curl -k 'https://wv-gen.test.expressplay.com/hms/wv/token?customerAuthenticator=EXPRESSPLAY_CUSTOMER_AUTHENTICATOR&errorFormat=json&kid=YOUR_KID&contentKey=YOUR_KEY&securityLevel=1&hdcpOutputControl=0' Please replace the following placeholders with their actual value: • EXPRESSPLAY_CUSTOMER_AUTHENTICATOR This value is available in your ExpressPlay admin backend. Click "Show" to make the API Key visible, copy&paste accordingly into the URL. expressplay-customer-authenicator-test • YOUR_KID Replace it with the KID you used for your encoding • YOUR_KEY Replace it with the KEY you used for your encoding Once that is done, the request is ready to be issued. Its response will contain a ready-to-use Widevine LA URL, and will look like the following: 1https://wv.test.expressplay.com/hms/wv/rights/?ExpressPlayToken=YOUR_WIDEVINE_EXPRESSPLAY_TOKEN Now, we can add a Widevine DRM configuration to the source object of your player configuration. The player configuration would look like the following: 1var conf = { 2 key: 'YOUR-PLAYER-LICENSE-KEY-HERE', 3 source: { 4 dash: 'DASH_MANIFEST_URL', 5 drm: { 6 widevine: { 7 LA_URL: 'https://wv.test.expressplay.com/hms/wv/rights/?ExpressPlayToken=YOUR_WIDEVINE_EXPRESSPLAY_TOKEN' 8 } 9 } 10 } 11}; PlayReady LA URLLink Icon Basically it works the same way as it does for requesting a Widevine LA URL. The example code below sends a POST request using CURL, which will return the full LA URL, which can be used for the player configuration afterwards. This request requires some query parameters (customerAuthenticator, contentKey, kid) to be set in order to be successful. All available query parameters and DRM options are available in the ExpressPlay REST API documentation for PlayReady Token Requests. 1curl -k 'https://pr-gen.test.expressplay.com/hms/pr/token?customerAuthenticator=EXPRESSPLAY_CUSTOMER_AUTHENTICATOR&errorFormat=json&kid=YOUR_KID&contentKey=YOUR_KEY&rightsType=BuyToOwn&analogVideoOPL=100&compressedDigitalAudioOPL=100&compressedDigitalVideoOPL=100&uncompressedDigitalAudioOPL=100&uncompressedDigitalVideoOPL=100' Please replace the following placeholders with their actual value: • EXPRESSPLAY_CUSTOMER_AUTHENTICATOR This value is available in your ExpressPlay admin backend. Click "Show" to make the API Key visible, copy&paste accordingly into the URL. expressplay-customer-authenicator-test • YOUR_KID Replace it with the KID you used for your encoding • YOUR_KEY Replace it with the KEY you used for your encoding Once that is done, the request is ready to be issued. Unlike the widevine response, a json string will be returned: 1{ 2 "licenseAcquisitionUrl":"http://pr.test.expressplay.com/playready/RightsManager.asmx", 3 "token":"YOUR_PLAYREADY_EXPRESSPLAY_TOKEN" 4} Now, we need to concatenate those two values in order get a working LA URL. To do so, we have to add a query parameter called "ExpressPlayToken" to the licenseAquistionUrl, set the token as its value. The resulting URL, will look like the following: 1https://pr.test.expressplay.com/playready/RightsManager.asmx?ExpressPlayToken=YOUR_PLAYREADY_EXPRESSPLAY_TOKEN Hint: You might have noticed already, that the final LA URL starts with https:// instead of http:// like shown in the response example. This is because its supports https:// as well, however the ExpressPlay doesn't provide it that way. Further, browsers like Google Chrome require all resources, which are used for DRM playback, are served using https://. Otherwise, the browser won't expose the APIs, which are required for DRM playback, to the player. Now we can add a PlayReady DRM configuration to the source object of the player configuration, so PlayReady DRM protected content can be played as well. The player configuration would look like the following: 1var conf = { 2 key: 'YOUR-PLAYER-LICENSE-KEY-HERE', 3 source: { 4 dash: 'DASH_MANIFEST_URL', 5 drm: { 6 widevine: { 7 LA_URL: 'https://wv.test.expressplay.com/hms/wv/rights/?ExpressPlayToken=YOUR_PLAYREADY_EXPRESSPLAY_TOKEN' 8 }, 9 playready: { 10 LA_URL: 'https://pr.test.expressplay.com/playready/RightsManager.asmx?ExpressPlayToken=YOUR_PLAYREADY_EXPRESSPLAY_TOKEN\n' 11 } 12 } 13 } 14}; Give us feedback
__label__pos
0.97933
Install an engine - Administrator Guide - Cortex XSIAM - Cortex - Security Operations Cortex XSIAM Documentation Product Cortex XSIAM Creation date 2024-03-06 Last date published 2024-05-22 Category Administrator Guide Abstract Install, deploy and configure Cortex XSIAM engines. When you install the engine, the d1.conf is installed on the engine machine, which contains engine properties such as proxy, log level, and log files. If Docker/Podman is already installed, the python.engine.docker and powershell.engine.docker keys are set to true. If Docker or Podman is not available when the engine is installed, the key is set to false. If so, you need to set the key to true after installing Docker and Podman. Verify that python.engine.docker and powershell.engine.docker configuration keys are present in the d1.conf file. Note If you are using DEB, RPM, or Zip installation, install Docker or Podman. To run Docker-dependent integrations and scripts on CentOS v7, install Mirantis Container Runtime. Cortex XSIAM supports the following file types for installation on the engine machine: • Shell: For all Linux deployments, including Ubuntu, and SUSE, except CentOS 7.x. Automatically installs Docker/Podman, downloads Docker/Podman images, enables remote engine upgrade, and allows installation of multiple engines on the same machine. The installation file is selected for you. Shell installation supports the purge flag, which by default is false. To uninstall an engine, run the installer with the purge flag enabled. Note When upgrading an engine that was installed using the Shell installation, you can use the Upgrade Engine feature in the Engines page. For CentOS 7 or Amazon Linux 2 type engines, you need to upgrade these engine types using a zip type engine and not use the Upgrade Engine feature. If you use the shell installer, Docker/Podman is automatically installed. • DEB: For Ubuntu operating systems. • RPM: RHEL operating systems. Note Use DEB and RPM installation when shell installation is not available. You need to manually install Docker or Podman and any dependencies. If installing on CentOS v7 you need to install Mirantis Container Runtime (formerly Docker Engine - Enterprise) or Red Hat's Docker distribution to run specific Docker-dependent integrations and scripts. • Zip: Used for CentOS 7 and Amazon Linux 2 machines. • Configuration: Configuration file for download. When you install one of the other options, this configuration file (d1.conf ) is installed on the engine machine. Important For DEB/RPM engines, Python (including 3.x) and the containerization platform (Docker/Podman) must be installed and configured. For Docker or Podman to work correctly on an engine, IPv4 forwarding must be enabled. 1. Create an engine. 1. Select SettingsConfigurationsData BrokerEnginesCreate New Engine. 2. In the Engine Name field, add a meaningful name for the engine. 3. Select one of the installer types from the list. 4. (Optional) (Shell only) Select the checkbox to enable multiple engines to run on the same machine. If you have an existing engine, and you did not select the checkbox, and now you want to install another engine on the same machine, you need to delete the existing engine. 5. (Optional) Add any required configuration in JSON format. 6. Click OK to create the engine. 2. For shell installation, do the following: Tip For Linux systems, we recommend using the shell installer. If using CentOS 7.x, or Amazon Linux 2, use the zip installer (see step 4). 1. Move the .sh file to the engine machine using a tool such as SSH or PuTTY. 2. On the engine machine, grant execution permission by running the following command: chmod +x /<engine-file-path> 3. Install the engine by typing one of the following commands: With tools: sudo <engine-file-path> Without tools: sudo <engine-file-path> -- -tools=false If you receive a permissions denied error, it is likely that you do not have permission to access the /tmp directory. 3. For RPM/DEB installation, do the following: 1. Move the file to the required machine using a tool such as SSH or PuTTY. 2. Type one of the following installation commands: Machine Type Install Command RHEL (RPM) sudo rpm -Uvh d1-2.5_15418-1.x86_64.rpm Ubuntu (DEB) sudo dpkg --install d1_xxx_amd64.deb 3. Start the engine by running one of the following commands: Machine Type Start Command RHEL (RPM) sudo systemctl start d1 Ubuntu (DEB) sudo service d1 restart 4. For Zip installation on CentOS 7.x or Amazon Linux 2, run the following commands: 1. Create the engine folder. mkdir /usr/local/demisto 2. Unzip the engine files to the folder created in the previous step. unzip ./d1.zip -d /usr/local/demisto 3. Allow the process to bind to low numbered ports. setcap CAP_NET_BIND_SERVICE=+eip /usr/local/demisto/d1_linux_amd64 4. Change the owner of /usr/local/demisto to the demisto user. chown -R demisto:demisto /usr/local/demisto 5. In /etc/systemd/system edit the d1.service file las follows (adjust the directory and the name of the binaries file if needed). [Unit] Description=Demisto Engine Service After=network.target [Service] Type=simple User=demisto WorkingDirectory=/usr/local/demisto ExecStart=/usr/local/demisto/d1_linux_amd64 EnvironmentFile=/etc/environment Restart=always [Install] WantedBy=multi-user.target 6. Give the service execution permissions and change the owner to demisto. chmod -x d1.service chown demisto:demisto d1.service 7. Run the engine process. systemctl start d1 8. Verify that the engine is running. systemctl status d1 5. Verify that the engine you created is connected. 1. Select SettingsConfigurationsData BrokerEngines. 2. Locate your engine on the Engines page and check that it is connected. 6. When the engine is connected, you can add the engine to a load-balancing group by clicking Load-Balancing Group on the Engines page. If you want to add the engine to a new group, click Add to new group from the list. When the engine is in the load-balancing group, it cannot be used as an individual engine and does not appear when configuring an engine from the list. 7. (Optional) After installing the engine, you may want to set up a proxy, set up Docker hardening, configure the number of workers for the engine, or perform other related engine configurations. For more information, see the Configure Engines section. You can also configure an integration instance to run on the engine you created. Note If the installer fails to start due to a permissions issue, even if running as root, add one of the following two arguments when running the installer: • --target <path> - Extracts the installer files into the specified custom path. • --keep - Extracts the installer files into the current working directory (without cleaning at the end). If using installer options such as -- -tools=false, the option should come after the --target or --keep arguments. For example: sudo ./d1-installer.sh --target /some/temp/dir -- -tools=false
__label__pos
0.647156
Try Free CompTIA Network+ N10-008 Questions and Answers To Test Yourself Try Free CompTIA Network+ N10-008 Questions and Answers To Test Yourself CompTIA Network+ validates the technical skills needed to securely establish, maintain and troubleshoot the essential networks that businesses rely on. CompTIA Network+ prepares candidates to support networks on any platform. CompTIA Network+ is the only certification that covers the specific skills that network professionals need. To pass your CompTIA Network+ N10-008 test, the best method is to challenge and improve your knowledge. We recommend that you prepare with FreeTestShare CompTIA Network+ N10-008 Questions and Answers to test your knowledge and identify areas. The N10-008 practice test is an essential component of your CompTIA Network+ exam preparation approach since it allows you to identify your strengths and weaknesses, develop your time management skills, and get an estimate of the score you may expect. Page 1 of 4 1. Must be vendor neutral Which of the following methods should the engineer select? 2. A network technician needs to ensure outside users are unable to telnet into any of the servers at the datacenter . Which of the following ports should be blocked when checking firewall configuration? 3. There are two managed legacy switches running that cannot be replaced or upgraded. These switches do not support cryptographic functions, but they are password protected . Which of the following should a network administrator configure to BEST prevent unauthorized access? 4. A fiber link connecting two campus networks is broken . Which of the following tools should an engineer use to detect the exact break point of the fiber link? 5. A technician is troubleshooting a network switch that seems to stop responding to requests intermittently whenever the logging level is set for debugging . Which of the following metrics should the technician check to begin troubleshooting the issue? 6. An IT organization needs to optimize speeds for global content distribution and wants to reduce latency in high-density user locations . Which of the following technologies BEST meets the organization’s requirements? 7. Which of the following is a system that is installed directly on a server's hardware and abstracts the hardware from any guest machines? 8. Access to a datacenter should be individually recorded by a card reader even when multiple employees enter the facility at the same time . Which of the following allows the enforcement of this policy? 9. CORRECT TEXT You are tasked with verifying the following requirements are met in order to ensure network security. Requirements: Datacenter Ensure network is subnetted to allow all devices to communicate properly whileminimizing address space usage Provide a dedicated server to resolve IP addresses and hostnames correctly and handle port 53 traffic Building A Ensure network is subnetted to allow all devices to communicate properly while minimizing address space usage Provide devices to support 5 additional different office users Add an additional mobile user Replace the Telnet server with a more secure solution Screened subnet Ensure network is subnetted to allow all devices to communicate properly while minimizing address space usage Provide a server to handle external 80/443 traffic Provide a server to handle port 20/21 traffic INSTRUCTIONS Drag and drop objects onto the appropriate locations. Objects can be used multiple times and not all placeholders need to be filled. Available objects are located in both the Servers and Devices tabs of the Drag & Drop menu. If at any time you would like to bring back the initial state of the simulation, please click the Reset All button. 10. A network technician is investigating an issue with a desktop that is not connecting to the network. The desktop was connecting successfully the previous day, and no changes were made to the environment. The technician locates the switchport where the device is connected and observes the LED status light on the switchport is not lit even though the desktop is turned on Other devices that arc plugged into the switch are connecting to the network successfully. Which of the following is MOST likely the cause of the desktop not connecting?   Share this post Leave a Reply Your email address will not be published.
__label__pos
0.998254
2014-02-04 Трансформации в DocsVision В DocsVision документы можно отправлять на печать или экспортировать в HTML. В конструкторе видов карточек для каждого вида документа можно создавать свои XSLT-трансформации и затем их использовать для печати, экспорта и в скриптах для любых других преобразований. Рассмотрим задачу создания трансформации для своего вида документа и вывод ее на печать из по кнопке из разметки карточки. Во-первых, создадим XSL-код для XML карточки. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:my="urn:sample" extension-element-prefixes="msxsl" > <xsl:output indent="yes"/> <!-- различные функции --> <xsl:template name="replace" match="text()" mode="replace"> <xsl:param name="str" select="."/> <xsl:param name="search-for" select="'&#xA;'"/> <xsl:param name="replace-with"> <xsl:element name="BR"/> <xsl:text> </xsl:text> </xsl:param> <xsl:choose> <xsl:when test="contains($str, $search-for)"> <xsl:value-of select="substring-before($str, $search-for)"/> <xsl:copy-of select="$replace-with"/> <xsl:call-template name="replace"> <xsl:with-param name="str" select="substring-after($str, $search-for)"/> <xsl:with-param name="search-for" select="$search-for"/> <xsl:with-param name="replace-with" select="$replace-with"/> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="$str"/> </xsl:otherwise> </xsl:choose> </xsl:template> <xsl:template name="convertdate" match="text()" mode="replace"> <xsl:param name="str" select="."/> <xsl:if test="string-length($str)&gt;0"> <xsl:copy-of select="substring($str, 9, 2)"/> <xsl:text>.</xsl:text> <xsl:copy-of select="substring($str, 6, 2)"/> <xsl:text>.</xsl:text> <xsl:copy-of select="substring($str, 1, 4)"/> </xsl:if> </xsl:template> <xsl:template name="getdisplayname"> <xsl:param name="lastname" select="."/> <xsl:param name="firstname" select="."/> <xsl:param name="middlename" select="."/> <xsl:param name="displaystring" select="."/> <xsl:if test="string-length($displaystring)!=0"> <xsl:value-of select="$displaystring"/> </xsl:if> <xsl:if test="string-length($displaystring)=0"> <xsl:value-of select="$lastname"/> <xsl:if test="string-length($lastname)!=0"> <xsl:text> </xsl:text> </xsl:if> <xsl:value-of select="$firstname"/> <xsl:if test="string-length($firstname)!=0"> <xsl:text> </xsl:text> </xsl:if> <xsl:value-of select="$middlename"/> </xsl:if> </xsl:template> <xsl:template name="getemployeedisplayname"> <xsl:param name="employeerow" select="."/> <xsl:call-template name="getdisplayname"> <xsl:with-param name="lastname" select="$employeerow/@LastName"/> <xsl:with-param name="firstname" select="$employeerow/@FirstName"/> <xsl:with-param name="middlename" select="$employeerow/@MiddleName"/> <xsl:with-param name="displaystring" select="$employeerow/@DisplayString"/> </xsl:call-template> </xsl:template> <xsl:template name="printcategories"> <xsl:param name="categorylistid" select="."/> <xsl:if test="$categorylistid"> <xsl:variable name="categorylistcard" select="//CardCategoryList[@CardID=$categorylistid]"/> <xsl:for-each select="$categorylistcard/Categories/CategoriesRow"> <xsl:if test="position() &gt; 1"> <xsl:text>, </xsl:text> </xsl:if> <xsl:value-of select="@CategoryID_Name"/> </xsl:for-each> </xsl:if> </xsl:template> <!-- используем C# в преобразовании --> <msxsl:script language="CSharp" implements-prefix="my"> public string today() { return DateTime.Now.ToString("dd.MM.yyyy"); } </msxsl:script> <xsl:template match="/"> <html> <head> <title> <xsl:value-of select="//CardDocument[1]/MainInfo/@Name"/> </title> </head> <body> <p><strong> Корреспондент: <xsl:value-of select="//RefStaff[1]/Units/UnitsRow[1]/Employees/EmployeesRow[1]/@DisplayString"/> </strong></p> <hr/> <table width="100%"> <tr> <td> <strong> <xsl:variable name="numberid" select="//CardDocument[1]/MainInfo/@RegNumber"/> <xsl:value-of select="//CardDocument[1]/Numbers/NumbersRow[@RowID=$numberid]/@Number"/> </strong> от <strong> <xsl:call-template name="convertdate"> <xsl:with-param name="str" select="//CardDocument[1]/MainInfo/@RegDate"/> </xsl:call-template> </strong> </td> <td> <strong> <xsl:variable name="numberid" select="//CardDocument[1]/MainInfo/@RegNumber"/> <xsl:value-of select="//CardDocument[1]/Numbers/NumbersRow[@RowID=$numberid]/@Number"/> </strong> от <strong> <xsl:call-template name="convertdate"> <xsl:with-param name="str" select="//CardDocument[1]/MainInfo/@RegDate"/> </xsl:call-template> </strong> </td> </tr> </table> <hr/> <p><strong>Краткое содержание</strong></p> <p> <xsl:call-template name="replace"> <xsl:with-param name="str" select="//CardDocument[1]/MainInfo/@Content"/> </xsl:call-template> </p> <hr/> <p><strong>Резолюция:</strong></p> <xsl:for-each select="//CardDocument[1]/MyTable/MyTableRow"> <p> <xsl:call-template name="getemployeedisplayname"> <xsl:with-param name="employeerow" select="//*/EmployeesRow[@RowID=current()/@ResRef]"/> </xsl:call-template> !<xsl:value-of select="@ResEnum"/>! <xsl:value-of select="@ResString"/> </p> </xsl:for-each> <table width="100%"> <tr> <td width="20%"></td> <td> <xsl:variable name="regid" select="//CardDocument[1]/MainInfo/@Registrar"/> <xsl:call-template name="getemployeedisplayname"> <xsl:with-param name="employeerow" select="//*/EmployeesRow[@RowID=$regid]"/> </xsl:call-template> <xsl:value-of select="my:today()"/> </td> </tr> </table> </body> </html> </xsl:template> </xsl:stylesheet> Во-вторых, применим трансформацию XSL к XML и получим HTML. String transformation_name = "Моя трансформация"; Document document = (this.BaseObject as Document); KindsCardKind kck = document.SystemInfo.CardKind; IDocumentService document_service = this.CardControl.ObjectContext.GetService<IDocumentService>(); String xslt_src = ""; /// читаем свои трансформации для вида ContextualPropertyObjectCollection<DocumentExportTransformSetting> tsc = document_service.GetKindSettings(kck).Export.Transformations; foreach (DocumentExportTransformSetting tr in tsc) { if (tr.Description == transformation_name) { xslt_src = tr.Text; } } /// генерация HTML MemoryStream stream = new MemoryStream(); this.CardControl.ObjectContext.SaveObject<Document>(document); this.CardData.SaveXml(stream, ExportFlags.LinkedRows | ExportFlags.LinkedCards); stream.Position = 0; XmlDocument cardXML = new XmlDocument(); cardXML.Load(stream); // создание трансформации XmlDocument transformXML = new XmlDocument(); transformXML.LoadXml(xslt_src); XslCompiledTransform XSLTransform = new XslCompiledTransform(); XsltSettings settings = new XsltSettings(); settings.EnableScript = true; XSLTransform.Load((IXPathNavigable)transformXML, settings, null); // применение преобразования System.Text.StringBuilder builder = new System.Text.StringBuilder(); XmlWriter HTML = XmlWriter.Create(builder); XSLTransform.Transform((IXPathNavigable)cardXML, HTML); В-третьих, отправим полученный HTML в WebBrowser на печать. WebBrowser ie = new WebBrowser(); ie.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(onWBLoad); ie.DocumentText = builder.ToString(); код обработчика события onWBLoad: WebBrowser ie = (WebBrowser)sender; // Print the document now that it is fully loaded. //ie.ShowPageSetupDialog(); //ie.ShowPrintPreviewDialog(); ie.ShowPrintDialog(); //ie.Print(); ie.Dispose(); Комментариев нет: Отправка комментария
__label__pos
0.873643
Processing Procedure Declarations Turbo Pascal uses few procedures to process declaration of procedures and functions. The same code is also used to process methods, constructors and destructors. The first procedure processes header. Procedure ProcessProcedureDeclaration; Var IdentifierDataPtr: Pointer; ProcedureIdentifierDataPtr: PProcedureIdentifierData absolute IdentifierDataPtr; InlineCodeSize: Word; TypePointer: PTypeDefinition; NewIdentifier: PIdentifier; IdentifierOffset: Word absolute NewIdentifier; IdentifierToken: TToken; Saved_TemporarySymbolTablePosition: Word; ProcTypeDef: PTypeDefinition; Flags: TProcedureFlagsSet; Label SetProcedureFlags; begin SavedToken := Token; ProcedureStartLineNumber := CurrentSourceFile^.LineCounter; GetNextToken; ExpectIdentifier; If CurrentIdentifierDeclaredInCurrentScope (IdentifierOffset, IdentifierDataPtr, IdentifierToken) then begin If SourceType = stUnitInterface then Error (DuplicateIdentifier); If IdentifierToken = Token_ProcedureIdentifier then begin If pfMethod in ProcedureIdentifierDataPtr^.Flags then Error (DuplicateIdentifier); ProcessAlreadyDeclaredProcedure; Exit; end; If IdentifierToken = Token_TypeIdentifier then begin TypePointer := PointerFromOffsets (PTypeIdentifierData (ProcedureIdentifierDataPtr)^.UnitTypeOffsets); If TypePointer^.BaseType = btObject then begin GetNextToken; ExpectTokenAndGetNext (TOKEN_Period); ExpectIdentifier; If not IsIdentifierInSymbolTable (Ptr (Seg (TypePointer^), PObjectTypeDefinition (TypePointer)^.FieldsListOffset), IdentifierToken, IdentifierDataPtr, IdentifierOffset) then Error (MethodIdentifierExpected); If IdentifierToken <> Token_ProcedureIdentifier then Error (MethodIdentifierExpected); ProcessAlreadyDeclaredProcedure; Exit; end; end; Case SavedToken of Token_CONSTRUCTOR, Token_DESTRUCTOR: Error (ObjectTypeExpected); else Error (DuplicateIdentifier); end; end; Case SavedToken of Token_CONSTRUCTOR, Token_DESTRUCTOR: Error (ObjectTypeExpected); end; ProcedureIdentifierDataPtr := StoreCurrentIdentifierToSymbolTable (CurrentScopeIdentifierTableAddress, 10, NewIdentifier); NewIdentifier^.Token := Token_ProcedureIdentifier; GetNextToken; Saved_TemporarySymbolTablePosition := SymbolTable [stTemporary].NextRecordOffset; ProcTypeDef := ProcessProcedureHeader (SavedToken); ExpectTokenAndGetNext (TOKEN_Semicolon); If CheckAndGetNextToken (TOKEN_INLINE) then begin Process_INLINE (InlineCodeSize); Include (ProcedureIdentifierDataPtr^.Flags, pfInline); ProcedureIdentifierDataPtr^.ProceduresRecordOffset := InlineCodeSize; ExpectTokenAndGetNext (TOKEN_Semicolon); Exit; end; ProcedureIdentifierDataPtr^.LocalIdentifiersList := Saved_TemporarySymbolTablePosition; CreateProcedureRecord (NewIdentifier, ProcedureIdentifierDataPtr); ProcedureIdentifierDataPtr^.OuterBlockProcedureIdentifier := CurrentProcedureIdentifier; If CurrentProcedureIdentifier = 0 then begin Flags := [pfInterrupt]; If CompareIdentifierToDireciveAndSkipSemicolon (_INTERRUPT) then GoTo SetProcedureFlags; end; Flags := [pfFar]; If CompareIdentifierToDireciveAndSkipSemicolon (_FAR) then GoTo SetProcedureFlags; If SourceType <> stUnitInterface then begin Flags := []; If CompareIdentifierToDireciveAndSkipSemicolon (_NEAR) then GoTo SetProcedureFlags; If ForceFarCalls in StatementCompilerSwitches then Flags := [pfFar]; end; SetProcedureFlags: ProcedureIdentifierDataPtr^.Flags := ProcedureIdentifierDataPtr^.Flags + Flags; If SourceType = stUnitInterface then Exit; If CompareIdentifierToDireciveAndSkipSemicolon (_FORWARD) then Exit; ProcessProcedureDeclaractionsAndProgramBlock; end; This procedure processes declarations and program block. Procedure ProcessProcedureDeclaractionsAndProgramBlock; Var Saved_PushedParametersSize, Saved_OffsetAfterLastParameter, Saved_FunctionResultNegativeSize, Saved_MaxStackFrameOffset, SavedProcedureIdentifierDataOffset: Word; SavedCurrentProcedureIdentifier, SavedProceduresNextRecordOffset: Word; Saved_ProcedureStartLineNumber: Word; begin With ProcedureIdentifierDataPtr^ do begin If (CurrentProcedureIdentifier = 0) and CompareIdentifierToDirectiveAndGetNextToken (_EXTERNAL) then begin Include (Flags, pfExternal); LocalIdentifiersList := 0; ExpectTokenAndGetNext (TOKEN_Semicolon); Exit; end; If CompareIdentifierToDireciveAndSkipSemicolon (_ASSEMBLER) then Include (Flags, pfAssembler); Saved_PushedParametersSize := PushedParametersSize; Saved_OffsetAfterLastParameter := OffsetAfterLastParameter; Saved_FunctionResultNegativeSize := FunctionResultNegativeSize; Saved_MaxStackFrameOffset := ProgramBlockMaxStackFrameOffset; SavedProcedureIdentifierDataOffset := ProcedureIdentifierDataOffset; SavedCurrentProcedureIdentifier := CurrentProcedureIdentifier; SavedProceduresNextRecordOffset := SymbolTable [stProcedures].UsedSize; CurrentProcedureIdentifier := IdentifierOffset; ProcedureIdentifierDataOffset := Ofs (ProcedureIdentifierDataPtr^); TemporaryStoredParameters := LocalIdentifiersList; LocalIdentifiersList := SymbolTable [stMain].UsedSize; With PProceduresBlockRecord (Ptr (SymbolTable [stProcedures].Segment, ProceduresRecordOffset))^ do ProgramCodeBlockRecordOffset := $FFFE; CreateSymbolTable (4); CreateParametersAsLocalVariables; Saved_ProcedureStartLineNumber := ProcedureStartLineNumber; ProcessDeclarations; ProcedureStartLineNumber := Saved_ProcedureStartLineNumber; PProceduresBlockRecord (Ptr (SymbolTable [stProcedures].Segment, ProceduresRecordOffset))^.SizeOfConstants := ProcessProgramBlock; { SymbolTable [stProcedures].Segment might change } PProceduresBlockRecord (Ptr (SymbolTable [stProcedures].Segment, ProceduresRecordOffset))^.ProgramCodeBlockRecordOffset := SymbolTable [stCodeBlocks].UsedSize; CreateProgramCodeBlockRecord; CreateTypedConstantsBlockRecord; CheckForUndefined_FORWARD_Or_EXTERNAL (Ptr (SymbolTable [stProcedures].Segment, SavedProceduresNextRecordOffset)); PushedParametersSize := Saved_PushedParametersSize; OffsetAfterLastParameter := Saved_OffsetAfterLastParameter; FunctionResultNegativeSize := Saved_FunctionResultNegativeSize; ProgramBlockMaxStackFrameOffset := Saved_MaxStackFrameOffset; ProcedureIdentifierDataOffset := SavedProcedureIdentifierDataOffset; CurrentProcedureIdentifier := SavedCurrentProcedureIdentifier; If not (LocalDebugSymbols in ModuleCompilerSwitches) then begin SymbolTable [stMain].UsedSize := LocalIdentifiersList; LocalIdentifiersList := 0; end; ExpectTokenAndGetNext (TOKEN_Semicolon); end; end; This procedure creates parameters as local variables. For methods also the implicit parameter Self is added. Procedure CreateParametersAsLocalVariables; Var ProcedureIdentifierData: PProcedureIdentifierData; ProcedureParameterData: PProcedureParameterData absolute ProcedureIdentifierData; ProcedureParameterDataOfs: Word absolute ProcedureParameterData; AssemblerProcedure: Boolean; CurrentParameterOffset: Integer; Saved_TemporaryStoredParameters, Parameter, NumberOfParameters: Word; SelfIdentifier: PIdentifier; SelfIdentifierData: PVariableIdentifierData; begin ProcedureIdentifierData := Ptr (SymbolTable [stMain].Segment, ProcedureIdentifierDataOffset); AssemblerProcedure := pfAssembler in ProcedureIdentifierData^.Flags; PushedParametersSize := SizeOfPushedParameters (ProcedureIdentifierData, OffsetAfterLastParameter); CurrentParameterOffset := OffsetAfterLastParameter; FunctionResultNegativeSize := FunctionResultStackFrameSize; ProgramBlockMaxStackFrameOffset := FunctionResultNegativeSize; Saved_TemporaryStoredParameters := TemporaryStoredParameters; NumberOfParameters := ProcedureIdentifierData^.ProcedureTypeDefinition.NumberOfParameters; Inc (ProcedureParameterDataOfs, 24); For Parameter := 1 to NumberOfParameters do begin CreateParameterAsLocalVariable; Inc (ProcedureParameterData); end; ProcedureParameterDataOfs := ProcedureIdentifierDataOffset; If pfMethod in ProcedureIdentifierData^.Flags then begin VariableData_Flags := [vfVar, vf1]; RecordTypeDefinitionOffset.TypeOffset := $0006; RecordTypeDefinitionOffset.UnitIdentifierData := CurrentProcedureIdentifier; VariableData_NextMemberOffset := 0; GetTypeAndUnitIdentifierOffsets (Ptr (Seg (ProcedureIdentifierData^), ProcedureIdentifierData^.OuterBlockProcedureIdentifier), CurrentVarUnitTypeOffsets); CopyStringToCurrentIdentifier ('Self'); SelfIdentifierData := StoreNewIdentifierToSymbolTable (11, SelfIdentifier); SelfIdentifier^.Token := Token_VariableIdentifier; Move (VariableData_Flags, SelfIdentifierData^, 11); end; If TemporaryStoredParameters = SymbolTable [stTemporary].UsedSize then SymbolTable [stTemporary].UsedSize := Saved_TemporaryStoredParameters; end; Function FunctionResultStackFrameSize: Integer; Var ResultSize: Integer; begin ResultSize := 0; With ProcedureIdentifierData^ do If (ProcedureTypeDefinition.ResultTypeOffset.UnitIdentifierData <> 0) and not (pfAssembler in Flags) then With PTypeDefinition (PointerFromOffsets (ProcedureIdentifierData^.ProcedureTypeDefinition.ResultTypeOffset))^ do If BaseType <> btString then Dec (ResultSize, Size); FunctionResultStackFrameSize := ResultSize; end; Procedure CreateParameterAsLocalVariable; Var ParameterTypeDef: PTypeDefinition; ArrayTypeDefinition: PArrayTypeDefinition; ProcedureParameterVarFlags: TVarFlagsSet; ValueParameterCopySize, StackFrameSizeOfPassedParameter: Word; ParameterIdentifier: PIdentifier; ParameterIdentifierData: PVariableIdentifierData; Offset: Integer; begin ParameterTypeDef := PointerFromOffsets (ProcedureParameterData^.UnitTypeOffsets); ProcedureParameterVarFlags := ProcedureParameterData^.VarFlags; If vfArray in ProcedureParameterVarFlags then begin ArrayTypeDefinition := CreateTypeDefinition (16, 0, [], btArray); With ArrayTypeDefinition^ do begin GetTypeAndUnitIdentifierOffsets (ParameterTypeDef, ElementTypeOffset); GetTypeAndUnitIdentifierOffsets (Ptr (SystemUnitSegment, Word_TypeOffset), IndexTypeOffset); end; ParameterTypeDef := ArrayTypeDefinition; end; StackFrameSizeOfPassedParameter := SizeOfPassedParameter (ParameterTypeDef, ProcedureParameterVarFlags, ValueParameterCopySize, AssemblerProcedure); Include (ProcedureParameterVarFlags, vfArray); VariableData_Flags := ProcedureParameterVarFlags; GetTypeAndUnitIdentifierOffsets (ParameterTypeDef, CurrentVarUnitTypeOffsets); Dec (CurrentParameterOffset, StackFrameSizeOfPassedParameter); If ValueParameterCopySize <> 0 then begin Offset := ProgramBlockMaxStackFrameOffset - ValueParameterCopySize; If (WordAlignment in ModuleCompilerSwitches) and (ValueParameterCopySize <> 1) then Offset := Offset and $FFFE; ProgramBlockMaxStackFrameOffset := Offset; end else begin Offset := CurrentParameterOffset; If vfOpenParameter in VariableData_Flags then Inc (Offset, 2); end; RecordTypeDefinitionOffset.TypeOffset := Offset; RecordTypeDefinitionOffset.UnitIdentifierData := CurrentProcedureIdentifier; VariableData_NextMemberOffset := 0; CopyStringFromTemporaryBlockToCurrentIdentifier (TemporaryStoredParameters); Inc (TemporaryStoredParameters, Length (CurrentIdentifier) + 3); ParameterIdentifierData := StoreNewIdentifierToSymbolTable (11, ParameterIdentifier); ParameterIdentifier^.Token := Token_VariableIdentifier; Move (VariableData_Flags, ParameterIdentifierData^, 11); end; Procedures can be declared in the Interface part of the unit, in the Object declaration or with the Forward directive. Such cases are handled with this procedure. Of course, header must match previous declaration. Procedure ProcessAlreadyDeclaredProcedure; Var SavedTempSymbolTableCurrentPointerOfs: Word; ProcTypeDefAndParametersSize: Word; N, NewTempSymbolTableCurrentPointerOfs: Word; ExpectedToken: TToken; DIPtr, SIPtr: PChar; ProceduresRecord: PProceduresBlockRecord; begin ProceduresRecord := Ptr (SymbolTable [stProcedures].Segment, ProcedureIdentifierDataPtr^.ProceduresRecordOffset); If ProceduresRecord^.ProgramCodeBlockRecordOffset <> $FFFF then Error (DuplicateIdentifier); GetNextToken; If Word (Ptr (Seg (ProcedureIdentifierDataPtr^), Ofs (ProcedureIdentifierDataPtr^) + 18)^) = 0 then begin If pfConstructor in ProcedureIdentifierDataPtr^.Flags then ExpectedToken := Token_CONSTRUCTOR else If pfDestructor in ProcedureIdentifierDataPtr^.Flags then ExpectedToken := Token_DESTRUCTOR else ExpectedToken := Token_PROCEDURE; end else ExpectedToken := Token_FUNCTION; If SavedToken <> ExpectedToken then Error (HeaderDoesNotMatchPreviousDefinition); If (Token = Token_LeftParenthesis) or (Token = Token_Colon) then begin SavedTempSymbolTableCurrentPointerOfs := SymbolTable [stTemporary].NextRecordOffset; ProcTypeDef := ProcessProcedureHeader (SavedToken); ProcTypeDefAndParametersSize := SymbolTable [stMain].UsedSize - Ofs (ProcTypeDef^); SymbolTable [stMain].UsedSize := Ofs (ProcTypeDef^); ProcTypeDef^.Size := ProcedureIdentifierDataPtr^.ProcedureTypeDefinition.Size; ProcTypeDef^.W06_ := ProcedureIdentifierDataPtr^.ProcedureTypeDefinition.W06_; For N := 0 to ProcTypeDefAndParametersSize - 1 do If PChar (ProcTypeDef)^ <> PChar (@ProcedureIdentifierDataPtr^.ProcedureTypeDefinition)^ then Error (HeaderDoesNotMatchPreviousDefinition); NewTempSymbolTableCurrentPointerOfs := SymbolTable [stTemporary].UsedSize; SymbolTable [stTemporary].UsedSize := SavedTempSymbolTableCurrentPointerOfs; DIPtr := Ptr (SymbolTable [stTemporary].Segment, ProcedureIdentifierDataPtr^.LocalIdentifiersList); SIPtr := Ptr (SymbolTable [stTemporary].Segment, SavedTempSymbolTableCurrentPointerOfs); While Ofs (SIPtr^) <> NewTempSymbolTableCurrentPointerOfs do begin Inc (SIPtr, 2); Inc (DIPtr, 2); If not IdentifiersEqual (PString (DIPtr), PString (SIPtr)) then Error (HeaderDoesNotMatchPreviousDefinition); Inc (DIPtr, Length (PString (DIPtr)^) + 1); end; end; ExpectTokenAndGetNext (TOKEN_Semicolon); ProcessProcedureDeclaractionsAndProgramBlock; end;       © 2019 Turbo Pascal | Privacy Policy
__label__pos
0.685019