What is this TParser thing and how do I use it?
By: mykle hoban
Abstract: This article explains the undocumented class TParser and implements a simple tokenizer using it.

Question

I noticed this class called TParser in Classes.hpp. It's undocumented, but it looks intriguing. What does it do and how do I use it?

Answer

TParser is a class used internally by the Delphi and C++Builder IDEs to parse DFM (form) files into a binary format. This limits its everyday usefulness a little (it expects correctly formatted, text-form DFM files), but it can still help out a lot.

TParser is perhaps a bit inappropriately named; TLexer or something similar would be better suited to its function. Unlike other parsers (by definition), TParser does not process the data grabbed from a file; it merely breaks it up into tokens, like a lexer (hence the misnomer).

Here is its definition from Classes.hpp:

class DELPHICLASS TParser;
class PASCALIMPLEMENTATION TParser : public System::TObject
{
    typedef System::TObject inherited;

private:
    TStream* FStream;
    int FOrigin;
    char *FBuffer;
    char *FBufPtr;
    char *FBufEnd;
    char *FSourcePtr;
    char *FSourceEnd;
    char *FTokenPtr;
    char *FStringPtr;
    int FSourceLine;
    char FSaveChar;
    char FToken;
    char FFloatType;
    WideString FWideStr;
    void __fastcall ReadBuffer(void);
    void __fastcall SkipBlanks(void);

public:
    __fastcall TParser(TStream* Stream);
    __fastcall virtual ~TParser(void);
    void __fastcall CheckToken(char T);
    void __fastcall CheckTokenSymbol(const AnsiString S);
    void __fastcall Error(const AnsiString Ident);
    void __fastcall ErrorFmt(const AnsiString Ident, const System::TVarRec * Args, const int Args_Size);
    void __fastcall ErrorStr(const AnsiString Message);
    void __fastcall HexToBinary(TStream* Stream);
    char __fastcall NextToken(void);
    int __fastcall SourcePos(void);
    AnsiString __fastcall TokenComponentIdent();
    Extended __fastcall TokenFloat(void);
    __int64 __fastcall TokenInt(void);
    AnsiString __fastcall TokenString();
    WideString __fastcall TokenWideString();
    bool __fastcall TokenSymbolIs(const AnsiString S);
    __property char FloatType = {read=FFloatType, nodefault};
    __property int SourceLine = {read=FSourceLine, nodefault};
    __property char Token = {read=FToken, nodefault};
};

In our example, we will use TParser to break a text file up into tokens (words). Because of the nature of TParser, this is not necessarily its best application, but it does illustrate the concepts. Here are the methods and properties of TParser that we need to concern ourselves with:

char __fastcall NextToken(void);
  Advances to the next token and tells us its type.
int __fastcall SourcePos(void);
  Tells us what position in the source file we're at.
AnsiString __fastcall TokenString();
  Returns the current token as a string. (Other functions such as TokenInt return it as different types.)
__property int SourceLine = {read=FSourceLine, nodefault};
  Tells us what line in the source file we're at.
__property char Token = {read=FToken, nodefault};
  Tells us the type of the current token.

Here is the code for the tokenizer. It should be pretty straightforward: it goes through an input stream (file), tokenizes it, and dumps the results into a Memo.
void __fastcall TForm1::Button1Click(TObject *Sender)
{
    TFileStream *fs = new TFileStream(Edit1->Text, fmOpenRead);
    fs->Position = 0;
    Memo1->Lines->Clear();
    // dump the original file
    Memo1->Lines->LoadFromStream(fs);
    Memo1->Lines->Insert(0, "-------------------");
    Memo1->Lines->Add("-------------------");
    fs->Position = 0;
    TParser *theParser = new TParser(fs);
    while (theParser->NextToken() != toEOF)  // while we're in the file
    {
        // Get token text
        AnsiString str = theParser->TokenString();
        // Get the position in the stream
        int Pos = theParser->SourcePos();
        // Get the line number
        int Line = theParser->SourceLine;
        // Get token type
        switch (theParser->Token)
        {
            case toSymbol:
                Memo1->Lines->Add(str + " is a symbol at line : " + Line + " position : " + Pos);
                break;
            case toInteger:
                Memo1->Lines->Add(str + " is an integer at line : " + Line + " position : " + Pos);
                break;
            case toFloat:
                Memo1->Lines->Add(str + " is a float at line : " + Line + " position : " + Pos);
                break;
            case toString:
                // note: TParser is designed for DFMs, so toString only works with
                // 'single quoted' strings
                Memo1->Lines->Add(str + " is a string at line : " + Line + " position : " + Pos);
                break;
        }
    }
    delete theParser;
    delete fs;
}
Home: Perl Programming Help: mod_perl:
Extract a cookie value based on a variable?

rrhill
New User
Jul 2, 2007, 5:53 PM    Views: 18646
Extract a cookie value based on a variable?

I can extract a cookie using a literal hash key name:

sub get_cookie {
    my $r = shift;
    my %cookies = CGI::Cookie->parse($r->header_in('Cookie'));
    return $cookies{"test"};
}

But I want to extract the cookie using a variable rather than a literal key name:

sub get_cookie {
    my $r = shift;
    my %cookies = CGI::Cookie->parse($r->header_in('Cookie'));
    return $cookies{$cookie_name};
}

This isn't working.

KevinR
Veteran
Jul 2, 2007, 8:39 PM    Views: 18644
Re: [rrhill] Extract a cookie value based on a variable?

Not sure what your question has to do with mod_perl, but your code looks like it should work OK. Note that $cookie_name is never declared or assigned anywhere in the version you posted. Try setting the value of $cookie_name to 'test' and see if it works:

sub get_cookie {
    my $cookie_name = 'test';
    my $r = shift;
    my %cookies = CGI::Cookie->parse($r->header_in('Cookie'));
    return $cookies{$cookie_name};
}
// Make newform 230.6.a.a in Magma, downloaded from the LMFDB on 16 October 2021. function ConvertToHeckeField(input: pass_field := false, Kf := []) if not pass_field then Kf := Rationals(); end if; return [Kf!elt[1] : elt in input]; end function; // To make the character of type GrpDrchElt, type "MakeCharacter_230_a();" function MakeCharacter_230_a() N := 230; order := 1; char_gens := [*47, 51*]; v := [*\ 1, 1*]; // chi(gens[i]) = zeta^v[i] assert SequenceToList(UnitGenerators(DirichletGroup(N))) eq char_gens; F := CyclotomicField(order); chi := DirichletCharacterFromValuesOnUnitGenerators(DirichletGroup(N,F),[F|F.1^e:e in v]); return MinimalBaseRingCharacter(chi); end function; function MakeCharacter_230_a_Hecke(Kf) return MakeCharacter_230_a(); end function; function ExtendMultiplicatively(weight, aps, character) prec := NextPrime(NthPrime(#aps)) - 1; // we will able to figure out a_0 ... a_prec primes := PrimesUpTo(prec); prime_powers := primes; assert #primes eq #aps; log_prec := Floor(Log(prec)/Log(2)); // prec < 2^(log_prec+1) F := Universe(aps); FXY := PolynomialRing(F, 2); // 1/(1 - a_p T + p^(weight - 1) * char(p) T^2) = 1 + a_p T + a_{p^2} T^2 + ... 
R := PowerSeriesRing(FXY : Precision := log_prec + 1); recursion := Coefficients(1/(1 - X*T + Y*T^2)); coeffs := [F!0: i in [1..(prec+1)]]; coeffs[1] := 1; //a_1 for i := 1 to #primes do p := primes[i]; coeffs[p] := aps[i]; b := p^(weight - 1) * F!character(p); r := 2; p_power := p * p; //deals with powers of p while p_power le prec do Append(~prime_powers, p_power); coeffs[p_power] := Evaluate(recursion[r + 1], [aps[i], b]); p_power *:= p; r +:= 1; end while; end for; Sort(~prime_powers); for pp in prime_powers do for k := 1 to Floor(prec/pp) do if GCD(k, pp) eq 1 then coeffs[pp*k] := coeffs[pp]*coeffs[k]; end if; end for; end for; return coeffs; end function; function qexpCoeffs() // To make the coeffs of the qexp of the newform in the Hecke field type "qexpCoeffs();" weight := 6; raw_aps := [*\ [*-4*], [*8*], [*-25*], [*199*], [*150*], [*-1202*], [*735*], [*-22*], [*-529*], [*-5525*], [*-95*], [*-397*], [*20633*], [*-11384*], [*1992*], [*-7349*], [*-23827*], [*-44016*], [*-37713*], [*-50057*], [*-16698*], [*-31004*], [*-70077*], [*7676*], [*-150094*], [*-95545*], [*-67960*], [*160011*], [*-119442*], [*206491*], [*310260*], [*-16060*], [*175094*], [*69059*], [*-150080*], [*-462052*], [*-211807*], [*-119454*], [*311784*], [*485184*], [*-260664*], [*111496*], [*282032*], [*-107808*], [*417830*], [*-322018*], [*1022281*], [*1073234*], [*-532220*], [*176824*], [*-442996*], [*-1306071*], [*1609806*], [*-1138182*], [*216048*], [*1597693*], [*1798545*], [*-579025*], [*-781034*], [*-261022*], [*-190871*], [*-769679*], [*551346*], [*-3258896*], [*267129*], [*2417484*], [*-1623155*], [*-1788086*], [*-1357988*], [*-2824457*], [*1299720*], [*345284*], [*4838333*], [*303566*], [*-2005284*], [*-187083*], [*4769362*], [*2607524*], [*-123598*], [*-6661939*], [*-2357662*], [*3375750*], [*2043108*], [*-546629*], [*2472432*], [*3685504*], [*908395*], [*406629*], [*6757102*], [*1218914*], [*5805027*], [*4279652*], [*-2752628*], [*9213799*], [*7864479*], [*-3721631*], [*1861506*], 
[*6781310*], [*6130108*], [*-10049466*], [*2254104*], [*3995803*], [*-9213795*], [*2922258*], [*-14427992*], [*-5784452*], [*-9120498*], [*-6637500*], [*-13253408*], [*-4348565*], [*14542408*], [*-15334906*], [*16754777*], [*13604980*], [*-13546360*], [*4500966*], [*620123*], [*-15634062*], [*2463298*], [*12853662*], [*15720274*], [*14319756*], [*-17936307*], [*20056324*], [*-7258468*], [*-9824928*], [*-13976916*], [*-12446935*], [*-12481907*], [*-8202597*], [*15612777*], [*-21483488*], [*-13902466*], [*-25915195*], [*-27707475*], [*26091880*], [*-16303818*], [*7361003*], [*11091645*], [*2303935*], [*17603001*], [*9041370*], [*8443864*], [*3109569*], [*-1211263*], [*-6712286*], [*21815322*], [*-38107274*], [*30590253*], [*-34374942*], [*-8014218*], [*-32842056*], [*1013800*], [*24044618*], [*29579791*], [*-16865352*], [*-42481356*], [*-10680205*], [*-40221334*], [*10576272*], [*29013196*], [*50764986*], [*-22684372*], [*13183964*], [*26875189*], [*-11014361*], [*-33709211*], [*20182360*], [*5779204*], [*41008368*], [*-27596148*], [*49967481*], [*-55773106*], [*-39502029*], [*-18791248*], [*-63827080*], [*-5997676*], [*-49542489*], [*1607705*], [*59589686*], [*3602646*], [*-13074936*], [*58745650*], [*15669098*], [*-78338211*], [*-56232362*], [*1695634*], [*16973617*], [*44619449*], [*16947867*], [*-4061266*], [*-64705866*], [*14574962*], [*45377097*], [*-40166432*], [*13437717*], [*-36731118*], [*-49451186*], [*27119182*], [*32502726*], [*-29322208*], [*-38401180*], [*-76743406*], [*-19116082*], [*-31190588*], [*-78419618*], [*-110828436*], [*-56171010*], [*44482689*], [*-60261772*], [*89914870*], [*35841662*], [*-73133192*], [*-65466732*], [*-12045187*], [*80177756*], [*116106794*], [*29089169*], [*84189421*], [*99343854*], [*-45238471*], [*-109639226*], [*-93502957*], [*-117065555*], [*-126129510*], [*-68606781*], [*74739886*], [*70021777*], [*100322100*], [*-74449653*], [*-124450402*], [*-98671518*], [*117811146*], [*152564283*], [*90055471*], [*150738716*], 
[*-60898924*], [*-106933350*], [*90492905*], [*111791047*], [*70169339*], [*-22074623*], [*66185518*], [*-71552946*], [*13145644*], [*-100458460*], [*-153495556*], [*-20426594*], [*70279870*], [*60966759*], [*145281390*], [*-72296290*], [*82475997*], [*155181026*], [*4669446*], [*13565009*], [*20776608*], [*111403272*], [*-87777046*], [*-20079954*], [*8418981*], [*-53458215*], [*-94868713*], [*-133446163*], [*-25511480*], [*-32036740*], [*-59257620*], [*107215098*], [*141889529*], [*-67708948*], [*-26627203*], [*45027303*], [*68533579*], [*-130666774*], [*64250852*], [*-63485566*], [*-39066108*], [*248613209*], [*105162572*], [*-54393464*], [*-162484266*], [*-191173792*], [*112994168*], [*263073684*], [*-2034802*], [*484295*], [*96378307*], [*-245491918*], [*172009288*], [*-129873150*], [*-294940482*], [*-3897867*], [*211039052*], [*-311140432*], [*173885792*], [*6401296*], [*45754006*], [*187291050*], [*-150287505*], [*-314054530*], [*119178413*], [*-39944509*], [*-159305336*], [*233745508*], [*334529828*], [*-100501148*], [*56177762*], [*-251462546*], [*181572876*], [*-48375876*], [*235506828*], [*67552270*], [*-18262980*], [*145969580*], [*66596705*], [*-62972264*], [*-262859163*], [*-160007095*], [*-297437541*], [*-184104202*], [*208452068*], [*-373484383*], [*-235696350*], [*-83700626*], [*-36374461*], [*356930044*], [*188475220*], [*-198160514*], [*-82039568*], [*120085897*], [*59698615*], [*-365847438*], [*-253155433*], [*231788594*], [*15462308*], [*222009520*], [*-163518430*], [*-18740286*], [*39402922*], [*437600831*], [*-31598418*], [*-395044470*], [*-377947486*], [*-324469768*], [*-406035467*], [*275066335*], [*65995337*], [*-198437630*], [*150854202*], [*408713829*], [*488482923*], [*-17618008*], [*126271427*], [*320152209*], [*3425448*], [*215352826*], [*438018930*], [*83510250*], [*-420842286*], [*203816262*], [*-219735189*], [*44513773*], [*-331809802*], [*277529136*], [*66236342*], [*-341457198*], [*102131412*], [*-167496205*], [*-288041012*], 
[*-125594821*], [*-256477599*], [*194789446*], [*3578160*], [*49428720*], [*-231423454*], [*-90514471*], [*218215446*], [*537328175*], [*490768344*], [*691715630*], [*120372870*], [*-81407770*], [*26629292*], [*536059546*], [*322527300*], [*-83604158*], [*364374989*], [*223770148*], [*700723687*], [*701870503*], [*174246826*], [*408494570*], [*-268818661*], [*-684194044*], [*-294419362*], [*-570243625*], [*545369254*], [*-48256*], [*-516327664*], [*292250069*], [*5467403*], [*-145585158*], [*497142869*], [*-376997335*], [*-287791701*], [*43875755*], [*-667736267*], [*432419882*], [*159401252*], [*-423074868*], [*-687385244*], [*291734338*], [*368625300*], [*-186148736*], [*-396198363*], [*-12045182*], [*-126734000*], [*606900088*], [*605188887*], [*508485692*], [*-530918954*], [*218933278*], [*-684244446*], [*-97688236*], [*-255012304*], [*-576203076*], [*-621242916*], [*101787011*], [*-190899940*], [*595943483*]*]; aps := ConvertToHeckeField(raw_aps); chi := MakeCharacter_230_a_Hecke(Universe(aps)); return ExtendMultiplicatively(weight, aps, chi); end function; // To make the newform (type ModFrm), type "MakeNewformModFrm_230_6_a_a();". // This may take a long time! To see verbose output, uncomment the SetVerbose lines below. // The precision argument determines an initial guess on how many Fourier coefficients to use. // This guess is increased enough to uniquely determine the newform. 
function MakeNewformModFrm_230_6_a_a(:prec:=1) chi := MakeCharacter_230_a(); f_vec := qexpCoeffs(); Kf := Universe(f_vec); // SetVerbose("ModularForms", true); // SetVerbose("ModularSymbols", true); S := CuspidalSubspace(ModularForms(chi, 6)); S := BaseChange(S, Kf); maxprec := NextPrime(2999) - 1; while true do trunc_vec := Vector(Kf, [0] cat [f_vec[i]: i in [1..prec]]); B := Basis(S, prec + 1); S_basismat := Matrix([AbsEltseq(g): g in B]); if Rank(S_basismat) eq Min(NumberOfRows(S_basismat), NumberOfColumns(S_basismat)) then S_basismat := ChangeRing(S_basismat,Kf); f_lincom := Solution(S_basismat,trunc_vec); f := &+[f_lincom[i]*Basis(S)[i] : i in [1..#Basis(S)]]; return f; end if; error if prec eq maxprec, "Unable to distinguish newform within newspace"; prec := Min(Ceiling(1.25 * prec), maxprec); end while; end function; // To make the Hecke irreducible modular symbols subspace (type ModSym) // containing the newform, type "MakeNewformModSym_230_6_a_a();". // This may take a long time! To see verbose output, uncomment the SetVerbose line below. function MakeNewformModSym_230_6_a_a() R := PolynomialRing(Rationals()); chi := MakeCharacter_230_a(); // SetVerbose("ModularSymbols", true); Snew := NewSubspace(CuspidalSubspace(ModularSymbols(chi,6,-1))); Vf := Kernel([<3,R![-8, 1]>],Snew); return Vf; end function;
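As an aside, the multiplicative-extension step performed by ExtendMultiplicatively above can be sketched outside Magma for the simplest case of a trivial character. This is a hypothetical companion sketch, not part of the LMFDB download: given a_p at the primes, it fills in a_{p^r} via the Hecke recursion a_{p^r} = a_p a_{p^{r-1}} - p^{k-1} a_{p^{r-2}} and a_{mn} = a_m a_n for coprime m, n.

```python
from math import gcd

def extend_multiplicatively(weight, primes, aps, prec):
    """Extend Hecke eigenvalues a_p (given at the listed primes) to a_n for
    n <= prec, assuming trivial character.  Mirrors the structure of the
    Magma ExtendMultiplicatively above, but is only a sketch."""
    coeffs = {1: 1}
    for p, ap in zip(primes, aps):
        coeffs[p] = ap
        # prime powers via the three-term recursion
        prev2, prev1, q = 1, ap, p
        while q * p <= prec:
            q *= p
            prev2, prev1 = prev1, ap * prev1 - p**(weight - 1) * prev2
            coeffs[q] = prev1
    # fill in composite n using multiplicativity on coprime factors
    prime_powers = sorted(q for q in coeffs if q > 1)
    for q in prime_powers:
        for k in range(2, prec // q + 1):
            if gcd(k, q) == 1 and k in coeffs and q * k not in coeffs:
                coeffs[q * k] = coeffs[q] * coeffs[k]
    return coeffs

# Sanity check against Ramanujan's tau (weight 12, level 1):
# tau(2) = -24, tau(3) = 252, so tau(4) = -1472 and tau(6) = -6048.
tau = extend_multiplicatively(12, [2, 3], [-24, 252], 12)
print(tau[4], tau[6])
```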
W3C home > Mailing lists > Public > [email protected] > September 2005

Re: Are there W3C definitions of presentation and content?

From: White Lynx <[email protected]>
Date: Mon, 19 Sep 2005 13:14:34 +0400
To: [email protected]
Message-Id: <[email protected]>

> I repeat, at the risk of being rude "What reason is there for
> considering href part of the content?

I would pose the question otherwise. If the LaTeX typesetting system is capable of generating hyperlinks, if XSL formatters can generate hyperlinks, if DSSSL renderers can do this, why should CSS formatters not be able to do the same? After all, CSS is a style language that is intended to present an XML/SGML document to the user, and making a page user friendly is not just changing colors and font styles; it may require generating extra notes and links, embedding external content, etc. Good CSS rendering engines de facto support some kind of linking-oriented CSS properties (for instance, Prince can actuate hyperlinks written in different XML languages, and can generate and update cross references). So the question is: why does the CSS WG not want to standardize a feature that the rest of the style/formatting languages have, and that would make the XML + CSS approach more user friendly and self-contained?

Received on Monday, 19 September 2005 09:16:17 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 27 April 2009 13:54:40 GMT
The Crystal Programming Language Forum

What happens to Tuples (stack allocated) that are inside heap memory?

Example code:

alias ItemTuple = {Int64, Int32, String, Int16}

class Item
  property i : ItemTuple

  def initialize(@i)
  end
end

class Client
  property items = Hash(Int64, Item).new
end

p1 = Client.new
p1.items[1] = Item.new( {0_i64, 25, "test", 5_i16} )
pp p1

Hash is allocated on the heap; however, Tuple is allocated on the stack. Questions:
• If stack allocated memory (Tuple) is a child of heap memory (Hash), does this mean the Tuple is now allocated on the heap?
• Using this example code, and creating lots of items, is it possible to get a stack overflow error?

If stack allocated memory (Tuple) is a child of heap memory (Hash), does this mean the Tuple is now allocated on the heap?

Yes.

Using this example code, and creating lots of items, is it possible to get a stack overflow error?

No you can’t, because it’s not in the stack. Basically, when you have a value type (like a tuple or an integer, as opposed to a reference type like a class):
• using it in a method will use some space in the stack frame of the method, because that’s where local variables are located.
• using it in a type (a struct or class) as an instance variable will use some space in that type, in the area used by the fields of that type.

When a value type is embedded in a class it will effectively be in the heap, because that’s where the class’s internal memory is. I can make a schema later if you need; I’m on the phone right now, so it would be close to impossible ^^

1 Like

I think it’s helpful to think of it like this:
• classes are reference types: they are a pointer to the actual data
• structs are value types: their data is inlined

What that means is when you do:

x = SomeClass.new

x will be a pointer pointing to the data that SomeClass holds. Something like:

x = pointer -> [...data (instance vars)...]

and [...data...] is allocated on the heap.
If you have a type that has an instance variable of type SomeClass, that instance variable is also a pointer to the data. When you assign x to that instance variable, that pointer is copied. Note that when you do x = SomeClass.new, the pointer for x is stored on the stack.

With a struct, the data is inlined:

x = SomeStruct.new

so essentially x is the data:

x = [...data...]

and x is allocated on the stack. If you have a type with an instance variable of type SomeStruct, that instance variable will not be a pointer to the data but it will have all the space for the struct. When you assign x to that instance variable, the data will be copied to that space.

So let’s say SomeClass has an instance variable s of type SomeStruct, and SomeStruct has two instance variables of type Int32. You do:

c = SomeClass.new
c.s = SomeStruct.new

it will be:

c = pointer -> [s = [x: Int32, y : Int32]]

note that everything after -> is in heap memory, which in this case will be 8 bytes. When you do c.s = SomeStruct.new, that entire data is moved (copied) into the heap.

If it’s the other way around, and SomeStruct has an instance variable of type SomeClass, which in turn has two integers:

s = SomeStruct.new
s.c = SomeClass.new

it will be like this:

s = [c = pointer -> [x: Int32, y : Int32]]

Note that s in this case has its data inlined, and because it only holds a SomeClass, which is represented as a pointer, s's data is just a pointer. The data for that pointer always lives on the heap. When you do:

c = SomeClass.new
s.c = c

the pointer for c, which lives in the stack (but the data lives in the heap) is copied from the stack to the struct (which in this case also lives in the stack).

I hope this is clear!

4 Likes

Wow, very interesting! Yes, this is very clear.

> When you do c.s = SomeStruct.new, that entire data is moved (copied) into the heap.

OHHHHHH.
So if I got this right…

{0_i64, 25, "test", 5_i16}

is stack-allocated on creation; however, it’s being stored in a heap allocated type (Hash), so it gets copied and put into heap memory. Do I have that right?

If so, when doing

pp typeof(p1.items[1].i)

does it convert the heap memory storage of i back to a stack-allocated Tuple when accessing it? So, theoretically… IF tons of items are being accessed, this could cause a stack overflow? But not when storing the items in a Hash (because they are copied and stored on the heap).

Please tell me I am close to understanding this 100%! Even if I don’t understand this 100%, I trust you guys. And you have answered my questions and it really put my mind at ease regarding ItemTuple and its possibility of stack overflow errors. Appreciate it.

If you do:

x1 = # some struct
x2 = # some struct
...

and so on, a lot of times, then yes, you will get a stack overflow. But it’s important to know that the compiler (well, LLVM) will reuse the stack. For example if you have a loop:

loop do
  x = # some struct
end

it doesn’t matter that # some struct is copied from somewhere to the stack, because it’s always copied to the same stack position. In short, I would never worry about stack overflow when using structs. Stack overflow happens when you recurse too deeply into a function.

1 Like

There are a lot of intricacies in programming, especially at the low level. I’m just not used to it all (starting my gameserver with nodejs probably didn’t help :laughing:)! With that said, I’m eager to learn and get as much information as my brain will comprehend!
C++ STL algorithm, set_symmetric_difference() program example

Compiler: Visual C++ Express Edition 2005
Compiled on Platform: Windows XP Pro SP2
Header file: Standard
Additional project setting: Set project to be compiled as C++
Project -> your_project_name Properties -> Configuration Properties -> C/C++ -> Advanced -> Compiled As: Compiled as C++ Code (/TP)
Other info: none

To do: Use the C++ set_symmetric_difference() algorithm to unite all of the elements that belong to one, but not both, of the sorted source ranges into a single, sorted destination range, where the ordering criterion may be specified by a binary predicate, in C++ programming.

To show: How to use the C++ algorithm set_symmetric_difference() for this task in C++ programming.

// C++ STL algorithm, set_symmetric_difference()
#include <vector>
#include <algorithm>
#include <functional>  // for greater<int>()
#include <iostream>
using namespace std;

// return whether modulus of elem1 is less than modulus of elem2
bool mod_lesser(int elem1, int elem2)
{
    if (elem1 < 0)
        elem1 = -elem1;
    if (elem2 < 0)
        elem2 = -elem2;
    return (elem1 < elem2);
}

int main(void)
{
    // vector containers
    vector<int> vec1a, vec1b, vec1(12);
    // vector iterators
    vector<int>::iterator Iter1a, Iter1b, Iter1, Result1;
    int i, j;

    // push data into the containers
    for (i = -4; i <= 4; i++)
        vec1a.push_back(i);
    for (j = -3; j <= 3; j++)
        vec1b.push_back(j);

    // print the data
    cout<<"Original vec1a vector with range sorted by the binary predicate less than is: ";
    for (Iter1a = vec1a.begin(); Iter1a != vec1a.end(); Iter1a++)
        cout<<*Iter1a<<" ";
    cout<<endl;
    cout<<"\nOriginal vector vec1b with range sorted by the binary predicate less than is: ";
    for (Iter1b = vec1b.begin(); Iter1b != vec1b.end(); Iter1b++)
        cout<<*Iter1b<<" ";
    cout<<endl;

    // constructing vectors vec2a & vec2b with ranges sorted by greater
    vector<int> vec2a(vec1a), vec2b(vec1b), vec2(vec1);
    vector<int>::iterator Iter2a, Iter2b, Iter2, Result2;
    sort(vec2a.begin(), vec2a.end(), greater<int>());
    sort(vec2b.begin(), vec2b.end(), greater<int>());
    cout<<"\nOriginal vector vec2a with range sorted by the binary predicate greater is: ";
    for (Iter2a = vec2a.begin(); Iter2a != vec2a.end(); Iter2a++)
        cout<<*Iter2a<<" ";
    cout<<endl;
    cout<<"\nOriginal vector vec2b with range sorted by the binary predicate greater is: ";
    for (Iter2b = vec2b.begin(); Iter2b != vec2b.end(); Iter2b++)
        cout<<*Iter2b<<" ";
    cout<<endl;

    // constructing vectors vec3a & vec3b with ranges sorted by mod_lesser()
    vector<int> vec3a(vec1a), vec3b(vec1b), vec3(vec1);
    vector<int>::iterator Iter3a, Iter3b, Iter3, Result3;
    sort(vec3a.begin(), vec3a.end(), mod_lesser);
    sort(vec3b.begin(), vec3b.end(), mod_lesser);
    cout<<"\nOriginal vec3a vector with range sorted by the binary predicate mod_lesser() is: ";
    for (Iter3a = vec3a.begin(); Iter3a != vec3a.end(); Iter3a++)
        cout<<*Iter3a<<" ";
    cout<<endl;
    cout<<"\nOriginal vec3b vector with range sorted by the binary predicate mod_lesser() is: ";
    for (Iter3b = vec3b.begin(); Iter3b != vec3b.end(); Iter3b++)
        cout<<*Iter3b<<" ";
    cout<<endl;

    // to combine into a symmetric difference in ascending order with the default binary predicate less<int>()
    Result1 = set_symmetric_difference(vec1a.begin(), vec1a.end(), vec1b.begin(), vec1b.end(), vec1.begin());
    cout<<"\nset_symmetric_difference() of source ranges with default order, vec1mod vector: ";
    for (Iter1 = vec1.begin(); Iter1 != Result1; Iter1++)
        cout<<*Iter1<<" ";
    cout<<endl;

    // to combine into a symmetric difference in descending order, specify binary predicate greater<int>()
    Result2 = set_symmetric_difference(vec2a.begin(), vec2a.end(), vec2b.begin(), vec2b.end(), vec2.begin(), greater<int>());
    cout<<"\nset_symmetric_difference() of source ranges with binary predicate greater specified, vec2mod vector: ";
    for (Iter2 = vec2.begin(); Iter2 != Result2; Iter2++)
        cout<<*Iter2<<" ";
    cout<<endl;

    // to combine into a symmetric difference applying a user defined binary predicate mod_lesser
    Result3 = set_symmetric_difference(vec3a.begin(), vec3a.end(), vec3b.begin(), vec3b.end(), vec3.begin(), mod_lesser);
    cout<<"\nset_symmetric_difference() of source ranges with binary predicate mod_lesser() specified, vec3mod vector: ";
    for (Iter3 = vec3.begin(); Iter3 != Result3; Iter3++)
        cout<<*Iter3<<" ";
    cout<<endl;
    return 0;
}

Output examples:

Original vec1a vector with range sorted by the binary predicate less than is: -4 -3 -2 -1 0 1 2 3 4

Original vector vec1b with range sorted by the binary predicate less than is: -3 -2 -1 0 1 2 3

Original vector vec2a with range sorted by the binary predicate greater is: 4 3 2 1 0 -1 -2 -3 -4

Original vector vec2b with range sorted by the binary predicate greater is: 3 2 1 0 -1 -2 -3

Original vec3a vector with range sorted by the binary predicate mod_lesser() is: 0 -1 1 -2 2 -3 3 -4 4

Original vec3b vector with range sorted by the binary predicate mod_lesser() is: 0 -1 1 -2 2 -3 3

set_symmetric_difference() of source ranges with default order, vec1mod vector: -4 4

set_symmetric_difference() of source ranges with binary predicate greater specified, vec2mod vector: 4 -4

set_symmetric_difference() of source ranges with binary predicate mod_lesser() specified, vec3mod vector: -4 4

Press any key to continue . . .
To display the DNS server of a connection, you can simply click the network icon in the status bar and select the item Connection Information.

From the console it is similarly easy, as long as you know where to look. On Ubuntu, NetworkManager takes care of configuring these settings. The DNS servers are written to the file /etc/resolv.conf for (W)LAN connections, or to the file /etc/ppp/resolv.conf for a mobile broadband stick, and are rewritten whenever the connection changes. So you can display them just as easily:

cat /etc/ppp/resolv.conf
nameserver 193.189.244.206
nameserver 193.189.244.225
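Since the file format is just one nameserver per line, the addresses are easy to extract with standard tools. The snippet below works on a sample file with illustrative addresses; on a real system you would point awk at /etc/resolv.conf or /etc/ppp/resolv.conf instead (or, where available, query NetworkManager directly with nmcli):

```shell
# Sample resolv.conf content as NetworkManager might write it
# (addresses are illustrative).
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 193.189.244.206
nameserver 193.189.244.225
EOF

# Extract just the nameserver addresses (second field of each line)
awk '/^nameserver/ { print $2 }' /tmp/resolv.conf.sample
```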
Lambda Architecture

Lambda architecture is a popular model for data processing that unites traditional batch processing and stream processing methods in the same framework. It has become a standard approach in big data for balancing latency, throughput, and fault tolerance. In short, batch processing is carried out in a batch layer over the historical dataset to obtain accurate and comprehensive views of the data on a daily or hourly basis. Simultaneously, real-time stream processing provides several views of the incoming data. The lambda architecture is divided into three major sections that together compute over the distributed data in real time.

Batch Layer in the Lambda Architecture

Initially, the data streams obtained from various sources are split and ingested into the batch layer and the speed layer simultaneously. As new data is streamed in, it is deposited into this layer and becomes part of the master dataset. This is also called a 'data lake', which acts as a historical archive holding all the data. The data in this layer is immutable: nothing changes except the appending of new information. The data is then processed in large batches on a fixed schedule to generate the various reports. The schedule can be set daily or hourly as requirements dictate. The output is typically stored in a read-only database, replacing the previously precomputed views.

Speed Layer in the Lambda Architecture

This layer is responsible for processing the continuous stream of data, without waiting for the data to be complete or cleaned up. It does not precompute over the entire dataset; instead, it incrementally stores and updates real-time views as data arrives. This layer aims to provide real-time views of the most recent data with minimal latency, since older data is covered by the batch layer.
The views produced by this layer may not be complete, and they do not take as much time to compute as those of the batch layer, because this layer works only with live data.

Serving Layer in the Lambda Architecture

The batch views and the real-time views, the outputs of the batch layer and speed layer respectively, are forwarded into this layer. The combined result is stored in a specialized distributed database that can be queried by the user in a low-latency, ad-hoc manner. This layer is responsible for responding to queries by providing the results of the precomputed calculations.

Benefits

The following features make the lambda architecture beneficial for processing big data:

Data Consistency

Lambda architecture addresses the problem of data inconsistency in a distributed system. Sequential processing and the indexing process ensure consistent data replicas across the batch and speed layers.

Scalability

This architecture is highly scalable, allowing nodes to be added or removed regardless of how much data needs to be processed.

Fault tolerance

It is also tolerant of hardware or software failures. On any failure, another node continues the workload without impacting the system's performance.

Business Agility

This architecture can process data in real time, which helps companies make crucial decisions quickly.

Applications of the Lambda Architecture

Twitter, regarded as a microblogging system, uses the lambda architecture to understand the sentiment of tweets. Crashlytics, which deals with mobile analytics, uses the lambda architecture to produce meaningful insights. The popular question-and-answer forum Stack Overflow also uses the lambda architecture; there, batch views are used to compute the analytical results for voting.
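The interplay of the three layers can be sketched with a toy in-memory model. This is purely illustrative (real deployments use dedicated systems for each layer, e.g. a distributed batch engine and a stream processor); here the "events" are page views and both views simply count them:

```python
from collections import defaultdict

class ToyLambda:
    """Toy lambda architecture: count page views per page."""

    def __init__(self):
        self.master_dataset = []               # batch layer: immutable, append-only
        self.batch_view = {}                   # recomputed on a schedule
        self.realtime_view = defaultdict(int)  # speed layer: updated incrementally

    def ingest(self, page):
        # New data is ingested into both layers simultaneously.
        self.master_dataset.append(page)
        self.realtime_view[page] += 1

    def run_batch(self):
        # Scheduled job: recompute the batch view from the whole master
        # dataset, then drop the speed layer's view of already-batched data.
        view = defaultdict(int)
        for page in self.master_dataset:
            view[page] += 1
        self.batch_view = dict(view)
        self.realtime_view.clear()

    def query(self, page):
        # Serving layer: merge the batch view with the real-time view.
        return self.batch_view.get(page, 0) + self.realtime_view[page]

arch = ToyLambda()
for page in ["home", "about", "home"]:
    arch.ingest(page)
arch.run_batch()           # batch view now covers the first three events
arch.ingest("home")        # arrives after the batch run; only in the speed layer
print(arch.query("home"))  # merges both views -> 3
```

The query result stays correct whether an event has already been batched or is still only in the speed layer, which is exactly the consistency property the serving layer provides.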
Printing list of string side by side

I am working on the first assessment of Scientific Computing with Python and I have managed to format the strings as required by the question, but now I am facing an issue with the output. I am not able to print the list contents side by side. I have formatted the provided list of strings and stored it inside a list as [' 1\n+ 1\n---\n 2', ' 2\n+ 22\n----\n 24', ' 3\n+ 333\n-----\n 336', ' 4\n+ 4444\n------\n 4448']. I tried to output the strings using ' '.join(formatted_strings) but it doesn't seem to work. I would be more than grateful if anyone can help me with this. Thanks

Please post your current code - Thanks

Here's the whole code for your reference.

def arithmetic_formatter(problems, calc=False):
    results = []
    calculated = ""
    if len(problems) > 5:
        return "Error: Too many problems."
    for problem in problems:
        if not check_valid_operators(problem):
            return "Error: Operators must be '+' or '-'."
        elif not check_valid_operands(problem):
            return "Error: Numbers must only contain digits."
        elif not check_operand_valid_length(problem):
            return "Error: Numbers cannot be more than four digits."
        if calc:
            calculated = str(
                solve(
                    int(problem.split(" ")[0]),
                    int(problem.split(" ")[2]),
                    problem.split(" ")[1],
                )
            )
            results.append(format_problem_string(problem, calculated))
        else:
            results.append(format_problem_string(problem))
    return results

def format_problem_string(problem_string, calculated=""):
    problem_split_list = problem_string.split(" ")
    formatted_string = ""
    upper_spaces, lower_spaces, result_spaces, dashes = 0, 0, 0, 0
    if len(problem_split_list[0]) >= len(problem_split_list[2]):
        upper_spaces = 2
        dashes = len(problem_split_list[0]) + 2
        lower_spaces = dashes - len(problem_split_list[2]) - 1
        if len(calculated) == dashes:
            result_spaces = 0
        elif len(calculated) < dashes:
            result_spaces = dashes - len(calculated)
        formatted_string = f"{(' ' * upper_spaces)}{problem_split_list[0]}\n{problem_split_list[1]}{(' ' * lower_spaces)}{problem_split_list[2]}\n{('-' * dashes)}\n{(' ' * result_spaces)}{calculated}"
    else:
        dashes = len(problem_split_list[2]) + 2
        lower_spaces = dashes - len(problem_split_list[2]) - 1
        upper_spaces = dashes - len(problem_split_list[0])
        if len(calculated) == dashes:
            result_spaces = 0
        elif len(calculated) < dashes:
            result_spaces = dashes - len(calculated)
        formatted_string = f"{(' ' * upper_spaces)}{problem_split_list[0]}\n{problem_split_list[1]}{(' ' * lower_spaces)}{problem_split_list[2]}\n{('-' * dashes)}\n{(' ' * result_spaces)}{calculated}"
    return formatted_string

def solve(num1, num2, operand):
    if operand == "+":
        return num1 + num2
    elif operand == "-":
        return num1 - num2

def check_valid_operators(problem):
    return problem.find("+") != -1 or problem.find("-") != -1

def check_valid_operands(problem):
    return problem.split(" ")[0].isdigit() and problem.split(" ")[2].isdigit()

def check_operand_valid_length(problem):
    return len(problem.split(" ")[0]) <= 4 and len(problem.split(" ")[2]) <= 4

print(arithmetic_formatter(["1 + 1", "2 + 22", "3 + 333", "4 + 4444"], True))

Generally speaking: You should call the arithmetic_arranger function from main.py
and call test data from main.py as well. You can see a test call there already:

print(arithmetic_arranger(['3801 - 2', '123 + 49']))

For that to work you need to call the function arithmetic_arranger so that the import works:

from arithmetic_arranger import arithmetic_arranger

Don't test from within arithmetic_arranger.py as you do here:

print(arithmetic_formatter(["1 + 1", "2 + 22", "3 + 333", "4 + 4444"], True))

Start by renaming your function back to arithmetic_arranger (not formatter) or none of the tests are going to work and you won't be able to complete this. The tests will also give you valuable feedback on how to structure your output. Once that's done, this might be useful for you:

An assertion error gives you a lot of information to track down a problem. For example:

AssertionError: 'Year' != 'Years'
- Year
+ Years
?     +

Your output comes first, and the output that the test expects is second: your output, Year, does not equal what's expected, Years.

- Year      the - dash indicates the incorrect output
+ Years     the + plus shows what it should be
?     +     the ? question mark line indicates the place of the character that's different between the two lines; here a + is placed under the missing s

Here's another example:

E  AssertionError: Expected different output when calling "arithmetic_arranger()" with ["3801 - 2", "123 + 49"]
E  assert ' 3801 123 \n - 2 + 49 \n------ ----- \n' == ' 3801 123\n- 2 + 49\n------ -----'
E  - 3801 123
E  + 3801 123
E  ? ++++
E  - - 2 + 49
E  + - 2 + 49
E  - ------ -----
E  + ------ -----
E  ? +++++

The first line is long, and it helps to view it as 2 lines in fixed-width characters, so you can compare it character by character:

' 3801 123 \n - 2 + 49 \n------ ----- \n'
' 3801 123\n- 2 + 49\n------ -----'

Again, your output is first and the expected output is second. Here it's easy to see extra spaces or \n characters.

E  - 3801 123
E  + 3801 123
E  ? ++++

Here the ? line indicates 4 extra spaces at the end of a line using four + symbols. Spaces are a little difficult to see this way, so it's useful to use both formats together. I hope this helps interpret your error!

I don't think you will be able to "print a list of strings side by side" as you put it. Rather you need to break them up and think "line by line" like a dot matrix printer. Your first line will be the 1st element of each problem:

 1     2     3

Your second line will be the operator and 2nd element:

+ 1    + 22    + 33

and the last line:

------    -----    -----

etc.

1 Like

Hey @pkdvalis. This is a problem from my side, as I didn't explain my code: the whole code here is written locally on my machine, not in the reply, which is why it is really different from how it should be. Regarding your first suggestion on the code, I have taken care of that; I just had the issue with the side-by-side formatting of my code. But thanks for taking the time to go through my whole code and explaining all the things in detail. I appreciate it.

1 Like

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.
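Following the dot-matrix "line by line" hint from the answer above, one way to sketch the side-by-side printing is to split each formatted problem on newlines and join the corresponding pieces row by row. The list below is the one from the question; the four-space separator is an assumption about the expected layout, not something fixed by this thread.

```python
formatted_strings = [
    ' 1\n+ 1\n---\n 2',
    ' 2\n+ 22\n----\n 24',
    ' 3\n+ 333\n-----\n 336',
    ' 4\n+ 4444\n------\n 4448',
]

# Split every problem into its lines, then zip the first lines together,
# the second lines together, and so on -- printing "line by line".
rows = zip(*(s.split("\n") for s in formatted_strings))
side_by_side = "\n".join("    ".join(row) for row in rows)
print(side_by_side)
```

The plain `' '.join(formatted_strings)` from the question fails because it joins whole multi-line strings end to end; zipping per line is what lays the problems out next to each other.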
The same distinct command, the result sometimes contains null and sometimes does not

version: 6.0.12, 4.2.8

db.t1.insert({id:1})
db.t1.insert({id:2})
db.t1.insert({})
db.t1.insert({id:2})

db.t1.find()
{ "_id" : ObjectId("65a62118842d8bddd7ef9699"), "id" : 1 }
{ "_id" : ObjectId("65a62119842d8bddd7ef969a"), "id" : 2 }
{ "_id" : ObjectId("65a6211c842d8bddd7ef969b") }
{ "_id" : ObjectId("65a6211f842d8bddd7ef969c"), "id" : 2 }

// There are no nulls in the result set
db.t1.distinct('id')
[ 1, 2 ]

// After creating the index, there are nulls in the result set
db.t1.createIndex({id:1})
{ "numIndexesBefore" : 1, "numIndexesAfter" : 2, "createdCollectionAutomatically" : false, "ok" : 1 }
db.t1.distinct('id')
[ null, 1, 2 ]

After adding the index, null values appeared in the result set, causing the application processing logic to change. I would like to ask if this is a bug.

Hello, welcome to the MongoDB community! This is actually not a bug. You need to create a unique index so that null values are not included in your index. The only unique field by default is _id.

db.t1.createIndex({id:1}, { unique: true })

https://www.mongodb.com/docs/manual/core/index-unique/#:~:text=A%20unique%20index%20ensures%20that,the%20creation%20of%20a%20collection.

While a unique index would work to ensure nulls are not in the distinct() result set, it would prevent adding documents with the same id. In the sample documents supplied there are 2 documents with id:2, so creating a unique index would fail with an E11000 duplicate key error. The appropriate index would be a partial index:

db.t1.createIndex({id:1},{partialFilterExpression:{id:{$exists:true}}})

1 Like

It's true Steve, I didn't notice the ids being the same. Thanks for the correction. Saru mo ki kara ochiru

1 Like

Thanks for the reply.

db.t1.createIndex({id:1},{partialFilterExpression:{id:{$exists:true}}})

This will indeed avoid null being included in the result, but it also brings a new problem. When I query for id:null, the index cannot be used.
db.t1.explain().find({id:null})

I personally think that the result set is different just because of the addition of the index. This should be considered a bug. It clearly is.

I think of it the other way. Is id:null a value you want to find or not? If it is, then do not use the partial index and accept the fact that null is a distinct value. If you do not accept the fact that null is a distinct value, then you should not try to find( {id:null} ). Anyway, nothing stops you from having another index with {partialFilterExpression:{id:null}}. So there is a solution. In any case, for me the following should be true:

• looking for all the distinct values should return me all the documents, so I expect null to be there

Now that you know how it works, make it work for you. I am quite happy with the way it works. I design using the specs I am given. First and foremost, Have Fun.

I agree with this point of view, but the current situation is that without the index there is no null in the result set. For now, I can only adapt to it.

db.foo.drop()
db.foo.insertMany([ { a: 1 }, { a: 1 }, { }, { a: 2 } ])
db.runCommand({ distinct: "foo", key: "a" }).values // [ 1, 2 ]
db.foo.createIndex({ a: 1 })
db.runCommand({ distinct: "foo", key: "a" }).values // [ null, 1, 2 ]

This appears to be a behavioral difference between how null values are returned by the distinct command when a COLLSCAN plan is used vs. when an IXSCAN plan is used. I've filed SERVER-85298 to follow up on this further.

1 Like

Edge cases like that are always interesting because they make you think and are often an occasion to learn.
So I tried to find an alternative that works the same with or without the index, and the aggregation framework came to the rescue. I found that

db.foo.aggregate( { $group : { _id : "$a"}})

gives

{ _id: 1 }
{ _id: null }
{ _id: 2 }

without the index, and gives

{ _id: null }
{ _id: 1 }
{ _id: 2 }

with the index. The explain plan confirms the index is used by $group once it is created:

stage: 'DISTINCT_SCAN',
keyPattern: { a: 1 },
indexName: 'a_1',

I believe the issue here is the usage of the distinct command, not whether or not there are other ways to surface a list of distinct values (though it's good to showcase the power/utility of aggregation here).
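The two result sets seen in this thread can be reproduced in plain Python. This is an illustration only, not MongoDB code; it just makes the "missing field contributes a null index entry" difference explicit.

```python
docs = [{"id": 1}, {"id": 2}, {}, {"id": 2}]

# What the index-backed (IXSCAN) plan returned: a document without the
# field contributes a null entry to the index, so None shows up.
with_null = sorted({d.get("id") for d in docs},
                   key=lambda v: (v is not None, v))

# What the collection-scan (COLLSCAN) plan returned: documents without
# the field simply produce no value.
without_null = sorted({d["id"] for d in docs if "id" in d})

print(with_null)     # [None, 1, 2]
print(without_null)  # [1, 2]
```

The sort key `(v is not None, v)` is just a trick to place `None` first, mirroring how null sorts before numbers in MongoDB's BSON comparison order.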
mcupryk

I have the following with an ADO.NET routine called.

Public Sub DoModify()
    Dim bm As BindingManagerBase = Me.DataGrid1.BindingContext(Me.DataGrid1.DataSource, Me.DataGrid1.DataMember)
    Dim dr As DataRow = CType(bm.Current, DataRowView).Row
    Dim editform As New EditTransOverride(dr)
    oldPolicyNumber = dr.Item(1)
    oldTransCode = dr.Item(2)
    oldTransEffDate = dr.Item(3)
    Dim retval As DialogResult = editform.ShowDialog()
    If retval = DialogResult.OK Then
        bm.EndCurrentEdit()
        Try
            Dim substr As String = dr.Item(4)
            If substr Is System.DBNull.Value Then
                substr = ""
            End If
            If Not substr = "" Then
                substr = substr.Substring(0, 2)
            End If
            ExecOnTransOverride.upd(dr.Item(1), dr.Item(2), dr.Item(3), substr, dr.Item(5), dr.Item(6), dr.Item(8), dr.Item(7), DateTime.Now, dr.Item(7), dr.Item(9), oldPolicyNumber, oldTransCode, oldTransEffDate)
            SqlDataAdapter1.Update(ds, "DsTransOverride1")
            ds.Tables("DsTransOverride1").AcceptChanges()
            MsgBox("Data Inserted Successfully !", MsgBoxStyle.Information, Me.Text)
        Catch se As SqlException
            MessageBox.Show(se.Message)
        Catch ex As Exception
            MessageBox.Show(ex.Message)
        End Try
    Else
        bm.CancelCurrentEdit()
    End If
End Sub

Now I would like to update the datagrid with the use of

    SqlDataAdapter1.Update(ds, "DsTransOverride1")
    ds.Tables("DsTransOverride1").AcceptChanges()

and use my ADO.NET routine, since I tested it and it works and updates the database. The only thing I need to do in the above routine is to update the datagrid. How do I go about doing this?
Maybe my question has been asked several times, but... I have the following code:

abstract class ParentClass {
    protected static $count = 0;

    public static function inc() {
        static::$count++;
    }

    public static function getCount() {
        return static::$count;
    }
}

class FirstChild extends ParentClass {
}

class SecondChild extends ParentClass {
}

And I use it just like this:

FirstChild::inc();
echo SecondChild::getCount();

It shows me "1". And as you probably guess I need "0" :) I see two ways:

1. Adding protected static $count=0; to each derivative class.
2. Making $count not an integer but an array, and doing this sort of thing in the inc and getCount methods: static::$count[get_called_class()]++; and return static::$count[get_called_class()];

But I think these ways are a bit ugly. The first makes me copy/paste, which I'd like to avoid. The second - well, I don't know :) I just don't like it. Is there a better way to achieve what I want? Thanks in advance.

And why not use non-statics? – JvdBerg Oct 2 '12 at 12:24

Accepted answer:

No, you have exactly laid out the two most practical options to address this. PHP cannot work magic; when you declare a static protected property you get exactly that: one property. If the syntax you give did work that might be good news for everyone who needs to do that, but it would be absolutely horrible news for everyone else who expects PHP OOP to behave in a somewhat sane manner.

And for the record, if you don't need a separate counter for all derived classes without exception then I consider the explicit protected static $count = 0 in derived classes that do need one to be a beautiful solution: you want your own counter, you ask for one, and that fact remains written in the code for everyone to see.
I think for what you're trying to do, having just an interface method (such as getCount()) in the abstract class and the counter in the derivative class (your first choice) is the least worst option. It doesn't make sense to have an abstract static count in the parent class if you're not counting all instances for that count. In general, I think the whole idea is a bit ugly, thus implementations would be ugly, too :)
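As a cross-language footnote (Python, not PHP; just to illustrate the per-class-counter idea): Python's attribute lookup happens to give each subclass its own counter automatically, because augmented assignment through the class object writes a subclass-local attribute on the first increment.

```python
class ParentClass:
    count = 0

    @classmethod
    def inc(cls):
        # 'cls.count += 1' reads count via normal attribute lookup (falling
        # back to ParentClass), then writes a new 'count' attribute on the
        # subclass itself -- so each subclass diverges on first write.
        cls.count += 1


class FirstChild(ParentClass):
    pass


class SecondChild(ParentClass):
    pass


FirstChild.inc()
print(FirstChild.count, SecondChild.count)  # 1 0
```

This is the behavior the asker wants, without copy/paste; PHP's `static::` late static binding shares the single property instead, which is exactly the difference the accepted answer describes.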
Need help finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ which allows me to show that the series $\sum_{k=1}^\infty (\sqrt{k+1} - \sqrt{k})$ is divergent.

5 Answers

Do you need a minorant? Just consider the partial sums. By telescoping you see that $\sum\limits_{k=1}^n \sqrt{k+1} - \sqrt k = \sqrt{n+1} - 1$, which diverges.

You should observe that your series telescopes, i.e.:
$$\sum_{k=0}^n (\sqrt{k+1} - \sqrt{k}) = (\sqrt{1} -\sqrt{0}) + (\sqrt{2} -\sqrt{1}) +\cdots + (\sqrt{n}-\sqrt{n-1}) +(\sqrt{n+1}-\sqrt{n}) = \sqrt{n+1}-1\; ,$$
and therefore:
$$\sum_{k=0}^\infty (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \sum_{k=0}^n (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \sqrt{n+1}-1 = \infty\; .$$

Since
$$\sum_{k=1}^n (\sqrt{k+1} - \sqrt{k}) =\sum_{k=1}^n \left((\sqrt{k+1} - \sqrt{k})\,\frac{\sqrt{k+1} + \sqrt{k}}{\sqrt{k+1} + \sqrt{k}}\right)$$
$$ =\sum_{k=1}^n \frac{1}{\sqrt{k+1} + \sqrt{k}} \geq \frac{1}{2}\sum_{k=1}^n \frac{1}{\sqrt{k+1}} \geq \frac{1}{2}\sum_{k=1}^n \frac{1}{k+1}, $$
the series does not converge, but the telescoping argument is much simpler.

If you do want a minorant, then you can use
$$ \sqrt{k+1}-\sqrt k = \int_k^{k+1} \frac1{2\sqrt x}\, dx > \int_k^{k+1} \frac1{2\sqrt {k+1}}\, dx = \frac1{2\sqrt {k+1}}. $$

Note that
$$ \sqrt{k+1}-\sqrt{k}=\frac{1}{\sqrt{k+1}+\sqrt{k}}>\frac{1}{2\sqrt{k+1}} $$

This might be similar to miracle173's answer, but I have trouble parsing that answer. – robjohn Mar 26 '12 at 19:33
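A quick numerical sanity check of the telescoping identity $\sum_{k=1}^n(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1$ from the answers above, sketched in Python (an illustration only, not part of the original thread):

```python
import math

n = 10_000
partial_sum = sum(math.sqrt(k + 1) - math.sqrt(k) for k in range(1, n + 1))

# Telescoping: every intermediate term cancels, leaving sqrt(n+1) - sqrt(1).
closed_form = math.sqrt(n + 1) - 1

print(partial_sum, closed_form)  # both approximately 99.005
```

Increasing `n` makes both values grow without bound, which is the divergence the question asks about.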
Autogenerating documentation - 7.1 Talend Big Data Studio User Guide

Version: 7.1
Language: English (United States)
Product: Talend Big Data
Module: Talend Studio
Content: Design and Development

In Talend Studio, you can set permanent parameters for the auto-documentation to be generated every time a Job is created, saved or updated.

To set up the automatic generation of a Job/Joblet documentation, proceed as follows:

1. Click Window > Preferences > Talend > Documentation.
2. Select the check box Automatic update of corresponding documentation of Job/joblet.
3. Click OK if no further customizing of the generated documentation is required.

Now every time a Job is created, saved or updated, the related documentation is generated. The generated documents are shown directly in the Documentation folder of the Repository tree view.

To open automatically generated Job/Joblet documentation, proceed as follows:

1. Click the Documentation node in the Repository tree view.
2. Then look into the Generated folder where all Jobs and Joblets auto-documentation is stored.
3. Double-click the relevant Job or Joblet label to open the corresponding HTML file as a new tab in the design workspace. This documentation gathers all information related to the Job or Joblet.

You can then export the documentation in an archive file as HTML and PDF:

1. In the Repository tree view, right-click the relevant documentation you want to export.
2. Select Export documentation to open the export wizard.
3. Browse to the destination location for the archive to create.
4. Click Finish to export.

The archive file contains all needed files for the HTML to be viewed in any web browser. In addition, you can customize the autogenerated documentation using your own logo and company name with different CSS (Cascading Style Sheets) styles. The destination folder for HTML will contain the HTML file, a CSS file, an XML file and a pictures folder. To do so:

1. Go to Window > Preferences > Talend > Documentation.
2. In the User Doc Logo field, browse to the image file of your company logo in order to use it on all auto-generated documentation.
3. In the Company Name field, type in your company name.
4. Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if you need to use a CSS file.
5. In the CSS File field, browse to, or enter the path to, the CSS file to be used.
6. Click Apply and then OK.
If you read this file _as_is_, just ignore the funny characters you see. It is written in the POD format (see pod/perlpod.pod) which is specially designed to be readable as is.

=head1 NAME

README.haiku - Perl version 5.10+ on Haiku

=head1 DESCRIPTION

This file contains instructions on how to build Perl for Haiku and lists known problems.

=head1 BUILD AND INSTALL

The build procedure is completely standard:

  ./Configure -de
  make
  make install

Make perl executable and create a symlink for libperl:

  chmod a+x /boot/common/bin/perl
  cd /boot/common/lib; ln -s perl5/5.10.1/BePC-haiku/CORE/libperl.so .

Replace C<5.10.1> with your respective version of Perl.

=head1 KNOWN PROBLEMS

The following problems are encountered with Haiku revision 28311:

=over 4

=item *

Perl cannot be compiled with threading support ATM.

=item *

The C test fails. More precisely: the subtests using datagram sockets fail. Unix datagram sockets aren't implemented in Haiku yet.

=item *

A subtest of the C test fails. This is due to Haiku not implementing C support yet.

=item *

The tests C and C fail. This is due to bugs in Haiku's network stack implementation.

=back

=head1 CONTACT

For Haiku specific problems contact the HaikuPorts developers: http://ports.haiku-files.org/

The initial Haiku port was done by Ingo Weinhold .

Last update: 2008-10-29
I am reading the paper "A Semidefinite Optimization Approach to Quadratic Fractional Optimization with a Strictly Constraints" by Maziar Salahi & Saeed Fallahi. In this paper, they tried to prove that if the Slater condition holds for the primal SDP problem:
\begin{equation*}
\begin{aligned}
\min_{X}~& \mathrm{tr}(C^\top X)\\
\text{s.t.}~& \mathrm{tr}(A^\top X) = 1\\
& \mathrm{tr}(B^\top X) \leq 0 \\
& X \succeq 0
\end{aligned}
\end{equation*}
as well as for its dual problem:
\begin{equation*}
\begin{aligned}
\max_{\lambda,\eta}~& \eta\\
\text{s.t.}~& C^\top -\eta A^\top + \lambda B^\top = Z\\
& \lambda\geq 0 \\
& Z \succeq 0
\end{aligned}
\end{equation*}
then both problems attain their optimal values and the duality gap is zero. Because the trace is a linear function, in order to show strong duality, I know one has to show that the Slater condition holds for the primal problem. My question is: why should I also have to show that the Slater condition holds for the dual problem?

If either the primal or the dual satisfies Slater's condition, strong duality holds. However, the problem for which Slater's condition holds can still be unbounded, so you cannot conclude that "both problems attain their optimal values". To show that the primal problem is bounded you could give a feasible point for the dual or vice versa. That is not the only way. Instead, you can conclude that the primal is bounded if $C$ is positive semidefinite.

Thank you very much for your reply. I was wondering, is "if either the primal or the dual satisfies Slater's condition, strong duality holds" only correct for LP problems? Because in most textbooks, they only require that the primal problem satisfies Slater's condition. – Stephen Ge Feb 5 at 4:02

@StephenGe it is correct for any convex problem.
If the primal problem is closed, it is the dual of the dual problem, so you could apply the textbook argument to the dual by treating it as the primal. – LinAlg Feb 5 at 14:39

Thank you again, LinAlg! I was wondering, are there any recommended textbooks or websites about how to prove this "dual of dual is primal" statement? – Stephen Ge Feb 6 at 15:33

@StephenGe I quickly checked, and surprisingly most textbooks only show it for specific problems. You could check out this answer though. – LinAlg Feb 6 at 17:17
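For reference, here is a sketch of how the stated dual arises from the primal via the standard Lagrangian argument (notation as in the question; this is the textbook derivation, not taken from the paper):

```latex
L(X,\eta,\lambda)
  = \mathrm{tr}(C^\top X) - \eta\,\big(\mathrm{tr}(A^\top X) - 1\big) + \lambda\,\mathrm{tr}(B^\top X)
  = \eta + \mathrm{tr}\big((C - \eta A + \lambda B)^\top X\big).
```

Minimizing over $X \succeq 0$ yields $\eta$ precisely when $C - \eta A + \lambda B = Z \succeq 0$ (and $-\infty$ otherwise), so maximizing this bound over $\lambda \geq 0$ gives exactly the dual stated in the question, and weak duality says the dual optimum is at most the primal optimum.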
Data anonymization steps

• Add noise to date: replace the date with a new random date near the original value to anonymize your date values.
• Add noise to number: replace the original value with a newly generated number that is near the original to anonymize numeric data.
• Mask text: mask parts of text to hide personal or private data.
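A minimal sketch of what these three steps do, in plain Python. The function names, parameter ranges, and masking rule are my own assumptions for illustration, not tied to any specific product.

```python
import random
from datetime import date, timedelta

def add_noise_to_date(d: date, max_days: int = 30) -> date:
    """Replace a date with a random date near the original value."""
    return d + timedelta(days=random.randint(-max_days, max_days))

def add_noise_to_number(x: float, scale: float = 0.1) -> float:
    """Replace a number with a newly generated value near the original."""
    return x + random.uniform(-scale, scale) * abs(x)

def mask_text(s: str, keep: int = 2) -> str:
    """Mask all but the first `keep` characters to hide personal data."""
    return s[:keep] + "*" * max(0, len(s) - keep)

print(mask_text("Jonathan"))  # Jo******
```

Note that noise-based anonymization only blurs individual values; how much noise is enough to prevent re-identification depends on the dataset and the threat model.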
How do you resolve a circular reference in Spring?

The workarounds:

1. Redesign. When you have a circular dependency, it's likely you have a design problem and the responsibilities are not well separated.
2. Use @Lazy.
3. Use setter/field injection.
4. Use @PostConstruct.
5. Implement ApplicationContextAware and InitializingBean.

What is a circular dependency in Spring Boot?

Circular dependencies are the scenarios when two or more beans try to inject each other via constructor. Spring faces the same problem when it has to inject circular dependencies via constructor, and it throws BeanCurrentlyInCreationException in that situation.

How do I resolve BeanCurrentlyInCreationException?

For dependencies via constructor, Spring just throws BeanCurrentlyInCreationException. To resolve this exception, set lazy-init to true for the bean which depends on others via the constructor-arg way.

What is bean autowiring in the Spring framework?

The autowiring feature of the Spring framework enables you to inject object dependencies implicitly. It internally uses setter or constructor injection. Autowiring can't be used to inject primitive and string values; it works with references only.

Why are circular dependencies bad?

Cyclic dependencies between components inhibit understanding, testing, and reuse (you need to understand both components to use either). This makes the system less maintainable because understanding the code is harder. Cyclic dependencies can cause unwanted side effects in a software system.

How do I initialize a Spring bean?

1. Reload method in the bean. Create a method in your bean which will update/reload its properties.
2. Delete & register bean in registry. You can use DefaultSingletonBeanRegistry to remove & re-register your bean.
3. @RefreshScope. Useful for refreshing bean value properties from config changes.
Is a circular reference bad?

Circular references aren't a bad thing in themselves: you can use them to achieve complex calculations that are otherwise impossible to do, but first you must set them up properly, for example when you want to perform a calculation for which you need the last result to be a new input value for the calculation.
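Stepping back to the constructor-injection cycle described above: the deadlock is easy to see in any language. Here is a minimal sketch in Python (not Spring; the class names are made up) of why constructor-only wiring cannot satisfy a cycle, and how deferring one side, as @Lazy or setter injection effectively does, breaks it:

```python
class Engine:
    def __init__(self):
        self.dashboard = None  # field/setter injection: filled in later

class Dashboard:
    def __init__(self, engine):
        self.engine = engine   # constructor injection: needed up front

# Pure constructor injection would be unsatisfiable here: Engine would need a
# finished Dashboard to be built, and Dashboard would need a finished Engine.
# Deferring one side of the cycle breaks the deadlock:
engine = Engine()
dashboard = Dashboard(engine)
engine.dashboard = dashboard

print(engine.dashboard is dashboard, dashboard.engine is engine)  # True True
```

This is the same three-phase construction a container performs when one bean in the cycle uses setter or field injection: instantiate, wire, then expose.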
A Deeper Look into Markdown: Its Purpose and Limitations

Exploring the benefits and limitations of Markdown.

Unstructured Text

In 2004 John Gruber (with a lot of help from Aaron Swartz) created Markdown to address the need for a simple, easy-to-read, and easy-to-write plain text format that could be converted to HTML. Gruber states in his blog:

Markdown's syntax is intended for one purpose: to be used as a format for writing for the web.

Gruber also wanted Markdown to be a human-readable markup language that focused on the content itself rather than the presentation or formatting. Markdown was designed to allow users to write text that would be easily readable in its raw form (while also providing basic formatting when rendered to HTML). Gruber states,

The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions.

At its core, Markdown syntax is a form of codified text. It employs a set of symbols and syntax rules to represent formatting and structure within a plain text document. For example:

# Heading H1
* Bullet point
**Emphasis**
[Link text](https://www.example.com)

Adding structural information like this in a plain text document has sometimes led people to refer to Markdown as "structured text." Although "structured text" has a somewhat technical definition, the term can be misleading for laypeople. What is essential to recognize is that Markdown often falls short in providing the necessary semantic richness and hierarchical representation for many use cases.
Tables, for example, look pretty awful and are finicky to create beyond the simplest of examples:

| Column 1 | Column 2 | Column 3 |
| -------- | -------- | -------- |
| Row 1-1  | Row 1-2  | Row 1-3  |
| Row 2-1  | Row 2-2  | Row 2-3  |

To overcome these limitations, Markdown requires any formatting not supported by its syntax to be created in HTML, or in a hybrid of HTML and Markdown. A nested table, for example, might look like this:

| Column 1 | Column 2 |
| -------- | ----------------------------- |
| Row 1-1  | <table><tr><td>Nested 1-1</td><td>Nested 1-2</td></tr><tr><td>Nested 2-1</td><td>Nested 2-2</td></tr></table> |
| Row 2-1  | Row 2-2 |

Despite its initial design and limited intent, Markdown has experienced far wider implementation than originally intended. At inception the target users were early web folks who had limited tools for creating simple content blocks on the web; today, however, it is mainly used by developers. Its popularity amongst developers has also led to Markdown being implemented for use cases where it might not be the most suitable solution. In this article, we will take a closer look at the drawbacks of Markdown and why it might not be the best choice for all but a limited set of use cases. We will break down these limitations into two use cases: 1) as a human-readable format that allows users to easily write and understand 'structured' content in plain text, and 2) as a source for rendering content to other formats, such as HTML.

Limitations of using Markdown as a structured plain text format

There are limitations that arise when using Markdown even for this 'simple' purpose.

a. Inconsistent Implementations: There are numerous variations of Markdown, each with its own syntax.
While this inconsistency might not pose a significant problem for human readers, as they can adapt to minor changes when reading Markdown documents in plain text and understand their structural intent, it is more challenging for machines to handle such inconsistencies. This issue mainly arises when rendering Markdown into other formats, rather than affecting human readability (see next section). Additionally, when working with multiple systems employing different Markdown syntax, users may frequently need to consult the Markdown documentation for that particular implementation while creating content. This process can interrupt the 'flow' of the writing process. Supporting a 'free flow' is one of Markdown's original design goals ("makes writing entries so much easier and fun" - Aaron Swartz), and the need to look up syntax documentation can be a drawback for users who seek a smooth writing experience.

b. Limited Formatting: Markdown's simplicity, while a strength in some cases, can also be a limitation when it comes to more advanced formatting needs. Markdown supports only basic formatting options such as bold, italic, lists, and headings. For users who require more formatting beyond the very basic, Markdown's capabilities will prove insufficient. As shown in the nested table example above, writing Markdown can soon become messy and it can be very irritating to do correctly.

c. Not an Intuitive Syntax for new/non-technical users: Markdown's syntax is not always intuitive to new or non-technical users. This can result in a learning curve for those who are new to the language, as well as difficulties remembering the correct syntax for more infrequent users.

d. Cognitive Drag when Reading: Some elements of Markdown's syntax may be unclear when reading, potentially leading to confusion for users even when they are familiar with the specific type of Markdown being used and it is applied correctly.
For instance, the use of asterisks (*) for both bold and italic text may introduce ambiguity during reading or writing. This is a trite but illustrative example:

The quick *brown fox* **jumps** *over* the lazy **dog**

e. Challenges in Reading Long Texts: Markdown's minimalistic approach can pose difficulties when it comes to reading lengthy documents. The absence of varying font sizes and styles for headings, paragraphs, and other structural elements means that Markdown doesn't provide easily distinguishable visual cues about the content's organization and hierarchy.

Limitations of using Markdown to render to another format

While Markdown can be converted to other formats, especially HTML (as was the original intent), this process introduces its own set of challenges:

a. Limited Formatting: Markdown was designed for simplicity, which consequently means that it lacks some advanced formatting options. While it is possible to incorporate HTML for more complex formatting (eg. nested tables) into a Markdown document, doing so makes the content pretty unreadable and defeats the purpose of using Markdown for its simplicity for both reading and writing.

b. Limited support for rich media elements, like video: Markdown was designed to be simple and human-readable, focusing primarily on text formatting. As a result, it doesn't natively support embedding rich media elements like video. Due to this limitation, users cannot be sure if the syntax is correct until the document is rendered (which also defeats Gruber's original design aim of easily knowing if a text is correct before it is rendered). To address this issue, some Markdown systems employ graphical user interfaces for adding video to a Markdown document. This workaround highlights the inherent limitation in the Markdown paradigm when it comes to accommodating more complex content types. It illustrates that the paradigm itself is inadequate for modern content requirements.

c. Absence of Standardization: Markdown faces challenges due to the lack of an official specification, resulting in inconsistencies between parsers. While the CommonMark project seeks to address this issue, it has not yet achieved universal acceptance as a standard. This inconsistency poses a real problem, as the results may vary depending on the type of Markdown used and the parser employed, leading to unexpected outcomes.

d. Difficulty in styling / dynamically manipulating: When rendering to HTML, Markdown does not provide a straightforward way to assign HTML classes or ids to content elements, which can make it challenging to apply custom CSS styles or JavaScript manipulations when rendering the content.

e. Real-Time Rendering Issues: When using a side-by-side renderer or inline rendering approach, where the content is re-rendered with every keystroke, Markdown can present some challenges. In such cases, the content might initially be interpreted as one semantic feature and rendered accordingly, but with subsequent keystrokes, the meaning changes and a new type of format needs to be rendered. This can lead to considerable visual fluctuations in the content as you type, which can be disconcerting for the user.

Good Uses of Markdown

Markdown can be an excellent choice for various applications where simplicity, ease of use, and readability are crucial factors. Its lightweight nature makes it ideal for quick content creation, documentation, and note-taking, among other uses. It is also a useful format when the production environment does not support other formatting methods (eg. a command line interface). In these scenarios, Markdown's strengths are evident, and its limitations become less of a concern. Here are some examples of situations where Markdown is a suitable choice:

1. Developer workflows: Markdown is commonly used for README files and inline code documentation by developers.
Its compatibility with version control systems like Git makes collaboration and version tracking effortless. In other words, Markdown is a great tool for developers because it suits their workflow.

2. Developer-friendly syntax: Developers, accustomed to writing raw syntax, find Markdown's simplicity appealing. Its straightforward nature allows them to create clear and concise instructions with ease.

3. Formatting from the command line: For users (mostly developers) working in command-line interfaces, Markdown offers a simple way to create formatted text without the need for dedicated text editors or cumbersome graphical user interfaces. This is particularly useful for developers and system administrators who frequently work in terminal environments and need a lightweight way to create structured content.

4. Formatting in form-based environments: In situations where users need to input formatted text into web forms, text chat, or text fields with limited formatting options, Markdown serves as a quick and efficient way to add a little more structure to the content.

5. Formatting in notepads: Markdown is well suited to note-taking in various settings, from personal notes to meeting minutes and class lectures. Its plain-text nature allows for easy editing and organization, while its simple formatting options provide enough structure to make the notes easily understandable.

It's crucial to recognize that the first three use cases are primarily aimed at developers. Beyond these scenarios, the options become somewhat limited. Within developer-centric contexts, Markdown stands out as a convenient, user-friendly, and efficient solution. For non-developer use cases, Markdown remains a valuable tool for users who prioritize speed and clarity in creating simple, short-form content, or when the tools in use do not offer alternative formatting choices.
Nevertheless, these latter cases are relatively narrow in scope, leading to the question: why should non-developers opt for Markdown?

Where Markdown should not be used

There are many more scenarios where Markdown should not be used. The following are four very high-level examples:

1. Moderately complex to complex documents: For documents with elaborate structures, multiple sections, or intricate content organization, Markdown's plain-text formatting is not sufficient to effectively convey the structure and relationships between various elements.

2. Moderately complex to complex web content: For web content that demands a higher level of structural complexity, such as nested sections, multiple columns, non-textual content, or content that needs to be dynamically generated or manipulated, Markdown is not the right choice.

3. Publishing: For creating visually rich and beautifully designed documents, such as magazines, brochures, or books, Markdown falls short of providing the necessary styling and layout capabilities unless the user is very technical. In this case, dedicated publishing software with powerful markup options is a better choice.

4. Moderately complex to complex web design: When designing intricate web pages that require advanced styling, interactivity, or multimedia content, Markdown's limited formatting options do not make it a good candidate. In these cases, alternative markup languages, tools, or software that provide more advanced features and flexibility will prove a better fit.

Conclusion

While Markdown is an effective and user-friendly option for basic text formatting, especially for developers working in a command-line environment, it also has drawbacks that render it unsuitable for most applications outside of this scope.
Generally, however, its inconsistent implementations, syntax ambiguity, non-intuitive syntax, limited styling options, absence of standardization, and unsuitability for complex documents all contribute to its limited utility across all scenarios.

© Adam Hyde, 2023, CC-BY-SA. Image public domain, created by MidJourney from prompts by Adam.
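The emphasis ambiguity and parser divergence discussed above can be made concrete with a small sketch. The two `render_*` functions below are hypothetical toy renderers, not real Markdown parsers; they exist only to show how two plausible readings of the same asterisk syntax produce different HTML from identical input:

```python
import re

text = "The quick *brown fox* **jumps** over the lazy dog"

def render_a(s):
    # Dialect A: resolve ** as bold first, then lone * as italic.
    s = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", s)
    s = re.sub(r"\*(.+?)\*", r"<em>\1</em>", s)
    return s

def render_b(s):
    # Dialect B: treat every * as an italic delimiter, so ** is
    # consumed as the start of an emphasis span.
    return re.sub(r"\*(.+?)\*", r"<em>\1</em>", s)

print(render_a(text))
# The quick <em>brown fox</em> <strong>jumps</strong> over the lazy dog
print(render_b(text))
# The quick <em>brown fox</em> <em>*jumps</em>* over the lazy dog
```

Real pre-CommonMark implementations disagreed in exactly this way, which is part of why the CommonMark specification devotes so much detail to delimiter-run rules for emphasis.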
Cybersecurity Audits: Common Challenges and How to Overcome Them

In an increasingly digital world, cybersecurity audits have become an essential part of maintaining and improving an organization’s security posture. These audits help identify vulnerabilities, assess risks, and ensure compliance with regulatory standards. However, navigating the complexities of cybersecurity audits presents several challenges. This article explores common challenges faced during cybersecurity audits and offers strategies for overcoming them.

1. Understanding the Scope of the Audit

Defining the Audit Scope
One of the first challenges in a cybersecurity audit is defining its scope. An unclear or overly broad scope can lead to wasted resources and incomplete results. It’s essential to determine what aspects of the IT infrastructure will be covered, including hardware, software, networks, and data management.

Solution: Establish Clear Objectives
To overcome this challenge, organizations should work closely with auditors to establish clear, detailed objectives. This involves defining specific areas of concern, regulatory requirements, and the goals of the audit. A well-defined scope ensures that the audit remains focused and manageable.

2. Keeping Up with Evolving Threats

Rapidly Changing Threat Landscape
The cybersecurity threat landscape evolves rapidly, with new threats and vulnerabilities emerging regularly. Keeping up with these changes can be daunting, and an audit that does not account for the latest threats may provide a false sense of security.

Solution: Regular Updates and Threat Intelligence
To address this challenge, organizations should integrate threat intelligence into their audit processes. This involves staying updated on the latest cybersecurity threats and incorporating this information into audit plans.
Regularly updating audit procedures and tools to reflect current threat landscapes can help in identifying and mitigating new risks.

3. Ensuring Compliance with Regulatory Requirements

Complexity of Regulatory Standards
Organizations must comply with various regulatory standards, such as GDPR, HIPAA, and PCI-DSS, each with its own requirements and guidelines. Ensuring compliance across different regulations can be complex and overwhelming.

Solution: Implement a Compliance Framework
Adopting a compliance framework that aligns with multiple regulations can simplify this process. Frameworks such as the NIST Cybersecurity Framework or ISO/IEC 27001 provide structured approaches to managing and complying with regulatory requirements. These frameworks can help organizations systematically address compliance issues during audits.

4. Handling Large Volumes of Data

Data Management Challenges
Cybersecurity audits often involve analyzing large volumes of data to identify potential issues. Managing and processing this data can be resource-intensive and challenging, especially for organizations with extensive IT infrastructures.

Solution: Utilize Advanced Analytics Tools
To efficiently handle large data volumes, organizations should invest in advanced analytics tools and technologies. Tools that offer automated data collection, analysis, and visualization can significantly reduce the manual effort required and improve the accuracy of the audit results.

5. Managing Internal Resistance

Cultural and Organizational Barriers
Resistance from internal stakeholders can pose a significant challenge during cybersecurity audits. Employees may view audits as disruptive or perceive them as an indictment of their work. This resistance can hinder the audit process and delay the identification of critical issues.

Solution: Foster a Security-Conscious Culture
Overcoming internal resistance requires fostering a culture of security awareness and collaboration.
This involves educating employees about the importance of cybersecurity and how audits contribute to organizational resilience. Clear communication and involving staff in the audit process can help mitigate resistance and facilitate a smoother audit.

6. Integrating Audit Findings with Business Operations

Aligning Findings with Business Objectives
Once an audit is completed, integrating its findings into business operations can be challenging. Organizations must prioritize and address identified issues while aligning them with broader business objectives and strategies.

Solution: Develop an Actionable Remediation Plan
To effectively integrate audit findings, organizations should develop a detailed remediation plan. This plan should prioritize issues based on their severity and impact on business operations. Collaboration between audit teams and business units is crucial for implementing changes that align with organizational goals.

7. Ensuring Auditor Independence and Objectivity

Potential Biases in Auditing
Maintaining auditor independence and objectivity is crucial for ensuring the credibility of the audit results. Internal auditors, in particular, may face challenges in remaining unbiased, especially when auditing familiar systems or processes.

Solution: Engage External Auditors
Engaging external auditors can help ensure independence and objectivity. External auditors bring a fresh perspective and are less likely to be influenced by internal biases. Organizations should consider periodic external audits to complement internal assessments and enhance overall audit credibility.

8. Addressing Resource Constraints

Limited Time and Budget
Resource constraints, including limited time and budget, can impact the effectiveness of cybersecurity audits. Adequate resources are necessary to conduct thorough assessments, implement recommendations, and follow up on findings.
Solution: Prioritize and Plan Effectively
Organizations should prioritize audit activities based on risk assessments and resource availability. Effective planning and resource allocation can help ensure that critical areas are addressed within budget and time constraints. Leveraging automated tools and outsourcing specific tasks can also optimize resource use.

9. Managing Vulnerability and Risk Assessment

Complexity of Vulnerability Management
Identifying and assessing vulnerabilities is a core aspect of cybersecurity audits. However, the complexity of modern IT environments can make vulnerability management challenging, with numerous potential points of failure and varied risk levels.

Solution: Implement a Comprehensive Risk Management Approach
A comprehensive risk management approach involves continuous monitoring and assessment of vulnerabilities. Utilizing vulnerability management tools and conducting regular scans can help identify and address potential issues proactively. Integrating risk assessment into the overall audit process ensures a thorough evaluation of the security posture.

10. Maintaining Continuous Improvement

Static vs. Dynamic Security Posture
Cybersecurity is not a one-time effort but an ongoing process. Relying solely on periodic audits can lead to a static security posture that does not adapt to evolving threats and vulnerabilities.

Solution: Adopt a Continuous Improvement Model
Organizations should adopt a continuous improvement model for cybersecurity. This involves regular reviews and updates of security practices, ongoing training for staff, and iterative improvements based on audit findings and emerging threats. Establishing a culture of continuous improvement ensures that security measures evolve alongside changing risk landscapes.

Conclusion

Cybersecurity audits are essential for safeguarding organizations against evolving threats and ensuring compliance with regulatory standards.
However, they come with their own set of challenges, from defining the audit scope to managing internal resistance and resource constraints. By implementing the strategies outlined in this article—such as establishing clear objectives, leveraging advanced tools, and fostering a security-conscious culture—organizations can overcome these challenges and enhance their overall cybersecurity posture. Embracing these solutions will help organizations not only navigate the complexities of cybersecurity audits but also build a resilient defense against future threats.
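The prioritization idea behind "Develop an Actionable Remediation Plan" and "Prioritize and Plan Effectively" can be sketched as a simple severity-times-likelihood ranking. The `Finding` class, scoring formula, and sample findings below are illustrative assumptions, not part of any specific audit framework:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float    # 0-10, CVSS-like magnitude of impact
    likelihood: float  # 0-1, estimated probability of exploitation

def prioritize(findings):
    # Rank remediation work by expected impact (severity x likelihood),
    # highest first, so limited time and budget go to the riskiest items.
    return sorted(findings, key=lambda f: f.severity * f.likelihood, reverse=True)

findings = [
    Finding("Outdated TLS config", 5.0, 0.9),
    Finding("Unpatched RCE in web app", 9.8, 0.6),
    Finding("Weak password policy", 6.5, 0.4),
]

for f in prioritize(findings):
    print(f"{f.name}: {f.severity * f.likelihood:.2f}")
```

In practice the score would come from a vulnerability scanner or a risk register rather than hand-entered numbers, but the ranking step is the same.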
Question

For A = I8 1] the only eigenvalue is A = 1. Thus A -MI = Since this matrix is = 0, A = 1 is & defective eigenvalue: Thus wC Can take V2 1% V = (A - AI)vz [o] and the general solution is y(t) = Cie't [o] +ce' ([1]_ +t [o]) Finally, the case of A=|51 3| is very similar to the previous case The only eigenvalue is A = 4 Thus A-N=|41 Since this matrix is # 0; A = 4 is & defective eigenvalue. Thus we can take Vz = V1 = (A -A)vz = [4] and the general solution is y(t) = Ciet [4J+ce' ([93+'[4)

Answers

Let $A=\left[\begin{array}{rrr}{4} & {-1} & {-1} \\ {-1} & {4} & {-1} \\ {-1} & {-1} & {4}\end{array}\right]$ and $\mathbf{v}=\left[\begin{array}{l}{1} \\ {1} \\ {1}\end{array}\right] .$ Verify that 5 is an eigenvalue of $A$ and $\mathbf{v}$ is an eigenvector. Then orthogonally diagonalize $A .$

In this video, we're gonna go through the answers. Question number 35 from chapter 9.5. So, in part, A were asked to find Ah, what's to show that the matrix A here has repeated I value our equals two minds Well, on dhe show that all the time it is. That's the form given in the question. Okay, so I start with the wagon values. So we need Thio. Find the determinant off the matrix A minus. Aw, times dead and see matrix. That's this. Determine in here. That's gonna be well minus ah times by minus three minus.
Uh, because for that's gonna be able to u R squared. Ah, hoofs every, uh, minus. Ah, sets plus thio minus three plus four plus one. Okay, so this is easily fact arised as ah course one squared. If that's equal to zero, then we have repeated Eigen value eyes equal to minus one. Okay, cool. So now it's fine. The Eiken vector associating with that. So a plus the identity matrix is gonna be too two minus one for minus three plus one is minus two okay, times by infected, you go on is equal to zero So therefore, any item vector must be proportional to Okay, let's see if we let the first component be one second component. Must be two cases going breaks that fulfills. Ah, well, the quiet for a movements part B. Uh, so this is quite easy. So the an intravenous solution can be Rin as eat first I about you East the first time value, which was minus one times t times by dragon Vector 12 And it's very easy to check with us. Solution off the system, given you a question. Okay, But see, so I guess a little bit more tricky here for us to find a second mini independent solution. Eso we have only we had a repeat. I can value said can't just use the second night value because the repeated I could buy you only had one linearly independent. Uh, Ivan Vector. So it's Astra's use. The form next to is equal to t eat. The minus t does buy you one plus even my honesty times by you too. Okay, so we're gonna substitute that in. We're gonna need to find the first derivative using the what a fool on the first term is gonna be one minus t times e to the minus t Because it's the modesty. Different shades to minus eats. The modesty does buy you want because it's just a constant vector. Minus it reminds t you too, it Okay, so, saying, Ah, you two prime people to eight times you, too. So meaty. Prime waas one minus t Because by e to the minus t you want minus eight minus t you too. And then a times You too. 
Uh, well, that's just gonna be tee times e to the minus t times by eight times by Yeah, yes, I was by a you one. So say you one because you want is an aiken defector with Aiken value minus one and a you want he's gonna be equal to B minus one times You want. Just by the definition off, you won't be Knight director of the Matrix egg. Uh, we got close E to the minus t times bite matrix. A YouTube look. Okay, so let's have a look. What's going on here then? So this minus t eat my honesty? You want concussive with this modesty here? And then what we're left with is ah, eats the minus t You want minus you too equals eats the minus t a You too. So eat the virus Taken Never be zeros. We can cancel that. So this is gonna allow us to arrive at a plus. The identity matrix I times by e t equals you are. Okay, so now we console that. Let's have a look. At what a plus. I Yes, that's gonna be the matrix to minus one for minus two times by you, too. He calls when you want. Was the vector one too. So this means that you too. Okay. So if the first component is one, then we're gonna have second opponent is gonna be two times one minus one, which is just what? So that's, uh, a solution for you, too. Okay. Now, huh? Day as us girls just find what a close I Yeah, like squared times. The YouTube is okay. So this is just a plus I times by a close eye. That's just the definition of squaring. Something just most quiet by. You must buy twice by that by that thing. Okay, So then hey, they were just showing that April's I times you two is You want. It's that in this Caribbean is a close eye you want. But it was high time too. You want because that you wanted a infective. Ah, metrics. I with Ivan value minus one, that is just gonna be equal to zero. And that completes the In this video. We're gonna go through the answer to that question 39 from chapter 9.5. Um, so yeah, but given the Matrix A which is written here, we asked Oh! Ah, derive. 
Uh, yeah, There is an ancestor questions A, B, C and D. I think this is quite hard questions, so I'll try and take us through it slowly. Okay? So question apart a were asked to find that I can values I come back to us basically on dso first. Find ion values. Gotta find the determinant off the matrix A minus I So basically mind seeing are from the leading diagonal. Okay, So what's that gonna be? We can evaluate it around the top itself forever. That's two minus r times by the matrix of at the seven of the Matrix to minus one minus two minus one minus. One times the matrix. December of the Matrix. No, there should be a minus. Other close. One times the matrix. 12 months, huh? Minus two minus two. Okay. Uh, I think I'll avoid boring you with the algebra for this one. Um, you guys can work that through and show that is equal Thio. Well, you can expand it all out and show that it's minus, uh, cubed. Plus three squared minus three. Ah, close one which could be fact arise to be one minus R cubed Get. So that's equal to zero than that shows the the value of ah, which are Ah, I convey values. Eyes equal to one on It's a triple. Really? Because about three here. So it's an egg value with multiplicity three. Okay, fine. Victor, that's fine. I'm vectors. We find a minus one times. I because I value this one times by you equals zero. So what is a minus? I was going to the back there. The matrix one 11 one on one minus two, minus two minus two. It comes by u equals zero. Okay, so if we let the components off, you be X y zed, then each of these rows in the matrix equation tells us the experts Why? Course said is equal to zero. Okay, so then what could solutions look like to this equation? Well, we could have that. The first term at the Ex Capone is zero. Then that would mean that Ah, the wines that components with a sign of each other. We could have that The y component is zero. In which case it'd be minus one one. Or we could have a second potent be zero. 
Gonna be zero minus one, but one. Okay, but now look at this guy. This equation this factor could be written in terms off this factor on this vector. So you can see that if you, uh let's see if you do this vector minus this factor that will give you this vector. So therefore, this factor is not linearly independent to these guys. So, um, where these guys are linearly independent to each other. So therefore, any item vector must pay off the form. That's, um, constant s times by the first of the linearly independent vectors. First, some constant v times a second of our linearly independent backers. Okay, so if so, have chosen this kind of arbitrarily because we could equally right. We could have equally written this guy as a linear, some off this thing guy and this guy, in which case we would change the inspector's on DDE. That would also be correct. So we've kind of written this kind of arbitrarily the point is that we kind of have only two degrees of freedom on our choice. If I came back that way, can't write the Eiken vector as a some off Constance Well supplied by all three of these. I mean, we could, but it would kind of be, ah, were necessary because we already showed that, uh, to sufficient. Just two of the of the expression, too. Give you all of the possible high in vectors to the Matrix. A. Okay, so hopefully that's Colbert. Why? We just have two vectors here. Okay, So, uh, B, this is just following standard rules for linear started different differential equations from those two aiken vectors. We can write too linearly independent solutions on They're gonna be e to the First Aiken, How you bought the only item value times T c to the t times by the first heart of the Eigen vectors on then. Secondly, Secondly, nearly defendant solution that's gonna be eats the tee times by the second darken vector. See? Okay, let's use the form that they've given us, which is x three, because t into the tea. You three plus a to the t you for. 
Okay, So what's the derivative of this Well, we can take out you take The city is a common factor. Used the product rule Gonna be one plus t you three plus you for times e to the t. We know that this is gonna be equal thio because what we've assumed that it's that this form is a solution to, uh, the matrix differential equation so we could have tea. That's a you three for us. Hey, you four times by E t t. Okay, so this guy and this guy are equal to each other so we can cancel each the teas and then compare coefficients off. Tease will tell us that a U three is equal to you three itself on and you have a cross compare coefficients here. This tells us that a minus by you three is equal to zero on dde a minus. I you four secret to you three. Okay, so this tells us that you three is an icon vector. That means that it needs to be of the form. Found a pot A which is s times minus one 10 close a times minus ones. They're well you asked me about for so Okay, let's try and figure out what form we're gonna take. Okay? So to figure out what you four is gonna be a we're gonna need Thio. Find out what this matrix is. A minus. Eyes the matrix one 11111 minus two minus two minus two. What time did not buy you for? And then this is gonna be equal to you three. Which, as of yet, we're unsure as to what I infected to choose. So what's gonna help us? Well, if you four has the components X y and said Just try to see you. What's that? You for its components, X. Why? And said then the top road off this equation, it's gonna tell us the expose, what was said is equal to whatever this first component is a man. 
The second I was gonna tell us the extras Wipers said because he was the second component third component in the third row is gonna tell us that minus two times by extra swipe, all set is equal to Okay, So then now, in order to make this house a any solutions, all we need Well, we need this component on this components were the same because otherwise would have experts watches that is equal to two things which is just mathematically consistent. So that means that, um yeah, I need those guys to be the same. So let's say that we let them be equal to one on one. Let's see whether weaken do that. Uh, yeah, definitely. Can. So that some if s is equal to you. Uh, let's see, that's equal to one. And B is equal to minus two. Then that will work. And then how does that work? That tells us that this 3rd 1 is gonna be minus two, which shows that that's gonna be minus two 11 works because now, diesel, these equations here or class into one equation which settles wipeout set is equal to walk. Forget it. So I've chosen you three to be 11 minus two. So therefore X, with y plus said where we need to choose. Yeah, a basically any backdoor that it satisfies this so we can just easily choose a vector 100 because that satisfies the equation. X plus y equals said Okay, so just finishing that off on these x three is equal to t It's the tea. 11 minus two. I saw x three. Plus it's the tea 100 we saw explore. Okay, then pot de a minus. I squared times you for equals. I was gonna be a minus. I times a minus I you for which is a that which is you three. And then we'd know from hot See, that is equal to zero. Okay, nothing. She's answer a question with the night.
__label__pos
0.954627
Title: Content control of a user interface
Kind Code: A1
Abstract: The present disclosure provides, among other things, a method comprising loading learning content into a content player. The method further includes modifying a user interface of the content player in accordance with a command from the learning content. The user interface comprises a plurality of interface elements and at least a subset of the interface elements comprising navigational controls for controlling learning content. This modification of the user interface may include disabling one or more of the interface elements, automatically navigating between a first component and a second component of the content, and other processes or techniques.
Inventors: Hochwarth, Christian (Wiesloch, DE); Krebs, Andreas S. (Karlsruhe, DE); Erhard, Martin (Karlsruhe, DE); Philipp, Marcus (Dielheim, DE)
Application Number: 11/263779
Publication Date: 05/03/2007
Filing Date: 10/31/2005
Primary Class: 1/1
Other Classes: 707/999.107
International Classes: G06F17/00
Primary Examiner: SHIAU, SHEN C
Attorney, Agent or Firm: FISH & RICHARDSON, P.C. (SAP) (MINNEAPOLIS, MN, US)
Claims:
What is claimed is:
1. A computer-implemented method, comprising: loading learning content into a content player; and modifying a user interface of the content player in accordance with a command from the learning content, the user interface comprising a plurality of interface elements and at least a subset of the interface elements comprising navigational controls for controlling learning content.
2. The method of claim 1, where the learning content is associated with one or more of the following representations: HTML, SGML, DHTML, XML, JavaScript, or Flash.
3. The method of claim 2, wherein modifying the user interface comprises disabling at least one of the plurality of interface elements.
4.
The method of claim 1, further comprising dynamically changing the learning strategy in accordance with a second command from the learning content.
5. The method of claim 1, where the content is a component of a learning course.
6. The method of claim 1, where the command is one or more statements in a scripting language embedded in the content.
7. The method of claim 1, where the command is a method call exposing a common API for content control.
8. The method of claim 1, where the user interface is configured to navigate to different learning content in accordance with the command from the learning content.
9. The method of claim 1, wherein modifying the user interface comprises automatically navigating between a first component and a second component of the content.
10. A learning system comprising: memory storing a plurality of learning content; and one or more processors performing the following operations: loading content into a content player; presenting a portion of the content and at least one user interface element on a user interface, the at least one user interface element configured to navigate the content in accordance with a learning strategy; and modifying the user interface of the content player in accordance with a command from the learning content, where the user interface comprises a plurality of interface elements, at least a subset of the interface elements comprising navigational controls for controlling learning content, and modifying includes disabling one or more of the interface elements.
11. The system of claim 10, where the content is associated with one or more of the following representations: HTML, SGML, DHTML, XML, JavaScript, or Flash.
12. The system of claim 10, the one or more processors further performing the operations of: presenting a second portion of the learning content based on a learner's progress; and modifying the user interface in accordance with a command from the second portion of the learning content.
13.
The system of claim 12, the one or more processors further performing the operation of resetting the user interface prior to presenting the second portion of the learning content.
14. The system of claim 10, the one or more processors further dynamically changing the learning strategy based on metadata associated with the content in accordance with the command from the learning content.
15. The system of claim 10, where the command is one or more statements in a scripting language embedded in the content.
16. The system of claim 10, where the command is a method call exposing a common API for content control.
17. The system of claim 10, where the user interface is configured to navigate to different learning content in accordance with the command from the learning content.
18. The system of claim 10, wherein modifying the user interface further comprises automatically navigating between a first component and a second component of the content.
19. Software comprising instructions stored on a computer readable medium, the software operable when executed to: load content into a content player, the content associated with a learning strategy; and modify a user interface of the content player in accordance with a command from the content, where the user interface comprises a plurality of interface elements, at least a subset of the interface elements comprising navigational controls for controlling learning content, and modifying includes disabling one or more of the interface elements.
20. The software of claim 19, where the content is associated with one or more of the following representations: HTML, SGML, DHTML, XML, JavaScript, or Flash.
21. The software of claim 19, the software further operable to: present a second portion of the learning content based on a learner's progress; and modify the user interface in accordance with a command from the second portion of the learning content.
22.
The software of claim 21, the software further operable to reset the user interface prior to presenting the second portion of the learning content.
23. The software of claim 19, the software further operable to dynamically change the learning strategy based on metadata associated with the content in accordance with the command from the learning content.
24. The software of claim 19, where the command is one or more statements in a scripting language embedded in the content.
25. The software of claim 19, where the command is a method call exposing a common API for content control.
26. The software of claim 19, where the user interface is configured to navigate to different learning content in accordance with the command from the learning content.
27. The software of claim 19, wherein modifying the user interface further comprises automatically navigating between a first component and a second component of the content.
Description:
BACKGROUND
Today, an enterprise's survival in local or global markets at least partially depends on the knowledge and competencies of its employees, which may easily be considered a competitive factor for the enterprises (or other organizations). Shorter product life cycles and the speed with which the enterprise can react to changing market requirements are often important factors in competition and ones that underline the importance of being able to convey information on products and services to employees as swiftly as possible. Moreover, enterprise globalization and the resulting international competitive pressure are making rapid global knowledge transfer even more significant. Thus, enterprises are often faced with the challenge of lifelong learning to train a (perhaps globally) distributed workforce, update partners and suppliers about new products and developments, educate apprentices or new hires, or set up new markets.
In other words, efficient and targeted learning is a challenge that learners, employees, and employers are equally faced with. But traditional classroom training typically ties up time and resources, takes employees away from their day-to-day tasks, and drives up expenses. Electronic learning systems provide users with the ability to access course content directly from their computers, without the need for intermediaries such as teachers, tutors, and the like. Such systems have proven attractive for this reason (and perhaps others) and may include a master repository that stores existing versions of learning objects. These learning objects are typically developed in-house or received from third-party providers to achieve some particular learning objective. This course content can be presented in a display region of an interactive content player. The content player allows a user to navigate the content by selecting interactive navigation controls. The navigation controls can allow a user to move forward or backward through the content, or present a table of contents to the user. However, course content can itself present navigation controls in the display region for allowing the user to navigate the content. For example, this can arise where the content includes an audio/visual presentation that the user can view, pause, rewind, etc.
SUMMARY
The present disclosure provides systems, methods, and software for controlling user interfaces via content. For example, one method comprises loading learning content into a content player. This method further includes modifying a user interface of the content player in accordance with a command from the learning content. The user interface comprises a plurality of interface elements and at least a subset of the interface elements comprising navigational controls for controlling learning content.
This modification of the user interface may include disabling one or more of the interface elements, automatically navigating between a first component and a second component of the content, and other processes or techniques. The details of one or more embodiments are set forth in the accompanying drawings and the description below. Features, aspects, and advantages will be apparent from the description, drawings, and claims.
DESCRIPTION OF DRAWINGS
FIG. 1 is a diagram illustrating an example learning environment according to one embodiment of the present disclosure;
FIG. 2 illustrates an example architecture of a learning management system implemented within the learning environment of FIG. 1;
FIG. 3 illustrates an example content aggregation model in the learning management system;
FIG. 4 is an example of one possible ontology of knowledge types used in the learning management system;
FIG. 5 illustrates an example run-time environment for the content player and the learning portal implemented within the learning environment of FIG. 1, as well as certain components of the example content player;
FIGS. 6A-D illustrate example user interfaces with one or more sets of navigation controls and modification of certain interface elements as determined by the particular content; and
FIG. 7 is a flow diagram of processing in accordance with various embodiments.
DETAILED DESCRIPTION
FIG. 1 illustrates an example environment for a learning management system 140 that may deliver a blended learning solution of learning methods used in traditional classroom training, web-based training, and virtual classrooms. At a high level, such applications 140 provide convenient information on the learner 104's virtual workplace and at least partially control the learning process itself. The system proposes learning units based on the learner 104's personal data, tracks progress through courses and coordinates the personalized learning experience.
In addition, learning management system 140 encompasses the administrative side of the learning platform, where a training administrator 105 structures and updates the offering and distributes it among the target groups. Moreover, the course offering is usually not restricted to internally hosted content. The learning management system 140 often offers robust reporting capabilities, including ad hoc reporting and business intelligence. These capabilities may provide in-depth analysis of the entire business or organization, thereby enabling better decision making. Learning management system 140 typically helps improve the quality of training and cut costs by reducing the travel and administrative costs associated with classroom training while delivering a consistent learning offering. Such electronic learning systems present learning content in various formats from multiple sources, both internal and external. This content may be presented to learner 104 using a content player comprising a media player, a browser implementing various scripts and programming, or other such interfaces. Such content players often include a content presentation screen, as well as a header and navigation bar. In these implementations, the header bar may show the title of the learning content and the navigation bar may be used to navigate between learning units, to open the table of contents screen, and to log off. If the learning content controls the navigation flow inside a particular instructional element and the content player manages navigation between instructional elements, then this automatically leads to double navigation elements. Double navigation may be confusing for learners 104, since they do not know which navigation elements should be used and when.
Accordingly, LMS 140 provides functionality, often through APIs, to allow the learning content to control the navigational controls, or other interface elements, thereby possibly reducing or eliminating double navigation and other potential interface issues. These APIs provide the particular developer of the content with the ability to create content with more specific internal controls without concern for external influences or redundancies. For example, an instructional element that contains some form of sub-navigation can turn off the navigation bar and trigger navigation to the next instructional element after the current one is completed. Alternatively, this example instructional element may turn navigation on again and let learner 104 decide where to navigate next. In another example, a designer of the instructional element, such as developer 106, may determine that it needs more screen space; in this case, he can allow or design the content to turn off the navigation and the header bar. In a further example, if the instructional element includes some form of test that should be passed before learners 104 are allowed to navigate forward, then the instructional element or other portion of the content may be designed to turn off (or otherwise disable) the navigation bar until the test is passed. In yet another example, if an instructional element determines that it should display a portion or all of the particular table of contents for some learning content, then it may read it in XML format and automatically render it accordingly. Of course, while content control of certain portions of the content player is described in terms of learning content and learning systems, such content control may be implemented in other media content and formats, including music, video, and such.
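The content-control behavior described above can be sketched in code. The following is a minimal illustrative model, not SAP's actual API: the class name, the method names (`set_element`, `navigate_next`, `reset_interface`), and the unit names are all assumptions introduced for this sketch.

```python
# Illustrative sketch of a content player whose interface elements can be
# enabled or disabled by commands coming from the learning content itself.
# All names are hypothetical, not part of any actual LMS API.

class ContentPlayer:
    def __init__(self):
        # Interface elements the content may enable or disable.
        self.elements = {"nav_bar": True, "header_bar": True}
        self.position = 0
        self.units = ["intro", "lesson", "test", "summary"]

    # --- API exposed to learning content ---
    def set_element(self, name, enabled):
        """Enable or disable a single interface element."""
        self.elements[name] = enabled

    def navigate_next(self):
        """Automatically advance to the next instructional element."""
        self.position = min(self.position + 1, len(self.units) - 1)
        self.reset_interface()  # re-enable controls for the new unit

    def reset_interface(self):
        """Restore all interface elements before presenting new content."""
        for name in self.elements:
            self.elements[name] = True

# A test-style instructional element disables forward navigation
# until the learner passes, then advances automatically.
player = ContentPlayer()
player.set_element("nav_bar", False)   # lock navigation during the test
assert player.elements["nav_bar"] is False
player.navigate_next()                 # test passed: advance and reset
assert player.units[player.position] == "lesson"
assert player.elements["nav_bar"] is True
```

The `reset_interface` call on navigation mirrors the behavior recited in claim 13, where the user interface is reset prior to presenting the second portion of the learning content.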
Training administrators 105 may customize teaching scenarios by using web services to integrate external content, functions, and services into the learning platform from a remote or third party content provider 108. The training administrator 105 can administer internal and external participants (or learners 104) and enroll them for courses to be delivered via any number of techniques. Training management supports the respective organization, entity, or learner 104 in the day-to-day activities associated with course bookings. Booking activities can be performed by the training administrator in training management on an individual or group participant basis. For example, training administrator 105 can often request, execute, or otherwise manage the following activities in a dynamic participation menu presented in learning management system 140: i) prebook: if participants are interested in taking certain classroom courses or virtual classroom sessions, but there are no suitable dates scheduled, learners 104 can be prebooked for the course types. Prebooking data can be used to support a demand planning process; ii) book: individual or group learners 104 (for example, companies, departments, roles, or other organizational units) can be enrolled for courses that can be delivered using many technologies; iii) rebook: learners 104 can book a course on an earlier or later date than originally booked; iv) replace: learners 104 can be swapped; and v) cancel: course bookings can be canceled, for example, if the learners 104 cannot attend. Environment 100 is typically a distributed client/server system that spans one or more networks such as external network 112 or internal network 114. In such embodiments, data may be communicated or stored in an encrypted format such as, for example, using the RSA, WEP, or DES encryption algorithms.
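The five participation activities enumerated above (prebook, book, rebook, replace, cancel) can be sketched as a minimal booking model. The class and its method names are illustrative assumptions for this sketch, not the actual training-management interface.

```python
# Illustrative sketch of the participation activities a training
# administrator can perform: prebook, book, rebook, replace, cancel.
# Names and dates are hypothetical.

class CourseBooking:
    def __init__(self):
        self.bookings = {}     # learner -> booked course date
        self.prebooked = set() # interest registered, no date scheduled yet

    def prebook(self, learner):
        # Feeds the demand planning process described above.
        self.prebooked.add(learner)

    def book(self, learner, date):
        self.bookings[learner] = date
        self.prebooked.discard(learner)

    def rebook(self, learner, new_date):
        # Move to an earlier or later date than originally booked.
        self.bookings[learner] = new_date

    def replace(self, old_learner, new_learner):
        # Swap one participant for another on the same booking.
        self.bookings[new_learner] = self.bookings.pop(old_learner)

    def cancel(self, learner):
        self.bookings.pop(learner, None)

tm = CourseBooking()
tm.prebook("ana")
tm.book("ana", "2024-06-01")
tm.rebook("ana", "2024-07-01")
tm.replace("ana", "ben")
assert "ana" not in tm.bookings and tm.bookings["ben"] == "2024-07-01"
tm.cancel("ben")
assert tm.bookings == {}
```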
But environment 100 may be in a dedicated enterprise environment—across a local area network or subnet—or any other suitable environment without departing from the scope of this disclosure. Indeed, while generally described or referenced in terms of an enterprise, the components and techniques may be implemented in any suitable environment, organization, entity, and such. Turning to the illustrated embodiment, environment 100 includes or is communicably coupled with server 102, one or more learners 104 or other users on clients, and network 112. In this embodiment, environment 100 is also communicably coupled with external content provider 108. Server 102 comprises an electronic computing device operable to receive, transmit, process and store data associated with environment 100. Generally, FIG. 1 provides merely one example of computers that may be used with the disclosure. Each computer is generally intended to encompass any suitable processing device. For example, although FIG. 1 illustrates one server 102 that may be used with the disclosure, environment 100 can be implemented using computers other than servers, as well as a server pool. Indeed, server 102 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Macintosh, workstation, Unix-based computer, or any other suitable device. In other words, the present disclosure contemplates computers other than general purpose computers as well as computers without conventional operating systems. Server 102 may be adapted to execute any operating system including Linux, UNIX, Windows Server, or any other suitable operating system. According to one embodiment, server 102 may also include or be communicably coupled with a web server and/or a mail server. Server 102 may also be communicably coupled with a remote repository over a portion of network 112. 
While not illustrated, the repository may be any intra-enterprise, inter-enterprise, regional, nationwide, or other electronic storage facility, data processing center, or archive that allows for one or a plurality of clients (as well as servers 102) to dynamically store data elements, which may include any business, enterprise, application or other transaction data. For example, the repository may be a central database communicably coupled with one or more servers 102 and clients via a virtual private network (VPN), SSH (Secure Shell) tunnel, or other secure network connection. This repository may be physically or logically located at any appropriate location including in one of the example enterprises or off-shore, so long as it remains operable to store information associated with environment 100 and communicate such data to at least a subset of the plurality of clients (perhaps via server 102). As a possible supplement to or as a portion of this repository, server 102 normally includes some form of local memory. The memory may include any memory or database module and may take the form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component. For example, the memory may store or reference a large volume of information relevant to the planning, management, and follow-up of courses or other content. This example data includes information on i) course details, such as catalog information, dates, prices, capacity, time schedules, assignment of course content, and completion times; ii) personnel resources, such as trainers who are qualified to hold courses; iii) room details, such as addresses, capacity, and equipment; and iv) participant data for internal and external participants.
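The course-planning data enumerated above might be modeled as a simple record. The field names below are illustrative assumptions, not the actual schema stored by the system.

```python
# Illustrative sketch of one course-planning record combining the four
# kinds of data listed above: course details, personnel resources,
# room details, and participant data. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Course:
    title: str
    dates: list
    price: float
    capacity: int
    trainers: list = field(default_factory=list)   # qualified personnel
    room: str = ""                                 # address / equipment key
    participants: list = field(default_factory=list)

    def has_capacity(self):
        """True while the booking count is below the room capacity."""
        return len(self.participants) < self.capacity

c = Course("Safety 101", ["2024-05-01"], 250.0, 2)
c.participants.append("internal:ana")
assert c.has_capacity()
c.participants.append("external:ben")
assert not c.has_capacity()
```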
The memory may also include any other appropriate data such as VPN applications or services, firewall policies, a security or access log, print or other reporting files, HTML files or templates, data classes or object interfaces, child software applications or sub-systems, and others. In some embodiments, the memory may store information as one or more tables in a relational database described in terms of SQL statements or scripts. In another embodiment, the memory may store information as various data structures in text files, eXtensible Markup Language (XML) documents, Virtual Storage Access Method (VSAM) files, flat files, Btrieve files, comma-separated-value (CSV) files, internal variables, or one or more libraries. But any stored information may comprise one table or file or a plurality of tables or files stored on one computer or across a plurality of computers in any appropriate format. Indeed, some or all of the learning or content data may be local or remote without departing from the scope of this disclosure and store any type of appropriate data. Server 102 also includes one or more processors. Each processor executes instructions and manipulates data to perform the operations of server 102, and may comprise, for example, a central processing unit (CPU), a blade, an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA). Although this disclosure typically discusses computers in terms of a single processor, multiple processors may be used according to particular needs and reference to one processor is meant to include multiple processors where applicable. In the illustrated embodiment, the processor executes enterprise resource planning (ERP) solution 135, thereby providing organizations with the strategic insight, ability to differentiate, increased productivity, and flexibility they need to succeed.
With software such as ERP solution 135, the implementing entity may automate end-to-end processes and extend those processes beyond the particular organization to the entire system by incorporating customers, partners, suppliers, or other entities. For example, ERP solution 135 may include or implement easy-to-use self-services and role-based access to information and services for certain users, thereby possibly boosting productivity and efficiency. In another example, ERP solution 135 may include or implement analytics that enable the particular entity or user to evaluate performance and analyze operations, workforce, and financials on an entity and individual level for strategic and operational insight. ERP solution 135 may further include or implement i) financials to control corporate finance functions while providing support for compliance to rigorous regulatory mandates; ii) operations to support end-to-end logistics for complete business cycles and capabilities that improve product quality, costs, and time to market; and/or iii) corporate services to optimize both centralized and decentralized services for managing real estate, project portfolios, business travel, environment, health and safety, and quality. In the illustrated embodiment, ERP solution 135 also includes or implements some form of human capital management (in this case, learning) to maximize the profitability or other measurable potential of the users, with support for talent management, workforce deployment, and workforce process management. In certain cases, ERP solution 135 may be a composite application that includes, executes, or otherwise implements some or all of the foregoing aspects, which include learning management system 140 as illustrated.
As briefly described above, learning management system 140 is any software operable to provide a comprehensive enterprise learning platform capable of managing and integrating business and learning processes and supporting all methods of learning, not restricted to e-learning or classroom training. As described in more detail in FIG. 2, learning management system 140 is often fully integrated with ERP solution 135 and includes an intuitive learning portal and a powerful training and learning management system, as well as content authoring, structuring, and management capabilities. Learning management system 140 offers back-office functionality for competency management and comprehensive assessment for performance management, and offers strong analytical capabilities, including support for ad hoc reporting. The solution uses a comprehensive learning approach to deliver knowledge to all stakeholders, and tailors learning paths to an individual's educational needs and personal learning style. Interactive learning units can be created with a training simulation tool that is also available. Regardless of the particular implementation, "software" may include software, firmware, wired or programmed hardware, or any combination thereof as appropriate. Indeed, ERP solution 135 may be written or described in any appropriate computer language including C, C++, Java, J#, Visual Basic, assembler, Perl, any suitable version of 4GL, as well as others. For example, returning to the above described composite application, the composite application portions may be implemented as Enterprise Java Beans (EJBs) or the design-time components may have the ability to generate run-time implementations into different platforms, such as J2EE (Java 2 Platform, Enterprise Edition), ABAP (Advanced Business Application Programming) objects, or Microsoft's .NET. It will be understood that while ERP solution 135 is illustrated in FIG.
1 as including one sub-module learning management system 140, ERP solution 135 may include numerous other sub-modules or may instead be a single multi-tasked module that implements the various features and functionality through various objects, methods, or other processes. Further, while illustrated as internal to server 102, one or more processes associated with ERP solution 135 may be stored, referenced, or executed remotely. For example, a portion of ERP solution 135 may be a web service that is remotely called, while another portion of ERP solution 135 may be an interface object bundled for processing at the remote client. Moreover, ERP solution 135 and/or learning management system 140 may be a child or sub-module of another software module or enterprise application (not illustrated) without departing from the scope of this disclosure. Server 102 may also include an interface for communicating with other computer systems, such as the clients, over networks, such as 112 or 114, in a client-server or other distributed environment. In certain embodiments, server 102 receives data from internal or external senders through the interface for storage in the memory and/or processing by the processor. Generally, the interface comprises logic encoded in software and/or hardware in a suitable combination and operable to communicate with networks 112 or 114. More specifically, the interface may comprise software supporting one or more communications protocols associated with communications network 112 or hardware operable to communicate physical signals. Network 112 facilitates wireless or wireline communication between computer server 102 and any other local or remote computers, such as clients. Network 112, as well as network 114, facilitates wireless or wireline communication between computer server 102 and any other local or remote computer, such as local or remote clients or a remote content provider 108. 
While the following is a description of network 112, the description may also apply to network 114, where appropriate. For example, while illustrated as separate networks, network 112 and network 114 may be a continuous network logically divided into various sub-nets or virtual networks without departing from the scope of this disclosure. In some embodiments, network 112 includes access points that are responsible for brokering exchange of information between the clients. As discussed above, access points may comprise conventional access points, wireless security gateways, bridges, wireless switches, sensors, or any other suitable device operable to receive and/or transmit wireless signals. In other words, network 112 encompasses any internal or external network, networks, sub-network, or combination thereof operable to facilitate communications between various computing components in system 100. Network 112 may communicate, for example, Internet Protocol (IP) packets, Frame Relay frames, Asynchronous Transfer Mode (ATM) cells, voice, video, data, and other suitable information between network addresses. Network 112 may include one or more local area networks (LANs), radio access networks (RANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of the global computer network known as the Internet, and/or any other communication system or systems at one or more locations. Turning to network 114, as illustrated, it may be all or a portion of an enterprise or secured network. In another example, network 114 may be a VPN between server 102 and a particular client across wireline or wireless links. In certain embodiments, network 114 may be a secure network associated with the enterprise and certain local or remote clients. Each client is any computing device operable to connect or communicate with server 102 or other portions of the network using any communication link. 
At a high level, each client includes or executes at least GUI 116 and comprises an electronic computing device operable to receive, transmit, process and store any appropriate data associated with environment 100. It will be understood that there may be any number of clients communicably coupled to server 102. Further, “client” and “learner,” “administrator,” “developer” and “user” may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, for ease of illustration, each client is described in terms of being used by one user. But this disclosure contemplates that many users may use one computer or that one user may use multiple computers. As used in this disclosure, the client is intended to encompass a personal computer, touch screen terminal, workstation, network computer, kiosk, wireless data port, smart phone, personal data assistant (PDA), one or more processors within these or other devices, or any other suitable processing device or computer. For example, the client may be a PDA operable to wirelessly connect with external or unsecured network. In another example, the client may comprise a laptop that includes an input device, such as a keypad, touch screen, mouse, or other device that can accept information, and an output device that conveys information associated with the operation of server 102 or other clients, including digital data, visual information, or GUI 116. Both the input device and output device may include fixed or removable storage media such as a magnetic computer disk, CD-ROM, or other suitable media to both receive input from and provide output to users of the clients through the display, namely the client portion of GUI or application interface 116. GUI 116 comprises a graphical user interface operable to allow the user of the client to interface with at least a portion of environment 100 for any suitable purpose, such as viewing application or other transaction data. 
Generally, GUI 116 provides the particular user with an efficient and user-friendly presentation of data provided by or communicated within environment 100. As shown in later FIGUREs, GUI 116 may comprise a plurality of customizable frames or views having interactive fields, pull-down lists, and buttons operated by the user. GUI 116 may be a learning interface allowing the user or learner 104 to search a course catalog, book and cancel course participation, and support individual course planning (e.g., by determining qualification deficits and displaying a learner's completed, started, and planned training activities). Learner 104 also may access and work through web based courses using the learning interface. The learning interface may be used to start a course, reenter a course, exit a course, and take tests. The learning interface also provides messages, notes, and special course offerings to the learner 104. GUI 116 may also be a course editor allowing the content developer to create the structure for the course content, which may be associated with certain metadata. The metadata may be interpreted by a content player of learning management system 140 (described below) to present a course to learner 104 according to a learning strategy selected at run time. In particular, the course editor may enable the author or content developer 106 to classify and describe structural elements, assign attributes to structural elements, assign relations between structural elements, and build a subject-taxonomic course structure. The course editor generates the structure of the course and may include a menu bar, a button bar, a course overview, a dialog box, and work space. The menu bar may include various drop-down menus, such as, for example, file, edit, tools, options, and help. The drop-down menus may include functions, such as create a new course, open an existing course, edit a course, or save a course. The button bar may include a number of buttons. 
The buttons may be shortcuts to functions in the drop-down menus that are used frequently and that activate tools and functions for use with the course editor. The remaining portions of the example course editor interface may be divided into three primary sections or windows: a course overview, a dialog box, and a workspace. Each of the sections may be provided with horizontal or vertical scroll bars or other means allowing the windows to be sized to fit on different displays while providing access to elements that may not appear in the window. GUI 116 may also present a plurality of portals or dashboards. For example, GUI 116 may display a portal that allows users to view, create, and manage historical and real-time reports, including role-based reporting. Generally, historical reports provide critical information on what has happened, including static or canned reports that require no input from the user and dynamic reports that quickly gather run-time information to generate the report. Of course, reports may be in any appropriate output format including PDF, HTML, and printable text. It should be understood that the term graphical user interface may be used in the singular or in the plural to describe one or more graphical user interfaces and each of the displays of a particular graphical user interface. Indeed, reference to GUI 116 may indicate a reference to the front-end or other component of learning management system 140, as well as the particular interface or learning portal accessible via the client, as appropriate, without departing from the scope of this disclosure. In short, GUI 116 contemplates any graphical user interface, such as a generic web browser or touch screen, that processes information in environment 100 and efficiently presents the results to the user.
Server 102 can accept data from the client via the web browser (e.g., Microsoft Internet Explorer or Netscape Navigator) and return the appropriate HTML or XML responses to the browser using network 112 or 114, such as those illustrated in subsequent FIGUREs. FIG. 2 illustrates one example implementation of learning management system (LMS) 140. In the illustrated embodiment, LMS 140 comprises four example components, namely i) a management system core 202, which controls learning processes and manages and handles the administrative side of training; ii) a learning portal 204, which is the learner's springboard into the learning environment and allows him to access the course offering and information on personal learning data and learning activities; iii) an authoring environment 210, where learning content and tests are designed and structured; and iv) a content management system 220, where learning content is stored and managed. Generally, LMS 140 is aimed at learners 104, trainers 105, course authors 106 and instructional designers, administrators, and managers. Learners 104 log on to their personalized learning portal 204 from any suitable client via GUI 116. The learning portal 204 is the user's personalized point of access to the learning-related functions. Generally, learning portal 204 presents details of the complete education and training offering, such as traditional classroom training, e-learning courses (such as virtual classroom sessions or web-based training), or extensive curricula. Self-service applications enable learners 104 to enroll themselves for courses, prebook for classroom courses, and cancel bookings for delivery methods, as well as start self-paced learning units directly. If learner 104 wants to continue learning offline, he can often download the courses onto the client and synchronize the learning progress later.
The learning portal 204 may be seamlessly integrated in an enterprise portal, where learner 104 is provided with access to a wide range of functions via one system. Such an enterprise portal may be the learner's single point of entry and may integrate a large number of role-based functions, which are presented to the user in a clear, intuitive structure. The learning portal 204 often gives learner 104 access to functions such as, for example, search for courses using i) find functions: finding courses in the course catalog that have keywords in the course title or description; and ii) extended search functions: using the attributes appended to courses, such as target group, prerequisites, qualifications imparted, or delivery method. Additional functions may include self-service applications for booking courses and canceling bookings, messages and notes, course appraisals, and special (or personalized) course offerings, including courses prescribed for learner 104 on the basis of his or her role in the enterprise or the wishes of the respective supervisor or trainer, as well as qualification deficits of learner 104 that can be reduced or eliminated by participating in the relevant courses. The learning portal 204 may also provide a view of current and planned training activities, as well as access to courses booked, including: i) starting a course; ii) reentering an interrupted course; iii) downloading a course and continuing learning offline; iv) going online again with a downloaded course and synchronizing the learning progress; v) exiting a course; and vi) taking a test. On the basis of the information the learning management system 140 has about learner 104, the learning management system core 202 proposes learning units for the learner 104, monitors the learner's progress, and coordinates the learner's personal learning process. In addition, the learning management system core 202 is often responsible for managing and handling the administrative processes.
Targeted knowledge transfer may use precise matching of the learning objectives and qualifications of a learning unit with the learner's level of knowledge. For example, at the start of a course, the management system core 202 may compare learning objectives already attained by the respective learner 104 with the learning objectives of the course. On the basis of this, core 202 determines the learner's current level and the required content and scope of the course. The resulting course is then presented to the learner 104 via a content player 208. The content player 208 is a virtual teacher that tailors learning content to the needs of the individual learner 104 and helps him navigate through the course; content player 208 then presents the learning course to the learner 104. In certain embodiments, the content player 208 is a Java application that is deployed on a Java runtime environment, such as J2EE. In this case, it is linked with other systems such as a web application server and ERP solution 135 via the Java Connector. The individual course navigation may be set up at runtime on the basis of the learning strategy stored in the learner account. Using the didactical strategies, content player 208 helps ensure that the course is dynamically adapted to the individual learning situation and the preferences expressed by learner 104. At this point, the content player 208 then calculates dynamically adjusted learning paths and presents these to the learner 104—perhaps graphically—to facilitate orientation within a complex subject area. When learner 104 starts a course, the learning objectives and qualifications achieved so far are compared with the qualifications imparted by the course. This may enable content player 208 to avoid offering redundant learning objects to learner 104 as part of the course. If a course has already been completed, a page is displayed with an appropriate message. 
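The objective-matching step described above can be sketched as a simple filter: learning objects whose objectives the learner has already attained are dropped, leaving the required content and scope of the course. This is a minimal illustrative sketch; the function and field names (`tailor_course`, `objectives`) are assumptions, not part of any actual LMS implementation.

```python
# Hypothetical sketch of objective matching: compare the learning
# objectives already attained by the learner with the objectives of each
# learning object in the course, and keep only what is still needed.

def tailor_course(course_objects, attained_objectives):
    """Return only the learning objects the learner still needs."""
    attained = set(attained_objectives)
    required = []
    for obj in course_objects:
        # Skip objects whose every objective is already attained.
        if not set(obj["objectives"]) <= attained:
            required.append(obj)
    return required

course = [
    {"name": "Intro unit", "objectives": ["basics"]},
    {"name": "Advanced unit", "objectives": ["basics", "advanced"]},
]
print(tailor_course(course, ["basics"]))  # only "Advanced unit" remains
```

On successful course completion, the course objectives would then be added to the learner's attained set, mirroring the learner-account crediting described below.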
On successful completion of a course, the learning objectives achieved are credited as qualifications to the learner's personal learner account. The learner 104 can resume working on an interrupted course at any time. At this point, the content player 208 guides the learner 104 to the spot at which training was interrupted. Offline learning player 206 generally enables learners 104 to download network or other web-based courses from the learning portal 204 and play them locally. Locally stored courses are listed in the course list with an icon indicating the status of each course. The offline player 206 may guide the learner 104 through the course according to the preferred learning strategy. It may also dynamically adjust the number and sequence of learning objects to the learner's individual learning pattern. If the learner 104 interrupts a course, the offline player 206 reenters the course at the point of interruption the next time. The learner 104 can, at any point in time, resynchronize his offline learning progress with the learning portal 204 and either continue learning online or set the course to a completed status. LMS core 202 may also include or invoke training management that would be an administrative side of LMS 140. This typically includes course planning and execution, booking and cancellation of course participation, and follow-up processing, including cost settlement. In training management, the training administrator 105 creates the course offering and can, for example, define training measures for individual learners 104 and groups of learners 104. The training administrator 105 creates the course catalog in training management and makes it available (partially or completely) to learners 104 in the learning portal 204 for reference and enrollment purposes. The training administrator 105 can typically administer internal and external participants and enroll them for courses to be delivered using various technologies and techniques. 
Training management supports numerous business processes involved in the organization, management, and handling of training. Training management can be configured to meet the requirements, work processes, and delivery methods common in the enterprise. Training measures are usually flexibly structured and may include briefings, seminars, workshops, virtual classroom sessions, web-based trainings, external web-based trainings, static web courses, or curricula. Training management includes functions to efficiently create the course offerings. Using course groups to categorize topics by subject area enables flexible structuring of the course catalog. For example, when training administrator 105 creates a new subject area represented by a course group, he can decide whether it should be accessible to learners 104 in the learning portal 204. Reporting functions 214 in training management enable managers to keep track of learners' learning activities and the associated costs at all times. Supervisors or managers can monitor and steer the learning processes of their employees. They can be notified when their employees request participation or cancellation in courses and can approve or reject these requests. LMS 140 may provide the training manager with extensive support for the planning, organization, and controlling of corporate education and training. Trainers need to have up-to-the-minute, reliable information about their course schedules. There is a wide range of reporting options available in training management to enable the trainer to keep track of participants, rooms, course locations, and so on. Authoring environment 210 contains tools and wizards that content developers 106 and instructional designers can use to create or import external course content. External authoring tools can be launched directly via authoring environment 210 to create learning content that can be integrated into learning objects and combined to create complete courses (learning nets).
Attributes may be appended to content, thereby allowing learners 104 to structure learning content more flexibly depending on the learning strategy they prefer. Customizable and flexible views allow subject matter experts and instructional designers to configure and personalize the authoring environment 210. To create the HTML pages for the content, the user can easily and seamlessly integrate editors from external providers or other content providers 108 into LMS 140 and launch the editors directly from authoring environment 210. Authoring environment 210 often includes a number of tools for creating, structuring, and publishing course content and tests to facilitate and optimize the work of instructional designers, subject matter experts, and training administrators 105. Authoring environment 210 may contain any number of components or sub-modules, such as an instructional design editor, used by instructional designers and subject matter experts to create and structure learning content (learning nets and learning objects); a test author, used by instructional designers and subject matter experts to create web-based tests; and a repository explorer, used by training administrators and instructional designers to manage content. In the illustrated embodiment, course content is stored and managed in content management system 220. Put another way, LMS 140 typically uses the content management system 220 as its content storage location. But a WebDAV (Web-based Distributed Authoring and Versioning) interface (or other HTTP extension) allows integration of other WebDAV-enabled storage facilities as well without departing from the scope of this disclosure. Content authors or developers 106 publish content in the back-end training management system. Links to this content assist the training administrator 105 in retrieving suitable course content when planning web-based courses.
A training management component of LMS 140 may help the training administrator 105 plan and create the course offering; manage participation, resources, and courses; and perform reporting. When planning e-learning courses, the training administrator 105 uses references inserted in published courses to retrieve the appropriate content in the content management system for the courses being planned. Content management system 220 may also include or implement content conversion, import, and export functions, allowing easy integration of Sharable Content Object Reference Model (SCORM)-compliant courses from external providers or other content providers 108. Customers can create and save their own templates for the various learning elements (learning objects, tests, and so on) that define structural and content-related specifications. These provide authors with valuable methodological and didactical support. LMS 140 and its implemented methodology typically structure content so that the content is reusable and flexible. For example, the content structure allows the creator of a course to reuse existing content to create new or additional courses. In addition, the content structure provides flexible content delivery that may be adapted to the learning styles of different learners. E-learning content may be aggregated using a number of structural elements arranged at different aggregation levels. Each higher level structural element may refer to any instances of all structural elements of a lower level. At its lowest level, a structural element refers to content and may not be further divided. According to one implementation shown in FIG. 3, course material 300 may be divided into four structural elements: a course 301, a sub-course 302, a learning unit 303, and a knowledge item 304. Starting from the lowest level, knowledge items 304 are the basis for the other structural elements and are the building blocks of the course content structure. 
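The four-level content structure of FIG. 3 can be sketched as a small set of classes, one per structural element, with each higher level referring to instances of lower levels. The class and field names here are illustrative only; they mirror the reference numerals of the disclosure, not any actual implementation.

```python
# Illustrative sketch of the course content structure of FIG. 3:
# course 301 > sub-course 302 > learning unit 303 > knowledge item 304.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class KnowledgeItem:          # 304: lowest level, refers directly to content
    name: str
    media_type: str           # e.g. presentation, communication, interactive
    knowledge_type: str       # e.g. orientation, action, explanation, reference

@dataclass
class LearningUnit:           # 303: container for knowledge items of one topic
    name: str
    items: List[KnowledgeItem] = field(default_factory=list)

@dataclass
class SubCourse:              # 302: may nest arbitrarily deep via other sub-courses
    name: str
    children: List[Union["SubCourse", LearningUnit, KnowledgeItem]] = field(default_factory=list)

@dataclass
class Course:                 # 301: assembled from any subordinate elements
    name: str
    children: List[Union[SubCourse, LearningUnit, KnowledgeItem]] = field(default_factory=list)
```

Because each element carries only its own content and metadata, elements stay self-contained and context free, supporting the reuse described above.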
Each knowledge item 304 may include content that illustrates, explains, practices, or tests an aspect of a thematic area or topic. Knowledge items 304 typically are small in size (i.e., of short duration, e.g., approximately five minutes or less). Any number of attributes may be used to describe a particular knowledge item 304 such as, for example, a name, a type of media, and a type of knowledge. The name may be used by a learning system to identify and locate the content associated with a knowledge item 304. The type of media describes the form of the content that is associated with the knowledge item 304. For example, media types include a presentation type, a communication type, and an interactive type. A presentation media type may include a text, a table, an illustration, a graphic, an image, an animation, an audio clip, and a video clip. A communication media type may include a chat session, a group (e.g., a newsgroup, a team, a class, and a group of peers), an email, a short message service (SMS), and an instant message. An interactive media type may include a computer based training, a simulation, and a test. Knowledge item 304 also may be described by the attribute of knowledge type. For example, knowledge types include knowledge of orientation, knowledge of action, knowledge of explanation, and knowledge of source/reference. Knowledge types may differ in learning goal and content. For example, knowledge of orientation offers a point of reference to the learner, and, therefore, provides general information for a better understanding of the structure of interrelated structural elements. Each of the knowledge types is described in further detail below. Knowledge items 304 may be generated using a wide range of technologies, often allowing a browser (including plug-in applications) to be able to interpret and display the appropriate file formats associated with each knowledge item.
For example, markup languages (such as HTML, a standard generalized markup language (SGML), a dynamic HTML (DHTML), or XML), JavaScript (a client-side scripting language), and/or Flash may be used to create knowledge items 304. HTML may be used to describe the logical elements and presentation of a document, such as, for example, text, headings, paragraphs, lists, tables, or image references. Flash may be used as a file format for Flash movies and as a plug-in for playing Flash files in a browser. For example, Flash movies using vector and bitmap graphics, animations, transparencies, transitions, MP3 audio files, input forms, and interactions may be used. In addition, Flash allows a pixel-precise positioning of graphical elements to generate impressive and interactive applications for presentation of course material to a learner. Learning units 303 may be assembled using one or more knowledge items 304 to represent, for example, a distinct, thematically-coherent unit. Consequently, learning units 303 may be considered containers for knowledge items 304 of the same topic. Learning units 303 also may be considered relatively small in size (i.e., duration) though larger than a knowledge item 304. Sub-courses 302 may be assembled using other sub-courses 302, learning units 303, and/or knowledge items 304. The sub-course 302 may be used to split up an extensive course into several smaller subordinate courses. Sub-courses 302 may be used to build an arbitrarily deep nested structure by referring to other sub-courses 302. Courses may be assembled from all of the subordinate structural elements including sub-courses 302, learning units 303, and knowledge items 304. To foster maximum reuse, all structural elements should be self-contained and context free. Structural elements also may be tagged with metadata that is used to support adaptive delivery, reusability, and search/retrieval of content associated with the structural elements. 
For example, learning object metadata (LOM), perhaps as defined by the IEEE “Learning Object Metadata Working Group,” may be attached to individual course structure elements. The metadata may be used to indicate learner competencies associated with the structural elements. Other metadata may include a number of knowledge types (e.g., orientation, action, explanation, and resources) that may be used to categorize structural elements. As shown in FIG. 4, structural elements may be categorized using a didactical ontology 400 of knowledge types 401 that includes orientation knowledge 402, action knowledge 403, explanation knowledge 404, and resource knowledge 405. Orientation knowledge 402 helps a learner 104 to find their way through a topic without being able to act in a topic-specific manner and may be referred to as “know what.” Action knowledge 403 helps a learner to acquire topic related skills and may be referred to as “know how.” Explanation knowledge 404 provides a learner with an explanation of why something is the way it is and may be referred to as “know why.” Resource knowledge 405 teaches a learner where to find additional information on a specific topic and may be referred to as “know where.” The four knowledge types (orientation, action, explanation, and reference) may be further divided into a fine grained ontology. For example, orientation knowledge 402 may refer to sub-types 407 that include a history, a scenario, a fact, an overview, and a summary. Action knowledge 403 may refer to sub-types 409 that include a strategy, a procedure, a rule, a principle, an order, a law, a comment on law, and a checklist. Explanation knowledge 404 may refer to sub-types 406 that include an example, an intention, a reflection, an explanation of why or what, and an argumentation. Resource knowledge 405 may refer to sub-types 408 that include a reference, a document reference, and an archival reference.
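The didactical ontology of FIG. 4 can be encoded as a simple lookup table, with the four knowledge types mapped to their fine-grained sub-types. The dictionary layout and helper function below are assumptions for illustration, not a prescribed format.

```python
# Minimal encoding of the didactical ontology 400 described above:
# four knowledge types (402-405), each with its fine-grained sub-types.
ONTOLOGY = {
    "orientation": ["history", "scenario", "fact", "overview", "summary"],      # 402, "know what"
    "action": ["strategy", "procedure", "rule", "principle",                    # 403, "know how"
               "order", "law", "comment on law", "checklist"],
    "explanation": ["example", "intention", "reflection",                       # 404, "know why"
                    "explanation of why or what", "argumentation"],
    "resource": ["reference", "document reference", "archival reference"],      # 405, "know where"
}

def knowledge_type_of(sub_type):
    """Map a fine-grained sub-type back to its top-level knowledge type."""
    for ktype, sub_types in ONTOLOGY.items():
        if sub_type in sub_types:
            return ktype
    return None
```

A content player could use such a mapping to classify a knowledge item's sub-type when applying a micro-strategy that filters or reorders by knowledge type.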
Dependencies between structural elements may be described by relations when assembling the structural elements at one aggregation level. A relation may be used to describe the natural, subject-taxonomic relation between the structural elements. A relation may be directional or non-directional. A directional relation may be used to indicate that the relation between structural elements is true only in one direction. Directional relations should be followed. Relations may be divided into two categories: subject-taxonomic and non-subject taxonomic. Subject-taxonomic relations may be further divided into hierarchical relations and associative relations. Hierarchical relations may be used to express a relation between structural elements that have a relation of subordination or superordination. For example, a hierarchical relation between the knowledge items A and B exists if B is part of A. Hierarchical relations may be divided into two categories: the part/whole relation (i.e., “has part”) and the abstraction relation (i.e., “generalizes”). For example, the part/whole relation “A has part B” describes that B is part of A. The abstraction relation “A generalizes B” implies that B is a specific type of A (e.g., an aircraft generalizes a jet or a jet is a specific type of aircraft). Associative relations may be used to refer to a kind of relation of relevancy between two structural elements. Associative relations may help a learner obtain a better understanding of facts associated with the structural elements. Associative relations describe a manifold relation between two structural elements and are mainly directional (i.e., the relation between structural elements is true only in one direction).
Examples of associative relations include “determines,” “side-by-side,” “alternative to,” “opposite to,” “precedes,” “context of,” “process of,” “values,” “means of,” and “affinity.” The “determines” relation describes a deterministic correlation between A and B (e.g., B causally depends on A). The “side-by-side” relation may be viewed from a spatial, conceptual, theoretical, or ontological perspective (e.g., A side-by-side with B is valid if both knowledge objects are part of a superordinate whole). The side-by-side relation may be subdivided into relations, such as “similar to,” “alternative to,” and “analogous to.” The “opposite to” relation implies that two structural elements are opposite in reference to at least one quality. The “precedes” relation describes a temporal relationship of succession (e.g., A occurs in time before B, and not that A is a prerequisite of B). The “context of” relation describes the factual and situational relationship on a basis of which one of the related structural elements may be derived. An “affinity” between structural elements suggests that there is a close functional correlation between the structural elements (e.g., there is an affinity between books and the act of reading because reading is the main function of books). Non-subject-taxonomic relations may include the relations “prerequisite of” and “belongs to.” The “prerequisite of” and the “belongs to” relations do not refer to the subject-taxonomic interrelations of the knowledge to be imparted. Instead, these relations refer to the progression of the course in the learning environment (e.g., as the learner traverses the course). The “prerequisite of” relation is directional whereas the “belongs to” relation is non-directional. Both relations may be used for knowledge items 304 that cannot be further subdivided.
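The directional/non-directional distinction above can be modeled with a small relation store: a directional relation holds only as stated, while a non-directional relation also holds in reverse. The sets and function below are a hedged sketch; the partition of relation types into the two sets reflects the examples given in the text, and the tuple-based store is an assumption.

```python
# Sketch of the relation model described above. Directional relations
# (e.g., "has part", "precedes", "prerequisite of") hold in one
# direction only; non-directional relations (e.g., "belongs to",
# "affinity") hold in both.
NON_DIRECTIONAL = {"side-by-side", "opposite to", "affinity", "belongs to"}

def related(relations, rel_type, a, b):
    """True if 'a rel_type b' holds in the given set of
    (rel_type, source, target) tuples."""
    if (rel_type, a, b) in relations:
        return True
    # Non-directional relations also hold with the elements swapped.
    return rel_type in NON_DIRECTIONAL and (rel_type, b, a) in relations

rels = {
    ("has part", "course A", "unit B"),    # hierarchical, directional
    ("belongs to", "item X", "item Y"),    # non-subject-taxonomic, non-directional
}
```

For instance, `related(rels, "has part", "unit B", "course A")` is false because "has part" is directional, while `related(rels, "belongs to", "item Y", "item X")` is true.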
For example, if the size of the screen is too small to display the entire content on one page, the page displaying the content may be split into two pages that are connected by the relation “prerequisite of.” Another type of metadata is competencies. Competencies may be assigned to structural elements, such as, for example, a sub-course 302 or a learning unit 303. The competencies may be used to indicate and evaluate the performance of a learner as learner 104 traverses the course material. A competency may be classified as a cognitive skill, an emotional skill, a senso-motorical skill, or a social skill. As described in more detail below, learner 104 may choose between one or more learning strategies to determine which path to take through course 301. As a result, the progression of learners 104 through the course 301 may differ. Learning strategies may be created using macro-strategies and micro-strategies. Learner 104 may select from a number of different learning strategies when taking course 301. These learning strategies may be selected at run time of the presentation of course content to learner 104. As a result, course authors 106 may be relieved from the burden of determining a sequence or an order of presentation of the course material. Instead, developers 106 may focus on structuring and annotating the course material. In addition, authors 106 may not be required to apply complex rules or Boolean expressions to domain models, thus minimizing or reducing the training necessary to use the system. Further, the course material may be easily adapted and reused to edit and create new courses. Macro-strategies are used in learning strategies to refer to the coarse-grained structure of a course (i.e., the organization of sub-courses 302 and learning units 303). The macro-strategy determines the sequence that sub-courses 302 and learning units 303 of a course 301 are presented to the learner.
For example, content player 208 uses the macro strategy to determine the sequence in which learning objects are displayed in the browser. For example, the following macro strategies may be used in content player 208: i) table of contents; ii) deductive; iii) inductive; and iv) SCORM. The table of contents strategy uses the table of contents of the learning net, ignoring relationships. This means that the learning objects are displayed in the sequence in which they are arranged in the learning net. The deductive strategy generally arranges the objects from general to specific using the hierarchical relationships between learning objects; i.e., learner 104 can work through the hierarchy from the top down. The inductive strategy goes from specific to general: learner 104 also works through the hierarchy from the top down, but only a brief orientation is given for each learning object on the way down, for all elements down to the lowest; learner 104 then works his way up through the hierarchy again with the desired level of detail. The SCORM strategy is used for learning nets from external learning management systems that are imported as SCORM courses (the data format used for the exchange). Of course, these are merely example macro-strategies and any appropriate learning strategies may be used including none, some, or all of the foregoing examples. Micro-strategies, implemented by the learning strategies, target the learning progression within a learning unit. The micro-strategies determine the order that knowledge items of a learning unit are presented. Micro-strategies refer to the attributes describing the knowledge items. Examples of micro-strategies include “orientation only”, “action oriented”, “explanation oriented”, “orientation oriented”, “table of contents”, “initial orientation”, “task oriented”, “example oriented”, and SCORM.
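Two of the strategies above lend themselves to short sequencing sketches: the deductive macro-strategy walks the hierarchy top-down (general before specific), and an "action oriented" micro-strategy puts action knowledge first while keeping all other items in their natural order. Both functions and the data layout are hypothetical illustrations, not the content player's actual implementation.

```python
# Deductive macro-strategy: traverse the learning net hierarchy from the
# top down, yielding general elements before their more specific children.
def deductive_order(node):
    yield node["name"]
    for child in node.get("children", []):
        yield from deductive_order(child)

# "Action oriented" micro-strategy: action knowledge first; the stable
# sort leaves all other knowledge items in their natural order.
def action_oriented(items):
    return sorted(items, key=lambda i: i["knowledge_type"] != "action")

net = {"name": "Flight", "children": [
    {"name": "Aircraft", "children": [{"name": "Jet"}]}]}
print(list(deductive_order(net)))  # ['Flight', 'Aircraft', 'Jet']
```

Because the strategy is applied at run time, the same annotated course structure yields different presentation orders for different learners, which is what relieves authors 106 of sequencing decisions.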
The micro-strategy “orientation only” ignores all knowledge items that are not classified as orientation knowledge. The “orientation only” strategy may be best suited to implement an overview of the course. The micro-strategy “action oriented” first picks knowledge items that are classified as action knowledge. All other knowledge items are sorted in their natural order (i.e., as they appear in the knowledge structure of the learning unit). The micro-strategy “explanation oriented” is similar to action oriented and focuses on explanation knowledge. For example, “explanation oriented” may display explanatory knowledge to start with and then other knowledge types. The micro-strategy “orientation oriented” is similar to action oriented and focuses on orientation knowledge. The micro-strategy “table of contents” operates like the macro-strategy table of contents (but on a learning unit level). The micro-strategy “initial orientation” displays orientation knowledge to start with and then other knowledge types. The micro-strategy “task-oriented” displays practical instruction/action to start with and then other knowledge types. The micro-strategy “example oriented” displays example knowledge to start with and then other knowledge types. The micro-strategy SCORM is the strategy used with the corresponding SCORM macro strategy. FIG. 5 illustrates a system 500 for presenting course content, perhaps using the foregoing strategies. Although the illustrated components are logically organized into groups for discussion purposes, components may be, for example, distributed on one or more computing devices connected by one or more networks, shared memory, inter-processor communication channels, or other suitable means. There may be more or fewer components without departing from the scope and spirit of this disclosure. An individual component's functionality can be distributed on one or more computing devices. For example, FIG.
2 illustrates LMS 140 including, among other things, content player 208 and learning portal 204. This example learning portal 204 allows learner 104 to start a new course or continue a previously initiated course using content player 208. Both the learning portal 204 and the content player 208 can access learning content 220 (provided by developer 106 or content provider 108) through objects or services in a runtime environment 502. Learning content can be persisted in any number of ways including, for example, with one or more files, databases, repositories, virtual repositories, and content management systems. Content player 208 can include, present, or otherwise utilize user interface 116 to present representations of content including, for example, HTML, SGML, DHTML, XML, JavaScript, or Flash. By way of illustration, runtime environment 502 can provide the ability to create one or more processes, threads or other units of execution with access to local and remote resources/services (e.g., virtual memory, threads, processes, web services, user interface 116, learning content 220, and content provider 108), intra-process and inter-process communication facilities, authorization and authentication services, and/or exception handling. Runtime environment 502 can be distributed across one or more computing devices. In one implementation, the runtime environment 502 is the Java 2 Platform, Enterprise Edition (J2EE). In a more detailed embodiment, the user interface 116 is communicably coupled to a presentation component 616 which is part of content player 208. Content player 208 includes, references, or otherwise presents a number of interface elements available for displaying or managing a course. For example, the topic of the course or learning object is displayed in the header in the upper part of the screen. The content of the learning object may be displayed in the center screen area.
The navigation bar is located in the lower screen area and allows learner 104 to activate all of the navigation functions for a course. For example, content player 208 may present the following navigation controls or interface elements: back, next, table of contents, path, and others. In this example, the back interface element allows learner 104 to go back to the previous learning object. Navigation steps from other sessions are also often taken into account at this point. The next interface element allows learner 104 to move on to the next object. This navigation may happen in a time axis according to the selected learning strategy. In short, if learner 104 navigates using “back,” then content player 208 goes back in the navigation history. If learner 104 then navigates again with “next,” then content player 208 returns to the learning content that was most recently edited. The “table of contents” element displays an overview of the content of a course. In the dialog box, the learning objects are presented in the sequence in which the author created them. This view is often independent of the learning strategy selected. In the table of contents, entries that learner 104 can access are usually highlighted in some fashion. Access to entries depends on the learner's completion status and the learning strategy selected. The path interface element may allow learner 104 to know where he is in this course. A dialog box may appear with an overview of the course. The overview may depend on the learning strategy selected. If the course was started from learning portal 204, then the learning strategy selected in the portal is displayed in the dialog box. If the course was started from authoring environment 210, then the selected macro strategy (perhaps with its corresponding micro strategy) may be displayed in the dialog box. In the upper part of the dialog box, the system displays the instructional elements or other content in the current learning object.
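The back/next behavior described above resembles a browser-style navigation history. A minimal sketch (our own illustration, not from the disclosure; a real content player would also consult the learning strategy when moving past the end of the history):

```python
class NavigationHistory:
    """Tracks visited learning objects so that 'back' steps backwards
    through the history and 'next' after one or more 'back' steps returns
    toward the most recently visited object."""

    def __init__(self):
        self.visited = []   # learning objects in the order they were seen
        self.pos = -1       # index of the object currently displayed

    def visit(self, obj):
        # Visiting a new object truncates any forward history.
        self.visited = self.visited[: self.pos + 1]
        self.visited.append(obj)
        self.pos += 1

    def back(self):
        if self.pos > 0:
            self.pos -= 1
        return self.visited[self.pos]

    def next(self):
        if self.pos < len(self.visited) - 1:
            self.pos += 1
        return self.visited[self.pos]

h = NavigationHistory()
for obj in ["overview", "lesson-1", "lesson-2"]:
    h.visit(obj)
print(h.back())   # lesson-1
print(h.back())   # overview
print(h.next())   # lesson-1 (heading back toward the latest object)
```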
In the lower part of the screen, learning objects and learning nets that are in the environment of the current path are displayed on a dark background. As above, elements that learner 104 has already displayed or completed and the learning object currently in process may be flagged accordingly. Other example interface elements include settings, print, help, and log off. The settings element allows learner 104 to switch learning strategies or reset progress already attained in a course. The print element prints out the content of a course. In certain cases, the print element only prints the page that is currently displayed. The navigation and the path may not be printed, as appropriate. The help interface element presents a dialog box that displays learner-specific help for, inter alia, navigating in the content player 208. The example log off element allows learner 104 to log off in a controlled fashion. In this case, the achieved learning objectives are entered in the learner account and the system saves the point at which the course was interrupted to help ensure that the learner can resume at the same point. After logging off, the dialog box is usually closed if the course was started from the portal and has been fully completed. Returning to the illustrated embodiment, presentation component 616 interacts with the user interface 116 to manifest course content in a presentation 626 (such as that illustrated in FIGS. 6A-D) and to receive user input. The presentation component 616 receives the content 608 from content player 208, a web service, cache file, or other suitable source. The content 608 can include discrete or streaming portions and may include one or more content representations. In one embodiment, the presentation component 616 is a web browser or other suitable application, or a proxy for such. In presenting content 608, the presentation component 616 can also present one or more user interface elements 622 in a footer area.
The presentation component can also present a header area 625 for displaying messages or status. In certain cases, content 608 may include its own interface elements 624 that are in addition to or redundant with those presented by content player 208. See FIG. 6A. The user interface elements 622 (or “navigation controls”) allow a user to navigate the course content 608, for example. Course navigation can be set up at runtime on the basis of a learning strategy (e.g. stored in the learner account). The navigation controls 622 are separate from, and may be in addition to, any such elements that are presented as part of the content presentation 626. For example, if the content representation 608 includes a Flash presentation, the presentation 626 can include its own user interface elements 624 for controlling the presentation, apart from the navigation controls 622. The back button 628 user interface element allows the user to move backwards through the course. A continue button 630 alternately allows the user to pause and resume the course. Selecting a table of contents button 632 presents a table of contents for the course which can allow the user to directly navigate to a section of interest. A path button 634 allows the user to view their current path through a course. A settings button, a print button, a help button 636, and a log off button 638 enable the user to change settings for the content player, print some or all of the course content, invoke a help system, and log off from the content player, respectively. The foregoing descriptions of user interface elements 622 are merely examples of possible implementations. Many other types of user interface elements and configurations are possible within the scope of the present disclosure. Referring again to FIG. 5, presentation component 616 includes a content processor 602 which can process content 608.
Content processor 602 can perform any processing of the content 608 to create or update the content presentation 626. In one embodiment, content processor 602 parses one or more portions of content 608, where the content can include, without limitation, HTML, SGML, DHTML, XML, JavaScript, Flash, or other suitable formats. In one embodiment, content processor 602 can be implemented as a web browser plug-in or an applet. Content processor 602 can modify a model 604 of the user interface 116 in order to affect the user interface 116. The model 604 can be a hierarchical representation of the components comprising the rendered user interface 116. Changes to the model 604 can be automatically reflected in the rendered user interface 116. In one implementation, the model 604 is a Document Object Model (DOM). Content 608 can contain commands, metadata or other information that can be used to trigger modification of user interface 116, including modification (such as disabling, triggering, moving, resizing, and others) of navigation controls 622, header 625 and other interface elements. By way of a non-limiting illustration, a command can be one or more statements in a programming language, metadata, an identifier, or other information that is part of the content 608. In one embodiment, the commands are JavaScript statements that cause the invocation of JavaScript functions. Table 1 is an exemplary listing of JavaScript functions that are included in services 606 and can be invoked from the content processor 602. In one implementation, these functions cause modification of the model 604.

TABLE 1

FUNCTION NAME      DESCRIPTION
HideFooter         Hides the navigation controls 622 in user interface 116 and thus puts the content in charge of navigating the course.
ShowFooter         Shows the navigation controls 622 in user interface 116.
HideHeader         Hides the header bar 625 in user interface 116.
ShowHeader         Shows the header bar 625 in user interface 116.
NavigateForward    Triggers content navigation to present the next instructional component or portion of a course in user interface 116.
NavigateBackward   Triggers content navigation to present the previous component or portion of a course in user interface 116.
Logout             Triggers a log off from the content player 208.
OpenTOC            Triggers presentation of a table of contents 640 in user interface 116.
CloseTOC           Triggers closing of a table of contents 640 in user interface 116.
OpenSettings       Opens a settings window in user interface 116.
CloseSettings      Closes a settings window in user interface 116.
OpenPath           Opens a path window in user interface 116.
ClosePath          Closes a path window in user interface 116.
GetVersion         Acquires the current version of the system.
Get_XML_TOC_URL    Provides the URL that can be invoked to obtain the table of contents 640 in XML format.
NavigateToNode     Navigates to a specific course node or instructional element.

In one embodiment, these commands may be provided through services 606. The services can include functionality for creating, destroying, altering, enabling, disabling, hiding or showing navigation controls 622. The services 606 can include an application programming interface (API) or other suitable programmatic means for altering the model. This API may offer developer 106 the option of steering or modifying the user interface of the content player from the content, by providing a number of functions that can be executed dynamically at runtime from the content. For example, developer 106 might decide to hide the navigation bar (as illustrated in FIG. 6B) or the header completely (as illustrated in FIG. 6C). In another example, learner 104's progress through a course, or its table of contents, may be steered by the content (as illustrated in FIG. 6D). Equally, developer 106 can steer from the content, thus automatically guiding learner 104 through the course without some or all of the ability to influence the sequence.
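A rough Python sketch of how such content commands can drive the services and the model (the real functions are the JavaScript services of Table 1 acting on a DOM; the names, the dictionary model, and the token-scanning command parser here are our own illustration):

```python
class UserInterfaceModel:
    """Stand-in for model 604: a dictionary of interface elements and
    their visibility, playing the role the DOM plays in the real system.
    Changes here would be reflected in the rendered user interface."""
    def __init__(self):
        self.elements = {"footer": True, "header": True, "toc": False}

def make_services(model):
    """Stand-ins for a few of the Table 1 functions; each service
    mutates the model."""
    return {
        "HideFooter": lambda: model.elements.update(footer=False),
        "ShowFooter": lambda: model.elements.update(footer=True),
        "HideHeader": lambda: model.elements.update(header=False),
        "OpenTOC":    lambda: model.elements.update(toc=True),
        "CloseTOC":   lambda: model.elements.update(toc=False),
    }

def process_content(content, services):
    """Sketch of content processor 602: scan the content for known
    command names and invoke the matching service for each one, in order."""
    for token in content.split():
        name = token.strip("();")
        if name in services:
            services[name]()

model = UserInterfaceModel()
services = make_services(model)
# Content that hides the navigation bar and opens the table of contents,
# as in FIGS. 6B and 6D.
process_content("<p>lesson text</p> HideFooter(); OpenTOC();", services)
print(model.elements)  # {'footer': False, 'header': True, 'toc': True}
```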
These services 606 can be invoked by the content processor 602 before, during or after processing the content 608 and in accordance with commands encountered in the content 608. By way of a non-limiting illustration, the services can be used to hide navigation controls 622 when user interface elements 624 are present, or perform navigation of the content without requiring presentation of the navigation controls 622. Doing so may prevent users from becoming confused by the presentation of two sets of controls (622, 624). FIGS. 6B-D illustrate the user interface 116 after various content commands have been executed to disable, hide or remove certain interface elements. Also illustrated in the user interface 116 is a table of contents 640 created as a result of the command(s), which could have alternatively been presented if the user had selected button 632. By way of further illustration, commands in the content 608 can automatically trigger navigation of course content without requiring the navigation controls 622 to be presented. For example, when a user completes a course, command(s) in the content can trigger navigation of user interface 116. The presentation component 616 is communicably coupled to content player 208 or a proxy for such. A content interface 612 in the content player 208 allows the content player 208 to access learning content 220, content provider 108, or other sources of content. The content interface 612 presents a uniform interface to the content player 208 regardless of the particulars of the learning content 220. The content interface 612 can include, by way of illustration, an API or other suitable means that implements or uses a communication protocol between the content player 208 and the learning content 220. A strategy selector component 610 of the content player 208 can be used to select a learning strategy.
As described above, content metadata may be interpreted to present a course to learner 104 according to a learning strategy selected at run time. Metadata can classify and describe structural course content elements, assign attributes to structural elements, assign relations between structural elements, and build a subject-taxonomic course structure. A lesson planner component 614 can tailor content retrieved through the content interface 612 to the needs of an individual learner 104 and establish course navigation according to the learning strategy. The lesson planner provides course content and navigation info 608 to the presentation component 616. In one embodiment, the course content can contain one or more commands to modify the user interface elements 622. FIG. 7 is a flow diagram 700 of processing in accordance with various embodiments. Content 608 is loaded into content player 208 (step 702). The content includes one or more representations of a course of study, and one or more commands for modifying the user interface 116. Content processor 602 identifies the one or more commands in the content (step 704). User interface 116, such as that presented by content player 208, is modified in accordance with the commands (step 706). In one embodiment, the commands are used to invoke services 606 for modifying a model 604 of the user interface 116. A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, as described earlier, such content control techniques may be implemented in various systems, such as enterprises or home user computers, presenting or developing any suitable content. Indeed, such content may not be related to “learning,” but may instead involve entertainment and sports, presentations, marketing, or any other suitable media or environment.
Accordingly, other embodiments are within the scope of the following claims.
Lab 1: Your first program Programming Rules There are two things we can ask Python to do: simplify an expression to get a value, or store a value (perhaps created by simplifying an expression) into a variable. Type the following code into the Python shell (the bottom window in Thonny): 1 + 2 Python will simplify that expression and show you the value: 3. Next, try this: x = 1 + 2 This time, Python doesn't show you the result, because it stored the result into the variable x instead. To see the value of x, just type x and then hit enter. Next, try these: 1 + 2 = x x + y = 4 x = y + 4 None of these commands will work, but x = 1 + 2 did. With your partner, discuss: what do you think the limitations of Python are? Why might those limitations be in place? Click here to show the answers Python can only assign a value to one variable at once, and it must be on the left-hand side of the equals sign. Other variables can appear on the right, but they must be variables that have already been assigned values. These limitations ensure that when Python is simplifying an expression, there aren't any variables left over: every variable in the expression can be replaced with a value, so the whole expression can be reduced down to a single number (or some other kind of value). Python only has to keep track of the current value of each variable, and it never has to remember an equation. Now, try this: y = 5 x = y + 4 If you ask for the value of x, it should give you 9. Python can simplify, as long as all variables are known in advance. What happens if we change the value of y now? y = 10 With your partner, discuss what you think the value of x will be after running this statement, and why. Click here to show the answer What we said above gives a clue: Python remembers values, but not equations. So Python doesn't remember that the value of x is y + 4, it just remembers that the value is 9. When we change y, Python doesn't update x, and its value will still be 9.
This is an important difference between mathematical equations and programming statements. Next, try this, but before you hit enter, discuss with your partner: Will this work, and if so, what would be the value of y afterwards? y = y + 1 Click here to show the answer Indeed, this does work, and sets the value of y to 11 (or if you run it again, 12, and then 13, etc.). Python first simplifies the right-hand side using the old value of y, and then updates y to get the new value. There are a variety of operators available to use in expressions (try these): 2 - 3 3 * 4 15 / 5 And a few special operators: 5 // 4 12 % 10 3 ** 2 Try different values with //, %, and **. What do these operators do? Click here to show the answer // does division, but rounds down to the nearest integer. % computes the remainder after division, which is also called the modulus. ** does exponentiation, raising the number before to the power of the number after. Functions In addition to mathematical operations like +, Python can simplify functions, and it comes with some built-in. Just like in math, we write the name of the function, then a pair of parentheses, and then the input values for the function (which we call "arguments" in programming). One of the built-in functions is print, which causes the program to display a value. So far, we've been writing code in the shell, and whenever we write an expression, Python shows us the result, but when we created a variable, we had to ask for the value separately. When we write a whole program however, Python won't show us anything unless we ask it to. Create a new file in Thonny, save it as lab01.py, and copy/paste the following code into the file; then run it: x = 1 + 2 y = x + 4 print("x + y is:") print(x + y) This program simplifies two expressions to create two variables, which hold the values 3 and 7. 
Then, it simplifies two more expressions which use the print function to display values (code in a program runs from top to bottom, one line at a time). The output should look like this: x + y is: 10 Why do you think Python replaced x + y with 10 in the second print, but not the first one? Click here to show the answer In the first line, we used quotation marks. This tells Python to use that text as-is, without simplifying it. On the second line, we didn't, so Python interpreted x + y as an expression to be simplified. Working with text We can define text within a program by surrounding it with quotation marks. Pieces of text (a.k.a. strings) can be stored in variables, and we can also use '+' to combine them. Here's a simple program that uses text (you can replace your current program with this code): name = 'Peter' print("Welcome to CS111," + name) How could you use your own name instead of 'Peter' in this program? What's the issue with the output, and how can we fix it? Try to make these changes yourself. Click here to show the answers On the first line, put your name in quotes instead of Peter. The issue with the output is that there's no space between the comma and the name. To fix this, we can simply add a space inside the quotation marks, just after the comma, so that when we add the strings together, the space ends up in the middle. In addition to using '+' to combine strings, we can also use '*' with a string and an integer. What do you think this code will do? print('HA'*8) Click here to show the answer When we use * with text and an integer, Python will repeat the text that many times to make a longer piece of text. In this case, the result is "HAHAHAHAHAHAHAHA". There's one more important built-in function that works with text: input. Try this code in your file: name = input("What is your name? ") print("Hi, " + name) Now, when you run the program, Python pauses after displaying "What is your name? ". Try typing something and hitting Enter.
What do you think is the significance of the input function? Click here to show the answer input lets us write programs that do something different each time we run them, based on what is typed by whoever runs the program. Without input, anyone using the program would have to modify its code to change what it does, but with input, only the original programmer needs to know how to program. Next, try this program, but before you run it, try to predict what it will do: result = input("First number: ") + input("Second number: ") print("The result is: " + result) Click here to show the explanation This program begins by displaying the text "First number: " and waiting for an answer. Then it gets an answer to the prompt "Second number: " and combines those two answers into one value. But it treats them as text, not as numbers, even if you type in a number! That's because Python keeps track of the type of each variable, and processes things differently depending on their types. Text and Numbers What happens if we try to add a string and a number together? Let's try it (in the shell): "two" + 2 We get an error: Traceback (most recent call last): File "<pyshell>", line 1, in <module> TypeError: can only concatenate str (not "int") to str Discuss with your partner: What do you think this error means? Click here to show the answer "Concatenate" means "add together", and "int" and "str" are abbreviations for "integer" and "string." Python is trying to tell us that it doesn't know how to combine a piece of text with a number into a single result using '+'. Now try these expressions (can you guess what the results will be first?): "two" + "two" 2 + 2 "2" + "2" We need to keep track of types as we program, and we can use the built-in function named type to help. Try these expressions: type(2) type("two") type(2.5) We can also use conversion functions to convert some values between types. 
Try these: str(5) int('5') int('five') int('五') Note that Python has a very limited ability to convert strings into integers (only Arabic digits work). Finally, try these expressions in the shell: len('hi') len('hello') Can you and your partner guess what the len function does? Click here to show the answer len measures the length of a piece of text, in terms of the number of letters (and spaces, digits, etc.). The result is always an integer. Task 1: Underlining text To practice working with expressions and variables, you'll write a program that does something simple: it asks the user for input, and then prints out that input, with a second line of dashes below that has the same length to serve as an "underline." Use input, print, and len to accomplish this. Work together with your partner, with one person writing code and the other person suggesting strategies and spotting typos. Here are three examples of how the program should work: Example #1: Enter a string: Hi Hi -- Example #2: Enter a string: Lovely day today Lovely day today ---------------- Example #3: Enter a string: Sometimes you feel like a nut Sometimes you feel like a nut ----------------------------- Exploration Questions You may already know the basics of programming from previous experience. If you and your partner are finished with this part of the lab early, you can move on to the next part, but if you're curious, here are a few questions to stretch your thinking about basic operators and variables (if this is your first day programming, don't worry about these!): 1. What is the shortest possible string in Python? What is its length? How would you write it? Click here to show the answer The "empty string" can be created by writing two quotations marks next to each other, like this: ''. Its length is zero. 2. We saw an example of an error that happened when trying to add together a string and a number. Is it possible to create an error using just numbers and mathematical operators? 
If so, what's an expression that would do that? Click here to show the answer Some mathematical operations aren't well-defined, and we can get errors for those. For example, dividing by zero doesn't make sense, so the expression 5 / 0 will result in an error. By the same logic, the integer division and remainder operators will cause an error if their right-hand operand is 0. 3. print is a function that causes stuff to be displayed when you simplify it, which we call a "side effect," but like every function, it also has a result value. However, unlike most results, we won't see it show up if we just type print('hi') in the shell. What is the result value of print, and what happens if we store it in a variable? Click here to show the answer The result of print is a special value called None, which is a way of indicating that even though there needs to be some result (because every function must have a result value when simplified), there isn't really anything meaningful. By default, this value is not displayed in the shell. If you try typing 5 in the shell, it will show you that 5 is the result, but if you type in None (no quotation marks), you will just see another prompt without any result displayed. If we store it in a variable, we won't be able to use that variable in other operations. For example: m = "hello" print(m * 2) x = print("hello") print(x * 2) The first two lines here work fine, and print hellohello. But while the third line actually prints hello, the fourth line will crash, since x will have the value None rather than "hello". 4. What's a piece of text that you can't create using single quotes? How about using double quotes? Are there strings you can't create using either double or single quotes? Click here to show the answer A piece of text with a contraction that uses a single quote can't be enclosed in single quotes, since the quote mark inside will be interpreted as the end of the string. For example, you can write "This doesn't work."
using double quotes, but not single quotes. By the same logic, a piece of text that should contain a double quote, like 'She said "Hi!"' can't be created using double quotes. And if you wanted to use both a single quote mark and a double quote mark in the same string, neither type of quote would work. Python allows something called "triple quotes" to get around this, where you use three of the same kind of quotation mark in a row to start and end a string. In addition, triple-quoted strings can contain multiple lines of text, which isn't allowed with normal single or double quotes. 5. In general, Python will simplify expressions so fast that it basically takes no time at all. But can you think of an expression that would take a while for the computer to evaluate? Click here to show the answer There are several ways to force Python to spend some time on an operation. One example: compute an extremely large number (like 2 ** 1000000). Another example: ask Python to create a huge number of repetitions of a piece of text (like 'duck' * 10000000). Note that if you try something too large you could be waiting minutes, hours, or even days for an answer, and Python might crash if it runs out of memory in the attempt. You can click the stop button (or hold the control key and press 'C') to force Python to stop what it's doing if you need to.
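For reference, here is one possible approach to Task 1, written as a function so the underlining logic is separate from reading input (in the lab program you would pass input('Enter a string: ') instead of a fixed string). This is just one sketch; any program producing the same output is fine:

```python
def underlined(text):
    # The underline is a row of dashes exactly as long as the text,
    # which is what len gives us.
    return text + "\n" + "-" * len(text)

print(underlined("Lovely day today"))
# Lovely day today
# ----------------
```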
[Back to DELPHI SWAG index]  [Back to Main SWAG index]  [Original]

This code does it, but the serial number is not included in the ShowMessage displays. This is for Delphi 2.

function TForm1.ShowDiskData(Drive: string): string;
var
  VolSer         : DWord;
  SysFlags       : DWord;
  DSize, DFree   : integer;
  NamLen, SysLen : integer;
  Buf            : string;
  VolNameAry     : array[0..255] of char;
  VolNameStr     : String;
  LW             : byte;
begin
  { get Disk name (volume id) and serial number }
  if (Length(Drive) >= 3) then
    Buf := Copy(Drive, 1, 3)
  else
    Buf := '';
  NamLen := 255;
  SysLen := 255;
  (*
  function GetVolumeInformation(lpRootPathName: PChar;
    lpVolumeNameBuffer: PChar; nVolumeNameSize: DWORD;
    lpVolumeSerialNumber: PDWORD;
    var lpMaximumComponentLength, lpFileSystemFlags: DWORD;
    lpFileSystemNameBuffer: PChar;
    nFileSystemNameSize: DWORD): BOOL; stdcall;
  *)
  if GetVolumeInformation(pChar(Buf), VolNameAry, NamLen, @VolSer,
                          SysLen, SysFlags, nil, 0) then
    VolNameStr := StrPas(VolNameAry)
  else
    VolNameStr := '<no name>';
  ShowMessage('Volume name is: ' + VolNameStr);

  { get free disk space and size }
  LW := ord(upcase(Drive[1])) - 64;  { drive letter -> drive number: A=1, B=2, ... }
  DSize := DiskSize(LW);
  if (DSize <> -1) then
  begin
    DSize := disksize(LW) DIV 1024;
    DFree := diskfree(LW) DIV 1024;
    ShowMessage('Disk size = ' + IntToStr(DSize) + ' K');
    ShowMessage('Disk free = ' + IntToStr(DFree) + ' K');
  end;
end;
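For comparison, the size/free figures (though not the volume name or serial number, which come from the Win32 GetVolumeInformation call) can be obtained portably with Python's standard library. This sketch is our own addition, not part of the SWAG entry:

```python
import shutil

def disk_data_kb(path):
    """Return (size_kb, free_kb) for the volume containing path,
    mirroring the DSize/DFree values computed by the Delphi routine above."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.total // 1024, usage.free // 1024

size_kb, free_kb = disk_data_kb("/")
print("Disk size = " + str(size_kb) + " K")
print("Disk free = " + str(free_kb) + " K")
```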
AWS_IO_CONNECTION_REFUSED - Greengrass v2 IPC

Hi, I am trying to resolve an AWS_IO_SOCKET_CONNECTION_REFUSED error that I get when calling awsiot.greengrasscoreipc.connect() in a Docker component installed on Greengrass v2 (running in a Docker container). We are running Greengrass Core v2.4 in a Docker container on the Edge server. The greengrassDataPlanePort and mqtt.port of the Nucleus are set to 443 since other ports are not available on the server. I am passing in the environment values and mounting the /greengrass/v2 and /greengrass/v2/ipc.socket locations in the Docker run command for the component.

Stack Trace:

2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. ipc_client = awsiot.greengrasscoreipc.connect(). {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. File "/usr/local/lib/python3.8/site-packages/awsiot/greengrasscoreipc/init.py", line 65, in connect. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. connect_future = connection.connect(lifecycle_handler). {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. File "/usr/local/lib/python3.8/site-packages/awsiot/eventstreamrpc.py", line 449, in connect. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. raise e. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. File "/usr/local/lib/python3.8/site-packages/awsiot/eventstreamrpc.py", line 437, in connect. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. protocol.ClientConnection.connect(. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. File "/usr/local/lib/python3.8/site-packages/awscrt/eventstream/rpc.py", line 301, in connect. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. _awscrt.event_stream_rpc_client_connection_connect(. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.846Z [WARN] (Copier) com.IpcTestComponent: stderr. RuntimeError: 1047 (AWS_IO_SOCKET_CONNECTION_REFUSED): socket connection refused.. {scriptName=services.com.IpcTestComponent.lifecycle.Run.Script, serviceName=com.IpcTestComponent, currentState=RUNNING}
2021-08-19T15:10:24.996Z [INFO] (Copier) com.IpcTestComponent: Run script exited.
{exitCode=1, serviceName=com.IpcTestComponent, currentState=RUNNING}

Recipe File:

RecipeFormatVersion: '2020-01-25'
ComponentName: com.IpcTestComponent
ComponentVersion: '0.0.16'
ComponentDescription: 'A test component that uses IPC'
ComponentPublisher: Amazon
ComponentDependencies:
  aws.greengrass.DockerApplicationManager:
    VersionRequirement: ~2.0.0
  aws.greengrass.TokenExchangeService:
    VersionRequirement: ~2.0.0
ComponentConfiguration:
  DefaultConfiguration:
    accessControl:
      aws.greengrass.ipc.pubsub:
        "com.IpcTestComponent:pubsub:1":
          policyDescription: Allows access to publish to queue topics
          operations:
            - "aws.greengrass#PublishToTopic"
          resources:
            - "*"
      aws.greengrass.SecretManager:
        "com.IpcTestComponent:secrets:1":
          policyDescription: Allows access to secrets
          operations:
            - "aws.greengrass#GetSecretValue"
          resources:
            - "*"
Manifests:
  - Platform:
      os: all
    Lifecycle:
      Install:
        Script: docker pull <container_artifactory_url>
        RequiresPrivilege: true
      Run:
        Script: 'docker run --privileged --rm -v /greengrass/v2:/greengrass/v2 -v /greengrass/v2/ipc.socket:/greengrass/v2/ipc.socket -e AWS_REGION=$AWS_REGION -e SVCUID=$SVCUID -e AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT=$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT -e AWS_CONTAINER_AUTHORIZATION_TOKEN=$AWS_CONTAINER_AUTHORIZATION_TOKEN -e AWS_CONTAINER_CREDENTIALS_FULL_URI=$AWS_CONTAINER_CREDENTIALS_FULL_URI <some_container_name>'
        RequiresPrivilege: true

Docker Greengrass Core Config:

system:
  certificateFilePath: "/tmp/certs/device.pem.crt"
  privateKeyPath: "/tmp/certs/private.pem.key"
  rootCaPath: "/tmp/certs/AmazonRootCA1.pem"
  rootpath: "/greengrass/v2"
  thingName: "NDC-POC-GreengrassCore-Docker"
services:
  aws.greengrass.Nucleus:
    componentType: "NUCLEUS"
    version: "2.4.0"
    configuration:
      awsRegion: "us-east-2"
      iotRoleAlias: "GreengrassCoreTokenExchangeRoleAlias"
      iotDataEndpoint: "a15mvg240y6763-ats.iot.us-east-2.amazonaws.com"
      iotCredEndpoint: "c2gyu0pkrd8ivf.credentials.iot.us-east-2.amazonaws.com"
      mqtt:
        port: 443
      greengrassDataPlanePort: 443

Greengrass Core Docker run command:

sudo docker run --rm --init -it --name aws-iot-greengrass-manual -v ${PWD}/greengrass-v2-config-docker:/tmp/config/:ro -v ${PWD}/greengrass-v2-certs-docker:/tmp/certs:ro -v /var/run/docker.sock:/var/run/docker.sock --env-file .env -p 443 amazon/aws-iot-greengrass:latest &

Environment variables when running print(os.environ) in the component:

AWS_CONTAINER_AUTHORIZATION_TOKEN: A3H3L44ZARGNLGUT
AWS_CONTAINER_CREDENTIALS_FULL_URI: http://localhost:40550/2016-11-01/credentialprovider/
AWS_REGION: us-east-2
SVCUID: A3H3L44ZARGNLGUT
AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT: /greengrass/v2/ipc.socket

tyanaws asked 3 years ago (888 views)

2 Answers

Posting the resolution to this after working with AWS support to identify the root cause. Hoping this helps somebody in the future who runs into the same issue.

Root cause: A component needs to access the ipc.socket file on the Greengrass Core to use IPC functionality. The Greengrass v2 documentation explains how to provide this file to a Docker component by mounting the Greengrass Core root folder in the component's container (https://docs.aws.amazon.com/greengrass/v2/developerguide/run-docker-container.html#docker-container-ipc). However, this only appears to work when Greengrass Core is installed directly on the server (not in a Docker container). When Greengrass Core is run in a Docker container, the documented method to provide the ipc.socket file to the component attempts to provide the file from the host machine's file system rather than the file on the Greengrass Core Docker container. This causes the component's IPC connection to fail since it does not have access to the ipc.socket file it needs from the Greengrass Core container.

Solution: We need to make sure that the component has access to the ipc.socket file on the Docker container running the Greengrass Core.
One way to achieve this is to create a Docker volume for the Greengrass Core root folder that other Docker containers can access and mount. Below are the steps I used to do this.

1. Create a Docker volume. Example:

docker volume create gg-core-root

2. Mount the gg-core-root volume to the Greengrass Core root folder when running the Greengrass Core Docker container. This will allow components to access the ipc.socket file in the Core root folder through the gg-core-root volume. Example:

sudo docker run --rm --init -it --name aws-iot-greengrass-manual -v ${PWD}/greengrass-v2-config:/tmp/config/:ro -v ${PWD}/greengrass-v2-certs:/tmp/certs:ro -v gg-core-root:/greengrass/v2 -v /var/run/docker.sock:/var/run/docker.sock --env-file .env -p 443 amazon/aws-iot-greengrass:latest

3. In the component's recipe file, mount the gg-core-root volume when the component is run. Example:

docker run --rm --privileged -v gg-core-root:/greengrass/v2 -e AWS_REGION=$AWS_REGION -e SVCUID=$SVCUID -e AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT=$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT -e AWS_CONTAINER_AUTHORIZATION_TOKEN=$AWS_CONTAINER_AUTHORIZATION_TOKEN -e AWS_CONTAINER_CREDENTIALS_FULL_URI=$AWS_CONTAINER_CREDENTIALS_FULL_URI <container_name>

tyanaws answered 3 years ago

Hi, I am also running Greengrass in a Docker container with a mounted volume. A solution is also possible for the case where you would like to use a bind mount for your Greengrass data. My docker-compose.yml file for the greengrass service looks like this:

version: '3.9'
services:
  greengrass:
    init: true
    privileged: true
    image: public.ecr.aws/aws-iot-greengrass/aws-iot-greengrass-v2:2.5.3-0
    environment:
      - GGC_ROOT_PATH=/greengrass/v2
      - HOST_GREENGRASS_IPC_SOCKET=$GREENGRASS_MOUNT_PATH/ipc.socket
    volumes:
      - ./credentials/:/root/.aws/:ro
      - $GREENGRASS_MOUNT_PATH:/greengrass/v2/

where the GREENGRASS_MOUNT_PATH environment variable points to the location where the Greengrass data is bind-mounted.
In the cases where Greengrass is run in a Docker container, where the host and Greengrass share the same Docker host and Greengrass runs Docker containers of its own, one should approach the volumes from the perspective of the host. In those cases, when one would like to mount the ipc.socket, the volume path should be given with respect to the host, not the Greengrass container itself. Assuming that the Nucleus creates the IPC socket under /greengrass/v2/ipc.socket, the correct mounting would be with the Docker option -v $HOST_GREENGRASS_IPC_SOCKET:/greengrass/v2/ipc.socket. In a docker-compose file that would look like:

version: '3.9'
services:
  pubsub_test:
    image: <some image to run>
    privileged: true
    volumes:
      - $HOST_GREENGRASS_IPC_SOCKET:/greengrass/v2/ipc.socket
    environment:
      - SVCUID
      - AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT

Alternatively, you could mount the IPC socket at a custom path in the container you are running by setting both the AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT environment variable that is passed down to the container and the bind mount path at the same time, which would look like:

version: '3.9'
services:
  pubsub_test:
    image: <some image to run>
    privileged: true
    volumes:
      - $HOST_GREENGRASS_IPC_SOCKET:$SOME_CUSTOM_IPC_SOCKET_PATH
    environment:
      - SVCUID
      - AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT=$SOME_CUSTOM_IPC_SOCKET_PATH

sb answered 2 years ago
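Whichever mounting approach is used, a component can sanity-check that the socket is actually visible inside its container before calling connect(), which quickly separates a missing bind mount from a Nucleus-side refusal. A minimal Python sketch (the helper names are mine, not part of the Greengrass SDK; the environment variable name and default path come from the thread above):

```python
import os

def ipc_socket_path():
    """Return the IPC socket path Greengrass injects into the component."""
    return os.environ.get(
        "AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT",
        "/greengrass/v2/ipc.socket",
    )

def ipc_socket_mounted():
    """True if the socket file is visible inside this container."""
    return os.path.exists(ipc_socket_path())
```

If ipc_socket_mounted() returns False, the mount is wrong (the AWS_IO_SOCKET_CONNECTION_REFUSED case above); if it returns True but the connection is still refused, the problem is on the Nucleus side.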
useListContext

Whenever react-admin displays a List, it creates a ListContext to store the list data, as well as filters, pagination, sort state, and callbacks to update them. The ListContext is available to descendants of:

• <List>,
• <ListGuesser>,
• <ListBase>,
• <ReferenceArrayField>,
• <ReferenceManyField>

All descendant components can therefore access the list context, using the useListContext hook. As a matter of fact, react-admin's <Datagrid>, <FilterForm>, and <Pagination> components all use the useListContext hook.

Usage

Call useListContext in a component, then use this component as a descendant of a List component.

// in src/posts/Aside.js
import { Typography } from '@mui/material';
import { useListContext } from 'react-admin';

export const Aside = () => {
    const { data, isLoading } = useListContext();
    if (isLoading) return null;
    return (
        <div>
            <Typography variant="h6">Posts stats</Typography>
            <Typography variant="body2">
                Total views: {data.reduce((sum, post) => sum + post.views, 0)}
            </Typography>
        </div>
    );
};

// in src/posts/PostList.js
import { List, Datagrid, TextField } from 'react-admin';
import { Aside } from './Aside';

export const PostList = () => (
    <List aside={<Aside />}>
        <Datagrid>
            <TextField source="id" />
            <TextField source="title" />
            <TextField source="views" />
        </Datagrid>
    </List>
);

Return Value

The useListContext hook returns an object with the following keys:

const {
    // fetched data
    data, // an array of the list records, e.g. [{ id: 123, title: 'hello world' }, { ... }]
    total, // the total number of results for the current filters, excluding pagination. Useful to build the pagination controls, e.g. 23
    isFetching, // boolean that is true while the data is being fetched, and false once the data is fetched
    isLoading, // boolean that is true until the data is available for the first time
    // pagination
    page, // the current page. Starts at 1
    perPage, // the number of results per page. Defaults to 25
    setPage, // a callback to change the page, e.g. setPage(3)
    setPerPage, // a callback to change the number of results per page, e.g. setPerPage(25)
    hasPreviousPage, // boolean, true if the current page is not the first one
    hasNextPage, // boolean, true if the current page is not the last one
    // sorting
    sort, // a sort object { field, order }, e.g. { field: 'date', order: 'DESC' }
    setSort, // a callback to change the sort, e.g. setSort({ field: 'name', order: 'ASC' })
    // filtering
    filterValues, // a dictionary of filter values, e.g. { title: 'lorem', nationality: 'fr' }
    displayedFilters, // a dictionary of the displayed filters, e.g. { title: true, nationality: true }
    setFilters, // a callback to update the filters, e.g. setFilters(filters, displayedFilters)
    showFilter, // a callback to show one of the filters, e.g. showFilter('title', defaultValue)
    hideFilter, // a callback to hide one of the filters, e.g. hideFilter('title')
    // record selection
    selectedIds, // an array listing the ids of the selected rows, e.g. [123, 456]
    onSelect, // callback to change the list of selected rows, e.g. onSelect([456, 789])
    onToggleItem, // callback to toggle the selection of a given record based on its id, e.g. onToggleItem(456)
    onUnselectItems, // callback to clear the selection, e.g. onUnselectItems();
    // misc
    defaultTitle, // the translated title based on the resource, e.g. 'Posts'
    resource, // the resource name, deduced from the location. e.g. 'posts'
    refetch, // callback for fetching the list data again
} = useListContext();

Declarative Version

useListContext often forces you to create a new component just to access the list context.
If you prefer a declarative approach based on render props, you can use the <WithListContext> component instead:

import { WithListContext } from 'react-admin';
import { Typography } from '@mui/material';

export const Aside = () => (
    <WithListContext
        render={({ data, isLoading }) =>
            !isLoading && (
                <div>
                    <Typography variant="h6">Posts stats</Typography>
                    <Typography variant="body2">
                        Total views: {data.reduce((sum, post) => sum + post.views, 0)}
                    </Typography>
                </div>
            )
        }
    />
);

TypeScript

The useListContext hook accepts a generic parameter for the record type:

import { Typography } from '@mui/material';
import { List, Datagrid, TextField, useListContext } from 'react-admin';

type Post = {
    id: number;
    title: string;
    views: number;
};

export const Aside = () => {
    const { data: posts, isLoading } = useListContext<Post>();
    if (isLoading) return null;
    return (
        <div>
            <Typography variant="h6">Posts stats</Typography>
            <Typography variant="body2">
                {/* TypeScript knows that posts is of type Post[] */}
                Total views: {posts.reduce((sum, post) => sum + post.views, 0)}
            </Typography>
        </div>
    );
};

export const PostList = () => (
    <List aside={<Aside />}>
        <Datagrid>
            <TextField source="id" />
            <TextField source="title" />
            <TextField source="views" />
        </Datagrid>
    </List>
);

Recipes

You can find many usage examples of useListContext in the documentation.

Tip: <ReferenceManyField>, as well as other relationship-related components, also implement a ListContext. That means you can use a <Datagrid> or a <Pagination> inside these components!
Alex - 6 months ago - 28

Vb.net Question: Unable to make DataGridView Columns readonly

I am making an application where the user can enter and exit an "edit" state on different items. When entering the edit state, I want to enable certain columns on a DataGridView, and when they exit, disable them. The code below runs when the Boolean EditMode changes.

'Change ReadOnly to Not EditMode
'dgv.ReadOnly = Not EditMode 'Works
dgv.Columns("colCode").ReadOnly = Not EditMode 'Does not work
dgv.Columns("colText").ReadOnly = Not EditMode 'Does not work
dgv.Columns("colTarget").ReadOnly = Not EditMode 'Does not work
dgv.Columns("colCheck").ReadOnly = Not EditMode 'Does not work

When changing the entire DataGridView ReadOnly property, the grid becomes editable/not editable like I would expect it to, but I only want to enable 4 of the 6 columns. The column names are correct, and the logic is the same, but setting the columns individually is not changing the ReadOnly property and I am not able to edit the columns. Stepping through the debugger, when entering edit mode I can see dgv.Columns("colCode").ReadOnly = Not EditMode evaluate to dgv.Columns("colCode").ReadOnly = False, but stepping past, the ReadOnly property remains True.

Answer

If dgv.ReadOnly = True, then every column is forced to ReadOnly = True regardless of its own setting. So leave dgv.ReadOnly = False and set the ReadOnly property to True only on the columns that should not be editable.
Common and Natural Logarithms

You will learn to convert logarithms from one base to another.

We have already defined the logarithm with a generic base a. The number a can be any positive real number other than 1. When the base is the number e = 2.718281828459\cdots, the function is called the natural logarithm function. To distinguish it from logarithms in other bases, we denote the natural logarithm of the number x as follows: \ln x. As mentioned, this function is the logarithm of x in base e:

    \begin{equation*} y = \log_{e} x = \ln x \end{equation*}

In mathematics courses, the bases used most often are e and 10. When we use base 10, we say the logarithm is common (or vulgar). When we write \log x without indicating the base, it is understood that the base is 10:

    \begin{equation*} y = \log_{10} x = \log x \end{equation*}

These two bases are the ones programmed into scientific calculators. The key labeled \ln computes the natural logarithm, and the key labeled \log computes the common (base-10) logarithm.

A property that will be useful for converting a logarithm from one base to another is the following:

    \begin{equation*} \log_{a} N = \frac{\log_{b} N}{\log_{b} a} \end{equation*}

For example, to compute \log_2 32 we do:

    \begin{equation*} \log_{2} 32 = \frac{\ln 32}{\ln 2} = \frac{\log 32}{\log 2} = 5 \end{equation*}

which we can verify by hand. Note that the first fraction uses base e and the second uses base 10.
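The change-of-base property is also easy to verify numerically. Here is a short Python sketch (Python and its math module are my choice of tool here; the lesson itself uses a scientific calculator):

```python
import math

# Change of base: log_a N = log_b N / log_b a, for any valid base b.
def log_base(a, N):
    # Using natural logarithms (base e) as the intermediate base b.
    return math.log(N) / math.log(a)

# log_2 32 computed two ways, as in the worked example:
via_ln = math.log(32) / math.log(2)          # using base e
via_log10 = math.log10(32) / math.log10(2)   # using base 10
```

Both quotients give 5, since 2^5 = 32; any intermediate base b gives the same value because the base-b logarithms cancel.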
Title: SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR PLACING AN ASSET ON A THREE-DIMENSIONAL MODEL
Kind Code: A1

Abstract: Systems, methods, and computer-readable media are provided for placing an asset on a three-dimensional model. Each asset can be associated with a pivot point and with an asset normal. A contact point on the surface of a model where an asset is to be positioned may be identified, and a surface normal that may be perpendicular to the surface at the contact point may also be identified. Then, the asset can be placed on the model such that the position of the pivot point of the asset may coincide with the position of the identified contact point on the surface of the model, and such that the orientation of the asset normal may match the orientation of the identified surface normal.

Inventors: Goossens, Thomas (Paris, FR)
Application Number: 12/892636
Publication Date: 03/29/2012
Filing Date: 09/28/2010
Assignee: Apple Inc. (Cupertino, CA, US)
Primary Class:
Other Classes: 715/849
International Classes: G06F3/048

Related US Applications:
20070198250: INFORMATION RETRIEVAL AND REPORTING METHOD SYSTEM (August, 2007, Mardini)
20080155423: NETWORK MANAGEMENT SYSTEM INTEGRATED WITH PROVISIONING SYSTEM (June, 2008, Moran et al.)
20070044031: A Method, System and Computer Program Product for Rendering a Graphical User Interface (February, 2007, Kulp et al.)
20080222520: Event-Sensitive Content for Mobile Devices (September, 2008, Balakrishnan et al.)
20090231295: SYSTEM FOR CLASSIFYING GESTURES (September, 2009, Petit et al.)
20070255742: Category Topics (November, 2007, Perez et al.)
20040003344: Method for utilizing electronic book readers to access multiple-ending electronic books (January, 2004, Lai et al.)
20100050076: MULTIPLE SELECTION ON DEVICES WITH MANY GESTURES (February, 2010, Roth)
20100057842: DISPLAY PULLOUT (March, 2010, Priller et al.)
20070180377: Self-translating template (August, 2007, Gittelman et al.)
20060129927: HTML e-mail creation system, communication apparatus, HTML e-mail creation method, and recording medium (June, 2006, Matsukawa)

Claims:

What is claimed is:

1. A method for placing an asset on a three-dimensional model, wherein the asset comprises a pivot point and an asset normal, and wherein the model comprises a surface, the method comprising: displaying the three-dimensional model; identifying a contact point on a surface of the model at which to place the asset; identifying a surface normal extending perpendicular from the surface of the model at the identified contact point; positioning the pivot point of the asset at the identified contact point; and orienting the asset normal of the asset at the identified surface normal.

2. The method of claim 1, further comprising: receiving a selection of a new contact point; identifying a new surface normal extending perpendicular from the surface of the model at the identified new contact point; and re-placing the asset on the model using the new contact point and the new surface normal.

3. The method of claim 1, wherein: the surface normal extends perpendicular to a plane that is tangent to the surface of the model at the identified contact point.

4. The method of claim 1, further comprising: receiving an input identifying a position on a display; and defining an imaginary line extending from the identified position on the display, wherein the identifying the contact point comprises identifying the contact point at an intersection of the imaginary line and the surface of the model.

5.
The method of claim 4, wherein identifying the contact point further comprises: identifying the intersection of the imaginary line and the surface of the model that is closest to the display.

6. The method of claim 4, wherein the defining the imaginary line comprises: defining a line perpendicular to a plane of the display.

7. The method of claim 4, wherein the defining the imaginary line comprises: establishing an eye position of a user; and defining a line passing through the established eye position and through the identified position on the display.

8. An electronic device comprising: an input interface; a display; and control circuitry coupled to the input interface and the display, wherein the control circuitry is operative to: direct the display to display an avatar; receive from the input interface a selection of an asset to display on an external surface of the avatar, wherein the asset comprises a pivot point and an asset normal; receive from the input interface an input corresponding to a contact point on the external surface of the avatar; position the asset on the display by making the pivot point and the contact point coincide; and orient the asset on the display by matching the asset normal with a surface normal extending perpendicularly from the external surface at the contact point.

9. The electronic device of claim 8, wherein the control circuitry is further operative to: receive from the input interface a selection of the asset on the display; receive from the input interface a new input corresponding to a new contact point on the external surface of the avatar; and re-position the selected asset on the display by making the pivot point and the new contact point coincide.

10. The electronic device of claim 8, wherein: the asset normal comprises a first vector; the surface normal comprises a second vector; and the control circuitry is further operative to orient the asset on the display by making the first vector co-linear with the second vector.
11. The electronic device of claim 9, wherein the control circuitry is further operative to: identify a new surface normal extending perpendicularly from the external surface at the new contact point; and re-orient the selected asset on the display by matching the new surface normal with the asset normal.

12. The electronic device of claim 11, wherein the control circuitry is further operative to: identify an intermediate contact point between the contact point and the new contact point; identify an intermediate surface normal extending perpendicularly from the external surface at the intermediate contact point; re-position the selected asset on the display by making the pivot point and the intermediate contact point coincide; and re-orient the selected asset on the display by matching the intermediate surface normal with the asset normal.

13. A method for displacing a displayed asset on a three-dimensional model, comprising: identifying an initial position and an initial orientation of the displayed asset, wherein the initial position is determined from an initial contact point on an external surface of the model, and wherein the initial orientation is determined from a surface normal at the initial contact point; receiving an instruction to displace the asset on the three-dimensional model; identifying a sequence of new contact points corresponding to the received instruction; identifying a sequence of new surface normals, wherein each new surface normal of the sequence of new surface normals is associated with a respective new contact point of the sequence of new contact points; and displaying the displayed asset at a sequence of new placements on the three-dimensional model, wherein the position of each new placement of the sequence of new placements is determined by a respective new contact point of the sequence of new contact points, and wherein the orientation of each new placement of the sequence of new placements is determined by a respective new surface
normal of the sequence of new surface normals.

14. The method of claim 13, further comprising: identifying a sequence of positions of an input from the received instruction; and defining the sequence of new contact points from the sequence of positions.

15. The method of claim 14, further comprising: defining a sequence of imaginary lines, wherein each imaginary line of the sequence of imaginary lines extends from a respective position of the identified sequence of positions; and defining each new contact point of the sequence of new contact points as an intersection of a respective imaginary line of the sequence of imaginary lines with the external surface of the model.

16. The method of claim 13, further comprising: identifying, at each new contact point of the sequence of new contact points, a plane tangent to the external surface of the model at the new contact point; and defining, for each new contact point, the respective surface normal associated with the new contact point as a perpendicular to the identified plane at the new contact point.

17. The method of claim 13, wherein: the displayed asset comprises a pivot point; and at each new placement of the sequence of new placements, the displayed asset is displayed such that the pivot point coincides with a respective new contact point of the sequence of new contact points.

18. The method of claim 17, wherein: the displayed asset further comprises an asset normal; and at each new placement of the sequence of new placements, the displayed asset is displayed such that the asset normal matches a respective new surface normal of the sequence of new surface normals.

19. The method of claim 18, wherein: the asset normal comprises a first vector; and at each new placement of the sequence of new placements, the displayed asset is displayed such that the first vector of the asset normal is co-linear with a vector of a respective new surface normal of the sequence of new surface normals.

20.
A computer-readable medium for placing an asset on a three-dimensional model, the computer-readable medium comprising computer program logic recorded thereon for: displaying the three-dimensional model; identifying a contact point on a surface of the model at which to place the asset, wherein the asset comprises a pivot point and an asset normal; identifying a surface normal extending perpendicular from the surface of the model at the identified contact point; positioning the pivot point of the asset at the identified contact point; and orienting the asset normal of the asset at the identified surface normal.

Description:

BACKGROUND

Some electronic devices can display three-dimensional models that a user can control as part of an electronic device operation. For example, gaming consoles can display three-dimensional avatars that represent a user, and the user can direct the avatar to perform specific actions in a game. The three-dimensional models can be constructed from the combination of several assets such as a body, a head, eyes, ears, nose, hair, glasses, a hat, or other accessories. The assets can be placed on and incorporated into the model (e.g., placed on and integrated into an external surface of the model), or placed adjacent to the external surface of the model. To enhance the user's experience, the user can personalize a displayed model by selecting and moving specific assets with respect to the remainder of the model.

SUMMARY

Systems, methods, and computer-readable media for positioning and orienting movable assets on a three-dimensional model are provided. An electronic device can display a three-dimensional model (e.g., an avatar) that may be constructed from several assets. For example, fashion accessory assets (e.g., glasses) can be placed on an external surface of a head asset of the model.
Each asset can be placed or disposed on the model in a manner that may ensure that the position and orientation of the asset relative to other portions of the model are consistent when viewed from different angles. When a user moves an asset with respect to the remainder of the model, for example, by dragging the asset, the asset can move in a manner that maintains a consistent asset position and orientation with respect to other portions of the model.

To ensure that the position and orientation of an asset with respect to a model are consistent when viewed from different angles when the placement of the asset is moved to a particular contact point along a surface of the model, each asset can include a pivot point and an asset normal. The pivot point can define a point of the asset, and the pivot point may have a consistent positional relationship with respect to any particular contact point along an external surface of a model on which the asset is to be placed. The asset normal can correspond to a direction with respect to the pivot point of the asset providing an orientation for the asset, and the asset normal may have a consistent orientational relationship with respect to a surface normal at any particular contact point along an external surface of a model on which the asset is to be placed. The asset normal may not necessarily be perpendicular to a particular surface of the asset.

When a user provides an instruction to place or move an asset with respect to a model, a particular contact point on the external surface of the model that corresponds to the user-provided instruction may be identified. A surface normal corresponding to the identified contact point may also be identified, and may include a line that passes through the identified contact point and that is perpendicular to a plane tangent to the external surface of the model at the identified contact point.
To ensure a proper position and orientation with respect to the model, the asset can be placed such that the asset's pivot point coincides with the identified contact point, and such that the asset's asset normal matches the identified surface normal.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects of the invention, its nature, and various features will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings in which like reference characters may refer to like parts, and in which:

FIG. 1 is an illustrative display of a three-dimensional model displayed by an electronic device in accordance with some embodiments of the invention;

FIGS. 2A and 2B are illustrative views of an asset placed at different positions and orientations on a three-dimensional model in accordance with some embodiments of the invention;

FIG. 3 is an illustrative representation for identifying a contact point on a model in accordance with some embodiments of the invention;

FIG. 4 is a schematic view of a model and an asset in accordance with some embodiments of the invention;

FIG. 5 is a schematic view of a model on which an asset is placed in accordance with some embodiments of the invention;

FIGS. 6A-6C are illustrative displays of an asset placed at different positions and orientations on a three-dimensional model in accordance with some embodiments of the invention;

FIG. 7 is a flowchart of an illustrative process for determining a position and orientation of an asset on a three-dimensional model in accordance with some embodiments of the invention;

FIG. 8 is a flowchart of an illustrative process for changing a position and orientation of an asset on a three-dimensional model in accordance with some embodiments of the invention;

FIG. 9 is a flowchart of an illustrative process for orienting an asset on a three-dimensional model in accordance with some embodiments of the invention;

FIG.
10 is a flowchart of an illustrative process for displacing an asset displayed on a three-dimensional model in accordance with some embodiments of the invention; and

FIG. 11 is a schematic view of an illustrative electronic device for displaying a three-dimensional model with an asset in accordance with some embodiments of the invention.

DETAILED DESCRIPTION

Systems, methods, and computer-readable media for placing an asset on a three-dimensional model are provided and described with reference to FIGS. 1-11.

A three-dimensional model may be displayed to a user. The model can be customized by disposing assets on an external surface of the model. A user can select a particular point on the model for disposing an asset by selecting a contact point on an external surface of the model. The contact point can include a single point on the external surface of the model with respect to which the asset is to be placed. A surface normal that defines a line or vector that includes the contact point and that is perpendicular to a plane that is tangent to the external surface of the model at the contact point may then be determined.

Each asset can include a pivot point and an asset normal. The pivot point, which may be pre-defined for an asset, can include a specific point of the asset that is to be placed in a consistent positional relationship with respect to the particular contact point along an external surface of a model at which the asset is to be placed. The asset normal, which may also be pre-defined for an asset, can include a line or vector with respect to the pivot point or other point of the asset, and the asset normal may have a consistent orientational relationship with respect to a surface normal at any particular contact point along an external surface of a model at which the asset is to be placed.
An asset may be placed on a model such that the pivot point of the asset may coincide with the contact point along the surface of the model, and such that the asset normal of the asset may match the surface normal at the contact point. An electronic device can display different content for enjoyment by a user. In some cases, an electronic device can display a three-dimensional model as part of an application operating on the device. FIG. 1 is an illustrative display of a three-dimensional model in accordance with some embodiments of the invention. Display 100, provided by electronic device 190, can include model 110 provided in front of background 102. Model 110 can represent any suitable object including, for example, a person, an animal, a place, or a thing (e.g., an imaginary being). In some cases, model 110 can include an avatar. A user can create model 110 by selecting assets from asset bar 120, and positioning specific assets on the model. The assets can include, for example, a face, mouth, eyes, ears, nose, mustache, beard, hair, eyebrows, glasses, hats, accessories (e.g., jewelry or band-aids), clothing, or other components that can be included on or integrated into a model. The assets can be provided from a source of assets (e.g., a library of assets that may be stored locally on or remotely from electronic device 190). When a user selects an asset type on asset bar 120 such as, for example, hair asset type 122, device 190 can display a menu or listing of one or more different hair options (not shown). The user can select one of the hair options to be applied to the model (e.g., as hair asset 112). A user can select any suitable number of options associated with a single asset to place on a model (e.g., asset size, shape, style, etc.). For example, a user can select different colors for an asset. As shown in FIG. 1, to change a color of hair asset 112, a user can select one of the color options provided by color menu 140 of asset bar 120. 
In some embodiments, a user can customize the model by moving an asset to different placements with respect to the rest of the model. In the example of display 200 of model 210 in FIGS. 2A and 2B, respective displays 200A and 200B of model 210 can include glasses asset 212 in two different placements (e.g., on the bridge of the nose of model 210 of FIG. 2A, and on the forehead of model 210 of FIG. 2B). The user can move asset 212 using any suitable approach including, for example, by dragging asset 212 with respect to model 210, or by using directional instructions (e.g., directional keys of an input interface). As an asset moves, the position and orientation of the asset can remain consistent relative to the remainder of the model. For example, a particular point of the asset (e.g., an asset pivot point) may remain in contact with an external surface of the model (e.g., a particular point of glasses asset 212 may be positioned in contact with the surface of the face of model 210 in FIG. 2A and in contact with the surface of the hair of model 210 in FIG. 2B). In addition, the asset may maintain a consistent orientation relative to an external surface of the model (e.g., glasses asset 212 may be oriented such that the eye pieces of the glasses asset are in a plane that is co-planar with a plane tangent to the model at the contact point of asset 212 with model 210). Different approaches can be used to determine a proper position and orientation of an asset relative to a model, and to maintain the proper position and orientation when the asset is moved. An electronic device can determine a point on a model at which to place an asset using any suitable approach. In some cases, an electronic device can identify an input point at which a user instructs the device to place an asset. FIG. 3 is an illustrative representation for identifying a contact point on a model in accordance with some embodiments of the invention. 
An electronic device can provide model 310 for display in display window 320 (e.g., a display window of the device). A user can provide an input to the device to identify a specific position at which to place an asset. For example, a user can move cursor 322 provided by the device on display 320 to a specific input position 324 on display 320. As another example, a user can provide a direct input (e.g., a touch input) identifying a specific location on display 320 (e.g., by touching display 320 at input position 324). Input position 324 can be identified, for example, using coordinates corresponding to display 320 of the device. Once input position 324 has been identified, the electronic device can identify contact point 332 on external surface 312 of model 310 at which to place an asset. Contact point 332 can correspond to input position 324 in one of many suitable ways. For example, the electronic device can project imaginary line 330 passing through input position 324 and perpendicular to window 320. As another example, the electronic device can determine an arbitrary, expected, or actual eye position of the user, and may define imaginary line 330 passing through the determined eye position and through input position 324. The electronic device can then identify contact point 332 as the point on external surface 312 at which imaginary line 330 first intersects with surface 312. Therefore, the particular location of contact point 332 may depend, for example, on a depth of model 310 relative to window 320. Once the electronic device has identified the contact point on the surface of the model at which to place an asset, the electronic device can establish a surface normal that may be oriented perpendicular to the surface of the model at the identified contact point. FIG. 4 is a schematic view of a model and an asset in accordance with some embodiments of the invention. 
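The picking step described above can be sketched as a ray cast: the imaginary line through the input position is treated as a ray, and the contact point is its first intersection with the model's surface. The sketch below assumes a triangle-mesh representation (the patent does not prescribe one) and uses the standard Moller-Trumbore intersection test; all function names are illustrative.

```python
# Sketch of contact-point picking on a triangle mesh (assumed surface
# representation). The contact point is the nearest intersection of the
# imaginary line with the model's external surface.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test: distance t along the ray to tri, or None."""
    v0, v1, v2 = tri
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(direction, e2)
    a = _dot(e1, h)
    if abs(a) < eps:            # ray parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = _sub(origin, v0)
    u = f * _dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = f * _dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * _dot(e2, q)
    return t if t > eps else None

def contact_point(origin, direction, triangles):
    """First intersection of the imaginary line with the model surface."""
    best = None
    for tri in triangles:
        t = ray_triangle(origin, direction, tri)
        if t is not None and (best is None or t < best):
            best = t
    if best is None:
        return None                      # input missed the model
    return tuple(origin[i] + best * direction[i] for i in range(3))
```

Because only the nearest hit is kept, the contact point naturally depends on the depth of the model relative to the display window, as noted above.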
Model 410 can include external surface 411 defining an outer surface of the model, and on which contact point 412 for an asset can be identified. For example, contact point 412 can be identified based on an identified input position provided by a user (e.g., input position 324, FIG. 3). Using identified contact point 412, an electronic device can identify tangent plane 416 that may include contact point 412 and that may be tangent to external surface 411 at contact point 412 (e.g., if external surface 411 is curved). If external surface 411 is not curved in the vicinity of contact point 412, external surface 411 can serve as plane 416. As shown, tangent plane 416 may also extend within a plane perpendicular to the drawing sheet of FIG. 4. To identify tangent plane 416, the electronic device can identify surface normal 414 that may extend perpendicular to tangent plane 416 and through contact point 412. Surface normal 414 can be quantified as a vector having a starting point and an end point in a coordinate system established by the device. Asset 420, selected by a user to be placed on model 410 at contact point 412, can include features for positioning and orienting the asset. In particular, asset 420 can include pivot point 422 and asset normal 424. Pivot point 422 and/or asset normal 424 can be provided as pre-defined metadata associated with asset 420. Pivot point 422 can correspond to a point of asset 420 that is to be placed in contact with external surface 411 of model 410. Pivot point 422 can correspond to any suitable portion of asset 420 including, for example, a barycenter or center of gravity of the asset. Alternatively, pivot point 422 may be a point distanced from a physical or material portion of asset 420. 
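For a triangulated surface (an assumption; the patent does not fix a representation), one way to realize the surface normal described above is the unit normal of the facet containing the contact point; for smooth, curved surfaces an implementation might instead interpolate per-vertex normals. A minimal sketch:

```python
import math

def surface_normal(tri):
    """Unit normal of the plane through this triangle. For a contact
    point on a flat facet, the facet's own plane serves as the tangent
    plane, and this vector is the surface normal at that point."""
    v0, v1, v2 = tri
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    # Cross product of two edges is perpendicular to the facet.
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)
```

Normalizing to unit length matches the text's treatment of the normal as a vector with a defined start and end point in the device's coordinate system.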
For example, if asset 420 is a ring-shaped angelic halo to be suspended above the head of a model, then pivot point 422 may be removed from the ring-like structure by a distance equal to the distance at which the halo asset is to be suspended away from (e.g., above) the model. The pivot point can include a specific point of the asset that is to be placed in a consistent positional relationship with respect to the particular contact point along an external surface of a model at which the asset is to be placed. For example, in the case of a glasses asset, the pivot point can be a point on the bridge between the eyepieces of the glasses. In the case of an earring asset, the pivot point can be selected as a point on a branch of the earring to be passed through an ear. In addition to including pivot point 422, asset 420 can include asset normal 424 that may define an orientation for displaying asset 420 on model 410. In particular, an artist can define asset normal 424 to indicate a direction or orientation in which asset 420 is to be placed on a model relative to a line that is perpendicular to an external surface of the model and that includes the identified contact point. Asset normal 424 can be defined using any suitable approach including, for example, as a vector between two points in space. For example, asset normal 424 can include a line or vector with respect to pivot point 422, and asset normal 424 may have a consistent orientational relationship with respect to a surface normal at any particular contact point along an external surface of a model on which the asset is to be placed. To place an asset on a model, an electronic device can display the asset such that the asset's pivot point may coincide with the identified contact point on the surface of the model, and such that the asset normal of the asset may match the surface normal. FIG. 5 is a schematic view of a model on which an asset is placed in accordance with some embodiments of the invention. 
Model 510 can include contact point 512 on a model surface 511 and surface normal 514, which may be identified as described with respect to contact points 312 and 412 and surface normal 414. Asset 520 can include pivot point 522 and asset normal 524 for determining the positioning and orientation of asset 520 on model 510. An electronic device can position asset 520 such that pivot point 522 may coincide with or become concurrent with contact point 512. In addition, the electronic device can orient asset 520 such that asset normal 524 may match or may be aligned with surface normal 514. This particular placement can ensure that asset 520 may be positioned and oriented on model 510 in a consistent manner, as may be expected by the artist and by the end user, regardless of the position of contact point 512 with respect to the shape of model surface 511. In some cases, an asset can be positioned and oriented such that some portions of the asset can intersect with the external surface of a model. For example, portions of an asset that extend past an external surface of a model can be hidden from view of the user by the model. FIGS. 6A-6C are illustrative views of respective displays 600A-600C of an asset placed at different contact points on a model in accordance with some embodiments of the invention. Based on these different contact points of asset 612 with respect to model 610, different portions of asset 612 may be visible to a user when the orientation of model 610 with respect to the user is maintained as shown in FIGS. 6A-6C. For example, different portions or amounts of eyeglasses asset 612 may be visible in displays 600A-600C of FIGS. 6A-6C. Using an interface, a user can move an asset displayed on a model by defining a new contact point on the external surface of the model. 
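The placement rule (pivot point onto contact point, asset normal aligned with surface normal) amounts to a rigid transform of the asset. A sketch using Rodrigues' rotation formula follows; it assumes both normals are unit length and not exactly opposite, and all names are illustrative rather than taken from the patent.

```python
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def place_asset(vertices, pivot, asset_normal, contact, surface_normal):
    """Move the asset rigidly so `pivot` lands on `contact` and
    `asset_normal` lines up with `surface_normal`. Assumes unit-length,
    non-antiparallel normals (the antiparallel case would need an
    arbitrary perpendicular rotation axis)."""
    axis = _cross(asset_normal, surface_normal)
    s = _dot(axis, axis) ** 0.5              # sin of the rotation angle
    c = _dot(asset_normal, surface_normal)   # cos of the rotation angle
    if s < 1e-12:
        rotate = lambda v: v                 # normals already aligned
    else:
        k = (axis[0] / s, axis[1] / s, axis[2] / s)  # unit rotation axis
        def rotate(v):
            kv, kd = _cross(k, v), _dot(k, v)
            # Rodrigues: v' = v cos + (k x v) sin + k (k . v)(1 - cos)
            return tuple(v[i] * c + kv[i] * s + k[i] * kd * (1 - c)
                         for i in range(3))
    placed = []
    for v in vertices:
        local = tuple(v[i] - pivot[i] for i in range(3))  # pivot-relative
        r = rotate(local)
        placed.append(tuple(r[i] + contact[i] for i in range(3)))
    return placed
```

Because the transform is rigid, portions of the asset may indeed end up behind the model's surface, where ordinary depth testing would hide them from the user, consistent with the behavior described above.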
For example, a user can select an asset, and provide an input to an electronic device for identifying a new contact point (e.g., the user can select and drag the asset on a display window to identify a new contact point for the asset on the model). In response to defining a new contact point, the electronic device can re-draw the asset on the model such that the pivot point of the asset matches the new contact point. If the electronic device detects a sequence of new contact points (e.g., as the user drags the asset), the electronic device can display the asset in a succession of placements corresponding to the sequence of new contact points. In addition to changing the contact points of the asset on the model, the electronic device can determine how to orient the asset at each new contact point. In some cases, the electronic device can identify a new surface normal corresponding to each new contact point, and may orient the asset such that the asset normal matches the new surface normal. If the electronic device detects a sequence of new contact points and identifies a corresponding sequence of new surface normals, the electronic device can orient the asset at each identified new contact point using the corresponding new surface normal. For example, a user can drag asset 612 from the placement shown in display 600A of FIG. 6A to the placement shown in display 600C of FIG. 6C passing through the placement shown in display 600B of FIG. 6B. FIG. 7 is a flowchart of an illustrative process for placing an asset on a model in accordance with some embodiments of the invention. Process 700 can begin at step 702. At step 704, the electronic device can display a three-dimensional model, such as an avatar. At step 706, the electronic device can identify a contact point and a surface normal that correspond to a detected user input. 
For example, the electronic device can detect an input provided by a user of the device, and can define an imaginary line extending to the three-dimensional model from a position of the detected input. The point of intersection of the line with the model can be defined as the contact point. The electronic device can also identify a vector that is normal to an external surface of the model at the contact point. At step 708, the electronic device can identify a pivot point and an asset normal of an asset selected by a user for placement on the three-dimensional model at the contact point. For example, the electronic device can identify a particular asset selected by a user to place on the three-dimensional model, and can then identify a pivot point and an asset normal associated with the particular asset. For example, a pivot point and an asset normal may be defined for an asset by an artist creating the asset. At step 710, the electronic device can place the asset on the three-dimensional model using the contact point, pivot point, surface normal, and asset normal. In particular, the electronic device can place the asset on the model such that the contact point and the pivot point may coincide with one another, and such that the asset normal may match the surface normal. For example, the pivot point of the asset may be positioned at the identified contact point and the asset normal of the asset may be oriented along the identified surface normal. Process 700 can then end at step 712. FIG. 8 is a flowchart of an illustrative process for changing a placement of an asset on a model in accordance with some embodiments of the invention. Process 800 can begin at step 802. At step 804, an electronic device can receive a selection of an asset displayed on a three-dimensional model. The asset can include a pivot point and an asset normal used to position and orient the asset on the three-dimensional model. 
At step 806, the electronic device can detect an input at a particular input position, for example, as a touch input or as a cursor position on a display. At step 808, the electronic device can identify a new contact point and a new surface normal corresponding to the particular input position. For example, the electronic device can define a line extending from the particular input position towards the three-dimensional model, and can define the new contact point as the intersection of the line with an external surface of the model. In some cases, the electronic device can identify a sequence of contact points when a sequence of inputs is detected (e.g., as a user drags a cursor across a screen). At step 810, the electronic device can change the placement of the asset on the model to correspond to the new contact point and to the new surface normal. For example, the electronic device can change the placement of the asset such that the pivot point of the asset matches the new contact point, and such that the asset normal of the asset matches the new surface normal. If a sequence of inputs is detected, the electronic device can identify a sequence of placements for the asset. Process 800 can end at step 812. FIG. 9 is a flowchart of an illustrative process for placing an asset on a three-dimensional model in accordance with some embodiments of the invention. Process 900 can begin at step 902. At step 904, an electronic device can display a three-dimensional model (e.g., display an avatar). At step 906, an asset to display on a surface of the model is identified. For example, a user can select an asset to display on the model. The asset can include a pivot point and an asset normal. At step 908, the electronic device can identify a contact point on a surface of the model at which to place the asset. In some cases, the contact point can correspond to an input position provided by a user. 
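The drag behavior of steps 806-810 can be sketched as a loop over input samples, where each sample yields a new (contact point, surface normal) pair and a corresponding re-placement of the asset. Here `pick` and `place` stand in for the picking and placement routines described earlier; both names are assumptions, not terms from the patent.

```python
# Sketch of the drag loop: each detected input position produces a new
# contact point and surface normal, and the asset is re-placed at each.

def drag_asset(input_positions, pick, place):
    """Return the sequence of placements for a sequence of inputs.

    pick(pos)  -> (contact_point, surface_normal), or None on a miss.
    place(contact_point, surface_normal) -> a placement of the asset.
    """
    placements = []
    for pos in input_positions:
        hit = pick(pos)
        if hit is None:
            continue            # input missed the model; keep last placement
        contact, normal = hit
        placements.append(place(contact, normal))
    return placements
```

Displaying each placement in turn gives the succession of positions and orientations the text describes as the user drags the asset across the model.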
At step 910, the electronic device can define a surface normal extending perpendicular from the surface of the model at the identified contact point. For example, the electronic device can identify a plane tangent to the surface of the model at the contact point, and can define a vector perpendicular to the plane to serve as the surface normal. At step 912, the electronic device can place the asset on the model, for example, such that the pivot point is concurrent or coincides with the contact point, and such that the asset normal matches the surface normal. This can ensure that the asset is properly positioned and oriented on the model. Process 900 can end at step 914. FIG. 10 is a flowchart of an illustrative process for displacing an asset displayed on a three-dimensional model in accordance with some embodiments of the invention. Process 1000 can begin at step 1002. At step 1004, an electronic device can identify an initial position and an initial orientation of an asset on a three-dimensional model. The initial position can be determined from an initial contact point of the asset on the model (e.g., by the initial position of the asset's pivot point) and the initial orientation can be determined from an initial surface normal extending perpendicular from a surface of the model at the determined initial contact point. At step 1006, the electronic device can receive an instruction to displace the asset on the three-dimensional model. At step 1008, the electronic device can identify a single new contact point or a sequence of new contact points corresponding to the received instruction. For example, the electronic device can identify a sequence of new contact points corresponding to the dragging of the asset to a new placement on the model. At step 1010, the electronic device can identify a sequence of new surface normals associated with respective new contact points of the sequence of new contact points. 
Each new surface normal can include a vector perpendicular to a plane tangent to the surface of the model at an associated new contact point of the sequence of new contact points. At step 1012, the electronic device can display the asset at a sequence of new positions corresponding to the sequence of new contact points. The asset may also be displayed at a sequence of new orientations corresponding to the sequence of new surface normals, each of which may be associated with a respective new contact point of the sequence of new contact points. Process 1000 can then end at step 1014. Any suitable electronic device can be used to display an asset on a three-dimensional model. FIG. 11 is a schematic view of an illustrative electronic device 1100 for displaying a three-dimensional model with an asset in accordance with some embodiments of the invention. Electronic device 1100 may be any portable, mobile, or hand-held electronic device configured to present a three-dimensional model or an asset to a user wherever the user travels. Alternatively, electronic device 1100 may not be portable at all, but may instead be generally stationary. Electronic device 1100 can include, but is not limited to, a music player (e.g., an iPod™ available from Apple Inc. of Cupertino, Calif.), video player, still image player, game player, other media player, music recorder, movie or video camera or recorder, still camera, other media recorder, radio, medical equipment, domestic appliance, transportation vehicle instrument, musical instrument, calculator, cellular telephone (e.g., an iPhone™ available from Apple Inc.), other wireless communication device, personal digital assistant, remote control, pager, computer (e.g., a desktop, laptop, tablet, server, etc.), monitor, television, stereo equipment, set-top box, boom box, modem, router, printer, and combinations thereof. 
In some embodiments, electronic device 1100 may perform a single function (e.g., a device dedicated to presenting visual content) and, in other embodiments, electronic device 1100 may perform multiple functions (e.g., a device that presents visual content, plays music, and receives and transmits telephone calls). Electronic device 1100 may include a processor 1102, memory 1104, power supply 1106, input interface or component 1108, and display 1110. Electronic device 1100 may also include a bus 1112 that may provide one or more wired or wireless communication links or paths for transferring data and/or power to, from, or between various other components of device 1100. In some embodiments, one or more components of electronic device 1100 may be combined or omitted. Moreover, electronic device 1100 may include other components not combined or included in FIG. 11 and/or several instances of one or more of the components shown in FIG. 11. Memory 1104 may include one or more storage mediums, including for example, a hard-drive, flash memory, non-volatile memory, permanent memory such as read-only memory (“ROM”), semi-permanent memory such as random access memory (“RAM”), any other suitable type of storage component, or any combination thereof. Memory 1104 may include cache memory, which may be one or more different types of memory used for temporarily storing data for electronic device application programs. 
Memory 1104 may store media data (e.g., music and image files), software (e.g., a boot loader program, one or more application programs of an operating system for implementing functions on device 1100, etc.), firmware, preference information (e.g., media playback preferences), lifestyle information (e.g., food preferences), exercise information (e.g., information obtained by exercise monitoring equipment), transaction information (e.g., information such as credit card information), wireless connection information (e.g., information that may enable device 1100 to establish a wireless connection), subscription information (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information (e.g., telephone numbers and e-mail addresses), calendar information, any other suitable data, or any combination thereof. Power supply 1106 may provide power to one or more of the components of device 1100. In some embodiments, power supply 1106 can be coupled to a power grid (e.g., when device 1100 is not a portable device, such as a desktop computer). In some embodiments, power supply 1106 can include one or more batteries for providing power (e.g., when device 1100 is a portable device, such as a cellular telephone). As another example, power supply 1106 can be configured to generate power from a natural source (e.g., solar power using solar cells). One or more input components 1108 may be provided to permit a user to interact or interface with device 1100. For example, input component 1108 can take a variety of forms, including, but not limited to, an electronic device pad, dial, click wheel, scroll wheel, touch screen, one or more buttons (e.g., a keyboard), mouse, joy stick, track ball, microphone, camera, proximity sensor, light detector, and combinations thereof. 
Each input component 1108 can be configured to provide one or more dedicated control functions for making selections or issuing commands associated with operating device 1100. Electronic device 1100 may also include one or more output components that may present information (e.g., visual, audible, and/or tactile information) to a user of device 1100. An output component of electronic device 1100 may take various forms, including, but not limited to, audio speakers, headphones, audio line-outs, visual displays, antennas, infrared ports, rumblers, vibrators, or combinations thereof. For example, electronic device 1100 may include display 1110 as an output component. Display 1110 may include any suitable type of display or interface for presenting visual content to a user. In some embodiments, display 1110 may include a display embedded in device 1100 or coupled to device 1100 (e.g., a removable display). Display 1110 may include, for example, a liquid crystal display (“LCD”), a light emitting diode (“LED”) display, an organic light-emitting diode (“OLED”) display, a surface-conduction electron-emitter display (“SED”), a carbon nanotube display, a nanocrystal display, any other suitable type of display, or combination thereof. Alternatively, display 1110 can include a movable display or a projecting system for providing a display of content on a surface remote from electronic device 1100, such as, for example, a video projector, a head-up display, or a three-dimensional (e.g., holographic) display. As another example, display 1110 may include a digital or mechanical viewfinder, such as a viewfinder of the type found in compact digital cameras, reflex cameras, or any other suitable still or video camera. In some embodiments, display 1110 may include display driver circuitry, circuitry for driving display drivers, or both. Display 1110 can be operative to present visual content provided by device 1100 (e.g., an avatar constructed from several assets). 
It should be noted that one or more input components and one or more output components may sometimes be referred to collectively herein as an input/output (“I/O”) interface (e.g., input component 1108 and display 1110 as I/O interface 1111). It should also be noted that input component 1108 and display 1110 may sometimes be a single I/O component, such as a touch screen that may receive input information through a user's touch of a display screen and that may also provide visual information to a user via that same display screen. Electronic device 1100 may also be provided with an enclosure or housing 1101 that may at least partially enclose one or more of the components of device 1100 for protecting them from debris and other degrading forces external to device 1100. In some embodiments, one or more of the components may be provided within its own housing (e.g., input component 1108 may be an independent keyboard or mouse within its own housing that may wirelessly or through a wire communicate with processor 1102, which may be provided within its own housing). Processor 1102 of device 1100 may include any processing or control circuitry operative to control the operations and performance of one or more components of electronic device 1100. For example, processor 1102 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application. In some embodiments, processor 1102 may receive input signals from input component 1108 and/or drive output signals through display 1110. It is to be understood that the steps shown in each one of processes 700-1000 of FIGS. 7-10, respectively, are merely illustrative and that existing steps may be modified or omitted, additional steps may be added, and the order of certain steps may be altered. Moreover, the processes described with respect to FIGS. 
7-10, as well as any other aspects of the invention, may each be implemented in hardware or a combination of hardware and software. Embodiments of the invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium may be any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory (“ROM”), random-access memory (“RAM”), CD-ROMs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code may be stored and executed in a distributed fashion. Although many of the embodiments of the present invention are described herein with respect to personal computing devices, it should be understood that the present invention is not limited to personal computing applications, but is generally applicable to other applications. Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. The above-described embodiments of the invention are presented for purposes of illustration and not of limitation.
README.ethumb

Ethumb 1.7.99

******************************************************************************

 FOR ANY ISSUES PLEASE EMAIL: [email protected]

******************************************************************************

Ethumb - Thumbnail generation library

FEATURES
========

 * create thumbnails with a predefined frame (possibly an edje frame);
 * have an option to create fdo-like thumbnails;
 * have a client/server utility.

API
===

It's possible to set the following properties of thumbnails:

 * size
 * format (jpeg, png, eet...)
 * aspect:
   * have crop?
   * crop alignment?
 * video:
   * video_time
 * document:
   * page
 * frame: edje file, group and swallow part to use when generating thumbnails
 * directory: directory where to save thumbnails
 * category: to be used as DIRECTORY/CATEGORY/md5.format

Path generation should provide the following:

 * If no path to save the thumbnail is specified, the following is used:
   * if CATEGORY, return ~/.thumbnail/CATEGORY/md5.format
   * else if size (128x128 or 256x256), format (png), aspect (keep aspect,
     no crop) and no frame matches, return ~/.thumbnail/{normal,large}/md5.png
   * else return WxH-FORMAT-[framed-]ASPECT

Client server provides the following:

 * multiple client support
 * per-client configuration, avoid exchanging parameters over and over again
 * per-client queue, when a client disconnects (i.e. dies), remove whole queue
 * all clients have same priority, so queue is mixed for processing
 * cancel thumb generation request
 * communication over (for now) dbus and (future) ecore-ipc and unix sockets
 * interface of client library is independent of the communication method
   selected
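The path-generation rules above can be sketched as follows. This is illustrative Python, not Ethumb's actual C API, and the directory label used in the fallback case is an assumption about how WxH-FORMAT-[framed-]ASPECT would be spelled out.

```python
import hashlib
import os

def thumb_path(uri, size=(128, 128), fmt="png", category=None,
               keep_aspect=True, crop=False, framed=False):
    """Sketch of Ethumb-style thumbnail path generation."""
    name = hashlib.md5(uri.encode("utf-8")).hexdigest()
    base = os.path.expanduser("~/.thumbnail")
    # Rule 1: an explicit category wins.
    if category:
        return os.path.join(base, category, name + "." + fmt)
    # Rule 2: fdo-like defaults map to the normal/large directories.
    if (size in ((128, 128), (256, 256)) and fmt == "png"
            and keep_aspect and not crop and not framed):
        sub = "normal" if size == (128, 128) else "large"
        return os.path.join(base, sub, name + ".png")
    # Rule 3: everything else goes under WxH-FORMAT-[framed-]ASPECT.
    aspect = "cropped" if crop else ("keep" if keep_aspect else "free")
    sub = "%dx%d-%s-%s%s" % (size[0], size[1], fmt,
                             "framed-" if framed else "", aspect)
    return os.path.join(base, sub, name + "." + fmt)
```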
------------------------------------------------------------------------------

COMPILING AND INSTALLING:

  ./configure
  make

(do this as root unless you are installing in your user's directories):

  make install
July 21, 2024

Understanding Insecure Direct Object Reference (IDOR) Vulnerabilities

Insecure Direct Object Reference (IDOR) is a security vulnerability that arises when attackers gain unauthorized access to objects, or manipulate them, by exploiting identifiers in a web application's URLs or parameters. This occurs due to inadequate access control checks that fail to verify whether a user has the proper authorization to access specific data.

Consider a scenario where a web application displays transaction details through a URL, like:

https://www.payment-application.com/transaction.php?id=1341

A malicious actor might attempt to tamper with the `id` parameter, substituting values like 1342:

https://www.payment-application.com/transaction.php?id=1342

Depending on the application's implementation, transaction 1342 might belong to another user's account, to which the attacker should not have access. If the developer neglects to enforce proper authorization checks, the attacker can exploit this flaw, resulting in an insecure direct object reference.

Types of IDOR Attacks:

1. URL Tampering: Exploiting IDOR vulnerabilities through URL tampering is a simple method that requires minimal technical expertise. Attackers can easily change parameter values in the web browser's address bar.
2. Body Manipulation: Similar to URL tampering, body manipulation involves modifying values within the document's body, such as radio buttons, checkboxes, or hidden form elements.
3. Cookie or JSON ID Manipulation: Attackers can manipulate cookies or JSON objects, altering values like user or session IDs stored between the client and server, potentially exploiting IDOR vulnerabilities.
4. Path Traversal: Path traversal, a unique form of IDOR, enables attackers to directly access or manipulate files and folders on the server, providing deeper access than other types of IDOR attacks.

Detecting IDOR Vulnerabilities:

1. Look for Integer Values: Examine features and their functionality.
Manipulating integer values, such as order IDs, in URLs may reveal unauthorized access to sensitive information. 2. Updating Account Settings: Manipulating parameters related to account settings, especially during updates, might lead to the unintended editing of another user’s profile. 3. Querying for Information: During processes like checkout, parameter-controlled requests may expose sensitive user information. Even if parameters are not apparent, testing with common identifiers is recommended. Impacts of IDOR Vulnerability: 1. Exposure of Confidential Information: Unauthorized access can lead to exposure of personal information. 2. Authentication Bypass: IDOR can function as an authentication bypass mechanism, granting access to numerous accounts. 3. Alteration of Data: Attackers may manipulate and alter user data, potentially leading to record manipulation. Remediating IDOR Vulnerability: – Avoid Displaying Private Object References: Developers should refrain from displaying private references like keys or file names. – Implement Proper Parameter Validation: Ensure validation of parameters to prevent unauthorized access. – Verify All Referenced Objects: Thoroughly verify all referenced objects to strengthen security. – Secure Token Generation: Generate tokens that are exclusively mapped to individual users and not publicly accessible. – Use Random Identifiers: Employ random identifiers to enhance the difficulty of guessing by potential attackers. – Implement Rigorous User Input Validation: Ensure robust validation of user inputs to thwart potential exploitation of vulnerabilities. In summary, Insecure Direct Object Reference (IDOR) vulnerabilities pose significant threats to web applications, allowing unauthorized access and manipulation of sensitive data. The diverse range of IDOR attacks, from URL tampering to path traversal, underscores the importance of robust security measures. 
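As a rough illustration of the remediation advice above (random identifiers plus an ownership check), here is a minimal Python sketch. The storage and function names are invented for this example, not taken from any real application:

```python
import secrets

# Hypothetical in-memory "database" of transactions, keyed by an
# unguessable random identifier instead of a sequential integer.
transactions = {}

def create_transaction(owner, amount):
    # secrets.token_urlsafe gives a random identifier that is hard
    # to guess, unlike sequential ids such as 1341, 1342, ...
    tx_id = secrets.token_urlsafe(16)
    transactions[tx_id] = {"owner": owner, "amount": amount}
    return tx_id

def get_transaction(tx_id, current_user):
    tx = transactions.get(tx_id)
    # Authorization check: even a valid id is only served to its
    # owner, so tampering with the identifier alone is not enough.
    if tx is None or tx["owner"] != current_user:
        raise PermissionError("not authorized to view this transaction")
    return tx
```

With this in place, a request for another user's transaction fails even if the attacker somehow learns its identifier, because the ownership check runs on every access.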
If you are interested in learning more about this vulnerability, visit the PortSwigger Web Security Academy labs on IDOR at https://portswigger.net/web-security/access-control/idor
Let's say Alice and Bob have a secret key k1. This key is 50 chars long and is cryptographically random. Alice generates a cryptographically random string (e.g. 16 chars long) and hashes this string and k1 with an algorithm like PBKDF2 or scrypt to generate a new key k2. Alice then encrypts the message with k2 using AES-CBC. Alice sends the encrypted message and the random string (in plaintext) to Bob. Bob then generates k2 by hashing k1 and the random string to decrypt the message. This way every message is encrypted with a different key. Is this approach secure?

Comment: "This key is 50 chars long" - keys are binary. 32 bytes would result in 256 bits of key material, which should be plenty. However, you seem to be talking about a passphrase. Your random string should probably be considered a salt. CBC is vulnerable to plaintext and padding oracle attacks, and is by itself not secure for transport security. This is all besides the answer given by mti2935. So no, not secure. (Mar 20 at 12:13)

Answer: This protocol would not provide perfect forward secrecy (PFS). Consider a passive eavesdropper, Eve, between Alice and Bob, who records all of the messages sent between Alice and Bob, for many years. Eventually, Alice experiences a breach, and k1 is disclosed. Eve now has everything she needs to go back and decrypt all of the messages sent between Alice and Bob. With protocols that provide PFS (such as the Signal Protocol and modern versions of TLS), this type of attack is not possible.

Follow-up: What if Alice encrypts the random string with Bob's public key and sends it like this. Would it provide PFS then? – LUMPAAK, Dec 7, 2022 at 21:42

Reply: No, because if Bob's private key is ever compromised, then Eve can go back and decrypt everything that was ever encrypted with Bob's public key and sent to Bob. For PFS, you need ephemeral key exchange. See signal.org/blog/asynchronous-security for some interesting reading on this subject. – mti2935, Dec 7, 2022 at 22:32
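The per-message key derivation described in the question can be sketched with Python's standard library. Only the derivation of k2 from k1 and the random string is shown here (the AES-CBC step is omitted), and, as the answer explains, the scheme still lacks forward secrecy:

```python
import hashlib
import secrets

def derive_message_key(k1: bytes, salt: bytes) -> bytes:
    # k2 = PBKDF2(k1, salt): both sides can recompute it from the
    # shared secret k1 plus the per-message random string (salt).
    return hashlib.pbkdf2_hmac("sha256", k1, salt, 100_000)

# Alice's side: a fresh random string per message.
k1 = secrets.token_bytes(32)      # long-term shared secret
salt = secrets.token_bytes(16)    # sent alongside the ciphertext
k2_alice = derive_message_key(k1, salt)

# Bob's side: recomputes the same k2 from k1 and the received salt.
k2_bob = derive_message_key(k1, salt)
assert k2_alice == k2_bob
```

Note that anyone who later learns k1 and has recorded the salts can rederive every k2, which is exactly the lack of forward secrecy the answer describes.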
Google offered me the possibility to use Google Reminders in my calendar, if I wanted. I said, ok, let's try it. Now Google Tasks has gone and there seems to be no way to get it back. How do I get "Tasks" in my list of calendars? Is there some kind of reset to basic I can do to remove the darn reminders, or at least a method of getting reminders to sync with tasks?

Answer: Click on the drop-down arrow next to Reminders in your calendar list. There should be an option to switch back to Google Tasks.
DrMister - 6 months ago - React, JSX

Question: Why does Flow make me check for an undefined value that is already defined?

In the following code Flow throws an error if I don't check for the existence of response. But the const response definition should guarantee that response is available. Why does Flow not accept omitting the check for the existence of response?

    /* @flow */
    // ... import dependencies

    export function* loadDepartments(): Generator<*, *, *> {
      try {
        const response = yield call(getJson, endpoints.departments);
        if (response && typeof response.data !== 'undefined') {
          yield put(actions.loadingDepartmentsSucceeded(response.data));
        }
      } catch (errors) {
        yield put(actions.loadingDepartmentsFailed(errors));
      }
    }

Answer: Even if you used const, the value of response can be undefined, depending on what your call function yields. A variable defined with const can indeed contain the value undefined. In case response is undefined, the check if (typeof response.data !== 'undefined') would itself raise a TypeError, because you can't read property "data" of undefined.
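The same hazard exists outside Flow. A minimal Python analogue (the Response and get_json names are hypothetical, invented for this illustration) shows why the guard on the possibly-missing value is needed before touching its attributes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    data: dict

def get_json() -> Optional[Response]:
    # May return None, e.g. on a failed request (illustrative stub).
    return None

def load_departments() -> dict:
    response = get_json()
    # Without this guard, response.data would raise AttributeError
    # when get_json() returns None -- the analogue of reading
    # property "data" of undefined in JavaScript.
    if response is not None:
        return response.data
    return {}
```

Just as Flow forces the `response &&` check, a Python type checker run over this sketch would flag `response.data` without the `is not None` guard.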
Consume a large amount of data in a file with Logstash

Hi, I have a Logstash instance that consumes nginx logs and other applications' logs through multiple file inputs in the Logstash configuration. When access to the website increases we get a lot of traffic, Logstash is delayed collecting the information, and we end up with a hole in the dashboard. My Redis queue stays clear all the time. Does anyone have a tip about this?

How many messages is Logstash processing per second? Do you have any time-consuming filters? How's the CPU load?

My index has 109090727 documents, so about 2525 per second. I have many filters with several business rules in the Logstash instance that reads the log files. The problem happens when we have a lot of access in nginx :frowning: Another point: I have only one Logstash instance consuming several logs, not only the nginx log.

Well, it sounds like you might need to profile and optimize your filters a bit. If you show us what you've got we might be able to help out. To increase throughput make sure you saturate your CPUs, e.g. by increasing the number of filter workers with the -w startup option.

We talked it over and we will make some changes in the Logstash log reader and in the Logstash indexer. :smile:
STMicroelectronics Technical C++ Questions Part I

Examrace Placement Series prepares you for the toughest placement exams to top companies.

Note: All the programs are tested under Turbo C++ 3.0, 4.5 and Microsoft VC++ 6.0 compilers. It is assumed that:
- programs run under a Windows environment;
- the underlying machine is an x86 based system;
- the program is compiled using the Turbo C/C++ compiler.
The program output may depend on these assumptions (for example, sizeof(int) == 2 may be assumed).

1. What is the output of this program?

    class Sample
    {
    public:
        int *ptr;
        Sample(int i)
        {
            ptr = new int(i);
        }
        ~Sample()
        {
            delete ptr;
        }
        void PrintVal()
        {
            cout << "The value is " << *ptr;
        }
    };

    void SomeFunc(Sample x)
    {
        cout << "Say i am in someFunc" << endl;
    }

    int main()
    {
        Sample s1 = 10;
        SomeFunc(s1);
        s1.PrintVal();
    }

Answer:
Say i am in someFunc
Null pointer assignment (Run-time error)

Explanation: As the object is passed by value to SomeFunc, the destructor of the object is called when control returns from the function. So when PrintVal is called it meets up with a ptr that has already been freed. The solution is to pass the Sample object by reference to SomeFunc:

    void SomeFunc(Sample &x)
    {
        cout << "Say i am in someFunc" << endl;
    }

because when we pass an object by reference, the object is not destroyed while returning from the function.

2. Which is the parameter that is added to every non-static member function when it is called?

Answer: The 'this' pointer.

3. What is the output of this program?

    class base
    {
    public:
        int bval;
        base() { bval = 0; }
    };

    class deri : public base
    {
    public:
        int dval;
        deri() { dval = 1; }
    };

    void SomeFunc(base *arr, int size)
    {
        for (int i = 0; i < size; i++, arr++)
            cout << arr->bval;
        cout << endl;
    }

    int main()
    {
        base BaseArr[5];
        SomeFunc(BaseArr, 5);
        deri DeriArr[5];
        SomeFunc(DeriArr, 5);
    }

Answer:
00000
01010

Explanation: The function SomeFunc expects two arguments. The first one is a pointer to an array of base class objects and the second one is the size of the array. The first call of SomeFunc calls it with an array of base objects, so it works correctly and prints the bval of all the objects. When SomeFunc is called the second time, the argument passed is a pointer to an array of derived class objects, not an array of base class objects. But that is what the function expects to be sent. So the derived class pointer is promoted to a base class pointer and the address is sent to the function. SomeFunc() knows nothing about this and just treats the pointer as an array of base class objects. So when arr++ is met, the size of the base class object is taken into consideration and the pointer is incremented by sizeof(int) bytes for bval (the deri class objects have bval and dval as members and so are of size >= sizeof(int) + sizeof(int)).

4. What is the output of this program?

    class base
    {
    public:
        void baseFun() { cout << "from base" << endl; }
    };

    class deri : public base
    {
    public:
        void baseFun() { cout << "from derived" << endl; }
    };

    void SomeFunc(base *baseObj)
    {
        baseObj->baseFun();
    }

    int main()
    {
        base baseObject;
        SomeFunc(&baseObject);
        deri deriObject;
        SomeFunc(&deriObject);
    }

Answer:
from base
from base

Explanation: As we have seen in the previous case, SomeFunc expects a pointer to a base class. Since a pointer to a derived class object is passed, it treats the argument only as a base class pointer and the corresponding base function is called.

5. What is the output of this program?

    class base
    {
    public:
        virtual void baseFun() { cout << "from base" << endl; }
    };

    class deri : public base
    {
    public:
        void baseFun() { cout << "from derived" << endl; }
    };

    void SomeFunc(base *baseObj)
    {
        baseObj->baseFun();
    }

    int main()
    {
        base baseObject;
        SomeFunc(&baseObject);
        deri deriObject;
        SomeFunc(&deriObject);
    }

Answer:
from base
from derived

Explanation: Remember that baseFun is a virtual function. That means that it supports run-time polymorphism. So the function corresponding to the derived class object is called.

Answer: Compiler Error: 'ra' reference must be initialized

Explanation: Pointers are different from references. One of the main differences is that pointers can be both initialized and assigned, whereas references can only be initialized. So this code issues an error.

    const int size = 5;

    void print(int *ptr)
    {
        cout << ptr[0];
    }

    void print(int ptr[size])
    {
        cout << ptr[0];
    }

    void main()
    {
        int a[size] = {1, 2, 3, 4, 5};
        int *b = new int(size);
        print(a);
        print(b);
    }

Answer: Compiler Error: Function 'void print(int *)' already has a body

Explanation: Arrays cannot be passed to functions; only pointers (for arrays, base addresses) can be passed. So the arguments int *ptr and int ptr[size] have no difference as function arguments. In other words, both functions have the same signature and so cannot be overloaded.

    class some
    {
    public:
        ~some()
        {
            cout << "some's destructor" << endl;
        }
    };

    void main()
    {
        some s;
        s.~some();
    }

Answer:
some's destructor
some's destructor

Explanation: Destructors can be called explicitly. Here 's.~some()' explicitly calls the destructor of 's'. When main() returns, the destructor of s is called again, hence the result.

    #include <iostream.h>

    class fig2d
    {
        int dim1;
        int dim2;
    public:
        fig2d() { dim1 = 5; dim2 = 6; }
        virtual void operator<<(ostream &rhs);
    };

    void fig2d::operator<<(ostream &rhs)
    {
        rhs << dim1 << " " << dim2 << " ";
    }

    /* class fig3d : public fig2d
    {
        int dim3;
    public:
        fig3d() { dim3 = 7; }
        virtual void operator<<(ostream &rhs);
    };

    void fig3d::operator<<(ostream &rhs)
    {
        fig2d::operator<<(rhs);
        rhs << dim3;
    } */

    void main()
    {
        fig2d obj1;
        // fig3d obj2;
        obj1 << cout;
        // obj2 << cout;
    }

Answer: 5 6

Explanation: In this program, the << operator is overloaded with ostream as argument. This enables 'cout' to be present at the right-hand side. Normally, 'cout' is implemented as a global function, but that doesn't mean 'cout' cannot be overloaded as a member function. Overloading << as a virtual member function becomes handy when the class in which it is overloaded is inherited, and it becomes available to be overridden.
This is as opposed to global friend functions, where friends are not inherited.

    class opOverload
    {
    public:
        bool operator==(opOverload temp);
    };

    bool opOverload::operator==(opOverload temp)
    {
        if (*this == temp)
        {
            cout << "The both are same objects\n";
            return true;
        }
        else
        {
            cout << "The both are different\n";
            return false;
        }
    }

    void main()
    {
        opOverload a1, a2;
        a1 == a2;
    }

Answer: Runtime Error: Stack Overflow

Explanation: Just like normal functions, operator functions can be called recursively. This program just illustrates that point, by calling the operator== function recursively, leading to an infinite loop.

    class complex
    {
        double re;
        double im;
    public:
        complex() : re(1), im(0.5) { }
        bool operator==(complex &rhs);
        operator int() { }
    };

    bool complex::operator==(complex &rhs)
    {
        if ((this->re == rhs.re) && (this->im == rhs.im))
            return true;
        else
            return false;
    }

    int main()
    {
        complex c1;
        cout << c1;
    }

Answer: Garbage value

Explanation: The programmer wishes to print the complex object using the output redirection operator, which he has not defined for his class. But the compiler, instead of giving an error, sees the conversion function and converts the user-defined object to a standard type, printing some garbage value.

    class complex
    {
        double re;
        double im;
    public:
        complex() : re(0), im(0) { }
        complex(double n) { re = n, im = n; }
        complex(int m, int n) { re = m, im = n; }
        void print() { cout << re << im; }
    };

    void main()
    {
        complex c3;
        double i = 5;
        c3 = i;
        c3.print();
    }

Answer: 5, 5

Explanation: Though no operator= function taking complex, double is defined, the double on the rhs is converted into a temporary object using the single-argument constructor taking double, and assigned to the lvalue.

Try It Yourself

1. Determine the output of this C++ codelet:

    class base
    {
    public:
        out() { cout << "base"; }
    };

    class deri
    {
    public:
        out() { cout << "deri"; }
    };

    void main()
    {
        deri dp[3];
        base *bp = (base *)dp;
        for (int i = 0; i < 3; i++)
            (bp++)->out();
    }

2. Justify the use of virtual constructors and destructors in C++.

3. Each C++ object possesses the 4 member functions (which can be declared by the programmer explicitly or by the implementation if they are not available). What are those 4 functions?

4. What is wrong with this class declaration?

    class something
    {
        char *str;
    public:
        something() { st = new char[10]; }
        ~something() { delete str; }
    };

5. Inheritance is also known as a ____ relationship; containership as a ________ relationship.

6. When is it necessary to use a member-wise initialization list (also known as a header initialization list) in C++?

7. Which is the only operator in C++ which can be overloaded but NOT inherited?

8. Is there anything wrong with this C++ class declaration?

    class temp
    {
        int value1;
        mutable int value2;
    public:
        void fun(int val) const
        {
            ((temp *)this)->value1 = 10;
            value2 = 10;
        }
    };

Solve These

1. What is a modifier?

Answer: A modifier, also called a modifying function, is a member function that changes the value of at least one data member. In other words, an operation that modifies the state of an object. Modifiers are also known as mutators.

2. What is an accessor?

Answer: An accessor is a class operation that does not modify the state of an object. Accessor functions need to be declared as const operations.

3. Differentiate between a template class and a class template.

Answer: Template class: A generic definition or a parameterized class not instantiated until the client provides the needed information. It's jargon for plain templates.
Class template: A class template specifies how individual classes can be constructed, much like the way a class specifies how individual objects can be constructed. It's jargon for plain classes.

4. When does a name clash occur?

Answer: A name clash occurs when a name is defined in more than one place. For example, two different class libraries could give two different classes the same name. If you try to use many class libraries at the same time, there is a fair chance that you will be unable to compile or link the program because of name clashes.

5. Define namespace.

Answer: It is a feature in C++ to minimize name collisions in the global name space. The namespace keyword assigns a distinct name to a library, which allows other libraries to use the same identifier names without creating any name collisions. Furthermore, the compiler uses the namespace signature for differentiating the definitions.

6. What is the use of a using declaration?

Answer: A using declaration makes it possible to use a name from a namespace without the scope operator.

7. What is an iterator class?

Answer: A class that is used to traverse through the objects maintained by a container class. There are five categories of iterators: input iterators, output iterators, forward iterators, bidirectional iterators, and random access iterators.

8. What is an iterator?

Answer: An iterator is an entity that gives access to the contents of a container object without violating encapsulation constraints. Access to the contents is granted on a one-at-a-time basis, in order. The order can be storage order (as in lists and queues), some arbitrary order (as in array indices), or according to some ordering relation (as in an ordered binary tree). The iterator is a construct which provides an interface that, when called, yields either the next element in the container or some value denoting the fact that there are no more elements to examine. Iterators hide the details of access to and update of the elements of a container class. The simplest and safest iterators are those that permit read-only access to the contents of a container class. The following code fragment shows how an iterator might appear in code:

    cont_iter := new cont_iterator();
    x := cont_iter.next();
    while x /= none do
        ...
        s(x);
        ...
        x := cont_iter.next();
    end;

In this example, cont_iter is the name of the iterator. It is created on the first line by instantiation of the cont_iterator class, an iterator class defined to iterate over some container class, cont. Successive elements from the container are carried to x. The loop terminates when x is bound to some empty value (here, none). In the middle of the loop there is s(x), an operation on x, the current element from the container. The next element of the container is obtained at the bottom of the loop.

9. List out some of the OODBMS available.

Answer:
- GEMSTONE/OPAL of Gemstone Systems
- ONTOS of Ontos
- Objectivity of Objectivity Inc.
- Versant of Versant Object Technology
- ObjectStore of Object Design
- ARDENT of ARDENT Software
- POET of POET Software

10. List out some of the object-oriented methodologies.

Answer:
- Object Oriented Development (OOD) (Booch 1991, 1994)
- Object Oriented Analysis and Design (OOA/D) (Coad and Yourdon 1991)
- Object Modelling Techniques (OMT) (Rumbaugh 1991)
- Object Oriented Software Engineering (Objectory) (Jacobson 1992)
- Object Oriented Analysis (OOA) (Shlaer and Mellor 1992)
- The Fusion Method (Coleman 1991)

11. What is an incomplete type?

Answer: Incomplete type refers to a pointer for which the implementation of the referenced location is not available, or which points to some location whose value is not available for modification.

Example:

    int *i = 0x400;  // i points to address 400
    *i = 0;          // set the value of the memory location pointed to by i

Incomplete types are otherwise called uninitialized pointers.

12. What is a dangling pointer?

Answer: A dangling pointer arises when you use the address of an object after its lifetime is over. This may occur in situations like returning addresses of automatic variables from a function, or using the address of a memory block after it has been freed.

13. Differentiate between a message and a method.

Message:
- Objects communicate by sending messages to each other.
- A message is sent to invoke a method.

Method:
- Provides the response to a message.
- It is an implementation of an operation.

14. What is an adaptor class or wrapper class?

Answer: A class that has no functionality of its own. Its member functions hide the use of a third-party software component, an object with a non-compatible interface, or a non-object-oriented implementation.

15. What is a null object?

Answer: It is an object of some class whose purpose is to indicate that a real object of that class does not exist. One common use for a null object is as a return value from a member function that is supposed to return an object with some specified properties but cannot find such an object.

16. What is a class invariant?

Answer: A class invariant is a condition that defines all valid states for an object. It is a logical condition to ensure the correct working of a class. Class invariants must hold when an object is created, and they must be preserved under all operations of the class. In particular, all class invariants are both preconditions and post-conditions for all operations or member functions of the class.

17. What do you mean by stack unwinding?

Answer: It is a process during exception handling when the destructor is called for all local objects between the place where the exception was thrown and where it is caught.

18. Define precondition and post-condition for a member function.

Answer: Precondition: A precondition is a condition that must be true on entry to a member function. A class is used correctly if preconditions are never false. An operation is not responsible for doing anything sensible if its precondition fails to hold. For example, the interface invariants of a stack class say nothing about pushing yet another element onto a stack that is already full. We say that isfull() is a precondition of the push operation.

Post-condition: A post-condition is a condition that must be true on exit from a member function if the precondition was valid on entry to that function. A class is implemented correctly if post-conditions are never false. For example, after pushing an element onto the stack, we know that isempty() must necessarily be false. This is a post-condition of the push operation.

19. What are the conditions that have to be met for a condition to be an invariant of the class?

Answer:
- The condition should hold at the end of every constructor.
- The condition should hold at the end of every mutator (non-const) operation.

20. What are proxy objects?

Answer: Objects that stand for other objects are called proxy objects or surrogates.

Example:

    template <class T>
    class Array2D
    {
    public:
        class Array1D
        {
        public:
            T& operator[](int index);
            const T& operator[](int index) const;
            ...
        };
        Array1D operator[](int index);
        const Array1D operator[](int index) const;
        ...
    };

The following then becomes legal:

    Array2D<float> data(10, 20);
    ...
    cout << data[3][6];

Here data[3] yields an Array1D object and the operator[] invocation on that object yields the float in position (3, 6) of the original two-dimensional array. Clients of the Array2D class need not be aware of the presence of the Array1D class. Objects of this latter class stand for one-dimensional array objects that, conceptually, do not exist for clients of Array2D. Such clients program as if they were using real, live, two-dimensional arrays. Each Array1D object stands for a one-dimensional array that is absent from the conceptual model used by the clients of Array2D. In the above example, Array1D is a proxy class. Its instances stand for one-dimensional arrays that, conceptually, do not exist.
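For comparison, the same proxy idea can be sketched in a few lines of Python (the class and attribute names are invented for this example): indexing the outer object returns a row proxy whose own subscript operator completes the access, so clients can write data[3][6] as if a real two-dimensional array existed.

```python
class Array2D:
    """2-D array whose indexing returns a Row proxy, mirroring the
    Array2D/Array1D pair from the C++ example."""

    class Row:
        # Proxy object: stands for one row, which conceptually does
        # not exist as a separate container for the client.
        def __init__(self, cells):
            self._cells = cells

        def __getitem__(self, col):
            return self._cells[col]

        def __setitem__(self, col, value):
            self._cells[col] = value

    def __init__(self, rows, cols, fill=0.0):
        self._data = [[fill] * cols for _ in range(rows)]

    def __getitem__(self, row):
        # data[3] yields a Row proxy; the second subscript then
        # resolves the actual element, just as data[3][6] does in C++.
        return Array2D.Row(self._data[row])

data = Array2D(10, 20)
data[3][6] = 1.5
```

Because the proxy writes through to the shared row list, assignments made via the proxy are visible on subsequent reads.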
998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 /* * lib/bitmap.c * Helper functions for bitmap.h. * * This source code is licensed under the GNU General Public License, * Version 2. See the file COPYING for more details. */ #include <linux/export.h> #include <linux/thread_info.h> #include <linux/ctype.h> #include <linux/errno.h> #include <linux/bitmap.h> #include <linux/bitops.h> #include <linux/bug.h> #include <asm/page.h> #include <asm/uaccess.h> /* * bitmaps provide an array of bits, implemented using an an * array of unsigned longs. The number of valid bits in a * given bitmap does _not_ need to be an exact multiple of * BITS_PER_LONG. * * The possible unused bits in the last, partially used word * of a bitmap are 'don't care'. The implementation makes * no particular effort to keep them zero. It ensures that * their value will not affect the results of any operation. * The bitmap operations that return Boolean (bitmap_empty, * for example) or scalar (bitmap_weight, for example) results * carefully filter out these unused bits from impacting their * results. * * These operations actually hold to a slightly stronger rule: * if you don't input any bitmaps to these ops that have some * unused bits set, then they won't output any set unused bits * in output bitmaps. * * The byte ordering of bitmaps is more natural on little * endian architectures. See the big-endian headers * include/asm-ppc64/bitops.h and include/asm-s390/bitops.h * for the best explanations of this ordering. 
*/ int __bitmap_equal(const unsigned long *bitmap1, const unsigned long *bitmap2, unsigned int bits) { unsigned int k, lim = bits/BITS_PER_LONG; for (k = 0; k < lim; ++k) if (bitmap1[k] != bitmap2[k]) return 0; if (bits % BITS_PER_LONG) if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) return 0; return 1; } EXPORT_SYMBOL(__bitmap_equal); void __bitmap_complement(unsigned long *dst, const unsigned long *src, unsigned int bits) { unsigned int k, lim = bits/BITS_PER_LONG; for (k = 0; k < lim; ++k) dst[k] = ~src[k]; if (bits % BITS_PER_LONG) dst[k] = ~src[k]; } EXPORT_SYMBOL(__bitmap_complement); /** * __bitmap_shift_right - logical right shift of the bits in a bitmap * @dst : destination bitmap * @src : source bitmap * @shift : shift by this many bits * @nbits : bitmap size, in bits * * Shifting right (dividing) means moving bits in the MS -> LS bit * direction. Zeros are fed into the vacated MS positions and the * LS bits shifted off the bottom are lost. */ void __bitmap_shift_right(unsigned long *dst, const unsigned long *src, unsigned shift, unsigned nbits) { unsigned k, lim = BITS_TO_LONGS(nbits); unsigned off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG; unsigned long mask = BITMAP_LAST_WORD_MASK(nbits); for (k = 0; off + k < lim; ++k) { unsigned long upper, lower; /* * If shift is not word aligned, take lower rem bits of * word above and make them the top rem bits of result. 
		 */
		if (!rem || off + k + 1 >= lim)
			upper = 0;
		else {
			upper = src[off + k + 1];
			if (off + k + 1 == lim - 1)
				upper &= mask;
			upper <<= (BITS_PER_LONG - rem);
		}
		lower = src[off + k];
		if (off + k == lim - 1)
			lower &= mask;
		lower >>= rem;
		dst[k] = lower | upper;
	}
	if (off)
		memset(&dst[lim - off], 0, off*sizeof(unsigned long));
}
EXPORT_SYMBOL(__bitmap_shift_right);

/**
 * __bitmap_shift_left - logical left shift of the bits in a bitmap
 * @dst : destination bitmap
 * @src : source bitmap
 * @shift : shift by this many bits
 * @nbits : bitmap size, in bits
 *
 * Shifting left (multiplying) means moving bits in the LS -> MS
 * direction.  Zeros are fed into the vacated LS bit positions
 * and those MS bits shifted off the top are lost.
 */
void __bitmap_shift_left(unsigned long *dst, const unsigned long *src,
			unsigned int shift, unsigned int nbits)
{
	int k;
	unsigned int lim = BITS_TO_LONGS(nbits);
	unsigned int off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG;
	for (k = lim - off - 1; k >= 0; --k) {
		unsigned long upper, lower;

		/*
		 * If shift is not word aligned, take upper rem bits of
		 * word below and make them the bottom rem bits of result.
		 */
		if (rem && k > 0)
			lower = src[k - 1] >> (BITS_PER_LONG - rem);
		else
			lower = 0;
		upper = src[k] << rem;
		dst[k + off] = lower | upper;
	}
	if (off)
		memset(dst, 0, off*sizeof(unsigned long));
}
EXPORT_SYMBOL(__bitmap_shift_left);

int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
				const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k;
	unsigned int lim = bits/BITS_PER_LONG;
	unsigned long result = 0;

	for (k = 0; k < lim; k++)
		result |= (dst[k] = bitmap1[k] & bitmap2[k]);
	if (bits % BITS_PER_LONG)
		result |= (dst[k] = bitmap1[k] & bitmap2[k] &
			   BITMAP_LAST_WORD_MASK(bits));
	return result != 0;
}
EXPORT_SYMBOL(__bitmap_and);

void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
				const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k;
	unsigned int nr = BITS_TO_LONGS(bits);

	for (k = 0; k < nr; k++)
		dst[k] = bitmap1[k] | bitmap2[k];
}
EXPORT_SYMBOL(__bitmap_or);

void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
				const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k;
	unsigned int nr = BITS_TO_LONGS(bits);

	for (k = 0; k < nr; k++)
		dst[k] = bitmap1[k] ^ bitmap2[k];
}
EXPORT_SYMBOL(__bitmap_xor);

int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
				const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k;
	unsigned int lim = bits/BITS_PER_LONG;
	unsigned long result = 0;

	for (k = 0; k < lim; k++)
		result |= (dst[k] = bitmap1[k] & ~bitmap2[k]);
	if (bits % BITS_PER_LONG)
		result |= (dst[k] = bitmap1[k] & ~bitmap2[k] &
			   BITMAP_LAST_WORD_MASK(bits));
	return result != 0;
}
EXPORT_SYMBOL(__bitmap_andnot);

int __bitmap_intersects(const unsigned long *bitmap1,
			const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k, lim = bits/BITS_PER_LONG;
	for (k = 0; k < lim; ++k)
		if (bitmap1[k] & bitmap2[k])
			return 1;

	if (bits % BITS_PER_LONG)
		if ((bitmap1[k] & bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits))
			return 1;
	return 0;
}
EXPORT_SYMBOL(__bitmap_intersects);

int __bitmap_subset(const
 unsigned long *bitmap1,
			const unsigned long *bitmap2, unsigned int bits)
{
	unsigned int k, lim = bits/BITS_PER_LONG;
	for (k = 0; k < lim; ++k)
		if (bitmap1[k] & ~bitmap2[k])
			return 0;

	if (bits % BITS_PER_LONG)
		if ((bitmap1[k] & ~bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits))
			return 0;
	return 1;
}
EXPORT_SYMBOL(__bitmap_subset);

int __bitmap_weight(const unsigned long *bitmap, unsigned int bits)
{
	unsigned int k, lim = bits/BITS_PER_LONG;
	int w = 0;

	for (k = 0; k < lim; k++)
		w += hweight_long(bitmap[k]);

	if (bits % BITS_PER_LONG)
		w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));

	return w;
}
EXPORT_SYMBOL(__bitmap_weight);

void bitmap_set(unsigned long *map, unsigned int start, int len)
{
	unsigned long *p = map + BIT_WORD(start);
	const unsigned int size = start + len;
	int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
	unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);

	while (len - bits_to_set >= 0) {
		*p |= mask_to_set;
		len -= bits_to_set;
		bits_to_set = BITS_PER_LONG;
		mask_to_set = ~0UL;
		p++;
	}
	if (len) {
		mask_to_set &= BITMAP_LAST_WORD_MASK(size);
		*p |= mask_to_set;
	}
}
EXPORT_SYMBOL(bitmap_set);

void bitmap_clear(unsigned long *map, unsigned int start, int len)
{
	unsigned long *p = map + BIT_WORD(start);
	const unsigned int size = start + len;
	int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
	unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);

	while (len - bits_to_clear >= 0) {
		*p &= ~mask_to_clear;
		len -= bits_to_clear;
		bits_to_clear = BITS_PER_LONG;
		mask_to_clear = ~0UL;
		p++;
	}
	if (len) {
		mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
		*p &= ~mask_to_clear;
	}
}
EXPORT_SYMBOL(bitmap_clear);

/**
 * bitmap_find_next_zero_area_off - find a contiguous aligned zero area
 * @map: The address to base the search on
 * @size: The bitmap size in bits
 * @start: The bitnumber to start searching at
 * @nr: The number of zeroed bits we're looking for
 * @align_mask: Alignment mask for zero area
 * @align_offset: Alignment offset for zero area.
 *
 * The @align_mask should be one less than a power of 2; the effect is that
 * the bit offset of all zero areas this function finds plus @align_offset
 * is multiple of that power of 2.
 */
unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
					     unsigned long size,
					     unsigned long start,
					     unsigned int nr,
					     unsigned long align_mask,
					     unsigned long align_offset)
{
	unsigned long index, end, i;
again:
	index = find_next_zero_bit(map, size, start);

	/* Align allocation */
	index = __ALIGN_MASK(index + align_offset, align_mask) - align_offset;

	end = index + nr;
	if (end > size)
		return end;
	i = find_next_bit(map, end, index);
	if (i < end) {
		start = i + 1;
		goto again;
	}
	return index;
}
EXPORT_SYMBOL(bitmap_find_next_zero_area_off);

/*
 * Bitmap printing & parsing functions: first version by Nadia Yvette Chambers,
 * second version by Paul Jackson, third by Joe Korty.
 */

#define CHUNKSZ				32
#define nbits_to_hold_value(val)	fls(val)
#define BASEDEC 10		/* fancier cpuset lists input in decimal */

/**
 * __bitmap_parse - convert an ASCII hex string into a bitmap.
 * @buf: pointer to buffer containing string.
 * @buflen: buffer size in bytes.  If string is smaller than this
 *    then it must be terminated with a \0.
 * @is_user: location of buffer, 0 indicates kernel space
 * @maskp: pointer to bitmap array that will contain result.
 * @nmaskbits: size of bitmap, in bits.
 *
 * Commas group hex digits into chunks.  Each chunk defines exactly 32
 * bits of the resultant bitmask.  No chunk may specify a value larger
 * than 32 bits (%-EOVERFLOW), and if a chunk specifies a smaller value
 * then leading 0-bits are prepended.  %-EINVAL is returned for illegal
 * characters and for grouping errors such as "1,,5", ",44", "," and "".
 * Leading and trailing whitespace accepted, but not embedded whitespace.
 */
int __bitmap_parse(const char *buf, unsigned int buflen,
		int is_user, unsigned long *maskp,
		int nmaskbits)
{
	int c, old_c, totaldigits, ndigits, nchunks, nbits;
	u32 chunk;
	const char __user __force *ubuf = (const char __user __force *)buf;

	bitmap_zero(maskp, nmaskbits);

	nchunks = nbits = totaldigits = c = 0;
	do {
		chunk = ndigits = 0;

		/* Get the next chunk of the bitmap */
		while (buflen) {
			old_c = c;
			if (is_user) {
				if (__get_user(c, ubuf++))
					return -EFAULT;
			} else
				c = *buf++;
			buflen--;
			if (isspace(c))
				continue;

			/*
			 * If the last character was a space and the current
			 * character isn't '\0', we've got embedded whitespace.
			 * This is a no-no, so throw an error.
			 */
			if (totaldigits && c && isspace(old_c))
				return -EINVAL;

			/* A '\0' or a ',' signal the end of the chunk */
			if (c == '\0' || c == ',')
				break;

			if (!isxdigit(c))
				return -EINVAL;

			/*
			 * Make sure there are at least 4 free bits in 'chunk'.
			 * If not, this hexdigit will overflow 'chunk', so
			 * throw an error.
			 */
			if (chunk & ~((1UL << (CHUNKSZ - 4)) - 1))
				return -EOVERFLOW;

			chunk = (chunk << 4) | hex_to_bin(c);
			ndigits++; totaldigits++;
		}
		if (ndigits == 0)
			return -EINVAL;
		if (nchunks == 0 && chunk == 0)
			continue;

		__bitmap_shift_left(maskp, maskp, CHUNKSZ, nmaskbits);
		*maskp |= chunk;
		nchunks++;
		nbits += (nchunks == 1) ? nbits_to_hold_value(chunk) : CHUNKSZ;
		if (nbits > nmaskbits)
			return -EOVERFLOW;
	} while (buflen && c == ',');

	return 0;
}
EXPORT_SYMBOL(__bitmap_parse);

/**
 * bitmap_parse_user - convert an ASCII hex string in a user buffer into a bitmap
 *
 * @ubuf: pointer to user buffer containing string.
 * @ulen: buffer size in bytes.  If string is smaller than this
 *    then it must be terminated with a \0.
 * @maskp: pointer to bitmap array that will contain result.
 * @nmaskbits: size of bitmap, in bits.
 *
 * Wrapper for __bitmap_parse(), providing it with user buffer.
 *
 * We cannot have this as an inline function in bitmap.h because it needs
 * linux/uaccess.h to get the access_ok() declaration and this causes
 * cyclic dependencies.
 */
int bitmap_parse_user(const char __user *ubuf,
			unsigned int ulen, unsigned long *maskp,
			int nmaskbits)
{
	if (!access_ok(VERIFY_READ, ubuf, ulen))
		return -EFAULT;
	return __bitmap_parse((const char __force *)ubuf,
				ulen, 1, maskp, nmaskbits);
}
EXPORT_SYMBOL(bitmap_parse_user);

/**
 * bitmap_print_to_pagebuf - convert bitmap to list or hex format ASCII string
 * @list: indicates whether the bitmap must be list
 * @buf: page aligned buffer into which string is placed
 * @maskp: pointer to bitmap to convert
 * @nmaskbits: size of bitmap, in bits
 *
 * Output format is a comma-separated list of decimal numbers and
 * ranges if list is specified or hex digits grouped into comma-separated
 * sets of 8 digits/set.  Returns the number of characters written to buf.
 *
 * It is assumed that @buf is a pointer into a PAGE_SIZE area and that
 * sufficient storage remains at @buf to accommodate the
 * bitmap_print_to_pagebuf() output.
 */
int bitmap_print_to_pagebuf(bool list, char *buf, const unsigned long *maskp,
			    int nmaskbits)
{
	ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf;
	int n = 0;

	if (len > 1)
		n = list ? scnprintf(buf, len, "%*pbl\n", nmaskbits, maskp) :
			   scnprintf(buf, len, "%*pb\n", nmaskbits, maskp);
	return n;
}
EXPORT_SYMBOL(bitmap_print_to_pagebuf);

/**
 * __bitmap_parselist - convert list format ASCII string to bitmap
 * @buf: read nul-terminated user string from this buffer
 * @buflen: buffer size in bytes.  If string is smaller than this
 *    then it must be terminated with a \0.
 * @is_user: location of buffer, 0 indicates kernel space
 * @maskp: write resulting mask here
 * @nmaskbits: number of bits in mask to be written
 *
 * Input format is a comma-separated list of decimal numbers and
 * ranges.
  Consecutively set bits are shown as two hyphen-separated
 * decimal numbers, the smallest and largest bit numbers set in
 * the range.
 *
 * Returns 0 on success, -errno on invalid input strings.
 * Error values:
 *    %-EINVAL: second number in range smaller than first
 *    %-EINVAL: invalid character in string
 *    %-ERANGE: bit number specified too large for mask
 */
static int __bitmap_parselist(const char *buf, unsigned int buflen,
		int is_user, unsigned long *maskp,
		int nmaskbits)
{
	unsigned a, b;
	int c, old_c, totaldigits;
	const char __user __force *ubuf = (const char __user __force *)buf;
	int at_start, in_range;

	totaldigits = c = 0;
	bitmap_zero(maskp, nmaskbits);
	do {
		at_start = 1;
		in_range = 0;
		a = b = 0;

		/* Get the next cpu# or a range of cpu#'s */
		while (buflen) {
			old_c = c;
			if (is_user) {
				if (__get_user(c, ubuf++))
					return -EFAULT;
			} else
				c = *buf++;
			buflen--;
			if (isspace(c))
				continue;

			/*
			 * If the last character was a space and the current
			 * character isn't '\0', we've got embedded whitespace.
			 * This is a no-no, so throw an error.
			 */
			if (totaldigits && c && isspace(old_c))
				return -EINVAL;

			/* A '\0' or a ',' signal the end of a cpu# or range */
			if (c == '\0' || c == ',')
				break;

			if (c == '-') {
				if (at_start || in_range)
					return -EINVAL;
				b = 0;
				in_range = 1;
				continue;
			}

			if (!isdigit(c))
				return -EINVAL;

			b = b * 10 + (c - '0');
			if (!in_range)
				a = b;
			at_start = 0;
			totaldigits++;
		}
		if (!(a <= b))
			return -EINVAL;
		if (b >= nmaskbits)
			return -ERANGE;
		if (!at_start) {
			while (a <= b) {
				set_bit(a, maskp);
				a++;
			}
		}
	} while (buflen && c == ',');
	return 0;
}

int bitmap_parselist(const char *bp, unsigned long *maskp, int nmaskbits)
{
	char *nl = strchrnul(bp, '\n');
	int len = nl - bp;

	return __bitmap_parselist(bp, len, 0, maskp, nmaskbits);
}
EXPORT_SYMBOL(bitmap_parselist);

/**
 * bitmap_parselist_user()
 *
 * @ubuf: pointer to user buffer containing string.
 * @ulen: buffer size in bytes.  If string is smaller than this
 *    then it must be terminated with a \0.
 * @maskp: pointer to bitmap array that will contain result.
 * @nmaskbits: size of bitmap, in bits.
 *
 * Wrapper for bitmap_parselist(), providing it with user buffer.
 *
 * We cannot have this as an inline function in bitmap.h because it needs
 * linux/uaccess.h to get the access_ok() declaration and this causes
 * cyclic dependencies.
 */
int bitmap_parselist_user(const char __user *ubuf,
			unsigned int ulen, unsigned long *maskp,
			int nmaskbits)
{
	if (!access_ok(VERIFY_READ, ubuf, ulen))
		return -EFAULT;
	return __bitmap_parselist((const char __force *)ubuf,
					ulen, 1, maskp, nmaskbits);
}
EXPORT_SYMBOL(bitmap_parselist_user);

/**
 * bitmap_pos_to_ord - find ordinal of set bit at given position in bitmap
 *	@buf: pointer to a bitmap
 *	@pos: a bit position in @buf (0 <= @pos < @nbits)
 *	@nbits: number of valid bit positions in @buf
 *
 * Map the bit at position @pos in @buf (of length @nbits) to the
 * ordinal of which set bit it is.  If it is not set or if @pos
 * is not a valid bit position, map to -1.
 *
 * If for example, just bits 4 through 7 are set in @buf, then @pos
 * values 4 through 7 will get mapped to 0 through 3, respectively,
 * and other @pos values will get mapped to -1.  When @pos value 7
 * gets mapped to (returns) @ord value 3 in this example, that means
 * that bit 7 is the 3rd (starting with 0th) set bit in @buf.
 *
 * The bit positions 0 through @bits are valid positions in @buf.
 */
static int bitmap_pos_to_ord(const unsigned long *buf, unsigned int pos, unsigned int nbits)
{
	if (pos >= nbits || !test_bit(pos, buf))
		return -1;

	return __bitmap_weight(buf, pos);
}

/**
 * bitmap_ord_to_pos - find position of n-th set bit in bitmap
 *	@buf: pointer to bitmap
 *	@ord: ordinal bit position (n-th set bit, n >= 0)
 *	@nbits: number of valid bit positions in @buf
 *
 * Map the ordinal offset of bit @ord in @buf to its position in @buf.
 * Value of @ord should be in range 0 <= @ord < weight(buf). If @ord
 * >= weight(buf), returns @nbits.
 *
 * If for example, just bits 4 through 7 are set in @buf, then @ord
 * values 0 through 3 will get mapped to 4 through 7, respectively,
 * and all other @ord values returns @nbits.  When @ord value 3
 * gets mapped to (returns) @pos value 7 in this example, that means
 * that the 3rd set bit (starting with 0th) is at position 7 in @buf.
 *
 * The bit positions 0 through @nbits-1 are valid positions in @buf.
 */
unsigned int bitmap_ord_to_pos(const unsigned long *buf, unsigned int ord, unsigned int nbits)
{
	unsigned int pos;

	for (pos = find_first_bit(buf, nbits);
	     pos < nbits && ord;
	     pos = find_next_bit(buf, nbits, pos + 1))
		ord--;

	return pos;
}

/**
 * bitmap_remap - Apply map defined by a pair of bitmaps to another bitmap
 *	@dst: remapped result
 *	@src: subset to be remapped
 *	@old: defines domain of map
 *	@new: defines range of map
 *	@nbits: number of bits in each of these bitmaps
 *
 * Let @old and @new define a mapping of bit positions, such that
 * whatever position is held by the n-th set bit in @old is mapped
 * to the n-th set bit in @new.  In the more general case, allowing
 * for the possibility that the weight 'w' of @new is less than the
 * weight of @old, map the position of the n-th set bit in @old to
 * the position of the m-th set bit in @new, where m == n % w.
 *
 * If either of the @old and @new bitmaps are empty, or if @src and
 * @dst point to the same location, then this routine copies @src
 * to @dst.
 *
 * The positions of unset bits in @old are mapped to themselves
 * (the identity map).
 *
 * Apply the above specified mapping to @src, placing the result in
 * @dst, clearing any bits previously set in @dst.
 *
 * For example, lets say that @old has bits 4 through 7 set, and
 * @new has bits 12 through 15 set.  This defines the mapping of bit
 * position 4 to 12, 5 to 13, 6 to 14 and 7 to 15, and of all other
 * bit positions unchanged.  So if say @src comes into this routine
 * with bits 1, 5 and 7 set, then @dst should leave with bits 1,
 * 13 and 15 set.
 */
void bitmap_remap(unsigned long *dst, const unsigned long *src,
		const unsigned long *old, const unsigned long *new,
		unsigned int nbits)
{
	unsigned int oldbit, w;

	if (dst == src)		/* following doesn't handle inplace remaps */
		return;
	bitmap_zero(dst, nbits);

	w = bitmap_weight(new, nbits);
	for_each_set_bit(oldbit, src, nbits) {
		int n = bitmap_pos_to_ord(old, oldbit, nbits);

		if (n < 0 || w == 0)
			set_bit(oldbit, dst);	/* identity map */
		else
			set_bit(bitmap_ord_to_pos(new, n % w, nbits), dst);
	}
}
EXPORT_SYMBOL(bitmap_remap);

/**
 * bitmap_bitremap - Apply map defined by a pair of bitmaps to a single bit
 *	@oldbit: bit position to be mapped
 *	@old: defines domain of map
 *	@new: defines range of map
 *	@bits: number of bits in each of these bitmaps
 *
 * Let @old and @new define a mapping of bit positions, such that
 * whatever position is held by the n-th set bit in @old is mapped
 * to the n-th set bit in @new.  In the more general case, allowing
 * for the possibility that the weight 'w' of @new is less than the
 * weight of @old, map the position of the n-th set bit in @old to
 * the position of the m-th set bit in @new, where m == n % w.
 *
 * The positions of unset bits in @old are mapped to themselves
 * (the identity map).
 *
 * Apply the above specified mapping to bit position @oldbit, returning
 * the new bit position.
 *
 * For example, lets say that @old has bits 4 through 7 set, and
 * @new has bits 12 through 15 set.  This defines the mapping of bit
 * position 4 to 12, 5 to 13, 6 to 14 and 7 to 15, and of all other
 * bit positions unchanged.  So if say @oldbit is 5, then this routine
 * returns 13.
 */
int bitmap_bitremap(int oldbit, const unsigned long *old,
				const unsigned long *new, int bits)
{
	int w = bitmap_weight(new, bits);
	int n = bitmap_pos_to_ord(old, oldbit, bits);
	if (n < 0 || w == 0)
		return oldbit;
	else
		return bitmap_ord_to_pos(new, n % w, bits);
}
EXPORT_SYMBOL(bitmap_bitremap);

/**
 * bitmap_onto - translate one bitmap relative to another
 *	@dst: resulting translated bitmap
 *	@orig: original untranslated bitmap
 *	@relmap: bitmap relative to which translated
 *	@bits: number of bits in each of these bitmaps
 *
 * Set the n-th bit of @dst iff there exists some m such that the
 * n-th bit of @relmap is set, the m-th bit of @orig is set, and
 * the n-th bit of @relmap is also the m-th _set_ bit of @relmap.
 * (If you understood the previous sentence the first time you
 * read it, you're overqualified for your current job.)
 *
 * In other words, @orig is mapped onto (surjectively) @dst,
 * using the map { <n, m> | the n-th bit of @relmap is the
 * m-th set bit of @relmap }.
 *
 * Any set bits in @orig above bit number W, where W is the
 * weight of (number of set bits in) @relmap are mapped nowhere.
 * In particular, if for all bits m set in @orig, m >= W, then
 * @dst will end up empty.  In situations where the possibility
 * of such an empty result is not desired, one way to avoid it is
 * to use the bitmap_fold() operator, below, to first fold the
 * @orig bitmap over itself so that all its set bits x are in the
 * range 0 <= x < W.  The bitmap_fold() operator does this by
 * setting the bit (m % W) in @dst, for each bit (m) set in @orig.
 *
 * Example [1] for bitmap_onto():
 *  Let's say @relmap has bits 30-39 set, and @orig has bits
 *  1, 3, 5, 7, 9 and 11 set.  Then on return from this routine,
 *  @dst will have bits 31, 33, 35, 37 and 39 set.
 *
 *  When bit 0 is set in @orig, it means turn on the bit in
 *  @dst corresponding to whatever is the first bit (if any)
 *  that is turned on in @relmap.  Since bit 0 was off in the
 *  above example, we leave off that bit (bit 30) in @dst.
 *
 *  When bit 1 is set in @orig (as in the above example), it
 *  means turn on the bit in @dst corresponding to whatever
 *  is the second bit that is turned on in @relmap.  The second
 *  bit in @relmap that was turned on in the above example was
 *  bit 31, so we turned on bit 31 in @dst.
 *
 *  Similarly, we turned on bits 33, 35, 37 and 39 in @dst,
 *  because they were the 4th, 6th, 8th and 10th set bits
 *  set in @relmap, and the 4th, 6th, 8th and 10th bits of
 *  @orig (i.e. bits 3, 5, 7 and 9) were also set.
 *
 *  When bit 11 is set in @orig, it means turn on the bit in
 *  @dst corresponding to whatever is the twelfth bit that is
 *  turned on in @relmap.  In the above example, there were
 *  only ten bits turned on in @relmap (30..39), so that bit
 *  11 being set in @orig had no effect on @dst.
 *
 * Example [2] for bitmap_fold() + bitmap_onto():
 *  Let's say @relmap has these ten bits set:
 *		40 41 42 43 45 48 53 61 74 95
 *  (for the curious, that's 40 plus the first ten terms of the
 *  Fibonacci sequence.)
 *
 *  Further lets say we use the following code, invoking
 *  bitmap_fold() then bitmap_onto, as suggested above to
 *  avoid the possibility of an empty @dst result:
 *
 *	unsigned long *tmp;	// a temporary bitmap's bits
 *
 *	bitmap_fold(tmp, orig, bitmap_weight(relmap, bits), bits);
 *	bitmap_onto(dst, tmp, relmap, bits);
 *
 *  Then this table shows what various values of @dst would be, for
 *  various @orig's.  I list the zero-based positions of each set bit.
 *  The tmp column shows the intermediate result, as computed by
 *  using bitmap_fold() to fold the @orig bitmap modulo ten
 *  (the weight of @relmap).
 *
 *      @orig           tmp            @dst
 *      0                0             40
 *      1                1             41
 *      9                9             95
 *      10               0             40 (*)
 *      1 3 5 7          1 3 5 7       41 43 48 61
 *      0 1 2 3 4        0 1 2 3 4     40 41 42 43 45
 *      0 9 18 27        0 9 8 7       40 61 74 95
 *      0 10 20 30       0             40
 *      0 11 22 33       0 1 2 3       40 41 42 43
 *      0 12 24 36       0 2 4 6       40 42 45 53
 *      78 102 211       1 2 8         41 42 74 (*)
 *
 * (*) For these marked lines, if we hadn't first done bitmap_fold()
 *     into tmp, then the @dst result would have been empty.
 *
 * If either of @orig or @relmap is empty (no set bits), then @dst
 * will be returned empty.
 *
 * If (as explained above) the only set bits in @orig are in positions
 * m where m >= W, (where W is the weight of @relmap) then @dst will
 * once again be returned empty.
 *
 * All bits in @dst not set by the above rule are cleared.
 */
void bitmap_onto(unsigned long *dst, const unsigned long *orig,
			const unsigned long *relmap, unsigned int bits)
{
	unsigned int n, m;	/* same meaning as in above comment */

	if (dst == orig)	/* following doesn't handle inplace mappings */
		return;
	bitmap_zero(dst, bits);

	/*
	 * The following code is a more efficient, but less
	 * obvious, equivalent to the loop:
	 *	for (m = 0; m < bitmap_weight(relmap, bits); m++) {
	 *		n = bitmap_ord_to_pos(orig, m, bits);
	 *		if (test_bit(m, orig))
	 *			set_bit(n, dst);
	 *	}
	 */

	m = 0;
	for_each_set_bit(n, relmap, bits) {
		/* m == bitmap_pos_to_ord(relmap, n, bits) */
		if (test_bit(m, orig))
			set_bit(n, dst);
		m++;
	}
}
EXPORT_SYMBOL(bitmap_onto);

/**
 * bitmap_fold - fold larger bitmap into smaller, modulo specified size
 *	@dst: resulting smaller bitmap
 *	@orig: original larger bitmap
 *	@sz: specified size
 *	@nbits: number of bits in each of these bitmaps
 *
 * For each bit oldbit in @orig, set bit oldbit mod @sz in @dst.
 * Clear all other bits in @dst.  See further the comment and
 * Example [2] for bitmap_onto() for why and how to use this.
 */
void bitmap_fold(unsigned long *dst, const unsigned long *orig,
			unsigned int sz, unsigned int nbits)
{
	unsigned int oldbit;

	if (dst == orig)	/* following doesn't handle inplace mappings */
		return;
	bitmap_zero(dst, nbits);

	for_each_set_bit(oldbit, orig, nbits)
		set_bit(oldbit % sz, dst);
}
EXPORT_SYMBOL(bitmap_fold);

/*
 * Common code for bitmap_*_region() routines.
 *	bitmap: array of unsigned longs corresponding to the bitmap
 *	pos: the beginning of the region
 *	order: region size (log base 2 of number of bits)
 *	reg_op: operation(s) to perform on that region of bitmap
 *
 * Can set, verify and/or release a region of bits in a bitmap,
 * depending on which combination of REG_OP_* flag bits is set.
 *
 * A region of a bitmap is a sequence of bits in the bitmap, of
 * some size '1 << order' (a power of two), aligned to that same
 * '1 << order' power of two.
 *
 * Returns 1 if REG_OP_ISFREE succeeds (region is all zero bits).
 * Returns 0 in all other cases and reg_ops.
 */

enum {
	REG_OP_ISFREE,		/* true if region is all zero bits */
	REG_OP_ALLOC,		/* set all bits in region */
	REG_OP_RELEASE,		/* clear all bits in region */
};

static int __reg_op(unsigned long *bitmap, unsigned int pos, int order, int reg_op)
{
	int nbits_reg;		/* number of bits in region */
	int index;		/* index first long of region in bitmap */
	int offset;		/* bit offset region in bitmap[index] */
	int nlongs_reg;		/* num longs spanned by region in bitmap */
	int nbitsinlong;	/* num bits of region in each spanned long */
	unsigned long mask;	/* bitmask for one long of region */
	int i;			/* scans bitmap by longs */
	int ret = 0;		/* return value */

	/*
	 * Either nlongs_reg == 1 (for small orders that fit in one long)
	 * or (offset == 0 && mask == ~0UL) (for larger multiword orders.)
	 */
	nbits_reg = 1 << order;
	index = pos / BITS_PER_LONG;
	offset = pos - (index * BITS_PER_LONG);
	nlongs_reg = BITS_TO_LONGS(nbits_reg);
	nbitsinlong = min(nbits_reg, BITS_PER_LONG);

	/*
	 * Can't do "mask = (1UL << nbitsinlong) - 1", as that
	 * overflows if nbitsinlong == BITS_PER_LONG.
	 */
	mask = (1UL << (nbitsinlong - 1));
	mask += mask - 1;
	mask <<= offset;

	switch (reg_op) {
	case REG_OP_ISFREE:
		for (i = 0; i < nlongs_reg; i++) {
			if (bitmap[index + i] & mask)
				goto done;
		}
		ret = 1;	/* all bits in region free (zero) */
		break;

	case REG_OP_ALLOC:
		for (i = 0; i < nlongs_reg; i++)
			bitmap[index + i] |= mask;
		break;

	case REG_OP_RELEASE:
		for (i = 0; i < nlongs_reg; i++)
			bitmap[index + i] &= ~mask;
		break;
	}
done:
	return ret;
}

/**
 * bitmap_find_free_region - find a contiguous aligned mem region
 *	@bitmap: array of unsigned longs corresponding to the bitmap
 *	@bits: number of bits in the bitmap
 *	@order: region size (log base 2 of number of bits) to find
 *
 * Find a region of free (zero) bits in a @bitmap of @bits bits and
 * allocate them (set them to one).  Only consider regions of length
 * a power (@order) of two, aligned to that power of two, which
 * makes the search algorithm much faster.
 *
 * Return the bit offset in bitmap of the allocated region,
 * or -errno on failure.
 */
int bitmap_find_free_region(unsigned long *bitmap, unsigned int bits, int order)
{
	unsigned int pos, end;		/* scans bitmap by regions of size order */

	for (pos = 0 ; (end = pos + (1U << order)) <= bits; pos = end) {
		if (!__reg_op(bitmap, pos, order, REG_OP_ISFREE))
			continue;
		__reg_op(bitmap, pos, order, REG_OP_ALLOC);
		return pos;
	}
	return -ENOMEM;
}
EXPORT_SYMBOL(bitmap_find_free_region);

/**
 * bitmap_release_region - release allocated bitmap region
 *	@bitmap: array of unsigned longs corresponding to the bitmap
 *	@pos: beginning of bit region to release
 *	@order: region size (log base 2 of number of bits) to release
 *
 * This is the complement to __bitmap_find_free_region() and releases
 * the found region (by clearing it in the bitmap).
 *
 * No return value.
 */
void bitmap_release_region(unsigned long *bitmap, unsigned int pos, int order)
{
	__reg_op(bitmap, pos, order, REG_OP_RELEASE);
}
EXPORT_SYMBOL(bitmap_release_region);

/**
 * bitmap_allocate_region - allocate bitmap region
 *	@bitmap: array of unsigned longs corresponding to the bitmap
 *	@pos: beginning of bit region to allocate
 *	@order: region size (log base 2 of number of bits) to allocate
 *
 * Allocate (set bits in) a specified region of a bitmap.
 *
 * Return 0 on success, or %-EBUSY if specified region wasn't
 * free (not all bits were zero).
 */
int bitmap_allocate_region(unsigned long *bitmap, unsigned int pos, int order)
{
	if (!__reg_op(bitmap, pos, order, REG_OP_ISFREE))
		return -EBUSY;
	return __reg_op(bitmap, pos, order, REG_OP_ALLOC);
}
EXPORT_SYMBOL(bitmap_allocate_region);

/**
 * bitmap_copy_le - copy a bitmap, putting the bits into little-endian order.
 * @dst: destination buffer
 * @src: bitmap to copy
 * @nbits: number of bits in the bitmap
 *
 * Require nbits % BITS_PER_LONG == 0.
 */
#ifdef __BIG_ENDIAN
void bitmap_copy_le(unsigned long *dst, const unsigned long *src, unsigned int nbits)
{
	unsigned int i;

	for (i = 0; i < nbits/BITS_PER_LONG; i++) {
		if (BITS_PER_LONG == 64)
			dst[i] = cpu_to_le64(src[i]);
		else
			dst[i] = cpu_to_le32(src[i]);
	}
}
EXPORT_SYMBOL(bitmap_copy_le);
#endif
#bonita

How to set the initial value of an input form in the UI Designer

Hi, I just want to ask how to show a specific value in an input field. I want to show it once the task has been returned to the user, so they can see the initial value and then decide whether or not to keep it. I used the JavaScript method but it is not working. I don't know if my JavaScript below is correct, since I don't know the name or id of the input box in Bonita. I only use the value I assigned as the name of the field and then set the value. Here is my JavaScript below:

How will I force the user to comment in Bonita

Hi, I want to force the user to enter a comment in the comment tab when they reject a task. The comment tab I am talking about is the default comment tab shown in Bonita when viewing the task, not a custom comment input. Please help. Thanks

How to fetch data in the UI Designer and update the records

Hi, I have a scenario where I want to update records, once created, when they are rejected by the approver. First I create and fill out the task; when I submit, it goes to the manager. When the manager logs in and rejects the task, it goes back to the initiator, and the initiator updates the required fields. I want to fetch the previous data and allow the user to update his/her records, then submit to the manager again.

How to assign a task to a specific group so all users in the group can view it?

Hi, how can I assign a task to a group so that all its users can view it, and whoever approves the task is recorded? The purpose of this is for the task not to stay pending for a long time: any user can approve a given task so it can be released right away. Thanks

How to make an attachment not required?

Hi, I want to make the attachment not required. I use the file upload and download process example (http://community.bonitasoft.com/project/file-upload-and-download-process...). I cannot submit the form when it is empty. How can I make it optional so the form can be submitted whether the attachment is empty or present? If this cannot be done, is there a workaround for attachments? Note: I use the multiple attachment option, since our users attach many documents.

How to convert code and id in Bonita forms?

Hi, I need to convert an id and code to a name. I save the id and code in my main table instead of the name, but when I display the content of the table it shows the id and code. I have tables that hold the name equivalents of the code and id. How can I convert them in my UI Designer? Please help. Thanks

How to include a Groovy script in a .bar file?

How do I include a Groovy script in a .bar file?

When does HiDPI support come for Bonita BPM Studio?

It's horrible to work with Bonita BPM Studio on a HiDPI display because the underlying Eclipse does not support it. I work in a team with normal-DPI displays on a shared repository, so we always have to adjust the size of the elements or live with overlapping elements and text. 4k_prozess.png

Bonita Error: be_TRT_2 Exception

We are using Bonita Open Solution version 5.6 and we are getting the following exception:

WARNING: Activity will be put in the state FAILED due to: org.ow2.bonita.facade.exception.BonitaWrapperException: org.ow2.bonita.facade.exception.RoleMapperInvocationException: Bonita Error: be_TRT_2 Exception caught while executing role mapper: org.ow2.bonita.facade.def.InternalConnectorDefinition@556eafa6

Can anyone offer some help on how I could trace or solve this problem?

Testing Notifications
Use CASE statements in Visual SQL CASE statements use conditional logic to alter query results or to perform calculations on your query results. CASE is a flexible SQL and SQLite expression that can be used anywhere a query is set in Chartio, and it can be combined with other functions or formulas. There's a lot you can do with CASE statements: you can incorporate various columns into them or use them as part of other formulas. Take a look at SQL and SQLite documentation online for additional examples using CASE statements, or sign up for one of our webinars on using SQLite in the Data Pipeline. Below are some examples where you could use CASE statements in Chartio: In the Pipeline To alter values or strings You can add a CASE statement in your Visual Mode charts by using the Edit Column Action or Calculated Column Action with a Custom formula in the Pipeline. Let's say we want to alter our query results by transforming "FB" to "Facebook". We'd enter the following in the Formula field:

case
  when "Campaign Id" = 'FB' then 'Facebook'
  else "Campaign Id"
end

Edit your column's results using a CASE statement You can also use the CASE statement to alter multiple query results. For example, if we wanted to rename all the results here, we would instead use the statement below:

case
  when "Campaign Id" = 'FB' then 'Facebook'
  when "Campaign Id" = 'AW' then 'Adwords'
  when "Campaign Id" = 'TV' then 'Television'
  when "Campaign Id" = 'WM' then 'Web'
  else "Campaign Id"
end

Tip! If you're looking to do this for multiple charts and are not able to correct your results in your database directly, consider creating a Custom Column in your Data Source Schema instead of creating a CASE statement in every chart. Edit multiple results using a CASE statement To perform conditional calculations You can also use a CASE statement to perform calculations or set various types of conditions for existing columns or in new columns.
For example, say we got a 15% refund on our costs for Facebook campaigns; we can use a CASE statement to set a calculation that adjusts the cost for just those rows. Doing this enables you to transform your data using values that aren't stored elsewhere or that you may want to present differently than what's in your database. In this case, we'd use the following statement:

case
  when "Campaign Id" = 'FB' then ("Cost"-("Cost"*0.15))
  else "Cost"
end

Add a column to calculate adjusted cost using a CASE statement Multiple conditions per case You can use ANDs and ORs in your CASE statements to check if a result matches multiple conditions before performing an operation on it. For example, if we wanted to flag any Facebook (FB) or Adwords (AW) campaigns that exceeded our budget, we could use the following CASE statement in a Calculated Column:

case
  when ("Campaign Id"='FB' or "Campaign Id"='AW') and "Cost">4000 then 'Exceeded budget'
  else 'OK'
end

Use your CASE statements to check if a result matches multiple conditions In queries Conditional SELECT statements using Controls The following example pulls the "Campaign Id" and "Cost" columns from the Marketing table in our SaaS Company Demo Data data source. Let's say we also have a Text Input Control on our dashboard, and we want to allow the user to choose the type of aggregation performed on the costs of each campaign ID by typing either Average, Minimum, or Maximum into the Text Input. If none of those options is used, the default is to get the total sum of costs for each campaign ID. To do this, we can modify the SELECT clause in SQL Mode with the following:

SELECT "Marketing"."campaign_id" AS "Campaign Id",
  case
    when {TEXT_INPUT} = 'Average' then AVG("Marketing"."cost")
    when {TEXT_INPUT} = 'Minimum' then MIN("Marketing"."cost")
    when {TEXT_INPUT} = 'Maximum' then MAX("Marketing"."cost")
    else SUM("Marketing"."cost")
  end as "Cost"

where TEXT_INPUT is the name of our Text Input Control.
Note: Our Text Input in this example does not have Multi-value selected in its settings. "Campaign Id" and "Cost" columns from the Marketing table in our SaaS Company Demo Data data source
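Outside Chartio, the same CASE logic can be tried against any SQLite database. Below is a minimal sketch using Python's built-in sqlite3 module; the table name and campaign rows are made up for illustration and are not Chartio's demo data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE marketing ("Campaign Id" TEXT, "Cost" REAL)')
conn.executemany(
    "INSERT INTO marketing VALUES (?, ?)",
    [("FB", 5000.0), ("AW", 3000.0), ("TV", 8000.0)],
)

# Rename campaign codes and adjust Facebook costs, mirroring the CASE
# statements above (renaming branch + 15% refund branch).
rows = conn.execute('''
    SELECT case when "Campaign Id" = 'FB' then 'Facebook'
                when "Campaign Id" = 'AW' then 'Adwords'
                when "Campaign Id" = 'TV' then 'Television'
                else "Campaign Id" end AS campaign,
           case when "Campaign Id" = 'FB' then "Cost" - ("Cost" * 0.15)
                else "Cost" end AS adjusted_cost
    FROM marketing
''').fetchall()
print(rows)  # [('Facebook', 4250.0), ('Adwords', 3000.0), ('Television', 8000.0)]
```

Because CASE is evaluated per row, each branch only fires for the rows matching its condition; everything else falls through to the else branch unchanged.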
I need help with the following program: Create a structure STRING which represents a string (as a singly linked list). Check if the string is a palindrome using a stack and a function whose prototype is int palindrome(STRING *str); Code:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char *info;
} STRING;

typedef struct node {
    STRING str;
    struct node *next;
} NODE;

void formList(NODE **head, STRING *str)
{
    NODE *node = (NODE *)malloc(sizeof(NODE));
    node->str = *str;
    node->next = NULL;
    if (*head == NULL)
        *head = node;
    else {
        NODE *temp = *head;
        while (temp->next)
            temp = temp->next;
        temp->next = node;
    }
}

void push(NODE **tos, STRING *str)
{
    NODE *node = (NODE *)malloc(sizeof(NODE));
    node->str = *str;
    node->next = *tos;
    *tos = node;
}

int palindrome(STRING *str)
{
    NODE **head;
    NODE *temp = *head;
    NODE *temp1 = *head;
    NODE *tos = NULL;
    while (temp != NULL) {
        push(&tos, str);
        temp = temp->next;
    }
    while (temp1 != NULL) {
        if (temp1 == tos) {
            temp1 = temp1->next;
            tos = tos->next;
        }
        else
            return 0;
    }
    return 1;
}

void read(STRING *str)
{
    printf("info:");
    scanf("%s", str->info);
}

int main()
{
    NODE *head = 0;
    STRING str;
    read(&str);
    formList(&head, &str);
    if (palindrome(str.info) == 1)
        printf("string is a palindrome");
    else
        printf("string is not a palindrome");
    return 0;
}

Compiler warnings: |75|warning: passing argument 1 of 'palindrome' from incompatible pointer type [enabled by default]| |39|note: expected 'struct STRING *' but argument is of type 'char *'| |42|warning: 'head' is used uninitialized in this function [-Wuninitialized]| How do I resolve these errors? The info member of STRING is a char*, NOT a numeric or boolean value. What is your intent here?
Importing Data in Python Cheat Sheet with Comprehensive Tutorial Looking for an effective and handy Python code repository in the form of an Importing Data in Python Cheat Sheet? Your journey ends here, where you will learn the essential handy tips quickly and efficiently, with proper explanations that will make any kind of data importing journey into Python super easy. Introduction Are you a Python enthusiast looking to import data into your code with ease? Whether you're working on Data Analysis, Machine Learning, or any other data-related task, having a well-organized cheat sheet for importing data in Python is invaluable. So, let me present to you an Importing Data in Python Cheat Sheet which will make your life easier. For initiating any data science project, first you need to analyze the data. But before diving into stuff like data cleaning, data munging, or making cool visualizations, you need to figure out how to get your data into Python. You probably already know that there are a bunch of ways to do that, depending on what kind of files you are working with. In this article, we will explore the essential techniques and libraries that will make data import a breeze. From reading CSV files to accessing databases, we have you covered. Here we will upskill you with the Pandas library, which stands as a highly favored asset amongst data scientists, facilitating seamless data manipulation and analysis, alongside Matplotlib, a key tool for data visualization, and NumPy, the foundational library for scientific computing upon which Pandas was constructed. This guide offers you a swift introduction to the fundamentals of data importing in Python.
It equips you with the essential knowledge to embark on the journey of refining and managing your data effectively. Let's dive in! Importing Data from Different Sources Unlock the world of data importation in Python with this handy cheat sheet. It takes you on a journey through the fundamentals of bringing data into your workspace. Here's what you'll discover: Diverse Data Sources: Learn to import not just plain text files but also data from a variety of other software formats, including Excel spreadsheets, SQL, and relational databases. Efficient Data Exploration: Discover how to seamlessly navigate your filesystem, ask for assistance when needed, and kickstart your data exploration journey. In a nutshell, this cheat sheet will equip you with the essential knowledge to dive into the exciting domain of data science with Python. Get ready to supercharge your data-handling skills! 1. Importing Data from CSV files CSV files are ubiquitous when it comes to storing tabular data. Python provides several libraries to read CSV files effortlessly. One of the most popular options is the pandas library.
Here’s how you can use it: >>> import pandas as pd # Read CSV file into a DataFrame >>> data = pd.read_csv(‘data.csv’) # Provide data.csv file path using ‘/’ within quotes if the data is not in the same directory of python # Access the data in the DataFrame >> print(data.head()) • Importing Flat Data CSV Files with Pandas  >>> import pandas as pd >>> source_file = ‘flat_data_csv.csv’ >>> data = pd.read_csv(source_file,          Nrows = 10, #Number of rows of the source file to read           header = None, # Column number to be used as column names           sep = ‘\t’, # ‘\t’ to be considered as the delimiter           comment = ‘#’, # ‘#’ Character to split the comments           na_values =[“”]) # ”” string that is NULL value to recognize as NA/NaN 2. Importing Data from Excel files When working with Excel files, the panda’s library again comes to the rescue. It provides a simple way to read Excel files into DataFrames with the help of below Python codes: >>> import pandas as pd # Read Excel file into a DataFrame >>> data = pd.read_excel(‘data.xlsx’) # Provide data.csv file path using ‘/’ within quotes if the data is not in the same directory of python # Access the data in the DataFrame >> print(data.head()) 3. Importing Plain Text Data Files >>> import pandas as pd >>> filename = ‘data.txt’ >>> file = open(filename, mode=’r’) #Open the file for reading >>> text = file.read() #Read a file’s contents >>> print(file.closed) #Check whether file is closed >>> file.close() #Close file >> print(text) Use the content manager with: >>> with open(‘data.txt’, ‘r’) as file:      print(file.readline()) #Read a single line      print(file.readline())      print(file.readline()) 4. Importing Table Data Flat Files Table data flat files typically refer to structured data files where information is organized in rows and columns, resembling a table or spreadsheet. 
These flat files are plain text files with a specific structure, often using delimiters like commas (CSV – Comma-Separated Values) or tabs (TSV – Tab-Separated Values) to separate data elements. Python provides various libraries and methods for working with table data flat files, making it easy to read, manipulate, and analyze structured data efficiently. These files are commonly used for tasks like data import, data transformation, and data analysis in fields like data science, research, and database management.

• Importing Table Data Flat Text Files with NumPy

>>> import numpy as np
>>> filename = 'flat_data.txt'
>>> file = open(filename, mode='r')  # Open the file for reading
>>> text = file.read()               # Read the file's contents
>>> print(file.closed)               # Check whether the file is closed
>>> file.close()                     # Close the file
>>> print(text)

• Importing Table Data Flat Text Files with one data type:

>>> import numpy as np
>>> filename = 'flat_data_one_datatype.txt'
>>> data = np.loadtxt(filename,
        delimiter = ',',  # ',' is the delimiter used to separate values
        skiprows = 2,     # Skip the initial 2 lines
        usecols = [0,2],  # Read the 1st and 3rd columns
        dtype = str)      # String is the data type of the resulting output array

• Importing Table Data Flat Text with mixed data types

>>> import numpy as np
>>> filename = 'flat_data_mixed_datatype.csv'
>>> data = np.genfromtxt(filename,
        delimiter = ',',  # ',' is the delimiter used to separate values
        names = True,     # Capture the names from the column header
        dtype = None)
>>> data_array = np.recfromcsv(filename)  # The default dtype of the np.recfromcsv() function is None

5. Importing JSON files into Python Using the code below, you can import any JSON file into Python:

>>> import json
# Open JSON file
>>> with open('data.json') as file:
...     data = json.load(file)
# Access the data
>>> print(data)

6. Importing from SQL databases Python has excellent support for interacting with databases.
The Panda’s library, combined with the sqlalchemy library, enables seamless importing of data from SQL databases: >>> import pandas as pd >>> from sqlalchemy import create_engine # Connect to the database >>> engine = create_engine(‘sqlite:///data.db’) # Import data using a SQL query >>> query = ‘SELECT * FROM table_name’ >>> data = pd.read_sql(query, con=engine) # Access the data >> print(data.head()) Pro Tip: The read_sql function also supports other database engines like MySQL, PostgreSQL, and more. Managing Data Formats and Encoding After importing data into Python, we need to deal with managing the data formats and their encoding. In this step, we ensure that the data is correctly interpreted and manipulated. This step includes tasks such as handling different file formats (e.g., CSV, JSON), converting data types, handling character encoding (e.g., UTF-8), and addressing missing or inconsistent data.  Properly managing data formats and encoding is crucial to maintaining data integrity and compatibility for subsequent analysis and processing. Dealing with different encodings When importing data, you might encounter different encodings. To handle encoding-related issues, you can use the Chardet library, which automatically detects encoding: >>> import chardet # Detect the encoding of a file >>> with open (‘data.txt’, ‘rb’) as file:     raw_data = file.read()     result = chardet.detect(raw_data) # Get the detected encoding >>> encoding = result[‘encoding’] >>> print (f”Detected Encoding: {encoding}”) Specifying data types Sometimes, the default data types inferred by import libraries may not match your specific needs. 
To overcome this, you can specify the desired data types, ensuring accurate data representation:

>>> import pandas as pd
# Read CSV file with specific data types
>>> data = pd.read_csv('data.csv', dtype = {'column_name': int})
# Access the data
>>> print(data.head())

Exploring Your Data in Python After properly importing data into Python and managing its data types and encoding, you need to explore your data to observe its quality before you start your analysis. Below are techniques to carry out data exploration in Python: Exploring Data using NumPy Arrays

>>> data_array.dtype  # Data type of array elements
>>> data_array.shape  # Array dimensions
>>> len(data_array)   # Length of array

Exploring Data using Pandas DataFrames

>>> df.head()    # Return first DataFrame rows
>>> df.tail()    # Return last DataFrame rows
>>> df.index     # Describe index
>>> df.columns   # Describe DataFrame columns
>>> df.info()    # Info of a DataFrame
>>> data_array = data.values  # Convert from a DataFrame to a NumPy array

Exploring Excel Spreadsheets Data

>>> source_file = 'excel_data.xlsx'
>>> data = pd.ExcelFile(source_file)
>>> df_sheet2 = data.parse('2020-2023',
        skiprows = [0],
        names = ['Country', 'AAM: War(2022)'])
>>> df_sheet1 = data.parse(0,
        parse_cols = [0],
        skiprows = [0],
        names = ['Country'])

To access the sheet names, use the sheet_names attribute:

>>> data.sheet_names

Accessing the Python Help Section In case you are confused by any of the above code or get errors while running it on your datasets, you can explore Python's help system to solve your specific issues. To access it directly from code, use:

>>> np.info(np.ndarray.dtype)
>>> help(pd.read_csv)

FAQs How to Import a Dataset in a Python Jupyter Notebook? To import a dataset in a Python Jupyter Notebook, you can use libraries like Pandas.
Begin by installing Pandas if it's not already installed. Then, use the read_csv() method to import CSV files, or other methods for different formats. Ensure your dataset is in the same directory or provide the file path. You can also use web URLs for remote datasets. Once imported, you can access, manipulate, and analyze the data effectively within your Jupyter Notebook, making it a powerful tool for data science and analysis tasks. What is the Difference Between NumPy and Pandas? NumPy and Pandas are two popular Python libraries that are used for data manipulation and analysis. While both libraries are used for data-related tasks, they serve different purposes. NumPy is a fundamental Python library used for scientific computing. It provides high-performance multidimensional arrays and tools to deal with them. A NumPy array is a grid of values (of the same type) that are indexed by a tuple of positive integers. NumPy arrays are fast, easy to understand, and let users perform calculations across whole arrays. Pandas, on the other hand, is built on top of NumPy and provides high-level data manipulation tools and structures tailored for working with structured and labeled data. Pandas provides high-performance, fast, easy-to-use data structures and data analysis tools for manipulating numeric data and time series. In pandas, we can import data from various file formats like JSON, SQL, Microsoft Excel, etc. Pandas has a 2D table object called DataFrame. Here are some of the key differences between NumPy and Pandas:
• Data compatibility: While Pandas primarily works with tabular data, the NumPy module works with numerical data.
• Tools: Pandas includes powerful data analysis tools like DataFrame and Series, whereas the NumPy module offers arrays.
• Performance: Pandas consumes more memory than NumPy but has better performance when the number of rows is 500K or more.
NumPy has better performance when the number of rows is 50K or less. Conclusion Importing data is an indispensable step in many Python applications. Having a cheat sheet with the right techniques and libraries can save you valuable time and effort. In this article, we covered the essentials of importing data from CSV files, Excel files, JSON files, and SQL databases. We also explored how to manage data formats and encode data properly. So go ahead and explore the vast world of data with Python! Author • Neha Singh Written by: I'm a full-time freelance writer and editor who enjoys wordsmithing. The 8-year-long journey as a content writer and editor has made me realize the significance and power of choosing the right words. Prior to my writing journey, I was a trainer and human resource manager. With more than a decade-long professional journey, I find myself more powerful as a wordsmith. As an avid writer, everything around me inspires me and pushes me to string words and ideas together to create unique content; and when I'm not writing and editing, I enjoy experimenting with my culinary skills, reading, gardening, and spending time with my adorable little mutt Neel.
slick - 9 months ago Bash Question How to cancel a process when Control+C is pressed in a shell script? I have a script that does some work. It does it properly, but it takes a while to finish. Because I want to show that a job is running, I added a spinner cursor animation. I call it at the beginning of my script, assign the PID of that function to a variable, and when everything is done I simply kill it to stop the animation.

#!/bin/bash

spinner &
SPINNER_PID=$!

# a lot of stuff going on here...
# takes some time to finish

kill $SPINNER_PID &>/dev/null
printf "All done"

Function definition:

spinner() {
  local i sp n
  sp='/-\|'
  n=${#sp}
  while sleep 0.1; do
    printf "%s\b" "${sp:i++%n:1}"
  done
}

All works fine if I don't interrupt. Imagine that I want to cancel my long task with Control+C. It cancels, but the spinner obviously keeps animating, since it's nothing but an infinite loop. How can I kill it when the script is cancelled manually and never reaches kill $SPINNER_PID &>/dev/null? Answer There is a builtin called trap that allows you to run code when Ctrl-C is pressed. There is a good explanation of it here, and this website explains how to run a function on Ctrl-C. trap your_func INT
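Building on that answer, here is a minimal sketch (not from the original thread) of wiring the spinner kill into a trap handler; the cleanup name and the sleep 3 stand-in for the real work are made up:

```shell
#!/bin/bash
# Sketch: stop the spinner from a trap handler so Ctrl+C also kills it.

spinner() {
  local i sp n
  sp='/-\|'
  n=${#sp}
  while sleep 0.1; do
    printf "%s\b" "${sp:i++%n:1}"
  done
}

cleanup() {                       # hypothetical handler name
  kill "$SPINNER_PID" &>/dev/null
  exit 130                        # conventional exit status after SIGINT
}

spinner &
SPINNER_PID=$!
trap cleanup INT                  # run cleanup when Ctrl+C sends SIGINT

# ... long-running work here ...
sleep 3

kill "$SPINNER_PID" &>/dev/null
printf "All done\n"
```

Using `trap 'kill "$SPINNER_PID" &>/dev/null' EXIT` instead would cover normal exits and most fatal signals with a single handler.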
Solving Quadratic Equations – Exercise 2 Solve the quadratic equation a) \(6x^2+11x-35=0\) b) \(3x^2+6x-18\frac{1}{3}=0\) c) \(3x^2-10x+5=0\) d) \(x^2+4x+5=0\) To solve a quadratic equation, we first compute the discriminant \(\Delta\), then \(x_1\) and \(x_2\), if they exist. \(\Delta=b^2-4ac\) \(x_1=\frac{-b-\sqrt{\Delta}}{2a} \:\:\:\:\:\:\:\:\: x_2=\frac{-b+\sqrt{\Delta}}{2a}\) Solution a) \(6x^2+11x-35=0\) From the equation we read off \(a=6;b=11;c=-35\) and compute the discriminant: \(\Delta=11^2-4\cdot 6\cdot (-35)=121+840=961\) The discriminant is positive, which means the equation has two solutions. \(\sqrt{\Delta}=\sqrt{961}=31\) \(x_1=\frac{-11-31}{2\cdot 6} \:\:\:\:\:\:\:\:\: x_2=\frac{-11+31}{2\cdot 6}\) \(x_1=\frac{-42}{12} \:\:\:\:\:\:\:\:\: x_2=\frac{20}{12}\) \(x_1=-3\frac{6}{12} \:\:\:\:\:\:\:\:\: x_2=1\frac{8}{12}\) \(x_1=-3\frac{1}{2} \:\:\:\:\:\:\:\:\: x_2=1\frac{2}{3}\) Answer: The equation has two solutions, \(x=-3\frac{1}{2}\) and \(x=1\frac{2}{3}\). b) \(3x^2+6x-18\frac{1}{3}=0\) From the equation we read off \(a=3;b=6;c=-18\frac{1}{3}\) and compute the discriminant: \(\Delta=6^2-4\cdot 3\cdot (-18\frac{1}{3})=36+220=256\) The discriminant is positive, which means the equation has two solutions. \(\sqrt{\Delta}=\sqrt{256}=16\) \(x_1=\frac{-6-16}{2\cdot 3} \:\:\:\:\:\:\:\:\: x_2=\frac{-6+16}{2\cdot 3}\) \(x_1=\frac{-22}{6} \:\:\:\:\:\:\:\:\: x_2=\frac{10}{6}\) \(x_1=-3\frac{4}{6} \:\:\:\:\:\:\:\:\: x_2=1\frac{4}{6}\) \(x_1=-3\frac{2}{3} \:\:\:\:\:\:\:\:\: x_2=1\frac{2}{3}\) Answer: The equation has two solutions, \(x=-3\frac{2}{3}\) and \(x=1\frac{2}{3}\). c) \(3x^2-10x+5=0\) From the equation we read off \(a=3;b=-10;c=5\) and compute the discriminant: \(\Delta=(-10)^2-4\cdot 3\cdot 5=100-60=40\) The discriminant is positive, which means the equation has two solutions. \(\sqrt{\Delta}=\sqrt{40}=\sqrt{4\cdot 10}=2\sqrt{10}\) \(x_1=\frac{-(-10)-2\sqrt{10}}{2\cdot 3} \:\:\:\:\:\:\:\:\: x_2=\frac{-(-10)+2\sqrt{10}}{2\cdot 3}\) \(x_1=\frac{10-2\sqrt{10}}{6} \:\:\:\:\:\:\:\:\: x_2=\frac{10+2\sqrt{10}}{6}\) \(x_1=\frac{10}{6}-\frac{2\sqrt{10}}{6} \:\:\:\:\:\:\:\:\: x_2=\frac{10}{6}+\frac{2\sqrt{10}}{6}\) \(x_1=1\frac{2}{3}-\frac{\sqrt{10}}{3} \:\:\:\:\:\:\:\:\: x_2=1\frac{2}{3}+\frac{\sqrt{10}}{3}\) Answer: The solutions of the equation are: \(x_1=1\frac{2}{3}-\frac{\sqrt{10}}{3} \:\:\:\:\:\:\:\:\: x_2=1\frac{2}{3}+\frac{\sqrt{10}}{3}\) d) \(x^2+4x+5=0\) From the equation we read off \(a=1;b=4;c=5\) and compute the discriminant: \(\Delta=4^2-4\cdot 1\cdot 5=16-20=-4\) The discriminant is negative, which means the equation has no real solutions. Answer: The given equation has no real solutions.
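As a quick cross-check (not part of the original exercise), the discriminant computations above can be verified with a few lines of Python:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0 (sorted), or [] if none exist."""
    delta = b * b - 4 * a * c       # the discriminant from the lesson
    if delta < 0:
        return []                   # negative discriminant: no real roots
    s = math.sqrt(delta)
    return sorted([(-b - s) / (2 * a), (-b + s) / (2 * a)])

print(solve_quadratic(6, 11, -35))  # [-3.5, 1.6666666666666667], i.e. -3 1/2 and 1 2/3
print(solve_quadratic(1, 4, 5))     # [] - no real solutions, matching part d)
```

The function mirrors the hand computation exactly: it evaluates \(\Delta=b^2-4ac\) first and only forms the two root formulas when \(\Delta\ge 0\).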
// Copyright 2011 Google Inc. All rights reserved.
// Use of this source code is governed by the Apache 2.0
// license that can be found in the LICENSE file.

/*
Package delay provides a way to execute code outside the scope of a
user request by using the taskqueue API.

To declare a function that may be executed later, call Func
in a top-level assignment context, passing it an arbitrary string key
and a function whose first argument is of type context.Context.
The key is used to look up the function so it can be called later.

	var laterFunc = delay.Func("key", myFunc)

It is also possible to use a function literal.

	var laterFunc = delay.Func("key", func(c context.Context, x string) {
		// ...
	})

To call a function, invoke its Call method.

	laterFunc.Call(c, "something")

A function may be called any number of times. If the function has any
return arguments, and the last one is of type error, the function may
return a non-nil error to signal that the function should be retried.

The arguments to functions may be of any type that is encodable by the gob
package. If an argument is of interface type, it is the client's responsibility
to register with the gob package whatever concrete type may be passed for that
argument; see http://golang.org/pkg/gob/#Register for details.

Any errors during initialization or execution of a function will be
logged to the application logs. Error logs that occur during initialization will
be associated with the request that invoked the Call method.

The state of a function invocation that has not yet successfully
executed is preserved by combining the file name in which it is declared
with the string key that was passed to the Func function.
Updating an app with pending function invocations should be safe as long as the
relevant functions have the (filename, key) combination preserved.
The filename is parsed according to these rules:
 * Paths in package main are shortened to just the file name (github.com/foo/foo.go -> foo.go)
 * Paths are stripped to just package paths (/go/src/github.com/foo/bar.go -> github.com/foo/bar.go)
 * Module versions are stripped (/go/pkg/mod/github.com/foo/bar@v1/baz.go -> github.com/foo/bar/baz.go)

There is some inherent risk of pending function invocations being lost during
an update that contains large changes. For example, switching from using GOPATH
to go.mod is a large change that may inadvertently cause file paths to change.

The delay package uses the Task Queue API to create tasks that call the
reserved application path "/_ah/queue/go/delay".
This path must not be marked as "login: required" in app.yaml;
it must be marked as "login: admin" or have no access restriction.
*/
package delay // import "google.golang.org/appengine/delay"

import (
	"bytes"
	stdctx "context"
	"encoding/gob"
	"errors"
	"fmt"
	"go/build"
	stdlog "log"
	"net/http"
	"path/filepath"
	"reflect"
	"regexp"
	"runtime"
	"strings"

	"golang.org/x/net/context"

	"google.golang.org/appengine"
	"google.golang.org/appengine/internal"
	"google.golang.org/appengine/log"
	"google.golang.org/appengine/taskqueue"
)

// Function represents a function that may have a delayed invocation.
type Function struct {
	fv  reflect.Value // Kind() == reflect.Func
	key string
	err error // any error during initialization
}

const (
	// The HTTP path for invocations.
	path = "/_ah/queue/go/delay"
	// Use the default queue.
queue = "" ) type contextKey int var ( // registry of all delayed functions funcs = make(map[string]*Function) // precomputed types errorType = reflect.TypeOf((*error)(nil)).Elem() // errors errFirstArg = errors.New("first argument must be context.Context") errOutsideDelayFunc = errors.New("request headers are only available inside a delay.Func") // context keys headersContextKey contextKey = 0 stdContextType = reflect.TypeOf((*stdctx.Context)(nil)).Elem() netContextType = reflect.TypeOf((*context.Context)(nil)).Elem() ) func isContext(t reflect.Type) bool { return t == stdContextType || t == netContextType } var modVersionPat = regexp.MustCompile("@v[^/]+") // fileKey finds a stable representation of the caller's file path. // For calls from package main: strip all leading path entries, leaving just the filename. // For calls from anywhere else, strip $GOPATH/src, leaving just the package path and file path. func fileKey(file string) (string, error) { if !internal.IsSecondGen() || internal.MainPath == "" { return file, nil } // If the caller is in the same Dir as mainPath, then strip everything but the file name. if filepath.Dir(file) == internal.MainPath { return filepath.Base(file), nil } // If the path contains "_gopath/src/", which is what the builder uses for // apps which don't use go modules, strip everything up to and including src. // Or, if the path starts with /tmp/staging, then we're importing a package // from the app's module (and we must be using go modules), and we have a // path like /tmp/staging1234/srv/... so strip everything up to and // including the first /srv/. // And be sure to look at the GOPATH, for local development. 
s := string(filepath.Separator) for _, s := range []string{filepath.Join("_gopath", "src") + s, s + "srv" + s, filepath.Join(build.Default.GOPATH, "src") + s} { if idx := strings.Index(file, s); idx > 0 { return file[idx+len(s):], nil } } // Finally, if that all fails then we must be using go modules, and the file is a module, // so the path looks like /go/pkg/mod/github.com/foo/[email protected]/baz.go // So... remove everything up to and including mod, plus the @.... version string. m := "/mod/" if idx := strings.Index(file, m); idx > 0 { file = file[idx+len(m):] } else { return file, fmt.Errorf("fileKey: unknown file path format for %q", file) } return modVersionPat.ReplaceAllString(file, ""), nil } // Func declares a new Function. The second argument must be a function with a // first argument of type context.Context. // This function must be called at program initialization time. That means it // must be called in a global variable declaration or from an init function. // This restriction is necessary because the instance that delays a function // call may not be the one that executes it. Only the code executed at program // initialization time is guaranteed to have been run by an instance before it // receives a request. func Func(key string, i interface{}) *Function { f := &Function{fv: reflect.ValueOf(i)} // Derive unique, somewhat stable key for this func. _, file, _, _ := runtime.Caller(1) fk, err := fileKey(file) if err != nil { // Not fatal, but log the error stdlog.Printf("delay: %v", err) } f.key = fk + ":" + key t := f.fv.Type() if t.Kind() != reflect.Func { f.err = errors.New("not a function") return f } if t.NumIn() == 0 || !isContext(t.In(0)) { f.err = errFirstArg return f } // Register the function's arguments with the gob package. // This is required because they are marshaled inside a []interface{}. // gob.Register only expects to be called during initialization; // that's fine because this function expects the same. 
for i := 0; i < t.NumIn(); i++ { // Only concrete types may be registered. If the argument has // interface type, the client is resposible for registering the // concrete types it will hold. if t.In(i).Kind() == reflect.Interface { continue } gob.Register(reflect.Zero(t.In(i)).Interface()) } if old := funcs[f.key]; old != nil { old.err = fmt.Errorf("multiple functions registered for %s in %s", key, file) } funcs[f.key] = f return f } type invocation struct { Key string Args []interface{} } // Call invokes a delayed function. // err := f.Call(c, ...) // is equivalent to // t, _ := f.Task(...) // _, err := taskqueue.Add(c, t, "") func (f *Function) Call(c context.Context, args ...interface{}) error { t, err := f.Task(args...) if err != nil { return err } _, err = taskqueueAdder(c, t, queue) return err } // Task creates a Task that will invoke the function. // Its parameters may be tweaked before adding it to a queue. // Users should not modify the Path or Payload fields of the returned Task. func (f *Function) Task(args ...interface{}) (*taskqueue.Task, error) { if f.err != nil { return nil, fmt.Errorf("delay: func is invalid: %v", f.err) } nArgs := len(args) + 1 // +1 for the context.Context ft := f.fv.Type() minArgs := ft.NumIn() if ft.IsVariadic() { minArgs-- } if nArgs < minArgs { return nil, fmt.Errorf("delay: too few arguments to func: %d < %d", nArgs, minArgs) } if !ft.IsVariadic() && nArgs > minArgs { return nil, fmt.Errorf("delay: too many arguments to func: %d > %d", nArgs, minArgs) } // Check arg types. for i := 1; i < nArgs; i++ { at := reflect.TypeOf(args[i-1]) var dt reflect.Type if i < minArgs { // not a variadic arg dt = ft.In(i) } else { // a variadic arg dt = ft.In(minArgs).Elem() } // nil arguments won't have a type, so they need special handling. 
		if at == nil {
			// nil interface
			switch dt.Kind() {
			case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
				continue // may be nil
			}
			return nil, fmt.Errorf("delay: argument %d has wrong type: %v is not nilable", i, dt)
		}
		switch at.Kind() {
		case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
			av := reflect.ValueOf(args[i-1])
			if av.IsNil() {
				// nil value in interface; not supported by gob, so we replace it
				// with a nil interface value
				args[i-1] = nil
			}
		}
		if !at.AssignableTo(dt) {
			return nil, fmt.Errorf("delay: argument %d has wrong type: %v is not assignable to %v", i, at, dt)
		}
	}

	inv := invocation{
		Key:  f.key,
		Args: args,
	}

	buf := new(bytes.Buffer)
	if err := gob.NewEncoder(buf).Encode(inv); err != nil {
		return nil, fmt.Errorf("delay: gob encoding failed: %v", err)
	}

	return &taskqueue.Task{
		Path:    path,
		Payload: buf.Bytes(),
	}, nil
}

// RequestHeaders returns the special task-queue HTTP request headers for the
// current task queue handler. Returns an error if called from outside a delay.Func.
func RequestHeaders(c context.Context) (*taskqueue.RequestHeaders, error) {
	if ret, ok := c.Value(headersContextKey).(*taskqueue.RequestHeaders); ok {
		return ret, nil
	}
	return nil, errOutsideDelayFunc
}

var taskqueueAdder = taskqueue.Add // for testing

func init() {
	http.HandleFunc(path, func(w http.ResponseWriter, req *http.Request) {
		runFunc(appengine.NewContext(req), w, req)
	})
}

func runFunc(c context.Context, w http.ResponseWriter, req *http.Request) {
	defer req.Body.Close()

	c = context.WithValue(c, headersContextKey, taskqueue.ParseRequestHeaders(req.Header))

	var inv invocation
	if err := gob.NewDecoder(req.Body).Decode(&inv); err != nil {
		log.Errorf(c, "delay: failed decoding task payload: %v", err)
		log.Warningf(c, "delay: dropping task")
		return
	}

	f := funcs[inv.Key]
	if f == nil {
		log.Errorf(c, "delay: no func with key %q found", inv.Key)
		log.Warningf(c, "delay: dropping task")
		return
	}

	ft := f.fv.Type()
	in := []reflect.Value{reflect.ValueOf(c)}
	for _, arg := range inv.Args {
		var v reflect.Value
		if arg != nil {
			v = reflect.ValueOf(arg)
		} else {
			// Task was passed a nil argument, so we must construct
			// the zero value for the argument here.
			n := len(in) // we're constructing the nth argument
			var at reflect.Type
			if !ft.IsVariadic() || n < ft.NumIn()-1 {
				at = ft.In(n)
			} else {
				at = ft.In(ft.NumIn() - 1).Elem()
			}
			v = reflect.Zero(at)
		}
		in = append(in, v)
	}
	out := f.fv.Call(in)

	if n := ft.NumOut(); n > 0 && ft.Out(n-1) == errorType {
		if errv := out[n-1]; !errv.IsNil() {
			log.Errorf(c, "delay: func failed (will retry): %v", errv.Interface())
			w.WriteHeader(http.StatusInternalServerError)
			return
		}
	}
}
DataGridViewClipboardCopyMode Enum

Definition
Defines constants that indicate whether content is copied from a DataGridView control to the Clipboard.

C++/CLI: public enum class DataGridViewClipboardCopyMode
C#:      public enum DataGridViewClipboardCopyMode
F#:      type DataGridViewClipboardCopyMode =
VB:      Public Enum DataGridViewClipboardCopyMode

Inheritance: DataGridViewClipboardCopyMode

Fields
Disable (0) - Copying to the Clipboard is disabled.
EnableAlwaysIncludeHeaderText (3) - The text values of selected cells can be copied to the Clipboard. Header text is included for rows and columns that contain selected cells.
EnableWithAutoHeaderText (1) - The text values of selected cells can be copied to the Clipboard. Row or column header text is included for rows or columns that contain selected cells only when the SelectionMode property is set to RowHeaderSelect or ColumnHeaderSelect and at least one header is selected.
EnableWithoutHeaderText (2) - The text values of selected cells can be copied to the Clipboard. Header text is not included.

Examples
The following code example demonstrates how to enable copying in the DataGridView control. For the complete example, see How to: Enable Users to Copy Multiple Cells to the Clipboard from the Windows Forms DataGridView Control.

private void Form1_Load(object sender, System.EventArgs e)
{
    // Initialize the DataGridView control.
this.DataGridView1.ColumnCount = 5; this.DataGridView1.Rows.Add(new string[] { "A", "B", "C", "D", "E" }); this.DataGridView1.Rows.Add(new string[] { "F", "G", "H", "I", "J" }); this.DataGridView1.Rows.Add(new string[] { "K", "L", "M", "N", "O" }); this.DataGridView1.Rows.Add(new string[] { "P", "Q", "R", "S", "T" }); this.DataGridView1.Rows.Add(new string[] { "U", "V", "W", "X", "Y" }); this.DataGridView1.AutoResizeColumns(); this.DataGridView1.ClipboardCopyMode = DataGridViewClipboardCopyMode.EnableWithoutHeaderText; } private void CopyPasteButton_Click(object sender, System.EventArgs e) { if (this.DataGridView1 .GetCellCount(DataGridViewElementStates.Selected) > 0) { try { // Add the selection to the clipboard. Clipboard.SetDataObject( this.DataGridView1.GetClipboardContent()); // Replace the text box contents with the clipboard text. this.TextBox1.Text = Clipboard.GetText(); } catch (System.Runtime.InteropServices.ExternalException) { this.TextBox1.Text = "The Clipboard could not be accessed. Please try again."; } } } Private Sub Form1_Load(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles Me.Load ' Initialize the DataGridView control. Me.DataGridView1.ColumnCount = 5 Me.DataGridView1.Rows.Add(New String() {"A", "B", "C", "D", "E"}) Me.DataGridView1.Rows.Add(New String() {"F", "G", "H", "I", "J"}) Me.DataGridView1.Rows.Add(New String() {"K", "L", "M", "N", "O"}) Me.DataGridView1.Rows.Add(New String() {"P", "Q", "R", "S", "T"}) Me.DataGridView1.Rows.Add(New String() {"U", "V", "W", "X", "Y"}) Me.DataGridView1.AutoResizeColumns() Me.DataGridView1.ClipboardCopyMode = _ DataGridViewClipboardCopyMode.EnableWithoutHeaderText End Sub Private Sub CopyPasteButton_Click(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles CopyPasteButton.Click If Me.DataGridView1.GetCellCount( _ DataGridViewElementStates.Selected) > 0 Then Try ' Add the selection to the clipboard. 
Clipboard.SetDataObject( _
    Me.DataGridView1.GetClipboardContent())

' Replace the text box contents with the clipboard text.
Me.TextBox1.Text = Clipboard.GetText()

Catch ex As System.Runtime.InteropServices.ExternalException
    Me.TextBox1.Text = _
        "The Clipboard could not be accessed. Please try again."
End Try
End If
End Sub

Remarks
This enumeration is used by the ClipboardCopyMode property to indicate whether users can copy the text values of selected cells to the Clipboard and whether row and column header text is included.
The Ultimate Guide to Augmented Reality Technology

Introduction to Augmented Reality

Augmented reality may not be as exciting as a virtual reality roller coaster ride; however, it may prove to be a very useful tool in our everyday lives. It holds this potential because it brings elements of the virtual world into the real world, enhancing the things we see, hear, and feel. Among the reality technologies, augmented reality lies in the middle of the mixed reality spectrum, between the real and virtual worlds.

Augmented Reality Definition

What is Augmented Reality? An enhanced version of reality where live direct or indirect views of physical real-world environments are augmented with superimposed computer-generated images over a user's view of the real world, thus enhancing one's current perception of reality.

Augmented Reality Explained

Simple Explanation of Augmented Reality

The origin of the word augmented is augment, which means to add something. In the case of augmented reality (also called AR), graphics, sounds, and touch feedback are added to our natural world. Unlike virtual reality, which requires you to inhabit an entirely virtual environment, augmented reality uses your existing natural environment and simply overlays virtual information on top of it. As both virtual and real worlds harmoniously coexist, users of augmented reality experience a new and improved world where virtual information is used as a tool to provide assistance in everyday activities. Applications of augmented reality can be as simple as a text notification or as complicated as instructions on how to perform a life-threatening surgical procedure. They can highlight certain features, enhance understanding, and provide accessible and timely data.
Cell phone apps and business applications are a few of the many applications driving augmented reality application development. The key point is that the information provided is highly topical and relevant to what you are doing.

Types of Augmented Reality

Augmented Reality Categories

Several categories of augmented reality technology exist, each with varying differences in their objectives and applicational use cases. Below, we explore the various types of technologies that make up augmented reality:

Marker Based Augmented Reality

Marker-based augmented reality (also called Image Recognition) uses a camera and some type of visual marker, such as a QR/2D code, to produce a result only when the marker is sensed by a reader. Marker based applications use a camera on the device to distinguish a marker from any other real world object. Distinct, but simple patterns (such as a QR code) are used as the markers, because they can be easily recognized and do not require a lot of processing power to read. The position and orientation are also calculated, and some type of content and/or information is then overlaid on the marker.

Markerless Augmented Reality

As one of the most widely implemented applications of augmented reality, markerless (also called location-based, position-based, or GPS) augmented reality uses a GPS, digital compass, velocity meter, or accelerometer embedded in the device to provide data based on your location. A strong force behind markerless augmented reality technology is the wide availability of smartphones and the location detection features they provide. It is most commonly used for mapping directions, finding nearby businesses, and other location-centric mobile applications.

Projection Based Augmented Reality

Projection based augmented reality works by projecting artificial light onto real world surfaces.
Projection based augmented reality applications allow for human interaction by sending light onto a real world surface and then sensing the user's touch of that projected light. Detecting the user's interaction is done by differentiating between an expected (or known) projection and the altered projection (caused by the user's interaction). Another interesting application of projection based augmented reality utilizes laser plasma technology to project a three-dimensional (3D) interactive hologram into mid-air.

Superimposition Based Augmented Reality

Superimposition based augmented reality either partially or fully replaces the original view of an object with a newly augmented view of that same object. In superimposition based augmented reality, object recognition plays a vital role because the application cannot replace the original view with an augmented one if it cannot determine what the object is. A strong consumer-facing example of superimposition based augmented reality can be found in the Ikea augmented reality furniture catalogue. By downloading an app and scanning selected pages in their printed or digital catalogue, users can place virtual Ikea furniture in their own home with the help of augmented reality.

How Does Augmented Reality Work?

How Does AR work? In order to understand how augmented reality technology works, one must first understand its objective: to bring computer generated objects into the real world, which only the user can see. In most augmented reality applications, a user will see both synthetic and natural light. This is done by overlaying projected images on top of a pair of see-through goggles or glasses, which allow the images and interactive virtual objects to layer on top of the user's view of the real world. Augmented Reality devices are often self-contained, meaning that unlike the Oculus Rift or HTC Vive VR headsets, they are completely untethered and do not need a cable or desktop computer to function.
How Do Augmented Reality Devices Work (Inside)

Augmented realities can be displayed on a wide variety of displays, from screens and monitors, to handheld devices or glasses. Google Glass and other head-up displays (HUD) put augmented reality directly onto your face, usually in the form of glasses. Handheld devices employ small displays that fit in users' hands, including smartphones and tablets. As reality technologies continue to advance, augmented reality devices will gradually require less hardware and start being applied to things like contact lenses and virtual retinal displays.

Key Components to Augmented Reality Devices

• Sensors and Cameras Sensors are usually on the outside of the augmented reality device, and gather a user's real world interactions and communicate them to be processed and interpreted. Cameras are also located on the outside of the device, and visually scan to collect data about the surrounding area. The devices take this information, which often determines where surrounding physical objects are located, and then formulate a digital model to determine appropriate output. In the case of Microsoft Hololens, specific cameras perform specific duties, such as depth sensing. Depth sensing cameras work in tandem with two "environment understanding cameras" on each side of the device. Another common type of camera is a standard several megapixel camera (similar to the ones used in smartphones) to record pictures, videos, and sometimes information to assist with augmentation.

• Projection While "Projection Based Augmented Reality" is a category in itself, we are specifically referring to a miniature projector often found in a forward and outward-facing position on wearable augmented reality headsets. The projector can essentially turn any surface into an interactive environment.
As mentioned above, the information taken in by the cameras examining the surrounding world is processed and then projected onto a surface in front of the user, which could be a wrist, a wall, or even another person. The use of projection in augmented reality devices means that screen real estate will eventually become a less important component. In the future, you may not need an iPad to play an online game of chess because you will be able to play it on the tabletop in front of you.

• Processing Augmented reality devices are basically mini-supercomputers packed into tiny wearable devices. These devices require significant computer processing power and utilize many of the same components that our smartphones do. These components include a CPU, a GPU, flash memory, RAM, a Bluetooth/Wifi microchip, a global positioning system (GPS) microchip, and more. Advanced augmented reality devices, such as the Microsoft Hololens, utilize an accelerometer (to measure the speed at which your head is moving), a gyroscope (to measure the tilt and orientation of your head), and a magnetometer (to function as a compass and figure out which direction your head is pointing) to provide for a truly immersive experience.

• Reflection Mirrors are used in augmented reality devices to assist with the way your eye views the virtual image. Some augmented reality devices may have "an array of many small curved mirrors" (as with the Magic Leap augmented reality device) and others may have a simple double-sided mirror with one surface reflecting incoming light to a side-mounted camera and the other surface reflecting light from a side-mounted display to the user's eye. In the Microsoft Hololens, the use of "mirrors" involves see-through holographic lenses (Microsoft refers to them as waveguides) that use an optical projection system to beam holograms into your eyes.
A so-called light engine emits the light toward two separate lenses (one for each eye), each of which consists of three layers of glass in three different primary colors (blue, green, red). The light hits those layers and then enters the eye at specific angles, intensities and colors, producing a final holistic image on the eye's retina. Regardless of method, all of these reflection paths have the same objective, which is to assist with image alignment to the user's eye.

How Augmented Reality is Controlled

Augmented reality devices are often controlled either by a touch pad or by voice commands. The touch pads are often somewhere easily reachable on the device. They work by sensing the pressure changes that occur when a user taps or swipes a specific spot. Voice commands work very similarly to the way they do on our smartphones. A tiny microphone on the device will pick up your voice and then a microprocessor will interpret the commands. Voice commands, such as those on the Google Glass augmented reality device, are preprogrammed from a list of commands that you can use. On the Google Glass, nearly all of them start with "OK, Glass," which alerts your glasses that a command is soon to follow. For example, "OK, Glass, take a picture" will send a command to the microprocessor to snap a photo of whatever you're looking at.

Augmented Reality Use Case Example: Healthcare

A strong example of augmented reality in use is in the field of healthcare. From a routine checkup to a complex surgical procedure, augmented reality can provide immense benefits and efficiencies to both patient and healthcare professional.

Physical Exams

Imagine that you walk into your scheduled doctor (or dentist) appointment, only to find your doctor (or dentist) wearing an augmented reality headset (e.g. Google Glass). Although it may look strange, this technology allows him (or her) to access past records, pictures, and other historical data in real-time to discuss with you.
Instantly accessing this digital information without having to log into a computer or check a records room proves to be a major benefit to healthcare professionals. Integration of augmented reality assisted systems with patient record management technologies is already a highly desirable utility. Data integrity and accessibility is a major benefit to this type of system, where record access becomes instantaneous and consistent across all professionals to the most current records, instructions, and policies.

Surgical Procedures

Let's take this example one step further and imagine that we are going in for a surgical procedure. Before the anesthesia takes effect, we notice that the doctor is wearing an augmented reality headset. The doctor will use this throughout the procedure for things such as display of surgical checklists and display of patient vital signs in a dashboard fashion. Augmented reality assisted surgical technologies assist professionals by providing things such as interfaces to operating room medical devices, graphical overlay-based guidance, recording & archiving of procedures, live feeds to remote users, and instant access to patient records. They can also allow for computer generated images to be projected onto any part of the body for treatment or can be combined with scanned real time images. The benefits of using augmented reality include a reduced risk of delays in surgery due to lack of familiarity with new or old conditions, reduced risk of errors in performing surgical procedures, and reduced risk for contamination if the device allows surgeons to access information without having to remove gloves (i.e. hands-free) to check instruments and data.

The Ultimate Guide to Augmented Reality Technology, from realitytechnologies.com
Create a sink

Demonstrates how to create a Cloud Logging sink.

Documentation pages that contain this code sample
To view this code sample used in context, see the following documentation:

Code samples

C#
To learn how to install and use the client library for Logging, see Logging client libraries.

private void CreateSink(string sinkId, string logId)
{
    var sinkClient = ConfigServiceV2Client.Create();
    CreateSinkRequest sinkRequest = new CreateSinkRequest();
    LogSink myLogSink = new LogSink();
    myLogSink.Name = sinkId;
    // This creates a sink using a Google Cloud Storage bucket
    // named the same as the projectId.
    // This requires editing the bucket's permissions to add the Entity Group
    // named '[email protected]' with 'Owner' access for the bucket.
    // If this is being run with a Google Cloud service account,
    // that account will need to be granted 'Owner' access to the Project.
    // In PowerShell, use this command:
    // PS > Add-GcsBucketAcl <your-bucket-name> -Role OWNER -Group [email protected]
    myLogSink.Destination = "storage.googleapis.com/" + s_projectId;
    LogName logName = new LogName(s_projectId, logId);
    myLogSink.Filter = $"logName={logName.ToString()} AND severity<=ERROR";
    ProjectName projectName = new ProjectName(s_projectId);
    sinkRequest.Sink = myLogSink;
    sinkClient.CreateSink(projectName, myLogSink, _retryAWhile);
    Console.WriteLine($"Created sink: {sinkId}.");
}

Go
To learn how to install and use the client library for Logging, see Logging client libraries.

ctx := context.Background()
_, err := client.CreateSink(ctx, &logadmin.Sink{
	ID:          "severe-errors-to-gcs",
	Destination: "storage.googleapis.com/logsinks-bucket",
	Filter:      "severity >= ERROR",
})

Java
To learn how to install and use the client library for Logging, see Logging client libraries.

SinkInfo sinkInfo = SinkInfo.of(sinkName, DatasetDestination.of(datasetName));
Sink sink = logging.create(sinkInfo);

Node.js
To learn how to install and use the client library for Logging, see Logging client libraries.

// Imports the Google Cloud client libraries
const {Logging} = require('@google-cloud/logging');
const {Storage} = require('@google-cloud/storage');

// Creates clients
const logging = new Logging();
const storage = new Storage();

/**
 * TODO(developer): Uncomment the following lines to run the code.
 */
// const sinkName = 'Name of your sink, e.g. my-sink';
// const bucketName = 'Destination bucket, e.g. my-bucket';
// const filter = 'Optional log filter, e.g. severity=ERROR';

// The destination can be a Cloud Storage bucket, a Cloud Pub/Sub topic,
// or a BigQuery dataset. In this case, it is a Cloud Storage Bucket.
// See https://cloud.google.com/logging/docs/api/tasks/exporting-logs for
// information on the destination format.
const destination = storage.bucket(bucketName);
const sink = logging.sink(sinkName);

/**
 * The filter determines which logs this sink matches and will be exported
 * to the destination. For example a filter of 'severity>=INFO' will send
 * all logs that have a severity of INFO or greater to the destination.
 * See https://cloud.google.com/logging/docs/view/advanced_filters for more
 * filter information.
 */
const config = {
  destination: destination,
  filter: filter,
};

async function createSink() {
  // See https://googleapis.dev/nodejs/logging/latest/Sink.html#create
  await sink.create(config);
  console.log(`Created sink ${sinkName} to ${bucketName}`);
}
createSink();

PHP
To learn how to install and use the client library for Logging, see Logging client libraries.

/** Create a log sink.
 *
 * @param string $projectId The Google project ID.
 * @param string $sinkName The name of the sink.
 * @param string $destination The destination of the sink.
 * @param string $filterString The filter for the sink.
 */
function create_sink($projectId, $sinkName, $destination, $filterString)
{
    $logging = new LoggingClient(['projectId' => $projectId]);
    $logging->createSink(
        $sinkName,
        $destination,
        ['filter' => $filterString]
    );
    printf("Created a sink '%s'." . PHP_EOL, $sinkName);
}

Python
To learn how to install and use the client library for Logging, see Logging client libraries.

def create_sink(sink_name, destination_bucket, filter_):
    """Creates a sink to export logs to the given Cloud Storage bucket.

    The filter determines which logs this sink matches and will be exported
    to the destination.
    For example a filter of 'severity>=INFO' will send all logs that have a
    severity of INFO or greater to the destination.
    See https://cloud.google.com/logging/docs/view/advanced_filters for more
    filter information.
    """
    logging_client = logging.Client()

    # The destination can be a Cloud Storage bucket, a Cloud Pub/Sub topic,
    # or a BigQuery dataset. In this case, it is a Cloud Storage Bucket.
    # See https://cloud.google.com/logging/docs/api/tasks/exporting-logs for
    # information on the destination format.
    destination = "storage.googleapis.com/{bucket}".format(bucket=destination_bucket)

    sink = logging_client.sink(sink_name, filter_=filter_, destination=destination)

    if sink.exists():
        print("Sink {} already exists.".format(sink.name))
        return

    sink.create()
    print("Created sink {}".format(sink.name))

Ruby
To learn how to install and use the client library for Logging, see Logging client libraries.

require "google/cloud/logging"

logging = Google::Cloud::Logging.new
storage = Google::Cloud::Storage.new

# bucket_name = "name-of-my-storage-bucket"
bucket = storage.create_bucket bucket_name

# Grant owner permission to Cloud Logging service
email = "[email protected]"
bucket.acl.add_owner "group-#{email}"

# sink_name = "name-of-my-sink"
sink = logging.create_sink sink_name, "storage.googleapis.com/#{bucket.id}"
puts "#{sink.name}: #{sink.filter} -> #{sink.destination}"

What's next
To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.
Reader Monad via ZIO sbt configuration /*** libraryDependencies += "dev.zio" %% "zio" % "1.0.0-RC19-2" */ Usage trait HasEnv { def env: Map[String, String] } def readEnv(name: String): zio.ZIO[HasEnv, Throwable, String] = zio.ZIO fromFunctionM { r => zio.ZIO effect { r.env(name) } } trait HasReadLn { def readLn(): String } val readLn: zio.ZIO[HasReadLn, Throwable, String] = zio.ZIO fromFunctionM { r => zio.ZIO effect { r.readLn() } } trait HasWrite { def write(output: String): Unit } def write(output: String): zio.ZIO[HasWrite, Throwable, Unit] = zio.ZIO fromFunctionM { r => zio.ZIO effect { r.write(output) } } val enProgram: zio.ZIO[HasReadLn with HasWrite, Throwable, Unit] = for { _ <- write("What's your name? ") name <- readLn _ <- write(s"Hello, ${name}!\n") } yield () val esProgram: zio.ZIO[HasReadLn with HasWrite, Throwable, Unit] = for { _ <- write("¿Cómo te llamas? ") name <- readLn _ <- write(s"¡Hola, ${name}!\n") } yield () val program: zio.ZIO[HasEnv with HasReadLn with HasWrite, Throwable, Unit] = for { lang <- readEnv("LANG") _ <- if (lang.startsWith("es")) { esProgram } else { enProgram } } yield () zio.Runtime.default.unsafeRun { program.provide { new HasEnv with HasReadLn with HasWrite { override val env: Map[String, String] = sys.env override def readLn(): String = scala.io.StdIn.readLine() override def write(output: String): Unit = print(output) } } } Demo This file is literate Scala, and can be run using Codedown: $ curl https://earldouglas.com/posts/effect-systems/zio.md | codedown scala > script.scala $ LANG=es sbt -Dsbt.main.class=sbt.ScriptMain script.scala ¿Cómo te llamas? James ¡Hola, James!
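The idea the ZIO program demonstrates, describing a computation in terms of an environment and supplying that environment only once at the end, is not specific to Scala. A minimal Python analogue (invented for illustration, not part of any library) wraps a function from an environment to a value; flat_map threads one environment through a whole composed program, playing the role that provide plays above:

```python
class Reader:
    # Wraps a computation that needs an environment to produce a value.
    def __init__(self, run):
        self.run = run  # run: env -> value

    def map(self, f):
        return Reader(lambda env: f(self.run(env)))

    def flat_map(self, f):
        # f returns another Reader; thread the same env through both steps.
        return Reader(lambda env: f(self.run(env)).run(env))

# Environment-reading effects, analogous to readEnv / readLn / write above.
read_env = lambda name: Reader(lambda env: env["vars"][name])
read_ln = Reader(lambda env: env["read_ln"]())
write = lambda out: Reader(lambda env: env["write"](out))

# A small program composed from effects; no environment is mentioned here.
program = write("What's your name? ").flat_map(
    lambda _: read_ln.flat_map(
        lambda name: write("Hello, " + name + "!\n")))

# The environment is supplied only at the edge, like `provide` in the ZIO demo.
lines = []
program.run({
    "vars": {"LANG": "en"},
    "read_ln": lambda: "James",
    "write": lines.append,
})
print("".join(lines))  # prints: What's your name? Hello, James!
```

Because the environment here is a plain dict of callables, testing is trivial: pass a fake read_ln and capture writes in a list, exactly as done above.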
SQL Select Statement Structured Query Language SQL Tutorial & Tips SELECT Syntax: SELECT [ALL | DISTINCT] select_list [INTO [new_table_name]] [FROM {table_name | view_name}[(optimizer_hints)] [[, {table_name2 | view_name2}[(optimizer_hints)] [..., {table_name16 | view_name16}[(optimizer_hints)]]] [WHERE clause] [GROUP BY clause] [HAVING clause] [ORDER BY clause] [COMPUTE clause] [FOR BROWSE] ALL Retrieves all rows in the results. ALL is the default. DISTINCT Includes only unique rows in the results. Null values are considered equal for the purposes of the DISTINCT keyword; only one NULL is selected no matter how many are encountered. select_list Specifies the columns to select. Can be one or more of the following: - Asterisk (*), representing all columns listed in the order in which they were specified in the CREATE TABLE statement for all tables in the FROM clause, in the order they appear. - A list of column names, specified in the order in which you want to see them. If the select_list contains multiple column names, separate the names with commas. - A column name and column heading that will replace the default column heading (the column name), in the following form: column_heading = column_name or column_name column_heading The column_heading must be in quotation marks if spaces are used. For example: SELECT 'Author Last Name' = au_lname FROM authors - An expression (a column name, constant, function, or any combination of column names, constants, and functions connected by an operator(s), a CASE expression, or a subquery). For details, see the Expressions topic. - The IDENTITYCOL keyword instead of the name of a column that has the IDENTITY property. For details, see the CREATE TABLE statement. - A local or global variable.
For details, see the Variables topic. - A local variable assignment, in the form: @variable = expression Note When the select_list includes a variable assignment(s), it cannot be combined with data-retrieval operations. INTO new_table_name Creates a new table based on the columns specified in the select_list and the rows chosen in the WHERE clause. To select into a permanent table, the select into/bulkcopy option must be on (by executing the sp_dboption system stored procedure). By default, the select into/bulkcopy option is off in newly created databases. The new table name (new_table_name) must follow the same rules as table_name (described later in this section) with these exceptions: - If select into/bulkcopy is on in the database where the table is to be created, a permanent table is created. The table name must be unique in the database and conform to the rules for Identifiers. - If select into/bulkcopy is not on in the database where the table is to be created, permanent tables cannot be created using SELECT INTO; only local or global temporary tables can be created. To create a temporary table, the table name must begin with a pound sign (#). For details on temporary tables, see the CREATE TABLE statement. SELECT INTO is a two-step operation. The first step creates the table. The user executing the statement must have CREATE TABLE permission in the destination database. The second step inserts the specified rows into the new table. If the second step fails for any reason (hardware failure, exceeding a system resource, and so on), the new table will exist but have no rows. You can use SELECT INTO to create an identical table definition (different table name) with no data by having a false condition in the WHERE clause.
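The SELECT INTO behavior just described, including the false-WHERE-condition trick for copying a table definition without data, is easy to try hands-on. The sketch below uses Python's built-in sqlite3 module; SQLite has no SELECT INTO, so its CREATE TABLE ... AS SELECT form stands in for it here, and the table and column names are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (au_id INTEGER, au_lname TEXT, state TEXT)")
conn.executemany("INSERT INTO authors VALUES (?, ?, ?)",
                 [(1, "Bennet", "CA"), (2, "Green", "CA"), (3, "Carson", "UT")])

# Transact-SQL: SELECT au_id, au_lname INTO ca_authors FROM authors WHERE state = 'CA'
# SQLite analogue: the new table gets its columns and rows from the query.
conn.execute(
    "CREATE TABLE ca_authors AS "
    "SELECT au_id, au_lname FROM authors WHERE state = 'CA'")

# A false WHERE condition copies the column definitions but no rows,
# matching the "identical table definition with no data" trick above.
conn.execute("CREATE TABLE empty_copy AS SELECT * FROM authors WHERE 0 = 1")

print(conn.execute("SELECT COUNT(*) FROM ca_authors").fetchone()[0])  # 2
print(conn.execute("SELECT COUNT(*) FROM empty_copy").fetchone()[0])  # 0
```

Note that, like the two-step SELECT INTO described above, the CREATE TABLE step succeeds or fails independently of how many rows the query produces.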
You cannot use SELECT INTO with the COMPUTE clause or inside a user-defined transaction. For details about user-defined transactions, see the Transactions topic. When selecting an existing identity column into a new table, the new column inherits the IDENTITY property unless one of the following conditions is true: - The SELECT statement contains a join, GROUP BY clause, or aggregate function. - Multiple SELECT statements are joined with UNION. - The identity column is listed more than once in the select_list. - The identity column is part of an expression. If any of these conditions is true, the column is created NOT NULL instead of inheriting the IDENTITY property. If none of the conditions is true, the new table will inherit the identity column. All rules, restrictions, and so on, for the identity columns apply to the new table. FROM Indicates the specific table(s) and view(s) that are used in the SELECT statement. FROM is required except when the select_list contains only constants, variables, and arithmetic expressions (no column names). The FROM clause allows a maximum of 16 tables and views. Tables in subqueries are counted as part of this total. table_name | view_name = [[database.]owner.]{table_name. | view_name.} Specifies the name(s) of the table(s) and view(s) used in the SELECT statement. If the list contains more than one table or view, separate the names with commas. If the table(s) or view(s) exist in another database(s), use a fully qualified table or view name (database_name.owner.object_name). Each table_name or view_name can be given an alias, either for convenience or to distinguish the different roles that a table or view plays in a self-join or subquery. Aliases (when defined) must be used for any ambiguous column references and must always match the alias reference (the full table name cannot be used if an alias has been defined).
To use an alias, specify the object name, then a space, then the alias name, like this:

SELECT au_lname, au_fname, title
FROM titles t, authors a, titleauthor ta
WHERE ta.title_id = t.title_id
AND ta.au_id = a.au_id
ORDER BY title, au_lname, au_fname

The order of the tables and views after the FROM keyword does not affect the results set returned.

WHERE clause = WHERE search_conditions
Specifies the restricting conditions for the rows returned in the results set. There is no limit to the number of search_conditions that can be included in an SQL statement. For more information, see the Search Conditions topic.

GROUP BY clause = GROUP BY [ALL] aggregate_free_expression [, aggregate_free_expression]...

GROUP BY
Specifies the groups into which the table will be partitioned and, if aggregate functions are included in the select_list, finds a summary value for each group. You can refer to these new summary columns in the HAVING clause. The text and image datatypes cannot be used in a GROUP BY clause. When a GROUP BY clause is used, each item in the select_list must produce a single value for each group. A table can be grouped by any combination of columns; however, you cannot group by a column heading; you must use a column name or an expression. In Transact-SQL, any expression is valid (although not a column heading). With standard SQL, you can group only by a column. You can use GROUP BY for a column or expression that does not appear in the select_list. Null values in the GROUP BY column are put into a single group.

The aggregate functions, which calculate summary values from the non-null values in a column, can be divided into two groups:

Scalar
Aggregate functions are applied to all the rows in a table (producing a single value per function). An aggregate function in the select_list with no GROUP BY clause applies to the whole table and is one example of a scalar.
Vector
Aggregate functions are applied to all rows that have the same value in a specified column or expression with the GROUP BY clause and, optionally, the HAVING clause (producing a value for each group per function). For details about aggregate functions, see the Functions topic.

ALL
Includes all groups in the results, even those that don't have any rows that meet the search_conditions.

aggregate_free_expression
Is an expression that includes no aggregate functions. Aggregate functions can be used in the select_list preceding the GROUP BY clause. For details about aggregate functions, see the Functions topic.

HAVING clause = HAVING search_conditions
Specifies a different type of restriction for aggregate functions in the select_list; the search_conditions restrict the rows returned by the query but do not affect the calculation(s) of the aggregate function(s). When a WHERE clause is used, the search_conditions restrict the rows that are included in the calculation of the aggregate function but do not restrict the rows returned by the query. The text and image datatypes cannot be used in a HAVING clause. There is no limit on the number of conditions that can be included in search_conditions. You can use a HAVING clause without a GROUP BY clause. When the HAVING clause is used with GROUP BY ALL, the HAVING clause negates the meaning of ALL.

ORDER BY clause =
ORDER BY {{table_name. | view_name.}column_name | select_list_number | expression} [ASC | DESC]
[...{{table_name16. | view_name16.}column_name | select_list_number | expression} [ASC | DESC]]

Sorts the results by columns. You can sort by as many as 16 columns. In Transact-SQL, the ORDER BY clause can include items that do not appear in the select_list. You can sort by a column name, a column heading (or alias), an expression, or a number representing the position of the item in the select_list (the select_list_number).
If you sort by select_list_number, the columns to which the ORDER BY clause refers must be included in the select_list, and the select_list can be a single asterisk (*). If you use COMPUTE BY, you must also specify an ORDER BY clause. Null values are sorted before all others, and text or image columns cannot be used in an ORDER BY clause. Subqueries and view definitions cannot include an ORDER BY clause, a COMPUTE clause, or the INTO keyword. However, through Transact-SQL extensions, you can sort by expressions and aggregates if you use their select_list_number in the ORDER BY clause.

COMPUTE clause = COMPUTE row_aggregate(column_name) [, row_aggregate(column_name)...] [BY column_name [, column_name]...]

COMPUTE
Used with row aggregate functions (SUM, AVG, MIN, MAX, and COUNT) to generate control-break summary values. The summary values appear as additional rows in the query results, allowing you to see detail rows and summary rows within one results set. You can calculate summary values for subgroups, and you can calculate more than one aggregate function for the same group. The COMPUTE clause cannot be used with INTO and cannot contain aliases for column names, although aliases can be used in the select_list. The COMPUTE keyword can be used without BY to generate grand totals, grand counts, and so on. The ORDER BY clause is optional only if you use the COMPUTE keyword without BY.

BY
Indicates that values for row aggregate functions are to be calculated for subgroups. Whenever the value of BY changes, row aggregate function values are generated. If you use BY, you must also use an ORDER BY clause. Listing more than one item after BY breaks a group into subgroups and applies a function at each level of grouping. The columns listed after the COMPUTE clause must be identical to, or a subset of, those listed after the ORDER BY clause; they must be in the same left-to-right order, start with the same expression, and not skip any expression.
For example, if the ORDER BY clause is:

ORDER BY a, b, c

The COMPUTE clause can be any (or all) of these:

COMPUTE BY a, b, c
COMPUTE BY a, b
COMPUTE BY a

FOR BROWSE
Allows you to perform updates while viewing data in client application programs using DB-Library. A table can be browsed in an application under the following conditions:
- The table includes a time-stamped column (defined with the timestamp datatype).
- The table has a unique index.
- The FOR BROWSE option is at the end of the SELECT statement(s) sent to SQL Server.

For details, see Microsoft SQL Server Programming DB-Library for C. Do not use the optimizer_hint HOLDLOCK in a SELECT statement that includes the FOR BROWSE option. The FOR BROWSE option cannot appear in SELECT statements joined by the UNION operator.

Remarks:
The length returned for text columns included in the select_list defaults to the smallest of the actual size of the text, the default TEXTSIZE session setting, and the hardcoded application limit. To change the length of returned text for the session, use the SET statement. By default, the limit on the length of text data returned with a SELECT statement is 4K. To retrieve data from remote SQL Servers, you can call remote stored procedures. For more information, see the CREATE PROCEDURE and EXECUTE statements.

Using the GROUP BY clause and the HAVING clause
The following list shows the requirements for processing a SELECT with the GROUP BY clause and the HAVING clause, and it shows how the rows returned in the results set are derived:
1. The WHERE clause excludes rows that do not meet its search_conditions.
2.
The GROUP BY clause collects the surviving rows into one group for each unique value in the GROUP BY clause. Omitting the GROUP BY clause creates a single group for the whole table.
3. The HAVING clause excludes rows that do not meet its search_conditions. The HAVING clause tests only rows, but the presence or absence of a GROUP BY clause can make the behavior of a HAVING clause appear contradictory. For example:
- When the query includes a GROUP BY clause, the HAVING clause excludes groups from the results.
- By default, the HAVING clause can refer to aggregates only when the query contains no GROUP BY clause.
- To allow queries that contain aggregates or a GROUP BY clause with items in the select_list that are not in the GROUP BY clause and are not aggregate functions, set trace flag 204. For details, see the Trace Flags topic.
4. Aggregate functions specified in the select_list calculate summary values for each surviving group.

For the GROUP BY clause, the HAVING clause, and aggregate functions to accomplish the goal of one row and one summary value per group, ANSI-standard SQL requires:
- Columns in a select_list must also be in the GROUP BY clause or be parameters of aggregate functions.
- Columns in a HAVING clause must have only one value.
- A query with a HAVING clause should have a GROUP BY clause. If it doesn't, all the rows not excluded by the WHERE clause are treated as a single group.

Transact-SQL extensions to standard SQL make displaying data more flexible by allowing references to columns and expressions that are not used for creating groups or summary calculations. For example:
- The GROUP BY clause can include expressions.
- GROUP BY ALL displays all groups, even those excluded from calculations by a WHERE clause.

Permission:
SELECT permission defaults to the owner of the table or view, who can grant it to other users using the GRANT statement.
If the INTO clause is used to create a permanent table, the user must have CREATE TABLE permission in the destination database.

Examples:

A. Simple SELECT: All Rows, All Columns
This example returns all rows (no WHERE clause) and all columns (*) from the publishers table in the pubs database.

SELECT *
FROM publishers

B. Simple SELECT: Subset of Columns, All Rows
This example returns all rows (no WHERE clause) and only a subset of the columns (pub_id, pub_name, city, state) from the publishers table in the pubs database.

SELECT pub_id, pub_name, city, state
FROM publishers

C. Simple SELECT: Subset of Rows, Subset of Columns
This example returns only the rows where the advance given is less than $10,000 and there are current year-to-date sales.

SELECT pub_id, total = SUM(ytd_sales)
FROM titles
WHERE advance < $10000
AND ytd_sales IS NOT NULL
GROUP BY pub_id

D. SELECT with GROUP BY, COMPUTE, and ORDER BY Clauses
This example returns only those rows with current year-to-date sales and then computes the average book cost and total advances in descending order by type. Four columns of data are returned, including a truncated title. Notice that all computed columns appear within the select_list.

SELECT title = CONVERT(char(20), title), type, price, advance
FROM titles
WHERE ytd_sales IS NOT NULL
ORDER BY type DESC
COMPUTE AVG(price), SUM(advance) BY type
COMPUTE SUM(price), SUM(advance)

title                  type              price   advance
---------------------- ----------------- ------- ----------
Fifty Years in Bucki   trad_cook         11.95   4,000.00
Onions, Leeks, and G   trad_cook         20.95   7,000.00
Sushi, Anyone?         trad_cook         14.99   8,000.00

avg ========= 15.96
sum ========= 19,000.00

title                  type              price   advance
---------------------- ----------------- ------- ----------
Computer Phobic AND    psychology        21.59   7,000.00
Emotional Security:    psychology         7.99   4,000.00
Is Anger the Enemy?    psychology        10.95   2,275.00
Life Without Fear      psychology         7.00   6,000.00
Prolonged Data Depri   psychology        19.99   2,000.00

avg ========= 13.50
sum ========= 21,275.00

title                  type              price   advance
---------------------- ----------------- ------- ----------
But Is It User Frien   popular_comp      22.95   7,000.00
Secrets of Silicon V   popular_comp      20.00   8,000.00

avg ========= 21.48
sum ========= 15,000.00

title                  type              price   advance
---------------------- ----------------- ------- ----------
Silicon Valley Gastr   mod_cook          19.99   0.00
The Gourmet Microwav   mod_cook           2.99   15,000.00

avg ========= 11.49
sum ========= 15,000.00

title                  type              price   advance
---------------------- ----------------- ------- ----------
Cooking with Compute   business          11.95   5,000.00
Straight Talk About    business          19.99   5,000.00
The Busy Executive's   business          19.99   5,000.00
You Can Combat Compu   business           2.99   10,125.00

avg ========= 13.73
sum ========= 25,125.00

sum ========= 236.26
sum ========= 88,400.00

(22 row(s) affected)

E. All Rows with Computed Sums
This example shows only three columns in the select_list and gives totals based on all prices and all advances at the end of the results.

SELECT type, price, advance
FROM titles
COMPUTE SUM(price), SUM(advance)

type         price      advance
------------ ---------- ----------
business     19.99      5,000.00
business     11.95      5,000.00
business     2.99       10,125.00
business     19.99      5,000.00
mod_cook     19.99      0.00
mod_cook     2.99       15,000.00
UNDECIDED    (null)     (null)
popular_comp 22.95      7,000.00
popular_comp 20.00      8,000.00
popular_comp (null)     (null)
psychology   21.59      7,000.00
psychology   10.95      2,275.00
psychology   7.00       6,000.00
psychology   19.99      2,000.00
psychology   7.99       4,000.00
trad_cook    20.95      7,000.00
trad_cook    11.95      4,000.00
trad_cook    14.99      8,000.00

sum ========= 236.26
sum ========= 95,400.00

(19 row(s) affected)

F. Create a Temporary Table with SELECT INTO
This example causes a temporary table to be created in tempdb.
To use this table, always refer to it with the exact name shown, including the pound sign (#).

SELECT *
INTO #coffeetabletitles
FROM titles
WHERE price < $20

SELECT name
FROM sysobjects
WHERE name LIKE '#c%'

name
------------------------------

(0 row(s) affected)

SELECT name
FROM tempdb..sysobjects
WHERE name LIKE '#c%'
go

name
------------------------------
#coffeetabletitles__0000EC153E

(1 row(s) affected)

G. Create a Permanent Table with SELECT INTO
This example shows the steps needed to create a permanent table.

USE master
go
sp_dboption 'pubs', 'select into', TRUE
go

CHECKPOINTing database that was changed.

USE pubs
go
SELECT *
INTO newtitles
FROM titles
WHERE price > $25 OR price < $20

(12 row(s) affected)

SELECT name
FROM sysobjects
WHERE name LIKE 'new%'
go

name
------------------------------
newtitles

(1 row(s) affected)

H. Optimizer Hints: TABLOCK and HOLDLOCK
The following partial transaction shows how to place an explicit shared table lock on t1 without the overhead of reading any records from it.

BEGIN TRAN
SELECT count(*)
FROM t1 (TABLOCK HOLDLOCK)

I. Optimizer Hints: Using the Name of an Index
This example shows how to force the optimizer to use a nonclustered index to retrieve rows from a table.

SELECT au_lname, au_fname, phone
FROM authors (INDEX = aunmind)
WHERE au_lname = 'Smith'

J. Optimizer Hints: Forcing a Table Scan
This example shows that using an index of 0 will force a table scan.

SELECT emp_id, fname, lname, hire_date
FROM employee (index = 0)
WHERE hire_date > '10/1/1994'
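The grouped examples above follow the evaluation order described earlier: WHERE filters rows, GROUP BY collects groups, aggregates are computed per group, and HAVING filters whole groups. As a language-neutral illustration of that order, here is a small Python sketch; the helper function and the sample rows below are invented for the example, not taken from the pubs database:

```python
from collections import defaultdict

def select_group_having(rows, where, group_key, having, aggregate):
    """Model the logical evaluation order of a grouped SELECT:
    1. WHERE filters individual rows.
    2. GROUP BY collects surviving rows into groups.
    3. The aggregate is computed per group.
    4. HAVING filters whole groups by their aggregate value."""
    groups = defaultdict(list)
    for row in rows:
        if where(row):                          # step 1: WHERE
            groups[group_key(row)].append(row)  # step 2: GROUP BY
    result = {}
    for key, members in groups.items():
        value = aggregate(members)              # step 3: aggregate per group
        if having(value):                       # step 4: HAVING
            result[key] = value
    return result

titles = [
    {"type": "business", "ytd_sales": 4095},
    {"type": "business", "ytd_sales": 3876},
    {"type": "mod_cook", "ytd_sales": 2032},
    {"type": "mod_cook", "ytd_sales": None},    # excluded by WHERE
]

print(select_group_having(
    titles,
    where=lambda r: r["ytd_sales"] is not None,
    group_key=lambda r: r["type"],
    having=lambda total: total > 3000,
    aggregate=lambda rows: sum(r["ytd_sales"] for r in rows),
))
# {'business': 7971}
```

Note how mod_cook disappears for two different reasons: one of its rows is removed by the WHERE step before grouping, and the remaining group total then fails the HAVING test.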
Draw a map. A sliding map with tiles.

In this article, we want to show how, using the Aspose.GIS library and public data, you can build a sliding map that is generated in real time. Thanks to a new library feature, we can now query GIS data from a database via an SQL query. Here is what we should get as a result:

[Result]

Repository source code here.

Data preparation.

First of all, we will need geospatial information that we can load into the database. One of the popular sources of such information is OpenStreetMap, so let's use it. The most convenient way, in my opinion, is to extract data in pbf format from the public resource https://download.geofabrik.de/ . For example, let's download Hungary.

At the next stage, we need a working instance of PostGIS. Of course, you can use a locally installed version of PostgreSQL, but I find it very convenient to use Docker containers. Let's install PostGIS using a docker compose file:

services:
  postgis:
    image: postgis/postgis
    environment:
      - POSTGRES_DB=gis
      - POSTGRES_USER=gis
      - POSTGRES_PASSWORD=password
    ports:
      - 5432:5432
    volumes:
      - d:\local_folder:/usr/share/gisdata

The volume d:\local_folder:/usr/share/gisdata is needed to load GIS data from the local machine. Next, let's run our container:

docker compose up

Connect to the database instance using pgAdmin and create the Hungary database there, or use the SQL command:

CREATE DATABASE Hungary;

Add the necessary extensions to this database: postgis and hstore. hstore is an extension that provides a key-value data type. OpenStreetMap widely uses this type to describe attributes that do not fall into the category of main ones; no separate fields are created for them, and they are stored in the tags field instead.
There is also an SQL version of these commands:

CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION IF NOT EXISTS postgis;

Now let's connect to the container, in my case it is local_folder-postgis-1:

docker exec -it local_folder-postgis-1 sh

And install the program that will import data from the pbf file into the database:

apt-get update && apt-get install -y osm2pgsql

Make sure that the hungary-latest.osm.pbf file is located in your local_folder folder and then run the import command:

osm2pgsql --create --database=Hungary --user=gis --password --host=localhost --port=5432 --hstore /usr/share/gisdata/hungary-latest.osm.pbf

In the case of Hungary, this command took me one and a half minutes to complete. The --create option means simple creation of a new database. By the way, there is also an --append mode, which allows updating the data if it has changed:

osm2pgsql --append --slim OSMFILE

The --hstore option tells the application to additionally create a tags field of the hstore type to store extra information about features and geometries.

Back-end

So, our data is ready to use. The next step on the way to creating a map is the back-end. Its goal is to generate special tiles, usually sized 256*256, from which, like a mosaic, the map will be assembled in the browser. Each tile is uniquely identified by a combination of parameters: Z is the degree of zooming in/out of the map, X is the column in the tile grid, and Y is the row. You can read more about the nature of tiles here. Our back-end will accordingly be on ASP.NET Core, so let's get started with creating the project.

Let's create a project based on the pre-installed ASP.NET Core MVC template in Visual Studio. Next, install the NuGet package Aspose.GIS into the project that will generate the tiles:

dotnet add package Aspose.GIS --version 24.6.0

Clean the project of unnecessary files.
So that the structure looks approximately as pictured below:

[Solution explorer]

Then delete the contents of the wwwroot/lib folder, as we will be installing our dependencies through libman. Below is the structure of the libman.json file:

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "leaflet@<version>",
      "destination": "wwwroot/lib/leaflet/"
    },
    {
      "library": "twitter-bootstrap@<version>",
      "destination": "wwwroot/lib/bootstrap/",
      "files": [
        "css/bootstrap-reboot.min.css"
      ]
    }
  ]
}

Add the client dependencies in the _Layout.cshtml file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - Aspose.GIS.TilesTest</title>
    <link href="~/lib/bootstrap/css/bootstrap-reboot.min.css" rel="stylesheet" />
    @await RenderSectionAsync("Styles", required: false)
</head>
<body>
    @RenderBody()
    @await RenderSectionAsync("Scripts", required: false)
</body>
</html>

And also edit Index.cshtml:

@{
    ViewData["Title"] = "Home Page";
}

<div id="map"></div>

@section Styles {
    <link href="~/lib/leaflet/leaflet.min.css" rel="stylesheet" />
    <link href="~/css/map.css" rel="stylesheet" asp-append-version="true" />
}

@section Scripts {
    <script src="~/lib/leaflet/leaflet.min.js"></script>
    <script src="~/js/map.js" asp-append-version="true"></script>
}

In this case, bootstrap-reboot.min.css resets the default style settings, and leaflet.min.js is responsible for rendering the map, i.e., assembling the pieces from the tiles into a map.
Let's set the height of the map block to the full height of the visible area in the map.css file:

#map {
    min-height: 100vh;
}

The content of the map.js file is also quite simple, but a little more interesting:

var map = L.map('map').setView([47.59995, 19.36623], 13);

const tiles = L.tileLayer('/tiles/{z}/{x}/{y}.png', {
    maxZoom: 19,
    minZoom: 11
}).addTo(map);

Here we use the API of the leaflet library: we specify the id of the map block, 'map'; then in the setView method we set the coordinates of the place from which the initial map loading will start, as well as the scale, for example, 13. Note the tileLayer method: it accepts a pattern string for the tile request to the server. This address can be either absolute, for accessing third-party tile servers, or relative, as in our case. The route '/tiles/{z}/{x}/{y}.png' has not yet been implemented by us, and this is the most important part of our narrative that we have yet to implement.

To implement the request handler for generating tiles, let's first define a separate route in the Program.cs file:

app.MapControllerRoute(
    name: "tiles",
    pattern: "tiles/{z}/{x}/{y}.png",
    defaults: new { controller = "Tiles", action = "Index" });

app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");

Here, two routes are defined: the first one for the tiles, and the second, standard one. In the case of MapControllerRoute, order matters, so to avoid unexpected behavior it is worth placing the route for the tiles before the standard route. Next, let's move on to the handler itself. Create the TilesController.cs controller. It will have a single action of the following type:

public async Task<ActionResult> Index(int z, int x, int y)

Now we have all the necessary data to calculate the bounding box covered by the tile in the corresponding coordinate system. In the current implementation, our data in the database is stored in the Web Mercator coordinate system.
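Before diving into the server-side math, it may help to see how a leaflet-style client decides which {z}/{x}/{y} tiles to request in the first place. This is the standard slippy-map formula, shown here as an illustrative Python sketch rather than part of the project code:

```python
import math

def latlon_to_tile(lat, lon, z):
    """Return the (x, y) indices of the tile at zoom z that covers the
    given WGS 84 coordinate; x counts columns from the west edge of the
    map, y counts rows from the north edge."""
    n = 2 ** z                                   # tiles per side at zoom z
    x = int((lon + 180.0) / 360.0 * n)           # longitude -> column
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# map.js above starts at [47.59995, 19.36623] with zoom 13, so the
# browser's first requests will be for tiles around this index:
print(latlon_to_tile(47.59995, 19.36623, 13))
```

Every URL the browser requests through the '/tiles/{z}/{x}/{y}.png' template is one of these index triples.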
OpenStreetMap by default provides data in this coordinate system. Web Mercator is the projection (a form of coordinate system) most often used for mapping in various popular services because it covers almost the entire earth and distances are measured in meters, which simplifies calculations, unlike, for example, WGS 84, where distances are measured in angles, making calculations more complex. So now let's define the 'half of the world' constant, which is needed to calculate the tile coordinates; recall that distances are measured in meters:

private const double _halfOfWorld = 20037508.34;

The calculation of coordinates is as follows:

double worldSize = _halfOfWorld * 2;
double tileSize = worldSize / Math.Pow(2, z);

double min_x = x * tileSize - _halfOfWorld;
double max_x = (x + 1) * tileSize - _halfOfWorld;
double min_y = _halfOfWorld - (y + 1) * tileSize;
double max_y = _halfOfWorld - y * tileSize;

min_x = Math.Round(min_x, 10);
max_x = Math.Round(max_x, 10);
min_y = Math.Round(min_y, 10);
max_y = Math.Round(max_y, 10);

double ext_min_x = min_x - (max_x - min_x) * 0.05;
double ext_max_x = max_x + (max_x - min_x) * 0.05;
double ext_min_y = min_y - (max_y - min_y) * 0.05;
double ext_max_y = max_y + (max_y - min_y) * 0.05;

In the Web Mercator projection, the "world" is a square, so the X and Y extents have the same length. Note that min_x, max_x, min_y, max_y are the real bounds of the tile, but there is also an extended version with the ext_ prefix, whose sides are extended by 5%. This is done intentionally, and it is this extended bounding box that will actually be substituted into the query. The trick is that we extract an area slightly larger than necessary, but during rendering we specify exactly the area we need. This is necessary so that, when drawing a tile, the rendering engine does not draw the boundaries of clipped shapes; otherwise visible seams appear at the tile edges.
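As a sanity check, the same tile-to-bounding-box arithmetic can be reproduced in a few lines. This Python sketch mirrors the C# above (the 5% margin corresponds to the ext_ values used in the query); it is illustrative only:

```python
HALF_OF_WORLD = 20037508.34  # half the Web Mercator world width, in meters

def tile_bounds(z, x, y, margin=0.05):
    """Return (min_x, min_y, max_x, max_y) of tile (z, x, y) in EPSG:3857,
    plus the same box extended by `margin` on each side, as used when
    querying PostGIS so clipped geometry edges fall outside the tile."""
    tile_size = (HALF_OF_WORLD * 2) / 2 ** z
    min_x = x * tile_size - HALF_OF_WORLD
    max_x = (x + 1) * tile_size - HALF_OF_WORLD
    min_y = HALF_OF_WORLD - (y + 1) * tile_size
    max_y = HALF_OF_WORLD - y * tile_size
    dx = (max_x - min_x) * margin
    dy = (max_y - min_y) * margin
    return ((min_x, min_y, max_x, max_y),
            (min_x - dx, min_y - dy, max_x + dx, max_y + dy))

# At zoom 1 the world splits into a 2x2 grid; tile (1, 0, 0) is the
# north-west quadrant:
box, ext = tile_bounds(1, 0, 0)
print(box)  # (-20037508.34, 0.0, 0.0, 20037508.34)
```

The extended box is strictly larger than the real one on every side, which is exactly why the rendered tile never shows clipping artifacts along its edges.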
Next, let's consider what the query looks like:

var cult = CultureInfo.InvariantCulture; // for dot decimal separator

string query = $@"WITH envelope (box) AS (
    VALUES(ST_MakeEnvelope({ext_min_x.ToString(cult)}, {ext_min_y.ToString(cult)}, {ext_max_x.ToString(cult)}, {ext_max_y.ToString(cult)}, 3857)))
SELECT osm_id, ""addr:housename"", ""addr:housenumber"", 'polygon' as ""source"", building, admin_level, place, landuse, water, name,
       ST_AsEWKB(ST_ClipByBox2D(way, envelope.box)) as way
FROM public.planet_osm_polygon CROSS JOIN envelope
WHERE ST_Intersects(way, envelope.box) AND ({z} < 15 AND ST_Area(way) > 5000 OR {z} >= 15)
UNION ALL
SELECT osm_id, ""addr:housename"", ""addr:housenumber"", 'roads' as ""source"", building, admin_level, place, landuse, water, name,
       ST_AsEWKB(ST_ClipByBox2D(way, envelope.box)) as way
FROM public.planet_osm_roads CROSS JOIN envelope
WHERE ST_Intersects(way, envelope.box)
UNION ALL
SELECT osm_id, ""addr:housename"", ""addr:housenumber"", 'point' as ""source"", building, admin_level, place, landuse, water, name,
       ST_AsEWKB(ST_ClipByBox2D(way, envelope.box)) as way
FROM public.planet_osm_point CROSS JOIN envelope
WHERE ST_Intersects(way, envelope.box) AND {z} >= 15
UNION ALL
SELECT osm_id, ""addr:housename"", ""addr:housenumber"", 'line' as ""source"", building, admin_level, place, landuse, water, name,
       ST_AsEWKB(ST_ClipByBox2D(way, envelope.box)) as way
FROM public.planet_osm_line CROSS JOIN envelope
WHERE ST_Intersects(way, envelope.box)";

What happens here: during data import, the osm2pgsql utility creates a number of tables (planet_osm_polygon, planet_osm_roads, planet_osm_point, planet_osm_line) that correspond to different geometries and attributes, all related to the one loaded area. The way field is the geometry; the remaining attributes will be useful when rendering. We pre-calculate the bounding box for a tile through the ST_MakeEnvelope function, store the result in the envelope variable, and use it in all the unioned queries.
We specify that the passed coordinates correspond to the 3857 coordinate system (Web Mercator). In the WHERE expression, we specify that we want to retrieve all geometries in the table that intersect with the envelope area, using the ST_Intersects function.

[!WARNING]
A very important point: at the current stage of database integration, the library can read geometries in WKB format, but it is better to use EWKB, i.e., Extended Well-Known Binary; then there is no need to pass the spatial coordinate system information as a separate field, since it is already embedded in the geometry data. For these purposes, we use the ST_AsEWKB function.

The ST_ClipByBox2D function is used to clip geometries that go beyond the boundaries of the bounding box. Now comes the key moment: how to execute the query and get a layer with a set of features that will be rendered on the tile. It's quite simple:

VectorLayer inputLayer;

using (var conn = new NpgsqlConnection("Host=127.0.0.1;Username=gis;Password=password;Database=Hungary"))
{
    var builder = new DatabaseDataSourceBuilder();
    builder
        .FromQuery(query)
        .GeometryField("way")
        .AddAttribute("osm_id", AttributeDataType.Long)
        .AddAttribute("addr:housenumber", AttributeDataType.String)
        .AddAttribute("building", AttributeDataType.String)
        .AddAttribute("name", AttributeDataType.String)
        .AddAttribute("source", AttributeDataType.String)
        .AddAttribute("admin_level", AttributeDataType.Integer)
        .AddAttribute("place", AttributeDataType.String)
        .AddAttribute("landuse", AttributeDataType.String)
        .AddAttribute("water", AttributeDataType.String);

    conn.Open();
    inputLayer = await builder.Build().ReadAsync(conn);
}

We have received one layer that contains all the geometries intersecting with our tile. Now we can color the map a little: cities, water bodies, forests. To do this, we need to break our single layer into separate independent layers that match specific criteria, such as forests or rivers.
Here's an example:

var cities = inputLayer.Where(x => x.GetValue<string>("place") == "city");
citiesLayer = CopyToNewLayer(cities, inputLayer);

var forest = inputLayer.Where(x => x.GetValue<string>("landuse") == "forest");
forestLayer = CopyToNewLayer(forest, inputLayer);

var water = inputLayer.Where(x => !x.IsValueNull("water"));
waterLayer = CopyToNewLayer(water, inputLayer);

Below you will see how we will use these layers. The CopyToNewLayer function is auxiliary; it helps create new layers. It is not of great importance for this article; you can look at its implementation in the repository referenced at the beginning of the article. Now we just need to render all this into a PNG tile and return it to the client. This is a sample code:

using var map = new Map(256, 256);
var pngStream = new MemoryStream();

var labeling = new RuleBasedLabeling
{
    { x => x.GetValue<string>("source") == "polygon", new SimpleLabeling("addr:housenumber") },
    LabelingRule.CreateElseRule(new SimpleLabeling("name"))
};

map.SpatialReferenceSystem = SpatialReferenceSystem.WebMercator;
map.Extent = new Extent(min_x, min_y, max_x, max_y, SpatialReferenceSystem.WebMercator);

map.Add(citiesLayer, new SimpleFill { FillColor = Color.PeachPuff }, labeling);
map.Add(forestLayer, new SimpleFill { FillColor = Color.PaleGreen }, labeling);
map.Add(waterLayer, new SimpleFill { FillColor = Color.SkyBlue }, labeling);
map.Add(buildingsLayer, new SimpleFill { FillColor = Color.SandyBrown }, labeling);

map.Render(AbstractPath.FromStream(pngStream), Renderers.Png);
pngStream.Seek(0, SeekOrigin.Begin);

return File(pngStream, "image/png");

Here, a Map object is created with the standard tile size of 256x256. Essentially, a Map is a canvas for rendering the tile. Next, we initialize a special object for labeling, to which rules for rendering text on the drawn geometric shapes are passed, for example, house numbers, street names, etc.
In this case, if the feature is not a polygon and/or the addr:housenumber attribute is empty, the label text is taken from the name attribute. An important point is that the Map object needs the coordinate system in which it will render to be specified explicitly:

map.SpatialReferenceSystem = SpatialReferenceSystem.WebMercator;

Next, we need to set the real area of the tile that should be rendered, not the expanded area that we requested from the database. To do this, we explicitly set the rendering area through the Extent property:

map.Extent = new Extent(min_x, min_y, max_x, max_y, SpatialReferenceSystem.WebMercator);

Then we sequentially add layers to the tile; the first one is drawn below all the others, and the last one on top. At the same time, we pass the SimpleFill object, which contains layer rendering settings, and the SimpleLabeling object. Finally, we just need to render the tile as a byte stream in memory, reset the stream to the beginning, and pass the stream to the ASP.NET Core platform for further transfer to the client:

map.Render(AbstractPath.FromStream(pngStream), Renderers.Png);
pngStream.Seek(0, SeekOrigin.Begin);

return File(pngStream, "image/png");

Hopefully, we have managed to pass on the basic ideas and techniques of map building to you. We wish you good luck in your experiments.
What is cloud computing

Well, if we think of the term "cloud", we picture a white/grey/black mass moving through our atmosphere, taking in water from different places along its path and pouring it elsewhere. The term "cloud" in the technical world is only slightly different: just like the regular cloud, our cloud is used by many MNCs for storing data, processing it, and accessing it from any place in the world through an online platform known as the internet.

Technical definition: "Cloud computing is a paradigm shift that provides computing over the internet." We can also define it as using someone else's servers to store data and access it at will, without much active work for the user.

Who invented cloud computing?
Ans) The term cloud computing was first noted in an internal document of the company Compaq. Later, it was popularized by amazon.com when they launched Elastic Compute Cloud.

When was cloud computing first invented?
Ans) It was invented in 1996, in an internal document of Compaq.

What is cloud computing?
Ans) Using someone else's servers to store data and access it at will, without much active work for the user.

Where does cloud computing operate from?
Ans) It operates through the medium of the internet.

Where in the world can we access it from?
Ans) Cloud computing is widely spread, network-based, and used for storage. It can be accessed from anywhere in the world using a computer with an active internet connection and, of course, credentials.

Why do we need to use cloud computing?
Ans)
− Cloud computing generally refers to data centers that are available, over the internet, for use by anyone who wants them.
− Clouds can be used by one organization solely or by multiple organizations.
− Cloud computing relies on sharing resources to reduce hardware costs every time a company expands and to achieve a logical outcome.
− It helps the organization get its applications up and running faster than usual when they are operated through a cloud. − Cloud computing requires less maintenance. TYPES OF CLOUD COMPUTING There are three types of cloud computing: − Software as a Service (SaaS): SaaS is the most common form of cloud computing. It delivers complete, user-ready applications over the internet. These typically don't need to be downloaded and installed on each individual user's computer, saving technical staff lots of time. Maintenance and troubleshooting are handled entirely by the vendor. Software programs perform specific functions and are typically intuitive to use. Examples include Salesforce's suite of customer relationship management tools, Microsoft Office 365 products, Google Apps, QuickBooks, Dropbox, Zendesk, and Slack. These are fully functional productivity tools that can be customized to the user's needs without coding or programming. SaaS provides the greatest amount of customer support. − Infrastructure as a Service (IaaS): IaaS is the most open-ended form of cloud service, for organizations that want to do a lot of the customization themselves. The greatest benefit of IaaS is extra capacity, which can be accessed on demand for long- or short-term needs. IaaS makes it possible for tech-savvy businesses to rent enterprise-grade IT resources and infrastructure to keep pace with growth, without requiring large capital investments. With IaaS, a third party hosts elements of infrastructure, such as hardware, servers, firewalls, and storage capacity. However, users typically bring their own operating systems and middleware. A business that is developing a new product might choose to use an IaaS provider to build a testing environment before deploying the program in-house. Clients typically access cloud servers through a dashboard or an API. IaaS is fully self-service.
− Platform as a Service (PaaS): PaaS provides the building blocks for software creation, including development tools, code libraries, servers, programming environments, and preconfigured app components. With PaaS, the vendor takes care of back-end concerns such as security, infrastructure, and data integration, so users can focus on building, hosting, and testing apps faster and at lower cost. With a platform like Salesforce, resources are standardized and consolidated, so you don't have to reinvent the wheel each time you build a new app. Multiple developers can work on the same project simultaneously. In many cases, people without coding skills can create problem-solving business applications with drag-and-drop page layouts, point-and-click field creation, and customizable reporting dashboards. PUBLIC, PRIVATE, AND HYBRID CLOUDS: There are many types of platform services. Each PaaS option is either public, private, or a hybrid mix of the two. Public PaaS is hosted in the cloud, and its infrastructure is managed by the provider. Private PaaS, on the other hand, is housed in on-site servers or private networks and is maintained by the user. Hybrid PaaS uses elements from both public and private, and is capable of executing applications from multiple cloud infrastructures. PaaS is further classified based on whether it is open or closed source, whether it is mobile compatible (mPaaS), and which business types it caters to. Businesses are taking advantage of new PaaS capabilities to further outsource tasks that would otherwise have relied on local solutions. This is all made possible through advances in cloud computing. EXAMPLES OF CLOUD COMPUTING: Most consumers and businesses are already using the cloud, whether they realize it or not. If you stream music, shop online, have social media accounts, or use mobile banking, you are using the cloud.
As digital technologies grow ever more powerful and available, apps and cloud-based platforms have become nearly universally widespread. Here are a number of the ways the cloud has transformed modern life. Entertainment — Movies and music that used to take up space in cabinets or on shelves are now accessed remotely through cloud-based streaming services like Netflix or Spotify. Documents — Files that once had to be stored locally and backed up frequently are now commonly kept in the cloud (think of Google Docs and Dropbox), accessible from anywhere and saved in real time. Mobile banking — Banks like Chase, Wells Fargo, and Bank of America all rely on the cloud. Customers can transfer money to co-workers in seconds from their mobile phones, or take photos of checks and deposit them virtually, without ever setting foot in a bank. Transactions are searchable, and statements are stored in the bank's database, accessible on demand, eliminating the need for paper files.
Customer relationship management — Customer relationship management (CRM) software enables businesses to automate communications with customers, manage leads, and fine-tune marketing efforts across departments. An intelligent system can send follow-ups such as cart-abandonment emails. Every step of the customer journey, across all of a business's touchpoints, can be linked, coordinated, and analyzed. Human resources and payroll — When the cloud is used for human resources functions, businesses see a rise in productivity and cost advantages over businesses using older technology. Recruiting, onboarding, and employee data management are all more efficient. HR teams can easily view resumes, sort candidates, monitor performance, and access records with single-point tracking. Accounting — Cloud-based accounting applications do most of the same things as desktop accounting software, but they run on remote servers and are accessed via the internet. The benefits include integration across departments, so all stakeholders have access to instantly updated figures and projections. Cloud accounting applications streamline data entry, eliminate redundancy, and reduce the chance of errors. Inventory management and logistics — Ordering, stocking, selling, and delivering merchandise is far more efficient when inventory is managed with cloud-based applications. Products with barcodes can be scanned at every step of the way. Vendors, managers, and logistics coordinators can see inventory levels and know where products are in real time. Reordering can be automated.
How to Build a Monitoring Application With the Google Cloud Vision API

So go ahead and set the environment variable value as shown below. On a Mac / Linux, you can do the following:

$ export GOOGLE_APPLICATION_CREDENTIALS=<Path To Your JSON Key File>

On Windows, you can do the following:

SET GOOGLE_APPLICATION_CREDENTIALS=<Path To Your JSON Key File>

5. Python Code

The code shown below has been adapted from the official Label tutorial that is present in the Cloud Vision API documentation. The modifications made are to retrieve 5 label features instead of just one and to print out all the labels instead of just one.

label.py

import argparse
import base64
import httplib2
from apiclient.discovery import build
from oauth2client.client import GoogleCredentials

def main(photo_file):
    '''Run a label request on a single image'''
    API_DISCOVERY_FILE = 'https://vision.googleapis.com/$discovery/rest?version=v1'
    http = httplib2.Http()
    credentials = GoogleCredentials.get_application_default().create_scoped(
        ['https://www.googleapis.com/auth/cloud-platform'])
    credentials.authorize(http)
    service = build('vision', 'v1', http, discoveryServiceUrl=API_DISCOVERY_FILE)
    with open(photo_file, 'rb') as image:
        image_content = base64.b64encode(image.read())
        service_request = service.images().annotate(
            body={
                'requests': [{
                    'image': {
                        'content': image_content
                    },
                    'features': [{
                        'type': 'LABEL_DETECTION',
                        'maxResults': 5,
                    }]
                }]
            })
        response = service_request.execute()
        for results in response['responses']:
            if 'labelAnnotations' in results:
                for annotations in results['labelAnnotations']:
                    print('Found label %s, score = %s' % (annotations['description'], annotations['score']))
        return 0

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('image_file', help="The image you'd like to label.")
    args = parser.parse_args()
    main(args.image_file)

The Python code is straightforward, as explained below:
1. The image file to analyze is provided as a command-line argument to the program.
2. The first step is to use the Google Application Default Credentials to authenticate the program. Remember that we set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the JSON key file before we ran this program. Not doing that will result in a Not Authorized error.
3. The image file bytes are then read and base64 encoded.
4. We then make a request to the Cloud Vision API via the images.annotate() method, where we pass in the request object that contains the image bytes and the feature detection that we are interested in. In our case, it is just one, i.e. LABEL_DETECTION, and we ask for 5 results, which will be given to us in decreasing order of probability.
5. Finally, we print out the labels and their scores that were returned by the Cloud Vision API.

Sample Run

To run the above code, ensure that you have the Python development environment set up along with some libraries that are required. The Prerequisites section of the Label tutorial has the details. You can run the label analysis on any image file via label.py <filename>. Shown below are two images of the same spot. One of the images does not have a person in it and the other one does. The LABEL_DETECTION output for each of the images is shown too:

Found label waterway, score = 0.81399214
Found label road, score = 0.79218566
Found label wall, score = 0.74516481
Found label flooring, score = 0.58240849
Found label sidewalk, score = 0.51532209

Found label man, score = 0.8414287
Found label portrait photography, score = 0.81915128
Found label male, score = 0.79989046
Found label child, score = 0.76658708
Found label people, score = 0.73413825

The results are not very accurate for some of the labels that it found, but you can see where things stand today and the possibilities that this opens up.
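As an aside, once you have the response dictionary back, post-processing it is plain Python. Here is a small self-contained sketch that keeps only the labels above a confidence threshold; the `response` value below is hard-coded from the sample output above rather than fetched from the API:

```python
# Hard-coded response in the shape returned by images.annotate().execute()
response = {
    "responses": [{
        "labelAnnotations": [
            {"description": "waterway", "score": 0.81399214},
            {"description": "road", "score": 0.79218566},
            {"description": "sidewalk", "score": 0.51532209},
        ]
    }]
}

def labels_above(response, threshold):
    """Collect (description, score) pairs whose score meets the threshold."""
    out = []
    for result in response.get("responses", []):
        for annotation in result.get("labelAnnotations", []):
            if annotation["score"] >= threshold:
                out.append((annotation["description"], annotation["score"]))
    return out

print(labels_above(response, 0.6))
# [('waterway', 0.81399214), ('road', 0.79218566)]
```

A filter like this is handy for a monitoring application, where you typically only want to act on high-confidence labels.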
Sample Projects We use the Python API above to invoke the Google Cloud Vision API, but client libraries and samples exist in multiple other languages. Developers have also started writing wrappers around the core REST API in multiple languages. One such wrapper is the pigeon API, a wrapper around the Cloud Vision API in Go. Visit the GitHub page for Cloud Vision API samples, which at the time of writing this article includes sample applications in languages like Python, Java, Go, Node.js, and PHP, as well as for the mobile platforms Android and iOS. Be sure to read the next Machine Learning article: An Analysis of Brexit With the MonkeyLearn Machine Learning API
FormArray : Cannot find control with unspecified name attribute in angular6

I created a form using the reactive forms approach. When I bind the fields of the form array in the template, it gives me this error:

Cannot find control with unspecified name attribute

addProductFG: FormGroup;
cats: Category[];
subCats: Category[];
PD: Productdetail[];
selectedCat: number;
valueIngrident = new FormArray([]);
public loading = false;

constructor(private fb: FormBuilder, private productService: ProductinfoService, private catService: CategoryService) { }

ngOnInit() {
    this.loading = true;
    this.InitialFrom();
    this.GetMainCat();
}

public CreateValueFiled(PD: Productdetail[]) {
    PD.map(element => {
        this.valueIngrident.push(
            new FormGroup({
                infoId: new FormControl(element.id),
                value: new FormControl('')
            })
        )
    });
}

public GetMainCat() {
    this.catService.GetMainCat().subscribe(
        res => {
            this.cats = res;
            this.loading = false;
        }
    )
}

get ValueFormControl() {
    return this.addProductFG.get('values') as FormArray;
}

public InitialFrom(): FormGroup {
    this.addProductFG = this.fb.group({
        productTitle: ['', Validators.compose([Validators.required])],
        productName: ['', Validators.compose([Validators.required])],
        color: ['', Validators.compose([Validators.required])],
        productImageName: ['', Validators.compose([Validators.required])],
        price: ['', Validators.compose([Validators.required])],
        gurantyMonth: ['', Validators.compose([Validators.required])],
        gurantyCompanyName: ['', Validators.compose([Validators.required])],
        values: this.valueIngrident
    })
    return this.addProductFG;
}

public ChangeSubCat(id: number) {
    this.loading = true;
    this.catService.GetSubCatByCatId(id).subscribe(
        res => {
            this.subCats = res;
            this.loading = false;
        }
    )
}

public ChangeFormByType(id: number) {
    this.loading = true;
    this.productService.GetPCIBySubId(id).subscribe(
        res => {
            this.PD = res,
            this.CreateValueFiled(this.PD),
            this.loading = false;
        }
    )
}

and in HTML:

<div formArray="values">
    <div *ngFor="let valueCtrl of ValueFormControl.controls; let i=index" [formGroupName]="i">
        <div class="form-inline lbin">
            <label>g </label>
            <input formControlName="value">
        </div>
    </div>
</div>

And this is my sample code in a StackBlitz Demo. What's the problem? How can I solve this problem?

Angular 8 Forms - How to Validate Forms in Angular using Form Builders

This Angular 8 forms tutorial gives an overview of the form builder in Angular 8 and how to validate forms in Angular 8 using form builders.

Forms controls and form groups in Angular

Form controls are basically classes that can hold both the data values and the validation information of any form element, which is to say that every form input you have in a reactive form should be bound to a form control. They are the basic units that make up reactive forms. Form groups are constructs that basically wrap a collection of form controls. Just as the control gives you access to the state of an element, the group gives the same access, but to the state of the wrapped controls. Every single form control in the form group is identified by name when initializing.

Generating form controls

Setting up form controls, especially for very long forms, can quickly become both monotonous and stressful. Angular provides a helper service to solve this problem so that you can always obey the DRY principle of avoiding repetition. This service is called the form builder service.

Before we start…

To be able to follow through this article's demonstration, you should have:
• Node version 11.0 installed on your machine
• Node Package Manager version 6.7 (usually ships with the Node installation)
• Angular CLI version 8.0
• The latest version of Angular (version 8)

// run the command in a terminal
ng version

Confirm that you are using version 8, and update to 8 if you are not.
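A brief note on the FormArray question above, which the capture leaves unanswered: the usual cause of "Cannot find control with unspecified name attribute" in this situation is the template. Angular's directive for binding a FormArray is `formArrayName`, not `formArray`, so the inner `[formGroupName]="i"` bindings cannot resolve their parent array. A likely fix (assuming the component code shown in the question) is:

```html
<!-- hypothetical fix: bind the FormArray with the formArrayName directive -->
<div formArrayName="values">
    <div *ngFor="let valueCtrl of ValueFormControl.controls; let i=index" [formGroupName]="i">
        <div class="form-inline lbin">
            <label>g </label>
            <input formControlName="value">
        </div>
    </div>
</div>
```

With `formArrayName` in place, each indexed `formGroupName` is looked up inside the `values` array registered in `InitialFrom()`, and the unnamed-control error goes away.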
• Download this tutorial's starter project here to follow through the demonstrations.
• Unzip the project and initialize the Node modules in your terminal with this command: npm install

Other things that would be nice to have are:
• A working knowledge of the Angular framework at a beginner level
• Familiarity with form controls in Angular will be a plus but not a requirement

Demo

In this tutorial, you will be taken through a code-along journey building a reactive form with the form builder. If you have followed this post from the start, you will have downloaded and opened the starter project in VS Code. If you open the employee.component.ts file, it should look like this:

import { Component, OnInit } from '@angular/core';
import { FormControl, FormGroup } from '@angular/forms'

@Component({
  selector: 'app-employee',
  templateUrl: './employee.component.html',
  styleUrls: ['./employee.component.css']
})
export class EmployeeComponent implements OnInit {
  bioSection = new FormGroup({
    firstName: new FormControl(''),
    lastName: new FormControl(''),
    age: new FormControl(''),
    stackDetails: new FormGroup({
      stack: new FormControl(''),
      experience: new FormControl('')
    }),
    address: new FormGroup({
      country: new FormControl(''),
      city: new FormControl('')
    })
  });
  constructor() { }
  ngOnInit() { }
  callingFunction() {
    console.log(this.bioSection.value);
  }
}

You can see that every single form control — and even the form group that partitions it — is spelled out, so over time, you as the developer keep repeating yourself. The form builder helps to solve this efficiency problem. To use the form builder, you must first register it.

Registering the form builder

To register the form builder in a component, the first thing to do is import it from Angular forms:

import { FormBuilder } from '@angular/forms';

The next step is to inject the form builder service, which is an injectable provider that comes with the reactive forms module. You can then use the form builder after injecting it.
Navigate to the employee.component.ts file and copy in the code block below: import { Component, OnInit } from '@angular/core'; import { FormBuilder } from '@angular/forms' @Component({ selector: 'app-employee', templateUrl: './employee.component.html', styleUrls: ['./employee.component.css'] }) export class EmployeeComponent implements OnInit { bioSection = this.fb.group({ firstName: [''], lastName: [''], age: [''], stackDetails: this.fb.group({ stack: [''], experience: [''] }), address: this.fb.group({ country: [''], city: [''] }) }); constructor(private fb: FormBuilder) { } ngOnInit() { } callingFunction() { console.log(this.bioSection.value); } } This does exactly the same thing as the previous code block you saw at the start, but you can see there is a lot less code and more structure — and, thus, optimal usage of resources. Form builders not only help to make your reactive forms’ code efficient, but they are also important for form validation. Form validation Using reactive forms in Angular, you can validate your forms inside the form builders. Run your application in development with the command: ng serve You will discover that the form submits even when you do not input values into the text boxes. This can easily be checked with form validators in reactive forms. The first thing to do, as with all elements of reactive forms, is to import it from Angular forms. import { Validators } from '@angular/forms'; You can now play around with the validators by specifying the form controls that must be filled in order for the submit button to be active. Copy the code block below into the employee.component.ts file: The last thing to do is to make sure the submit button’s active settings are set accordingly. 
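The validators code block referenced just above did not survive in this copy of the article. Based on the surrounding text (the form should refuse to submit when the first name is empty), it likely looked something like this sketch, which swaps the empty `['']` shorthand for first name with one that attaches `Validators.required`:

```typescript
// Hypothetical reconstruction of the omitted snippet for employee.component.ts:
// mark firstName as required so the form is invalid until it is filled in
bioSection = this.fb.group({
  firstName: ['', Validators.required],
  lastName: [''],
  age: [''],
  stackDetails: this.fb.group({
    stack: [''],
    experience: ['']
  }),
  address: this.fb.group({
    country: [''],
    city: ['']
  })
});
```

In the `[control, validators]` array shorthand, the first element is the initial value and the second is the validator (or array of validators) for that control.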
Navigate to the employee.component.html file and make sure the submit statement looks like this:

<button type="submit" [disabled]="!bioSection.valid">Submit Application</button>

If you run your application now, you will see that if you do not set an input for first name, you cannot submit the form — isn't that cool? There are many more cool form validation tips you can get from the official guide here.

Displaying input values and status

The last thing you should know is how to use the value and status properties to display, in real time, the input values of your reactive form and whether it can be submitted or not. The reactive forms API lets you use the value and status properties on your form group or form controls in the template section. Open your employee.component.html file and copy in the code block below:

<form [formGroup]="bioSection" (ngSubmit)="callingFunction()">
  <h3>Bio Details </h3>
  <label> First Name: <input type="text" formControlName="firstName"> </label> <br>
  <label> Last Name: <input type="text" formControlName="lastName"> </label> <br>
  <label> Age: <input type="text" formControlName="age"> </label>
  <div formGroupName="stackDetails">
    <h3>Stack Details</h3>
    <label> Stack: <input type="text" formControlName="stack"> </label> <br>
    <label> Experience: <input type="text" formControlName="experience"> </label>
  </div>
  <div formGroupName="address">
    <h3>Address</h3>
    <label> Country: <input type="text" formControlName="country"> </label> <br>
    <label> City: <input type="text" formControlName="city"> </label>
  </div>
  <button type="submit" [disabled]="!bioSection.valid">Submit Application</button>
  <p> Real-time data: {{ bioSection.value | json }} </p>
  <p> Your form status is : {{ bioSection.status }} </p>
</form>

This displays both the value and the status for submission for you in the interface as you use the form. The complete code to this tutorial can be found here on GitHub.
Conclusion

This article gives an overview of the form builder and how it is a great efficiency enabler for form controls and form groups. It also shows how important it can be for handling form validation easily with reactive forms. Happy hacking!

What TypeScript taught me about JavaScript

TypeScript was designed to make the most sense out of any JavaScript code. Given the dynamic nature of JavaScript, this can lead to some very interesting typings that may seem odd at a first glance. In this talk, we will look at JavaScript scenarios that are easy to understand, but complex to define. We then see what tools TypeScript provides to make the most dynamical behaviour predictable, in the most elegant way possible. Join us and learn:
• How void behaves in both TypeScript and JavaScript
• What Symbols are and why they can be unique
• The constructor interface pattern, and why classes are more complex than you might think
• Why substitutability is such an important concept for TypeScript
... and much more!

Manage reactive form controls with form groups in Angular 8

We explain how you can divide form controls by form groups in Angular 8, providing a platform to easily access the template element as groups.

Why are reactive forms important?

With reactive forms, you will discover that it is easier to build cleaner forms. Because every JavaScript framework advises that you don't make the template cluttered, this has become a priority, as the form logic now lies in the component class.
It also reduces the need to use a lot of directives and even end-to-end testing since you can now easily test your forms. It gives the developer all the control, and nothing is implicit anymore — every choice about inputs and controls must be made intentionally and, of course, explicitly. In Angular, form controls are classes that can hold both the data values and the validation information of any form element. That is to say, every form input you have in a reactive form should be bound by a form control. These are the basic units that make up reactive forms. In this article, you will be shown how form controls can be divided by form groups to create clusters to provide a platform to easily access the template element as groups. What is a form group? Form groups wrap a collection of form controls; just as the control gives you access to the state of an element, the group gives the same access but to the state of the wrapped controls. Every single form control in the form group is identified by name when initializing. A FormGroup aggregates the values of each child FormControl into one object, with each control name as the key. It calculates its status by reducing the status values of its children. Before you start… To be able to follow through this article’s demonstration, you should have: // run the command in a terminal ng version Confirm that you are using version 8, and update to 8 if you are not. • Download the Augury Chrome extension here. • Download this tutorial’s starter project here to follow through the demonstrations. 
• Unzip the project and initialize the Node modules in your terminal with this command: npm install Other things that would be nice to have are: • A working knowledge of the Angular framework at a beginner level • Familiarity with form controls in Angular will be a plus but not a requirement Demo To illustrate the concept of form groups, we will go through the process of building a reactive form so that you can fully grasp how to set it up with form groups. From here, we assume you have downloaded the starter project on GitHub and opened it in VS Code. Registering form groups The first thing to do is to tell Angular that you want to make use of the form group by importing it inside the appropriate component. Navigate to the employee.component.ts file and copy in the code block below: import { Component, OnInit } from '@angular/core'; import { FormControl, FormGroup } from '@angular/forms' @Component({ selector: 'app-employee', templateUrl: './employee.component.html', styleUrls: ['./employee.component.css'] }) export class EmployeeComponent implements OnInit { bioSection = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), age: new FormControl('') }); constructor() { } ngOnInit() { } } Here the form group was both imported and initialized to group together some form controls that compose the bio section of the form. 
To reflect this group, you have to associate the model to the view with the form group name, like this: // copy inside the employee.component.html file <form [formGroup]="bioSection" (ngSubmit)="callingFunction()"> <label> First Name: <input type="text" formControlName="firstName"> </label> <label> Last Name: <input type="text" formControlName="lastName"> </label> <label> Age: <input type="text" formControlName="age"> </label> <button type="submit">Submit Application</button> </form> Just like the form control, the form group name is used to identify the form group in the view, and on submit, the callingFunction will be triggered. Your app.component.html file should look like this: <div style="text-align:center"> <h2>Angular Job Board </h2> <app-employee></app-employee> </div> Now run your application in development with the command: ng serve It should look like this: Nesting form groups Yes, the reactive forms API makes it possible to nest a form group inside another form group. Copy the code block below into the employee.component.ts file: import { Component, OnInit } from '@angular/core'; import { FormControl, FormGroup } from '@angular/forms' @Component({ selector: 'app-employee', templateUrl: './employee.component.html', styleUrls: ['./employee.component.css'] }) export class EmployeeComponent implements OnInit { bioSection = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), age: new FormControl(''), stackDetails: new FormGroup({ stack: new FormControl(''), experience: new FormControl('') }), address: new FormGroup({ country: new FormControl(''), city: new FormControl('') }) }); constructor() { } ngOnInit() { } callingFunction() { console.log(this.bioSection.value); } } Here you see that the main form group wrapper is the bio section inside which both the stack details group and the address group is nested. 
It is important to note — as you see in the code block — that nested form groups are not defined by the assignment statement, but rather with the colon, just like you would a form control. Reflecting this in the view will look like this:

// copy inside the employee.component.html file
<form [formGroup]="bioSection" (ngSubmit)="callingFunction()">
  <h3>Bio Details </h3>
  <label> First Name: <input type="text" formControlName="firstName"> </label> <br>
  <label> Last Name: <input type="text" formControlName="lastName"> </label> <br>
  <label> Age: <input type="text" formControlName="age"> </label>
  <div formGroupName="stackDetails">
    <h3>Stack Details</h3>
    <label> Stack: <input type="text" formControlName="stack"> </label> <br>
    <label> Experience: <input type="text" formControlName="experience"> </label>
  </div>
  <div formGroupName="address">
    <h3>Address</h3>
    <label> Country: <input type="text" formControlName="country"> </label> <br>
    <label> City: <input type="text" formControlName="city"> </label>
  </div>
  <button type="submit">Submit Application</button>
</form>

It is very important that every name in the model and the view match — do not misspell the form control names! When you save and run the application, if you do get any errors, read the error message and correct the misspelling you must have used. You can style your component with the style instructions below:

input[type=text] {
  width: 30%;
  padding: 8px 14px;
  margin: 2px;
  box-sizing: border-box;
}
button {
  font-size: 12px;
  margin: 2px;
  padding: 8px 14px;
}

If you run the application, you should see something like this in your browser: When you use the form and submit, you will see your input results returned in the browser console. The complete code to this tutorial can be found here on GitHub.

Conclusion

In addition to learning about form controls, you have now been introduced to the important concept of grouping these controls.
You were also shown why grouping them is very important, as it ensures that their collective instances can be captured at once. The next concept we will be looking at is form builders. Thanks for reading. If you liked this post, share it with all of your programming buddies! Further reading ☞ Best 50 Angular Interview Questions for Frontend Developers in 2019 To become an Outstanding AngularJs Developer - part 2 To become an Outstanding AngularJs Developer - part 1 To become an effective Angular developer, you need to learn 19 things in this article Top 18 Mistakes AngularJS Developers Makes AngularJS Directive with Example: ng-init, ng-app, ng-repeat, ng-model ☞ Angular 8 (formerly Angular 2) - The Complete Guide Originally published on blog.logrocket.com
Website accessibility is often thought of as a means to help those with disabilities. However, accessible websites offer a better experience for everyone. From easy navigation to boosted search engine optimization, making your website accessible can positively impact all users. The Importance of Accessible Websites People with a variety of disabilities can use accessible websites. This includes people who are blind or have low vision, people who are deaf or hard of hearing, and people with physical disabilities. However, they also benefit those who are without disabilities. First, accessible websites improve the navigation and adaptability of websites to a broader range of formats. Second, accessible websites can help businesses reach a larger audience. Finally, accessible websites can help promote social inclusion and help people of all abilities participate in the online community. Accessibility is a critical aspect of web design in the 21st century. Optimization companies such as accessiBe base their business on improving websites to comply with the Americans With Disabilities Act and other government regulations. (This example was chosen based on accessiBe reviews on Slashdot). The History of Accessible Web Design Accessibility in web design can be traced back to the early days of the World Wide Web. The first web browsers were developed in the early 1990s and were not designed with accessibility in mind. As the web became more popular, people with disabilities began to demand access to the same information and services that everyone else could enjoy. In the late 1990s, the World Wide Web Consortium (W3C) released the Web Content Accessibility Guidelines (WCAG), which provided a set of standards for making web content more accessible. WCAG 1.0 was released in 1999, and WCAG 2.0 was released in 2008. Since then, accessible web design has become essential to the web development process. 
Many governments and organizations have adopted WCAG as their accessibility standard, and many web developers ensure accessibility in their design and development process.

Creating an Accessible Website

WCAG guidelines help make web content more accessible to people with disabilities by providing a set of standards developed by the World Wide Web Consortium (W3C). Four principles underlie the WCAG guidelines: perceivability, operability, understandability, and robustness. By following these guidelines, it is possible to make your website accessible to people with a wide range of disabilities. For web content to be more accessible, the four principles require that:
1. Information and user interface components must be presented in ways that users can easily perceive.
2. It must be possible to navigate and operate the user interface components.
3. User interface components and information must be easy to understand.
4. Content must be robust enough for assistive technologies to interpret it reliably.

In Summary

When a website is accessible, it is easier to navigate and find the information you need. The layout is typically simpler and more organized. This makes it less likely that you will get lost or frustrated while trying to use the website. Some common accessibility features include:
• Making sure the website can be used with a screen reader
• Making sure the website can be used with a keyboard
• Making sure the website can be used with a Braille reader
• Making sure the website can be used with a voice recognition system

These features make it possible for people with a wide range of disabilities to use the website. Overall, increasing the accessibility of websites creates a better user experience for everyone. People with disabilities can use the website more easily, and people without disabilities can also use it more efficiently. This makes the website more enjoyable to use regardless of ability.
[Android] Building an animated starry-sky Drawable - StarrySky
android Dec 05, 2019

A new project needed a starry-sky background, so here is a walkthrough of how to build an animated Drawable along the way.

First, the final effect: a field of dots drifting across a dark background.

Our goal is an animated Drawable called StarrySky, used like this:

    imageView.setImageDrawable(starrySky)
    // or
    imageView.background = starrySky

    starrySky.start()

So the basic structure is:

    class StarrySky : Drawable(), Animatable {
        /// xxx
        override fun draw(canvas: Canvas)
        override fun start()
        override fun stop()
        override fun isRunning()
    }

Analyzing the effect: dots are placed at random positions, and each dot moves in a straight line at a constant but random speed, in a random direction. So all the ingredients we need are:
1. a random position
2. a random speed
3. a random direction

Let's define a class to hold these:

    class Star(
        var x: Float,
        var y: Float,
        var speed: Int,     // pixels per second
        var direction: Int  // degree (0-360)
    )

Since the stars move, we need to be able to compute the position for the next frame, so add a move method. Note that direction is stored in degrees, so it has to be converted to radians before calling cos/sin:

    class Star(
        var x: Float,
        var y: Float,
        var speed: Int,     // pixels per second
        var direction: Int  // degree (0-360)
    ) {
        fun move(delta: Int) {
            // convert degrees to radians before using cos/sin
            val rad = Math.toRadians(direction.toDouble())
            x += (speed * delta / 1000f * cos(rad)).toFloat()
            y += (speed * delta / 1000f * sin(rad)).toFloat()
        }
    }

Then give StarrySky a collection to hold the stars. To avoid concurrent-modification exceptions, guard it with a crude synchronization lock:

    val stars = HashSet<Star>()
    private val LOCK = Any()

    fun addStar(star: Star) {
        synchronized(LOCK) { stars.add(star) }
    }

    fun removeStar(star: Star) {
        synchronized(LOCK) { stars.remove(star) }
    }

    fun copyStar(): HashSet<Star> {
        synchronized(LOCK) {
            val set = HashSet<Star>()
            set.addAll(stars)
            return set
        }
    }

Draw them:

    fun draw(canvas: Canvas) {
        canvas.drawColor(backgroundColor)
        val currentStars = copyStar()
        for (star in currentStars) {
            canvas.drawCircle(star.x, star.y, 2f, starPaint)
        }
    }

How do we make them move?

There are many options: Timer, ValueAnimator, even manual delays. The goal is simply to update the positions every 16 ms (60 frames per second) and then tell the drawable that the positions changed so it can redraw. I used a Timer here:

    fun start() {
        /// xxx
        timer.schedule(object : TimerTask() {
            override fun run() {
                val currentTime = System.currentTimeMillis()
                update((currentTime - lastTime).toInt())
                lastTime = currentTime
            }
        }, 0, 16)
    }

    fun update(delta: Int) {
        // xxx
        // be careful with synchronization here; abbreviated as:
        for (star in stars) star.move(delta)
    }

OK, the new positions are computed. Tell the drawable to redraw:

    fun update(delta: Int) {
        // after computing the new positions
        invalidateSelf()
    }

That is enough to make the stars move across the night sky.

But if we think about it, a night sky holds more than stars. There are also the moon, the sun, and Superman. We can generalize the design a bit.

Making it a generic Drawable

Let's look at the star model again:

    class Star(
        var x: Float,
        var y: Float,
        var speed: Int,     // pixels per second
        var direction: Int  // degree (0-360)
    ) {
        fun move(delta: Int) {
            val rad = Math.toRadians(direction.toDouble())
            x += (speed * delta / 1000f * cos(rad)).toFloat()
            y += (speed * delta / 1000f * sin(rad)).toFloat()
        }
    }

First, rename it to Model. The moon, the sun, Superman and so on all share one trait with stars: they necessarily have a position, but their movement patterns may differ. So extract speed and direction, and abstract the movement:

    abstract class Model(
        var position: Point
    ) {
        abstract fun move(delta: Int)
    }

We know how to draw a star: just a small circle. But if you want to draw a moon, a sun, or a big star, that clearly won't do, so the drawing part must be abstracted as well:

    abstract class Model(
        var position: Point
    ) {
        abstract fun move(delta: Int)
        abstract fun draw(canvas: Canvas)
    }

A star then becomes:

    class Star(position: Point, val speed: Int, val direction: Int, val paint: Paint) : Model(position) {
        override fun move(delta: Int) {
            val rad = Math.toRadians(direction.toDouble())
            position.x += speed * delta / 1000f * cos(rad)
            position.y += speed * delta / 1000f * sin(rad)
        }

        override fun draw(canvas: Canvas) {
            canvas.drawCircle(position.x, position.y, 2f, paint)
        }
    }

Now the whole StarrySky looks like this:

    class StarrySky : Drawable(), Animatable {
        /// xxx
        val models = HashSet<Model>()

        // start the animation
        override fun start() {
            // update every 16 ms
            timer.schedule(object : TimerTask() {
                override fun run() {
                    val currentTime = System.currentTimeMillis()
                    update((currentTime - lastTime).toInt())
                    lastTime = currentTime
                    // redraw
                    invalidateSelf()
                }
            }, 0, 16)
        }

        override fun stop()
        override fun isRunning()

        // compute the new positions
        fun update(delta: Int) {
            models.forEach { it.move(delta) }
        }

        // draw the models
        override fun draw(canvas: Canvas) {
            models.forEach { it.draw(canvas) }
        }
    }

Of course, all of this is pseudocode. A real implementation has more details to take care of, such as synchronizing the Set, and deciding what to do once an object moves out of bounds. When you have handled those, a beautiful starry sky is born from your hands.
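To make the degree-to-radian conversion above concrete, here is a minimal, framework-free sketch of the same motion math (Python is used purely for illustration; the Star class and its field names mirror the Kotlin pseudocode above and are not part of any Android API):

```python
import math

class Star:
    """Position in pixels, speed in pixels/second, direction in degrees (0-360)."""
    def __init__(self, x, y, speed, direction):
        self.x, self.y = x, y
        self.speed = speed
        self.direction = direction

    def move(self, delta_ms):
        # Convert degrees to radians before calling cos/sin; passing degrees
        # directly would make a "90 degree" star drift the wrong way.
        rad = math.radians(self.direction)
        self.x += self.speed * delta_ms / 1000.0 * math.cos(rad)
        self.y += self.speed * delta_ms / 1000.0 * math.sin(rad)

# A star heading at 90 degrees at 100 px/s, advanced by one whole second:
star = Star(x=0.0, y=0.0, speed=100, direction=90)
star.move(1000)
# star is now at roughly (0, 100)
```

Note that in Android view coordinates y grows downward, so a direction of 90 degrees moves a star toward the bottom edge of the view.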
May 4, 2017

Single Round Match 713 Editorials

We now have some of the SRM 713 Editorials published. We are awaiting or reviewing the submissions for some of them. If you would like to contribute an editorial for those problems you may do so by submitting to their respective challenge. Thanks to stni, pakhandi, marcose18, GoogleHireMe for contributing to the SRM 713 editorials.

Single Round Match 713 Round 1 – Division II, Level One
SymmetryDetection by pakhandi

The problem can be solved by traversing the matrix and checking for symmetry in a greedy / ad-hoc way.

Let us first solve the problem for vertical symmetry. We want to check, for every cell [i][j], whether it is equal to [i][cols - j - 1]. In this case we don't need to visit every cell: we only need to check this condition for all the rows but only half the columns. The proof of this optimization is trivial and is left as an exercise for the reader.

For horizontal symmetry, we can use the same method for all the columns but half the rows, comparing [i][j] with [rows - i - 1][j].

The method is very similar to what is used for checking a palindrome. Please check the implementation below for details.

class SymmetryDetection {
public:
    string detect(vector<string> board) {
        bool h = 1, v = 1;
        int rows = board.size();
        int cols = board[0].size();
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j <= cols / 2; j++) {
                if (board[i][j] != board[i][cols - j - 1]) {
                    v = 0;
                }
            }
        }
        for (int i = 0; i <= rows / 2; i++) {
            for (int j = 0; j < cols; j++) {
                if (board[i][j] != board[rows - i - 1][j]) {
                    h = 0;
                }
            }
        }
        if (h && v) {
            return "Both";
        } else if (h) {
            return "Horizontally symmetric";
        } else if (v) {
            return "Vertically symmetric";
        } else {
            return "Neither";
        }
    }
};

Single Round Match 713 Round 1 – Division II, Level Two
PowerEquationEasy by marcose18

In this question we will iterate through all the values and their powers which are less than or equal to n, and we will calculate the answer for a number i and its powers in one go. For some power of i, say i ^ x, we have to count all pairs where (i ^ x) ^ c = (i ^ y) ^ d such that c, d and i ^ y are all <= n. Clearly x, y <= maxpow, where i ^ maxpow <= n while i ^ (maxpow + 1) > n. Also (i ^ x) ^ c = (i ^ y) ^ d implies x * c = y * d, in short (x / y) = (d / c). So for fixed values of x and y we have to count the c and d that satisfy the above equation while also being <= n. To find such c and d, reduce x and y to lowest form by dividing both by gcd(x, y). Now it is easy to see that the total number of possible values = n / max(tempx, tempy), where tempx and tempy are x / gcd and y / gcd respectively. Do the above for all values of i and add it to the answer. One final thing to note here: if i > sqrt(n) then it has no higher power that is <= n, i.e. i ^ x = i ^ y only for x = y = 1. Each such i contributes n to the final answer, and thus we loop only up to sqrt(n) instead of n. Below is the code of the above explanation. Here a HashSet is used to keep track of visited numbers and their powers.

import java.util.*;
import java.util.regex.*;
import java.text.*;
import java.math.*;

public class PowerEquationEasy {

    // Calculating gcd of two numbers.
    static int gcd(int a, int b) {
        if (a < b) {
            int temp = a;
            a = b;
            b = temp;
        }
        if (b == 0) return a;
        return gcd(b, a % b);
    }

    public int count(int n) {
        long ans = n * 1L * n;
        long mod = (long) 1E9 + 7;
        ans %= mod;
        HashSet<Integer> set = new HashSet<>();
        // Iterate through all values, skipping those stored in the hash set:
        // they are powers of earlier values and were counted before.
        for (int i = 2; i * i <= n; i++) {
            if (set.contains(i)) continue;
            int maxpow = 0;
            int temp = i;
            while (temp <= n) {
                ++maxpow;
                set.add(temp);
                temp *= i;
            }
            // Count all x & y such that (i ^ x) ^ c = (i ^ y) ^ d.
            for (int x = 1; x <= maxpow; x++)
                for (int y = 1; y <= maxpow; y++) {
                    int g = gcd(x, y);
                    int tempx = x / g, tempy = y / g;
                    ans += n / Math.max(tempx, tempy);
                    while (ans >= mod) ans -= mod;
                }
        }
        // All numbers greater than sqrt(n) have no higher power that is <= n,
        // and each of them yields 'n' identities.
        ans += (n - set.size() - 1) * 1L * n;
        ans %= mod;
        return (int) ans;
    }
}

Single Round Match 713 Round 1 – Division II, Level Three
DFSCountEasy by GoogleHireMe

In this problem, you have to calculate the number of different paths to traverse the graph using depth first search. First of all, let's try to set a possible upper bound for the answer. There are exactly N! (n-factorial) permutations of the numbers from 1 to N. All of those sequences are valid when we have a complete graph. So the maximal possible answer can reach 13! = 6227020800, and this means that a brute force (trying all the possible paths) wouldn't fit into the time bounds.

To solve this problem, we can use dynamic programming. In every dynamic programming problem you have to determine four things:
a) define a state
b) find a way to split the problem into subproblems
c) define the base state: the subproblem for which you know the solution immediately, without splitting it into subproblems
d) find a way to combine the subproblems' solutions to get the solution of the main problem

Let's tackle this step by step:

a) Let's define our problem's state as a pair <start_node, available_nodes>, where start_node is the node from which we start building the path, and available_nodes is the set of nodes which are not yet visited. Then, to find the solution to the initial problem, we have to sum up the solutions of the subproblems over all possible start nodes, with all nodes available. Let's consider the following example: node 1 is not available (since it was visited before) and start_node is 2;
State: start_node=2, available_nodes={3, 4, 5, 6, 7, 8, 9}

b) To split the problem into subproblems, we can just try to extend our existing path and make one DFS call to one of the available nodes. In the example, we can go to the next possible subproblems (2 is no longer available, start_node highlighted with blue color).

c) The base state can be the state with only one available node: there is nowhere else to go, and the answer is 1 (only that node, and that's the end).

d) Now, we have to find a way to combine the solutions to the subproblems. And this is the place where things get really interesting.

The first thing we should notice is that once we decide which child to go to next, all other children that are connected to it (directly or via available nodes) will not be available once the dfs() returns, since they will be marked as used in one of the dfs() subcalls of the call on that child. For instance, in the example above, when we call dfs for node 3 from node 2, it will mark nodes 4, 5 and 6 as used. Thus, after the end of the call, we would no longer be able to call dfs for node 6 from node 2. Let's call such a subset of "reachable" nodes a component. In the example above, once we consider node 2 as start_node and node 1 as unavailable, we have two components: one consisting of nodes {3, 4, 5, 6} and one of {7, 8, 9}.

Since you can pick only one node per component to move to, the number of dfs() calls from the start_node equals the number of components in that state. Since the components are independent from each other, the result for the given state is the product of the numbers of paths for all components taken separately. Also, because of the independence of those components (you cannot reach one from another), we can make those dfs() calls in any order; that means we have to multiply the result by K! (k-factorial), where K is the number of components in the current state.

One more thing we still have to consider: what if one component contains more than one neighbour of the start_node? Since we can "enter" the component only once (after that all nodes of the component will be marked as used), the number of different paths for the component is the sum of the subproblem solutions with the different children as start_node.

A few notes on the implementation: the recursive solution looks nice; however, to avoid solving the same subproblem multiple times, it's important to implement this recursion with memoization of the answers for every state previously calculated. We can achieve this with a map data structure. In my solution I use breadth first search to find the components. Also, I use a bitmask to represent the available nodes instead of a set, since the number of nodes is small, and bit operations are generally "cheaper" (in terms of performance) than operations on a set. Here, a 1 in the ith position of the binary representation of the mask variable means that the ith node is in the available_nodes set.

Complexity: Let n be equal to the number of nodes. In the worst case, for every state (2^n * n) we will perform breadth first search ( O(n*n)) for every neighbor (n).
Thus O(2^n * n * n * n^2) = O(2^n * n^4).

To sum up, the solution looks as follows:

#include <string>
#include <vector>
#include <map>
#include <queue>
using namespace std;

class DFSCountEasy {
    int n;
    vector<string> G;
    map<pair<int, int>, long long> cache;

    long long f(int mask, int v) {
        // Remove current node from the available nodes
        mask = mask & (~(1 << v));
        if (cache.count({mask, v})) {
            // The solution for this subproblem was calculated before
            return cache[{mask, v}];
        }
        map<int, long long> components;
        for (int i = 0; i < n; i++) {
            if (G[v][i] == 'Y' && (mask & (1 << i))) {
                // BFS for child i
                queue<int> q;
                q.push(i);
                // Bitmask for the set of nodes included in the corresponding component
                int submask = 1 << i;
                while (!q.empty()) {
                    int u = q.front();
                    q.pop();
                    for (int j = 0; j < n; j++)
                        if (mask & (1 << j)) {
                            if (G[u][j] == 'Y' && !(submask & (1 << j))) {
                                submask |= 1 << j;
                                q.push(j);
                            }
                        }
                }
                components[submask] += f(submask, i);
            }
        }
        long long res = 1;
        for (auto x : components)
            res *= x.second;
        for (int i = 0; i < (int)components.size(); i++)
            res *= i + 1;
        return cache[{mask, v}] = res;
    }

public:
    long long count(vector<string> _G) {
        G = _G;
        n = G.size();
        long long res = 0;
        for (int i = 0; i < n; i++)
            res += f((1 << n) - 1, i);
        return res;
    }
};

Single Round Match 713 Round 1 – Division I, Level One
PowerEquation by stni

If a = c and b = d, we have n*n identities. In addition, for base 1 every exponent gives the same value:
1^1 = 1^2 = 1^3 = … = 1^n, (n-1 extra identities)
1^2 = 1^1 = 1^3 = … = 1^n, (n-1 extra identities)
…
1^n = 1^1 = 1^2 = … = 1^(n-1). (n-1 extra identities)
So for a = c = 1 with b != d we need to add another n*n - n, and we already have n*(2*n - 1) identities.

For a != c: without loss of generality, say a < c. Two distinct values a and c can only be powers of a common base, so for every integer p that is not itself a power of another integer, write a = p^i and c = p^j with i < j. Then a^b = c^d means i*b = j*d. With g = gcd(i, j), the solutions are b = (j/g)*t and d = (i/g)*t; since j > i, the binding constraint is b <= n, so j/gcd(i,j) is the step between consecutive solutions and n/(j/gcd(i,j)) is the number of such identities with exponents in [1..n]. Multiply by 2 for the symmetry between (a, b) and (c, d).

Complexity is O(n).

#include <set>
using namespace std;
typedef long long ll;
const ll mod = 1e9 + 7;

ll gcd(ll a, ll b) {
    return a ? gcd(b % a, a) : b;
}

class PowerEquation {
public:
    int count(int n0) {
        ll n = n0;
        set<ll> se; // all power bases
        ll ans = n * (n * 2 - 1) % mod;
        for (ll p = 2; p * p <= n; p++) {
            if (se.count(p))
                continue; // we already handled p's base
            ll t = p;
            ll k = 0; // number of powers of p that are <= n
            while (t <= n) {
                se.insert(t);
                t *= p;
                k++;
            }
            for (ll i = 1; i <= k; i++) {
                for (ll j = i + 1; j <= k; j++) {
                    ans += n / (j / gcd(i, j)) * 2;
                    ans %= mod;
                }
            }
        }
        return (int)ans;
    }
};

Single Round Match 713 Round 1 – Division I, Level Two
DFSCount by stni

We use a 15-bit integer to remember which nodes we visited during the DFS process. If we have visited the i-th node, we set the i-th bit to 1; if it is not visited, the i-th bit is 0. We need a dfs function with two parameters: the 15-bit visited integer, and an integer which keeps the last visited node.

In dfs:
If all nodes connected to the last node are visited, there is only 1 way to do DFS, which is "to do nothing and return".
If there is a node i connected to the last node and not visited, we can append i to p. We dfs with node i as the last visited node, and get the number of ways we can do DFS together with the 15-bit integer of visited nodes; that second integer shows the nodes we visited starting from i. We then do the dfs again from the last node, with the visited parameter being the newly returned visited integer. The number of ways to do DFS from the last node multiplies these 2 returned values, because both orders exist. Suppose the nodes visited in the first dfs are n1 and the nodes visited in the second dfs are n2. We can visit n1 then n2, so after i we append n1 then n2 to p. We can also visit n2 then n1, so after i we append n2 then n1 to p. They all start from the last node, and n1 includes node i.

If we have a set of unvisited nodes and the node we visited last, the number of ways we can do DFS is always the same, so we memoize it in a 2D vector to avoid repeated calculations. The first dimension is over all 15-bit integers storing the visited nodes, of size 1<<n; the second dimension is 16, indexed by the last visited node (index n serves as a sentinel for "no last node yet"). We have n nodes; each dfs call tries n candidates, so the complexity is O(2^n * n^2).

#include <vector>
#include <array>
#include <string>
using namespace std;
typedef long long ll;

vector<string> G;
vector<array<pair<ll, ll>, 16>> mem; // 16: node indices 0..14 plus the sentinel

class DFSCount {
public:
    pair<ll, ll> dfs(ll visited, ll last) {
        if (mem[visited][last].second) return mem[visited][last];
        pair<ll, ll> r(0, 0);
        for (int i = 0; i < (int)G.size(); i++) {
            if ((last != (ll)G.size() && G[last][i] == 'N') || (visited & (1ll << i)))
                continue;
            pair<ll, ll> t1 = dfs(visited | (1 << i), i);
            pair<ll, ll> t2 = dfs(t1.second, last);
            r.first += t1.first * t2.first;
            r.second = t1.second | t2.second;
        }
        if (r.second == 0) {
            r = make_pair(1, visited);
        }
        return mem[visited][last] = r;
    }

    ll count(vector<string> G) {
        ::G = G;
        mem.clear();
        mem.resize(1 << G.size());
        pair<ll, ll> r = dfs(0, G.size());
        return r.first;
    }
};

Single Round Match 713 Round 1 – Division I, Level Three
CoinsQuery

We are awaiting the submission for the following editorial. If you would like to contribute an editorial for this problem you may do so by submitting to this challenge.

Harshit Mehta
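As a sanity check on the Division I PowerEquation argument above, here is a small Python sketch (my own addition, not part of the original editorial; the function names are made up) that compares the editorial's counting formula with a direct brute force over all quadruples (a, b, c, d) in [1..n]^4 for small n:

```python
from math import gcd, isqrt
from collections import Counter

MOD = 10**9 + 7

def count_fast(n):
    """Editorial formula: n*(2n-1) base identities, plus pairs of distinct
    powers p^i, p^j of every base p that is not itself a power."""
    ans = n * (2 * n - 1)
    seen = set()
    for p in range(2, isqrt(n) + 1):
        if p in seen:
            continue  # p is a power of a smaller base, already handled
        powers, t = 0, p
        while t <= n:
            seen.add(t)
            t *= p
            powers += 1
        for i in range(1, powers + 1):
            for j in range(i + 1, powers + 1):
                # exponent pairs (b, d) with i*b == j*d and b, d in [1..n];
                # times 2 for the (a,b) <-> (c,d) symmetry
                ans += 2 * (n // (j // gcd(i, j)))
    return ans % MOD  # mod applied once at the end; fine for the small n tested here

def count_brute(n):
    """Count quadruples with a^b == c^d by grouping equal values."""
    values = Counter(a**b for a in range(1, n + 1) for b in range(1, n + 1))
    return sum(k * k for k in values.values()) % MOD
```

Running the comparison for n up to a few dozen is instant: the brute force does O(n^2) big-integer powers plus grouping, while the formula stays near-linear.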
DNS for Services and Pods

Your workload can discover Services within the cluster using DNS; this page explains how that works.

Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.

Besides scheduling a DNS Pod and Service on the cluster, Kubernetes DNS also configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names.

Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.

Namespaces of Services

A DNS query may return different results based on the namespace of the Pod making it. DNS queries that don't specify a namespace are limited to the Pod's namespace. To access Services in other namespaces, specify the namespace in the DNS query.

For example, consider a Pod in the namespace test and a Service named data in the namespace prod. A query for data returns no results, because it uses the Pod's namespace test. A query for data.prod returns the intended result, because the query specifies the namespace.

DNS queries may be expanded using the Pod's /etc/resolv.conf. The kubelet generates this file for each Pod. For example, a query for data may be expanded to data.test.svc.cluster.local. The values of the search option are used to expand queries. To learn more about DNS queries, see the resolv.conf manual page.

nameserver 10.32.0.10
search <namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

In summary, a Pod in the namespace test can successfully resolve either data.prod or data.prod.svc.cluster.local.

DNS Records

Which objects get DNS records?

1. Services
2. Pods

The following sections detail the supported DNS record types and the layout that is supported. Any other layout, names, or queries that happen to work are considered implementation details and are subject to change without warning. For the most up-to-date specification, see Kubernetes DNS-Based Service Discovery.

Services

A/AAAA records

"Normal" (not headless) Services are assigned DNS A or AAAA records, depending on the IP family of the Service, with a name of the form my-svc.my-namespace.svc.cluster-domain.example. This name resolves to the cluster IP of the Service.

"Headless" Services (without a cluster IP) are also assigned DNS A or AAAA records, depending on the IP family of the Service, with a name of the form my-svc.my-namespace.svc.cluster-domain.example. Unlike normal Services, this name resolves to the set of IPs of the Pods selected by the Service. Clients are expected to consume the set, or use standard round-robin selection from the set.

SRV records

Kubernetes creates SRV records for named ports that are part of normal or headless Services. For each named port, the SRV record has the form _my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example. For a regular Service, this resolves to the port number and the domain name my-svc.my-namespace.svc.cluster-domain.example. For a headless Service, this resolves to multiple answers, one SRV record for each Pod backing the Service, containing the Pod's port number and a domain name of the form auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example.

Pods

A/AAAA records

In general, a Pod has the following DNS resolution:

pod-ip-address.my-namespace.pod.cluster-domain.example

For example, for a Pod in the default namespace with the IP address 172.17.0.3, if the cluster's domain is cluster.local, the Pod has the DNS name:

172-17-0-3.default.pod.cluster.local

Any Pods exposed by a Service have the following DNS resolution available:

pod-ip-address.service-name.my-namespace.svc.cluster-domain.example

Pod's hostname and subdomain fields

Currently, when a Pod is created, its hostname is taken from the Pod's metadata.name value. The Pod spec includes an optional hostname field, which can be used to specify the Pod's hostname. When set, it takes precedence over the Pod's name as the Pod's hostname. For example, given a Pod with hostname set to "my-host", the Pod's hostname will be set to "my-host".

The Pod spec also has an optional subdomain field, which can be used to specify the Pod's subdomain. For example, a Pod with hostname set to "foo" and subdomain set to "bar", in the namespace "my-namespace", has the fully qualified domain name (FQDN) "foo.bar.my-namespace.svc.cluster-domain.example".

Example:

apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port number needs to be specified.
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    name: busybox

If a headless Service exists in the same namespace as the Pod and has the same name as the Pod's subdomain, the cluster's DNS server also returns A or AAAA records for the Pod's fully qualified hostname. For example, given a Pod with hostname "busybox-1" and subdomain "default-subdomain", and a headless Service named "default-subdomain" in the same namespace, the Pod sees its own FQDN as "busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example". DNS serves an A or AAAA record at that name, pointing to the Pod's IP. The Pods "busybox1" and "busybox2" each have their own A or AAAA records.

An EndpointSlice object can specify the hostname for any endpoint address, along with its IP.

Pod's setHostnameAsFQDN field

FEATURE STATE: Kubernetes v1.22 [stable]

When a Pod is configured to have a fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example, then by default the hostname command inside that Pod returns busybox-1, while the hostname --fqdn command returns the FQDN.

When you set setHostnameAsFQDN: true in the Pod spec, the kubelet writes the Pod's FQDN into the hostname for that Pod's namespace. In this case, both hostname and hostname --fqdn return the Pod's FQDN.

Pod's DNS Policy

DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the following Pod-specific DNS policies, specified in the dnsPolicy field of a Pod spec:

- "Default": The Pod inherits the name resolution configuration from the node that it runs on. See the related discussion for more details.
- "ClusterFirst": Any DNS query that does not match the configured cluster domain suffix (such as "www.kubernetes.io") is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub domains and upstream DNS servers configured. See the related discussion for details on how DNS queries are handled in those cases.
- "ClusterFirstWithHostNet": For Pods running with hostNetwork, you should explicitly set their DNS policy to "ClusterFirstWithHostNet".
  Note: This is not supported on Windows. See below for details.
- "None": This setting allows a Pod to ignore DNS settings from the Kubernetes environment. The Pod uses the DNS settings provided via its dnsConfig field. See the Pod's DNS Config section below.

The following example shows a Pod with its DNS policy set to "ClusterFirstWithHostNet" because it has hostNetwork set to true:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

Pod's DNS Config

FEATURE STATE: Kubernetes v1.14 [stable]

The Pod's DNS config allows users more control over the DNS settings for a Pod. The dnsConfig field is optional and can be used with any dnsPolicy setting. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field must be specified.

Users can specify the following properties in the dnsConfig field:

- nameservers: a list of IP addresses that will be used as DNS servers for the Pod. At most 3 IP addresses can be specified. When the Pod's dnsPolicy is set to "None", the list must contain at least one IP address; otherwise this property is optional. The servers listed are merged into the base nameservers generated from the specified DNS policy, with duplicate addresses removed.
- searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list is merged into the base search domains generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows at most 6 search domains.
- options: an optional list of objects where each object may have a name property (required) and a value property (optional). The contents of this property are merged into the options generated from the specified DNS policy. Duplicate entries are removed.

The following is an example Pod with custom DNS settings:

apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.2.3.4
    searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0

When the Pod above is created, the container test gets the following contents in its /etc/resolv.conf file:

nameserver 1.2.3.4
search ns1.svc.cluster-domain.example my.dns.search.suffix
options ndots:2 edns0

For IPv6 setups, the search path and name server should be set like this:

kubectl exec -it dns-example -- cat /etc/resolv.conf

The output is similar to:

nameserver 2001:db8:30::a
search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example
options ndots:5

DNS search domain list limits

FEATURE STATE: Kubernetes 1.26 [beta]

Kubernetes itself does not limit the DNS config as long as the search domain list has no more than 32 entries and the total length of all search domains does not exceed 2048. This limit applies separately to the node's resolver configuration file, the Pod's DNS config, and the merged DNS config.

DNS resolution on Windows nodes

- ClusterFirstWithHostNet is not supported for Pods that run on Windows nodes. Windows treats all names containing a . as fully qualified domain names (FQDNs) and skips FQDN resolution.
- On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the Resolve-DNSName PowerShell cmdlet for name query resolution is recommended.
- On Linux, there is a DNS suffix list that is used after resolving a name as fully qualified fails. On Windows, you can only have one DNS suffix, the DNS suffix associated with that Pod's namespace (example: mydns.svc.cluster.local). Windows can resolve FQDNs, and Services or network names that use that DNS suffix. For example, a Pod spawned in the default namespace has the DNS suffix default.svc.cluster.local. Inside a Windows Pod, you can resolve both kubernetes.default.svc.cluster.local and kubernetes, but not the partially qualified names (kubernetes.default or kubernetes.default.svc).

What's next

For guidance on administering DNS configurations, check Configure DNS Service.

Last modified October 20, 2022 at 9:39 AM PST: [zh] sync dns-pod-service.md (3f4b589f5e)
Friendica Communications Platform (please note that this is a clone of the repository at github, issues are handled there) https://friendi.ca

<?php
/**
 * @file object/Conversation.php
 */

if (class_exists('Conversation')) {
	return;
}

use Friendica\Core\BaseObject;

require_once 'boot.php';
require_once 'object/Item.php';
require_once 'include/text.php';

/**
 * A list of threads
 *
 * We should think about making this a SPL Iterator
 */
class Conversation extends BaseObject
{
	private $threads = array();
	private $mode = null;
	private $writable = false;
	private $profile_owner = 0;
	private $preview = false;

	public function __construct($mode, $preview)
	{
		$this->set_mode($mode);
		$this->preview = $preview;
	}

	/**
	 * Set the mode we'll be displayed on
	 */
	private function set_mode($mode)
	{
		if ($this->get_mode() == $mode)
			return;

		$a = $this->get_app();

		switch ($mode) {
			case 'network':
			case 'notes':
				$this->profile_owner = local_user();
				$this->writable = true;
				break;
			case 'profile':
				$this->profile_owner = $a->profile['profile_uid'];
				$this->writable = can_write_wall($a, $this->profile_owner);
				break;
			case 'display':
				$this->profile_owner = $a->profile['uid'];
				$this->writable = can_write_wall($a, $this->profile_owner);
				break;
			default:
				logger('[ERROR] Conversation::set_mode : Unhandled mode ('. $mode .').', LOGGER_DEBUG);
				return false;
				break;
		}
		$this->mode = $mode;
	}

	/**
	 * Get mode
	 */
	public function get_mode()
	{
		return $this->mode;
	}

	/**
	 * Check if page is writable
	 */
	public function is_writable()
	{
		return $this->writable;
	}

	/**
	 * Check if page is a preview
	 */
	public function is_preview()
	{
		return $this->preview;
	}

	/**
	 * Get profile owner
	 */
	public function get_profile_owner()
	{
		return $this->profile_owner;
	}

	/**
	 * Add a thread to the conversation
	 *
	 * Returns:
	 *  _ The inserted item on success
	 *  _ false on failure
	 */
	public function add_thread($item)
	{
		$item_id = $item->get_id();
		if (!$item_id) {
			logger('[ERROR] Conversation::add_thread : Item has no ID!!', LOGGER_DEBUG);
			return false;
		}
		if ($this->get_thread($item->get_id())) {
			logger('[WARN] Conversation::add_thread : Thread already exists ('. $item->get_id() .').', LOGGER_DEBUG);
			return false;
		}

		/*
		 * Only add will be displayed
		 */
		if ($item->get_data_value('network') === NETWORK_MAIL && local_user() != $item->get_data_value('uid')) {
			logger('[WARN] Conversation::add_thread : Thread is a mail ('. $item->get_id() .').', LOGGER_DEBUG);
			return false;
		}
		if ($item->get_data_value('verb') === ACTIVITY_LIKE || $item->get_data_value('verb') === ACTIVITY_DISLIKE) {
			logger('[WARN] Conversation::add_thread : Thread is a (dis)like ('. $item->get_id() .').', LOGGER_DEBUG);
			return false;
		}
		$item->set_conversation($this);
		$this->threads[] = $item;
		return end($this->threads);
	}

	/**
	 * Get data in a form usable by a conversation template
	 *
	 * We should find a way to avoid using those arguments (at least most of them)
	 *
	 * Returns:
	 *  _ The data requested on success
	 *  _ false on failure
	 */
	public function get_template_data($conv_responses)
	{
		$a = get_app();
		$result = array();
		$i = 0;

		foreach ($this->threads as $item) {
			if ($item->get_data_value('network') === NETWORK_MAIL && local_user() != $item->get_data_value('uid'))
				continue;
			$item_data = $item->get_template_data($conv_responses);
			if (!$item_data) {
				logger('[ERROR] Conversation::get_template_data : Failed to get item template data ('. $item->get_id() .').', LOGGER_DEBUG);
				return false;
			}
			$result[] = $item_data;
		}
		return $result;
	}

	/**
	 * Get a thread based on its item id
	 *
	 * Returns:
	 *  _ The found item on success
	 *  _ false on failure
	 */
	private function get_thread($id)
	{
		foreach ($this->threads as $item) {
			if ($item->get_id() == $id)
				return $item;
		}
		return false;
	}
}
TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems.

Question:
In LyX, when I enter quoted text, the generated LaTeX source uses different quotes for left and right: `` for the former and '' for the latter, so that "Ugly quotes" produces ``Ugly quotes'', which (for some fonts and engines) produces corresponding (unacceptable) output for the left quotes. Is there a way to prevent this? Are there packages that I can use to "repair" the left quotes? [OS X 10.7.3; LyX 2.0.3]

Comments:
What unacceptable output do you get? – Ben Alpert Mar 11 '12 at 22:53
If you're using XeLaTeX, you need to teach LyX to add the Ligatures=TeX option to the font definition with \fontspec or \setmainfont (or the other similar commands). – egreg Mar 11 '12 at 22:56
@egreg: Cool. That fixes the immediate problem (though it's still odd that they're there in the source). – raxacoricofallapatorius Mar 11 '12 at 23:06
It's by no means odd: opening quotes should be different from closing ones. Inputting both as " requires some kind of intervention. You can input them as “ and ” – egreg Mar 11 '12 at 23:11
@egreg: That's what they look like as displayed by LyX, but the code changes them as above. In any case, it looks like your comments are my answer. – raxacoricofallapatorius Mar 11 '12 at 23:37

Accepted answer (2 votes):
A convenient way to input quotes is to write "text" but, of course, the quotes must be rendered differently. The traditional TeX way is to input quotes as ``text'', and it seems that "smart quotes" in LyX does this translation. However, this will not produce the correct and expected result by default in case XeLaTeX is used for typesetting with a system font different from the default Latin Modern.
There are two ways out:

1. Teach LyX to add the option Ligatures=TeX when the document fonts are chosen with \setmainfont or similar commands:

   \setmainfont[Ligatures=TeX]{Linux Libertine O}

2. Input quotes with the proper Unicode characters: “text”

That works. It would be nice if there were a LyX setting to generate unicode quotes. – raxacoricofallapatorius Mar 12 '12 at 0:39

Another answer: You can set the quote style in LyX for each document. Just go to Document Settings > Language > Quote Style. See 3.9.3.2 Quotes in the User Guide (Help > User Guide).

I can't get that to have any effect. – raxacoricofallapatorius Mar 12 '12 at 0:35
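For reference, the first option as a minimal complete document — this is a sketch, and the font name is only an example (it must be installed on your system):

```latex
% Compile with XeLaTeX (or LuaLaTeX).
\documentclass{article}
\usepackage{fontspec}
% Ligatures=TeX restores the classic TeX input conventions, so that
% LyX's ``smart quotes'' output renders as proper curly quotes again.
\setmainfont[Ligatures=TeX]{Linux Libertine O}
\begin{document}
``Nice quotes'' at last.
\end{document}
```

Without the Ligatures=TeX option, the `` and '' sequences are passed through to the font literally, which is exactly the "ugly quotes" symptom described in the question.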
[ANN] DataPipes.jl

You’re not wrong :laughing: Despite the apparent simplicity of _ placeholder syntax, the design of a convenient and general but also simple meaning for _ has resisted all our efforts. (At this point there’s been a ridiculous amount of effort put into exploring that design space.) So now we’ve got a bunch of special purpose packages which make various tradeoffs but are mostly centered around piping. Underscores.jl is my attempt at a general _ placeholder syntax which happens to work with pipes rather than a special purpose piping syntax package.

4 Likes

(: At least some of the packages you listed are focused on specific use cases, so it shouldn’t be very difficult to choose among these. For example, Chain.jl clearly aims at dataframe operations with their calling conventions; DataPipes.jl’s focus is on data processing operations with common functions following the Julia convention; Underscores.jl - see Chris’s post. Nevertheless, there is clearly lots of overlap. Something like @p sum(_ * 2, 1:10) would look basically the same with any approach. It doesn’t seem easily avoidable indeed… For example, Chain and DataPipes cannot follow the same conventions without adding boilerplate, because typical calls of dataframe functions are significantly different from other functions.

3 Likes

Agreed. DataFrames is very clever and convenient with its domain specific column naming syntax. It’s just unfortunate that the syntax doesn’t mesh well with any reasonably general meaning of _. The situation is kind of unsatisfactory but it’s unclear how to proceed without a good language syntax option for placeholders. A lightweight way to delimit the extent of the lambda helps but there hasn’t been a lot of support for that, nor a really nice syntax candidate. (The delimiter could either go outside the function which accepts the lambda (as in Underscores.jl and DataPipes) or on the lambda itself, a bit like Swift’s shorthand argument names or key paths.)
Interesting comparison with Swift! Their let articleIDs = articles.map { $0.id } is indeed a nice and concise lambda syntax. Does it nest? Regarding key paths: maybe I missed something, but they seem much less general compared to arbitrary functions. As I understand, their close equivalent in Julia are lenses in Setfield.jl/Accessors.jl, which have even more advanced functionality.

I guess you mean that a good general meaning of _ would swallow the => operators as part of the lambda function body? I more or less disagree.

• For a standalone _ (without another piece of syntax to delimit the scope of the lambda) I think the “tight” meaning implemented in #24990 is the best: it’s far from covering all use cases but it does have general usefulness, and it’s super clear and readable.
• For more complex lambda bodies, a delimiter like @_ is required.

We can have both, and they both work nicely with DataFrames :slight_smile:

# Would work if #24990 is merged
transform(df, :a => _.^2 => :a_square)
# Already works with Underscores.jl
transform(df, :a => @_ exp.(_.^2) => :exp_a_square)

and btw both are pretty verbose (: E.g.,

transform(df, :a => @_ exp.(_.^2) => :exp_a_square, :a => @_ _.^2 => :a_square)

for a dataframe vs

@p mutate(exp_a_square=exp(_.a^2), a_square=_.a^2, df)

with DataPipes for other tables. Not even talking about multicolumn functions:

@p mutate(comp_val=_.a > _.b ? _.a^2 : _.a, df)

With Base only (no DataPipes) the first example becomes somewhat less pretty, but still shorter than for dataframes:

map(r -> (r..., exp_a_square=exp(r.a^2), a_square=r.a^2), df)

2 Likes

It’s worth noting that this exact same DataPipes.mutate syntax works with Underscores.@_ (and has done since shortly after it was released). So this is further evidence that these packages are very similar in design. If we could agree on a general solution for implicit __ vs _ and naming of arguments, it may be that these packages can join together in some way.
julia> using TypedTables julia> t = Table(a=[1,2,3], b=[4,5,6]); julia> @_ mutate(exp_a_square=exp(_.a^2), a_square=_.a^2, t) Table with 4 columns and 3 rows: a b exp_a_square a_square ┌───────────────────────────── 1 │ 1 4 2.71828 1 2 │ 2 5 54.5982 4 3 │ 3 6 8103.08 9 julia> @_ mutate(comp_val=_.a > _.b ? _.a^2 : _.a, t) Table with 3 columns and 3 rows: a b comp_val ┌─────────────── 1 │ 1 4 1 2 │ 2 5 2 3 │ 3 6 3 mutate seems pretty handy. Perhaps something like it could go into SplitApplyCombine? 1 Like mutate seems straight from dplyr; if a package is going to go that route I think it would be good to look more comprehensively at the dplyr API so Julia’s version can get a cohesive look-and-feel. 1 Like Totally agree that there is a significant overlap, and short single-step examples work completely the same in these packages (maybe even in Chain.jl). I myself don’t see a general and still no-boilerplate interpretation of the differences, but would be very curious to know if there is any. DataPipes clearly implements a less general approach, but is more convenient for piped data analysis (hence the name (: ). Also, there are pipe-related features on top, like @export macro, and I also plan to add @aside macro like in Chains.jl. They don’t seem fit for a really general _-package like Underscores.jl, but I may be mistaken here. Currently, this function (and some other short ones) is defined in DataPipes, but not documented. The reason is I don’t know where it is best to put them, and they may be changed/removed at any time. Maybe you are right and SAC.jl is the right place for them to go… I agree with Chris, that specific functions like mutate should really be out of scope of DataPipes and similar packages. Don’t know or use dplyr myself, but in would be interesting to see someone attempt to implement a similar interface in Julia, if there is none yet. 
Currently available functions (Base, SAC.jl, …) may be less “cohesive”, but are more general than dplyr and dataframes. 1 Like That’s true, there’s some things which will only apply to pipes but are super handy such as having variables for partial results assigned within the pipeline. Underscores.jl actually does have a small accommodation for |> syntax (also , <|, .|> and .<|), but only in the sense that it recurses into such expressions and applies the same _ replacement rules inside them, rather than treating them as normal call expressions. Interesting, especially the reverse pipe. Do you have any neat examples with these? Not with the reverse pipe! I think <| was added at the suggestion of someone else. Maybe in the thread at ANN: Underscores.jl: Placeholder syntax for closures Composition is normal composition. Like in your @f macro, but you don’t need a separate macro: @f map(_^2) |> filter(exp(_) > 5) # vs @_ map(_^2, __) ∘ filter(exp(_) > 5, __) 1 Like That’s a clever and general solution! Indeed, having a pure syntax indication such as __ in the first pipe step helps distinguish between a function definition and application. 1 Like Yes it kind of neatly falls into place in this case. Another approach to "the problem of __" is to define versions of common functions which return a function: Base.map(f::Function) = xs->map(f, xs) Base.filter(f::Function) = xs->filter(f, xs) Then we’d have julia> data = 5:12 5:12 julia> @_ data |> filter(_>10) |> map(_^2) 2-element Vector{Int64}: 121 144 This is type piracy, of course! But this version of Base.filter isn’t defined, and though Base.map(f::Function) is defined the existing definition of mapping over zero collections seems pretty useless. Funny, this is already implemented in Transducers.jl (all the more reasons to use this awesome package) using Transducers using Underscores julia> data = 5:12 5:12 julia> @_ data |> Filter(_>10) |> Map(_^2) |> collect 2-element Vector{Int64}: 121 144 and no type piracy here. 
2 Likes Yes Transducers.jl is really cool for many reasons. But sometimes you just want to do some quick data processing without any extra dependencies, which is why I kind of wish we had versions of normal map() and filter() as above. 1 Like Out of my habit, I find it easier to write/read code where the functions are embedded (# 2) rather than the form # 1. But I like the ability to shorten the syntax. The first form I tried was # 5, which doesn’t work. Then I found the other shapes that give the expected result, but I’m not sure I understand why # 5 doesn’t work and # 4 does. But maybe there is an even more correct way to get what I was looking for. @p 1:4 |> map(_^2) |> filter(exp(_) > 5) #1 filter(x->exp(x)>5,map(x->x^2,1:4)) #2 @p filter(exp(_)>5, @p begin map(_^2,1:4) end ) #3 @p filter(exp(_)>5, @p(map(_^2,1:4))) #4 @p filter(exp(_)>5, @p map(_^2,1:4) ) #5 I would like to have some considerations on the different efficiency of the various forms All these five variants work for me and give exactly the same results with DataPipes.jl 0.1.7. Could you please share the error you are getting? Maybe you could get rid of map and broadcast, for example (respecting the number of steps): @p 1:4 .|> _^2 |> filter(exp(_)>5) 1 Like ok. I have the same version (downloaded yesterday) and now I get the same results as you. I don’t know what to think. I apologize for the wrong report. I had initially tried with the expression @p filter(exp(_)>5, map(_^2,1:4) ) which I then corrected in form # 5 and since it seemed to me (perhaps confused with the initial form) that this didn’t work, so I tried first with # 3 and then with # 4. Could you in this case use the nested form with the placeholder _1? PS is it possible to retrieve the log of the outputs (the one of the inputs I have) of yesterday’s session made in the vscode environment?
NGINX Security Configurations

Security Configurations for NGINX

In this tutorial we will show how to set up additional security configurations for your PHP application hosted with the NGINX application server. You can use the following types of security configurations:

A. Security through authentication

To provide this, follow these steps:

• Generate a hash from your password. For that you can use any htpasswd tool or online service (for example, http://www.htpasswdgenerator.net/).
• Create a simple text file with the previously generated hash.
• Click the Config button for your server.
• Upload the created file to the /var/www/webroot/ROOT directory.
• In the /etc/nginx directory, open the nginx.conf file and modify the directory configurations:

  • authentication for the whole application

    Modify the location configurations by adding the following strings:

    auth_basic "Restricted area";
    auth_basic_user_file /var/www/webroot/ROOT/.htpasswd;

  • authentication for a separate directory

    Add the following location strings, stating the path to the required directory:

    location ~ /directory_path {
        auth_basic "Restricted";
        auth_basic_user_file /var/www/webroot/ROOT/.htpasswd;
    }

• Save the changes and restart NGINX.

As a result, while accessing the application or the protected directory a user will be requested to authenticate.

B. Security through setting up criteria

You can provide security for your application through setting up different criteria, for example, allowing or denying access by IP address.

• The Allow and Deny directives are used to specify which clients are or are not allowed access to the server. The rules are checked in sequence until the first match is found.
• Open the nginx.conf file in the /etc/nginx directory and add the necessary directives:

  • deny access to the whole application

    Modify the location configurations using strings of the following type:

    deny  xx.xx.xx.x;
    allow xx.xx.xx.x;
    deny  all;

  • deny access to a separate directory

    Add the following location strings, stating the path to the needed directory:

    location /directory_path {
        deny    xx.xx.xx.x;
        allow   xx.xx.xx.x;
        deny    all;
    }

As a result, a user with any IP except the allowed ones will see the 403 error while trying to open your application.

Note:
• Denying access by IP makes sense only if you use the Public IP feature.
• Both criteria-based access restrictions and password-based authentication may be implemented simultaneously. In that case, the Satisfy directive is used to determine how the two sets of restrictions interact. More information is available in the NGINX documentation.
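Putting the two approaches together: the note above mentions the Satisfy directive for combining criteria-based restrictions with password authentication. A minimal sketch, reusing the .htpasswd path from this tutorial — the address range here is a placeholder, not a value from the tutorial:

```nginx
location / {
    # With "satisfy any", a request is accepted if EITHER check passes;
    # with the default "satisfy all", it must pass both.
    satisfy any;

    # Criteria check: allow the trusted range, deny everyone else.
    allow 192.168.1.0/24;
    deny  all;

    # Authentication check: clients outside the range need a password.
    auth_basic           "Restricted area";
    auth_basic_user_file /var/www/webroot/ROOT/.htpasswd;
}
```

As in section A, remember to restart NGINX after editing nginx.conf.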
SML/NJ SCM Repository

[smlnj] View of /sml/trunk/src/MLRISC/library/regset.sml

Revision 412 - Fri Sep 3 00:25:03 1999 UTC (19 years, 10 months ago) by monnier
File size: 3422 byte(s)
This commit was generated by cvs2svn to compensate for changes in r411, which included commits to RCS files with non-trunk default branches.

(*
 * Register set datatype. Implemented as sorted lists.
 *
 * -- Allen
 *)
structure RegSet :> REGISTER_SET =
struct
   type reg = int
   type regset = reg list

   val empty = []

   fun sort [] = []
     | sort (l as [_]) = l
     | sort (l as [x,y]) = if Int.<(x,y) then l else if x = y then [x] else [y,x]
     | sort l =
         let val (a,b) = split (l,[],[])
         in  mergeUniq(sort a, sort b) end

   and split ([],a,b)    = (a,b)
     | split (r::rs,a,b) = split(rs,r::b,a)

   and mergeUniq(l as u::us, l' as v::vs) =
         if u = v then mergeUniq(l,vs)
         else if Int.<(u,v) then u::mergeUniq(us,l')
         else v::mergeUniq(l,vs)
     | mergeUniq(l,[]) = l
     | mergeUniq([],l) = l

   fun union []      = []
     | union (r::rs) = mergeUniq(r,union rs)

   fun difference ([],_)   = []
     | difference (set,[]) = set
     | difference (set as r::rs,set' as r'::rs') =
         if r = r' then difference(rs,set')
         else if r < r' then r::difference(rs,set')
         else (* r > r' *) difference(set,rs')

   fun intersect (set,[]) = []
     | intersect ([],set) = []
     | intersect (set as r::rs,set' as r'::rs') =
         if r = r' then r::intersect(rs,rs')
         else if r < r' then intersect(rs,set')
         else intersect(set,rs')

   fun intersects []     = []
     | intersects [a]    = a
     | intersects (a::b) = intersect(a,intersects b)

   fun ==([],[])         = true
     | ==(r::rs,r'::rs') = (r : int) = r' andalso ==(rs,rs')
     | ==(_,_)           = false

   fun isEmpty [] = true
     | isEmpty _  = false

   val app = List.app

   fun contains ([], r)    = false
     | contains (r'::rs,r) = r' = r orelse (r > r' andalso contains(rs,r))

   fun exists (set, [])    = false
     | exists (set, r::rs) = contains(set,r) orelse exists(set,rs)

   fun insert([],r) = [r]
     | insert(set as r'::rs,r) =
         if r = r' then set
         else if r' < r then r'::insert(rs,r)
         else r::set

   fun insertChanged (set,r) =
   let fun ins [] = ([r],true)
         | ins (set as r'::rs) =
             if r = r' then (set,false)
             else if r > r' then
                let val (rs,changed) = ins rs
                in  if changed then (r'::rs,true) else (set,false) end
             else (r::set,true)
   in  ins set end

   fun remove ([],r) = []
     | remove (set as r'::rs,r) =
         if r' = r then rs
         else if r' < r then r'::remove(rs,r)
         else set

   fun removeChanged (set,r) =
   let fun rmv [] = ([],false)
         | rmv (set as r'::rs) =
             if r = r' then (rs,true)
             else if r > r' then
                let val (rs,changed) = rmv rs
                in  if changed then (r'::rs,true) else (set,false) end
             else (set,false)
   in  rmv set end

   fun fromList l = sort l
   fun fromSortedList l = l
   fun toList set = set

   fun toString set =
   let fun collect([],l)    = l
         | collect(r::rs,l) = Int.toString r::collect'(rs,l)
       and collect'(rs,l) =
           let val l = collect(rs,l)
           in  case l of [_] => l
                       | l   => ","::l
           end
   in  String.concat("{"::collect(set,["}"])) end

   val op + = mergeUniq
   val op - = difference
   val op * = intersect

end
The Bitcoin wiki describes a transaction's script as something that describes "how the next person wanting to spend the Bitcoins being transferred can gain access to them". The script for "a typical Bitcoin transfer to destination Bitcoin address D" is described as requiring of the future spender:

1. a public key that, when hashed, yields destination address D embedded in the script, and
2. a signature to show evidence of the private key corresponding to the public key just provided.

What useful alternative scripts could be made? What practical situations would they serve, and what client features would be required to support them?

Accepted answer: If a full implementation of the scripting language were in place, then pretty much all of the following could be implemented. However, there are serious security concerns with some of these and they warrant further analysis before finding their way into the clients.

The referenced Scripts link in the original question contains several script examples covering the following use cases which are worth listing before getting to the more exotic cases: The further linked Contracts page on the wiki provides these additional use cases which are somewhat more complex:

• Providing a refundable deposit - handy for proving that you're willing to spend money to ensure that you're reputable, with the option to get it back after a given time
• Escrow and dispute mediation - permits several parties that don't trust each other to trade using their common trust of a given set of trusted third parties (this is the classic M of N signatures to release the funds contract)
• Assurance contracts - essentially to allow pledges to be given for the greater good by competing parties (example is to pay for a lighthouse)
• Using external state - Scripts can consult addresses that may be attached to "oracle scripts" which can perform transaction signing based on complex internal logic (example is to guarantee payment of an inheritance upon death or coming of age whichever is the sooner)
• Trading across chains - Allows other Bitcoin based currencies to trade against each other (thus potentially solving this problem for countries wishing to use Bitcoin)

Clearly there are many opportunities for exotic transaction types:

• Proof of knowledge - to prove that knowledge was acquired at a certain time a transaction could be created that references an oracle script that could provide verification. The oracle may not exist at the time of presentation, but it would be trivial to run the script and verify the outcome.
• Payment on successful prediction - another external state use case to allow an external oracle to sign a transaction based on pre-arranged transaction outcomes (examples include gambling, spread betting etc)

All in all, it shows that Bitcoin is a very effective financial trading instrument.

• Thanks for the detailed reply. Apologies for not reading the wiki page more carefully, as I scrolled down I thought it was pure technical description and I zoned out! But your summary is terser and more accessible than the wiki content. – Ash Moran Sep 4 '11 at 21:17
• Sad that some of the scripts links died, gotta search history. – Jan 14 '21 at 22:06

Another answer: I believe that such scripts are the mechanism by which Namecoin added its DNS-like capabilities. There are also numerous other types of transactions in the works including multi-signature (M of N) transactions that would allow for a built-in escrow functionality as well as adding additional "signers" on an account (i.e. "joint" Bitcoin accounts).
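The escrow use case above — "the classic M of N signatures to release the funds contract" — can be sketched abstractly. This is illustrative Python, not Bitcoin Script (on-chain this logic would be expressed with multisig opcodes), and the function and its names are mine, not from the answer:

```python
# Toy model of an M-of-N release rule, as in the escrow and dispute
# mediation contract described above. Purely illustrative.

def can_release(signature_checks, m):
    """Release funds when at least m of the n parties produced a valid signature."""
    return sum(1 for ok in signature_checks if ok) >= m

# 2-of-3 escrow: buyer and seller agree, the mediator does not sign.
print(can_release([True, True, False], m=2))   # -> True
# Only one party signed: the funds stay locked.
print(can_release([True, False, False], m=2))  # -> False
```

The point of putting the rule in the output script rather than in a client is that every node validating the spend enforces it, so no single party can release the funds alone.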
Printing nodes from a singly-linked list

Question

I created a node class that forms a linked list. Is there any way to print out the elements in this list? I created the print() method, but it only returns the first element, 21. How do I loop through the list?

public class ListNode {
    private int item;
    private ListNode next;

    public ListNode(int item, ListNode next){
        this.item = item;
        this.next = next;
    }

    public ListNode(int item){
        this(item, null);
    }

    public int print(){
        return item;
    }

    public static void main(String[] args) {
        ListNode list = new ListNode(21, new ListNode(5, new ListNode(19, null)));
        System.out.println(list.print());
    }
}

Answer

public String toString() {
    String result = item + " ";
    if (next != null) {
        result += next.toString();
    }
    return result;
}

Then you can simply call:

System.out.println(list.toString());

(I renamed your function from print to toString to describe more accurately what it does.)
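The answer above traverses the list recursively; a plain loop works just as well. The sketch below is a self-contained variant — the class is a stripped-down stand-in for the question's ListNode, and the method name printAll is mine:

```java
// Iterative traversal of a singly-linked list: follow the next
// references until null instead of recursing.
class Node {
    int item;
    Node next;

    Node(int item, Node next) {
        this.item = item;
        this.next = next;
    }

    // Collect every item by walking the chain with a loop.
    String printAll() {
        StringBuilder sb = new StringBuilder();
        for (Node n = this; n != null; n = n.next) {
            sb.append(n.item);
            if (n.next != null) {
                sb.append(' ');
            }
        }
        return sb.toString();
    }
}
```

For the question's list, new Node(21, new Node(5, new Node(19, null))).printAll() yields "21 5 19" — the same output as the recursive toString, minus its trailing space.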
Why do pull quotes exist on the web? There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph. Or it’s interrupted by a big piece of text that’s spoiling a sentence that you are about to read in subsequent paragraphs. There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph. To be honest, I find pull quotes pretty annoying in printed magazines too, but I can at least see the justification for them there: if you’re flipping through a magazine, they act as eye-catching inducements to stop and read (in much the same way that good photography does or illustration does). But once you’re actually reading an article, they’re incredibly frustrating. You either end up learning to blot them out completely, or you end up reading the same sentence twice. You either end up learning to blot them out completely, or you end up reading the same sentence twice. Blotting them out is easier said than done on a small-screen device. At least on a large screen, pull quotes can be shunted off to the side, but on handheld devices, pull quotes really make no sense at all. Are pull quotes online an example of a skeuomorph? “An object or feature which imitates the design of a similar artefact made from another material.” I think they might simply be an example of unexamined assumptions. The default assumption is that pull quotes on the web are fine, because everyone else is doing pull quotes on the web. But has anybody ever stopped to ask why? It was this same spiral of unexamined assumptions that led to the web drowning in a sea of splash pages in the early 2000s. I think they might simply be an example of unexamined assumptions. 
I’m genuinely curious to hear the design justification for pull quotes on the web (particularly on mobile), because as a reader, I can give plenty of reasons for their removal.

This was originally posted on my own site.

Written by
A web developer and author living and working in Brighton, England. Everything I post on Medium is a copy — the originals are on my own website, adactio.com
suricata util-reference-config.c Go to the documentation of this file. 1 /* Copyright (C) 2007-2010 Open Information Security Foundation 2  * 3  * You can copy, redistribute or modify this Program under the terms of 4  * the GNU General Public License version 2 as published by the Free 5  * Software Foundation. 6  * 7  * This program is distributed in the hope that it will be useful, 8  * but WITHOUT ANY WARRANTY; without even the implied warranty of 9  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10  * GNU General Public License for more details. 11  * 12  * You should have received a copy of the GNU General Public License 13  * version 2 along with this program; if not, write to the Free Software 14  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 15  * 02110-1301, USA. 16  */ 17  18 /** 19  * \file 20  * 21  * \author Anoop Saldanha <[email protected]> 22  */ 23  24 #include "suricata-common.h" 25 #include "detect.h" 26 #include "detect-engine.h" 27 #include "util-hash.h" 28  29 #include "util-reference-config.h" 30 #include "conf.h" 31 #include "util-unittest.h" 32 #include "util-error.h" 33 #include "util-debug.h" 34 #include "util-fmemopen.h" 35  36 /* Regex to parse each line from reference.config file. 
The first substring 37  * is for the system name and the second for the url */ 38 /*-----------------------------------------------------------system-------------------url----*/ 39 #define SC_RCONF_REGEX "^\\s*config\\s+reference\\s*:\\s*([a-zA-Z][a-zA-Z0-9-_]*)\\s+(.+)\\s*$" 40  41 /* Default path for the reference.conf file */ 42 #define SC_RCONF_DEFAULT_FILE_PATH CONFIG_DIR "/reference.config" 43  44 static pcre *regex = NULL; 45 static pcre_extra *regex_study = NULL; 46  47 /* the hash functions */ 48 uint32_t SCRConfReferenceHashFunc(HashTable *ht, void *data, uint16_t datalen); 49 char SCRConfReferenceHashCompareFunc(void *data1, uint16_t datalen1, 50  void *data2, uint16_t datalen2); 51 void SCRConfReferenceHashFree(void *ch); 52  53 /* used to get the reference.config file path */ 54 static const char *SCRConfGetConfFilename(const DetectEngineCtx *de_ctx); 55  57 { 58  const char *eb = NULL; 59  int eo; 60  int opts = 0; 61  62  regex = pcre_compile(SC_RCONF_REGEX, opts, &eb, &eo, NULL); 63  if (regex == NULL) { 64  SCLogDebug("Compile of \"%s\" failed at offset %" PRId32 ": %s", 65  SC_RCONF_REGEX, eo, eb); 66  return; 67  } 68  69  regex_study = pcre_study(regex, 0, &eb); 70  if (eb != NULL) { 71  pcre_free(regex); 72  regex = NULL; 73  SCLogDebug("pcre study failed: %s", eb); 74  return; 75  } 76  77  return; 78 } 79  81 { 82  if (regex != NULL) { 83  pcre_free(regex); 84  regex = NULL; 85  } 86  if (regex_study != NULL) { 87  pcre_free(regex_study); 88  regex_study = NULL; 89  } 90 } 91  92  93 /** 94  * \brief Inits the context to be used by the Reference Config parsing API. 95  * 96  * This function initializes the hash table to be used by the Detection 97  * Engine Context to hold the data from reference.config file, 98  * obtains the file descriptor to parse the reference.config file, and 99  * inits the regex used to parse the lines from reference.config file. 100  * 101  * \param de_ctx Pointer to the Detection Engine Context. 
102  * 103  * \retval 0 On success. 104  * \retval -1 On failure. 105  */ 106 static FILE *SCRConfInitContextAndLocalResources(DetectEngineCtx *de_ctx, FILE *fd) 107 { 108  const char *filename = NULL; 109  110  /* init the hash table to be used by the reference config references */ 114  if (de_ctx->reference_conf_ht == NULL) { 115  SCLogError(SC_ERR_HASH_TABLE_INIT, "Error initializing the hash " 116  "table"); 117  goto error; 118  } 119  120  /* if it is not NULL, use the file descriptor. The hack so that we can 121  * avoid using a dummy reference file for testing purposes and 122  * instead use an input stream against a buffer containing the 123  * reference strings */ 124  if (fd == NULL) { 125  filename = SCRConfGetConfFilename(de_ctx); 126  if ((fd = fopen(filename, "r")) == NULL) { 127 #ifdef UNITTESTS 128  if (RunmodeIsUnittests()) 129  goto error; // silently fail 130 #endif 131  SCLogError(SC_ERR_FOPEN, "Error opening file: \"%s\": %s", filename, 132  strerror(errno)); 133  goto error; 134  } 135  } 136  137  return fd; 138  139  error: 140  if (de_ctx->reference_conf_ht != NULL) { 142  de_ctx->reference_conf_ht = NULL; 143  } 144  if (fd != NULL) { 145  fclose(fd); 146  fd = NULL; 147  } 148  149  return NULL; 150 } 151  152  153 /** 154  * \brief Returns the path for the Reference Config file. We check if we 155  * can retrieve the path from the yaml conf file. If it is not present, 156  * return the default path for the reference.config file which is 157  * "./reference.config". 158  * 159  * \retval log_filename Pointer to a string containing the path for the 160  * reference.config file. 
161  */ 162 static const char *SCRConfGetConfFilename(const DetectEngineCtx *de_ctx) 163 { 164  const char *path = NULL; 165  166  if (de_ctx != NULL && strlen(de_ctx->config_prefix) > 0) { 167  char config_value[256]; 168  snprintf(config_value, sizeof(config_value), 169  "%s.reference-config-file", de_ctx->config_prefix); 170  171  /* try loading prefix setting, fall back to global if that 172  * fails. */ 173  if (ConfGet(config_value, &path) != 1) { 174  if (ConfGet("reference-config-file", &path) != 1) { 175  return (char *)SC_RCONF_DEFAULT_FILE_PATH; 176  } 177  } 178  } else { 179  if (ConfGet("reference-config-file", &path) != 1) { 180  return (char *)SC_RCONF_DEFAULT_FILE_PATH; 181  } 182  } 183  return path; 184 } 185  186 /** 187  * \brief Releases local resources used by the Reference Config API. 188  */ 189 static void SCRConfDeInitLocalResources(DetectEngineCtx *de_ctx, FILE *fd) 190 { 191  if (fd != NULL) { 192  fclose(fd); 193  fd = NULL; 194  } 195  196  return; 197 } 198  199 /** 200  * \brief Releases de_ctx resources related to Reference Config API. 201  */ 203 { 204  if (de_ctx->reference_conf_ht != NULL) 206  207  de_ctx->reference_conf_ht = NULL; 208  209  return; 210 } 211  212 /** 213  * \brief Converts a string to lowercase. 214  * 215  * \param str Pointer to the string to be converted. 216  */ 217 static char *SCRConfStringToLowercase(const char *str) 218 { 219  char *new_str = NULL; 220  char *temp_str = NULL; 221  222  if ((new_str = SCStrdup(str)) == NULL) { 223  return NULL; 224  } 225  226  temp_str = new_str; 227  while (*temp_str != '\0') { 228  *temp_str = tolower((unsigned char)*temp_str); 229  temp_str++; 230  } 231  232  return new_str; 233 } 234  235 /** 236  * \brief Parses a line from the reference config file and adds it to Reference 237  * Config hash table DetectEngineCtx->reference_conf_ht. 238  * 239  * \param rawstr Pointer to the string to be parsed. 240  * \param de_ctx Pointer to the Detection Engine Context. 
241  * 242  * \retval 0 On success. 243  * \retval -1 On failure. 244  */ 245 static int SCRConfAddReference(char *rawstr, DetectEngineCtx *de_ctx) 246 { 247  char system[64]; 248  char url[1024]; 249  250  SCRConfReference *ref_new = NULL; 251  SCRConfReference *ref_lookup = NULL; 252  253 #define MAX_SUBSTRINGS 30 254  int ret = 0; 255  int ov[MAX_SUBSTRINGS]; 256  257  ret = pcre_exec(regex, regex_study, rawstr, strlen(rawstr), 0, 0, ov, 30); 258  if (ret < 0) { 259  SCLogError(SC_ERR_REFERENCE_CONFIG, "Invalid Reference Config in " 260  "reference.config file"); 261  goto error; 262  } 263  264  /* retrieve the reference system */ 265  ret = pcre_copy_substring((char *)rawstr, ov, 30, 1, system, sizeof(system)); 266  if (ret < 0) { 267  SCLogError(SC_ERR_PCRE_GET_SUBSTRING, "pcre_copy_substring() failed"); 268  goto error; 269  } 270  271  /* retrieve the reference url */ 272  ret = pcre_copy_substring((char *)rawstr, ov, 30, 2, url, sizeof(url)); 273  if (ret < 0) { 274  SCLogError(SC_ERR_PCRE_GET_SUBSTRING, "pcre_copy_substring() failed"); 275  goto error; 276  } 277  278  /* Create a new instance of the parsed Reference string */ 279  ref_new = SCRConfAllocSCRConfReference(system, url); 280  if (ref_new == NULL) 281  goto error; 282  283  /* Check if the Reference is present in the HashTable. In case it's present 284  * ignore it, as it's a duplicate. If not present, add it to the table */ 285  ref_lookup = HashTableLookup(de_ctx->reference_conf_ht, ref_new, 0); 286  if (ref_lookup == NULL) { 287  if (HashTableAdd(de_ctx->reference_conf_ht, ref_new, 0) < 0) { 288  SCLogDebug("HashTable Add failed"); 289  } 290  } else { 291  SCLogDebug("Duplicate reference found inside reference.config"); 293  } 294  295  return 0; 296  297  error: 298  return -1; 299 } 300  301 /** 302  * \brief Checks if a string is a comment or a blank line. 
 *
 *        Comment lines are lines of the following format -
 *        "# This is a comment string" or
 *        "   # This is a comment string".
 *
 * \param line String that has to be checked.
 *
 * \retval 1 On the argument string being a comment or blank line.
 * \retval 0 Otherwise.
 */
static int SCRConfIsLineBlankOrComment(char *line)
{
    while (*line != '\0') {
        /* we have a comment */
        if (*line == '#')
            return 1;

        /* this line is neither a comment line, nor a blank line */
        if (!isspace((unsigned char)*line))
            return 0;

        line++;
    }

    /* we have a blank line */
    return 1;
}

/**
 * \brief Parses the Reference Config file and updates the
 *        DetectionEngineCtx->reference_conf_ht with the Reference information.
 *
 * \param de_ctx Pointer to the Detection Engine Context.
 */
static void SCRConfParseFile(DetectEngineCtx *de_ctx, FILE *fd)
{
    char line[1024];
    uint8_t i = 1;

    while (fgets(line, sizeof(line), fd) != NULL) {
        if (SCRConfIsLineBlankOrComment(line))
            continue;

        SCRConfAddReference(line, de_ctx);
        i++;
    }

#ifdef UNITTESTS
    SCLogInfo("Added \"%d\" reference types from the reference.config file",
              de_ctx->reference_conf_ht->count);
#endif /* UNITTESTS */
    return;
}

/**
 * \brief Returns a new SCRConfReference instance.  The reference string
 *        is converted into lowercase, before being assigned to the instance.
 *
 * \param system Pointer to the system.
 * \param url    Pointer to the reference url.
 *
 * \retval ref Pointer to the new instance of SCRConfReference.
 */
SCRConfReference *SCRConfAllocSCRConfReference(const char *system,
                                               const char *url)
{
    SCRConfReference *ref = NULL;

    if (system == NULL) {
        SCLogError(SC_ERR_INVALID_SIGNATURE, "Invalid arguments.
                   system NULL");
        return NULL;
    }

    if ((ref = SCMalloc(sizeof(SCRConfReference))) == NULL) {
        return NULL;
    }
    memset(ref, 0, sizeof(SCRConfReference));

    if ((ref->system = SCRConfStringToLowercase(system)) == NULL) {
        SCFree(ref);
        return NULL;
    }

    if (url != NULL && (ref->url = SCStrdup(url)) == NULL) {
        SCFree(ref->system);
        SCFree(ref);
        return NULL;
    }

    return ref;
}

/**
 * \brief Frees a SCRConfReference instance.
 *
 * \param ref Pointer to the SCRConfReference instance that has to be freed.
 */
void SCRConfDeAllocSCRConfReference(SCRConfReference *ref)
{
    if (ref != NULL) {
        if (ref->system != NULL)
            SCFree(ref->system);

        if (ref->url != NULL)
            SCFree(ref->url);

        SCFree(ref);
    }

    return;
}

/**
 * \brief Hashing function to be used to hash the Reference name.  Would be
 *        supplied as an argument to the HashTableInit function for
 *        DetectEngineCtx->reference_conf_ht.
 *
 * \param ht      Pointer to the HashTable.
 * \param data    Pointer to the data to be hashed.  In this case, the data
 *                would be a pointer to a SCRConfReference instance.
 * \param datalen Not used by this function.
 */
uint32_t SCRConfReferenceHashFunc(HashTable *ht, void *data, uint16_t datalen)
{
    SCRConfReference *ref = (SCRConfReference *)data;
    uint32_t hash = 0;
    int i = 0;

    int len = strlen(ref->system);

    for (i = 0; i < len; i++)
        hash += tolower((unsigned char)ref->system[i]);

    hash = hash % ht->array_size;

    return hash;
}

/**
 * \brief Used to compare two References that have been stored in the HashTable.
 *        This function is supplied as an argument to the HashTableInit function
 *        for DetectionEngineCtx->reference_conf_ct.
 *
 * \param data1 Pointer to the first SCRConfReference to be compared.
 * \param len1  Not used by this function.
 * \param data2 Pointer to the second SCRConfReference to be compared.
 * \param len2  Not used by this function.
 *
 * \retval 1 On data1 and data2 being equal.
 * \retval 0 On data1 and data2 not being equal.
 */
char SCRConfReferenceHashCompareFunc(void *data1, uint16_t datalen1,
                                     void *data2, uint16_t datalen2)
{
    SCRConfReference *ref1 = (SCRConfReference *)data1;
    SCRConfReference *ref2 = (SCRConfReference *)data2;
    int len1 = 0;
    int len2 = 0;

    if (ref1 == NULL || ref2 == NULL)
        return 0;

    if (ref1->system == NULL || ref2->system == NULL)
        return 0;

    len1 = strlen(ref1->system);
    len2 = strlen(ref2->system);

    if (len1 == len2 && memcmp(ref1->system, ref2->system, len1) == 0) {
        SCLogDebug("Match found inside Reference-Config hash function");
        return 1;
    }

    return 0;
}

/**
 * \brief Used to free the Reference Config Hash Data that was stored in
 *        DetectEngineCtx->reference_conf_ht Hashtable.
 *
 * \param data Pointer to the data that has to be freed.
 */
void SCRConfReferenceHashFree(void *data)
{
    SCRConfDeAllocSCRConfReference(data);

    return;
}

/**
 * \brief Loads the Reference info from the reference.config file.
 *
 *        The reference.config file contains references that can be used in
 *        Signatures.  Each line of the file should have the following format -
 *        config reference: system_name, reference_url.
 *
 * \param de_ctx Pointer to the Detection Engine Context that should be updated
 *               with reference information.
 *
 * \retval  0 On success.
 * \retval -1 On failure.
 */
int SCRConfLoadReferenceConfigFile(DetectEngineCtx *de_ctx, FILE *fd)
{
    fd = SCRConfInitContextAndLocalResources(de_ctx, fd);
    if (fd == NULL) {
#ifdef UNITTESTS
        if (RunmodeIsUnittests() && fd == NULL) {
            return -1;
        }
#endif
        SCLogError(SC_ERR_OPENING_FILE, "please check the \"reference-config-file\" "
                   "option in your suricata.yaml file");
        return -1;
    }

    SCRConfParseFile(de_ctx, fd);
    SCRConfDeInitLocalResources(de_ctx, fd);

    return 0;
}

/**
 * \brief Gets the reference config from the corresponding hash table stored
 *        in the Detection Engine Context's reference conf ht, given the
 *        reference name.
 *
 * \param rconf_name Pointer to the reference name that has to be looked up.
 * \param de_ctx     Pointer to the Detection Engine Context.
 *
 * \retval lookup_rconf_info Pointer to the SCRConfReference instance from
 *                           the hash table on success; NULL on failure.
 */
SCRConfReference *SCRConfGetReference(const char *rconf_name,
                                      DetectEngineCtx *de_ctx)
{
    SCRConfReference *ref_conf = SCRConfAllocSCRConfReference(rconf_name, NULL);
    if (ref_conf == NULL)
        return NULL;
    SCRConfReference *lookup_ref_conf = HashTableLookup(de_ctx->reference_conf_ht,
                                                        ref_conf, 0);
    SCRConfDeAllocSCRConfReference(ref_conf);

    return lookup_ref_conf;
}

/*----------------------------------Unittests---------------------------------*/

#ifdef UNITTESTS

/**
 * \brief Creates a dummy reference config, with all valid references, for
 *        testing purposes.
 */
FILE *SCRConfGenerateValidDummyReferenceConfigFD01(void)
{
    const char *buffer =
        "config reference: one http://www.one.com\n"
        "config reference: two http://www.two.com\n"
        "config reference: three http://www.three.com\n"
        "config reference: one http://www.one.com\n"
        "config reference: three http://www.three.com\n";

    FILE *fd = SCFmemopen((void *)buffer, strlen(buffer), "r");
    if (fd == NULL)
        SCLogDebug("Error with SCFmemopen() called by Reference Config test code");

    return fd;
}

/**
 * \brief Creates a dummy reference config, with some valid references and a
 *        couple of invalid references, for testing purposes.
 */
FILE *SCRConfGenerateInValidDummyReferenceConfigFD02(void)
{
    const char *buffer =
        "config reference: one http://www.one.com\n"
        "config_ reference: two http://www.two.com\n"
        "config reference_: three http://www.three.com\n"
        "config reference: four\n"
        "config reference five http://www.five.com\n";

    FILE *fd = SCFmemopen((void *)buffer, strlen(buffer), "r");
    if (fd == NULL)
        SCLogDebug("Error with SCFmemopen() called by Reference Config test code");

    return fd;
}

/**
 * \brief Creates a dummy reference config, with all invalid references, for
 *        testing purposes.
 */
FILE *SCRConfGenerateInValidDummyReferenceConfigFD03(void)
{
    const char *buffer =
        "config reference one http://www.one.com\n"
        "config_ reference: two http://www.two.com\n"
        "config reference_: three http://www.three.com\n"
        "config reference: four\n";

    FILE *fd = SCFmemopen((void *)buffer, strlen(buffer), "r");
    if (fd == NULL)
        SCLogDebug("Error with SCFmemopen() called by Reference Config test code");

    return fd;
}

/**
 * \test Check that the reference file is loaded and the detection engine
 *       content reference_conf_ht loaded with the reference data.
 */
static int SCRConfTest01(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 0;

    if (de_ctx == NULL)
        return result;

    FILE *fd = SCRConfGenerateValidDummyReferenceConfigFD01();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 3);
    if (result == 0)
        printf("FAILED: de_ctx->reference_conf_ht->count %u: ", de_ctx->reference_conf_ht->count);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

/**
 * \test Check that invalid references present in the reference.config file
 *       aren't loaded.
 */
static int SCRConfTest02(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 0;

    if (de_ctx == NULL)
        return result;

    FILE *fd = SCRConfGenerateInValidDummyReferenceConfigFD03();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 0);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

/**
 * \test Check that only valid references are loaded into the hash table from
 *       the reference.config file.
 */
static int SCRConfTest03(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 0;

    if (de_ctx == NULL)
        return result;

    FILE *fd = SCRConfGenerateInValidDummyReferenceConfigFD02();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 1);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

/**
 * \test Check if the reference info from the reference.config file have
 *       been loaded into the hash table.
 */
static int SCRConfTest04(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 1;

    if (de_ctx == NULL)
        return 0;

    FILE *fd = SCRConfGenerateValidDummyReferenceConfigFD01();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 3);

    result &= (SCRConfGetReference("one", de_ctx) != NULL);
    result &= (SCRConfGetReference("two", de_ctx) != NULL);
    result &= (SCRConfGetReference("three", de_ctx) != NULL);
    result &= (SCRConfGetReference("four", de_ctx) == NULL);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

/**
 * \test Check if the reference info from the invalid reference.config file
 *       have not been loaded into the hash table, and cross verify to check
 *       that the hash table contains no reference data.
 */
static int SCRConfTest05(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 1;

    if (de_ctx == NULL)
        return 0;

    FILE *fd = SCRConfGenerateInValidDummyReferenceConfigFD03();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 0);

    result &= (SCRConfGetReference("one", de_ctx) == NULL);
    result &= (SCRConfGetReference("two", de_ctx) == NULL);
    result &= (SCRConfGetReference("three", de_ctx) == NULL);
    result &= (SCRConfGetReference("four", de_ctx) == NULL);
    result &= (SCRConfGetReference("five", de_ctx) == NULL);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

/**
 * \test Check if the reference info from the reference.config file have
 *       been loaded into the hash table.
 */
static int SCRConfTest06(void)
{
    DetectEngineCtx *de_ctx = DetectEngineCtxInit();
    int result = 1;

    if (de_ctx == NULL)
        return 0;

    FILE *fd = SCRConfGenerateInValidDummyReferenceConfigFD02();
    SCRConfLoadReferenceConfigFile(de_ctx, fd);

    if (de_ctx->reference_conf_ht == NULL)
        goto end;

    result = (de_ctx->reference_conf_ht->count == 1);

    result &= (SCRConfGetReference("one", de_ctx) != NULL);
    result &= (SCRConfGetReference("two", de_ctx) == NULL);
    result &= (SCRConfGetReference("three", de_ctx) == NULL);
    result &= (SCRConfGetReference("four", de_ctx) == NULL);
    result &= (SCRConfGetReference("five", de_ctx) == NULL);

 end:
    if (de_ctx != NULL)
        DetectEngineCtxFree(de_ctx);
    return result;
}

#endif /* UNITTESTS */

/**
 * \brief This function registers unit tests for Reference Config API.
 */
void SCRConfRegisterTests(void)
{

#ifdef UNITTESTS
    UtRegisterTest("SCRConfTest01", SCRConfTest01);
    UtRegisterTest("SCRConfTest02", SCRConfTest02);
    UtRegisterTest("SCRConfTest03", SCRConfTest03);
    UtRegisterTest("SCRConfTest04", SCRConfTest04);
    UtRegisterTest("SCRConfTest05", SCRConfTest05);
    UtRegisterTest("SCRConfTest06", SCRConfTest06);
#endif /* UNITTESTS */

    return;
}
29.1.8 Creating a New Advisor: An Example

This section documents the steps to create an Advisor.

To create an Advisor, select the Create Advisor button on the Advisors page. The new advisor page is displayed. This example creates an Advisor that checks whether connections have been killed using the KILL statement, and generates an event if so.

Create your custom Advisor by following these steps:

1. In the Advisor Name text box, give the Advisor an appropriate name, such as "Connections killed".

2. From the Advisor Category drop-down list, choose an Advisor category for your Advisor.

3. Define the variable for your expression in the Variable Assignment frame:

   • In the Variable text box, enter %connections_killed%, the variable used in the Expression text box.
   • In the Data Item drop-down list, select the mysql:status:Com_kill entry.
   • In the Instance text box, enter local.

4. Enter the following expression in the Expression text area:

   '%connections_killed% > THRESHOLD'

5. Set the following threshold:

   • Set the Info Alert level to 0. An informational event is generated if one or more connections are killed.

6. Add appropriate entries for the Problem Description, Advice, and Links text areas. Optionally, use Wiki markup in these text areas. You can also reference the %connections_killed% variable in them.

7. Save the Advisor.

After you create the Advisor, schedule it against the MySQL server you want to monitor. For instructions on editing Advisors, see Table 17.3, "Advisor Edit Menu Controls".
I want to create a global namespace for my application and in that namespace I want other namespaces:

E.g.

Dashboard.Ajax.Post()
Dashboard.RetrieveContent.RefreshSalespersonPerformanceContent();

I also want to place them in separate files:

• Ajax.js
• RetrieveContent.js

However, when I tried this approach it didn't work, because the same variable name is used for the namespace in two separate places. Can anyone offer an alternative? Thanks.

    Name your namespace something differently? – meder Aug 5 '10 at 0:50
    I guess that is one option, however I was hoping to include everything under one namespace as I thought it would be more ordered that way. – Jamie C Aug 5 '10 at 1:14

11 Answers

You just need to make sure that you don't stomp on your namespace object if it's already been created. Something like this would work:

(function() {
    // private vars can go in here

    Dashboard = Dashboard || {};

    Dashboard.Ajax = {
        Post: function() {
            ...
        }
    };
})();

And the RetrieveContent file would be defined similarly.

Here is a very good article on various "Module Patterns" in JavaScript. There is a very nice little section on how you can augment modules, or namespaces, and maintain a cross-file private state. That is to say, the code in separate files will be executed sequentially and properly augment the namespace after it is executed. I have not explored this technique thoroughly, so no promises... but here is the basic idea.

dashboard.js

(function(window){

    var dashboard = (function () {
        var my = {},
            privateVariable = 1;

        function privateMethod() {
            // ...
        }

        my.moduleProperty = 1;
        my.moduleMethod = function () {
            // ...
        };

        return my;
    }());

    window.Dashboard = dashboard;

})(window);

dashboard.ajax.js

var dashboard = (function (my) {
    var _private = my._private = my._private || {},
        _seal = my._seal = my._seal || function () {
            delete my._private;
            delete my._seal;
            delete my._unseal;
        },
        _unseal = my._unseal = my._unseal || function () {
            my._private = _private;
            my._seal = _seal;
            my._unseal = _unseal;
        };

    // permanent access to _private, _seal, and _unseal

    my.ajax = function(){
        // ...
    }

    return my;
}(dashboard || {}));

dashboard.retrieveContent.js

var dashboard = (function (my) {
    var _private = my._private = my._private || {},
        _seal = my._seal = my._seal || function () {
            delete my._private;
            delete my._seal;
            delete my._unseal;
        },
        _unseal = my._unseal = my._unseal || function () {
            my._private = _private;
            my._seal = _seal;
            my._unseal = _unseal;
        };

    // permanent access to _private, _seal, and _unseal

    my.retrieveContent = function(){
        // ...
    }

    return my;
}(dashboard || {}));

    Could you please explain how this allows accessing private variables that were declared in another file? More specifically, if I were to call dashboard._seal(), how does dashboard._unseal() allow me to access the privates again? – Parth Shah May 30 at 8:55

The Yahoo Namespace function is exactly designed for this problem.

Added: The source of the function is available. You can copy it into your own code if you want, change the root from YAHOO to something else, etc.

There are several libraries that already offer this sort of functionality if you want to use or examine a pre-baked (that is, a tested) solution. The simplest and most bug-free one to get going with is probably jQuery.extend, with the deep argument set to true.
(The reason I say it is bug-free is not because I think that jQuery.extend suffers from fewer bugs than any of the other libraries -- but because it offers a clear option to deep copy attributes from the sender to the receiver -- which most of the other libraries explicitly do not provide. This will prevent many hard-to-diagnose bugs from cropping up in your program later because you used a shallow-copy extend and now have functions executing in contexts you weren't expecting them to be executing in. If, however, you are cognizant of how you will be extending your base library while designing your methods, this should not be a problem.)

With the NS object created, you should just be able to add to it from wherever. Although you may want to try var NS = NS || {}; to ensure the NS object exists and isn't overwritten.

// NS is a global variable for a namespace for the app's code
var NS = NS || {};

NS.Obj = (function() {

    // Private vars and methods always available to returned object via closure
    var foo; // ...

    // Methods in here are public
    return {
        method: function() {
        }
    };

}());

You could do something like this...

HTML page using namespaced library:

<html>
<head>
    <title>javascript namespacing</title>
    <script src="dashboard.js" type="text/javascript"></script>
    <script src="ajax.js" type="text/javascript"></script>
    <script src="retrieve_content.js" type="text/javascript"></script>
    <script type="text/javascript">
        alert(Dashboard.Ajax.Post());
        alert(Dashboard.RetrieveContent.RefreshSalespersonPerformanceContent());
        Dashboard.RetrieveContent.Settings.Timeout = 1500;
        alert(Dashboard.RetrieveContent.Settings.Timeout);
    </script>
</head>
<body>
    whatever...
</body>
</html>

Dashboard.js:

(function(window, undefined){

    var dashboard = {};

    window.Dashboard = dashboard;

})(window);

Ajax.js:

(function(){

    var ajax = {};

    ajax.Post = function() {
        return "Posted!"
    };

    window.Dashboard.Ajax = ajax;

})();

Retrieve_Content.js:

(function(){

    var retrieveContent = {};

    retrieveContent.RefreshSalespersonPerformanceContent = function() {
        return "content retrieved";
    };

    var _contentType;
    var _timeout;

    retrieveContent.Settings = {
        "ContentType": function(contentType) { _contentType = contentType; },
        "ContentType": function() { return _contentType; },
        "Timeout": function(timeout) { _timeout = timeout; },
        "Timeout": function() { return _timeout; }
    };

    window.Dashboard.RetrieveContent = retrieveContent;

})();

The Dashboard.js acts as the starting point for all namespaces under it. The rest are defined in their respective files. In Retrieve_Content.js, I added some extra properties under Settings to give an idea of how to do that, if needed.

    There's no guarantee though that retrieve_content.js will be loaded and parsed after Dashboard.js. If any of the dependent libraries are loaded before Dashboard.js is loaded then the assignments will fail. – Sean Vieira Aug 5 '10 at 4:51
    In general Dashboard.js will be loaded and parsed first, but yes, that isn't guaranteed. The Dashboard object could be checked before assignment and created if necessary, but that would require some duplicated code in retrieve_content.js and ajax.js. The separate files requirement of the OP led me to the above. – ironsam Aug 5 '10 at 5:31

I believe the module pattern might be right up your alley. Here's a good article regarding different module patterns.

http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth

    Answers with just links are not that useful. There should be enough of an explanation here. The question is how to define objects in namespaces from two different locations – Juan Mendes Feb 13 '12 at 18:10
    @JuanMendes: The article explains exactly that. Several of the top answers are basically just links as well. Besides, this was more than a year and a half ago.
    – Cristian Sanchez Feb 13 '12 at 19:17
    I never really found the module pattern to be appropriate for namespacing. A namespace isn't a block of "reusable" code (though individual pieces of the NS may be). Object literals have always worked well enough for me. – 1nfiniti Jul 29 '13 at 20:55

I highly recommend you use this technique: https://github.com/mckoss/namespace

namespace.lookup('com.mydomain.mymodule').define(function (ns) {
    var external = namespace.lookup('com.domain.external-module');

    function myFunction() {
        ...
    }

    ...

    ns.extend({
        'myFunction': myFunction,
        ...
    });
});

I've been using this pattern for a couple of years; I wish more libraries would do the same thing; it's made it much easier for me to share code across my different projects as well.

I wrote this function to simplify creating namespaces. Maybe it will help you.

function ns(nsstr) {
    var t = nsstr.split('.');
    var obj = window[t[0]] = window[t[0]] || {};
    for (var i = 1; i < t.length; i++) {
        obj[t[i]] = obj[t[i]] || {};
        obj = obj[t[i]];
    }
}

ns('mynamespace.isawesome.andgreat.andstuff');
mynamespace.isawesome.andgreat.andstuff = 3;

console.log(mynamespace.isawesome.andgreat.andstuff);

bob.js can help in defining your namespaces (among others):

bob.ns.setNs('Dashboard.Ajax', {
    Post: function () { /*...*/ }
});

bob.ns.setNs('Dashboard.RetrieveContent', {
    RefreshSalespersonPerformanceContent: function () { /*...*/ }
});

Implementation:

namespace = function(packageName) {
    // Local variables.
    var layers, layer, currentLayer, i;

    // Split the given string into an array.
    // Each element represents a namespace layer.
    layers = packageName.split('.');

    // If the top layer does not exist in the global namespace.
    if (eval("typeof " + layers[0]) === 'undefined') {
        // Define the top layer in the global namespace.
        eval(layers[0] + " = {};");
    }

    // Assign the top layer to 'currentLayer'.
    eval("currentLayer = " + layers[0] + ";");

    for (i = 1; i < layers.length; ++i) {
        // A layer name.
        layer = layers[i];

        // If the layer does not exist under the current layer.
        if (!(layer in currentLayer)) {
            // Add the layer under the current layer.
            currentLayer[layer] = {};
        }

        // Down to the next layer.
        currentLayer = currentLayer[layer];
    }

    // Return the hash object that represents the last layer.
    return currentLayer;
};

Result:

namespace('Dashboard.Ajax').Post = function() {
    ......
};

namespace('Dashboard.RetrieveContent').RefreshSalespersonPerformanceContent = function() {
    ......
};

Gist: namespace.js
1. Leftover problems with sequence lists

1. Inserting or deleting at the head or in the middle costs O(N) time.
2. Growing the capacity means requesting new space, copying the data with functions such as malloc and realloc, and freeing the old space — a non-trivial cost.
3. Growing at a 2x rate inevitably wastes some space. For example, if the current capacity is 100 and it grows to 200 when full, but the next operations only insert 5 elements, space for 95 elements is wasted.

To improve on and make up for these shortcomings of the sequence list, the linked list was invented.

2. What is a linked list

Concept — physical order: a linked list is a storage structure that is physically non-contiguous and non-sequential.
Logical order: the logical ordering is established through the pointer links between the nodes of the list.

3. Interface functions and the node definition

typedef int SLTDataType;
typedef struct SListNode
{
	SLTDataType data;
	struct SListNode* next;
}SListNode;

// print the singly linked list
void SLTPrint(SListNode* phead);
// push back (tail insert)
void SLTPushBack(SListNode** pphead, SLTDataType x);
// create a new node and initialize its data to x
SListNode* BuySLTNode(SLTDataType x);
// pop back (tail delete)
void SLTPopBack(SListNode** pphead);
// push front (head insert)
void SLTPushFront(SListNode** pphead, SLTDataType x);
// pop front (head delete)
void SLTPopFront(SListNode** pphead);
// find a value in the singly linked list
SListNode* SListFind(SListNode* plist, SLTDataType x);
// insert x after position pos
void SListInsertAfter(SListNode* pos, SLTDataType x);
// erase the value after position pos
void SListEraseAfter(SListNode* pos);
// insert before position pos
void SListInsertBefore(SListNode** pphead, SListNode* pos, SLTDataType x);
// erase the value at position pos
void SListErasepos(SListNode** pphead, SListNode* pos);
// destroy the singly linked list
void SListDestroy(SListNode* plist);

For convenience we define the list's data type as int, and give each node a struct SListNode* pointer next that points to the next node.

Since a linked list does not request a contiguous block of space in advance, no initialization is required; we only need to define a SListNode* pointer phead as the head pointer of the list.

4. Printing the list

// print the singly linked list
void SLTPrint(SListNode* phead)
{
	SListNode* cur = phead;
	while (cur)
	{
		printf("%d->", cur->data);
		cur = cur->next;
	}
	printf("NULL\n");
}

Create a cur pointer variable pointing at the head node, then traverse with cur = cur->next: print one data field, move to the next node, and repeat until the last node's data has been printed and cur takes the last node's next (which is NULL). To make things easier to follow at this stage, we also print "NULL" at the end. The printing effect is demonstrated after the insertion operations below.

5. Creating (Buy) a new node

// create a new node and initialize its data to x
SListNode* BuySLTNode(SLTDataType x)
{
	// build a new node, initialize data, and set next to NULL
	SListNode* newnode = (SListNode*)malloc(sizeof(SListNode));
	if (NULL == newnode)
	{
		perror("malloc fail");
		return NULL;
	}
	newnode->data = x;
	newnode->next = NULL;
	return newnode;
}

First use malloc to request space for one node and hand it to a newnode pointer. After confirming the request succeeded, initialize newnode: data = x and next set to NULL (we don't yet know what it should point to — we are only creating a new node, and where next points is left to whoever calls this function).

Then return newnode; the caller points some node's next at it, and a new node is linked in.

6. Push back (tail insert)

// push back
void
SLTPushBack(SListNode** pphead, SLTDataType x)
{
	SListNode* tmp = BuySLTNode(x);
	if (NULL == tmp)
	{
		return;
	}
	SListNode* newnode = tmp;
	// two cases: is the original list empty or not?
	if (*pphead == NULL) // empty list
	{
		*pphead = newnode; // connect phead to the new node
	}
	else
	{
		// first find the original tail node, then link it to the new node
		SListNode* tail = *pphead;
		while (tail->next != NULL)
		{
			tail = tail->next;
		}
		// once the original tail node is found
		tail->next = newnode;
	}
}

The linked list was created to improve the efficiency of head and middle insertions over the sequence list; the two structures complement each other. A sequence list inserts at the tail directly and cheaply, but a linked list must first find the tail before appending the new data.

Note: the first parameter is SListNode** pphead, which receives the address of the head pointer. This is a pass-by-address call that can modify phead itself; since phead is the head pointer, what changes is where the head pointer points.

On the very first tail insert, phead points to NULL, and *pphead is used to make it point to the new first node.

If instead we received it with a SListNode* plist, that would only be a formal parameter of phead — a temporary copy. Assigning newnode to plist would only take effect inside the tail-insert function and have no effect on the external phead; the result would be a pile of tail inserts while phead still points to NULL.

If phead is not NULL, we create a tail pointer and search for the tail. During the search, when tail->next is NULL, tail points at the tail node, and assigning newnode to tail->next completes the insertion.

The figure above shows the result after tail-inserting 1, 2, 3, 4.

One more point worth noting: the use of assert. You cannot slap an assert on every pointer you see — sometimes the incoming value really is NULL. For example, when inserting the first element, phead is NULL, so assert(phead) would fire and make insertion impossible.

By contrast pphead, the address of phead, can never legitimately be NULL, so assert(pphead) is always safe.

Mind the difference between *pphead and pphead.

7. Pop back (tail delete)

// pop back
void SLTPopBack(SListNode** pphead)
{
	assert(pphead);  // check pphead first, then *pphead; if pphead were NULL, *pphead would be an error
	assert(*pphead); // if the list is already empty, there is nothing to delete
	SListNode* cur = *pphead;
	// two cases: the original list holds one element, or several
	if (NULL == cur->next)
	{
		free(cur);
		cur = NULL;
		*pphead = NULL;
	}
	else // several elements: first find the second-to-last
	{
		SListNode* pretail = *pphead;
		while (pretail->next->next != NULL)
		{
			pretail = pretail->next;
		}
		free(pretail->next);
		pretail->next = NULL;
	}
}

First use assert to make sure phead is not NULL, i.e. that we are not deleting from an empty list.

If the list holds a single element, free it and set phead to NULL so that the list becomes empty again.

When the list holds several elements, we must first find the node before the tail, pretail, by traversing until pretail->next->next is NULL. Through the two nexts we reach the NULL stored in the tail node's next, so this approach works when there are several elements; with only one element, next->next would dereference NULL and produce a wild-pointer error.

To sum up: for tail insert and tail delete a linked list must first find the tail, which requires a traversal, so it is less efficient than a sequence list for tail operations.

Deleting from an empty list triggers an error (the assert).

8. Push front (head insert)

// push front
void SLTPushFront(SListNode** pphead, SLTDataType x)
{
	// first create a new node
	SListNode* tmp = BuySLTNode(x);
	if (NULL == tmp)
	{
		return;
	}
	SListNode* newnode = tmp;
	// two cases: is the original list empty or not?
	//if (NULL == *pphead)
	//{
	//	// if empty, just connect phead to the new node
	//	*pphead = newnode;
	//}
	//else
	//{
	//	// not empty: link the new node to the original head node,
	//
//将phead与原来头结点的联系变为与新结点的联系 // //原来A->B,现在A->C->B,先建立C->B,再建立A->C // newnode->next = *pphead; // *pphead = newnode; //} //是否为空都按不为空处理,next要么存原来头结点地址,要么为 NULL newnode->next = *pphead; *pphead = newnode; } 链表的头插很简单,先buy一个新结点,然后将newnode->next指向原来的头结点,无论原来是否为NULL,再用phead指向这个新结点。 9、头删 //头删 void SLTPopFront(SListNode** pphead) { assert(*pphead); SListNode* first = *pphead; *pphead = (*pphead)->next; free(first); first = NULL; } 创建first指针指向头结点,然后将phead指向下一个结点,不论是否为NULL,然后free掉first。 10、查找  // 单链表查找 SListNode* SListFind(SListNode* phead, SLTDataType x) { SListNode* cur = phead; while (NULL != cur) { if (cur->data == x) { return cur; } cur = cur->next; } return NULL; } 利用cur遍历链表,如果找到x就返回cur,即存有x的结点的地址,没找到就继续遍历,cur==NULL时仍然未找到,则返回NULL。  11、指定位置pos后方插入数据 // 单链表在指定位置后方插入 不用遍历 void SListInsertAfter(SListNode* pos,SLTDataType x) { assert(pos); SListNode* newnode = BuySLTNode(x); if (NULL == newnode) { perror("ButSLTNode fail"); return; } newnode->next = pos->next; pos->next = newnode; } 将newnode->next连接pos的next,不论正负,然后将newnode赋给pos的next,此过程中不需要遍历,解决了顺序表在中间位置插入效率低的问题。 12、删除指定位置pos后方的数据 // 删除后面的 不用遍历 void SListEraseAfter(SListNode* pos) { assert(pos); assert(pos->next != NULL); SListNode* del = pos->next; pos->next = del->next; free(del); del = NULL;//del为形参,是否置空影响不大,到函数外自动销毁,一般由使用者进行置空 } 必须保证pos->next不为NULL,因为是删除pos位置的下一个元素,则至少有2个元素,若只有一个,next为NULL,再解引用会变为野指针。 用del保存要删除的位置,将pos的next与要删除元素的下一个元素连接,不论是否为NULL,然后free掉del,完成删除,也不需要遍历。 总结:链表的优点在于头部以及中间位置的插入和删除不需要遍历,这里中间位置,实际是pos的下一个元素。 任意位置后方进行插入/删除,不涉及phead的改变,因此只需要传入phead即可。  13、在指定位置pos的前方插入元素 在前方插入删除涉及phead,因此传入&phead //单链表指定位置之前插入元素,需要找prev void SListInsertBefore(SListNode** pphead,SListNode* pos,SLTDataType x) { assert(*pphead);//不能为空,否则Find找不到pos assert(pos); SListNode* newnode = BuySLTNode(x); SListNode* prev = *pphead; if (pos == *pphead) { newnode->next = pos; *pphead = newnode; } else//找到pos前一个位置 { SListNode* prev = *pphead; while (prev->next != pos) { prev = prev->next; } newnode->next = pos; prev->next = newnode; } } 
若链表中只有一个元素,则会改变phead的指向,这里调用头插SLTPushFront也可以。 >=2个元素时,先找到prev,然后插入。 需要进行遍历找到prev。  如果要找的位置pos是NULL,直接报错即可,是使用该函数的人的问题,函数只要完成应该完成的任务即可。 注意:假设传入的是plist,而不是它的地址,仍然用pphead接收,后续*pphead就会发生错误。因此可assert(pphead)。pphead实际上一定不为空,因此只要传入plist地址就要assert 14、删除指定位置pos的数据 //在指定位置pos的删除 void SListErasepos(SListNode** pphead, SListNode* pos) { assert(pphead); assert(*pphead); assert(pos); SListNode* cur = *pphead; if (cur == pos) { *pphead = pos->next; free(cur); cur = NULL; } else { SListNode* prev = *pphead; while (prev->next != pos) { prev = prev->next; } prev->next = pos->next; free(pos); } }    如果pos指向第一个结点,即为头删,也可以使用SLTPopFront头删函数 接收Find的ret在erase后可以置为NULL,在函数外部完成,内部为形参。在内部改变需要传pos的二级指针。 而对于pphead则只能传二级指针,在函数内部修改phead的指向。 销毁时,利用tmp指针变量销毁当前位置,然后next找到下一个位置,遍历销毁。 总结:尾插尾删头插头删都需要pphead,insertafter和eraseafter仅需要pos,查找和打印仅需要phead,insertbefore、erasepos需要pphead、pos。 pos  phead  pphead  *pphead 的assert问题。 链表的优点缺点,某些位置是否需要遍历。 面试题:不告知phead,只知道pos,怎么在pos指向的数据之前插入数据? 例如原来为  1   2,pos指向2,可以在2的位置后插入3,然后将2、3两个data交换,结果为132,当然,此时pos指向的是3。 如果是删除不给phead删除pos当前位置,则将pos位置的值改为下一个位置的值,然后删除掉下一个位置的数据。(不能删除尾结点) 技术 下载桌面版 GitHub 百度网盘(提取码:draw) Gitee 云服务器优惠 阿里云优惠券 腾讯云优惠券 华为云优惠券 站点信息 问题反馈 邮箱:[email protected] QQ群:766591547 关注微信
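The interview trick at the end of the tutorial above (operating before/at pos without knowing the head pointer) can be re-expressed outside C for illustration. Here is a Python sketch with a minimal node class; the names Node, insert_before_pos, erase_at_pos and to_list are mine, not part of the original C interface:

```python
class Node:
    """Minimal singly linked list node, mirroring SListNode."""
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt

def insert_before_pos(pos, x):
    """Insert x before pos without access to the head pointer:
    insert a new node *after* pos holding pos's old value,
    then overwrite pos's value with x (the data swap from the article)."""
    pos.next = Node(pos.data, pos.next)  # duplicate pos's value after it
    pos.data = x                         # pos now holds the new value

def erase_at_pos(pos):
    """Delete the value at pos without the head pointer:
    copy the next node's value into pos, then unlink the next node.
    As the article notes, this cannot delete the tail node."""
    assert pos.next is not None
    pos.data = pos.next.data
    pos.next = pos.next.next

def to_list(head):
    """Collect the list values, like SLTPrint but returning them."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out
```

With the article's example — list 1 2, pos at the 2 — inserting 3 before pos yields 1 3 2, and pos ends up pointing at the node holding 3, exactly as described.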
HP OpenVMS Systems — ask the wizard

Self-referential DCL procedure?

The Question is:

How can I know the path of my DCL command procedure from inside of it?

The Answer is:

F$ENVIRONMENT("PROCEDURE") will return the filespec of the currently executing procedure. This can then be parsed to determine the device and directory:

$ ThisFile = F$ENVIRONMENT("PROCEDURE")
$ ThisDev  = F$PARSE(ThisFile,,,"DEVICE")
$ ThisDir  = F$PARSE(ThisFile,,,"DIRECTORY")

answer written or last revised on ( 14-MAY-2001 )
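For comparison only — this is not part of the OpenVMS answer, and the sample path is made up — a rough POSIX-flavoured analog of splitting a filespec into its pieces, sketched in Python with os.path in place of F$PARSE:

```python
import os.path

# F$ENVIRONMENT("PROCEDURE") has no direct POSIX equivalent; assume we
# already hold the procedure's filespec in a variable (hypothetical path).
this_file = "/sys/login/startup.com"

# Split into directory part and file-name part, loosely analogous to
# F$PARSE(...,"DIRECTORY") and the remaining name component.
this_dir, this_name = os.path.split(this_file)
print(this_dir)    # /sys/login
print(this_name)   # startup.com
```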
Edge Detection Filter (G Dataflow)

Last Modified: June 25, 2019

Extracts the contours in gray-level values.

Any image connected to the input image dst must be the same image type connected to image src. The image type connected to the input image mask must be an 8-bit image. The connected source image must have been created with a border capable of supporting the size of the processing matrix. For example, a 3 × 3 matrix has a minimum border size of 1. The border size of the destination image is not important.

This node modifies the source image. If you need the original source image, create a copy of the image using the Copy Image node before using this node.

Inputs

Canny filter parameters — cluster containing the following filter parameters:
  Sigma — sigma of the Gaussian smoothing filter that the node applies to the image before performing the edge detection.
  HThresh — high threshold input that defines the upper percentage of pixel values in the image from which the edge detection algorithm chooses the seed or starting point of an edge segment. Values range from 0 to 1.
  LThres — low threshold input that is multiplied by the HThresh value to define a lower threshold for all the pixels in an edge segment.
  WindowSize — defines the size of the Gaussian filter that the node applies to the image. The size must be an odd number.

image src — reference to the source image.

image mask — 8-bit image that specifies the region of the small image to be copied. Only pixels in the image src image that correspond to a non-zero pixel in the mask image are copied. All other pixels keep their original values. The entire image is processed if image mask is not connected.

image dst — reference to the destination image.

method — type of edge-detection filter to use. The following filters are valid:
  Differentiation — processing with a 2 × 2 matrix
  Gradient — processing with a 2 × 2 matrix
  Prewitt — processing with a 3 × 3 matrix
  Roberts — processing with a 2 × 2 matrix
  Sigma — processing with a 3 × 3 matrix
  Sobel — processing with a 3 × 3 matrix
  Default: Differentiation

error in — error conditions that occur before this node runs. The node responds to this input according to standard error behavior: if no error occurred before the node runs, the node begins execution normally, and if an error occurs while it runs, it returns that error information as error out; if an error occurred before the node runs, the node does not execute and instead returns the error in value as error out. Default: No error

threshold value — minimum pixel value that appears in the resulting image. NI recommends not using a value greater than 0 for this type of processing because the results from this processing are usually dark and not dynamic. Default: 0

Outputs

image dst out — reference to the destination image. If image dst is connected, image dst out is the same as image dst. Otherwise, image dst out refers to the image referenced by image.

error out — error information. The node produces this output according to the same standard error behavior described for error in.

Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported
Web Server: Not supported in VIs that run in a web application
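The 3 × 3 filters listed for the method input (Sobel, Prewitt, Sigma) all work the same way: convolve each pixel's neighbourhood with a pair of kernels and combine the results. A language-agnostic sketch of the Sobel case — plain Python, not LabVIEW G code, and the 5 × 5 gray-level image is made up for illustration:

```python
# Sobel kernels: GX responds to vertical edges, GY to horizontal ones.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y) of a 2-D list `img`
    (img is indexed as img[row][column])."""
    gx = gy = 0
    for dy in range(3):
        for dx in range(3):
            p = img[y + dy - 1][x + dx - 1]
            gx += GX[dy][dx] * p
            gy += GY[dy][dx] * p
    return (gx * gx + gy * gy) ** 0.5

# A tiny test image: left columns dark (0), right columns bright (100),
# so there is one sharp vertical edge.
image = [[0, 0, 100, 100, 100] for _ in range(5)]

print(sobel_magnitude(image, 2, 2))  # strong response on the edge
print(sobel_magnitude(image, 3, 2))  # zero in the flat bright region
```

This also makes the border requirement above concrete: a 3 × 3 kernel needs one valid pixel on every side, which is why the node demands a minimum border size of 1.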
Decompose Fractions

Decompose Fractions – Introduction

"Decompose" means to separate or break something apart. We can decompose numbers as well as geometric shapes. Decomposing numbers involves breaking a larger number into smaller numbers.

For example, consider breaking the number 10 into parts. We can decompose ten into five, three, and two. Alternatively, we can compose the number ten by putting together five, four, and one. There is often more than one way to decompose a number.

We all know that fractions are part of a whole. But just like whole numbers, we can compose and decompose fractions. Does that sound interesting? Let's dive in and learn more about decomposing fractions.

What Is Decomposing Fractions?

A fraction represents a part of a whole. To decompose a fraction means breaking it into smaller parts. Combining or adding all the smaller or decomposed parts must result in the initial fraction.

For instance, the fraction $\frac{3}{4}$ means we have three out of four equal parts. We can split this fraction into even smaller parts!

(Figure: decomposing fractions into unit fractions)

We can decompose the fraction $\frac{3}{4}$ into three one-fourths. When we add these parts together, we get the fraction $\frac{3}{4}$.

What happens when we decompose a fraction into a sum of smaller fractions? We break the numerator into parts. Unlike partitioning, where we usually consider the fractional concept of dividing shapes into equal parts, the decomposed parts do not need to be of equivalent size. Let's look at another decomposition of $\frac{3}{4}$:

$\frac{3}{4} = \frac{1}{4} + \frac{2}{4}$

How to Decompose Fractions?

Decomposition of fractions can be done in two ways: unit fractions or non-unit fractions.

How to Decompose Fractions into Unit Fractions?

A fraction with one as a numerator is called a unit fraction. When a whole is divided into equal parts, the unit fraction represents one part of the whole — for example, $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}$, etc.

The most basic way to decompose a fraction is to break it into unit fractions. For example, $\frac{5}{8}$ is the same as five times the unit fraction $\frac{1}{8}$:

$\frac{5}{8} = \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}$

When you break a fraction into unit fractions, you split it into equal parts.

Let's take another example. Consider the fraction $\frac{9}{5}$ (nine one-fifths). We can divide the fraction as follows:

$\frac{9}{5} = \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5} + \frac{1}{5}$

How to Decompose Fractions into Non-Unit Fractions?

In this method, we decompose a fraction into different smaller like fractions. We can express a fraction as the sum of smaller fractions, which are not all unit fractions. Adding up all the decomposed fractions must result in the initial fraction.

For example, in the figure below, $\frac{7}{9}$ of the circle is divided into four unequal parts:

$\frac{7}{9} = \frac{3}{9} + \frac{1}{9} + \frac{1}{9} + \frac{2}{9}$

Let's take another example: how do we decompose the improper fraction $\frac{12}{5}$? We can break it down into

$\frac{12}{5} = \frac{5}{5} + \frac{5}{5} + \frac{2}{5} = 1 + 1 + \frac{2}{5} = 2\frac{2}{5}$

This gives us the mixed number form of the fraction $\frac{12}{5}$.

How to Decompose Mixed Numbers?

A mixed fraction is a combination of a whole number and a proper fraction — a number between two consecutive whole numbers. For example, $1\frac{3}{4}$ is a mixed number between the whole numbers 1 and 2. Now, let's understand how to decompose a mixed number.
(Figure: decomposing mixed numbers)

Conclusion

Practicing how to decompose fractions can help children perform various operations with fractions. It will improve their simplifying skills and teach them different techniques to master concepts and work on fractions.

Solved Examples

1. Decompose the fraction $\frac{4}{7}$ into unit fractions.
Solution: To decompose $\frac{4}{7}$ into unit fractions, we can split the numerator four and express the fraction as the sum of 4 one-sevenths: $\frac{4}{7} = \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7}$.

2. Write the given fraction as the sum of two different fractions: $\frac{3}{10}$.
Solution: To express the fraction as the sum of two different fractions, we can decompose the numerator three into 1 and 2. So, $\frac{3}{10} = \frac{1}{10} + \frac{2}{10}$.

3. Write the fraction $\frac{6}{11}$ as the sum of three equal fractions.
Solution: To express the given fraction as the sum of three equal fractions, we can split the numerator six into three equal parts: $6 = 2 + 2 + 2$. So, $\frac{6}{11} = \frac{2}{11} + \frac{2}{11} + \frac{2}{11}$.

4. Decompose the improper fraction $\frac{7}{4}$ and write the mixed number form of the fraction.
Solution: To break down the improper fraction, we can split the numerator as follows: $\frac{7}{4} = \frac{4}{4} + \frac{3}{4} = 1 + \frac{3}{4} = 1\frac{3}{4}$.

Practice Problems

1. Which of the following sums equals $\frac{1}{2}$?
(a) $\frac{1}{4}+\frac{1}{4}$  (b) $\frac{1}{2}+\frac{1}{2}$  (c) $\frac{1}{4}+\frac{1}{2}$  (d) None of the above
Correct answer: (a), since $\frac{1}{4} + \frac{1}{4} = \frac{2}{4} = \frac{1}{2}$.

2. Select a fraction to complete the equation $\frac{2}{8} + \frac{3}{8} + \frac{1}{8} + \underline{} = \frac{10}{8}$.
(a) $\frac{4}{8}$  (b) $\frac{3}{8}$  (c) $\frac{2}{8}$  (d) $\frac{1}{8}$
Correct answer: (a). Since $2 + 3 + 1 + 4 = 10$, we have $\frac{2}{8} + \frac{3}{8} + \frac{1}{8} + \frac{4}{8} = \frac{10}{8}$.

3. Which expression matches the model shown (a whole divided into 12 equal parts)?
(a) $\frac{4}{12} + \frac{1}{12} + \frac{6}{12}$  (b) $\frac{1}{10} + \frac{1}{10} + \frac{5}{10}$  (c) $\frac{4}{12} + \frac{1}{12} + \frac{5}{12}$  (d) $\frac{5}{10} + \frac{5}{10}$
Correct answer: (c). The whole is divided into 12 equal parts: the blue parts represent $\frac{4}{12}$ of the whole, the yellow part represents $\frac{1}{12}$, and the pink parts represent $\frac{5}{12}$.

4. Which of the following is equivalent to $\frac{1}{6} + \frac{2}{6} + \frac{2}{6}$? (Options A–D were shown as pictures.)
Correct answer: B, since $\frac{1}{6} + \frac{2}{6} + \frac{2}{6} = \frac{5}{6}$.

5. Select an expression that is more than $\frac{8}{11}$.
(a) $\frac{2}{11} + \frac{2}{11} + \frac{2}{11}$  (b) $\frac{2}{11} + \frac{3}{11} + \frac{5}{11}$  (c) $\frac{1}{11} + \frac{3}{11} + \frac{3}{11}$  (d) $\frac{1}{11} + \frac{1}{11} + \frac{1}{11}$
Correct answer: (b). The sum of $\frac{2}{11} + \frac{3}{11} + \frac{5}{11}$ is $\frac{10}{11}$, which is greater than $\frac{8}{11}$.

Frequently Asked Questions

No, when we decompose a fraction, we write it as the sum of smaller fractions. To decompose a fraction means breaking it into smaller parts. Combining all the decomposed parts must result in the initial fraction.

To decompose a unit fraction, we can decompose a fraction equivalent to it. For example, $\frac{1}{3}$ is a unit fraction. $\frac{1}{3}$ is equal to $\frac{2}{6}$, so we can write it as $\frac{2}{6}$, and $\frac{2}{6} = \frac{1}{6} + \frac{1}{6}$. Therefore, $\frac{1}{3} = \frac{1}{6} + \frac{1}{6}$.

Equivalent fractions have different numerators and denominators but are equal to the same value. Decomposing a fraction means writing it as the sum of smaller fractions. Since equivalent fractions are of equal value, we cannot decompose a fraction into an equivalent fraction. For example, $\frac{3}{6}$ cannot be decomposed into $\frac{1}{2}$ because $\frac{3}{6} = \frac{1}{2}$.
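The decompositions discussed above can be checked mechanically. Here is a short sketch using Python's standard fractions module (the helper name decompose_into_unit_fractions is mine):

```python
from fractions import Fraction

def decompose_into_unit_fractions(frac):
    """Split n/d into n copies of the unit fraction 1/d."""
    return [Fraction(1, frac.denominator)] * frac.numerator

# The 5/8 example: five copies of 1/8.
parts = decompose_into_unit_fractions(Fraction(5, 8))
print(parts)
print(sum(parts))  # recombining the parts gives back the original fraction

# A non-unit decomposition, as in the 3/10 example: 1/10 + 2/10.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Note that Fraction normalizes automatically (for example Fraction(2, 10) is stored as 1/5), which is exactly the equivalent-fraction point made in the FAQ.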
One Identity Safeguard for Privileged Sessions 6.0.12 - REST API Reference Guide

Privileges of usergroups

This endpoint lists the usergroups configured on SPS, and the privileges (ACLs) of each group. Note that currently you cannot edit the privileges (ACLs) of the groups using the REST API. If you change the privileges of a usergroup on the SPS web interface, the changes will apply to the users when they authenticate again on SPS; the privileges of active sessions are not affected.

URL

GET https://<IP-address-of-SPS>/api/configuration/aaa/acls

Cookies

session_id (required): contains the authentication token of the user. The value is the session ID cookie received from the REST server in the authentication response, for example, a1f71d030e657634730b9e887cb59a5e56162860. For details on authentication, see Authenticate to the SPS REST API. Note that this session ID refers to the connection between the REST client and the SPS REST API. It is not related to the sessions that SPS records (and which also have a session ID, but in a different format).

Sample request

The following command lists the usergroups and their ACLs.

curl --cookie cookies https://<IP-address-of-SPS>/api/configuration/aaa/acls

Response

The following is a sample response received when querying the endpoint. For details of the meta object, see Message format.
{
  "body": [
    { "group": "basic-view",     "objects": [ "/special/basic" ],      "permission": "read"  },
    { "group": "basic-write",    "objects": [ "/special/basic" ],      "permission": "write" },
    { "group": "auth-view",      "objects": [ "/special/auth" ],       "permission": "read"  },
    { "group": "auth-write",     "objects": [ "/special/auth" ],       "permission": "write" },
    { "group": "search",         "objects": [ "/special/searchmenu" ], "permission": "read"  },
    { "group": "changelog",      "objects": [ "/special/changelog" ],  "permission": "read"  },
    { "group": "policies-view",  "objects": [ "/special/pol" ],        "permission": "read"  },
    { "group": "policies-write", "objects": [ "/special/pol" ],        "permission": "write" },
    { "group": "ssh-view",       "objects": [ "/special/ssh" ],        "permission": "read"  },
    { "group": "ssh-write",      "objects": [ "/special/ssh" ],        "permission": "write" },
    { "group": "rdp-view",       "objects": [ "/special/rdp" ],        "permission": "read"  },
    { "group": "rdp-write",      "objects": [ "/special/rdp" ],        "permission": "write" },
    { "group": "telnet-view",    "objects": [ "/special/telnet" ],     "permission": "read"  },
    { "group": "telnet-write",   "objects": [ "/special/telnet" ],     "permission": "write" },
    { "group": "vnc-view",       "objects": [ "/special/vnc" ],        "permission": "read"  },
    { "group": "vnc-write",      "objects": [ "/special/vnc" ],        "permission": "write" },
    { "group": "indexing",       "objects": [ "/special/search/search", "/special/bap" ], "permission": "write" },
    { "group": "ica-view",       "objects": [ "/special/ica" ],        "permission": "read"  },
    { "group": "ica-write",      "objects": [ "/special/ica" ],        "permission": "write" },
    { "group": "api",            "objects": [ "/special/rpcapi" ],     "permission": "write" },
    { "group": "http-view",      "objects": [ "/special/http" ],       "permission": "read"  },
    { "group": "http-write",     "objects": [ "/special/http" ],       "permission": "write" },
    { "group": "indexer-view",   "objects": [ "/special/indexer" ],    "permission": "read"  },
    { "group": "indexer-write",  "objects": [ "/special/indexer" ],    "permission": "write" }
  ],
  "key": "acls",
  "meta": {
    "first": "/api/configuration/aaa/acls",
    "href": "/api/configuration/aaa/acls",
    "last": "/api/configuration/aaa/settings",
    "next": "/api/configuration/aaa/local_database",
    "parent": "/api/configuration/aaa",
    "previous": null,
    "transaction": "/api/transaction"
  }
}

Element     | Type                            | Description
body        | Top level element (JSON object) | Contains the properties of the user.
group       | string                          | The name of the usergroup.
objects     | list                            | The list of privileges that the group has access to.
permission  | read | write                    | The type of the permission. The group needs write access to configure an object, or to perform certain actions.

Status and error codes

The following table lists the typical status and error codes for this request. For a complete list of error codes, see Application level error codes.

Code | Description           | Notes
401  | Unauthenticated       | The requested resource cannot be retrieved because the client is not authenticated and the resource requires authorization to access it. The details section contains the path that was attempted to be accessed, but could not be retrieved.
401  | AuthenticationFailure | Authenticating the user with the given credentials has failed.
404  | NotFound              | The requested object does not exist.

Audit data access rules

This endpoint enables you to restrict the search and access privileges of usergroups to audit data.

URL

GET https://<IP-address-of-SPS>/api/acl/audit_data

Cookies

session_id (required): contains the authentication token of the user. The value is the session ID cookie received from the REST server in the authentication response, for example, a1f71d030e657634730b9e887cb59a5e56162860. For details on authentication, see Authenticate to the SPS REST API. Note that this session ID refers to the connection between the REST client and the SPS REST API. It is not related to the sessions that SPS records (and which also have a session ID, but in a different format).
Sample request

The following command lists the available audit data access rules.

curl --cookie cookies https://<IP-address-of-SPS>/api/acl/audit_data

Response

The following is a sample response received when querying the endpoint. For details of the meta object, see Message format.

{
  "items": [
    {
      "key": "autogenerated-10211162955b9621d4eb244",
      "meta": {
        "href": "/api/acl/audit_data/autogenerated-10211162955b9621d4eb244"
      }
    }
  ],
  "meta": {
    "href": "/api/acl/audit_data",
    "parent": "/api/acl",
    "remaining_seconds": 600,
    "transaction": "/api/transaction"
  }
}

Element | Type                                     | Description
items   | Top-level element (list of JSON objects) | List of endpoints (objects) available from the current endpoint.
key     | string                                   | The ID of the endpoint.
meta    | Top-level item (JSON object)             | Contains the path to the endpoint.
href    | string (relative path)                   | The path of the resource that returned the response.

Query a specific audit data access rule

To find out the contents of a particular audit data access rule, complete the following steps:

NOTE: If you have an SPS user who has Search > Search in all connections privileges in AAA > Access Control, the autogenerated-all-data-access-id rule is automatically generated. Therefore, you can almost always query this audit data access rule.

1. Query the https://<IP-address-of-SPS>/api/acl/audit_data/<key-of-rule-to-be-queried> endpoint.

   curl --cookie cookies https://<IP-address-of-SPS>/api/acl/audit_data/<key-of-rule-to-be-queried>

   The following is a sample response received. For details of the meta object, see Message format.
{
  "body": {
    "name": "my_ssh_rule",
    "query": "psm.connection_policy:my_ssh_connection_policy",
    "groups": [
      "ssh-view",
      "ssh-write"
    ]
  },
  "key": "autogenerated-10211162955b9621d4eb244",
  "meta": {
    "href": "/api/acl/audit_data/autogenerated-10211162955b9621d4eb244",
    "parent": "/api/acl/audit_data",
    "remaining_seconds": 600,
    "transaction": "/api/transaction"
  }
}

Elements | Type                            | Description
body     | Top-level element (JSON object) | Contains the JSON object of the rule.
name     | string                          | The human-readable name of the audit data access rule that you specified when you created the rule.
query    | string                          | The query that members of the usergroup(s) are allowed to perform.
groups   | list                            | The usergroup(s) whose access to audit data you want to restrict.

Status and error codes

The following table lists the typical status and error codes for this request. For a complete list of error codes, see Application level error codes.

Code | Description           | Notes
201  | Created               | The new resource was successfully created.
400  | SemanticError         | The configuration contains semantic errors, inconsistencies or other problems that would put the system into an unreliable state if the configuration had been applied. The details section contains the errors that were found in the configuration.
401  | Unauthenticated       | The requested resource cannot be retrieved because the client is not authenticated and the resource requires authorization to access it. The details section contains the path that was attempted to be accessed, but could not be retrieved.
401  | AuthenticationFailure | Authenticating the user with the given credentials has failed.
404  | NotFound              | The requested object does not exist.

Active sessions

The api/active-sessions endpoint has only one parameter and it only serves the DELETE request that terminates the specified session.
URL

GET https://<IP-address-of-SPS>/api/active-sessions

Cookies

session_id (required): contains the authentication token of the user. The value is the session ID cookie received from the REST server in the authentication response, for example, a1f71d030e657634730b9e887cb59a5e56162860. For details on authentication, see Authenticate to the SPS REST API. Note that this session ID refers to the connection between the REST client and the SPS REST API. It is not related to the sessions that SPS records (and which also have a session ID, but in a different format).

Sample request

The following command lists the ACLs:

curl --cookie cookies https://<IP-address-of-SPS>/api/configuration/aaa/acls

The user (in this example, user1) has to be a member of a group that has read and write/perform privileges for Active Sessions (/special/active_sessions). After authentication, user1 can delete the active session determined by the session ID.

curl -k --user user1 --cookie-jar /tmp/cookie https://192.168.122.194/api/authentication
curl -k --cookie /tmp/cookie https://192.168.122.194/api/active-sessions?id=svc/rpokH8fD9kx6CaxNLznKx2/test:12 -X DELETE

Status and error codes

The following table lists the typical status and error codes for this request. For a complete list of error codes, see Application level error codes.

Code | Description              | Notes
400  | SessionIdMissing         | No session id is given in the id query parameter.
500  | SessionTerminationFailed | The session could not be terminated due to internal errors.

Manage users and usergroups locally on SPS

Contains the endpoints for managing users and usergroups locally on SPS.

URL

GET https://<IP-address-of-SPS>/api/configuration/aaa/local_database

Cookies

session_id (required): contains the authentication token of the user. The value is the session ID cookie received from the REST server in the authentication response, for example, a1f71d030e657634730b9e887cb59a5e56162860.
For details on authentication, see Authenticate to the SPS REST API. Note that this session ID refers to the connection between the REST client and the SPS REST API. It is not related to the sessions that SPS records (and which also have a session ID, but in a different format).

Sample request

The following command lists the endpoints of the local database.

curl --cookie cookies https://<IP-address-of-SPS>/api/configuration/aaa/local_database

Response

The following is a sample response received when listing the endpoint. For details of the meta object, see Message format.

{
  "items": [
    {
      "key": "groups",
      "meta": {
        "href": "/api/configuration/aaa/local_database/groups"
      }
    },
    {
      "key": "users",
      "meta": {
        "href": "/api/configuration/aaa/local_database/users"
      }
    }
  ],
  "meta": {
    "first": "/api/configuration/aaa/acls",
    "href": "/api/configuration/aaa/local_database",
    "last": "/api/configuration/aaa/settings",
    "next": "/api/configuration/aaa/settings",
    "parent": "/api/configuration/aaa",
    "previous": "/api/configuration/aaa/acls",
    "transaction": "/api/transaction"
  }
}

Element | Description
groups  | Endpoint that contains local usergroups.
users   | Endpoint that contains local usernames.

Status and error codes

The following table lists the typical status and error codes for this request. For a complete list of error codes, see Application level error codes.

Code | Description           | Notes
401  | Unauthenticated       | The requested resource cannot be retrieved because the client is not authenticated and the resource requires authorization to access it. The details section contains the path that was attempted to be accessed, but could not be retrieved.
401  | AuthenticationFailure | Authenticating the user with the given credentials has failed.
404  | NotFound              | The requested object does not exist.
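Once retrieved, the ACL listing shown earlier is plain JSON and easy to post-process on the client side. A sketch in standard-library Python — the response snippet below is abbreviated from the sample ACL response above, and the processing itself is illustrative, not part of the SPS API:

```python
import json

# Abbreviated /api/configuration/aaa/acls response body.
response = json.loads("""
{ "body": [
    { "group": "ssh-view",  "objects": ["/special/ssh"], "permission": "read"  },
    { "group": "ssh-write", "objects": ["/special/ssh"], "permission": "write" }
  ],
  "key": "acls" }
""")

# Build a group -> permission map, e.g. to audit which groups can write.
permissions = {acl["group"]: acl["permission"] for acl in response["body"]}
writers = [g for g, p in permissions.items() if p == "write"]

print(permissions)
print(writers)  # ['ssh-write']
```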
Complex Number Modulus

Tool for calculating the value of the modulus of a complex number. The modulus of a complex number \( z \) is written \( | z | \) (absolute value) and consists of the length of the segment between the point of origin of the complex plane and the point \( z \).

Tag(s): Mathematics

Modulus (Absolute Value) Calculator

Answers to Questions

How to calculate the modulus of a complex number?

The modulus is the length (absolute value) qualifying the complex number \( z = a + ib \) on the complex plane. It is denoted \( | z | \) and is equal to \( | z | = \sqrt{a ^ 2 + b ^ 2} \), with \( a = \Re{z} \) the real part and \( b = \Im {z} \) the imaginary part.

Consider \( z = 1+i \) (of abscissa 1 and of ordinate 1 on the complex plane); then the modulus equals \( |z| = \sqrt{1^2+1^2} = \sqrt{2} \).

The modulus of a real number is equivalent to its absolute value.

What are the properties of the modulus?

Consider the complex numbers \( z, z_1, z_2 \); the complex modulus has the following properties:

$$ |z_1 \cdot z_2| = |z_1| \cdot |z_2| $$

$$ \left| \frac{z_1}{z_2} \right| = \frac{|z_1|}{|z_2|} \iff z_2 \ne 0 $$

$$ |z_1+z_2| \le |z_1|+|z_2| $$

A modulus is an absolute value, therefore necessarily positive (or null):

$$ |z| \ge 0 $$

The modulus of a complex number and its conjugate are equal:

$$ |\overline z|=|z| $$

Source: http://www.dcode.fr/complex-number-modulus — © 2017 dCode
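In code, the modulus is simply the built-in absolute value of a complex number. A quick Python check of the \( |1+i| = \sqrt{2} \) example and of the properties listed above:

```python
import math

# |1 + i| = sqrt(1^2 + 1^2) = sqrt(2)
print(abs(1 + 1j), math.sqrt(2))

# |3 + 4i| = sqrt(9 + 16) = 5
print(abs(3 + 4j))  # → 5.0

# Multiplicativity: |z1 * z2| == |z1| * |z2| (up to float rounding)
z1, z2 = 3 + 4j, 1 - 2j
print(math.isclose(abs(z1 * z2), abs(z1) * abs(z2)))  # → True

# A complex number and its conjugate have the same modulus
print(abs(z1.conjugate()) == abs(z1))  # → True
```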
How To Retrieve Image From MySQL Using Java

In this section we will discuss how to retrieve an image from MySQL using Java. This example explains all the steps for retrieving an image from a MySQL database in Java.

To retrieve an image from the database we have to run a "select" SQL query. While fetching the result set we can use the following lines of code:

Blob img = rs.getBlob(columnIndex);
InputStream is = img.getBinaryStream();

Or we can directly use the following line of code when fetching the result set:

InputStream is = rs.getBinaryStream(columnIndex);

Example

Here is a simple example which demonstrates how to retrieve an image from the database. We first create a database table and insert the image into it. Then we create a Java class where we write the code for connecting to the database, run a SQL query to search the records in the database table, and get the image value using rs.getBinaryStream(columnIndex). For convenience, we then read the image using the read() method of the ImageIO class into a BufferedImage and find the image's width and height.

Source Code: RetrieveImageFromDB.java

import java.awt.image.BufferedImage;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.imageio.ImageIO;

public class RetrieveImageFromDB {
  public static void main(String args[]) {
    String className = "com.mysql.jdbc.Driver";
    String url = "jdbc:mysql://localhost/record";
    String user = "root";
    String password = "root";
    Connection con = null;
    PreparedStatement ps = null;
    InputStream is = null;
    try {
      Class.forName(className);
      con = DriverManager.getConnection(url, user, password);
      System.out.println("img_id \t img_title \t \t img_location \t \t \t \t \t img_name \t WidthXHeight");
      ps = con.prepareStatement("select * from image");
      ResultSet rs = ps.executeQuery();
      while (rs.next()) {
        int img_id = rs.getInt(1);
        String img_title = rs.getString(2);
        String img_location = rs.getString(4);
        String img_name = rs.getString(5);
        is = rs.getBinaryStream(3);
        BufferedImage bimg = ImageIO.read(is);
        int width = bimg.getWidth();
        int height = bimg.getHeight();
        System.out.println(img_id + " \t " + img_title + " \t " + img_location + " \t " + img_name + " \t " + width + "X" + height);
      }
      rs.close();
      is.close();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

Output

When you execute the above code with, say, two records in the database table, the program prints one line per image showing its id, title, location, name and dimensions (width X height).

Posted on: January 30, 2013

Comments

Mani (February 8, 2013), Retrieve Image from MySQL: I went through the code. Is there any possibility to display the image itself instead of displaying its attributes? For example using JSP, I need to show all the records in one HTML table with one of the columns displaying the image. Thanks, Mani
I am struggling with a custom find in CakePHP 2.1. In my model I have this function:

public function findByGenres($data = array()) {
    $this->Item2genre->Behaviors->attach('Containable', array('autoFields' => false));
    $this->Item2genre->Behaviors->attach('Search.Searchable');
    $query = $this->Item2genre->getQuery('all', array(
        'conditions' => array('Genre.name' => $data['genre']),
        'fields' => array('item_id'),
        'contain' => array('Genre')
    ));
    return $query;
}

This returns the following query:

SELECT `Item`.`id` FROM `items` AS `Item`
WHERE `Item`.`id` IN (
    SELECT `Item2genre`.`item_id` FROM `item2genre` AS Item2genre
    LEFT JOIN `genres` AS Genre ON (`genre_id` = `Genre`.`id`)
    WHERE `Genre`.`name` IN ('Comedy', 'Thriller')
)

The result of the query returns Items with either the 'Comedy' or the 'Thriller' genre associated to them. How can I modify the query to only return Items with both the 'Comedy' AND 'Thriller' genres associated to them? Any suggestions?

edit: the content of $data is:

'genre' => array(
    (int) 0 => 'Comedy',
    (int) 1 => 'Thriller'
)

Comment: what is the content of $data['genre']? – noslone

Accepted answer:

You would want your 'conditions' key to be this:

'conditions' => array(
    array('Genre.name' => 'Comedy'),
    array('Genre.name' => 'Thriller')
)

So, specifically for your problem, your $data['genre'] is array('Comedy', 'Thriller'), and you could create a variable with the contents you need by doing:

$conditions = array();
foreach ($data['genre'] as $genre) {
    $conditions[] = array('Genre.name' => $genre);
}

Comment: thank you very much! – 3und80
wrong variable in solution of an inequality

Asked 2011-12-03 by Elmi

Hi everybody, I am new to Sage and trying to solve an inequality, i.e.:

m,a,d,w,e,c = var('m,a,d,w,e,c')
x_br = 1/2*(2*c*d - d*e - d*w)/(2*d*m - e + w)
solve(x_br <= d/2, m)

But what I get is:

[[w == -2*d*m + c], [d == 0], [max(-2*d*m + e, -d*m + c) < w, 0 < d], [-2*d*m + e < w, w < -d*m + c, d < 0, -d*m + e < c], [-d*m + c < w, w < -2*d*m + e, d < 0, c < -d*m + e], [w < min(-d*m + c, -2*d*m + e), 0 < d]]

It is solved for w instead of m. I tried different variables but the result is always the same. What should I do to get the solution in terms of m? Thank you in advance.

Answer (benjaminfjones, 2011-12-03):

The problem here might be that Sage doesn't know whether your variables are positive or negative. If you don't make more assumptions, there may be no way to solve the inequality symbolically. In your inequality, the solution strongly depends on whether (2*d*m - e + w) is positive or negative, and also on whether d is positive or negative. Try looking at the documentation for the function solve_ineq, which uses Maxima to solve single inequalities in one variable or systems of inequalities in several variables.

Comment (Elmi, 2011-12-04): Thank you. But actually there are no assumptions about (2*d*m - e + w) in my problem. The only assumption is d > 0, and applying it does not change the solution. Although I solved the problem by hand, I am curious how to achieve the desired arrangement of variables, which may help me through the remainder of my work.
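The sign dependence that the answer points out can be seen numerically. The following pure-Python sketch (with arbitrary test values, not taken from the original question) shows that clearing the denominator 2*d*m - e + w without flipping the inequality gives a wrong answer exactly when that denominator is negative, which is why a symbolic solver must split into sign cases.

```python
def x_br(m, c, d, e, w):
    # The expression from the question: 1/2*(2*c*d - d*e - d*w)/(2*d*m - e + w)
    return 0.5 * (2*c*d - d*e - d*w) / (2*d*m - e + w)

c, d, e, w = 1.0, 2.0, 3.0, 0.5       # arbitrary test point
m = 0.0                               # here 2*d*m - e + w = -2.5 < 0

actual = x_br(m, c, d, e, w) <= d / 2                     # true inequality: 0.6 <= 1.0
naive = (2*c*d - d*e - d*w) <= d * (2*d*m - e + w)        # multiplied through, no sign flip
assert actual and not naive   # the naive manipulation fails for a negative denominator
```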
Changes between Version 31 and Version 32 of DoxygenPlugin

Timestamp: Sep 5, 2006, 11:41:37 AM (10 years ago)
Author: Christian Boos
Comment: Replaced the Wiki Macros section by a TracLinks section, as the doxygen: is not a macro, but a link resolver

Removed (v31):

== Wiki Macros ==

Using provided wiki macro you can make a link to doxygen documentation page wherever you want. Specified links are related to setted up doxygen path:

{{{
[doxygen:main.html Documentation] # Simple documentation in doxygen path.
[doxygen:FirstProject/main.html First]   # Multiple documentation in separate
[doxygen:SecondProject/main.html Second] # directories in doxygen path.
}}}

Added (v32):

== TracLinks ==

It's possible to create links to doxygen documentation from anywhere within a Wiki text, by using the `doxygen:` link prefix.

The general syntax of such links is: `doxygen:documentation_path/documentation_target`, where `documentation_path` is optional. If `documentation_path` is not specified, the `[doxygen] default_documentation` setting will be used instead.

The `documentation_target` part is used for specifying what Doxygen generated content will be displayed when following the link. It can be:
 - the name of one of the many documentation summary pages generated by Doxygen: annotated, classes, dirs, files, functions, globals, hierarchy, index, inherits, main, namespaces and namespacemembers
 - the name of a documented struct or class
 - the name of a directory
 - the name of a file

Some examples:
{{{
[doxygen:main.html Documentation] # Simple documentation in doxygen path.

[doxygen:FirstProject/annotated Annotated List of Classes in FirstProject]
[doxygen:SecondProject/main.html Main doc index for SecondProject]
}}}
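To make the link-resolution rule concrete, here is a small Python sketch of how a `doxygen:` link might be split into its path and target. The function name and the default-documentation placeholder are hypothetical, not taken from the plugin's source.

```python
DEFAULT_DOCUMENTATION = 'DefaultProject'   # stands in for the [doxygen] default_documentation setting

def resolve(link):
    # 'doxygen:documentation_path/documentation_target', documentation_path optional
    assert link.startswith('doxygen:')
    rest = link[len('doxygen:'):]
    path, _, target = rest.rpartition('/')
    return (path or DEFAULT_DOCUMENTATION, target)

assert resolve('doxygen:main.html') == ('DefaultProject', 'main.html')
assert resolve('doxygen:FirstProject/annotated') == ('FirstProject', 'annotated')
assert resolve('doxygen:SecondProject/main.html') == ('SecondProject', 'main.html')
```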
Printbutton in flash to print htmlpage

I do not know how to make a button to print a specific HTML page. I guess it needs a script, but I do not know the script. Anyone?

Reply: Jeje, does it have to be in Flash? Otherwise, the following is to be placed in HEAD:

<script type="text/javascript">
function docprint() {
  if (navigator.appName == "Netscape") {
    window.print();
  } else {
    document.body.insertAdjacentHTML("beforeEnd", "<object id='PrHandle' width=0 height=0 classid='CLSID:8856F961-340A-11D0-A96B-00C04FD705A2'></object>");
    PrHandle.ExecWB(6, 2);
  }
}
</script>

Below is the form:

<form>
<input type="button" value="Skriv ut" onClick="docprint();">
</form>

Reply (original poster): Yes, but I want to put the code in the Flash button. Can I put it in the URL field of the button's action settings? Please, what code would make a Flash button print an HTML page? I already have the code in HTML. Thank you.

Reply: As in "a Flash button which prints the swf file", or "a Flash button which prints a defined HTML document", or "a Flash button which prints the document the file is loaded in"?
ROOT Reference Guide: RooDataHistSliceIter.cxx

/*****************************************************************************
 * Project: RooFit                                                           *
 * Package: RooFitCore                                                       *
 * @(#)root/roofitcore:$Id$
 * Authors:                                                                  *
 *   WV, Wouter Verkerke, UC Santa Barbara, [email protected]        *
 *   DK, David Kirkby, UC Irvine, [email protected]                     *
 *                                                                           *
 * Copyright (c) 2000-2005, Regents of the University of California          *
 *                          and Stanford University. All rights reserved.    *
 *                                                                           *
 * Redistribution and use in source and binary forms,                        *
 * with or without modification, are permitted according to the terms        *
 * listed in LICENSE (http://roofit.sourceforge.net/license.txt)             *
 *****************************************************************************/

/**
\file RooDataHistSliceIter.cxx
\class RooDataHistSliceIter
\ingroup Roofitcore

RooDataHistSliceIter iterates over all bins in a RooDataHist that
occur in a slice defined by the bin coordinates of the input
sliceSet.
**/

#include "RooFit.h"

#include "RooDataHist.h"
#include "RooArgSet.h"
#include "RooAbsLValue.h"
#include "RooDataHistSliceIter.h"

using namespace std;

ClassImp(RooDataHistSliceIter);

////////////////////////////////////////////////////////////////////////////////
/// Construct an iterator over all bins of RooDataHist 'hist' in the slice defined
/// by the values of the arguments in 'sliceArg'

RooDataHistSliceIter::RooDataHistSliceIter(RooDataHist& hist, RooAbsArg& sliceArg) :
  _hist(&hist), _sliceArg(&sliceArg)
{
  // Calculate base index (for 0th bin) for slice
  RooAbsArg* sliceArgInt = hist.get()->find(sliceArg.GetName()) ;
  dynamic_cast<RooAbsLValue&>(*sliceArgInt).setBin(0) ;

  if (hist._vars.getSize()>1) {
    _baseIndex = hist.calcTreeIndex() ;
  } else {
    _baseIndex = 0 ;
  }

  _nStep = dynamic_cast<RooAbsLValue&>(*sliceArgInt).numBins() ;

//  cout << "RooDataHistSliceIter" << endl ;
//  hist.Print() ;
//  cout << "hist._iterator = " << hist._iterator << endl ;

  Int_t i=0 ;
  for (const auto arg : hist._vars) {
    if (arg==sliceArgInt) break ;
    i++ ;
  }
  _stepSize = hist._idxMult[i] ;
  _curStep = 0 ;
}

////////////////////////////////////////////////////////////////////////////////
/// Copy constructor

RooDataHistSliceIter::RooDataHistSliceIter(const RooDataHistSliceIter& other) :
  TIterator(other),
  _hist(other._hist),
  _sliceArg(other._sliceArg),
  _baseIndex(other._baseIndex),
  _stepSize(other._stepSize),
  _nStep(other._nStep),
  _curStep(other._curStep)
{
}

////////////////////////////////////////////////////////////////////////////////
/// Destructor

RooDataHistSliceIter::~RooDataHistSliceIter()
{
}

////////////////////////////////////////////////////////////////////////////////
/// Dummy

const TCollection* RooDataHistSliceIter::GetCollection() const
{
  return 0 ;
}

////////////////////////////////////////////////////////////////////////////////
/// Iterator increment operator

TObject* RooDataHistSliceIter::Next()
{
  if (_curStep==_nStep) {
    return 0 ;
  }

  // Select appropriate entry in RooDataHist
  _hist->get(_baseIndex + _curStep*_stepSize) ;

  // Increment iterator position
  _curStep++ ;

  return _sliceArg ;
}

////////////////////////////////////////////////////////////////////////////////
/// Reset iterator position to beginning

void RooDataHistSliceIter::Reset()
{
  _curStep=0 ;
}

////////////////////////////////////////////////////////////////////////////////
/// Iterator dereference operator, not functional for this iterator

TObject* RooDataHistSliceIter::operator*() const
{
  Int_t step = _curStep == 0 ? _curStep : _curStep - 1;
  // Select appropriate entry in RooDataHist
  _hist->get(_baseIndex + step*_stepSize) ;

  return _sliceArg ;
}

////////////////////////////////////////////////////////////////////////////////
/// Returns true if position of this iterator differs from position
/// of iterator 'aIter'

bool RooDataHistSliceIter::operator!=(const TIterator &aIter) const
{
  if ((aIter.IsA() == RooDataHistSliceIter::Class())) {
    const RooDataHistSliceIter &iter(dynamic_cast<const RooDataHistSliceIter &>(aIter));
    return (_curStep != iter._curStep);
  }

  return false;
}
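The slice arithmetic in this iterator (a base index plus the step count times a per-dimension multiplier) is row-major indexing of a flattened multi-dimensional histogram. A small Python sketch with a made-up shape and multipliers, not part of the ROOT source, illustrates it:

```python
# Row-major multipliers for a (4, 3, 5) histogram: the last axis varies fastest.
shape = (4, 3, 5)
idx_mult = [15, 5, 1]           # analogous to RooDataHist's _idxMult

def tree_index(coords):
    # Flat index of a bin, in the spirit of calcTreeIndex()
    return sum(c * m for c, m in zip(coords, idx_mult))

# Slice over dimension 1, holding dimensions 0 and 2 fixed at bins 2 and 3:
base = tree_index((2, 0, 3))    # slice coordinate set to bin 0, as in the constructor
step = idx_mult[1]              # the _stepSize for the sliced dimension
slice_indices = [base + k * step for k in range(shape[1])]  # _baseIndex + _curStep*_stepSize
assert slice_indices == [tree_index((2, k, 3)) for k in range(3)]
```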
Reassembly memory can be allocated upfront?

Hello, I see that reassembly memory for TCP streams is only preallocated up to the prealloc values configured at startup. Later, when more reassembly memory is required, Suricata allocates it dynamically and adds it to the pool. In my case I want to allocate most of the memory required by Suricata upfront. Is any method readily available for this?

I can set the prealloc value to memcap/sizeof(tcp_segment)/thread_count, but will that restrict the preallocated reassembly memory to a particular thread, or will it be in a common pool, so that when one thread is overloaded with TCP flows it can take more memory than was allocated to it (provided other threads are underloaded)?

Thanks

Reply: Do you mean it could work like the spare flow pool, so that the TCP segment pool could dynamically adjust its own segment pool size?
Can Irrational Numbers Be Negative

The best way of understanding whether the negative of an irrational number is itself irrational, i.e. whether irrational numbers can be negative, is given below. First, let us discuss irrational numbers: irrational numbers are numbers that cannot be represented as a fraction of two integers; equivalently, their decimal expansions neither terminate nor repeat. An example of an irrational number is pi (approximately 3.14159...; note that the truncation 3.14 by itself is rational).

Now let us discuss whether an irrational number can be negative. We can say that the negative of an irrational number is definitely irrational. Let us take a simple proof. Suppose y is an irrational number but -y is rational; that means -y = p/q for some integers p and q with q not zero. That is a contradiction, because then y = -(-y) = -p/q would also be rational. An irrational number cannot be obtained by dividing one integer by another; for instance, -1/3 = -0.333... is not irrational, because it is the ratio of the two integers -1 and 3.

An irrational number cannot have a finite or even a periodic decimal expansion. A number with a finite decimal expansion such as -0.1428479 is rational, since it equals -1428479/10000000.

Being negative has nothing to do with the property of being rational or not: a negative number might be rational or irrational. Rational numbers are ones that can be written as fractions, such as 1/5; the number -1/5 is also rational. Ones that cannot be written as fractions are irrational, such as the square root of 2; the negative square root of 2 is also irrational. Negative irrational numbers include negative pi and the negative square root of 2, while negative rational numbers include -2, -13, -8, -4/7 and -241/39. (Note that an expression such as 5/0 is undefined, so it is neither rational nor irrational.)

Q 1. Find the positive and negative rational numbers:
1, -2/3, 8/9, 18/19, -7/3, -7/8, 45, -16/13, 45/96, -78/93.
Solution: Positive rational numbers: 1, 8/9, 18/19, 45, 45/96. Negative rational numbers: -2/3, -7/3, -7/8, -16/13, -78/93.

Q 2. Find the positive and negative rational numbers:
21, -12/31, 58/9, 181/194, -17/37, -17/8, 445, -168/113, 145/96, -278/93, -96/78, 72/45.
Solution: Positive rational numbers: 21, 58/9, 181/194, 445, 145/96, 72/45. Negative rational numbers: -12/31, -17/37, -17/8, -168/113, -278/93, -96/78.

From the above discussion, we surely get the answer to the question of whether a rational number can be negative. Some further facts:
1. All rational numbers are a subset of the real numbers, i.e. they lie on the real line.
2. Countless rational numbers lie between any two rational numbers.
3. There are infinitely many rational numbers between any two integers.
4. Any integer can be represented as a rational number.
5. Rational numbers are countable, as we can easily enumerate them.

Rational numbers are very densely populated: as mentioned above, there are infinitely many rational numbers between any two integers. We can perform many operations on rational numbers, such as addition, subtraction, division and multiplication.
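The contradiction step above relies on the rationals being closed under negation. Python's exact Fraction type, used here as an added illustration and not part of the original article, makes this concrete.

```python
from fractions import Fraction

# If -y were rational, say -y = p/q, then y = -(-y) = -p/q would be rational too:
neg_y = Fraction(-1, 3)
assert -neg_y == Fraction(1, 3) and isinstance(-neg_y, Fraction)

# -0.333... repeats, so it is rational: it equals -1/3 in lowest terms.
assert Fraction(-333333, 999999) == Fraction(-1, 3)
```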
iOS question (asked by smileham, 8 months ago): Realm.io RLMException on commitWriteTransaction - Index out of bounds

I've been using the following code successfully, and all of a sudden, for one of my models, Realm throws an index-out-of-bounds error on commitWriteTransaction. The Realm objects are successfully created, and it's only on the final line below that the error appears, and it only happens for one of my models. I did update the server return recently, but the model still appears correct, as it successfully creates an object from the server data (model shown below). By the way, I need to delete and reinstall the app every time it crashes; if I try to open it again, it'll crash before getting anywhere (I'm assuming that's because the DB is messed up). What's going on, and how do I fix this?

Code:

NSDictionary *responseDictionary = (NSDictionary *)responseObject; // response from AFNetworking call to my server
RLMRealm *realm = [RLMRealm defaultRealm];
[realm beginWriteTransaction];
for (NSDictionary *dict in responseDictionary){
    MyModel *object = [[class alloc] initMyModelWithDictionary:dict]; // class is known
    // (print object) - see output below
    [realm addOrUpdateObject:object];
}
[realm commitWriteTransaction]; // Error thrown here

Model printout example (one of the ones from dict):

MyModel {
    id = 32;
    created_at = 2016-07-02 03:39:15 +0000;
    updated_at = 2016-07-02 03:39:15 +0000;
    intA = 1;
    intB = 2;
    intC = 0;
    boolA = 1;
    boolB = 1;
    boolC = 1;
    boolD = 0;
}

Error:

Terminating app due to uncaught exception 'RLMException', reason: 'Index 0 is out of bounds (must be less than 0)'

Answer: This exception is only thrown when an RLMArray, RLMLinkingObjects, or RLMResults has an out-of-bounds access. Given that index 0 is out of bounds, the collection must be empty when the 0th index is accessed. Realm isn't itself accessing this collection when you call commitWriteTransaction; instead, it is delivering a notification to your code, and it is your code that is performing the out-of-bounds access. You should be able to find out easily where this is occurring by turning on exception breakpoints.
Java Programming Tutorial 1: Installing Tools

These instructions will lead you through the process of installing the tools needed in order to program in Java on a Windows computer.

1. You need a JDK (Java Development Kit) installed on your computer. You don't need to download any package with additional tools, such as JavaFX or Java EE, just the JDK by itself. You can get it from the following website: http://www.oracle.com/technetwork/java/javase/downloads/index.html

2. You should have Eclipse installed on your computer (we will use this as the Java code editor). You need the "Eclipse IDE for Java Developers." You can get it from the following website: http://www.eclipse.org/downloads/ Then unzip the file to some directory (say, "C:\Program Files"; it should create a subdirectory called "eclipse").
Focus Topic: Simple Interest (Arithmetic/Commercial Mathematics)

CAT Simple Interest Installments Questions: An important concept for the topic

One concept that you should most definitely look into in depth is the concept of installments in CAT simple interest questions. Installments are actually a part of both simple and compound interest. Let's explore what this concept is all about.

What is an installment? We all know that at times we don't have enough money to buy something, say a car. That doesn't stop us from buying it. So, what do we do? We agree to pay the dealer a certain amount at the end of certain time periods (monthly or yearly). These partial payments, made after fixed time intervals, are called installments.

But is this so simple? Is there any benefit to us or to the dealer? Why does the dealer agree to receive the money in parts rather than all at once? The dealer in this case earns interest on the money he has effectively lent to us. The installments decided in this case consist of two components: the principal amount and the interest amount. Let's take up a couple of tricky problems to understand how CAT simple interest installment questions work.

Question: A tennis racquet worth Rs. 700 was bought by paying a down payment of Rs. 100 and 6 equal installments of Rs. 100 each. Calculate the rate of interest.
1. 300/7
2. 400/7
3. 56
4. 300/11

Solution: As the amount has to be paid in 6 equal installments, the payments on the principal end after 6 months; the buyer pays 600/6 = Rs. 100 per month. The outstanding principal for the first month is 600, for the second month 500, for the third month 400, and so on.
Total effective principal = 600 + 500 + 400 + 300 + 200 + 100 = 2100 (rupee-months).
Amount = principal + interest = 600 + 100 = 700, so the interest is 100 (on one month's money).
Rate = 100 * interest / (principal * time) = (100 * 100) / (2100 * 1/12) = 400/7 %.
Hence, option (b).

Question: What annual installment will discharge a loan of Rs. 1025 due in 2 years at 5% per annum simple interest?
1. 200
2. 300
3. 400
4. 500

Solution: Let each installment be x. The installment paid at the end of the first year accrues one year of interest, so its value at the end of the second year is 1.05x; the installment paid at the end of the second year is x. The total amount paid is 2.05x, and this is equal to Rs. 1025. Therefore 2.05x = 1025, so x = 500.
Hence, option (d).

The complete concept notes for this topic are provided in the link below. Kindly read the full concept article to understand the topic.
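Both worked answers can be verified with exact arithmetic; the snippet below simply re-runs the calculations from the two solutions and is an illustration added here, not part of the original article.

```python
from fractions import Fraction

# Example 1: outstanding principal is 600, 500, ..., 100 over the six months,
# interest is 100, and the time unit is one month = 1/12 year.
principal = sum(600 - 100 * k for k in range(6))
rate = Fraction(100 * 100) / (Fraction(principal) * Fraction(1, 12))
assert principal == 2100
assert rate == Fraction(400, 7)          # about 57.14% per annum, option (b)

# Example 2: equal installments x with 1.05x + x = 2.05x = 1025.
x = Fraction(1025) / Fraction(205, 100)
assert x == 500                          # option (d)
```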
Samacheer Kalvi 11th Maths Solutions Chapter 2 Basic Algebra Ex 2.7

You can download the Samacheer Kalvi 11th Maths Book Solutions Guide PDF; the Tamilnadu State Board solutions help you to revise the complete syllabus and score more marks in your examinations.

Question 1.
Factorize: x^4 + 1. (Hint: Try completing the square.)
Solution: [given as an image in the original; following the hint, x^4 + 1 = (x^2 + 1)^2 - 2x^2 = (x^2 + √2 x + 1)(x^2 - √2 x + 1)]

Question 2.
If x^2 + x + 1 is a factor of the polynomial 3x^3 + 8x^2 + 8x + a, then find the value of a.
Solution:
Let 3x^3 + 8x^2 + 8x + a = (x^2 + x + 1)(3x + a).
Equating the coefficients of x: 8 = a + 3, so a = 8 - 3 = 5.

Additional Questions Solved

Question 1.
Solve x^4 - 7x^3 + 8x^2 + 8x - 8 = 0, given that 3 - √5 is a root.
Solution: When 3 - √5 is a root, 3 + √5 is the other root of the same quadratic factor.
Sum of roots = (3 - √5) + (3 + √5) = 6; product of roots = (3 - √5)(3 + √5) = 9 - 5 = 4.
So one factor is x^2 - 6x + 4, and x^4 - 7x^3 + 8x^2 + 8x - 8 = (x^2 - 6x + 4)(x^2 + px - 2).
Equating the coefficients of x: 12 + 4p = 8, so 4p = 8 - 12 = -4 and p = -1.
The other factor is x^2 - x - 2 = (x - 2)(x + 1), giving the remaining roots x = 2 and x = -1.

Question 2.
Solve the equation x^3 + 5x^2 - 16x - 14 = 0, given that x + 7 is a factor.
Solution:
x^3 + 5x^2 - 16x - 14 = (x + 7)(x^2 + px - 2).
Equating the coefficients of x: 7p - 2 = -16, so 7p = -14 and p = -2.
The other factor is x^2 - 2x - 2; solving x^2 - 2x - 2 = 0 gives x = 1 ± √3.
So the roots are x = -7 and x = 1 ± √3.
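These answers can be spot-checked numerically; the sketch below (an illustration, not part of the textbook) multiplies plain coefficient lists and evaluates the completing-the-square factorization from the hint in Question 1.

```python
import math

def polymul(p, q):
    # Multiply two polynomials given as coefficient lists, lowest degree first.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Question 2: (x^2 + x + 1)(3x + a) matches 3x^3 + 8x^2 + 8x + a only for a = 5.
assert polymul([1, 1, 1], [5, 3]) == [5, 8, 8, 3]   # i.e. 5 + 8x + 8x^2 + 3x^3

# Question 1: completing the square gives
# x^4 + 1 = (x^2 + sqrt(2)x + 1)(x^2 - sqrt(2)x + 1); spot-check at x = 1.7.
s, x = math.sqrt(2), 1.7
assert abs((x*x + s*x + 1) * (x*x - s*x + 1) - (x**4 + 1)) < 1e-9
```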
I'm trying to separately crop a PDF's even and odd pages, by building on top of the accepted answer from "How to crop odd and even pages differently in a PDF?"

My Automator workflow, roughly:

1. automatically Extract Odd & Even Pages; each output PDF filename is suffixed with "(Even Pages)" or "(Odd Pages)"
2. pause the Automator workflow with Ask for Confirmation and manually crop each of the two output PDFs (using Rectangular Selection and Crop in Preview)
3. select the two cropped PDFs using Get Folder Contents
4. Combine PDF Pages with the "Shuffling pages" option

The issue is step 4, which inevitably seems to drop any crop from step 2. The combined PDF has no crop applied to it, even though the two even and odd input PDFs are definitely cropped. Is this expected behaviour from Combine PDF Pages? PDF metadata and annotations do seem to get dropped; does the crop as well?

[screenshot of the Automator workflow in the original]

Comment: Also doesn't preserve any form text. – malhal

Answer 1: There is a package called pdfjoin which uses pdflatex to combine PDF files. You could try joining your PDFs in the following way:

pdfjoin 01.pdf 02.pdf

which will combine the files into a single PDF called 02-combined.pdf in your current working directory. If you don't have LaTeX installed, you can follow this guide to do so. This preserved crops for me when I tried. If you can get this working on your system, you could then look at putting this together in a bash script.

Answer 2: Preview does not 'destructively' crop images (it tells you as much when you crop a PDF). There are four different 'boxes' used to describe the page size of a PDF. When you crop a PDF in Preview, it alters the 'cropBox', but the entire page data is still there in the 'mediaBox', and it is this value that Automator uses to get the PDF pages. The Combine Pages action uses (at its heart) a Python script to combine the pages. This loads each page into a new CoreGraphics object, which is also why the metadata and annotations get dropped. It should be possible to create a script that uses the cropBox instead of the mediaBox, and which preserves annotations.
0.926887
This is an archived post. You won't be able to vote or comment. all 190 comments [–]Guy_from_work 311 points312 points  (31 children) As a father of 256 infants i often run into this problem. EDIT: now i have enough points to feed each one of my infants thank you all! [–]ItsInMyPants 91 points92 points  (11 children) overflow is never funny. [–]cyberpants 40 points41 points  (5 children) Don't worry, he'll be fine so long as he doesn't have a zero-based child indexing system. [–]Captainpatch 17 points18 points  (4 children) I'm pretty sure he had zero kids at some point. The hotel was just trying to protect him from overflowing his puny 8-bit child meter. Now he has zero children again and some other adjacent memory value is fucked. I blame engineering for never expecting somebody to need more than an 8-bit integer for children. Just look at Gengis Kahn, how many children only survived by the grace of their backup under their mother's file? [–]tytotabuki 2 points3 points  (3 children) Did you ever put in a bug report? Maybe in the next patch there will be a fix and we can at least get 10-bits. [–][deleted] 1 point2 points  (2 children) They're probably going to hold back until the tech permits a 16 bit system to be implemented at the same cost. [–]Harkonen_inc 1 point2 points  (1 child) Computers! [–][deleted] 0 points1 point  (0 children) No, Baby Filing System. [–]chefox 10 points11 points  (0 children) Technically, that would be underflow (of an unsigned 8-bit variable). More likely, it's an incorrect cast from a signed sentinel value 0xFF (-1) to unsigned (255). [–]invisibletotoro 8 points9 points  (3 children) Byte me [–]Jeroknite 1 point2 points  (2 children) I was going to, but I bit my lip. [–]orus 1 point2 points  (1 child) I would like a nibble. [–]MikeSD34 0 points1 point  (0 children) Spread the word! [–]PanFlute 28 points29 points  (5 children) Just wait until one of them is a toddler, then you'll be fine again. 
[–]RalphiesBoogers 16 points17 points  (4 children) [–]Norse_of_60 15 points16 points  (3 children) Buy them tiaras? [–]IMasturbateToMyself 9 points10 points  (2 children) NO [–]chickenfun1 2 points3 points  (1 child) They must WIN the tiaras. [–]Xemxah 2 points3 points  (0 children) THEY MUST PAY THE IRON PRICE [–]Au_Contrarian 9 points10 points  (5 children) Does not compute. You must mean you run into this problem as a father of -1 infants. Edit: 0 infants? I guess if you add 1 to an unsigned char 255 you get 0? [–]satanlovescandy 3 points4 points  (2 children) Why are you storing infants in a char? In fact, why are any of you storing infants in objects that aren't arrays? Do you want to store these children are simply remember whether or not they exist? [–]chickenfun1 3 points4 points  (0 children) it was a high char. [–]buster2Xk 3 points4 points  (0 children) Your edit is correct. [–]GobblesJollyRanchers 1 point2 points  (1 child) Do all of them have the same mother or does you being on maury almost guarantee that you're the father? [–]Guy_from_work 0 points1 point  (0 children) I was at on the show alot Due to a massive amount of infants (mostly from different mothers). Going on the show for the first few kids i was happy when Maury said " You are the father" but after a while it was like a day at the office, id just walk in and he would calmly say "We both know what the answer is, you have been here more times then i can count." So yeah 256 kids later...and maybe more on the way? [–]eydryan 0 points1 point  (0 children) Don't worry, they're probably also counting zero! [–]chazzeromus 0 points1 point  (0 children) Actually, as a one byte float, it would be Nan. Mantissa set, sign bit set, max exponent. Babies don't make sense. Uses one byte to count babies, can't tell whether pessimistic or optimistic. [–]silvab 0 points1 point  (0 children) Just for fun, let's pretend that this was actually true, and that your wife was pregnant for the usual 9 months. 
That means that this woman would be giving birth, nonstop, every 9 months, for 192 years. Take a few years for the chance of a twin being born, although the probabilities are astronomically low. [–]LNMagic 0 points1 point  (0 children) Are you a programmer? Just label the first infant "0." [–]rbe 68 points69 points  (24 children) Hexadecimal FF [–]TenZero10 52 points53 points  (7 children) Wouldn't want an infant overflow now, would we? [–]Munkeynz 10 points11 points  (5 children) What does it mean to have a large negative number of infants..? [–]IOUaUsername 31 points32 points  (1 child) infinite misscarriage [–]moogmania 11 points12 points  (0 children) Awesome metal band name [–]Intrexa 5 points6 points  (0 children) Not in this case. The next number would be either 0 or 256, depending on the number of bytes. We already know it's either an unsigned single byte, or larger then 1 byte. [–]RealityChickCheck 1 point2 points  (0 children) Sperm... inside a wad of kleenex [–]propaglandist 0 points1 point  (0 children) MURDER [–][deleted] 1 point2 points  (0 children) God damn it, I was going to make that joke and I am four hours too late. Damn you sleep. [–][deleted] 3 points4 points  (5 children) Reminds me of Civilization 1, where my brother managed to go above 127 children per household, it became something like −89, fell because of that, got back to +112, and after a couple of swings, managed to go to -16, so that it could go over the zero in the next round. He did this two times. On god mode. I know it also wasn’t the only parameter that wrapped like that. (And then we modded the game beyond insanity. 
16 units walking settlers transporting 8 aircraft carriers inside them over water, and throwing nukes… Diplomacy with decisions between “Yes” and “Yes*”… All nations replaced by stereotypical parody nations like “Nazis”, “Jews”, “Gays”, “Blacks”, etc… with hilarious diplomacy dialogs and names… Intro total-converted into a big joke about somebody falling over, bumping his head, and in a phase of insanity/stupor creating this (your) civilization… And so on. ;) [–][deleted] 5 points6 points  (4 children) Something similar happened on my friends NCAA football game. He lost a game because he scored 266 points or something. The final score was like 7 to 10. [–]blueberrywine 2 points3 points  (3 children) My +/- went up so high in NHL 2010 that after +255 it jumped straight to -255. I had to climb out of a 255 goals against count. [–]Intrexa 5 points6 points  (2 children) Things that never happened for 500: This. For this to happen, you need to store the int in 9 bits (a byte + 1 bit, extremely rare, too much work for such little anything), have the display be signed (admittedly likely), and use ones complement integer encoding (extremely rare) It's just a perfect storm of why and bother of coding that it would take to create the situation you described. 127 to -128, maybe, definitely not 255 to -255. [–]blueberrywine 0 points1 point  (0 children) Also keep in mind this is EA we're talking about. [–]blueberrywine -1 points0 points  (0 children) I assumed it to be 255 because I was unsure of the exact amount I had. All I know is that it was over 250 and the next rime I saw it was is the -200's. [–]jamzamurai 36 points37 points  (23 children) The real crazy shit starts at 256 infants [–]jj_yossarian 2 points3 points  (0 children) We need another octet to address this baby. [–]Zimyver 5 points6 points  (19 children) Oh, didn't you know after 255 it will be -255. EDIT: corrected from -1. [–][deleted] 29 points30 points  (0 children) It'd be 0. 
[–]Intrexa 4 points5 points  (1 child) It would either be 0 or 256, depending on the number of bytes. We already know it's either an unsigned single byte, or larger then 1 byte. [–]poizan42 1 point2 points  (0 children) They could be using signed Baker's bytes. [–][deleted] 1 point2 points  (0 children) 255 to -255 would be a byte + a sign bit, nine bits of data. Unlikely, but possible I guess. EDIT: You unlucky devil, though. [–]oh_no_a_hobo 0 points1 point  (13 children) Is it -1 or -255? Trying to figure out how computers think. [–]eltommonator 16 points17 points  (2 children) Should be 0. An 8 bit variable can hold 256 values, so given that the we can reach 255 that leaves 256 possible values that we can get when we include zero. If you have an 8 bit signed number, the highest value it will reach is 128 before it wraps around to -127 (or it could be 127, -128 but I don't think so) 127 before it wraps around to -128 and that gives 256 possible values. Of course, it's all about how we interpret these bits anyway. We could interpret them to be any number we want, but we could only interpret a total of 256 possible unique numbers. It's just generally standard that we take the two possible above interpretations which we find in the C language definition of a 'char' and an 'unsigned char' [–]John_Duh 1 point2 points  (1 child) The usual way for signed integers are that you have one "higher" for the negative values so that is -128 and 127. Why well it has to do with how you interpret negative binary numbers. They start with all ones at -1 and then remove them until -128 which is 0b10000000. [–]eltommonator 0 points1 point  (0 children) Yes, that makes sense. Thanks, corrected. [–]Intrexa 1 point2 points  (2 children) Neither. It would either be 0 or 256, depending on the number of bytes. We already know it's either an unsigned single byte, or larger then 1 byte. 
[–]RagingIce 0 points1 point  (1 child) could be a signed 16 bit integer [–]Intrexa 0 points1 point  (0 children) Right, which in that case it would be the second option, and be 256. I'm sorry of I wasn't clear, I wasn't implying the second option was also unsigned, it's of unknown signage [–]Rulanda -2 points-1 points  (2 children) Wouldn't it be -256? 0 should be the first positive number and thus be the 256th positive which menas you need 256 negative ones, but no negative zero.. did that confuse anybody else beside myself? :D [–]bub0r 3 points4 points  (1 child) its this: unsigned char (1 byte): 0-255 signed char (1 byte): -128-127 short unsigned int (2 bytes): 0-65535 short signed int (2 bytes): -32768-32,767 So it will be 0 or 256 UNLESS a 9 bit signed int is used, then it will be -256 [–]Rulanda -4 points-3 points  (0 children) yeah, something like that is what I wanted to say, it would be 0 or -256, but never -1. [–]gagaouz 0 points1 point  (0 children) What if last year, I had sex with 365 different women and had a 70% fertility rate?! That would mean around next month I couldn't bring all my babies to your room holiday inn?! Ban holiday inn for infant rights!! [–]Stobie 0 points1 point  (0 children) Zero infants is fine. [–]straighttoplaid 12 points13 points  (1 child) I just showed this to my wife and she said "Where would you even get 255 infants? Most people are very stingy with their infants." [–]ObviousFlaw 2 points3 points  (0 children) I'm not! Listen folks, I'm practically GIVING them away! [–][deleted] 13 points14 points  (0 children) They store the number of infant guests in an unsigned byte, so if you bring in 256 of them, it rolls back to zero and suddenly there's 256 infants that are off the records. This is how human trafficking works. Source: I have driven a car. [–]inktar 22 points23 points  (13 children) They misplaced the decimal, it's actually 25.5 infants. [–]Ozzertron 16 points17 points  (4 children) Who brings half a baby?! 
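The wraparound the commenters keep debating is easy to demonstrate with JavaScript typed arrays, which store true fixed-width integers. This sketch assumes, as the thread speculates, that the counter is a single byte; the `addUint8`/`addInt8` helpers are illustrative, not from any hotel software:

```javascript
// Simulate an 8-bit unsigned counter (one byte: 0..255).
function addUint8(a, b) {
  const counter = new Uint8Array(1);
  counter[0] = a;
  counter[0] += b; // arithmetic wraps modulo 256
  return counter[0];
}

// Signed interpretation of the same byte: -128..127 (two's complement).
function addInt8(a, b) {
  const counter = new Int8Array(1);
  counter[0] = a;
  counter[0] += b;
  return counter[0];
}

console.log(addUint8(255, 1)); // 0 (255 wraps to 0, not -1 or -255)
console.log(addInt8(127, 1));  // -128 (signed wrap)
```

With an unsigned byte, 255 + 1 wraps to 0, never -1 or -255; with a signed byte, 127 + 1 wraps to -128, matching the ranges quoted above.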
That's awful! [–]lettherebepuns 42 points43 points  (0 children) Solomon [–]hells_cowbells 9 points10 points  (1 child) Someone who is 4.5 months pregnant? [–]ronnyman123 2 points3 points  (0 children) Fair enough. [–]rumckle 9 points10 points  (0 children) Someone who isn't very hungry [–][deleted] 8 points9 points  (6 children) no it's 2.55 infants. [–]tikketyboo 4 points5 points  (0 children) That's because the average American family has 2.55 children. [–]KennyFuckingPowers 8 points9 points  (4 children) Who brings a fifty-fifth of a baby?! That's awesome! [–]Baryogenesis 15 points16 points  (1 child) 1/55 ~ 0.018 0.55 = 11/20 I'd say I hate to be that guy, but I actually don't. [–]KennyFuckingPowers 1 point2 points  (0 children) Fuckin all stars don't have time to learn math. [–][deleted] 9 points10 points  (1 child) you may not think that extra .05 makes much difference when you're holding 1/2 a baby in your arms, but it does. [–]Kalontas 2 points3 points  (0 children) So the halved baby is male? [–]Norse_of_60 1 point2 points  (0 children) Conjoined [–]bluecalx2 8 points9 points  (3 children) I saw this exact same thing in my hotel room in Dublin a few months ago. At the time, I laughed and figured it was a typo. Now I am confused. [–]Intrexa 5 points6 points  (2 children) I'm sure it's because of the limitation of a piece of software that is common to hotels. [–]bluecalx2 0 points1 point  (0 children) That's plausible, although it seems odd that no one would notice this and fix it across more than one hotel chain. [–]CODDE117 0 points1 point  (0 children) That would explain a LOT of the comments. [–]aldennn 5 points6 points  (1 child) Haha remember that one time we had 257 in Cozumel... [–][deleted] 4 points5 points  (0 children) The best part about Mexico is that you can put as many infants in your room as you want and nobody ever says shit as long as you tip them a couple bucks extra. 
[–]cakeonaplate 11 points12 points  (2 children) hotel motel Holiday Inn we at the... [–]jessr94 3 points4 points  (0 children) "so, you and 254 of your closest friends can come…" [–]Barrel-rider 2 points3 points  (0 children) Dammit. I need to cancel a reservation... [–]ganymedesearat 2 points3 points  (0 children) Something about outdated processor limits. [–]PsychoNerd91 2 points3 points  (0 children) I feel that we should test this theory. [–]TwoLegsJoe 2 points3 points  (4 children) I like to think that's how many you could efficiently pack in there. Like, you're physically unable to fit 256 in there. [–]zductiv 5 points6 points  (0 children) Does the room come with a blender? [–][deleted] 0 points1 point  (2 children) Well, for that we need to find the average size of a Holiday Inn room, subtract some from things in the room, and find the average size of an infant. This needs to be on xkcd what-if. [–]Easilycrazyhat 1 point2 points  (0 children) Assuming a baby is (very roughly) 20in x 3in x 4in, that makes it approx. 0.139 feet3. The best information I could find on a holiday inn hotel room was a room in hong kong that has 23 square meters of space, which (according to the math I hastily looked up) equals about 247 square feet. Assuming that the room is about 10' high, this leaves 2475 cubic feet of space in the room. This means that you could fit at least 17,818 babies in that room.....if you really needed to [–]TheCakeFlavor 0 points1 point  (0 children) I'm on it. [–]offtheEcliptic 2 points3 points  (0 children) But if you bring a 256th infant, it just rolls back. [–]eak125 1 point2 points  (0 children) We now know the capacity of the infant ball pit in the suite! CANNONBALL!!! [–]ironworker3 1 point2 points  (0 children) Fucking pedo bear, now there is a max! [–]r00tbeer 1 point2 points  (0 children) As an employee of a HI I can confirm that the safe maximum number of infants per room is ACTUALLY 257. This must be an outdated card. 
[–]inpu 1 point2 points  (0 children) They should really upgrade to 16bit rooms. [–]Music-Chicky25 1 point2 points  (0 children) Only 3 adults are allowed to 225 infants??? Poor babies, and poor adults!! [–]Jer_Cough 1 point2 points  (2 children) You could probably double that if you use those vacuum sweater storage bags. [–][deleted] 0 points1 point  (1 child) [–]Jer_Cough 0 points1 point  (0 children) I lol'd, That is my new wallpaper. [–]Karabaja 1 point2 points  (0 children) const unsigned char max_infants = 0xFF; [–]mr_dumptruck 1 point2 points  (0 children) Of course. Assuming it's a class C hotel room you'd have to start a new baby subnet. [–]Trigunesq 1 point2 points  (0 children) Challenge accepted [–][deleted] 1 point2 points  (1 child) ah unsigned integer kid if they have twins it will invoke undefined behavior. [–]Kalontas 1 point2 points  (0 children) It will spawn a .MissingNo kid. [–]Bubba326 1 point2 points  (0 children) Challenge accepted [–]Tdhutchi 1 point2 points  (0 children) Imagine being next to the room with 255 babies. [–]Kalontas 1 point2 points  (0 children) It's an error in the matrix. They accidentally left a default full integer. [–]Deto 1 point2 points  (0 children) If you add one more infant, you overflow and go back to having 0 infants. [–][deleted] 1 point2 points  (0 children) well that escalated quickly [–]right_in_two 1 point2 points  (1 child) I would have thought the maximum would be 400 BABIES. [–]DUBLH[S] 1 point2 points  (0 children) Give them powerthirst and they'll run as fast as KENYANS! [–]jumbofighter 0 points1 point  (0 children) BRING ON THE INFANTS! [–]guyver_dio 0 points1 point  (0 children) They just forgot the decimal, 2.55 [–]fragilebroken 0 points1 point  (0 children) Had to check comments to see if someone had proposed the infants to fill the volume of the room theory. Success. I also enjoy the decimal concept. 
[–]lomlomlom 0 points1 point  (0 children) theres only one way to find out for sure [–]chrisinurpants 0 points1 point  (3 children) Perfect for pedos... Or dead baby jokes... [–]NyQuil012 2 points3 points  (2 children) Like what if I wanted a snack after the orgy? [–]chrisinurpants 1 point2 points  (1 child) You must have escaped that recent pedo bust. [–]NyQuil012 2 points3 points  (0 children) Escaped it? Shit, I called it in. More for me. [–]RealityChickCheck 0 points1 point  (0 children) Clown college starts early for infants [–]Chanther 0 points1 point  (0 children) I heard that they tried to get this changed, but the baby-farming lobby in Boston is too strong. [–]hotsaucehelper 0 points1 point  (0 children) Oh, man, and I was all set to open up the Hotsaucehelper's MATERNITY WAREHOUSE/BIRTHING STADIUM in a Boston Holiday Inn.... [–]perb123 0 points1 point  (1 child) Octomom approves. [–]NyQuil012 1 point2 points  (0 children) The Duggars obviously don't stay at this hotel. [–]mrbarry1024 0 points1 point  (0 children) IRL subnetting [–]robboywonder 0 points1 point  (1 child) are children binary? [–]Norse_of_60 0 points1 point  (0 children) its not that black or white. [–]UHRossy 0 points1 point  (0 children) Whatchu doin? [–][deleted] 0 points1 point  (0 children) The comments on reddit never disappoint :) [–][deleted] 0 points1 point  (0 children) Everyone knows it's a bad idea to have more infants than ip ports. [–]DNoo 0 points1 point  (0 children) maybe they meant insects [–]Geler 0 points1 point  (0 children) I found the owner of this Holiday Inn. [–][deleted] 0 points1 point  (0 children) That must be for the cockroaches to obey. [–]SuspiciousLamp 0 points1 point  (0 children) The system is 8-bit. [–]eeks12 0 points1 point  (0 children) Sandusky probably stays there on weekends. [–][deleted] 0 points1 point  (0 children) So many babies!! [–]wolfman863 0 points1 point  (0 children) Challenge accepted. 
[–]awsm_me123 0 points1 point  (0 children) so most mexicans cant rent rooms there? [–]kill-t 0 points1 point  (0 children) infINT [–]p4bl0 0 points1 point  (0 children) They must store them in a VARCHAR. [–]blueberrywine 0 points1 point  (0 children) That is way too many infants. [–]svenM 0 points1 point  (0 children) 255 children should be enough for everybody Bill Gates [–]Unshavenhelga 0 points1 point  (0 children) That's a lot of babies. [–]SomeFakeInternetName 0 points1 point  (0 children) Even the Duggars couldn't fill that room. [–]Watchmaker85 0 points1 point  (1 child) Holiday inn in Saugus? [–]DUBLH[S] 0 points1 point  (0 children) It's in Cambridge. A couple blocks from the end/start of the green line [–]notaplatypus 0 points1 point  (0 children) so one child = 127.5 infants? good to know [–]Robiwasabi 0 points1 point  (0 children) Is is just me or is there one too many legs there??? [–][deleted] 0 points1 point  (0 children) Binary 11111111. [–]CODDE117 0 points1 point  (0 children) /r/programming leaked! FLOODED! [–]Su13lim1nal 0 points1 point  (0 children) Sometimes typos aren't funny. [–]Bread_Heads 0 points1 point  (0 children) Every sperm is sacred. [–]cutofmyjib 0 points1 point  (0 children) Byte overflow. [–]GingerNinja141 0 points1 point  (0 children) Coming soon, it's Jon and Kate plus 256, followed by duo-centi-penta-deca-hexo-mom [–]cyberphonic 0 points1 point  (0 children) Total number of guests = 3. Total number of infants per room = 255. This can only mean that Holiday Inn rooms contain up to 252 infants before you even check in. I suspect foul play. [–]blore40 0 points1 point  (0 children) Holiday Inn? Are you feeling smarter? [–]no_thks_havin_butter 0 points1 point  (0 children) Yeahhhh! Babies EVERYWHERE!! [–]aljasser 0 points1 point  (1 child) make sense, class C host's maximum IP addresses [–][deleted] -1 points0 points  (0 children) maximum size of a byte, you fucking mongoloid. 
[–]Brillians[🍰] 0 points1 point  (0 children) A byte of babies [–]h_lehmann 0 points1 point  (0 children) Not the fault of that Holiday Inn. I recently booked a room through expedia.com, at a hotel having nothing to do with Holiday Inns, and I noticed the same wierd restriction. Someone obviously set a 8-bit variable default value as -1. [–]stillonthecouch 0 points1 point  (0 children) Hope this is the one that just had a meth lab discovered at it in peabody [–]Nutsack_Clapton 0 points1 point  (0 children) OK, so in the event of a fire, who evacuates all of these potential infants? [–]dogfunky 0 points1 point  (0 children) OR 11111111 in binary [–]Beer_Can_Is_Good 0 points1 point  (0 children) Brookline? [–]FoobarMontoya 0 points1 point  (0 children) You'll never need more than 1 babybyte of storage [–]yangar 0 points1 point  (0 children) But I need more Starcraft upgrades... [–]soulknight56 0 points1 point  (0 children) 55 less and this could be a joke for /r/starcraft [–]JSleek -1 points0 points  (0 children) Challenge accepted.
Featured

Writing a JavaScript Interpreter from Scratch

Posted 7 months ago · by axetroy · 6015 views · from Shares

I've been studying ASTs lately. There's an earlier article along the lines of: Interviewer: "Have you heard of Babel? Ever written a Babel plugin?" Answer: "No." Game over.

Why learn about it? Because once you understand the AST, you can do pretty much anything.

Simply put: use JavaScript to run JavaScript code. This article shows you how to write the simplest possible interpreter.

Preface (skip this if you already know how to execute custom JS code)

How many ways are there to execute a custom script? Let's list them:

Web

Create a script element and insert it into the document:

function runJavascriptCode(code) {
  const script = document.createElement("script");
  script.innerText = code;
  document.body.appendChild(script);
}
runJavascriptCode("alert('hello world')");

eval

Countless people will tell you not to use eval, although it can execute custom scripts:

eval("alert('hello world')");

Reference: Why is using the JavaScript eval function a bad idea?

setTimeout

setTimeout can also execute code, but it defers the operation to a later turn of the event loop:

setTimeout("console.log('hello world')");
console.log("I should run first");
// Output
// I should run first
// hello world

new Function

new Function("alert('hello world')")();

Reference: Are eval() and new Function() the same thing?

NodeJs

require

You can write the JavaScript code into a .js file and require it from another file to execute it. Node.js caches modules, so executing N such files may consume a lot of memory; you have to clear the cache manually when you're done.

Vm

const vm = require("vm");
const sandbox = { animal: "cat", count: 2 };
vm.runInNewContext('count += 1; name = "kitty"', sandbox);

Of the approaches above, only Node's vm executes code gracefully; the others all depend on host-environment APIs.

What is an interpreter for?

Executing custom code on any platform that can run JavaScript. Mini Programs, for example, block all of the above routes to executing custom code. Does that really mean custom code can't be executed there?

Not at all.

How it works

Based on the AST (abstract syntax tree), find the corresponding objects and methods, then evaluate the corresponding expressions.

That sounds a bit convoluted, so here's an example: console.log("hello world");

Principle: use the AST to find the console object, then find its log function, and finally call the function with the argument hello world.

Tools

• Babylon, to parse the code and generate the AST
• babel-types, to check node types
• astexplorer, to inspect the abstract syntax tree at any time

Let's write some code

We'll use console.log("hello world") as the running example. Open astexplorer and look at the corresponding AST. (AST screenshot)

As the screenshot shows, to reach console.log("hello world") we have to walk down the tree through the File, Program, ExpressionStatement, CallExpression and MemberExpression nodes, which also involve Identifier and StringLiteral nodes.

First we define visitors; visitors describe how each node type is handled:

const visitors = {
  File() {},
  Program() {},
  ExpressionStatement() {},
  CallExpression() {},
  MemberExpression() {},
  Identifier() {},
  StringLiteral() {}
};

Then define a function that walks a node:

/**
 * Walk a node
 * @param {Node} node the node object
 * @param {*} scope the scope
 */
function evaluate(node, scope) {
  const _evalute = visitors[node.type];
  // If there is no handler for this node type, throw an error
  if (!_evalute) {
    throw new Error(`Unknown visitors of ${node.type}`);
  }
  // Run the handler for this node
  return _evalute(node, scope);
}

Here is the handler implementation for each node:

const babylon = require("babylon");
const types = require("babel-types");

const visitors = {
  File(node, scope) {
    evaluate(node.program, scope);
  },
  Program(program, scope) {
    for (const node of program.body) {
      evaluate(node, scope);
    }
  },
  ExpressionStatement(node, scope) {
    return evaluate(node.expression, scope);
  },
  CallExpression(node, scope) {
    // Resolve the callee
    const func = evaluate(node.callee, scope);
    // Evaluate the call arguments
    const funcArguments = node.arguments.map(arg => evaluate(arg, scope));
    // If the callee is a property access, e.g. console.log
    if (types.isMemberExpression(node.callee)) {
      const object = evaluate(node.callee.object, scope);
      return func.apply(object, funcArguments);
    }
  },
  MemberExpression(node, scope) {
    const { object, property } = node;
    // Get the property name
    const propertyName = property.name;
    // Resolve the object
    const obj = evaluate(object, scope);
    // Read the value
    const target = obj[propertyName];
    // Return the value; if it is a function, bind its `this` context
    return typeof target === "function" ? target.bind(obj) : target;
  },
  Identifier(node, scope) {
    // Look up the variable's value
    return scope[node.name];
  },
  StringLiteral(node) {
    return node.value;
  }
};

function evaluate(node, scope) {
  const _evalute = visitors[node.type];
  if (!_evalute) {
    throw new Error(`Unknown visitors of ${node.type}`);
  }
  // Recursive call
  return _evalute(node, scope);
}

const code = "console.log('hello world')";
// Generate the AST
const ast = babylon.parse(code);
// Walk the AST
// The execution context must be passed in, otherwise the `console` object can't be found
evaluate(ast, { console: console });

Try running it in Node.js:

$ node ./index.js
hello world

Now change the code being run:

const code = "console.log(Math.pow(2, 2))";

Because the context has no Math object, you get this error:

TypeError: Cannot read property 'pow' of undefined

Remember to pass it in the context: evaluate(ast, { console, Math });

Run again and you get another error: Error: Unknown visitors of NumericLiteral

It turns out the 2 in Math.pow(2, 2) is a numeric literal, whose node type is NumericLiteral, but we never defined a handler for that node in visitors. So let's add one:

NumericLiteral(node) {
  return node.value;
}

Run again, and the result matches expectations:

$ node ./index.js
4

At this point, the most basic function calls work.

Going further

It's an interpreter, so surely it can run more than hello world? Of course. Let's declare a variable:

var name = "hello world";
console.log(name);

First look at the AST structure. (AST screenshot)

visitors is missing handlers for the VariableDeclaration and VariableDeclarator nodes, so let's add them:

VariableDeclaration(node, scope) {
  const kind = node.kind;
  for (const declartor of node.declarations) {
    const { name } = declartor.id;
    const value = declartor.init ? evaluate(declartor.init, scope) : undefined;
    scope[name] = value;
  }
},
VariableDeclarator(node, scope) {
  scope[node.id.name] = evaluate(node.init, scope);
}

Run the code: it already prints hello world.

Now let's declare a function:

function test() {
  var name = "hello world";
  console.log(name);
}
test();

Following the same steps, a few more node handlers are added:

BlockStatement(block, scope) {
  for (const node of block.body) {
    // Evaluate each statement in the block
    evaluate(node, scope);
  }
},
FunctionDeclaration(node, scope) {
  // Build the function
  const func = visitors.FunctionExpression(node, scope);
  // Define the function in the scope
  scope[node.id.name] = func;
},
FunctionExpression(node, scope) {
  // Construct a native function ourselves
  const func = function() {
    // TODO: bind the function's parameters
    // Evaluate the function body
    evaluate(node.body, scope);
  };
  // Return the function
  return func;
}

Then modify CallExpression:

// If the callee is a property access, e.g. console.log
if (types.isMemberExpression(node.callee)) {
  const object = evaluate(node.callee.object, scope);
  return func.apply(object, funcArguments);
} else if (types.isIdentifier(node.callee)) { // added
  func.apply(scope, funcArguments); // added
}

Running this also prints hello world. Full example code.

Other notes

For space reasons I won't cover how to handle every node; the basic principle has been explained above, and the remaining nodes can be handled the same way. One thing to watch out for: above, I used a single scope throughout, with no parent/child scope distinction. That means code like this would run:

var a = 1;
function test() {
  var b = 2;
}
test();
console.log(b); // 2

The fix: while recursing through the AST, switch to a new scope whenever you hit a node that creates a child scope, such as function or for...in.

Finally

The above is only a simple model; it barely even qualifies as a toy, and plenty of pitfalls remain. For example:

• Variable hoisting: scopes need a pre-parse phase
• Scopes have many other problems
• Certain nodes must be nested under a specific ancestor; for example, super() must appear inside a Class node, however deeply nested
• this binding

After several consecutive late nights, I wrote a fairly complete library, vm.js, adapted from jsjs, standing on the shoulders of giants. Compared to it:

• Refactored the recursion, solving some previously unsolvable problems
• Fixed multiple bugs
• Added test cases
• Supports ES6 and other syntax sugar

It's still under development; once it's more polished, the first version will be released. Criticism and PRs are welcome.

Imagine Mini Programs becoming "big programs": business code pushed over WebSocket and executed, with the Mini Program source just an empty shell. Thrilling to think about.

Project: https://github.com/axetroy/vm.js
Live preview: http://axetroy.github.io/vm.js/
Original post: http://axetroy.xyz/#/post/172

15 replies

I've also been looking at compilers and compiler theory recently. Upvoted.

Nice one, mate. Referencing your blog and code, I wrote a post of my own; next I plan to implement it in Java.

Great. OP is the kind of talent that grows in proportion to the pressure.

It seems a lot of people who are really good with Node also play with Go.

Bookmarking this for later.

How old are you?

Impressive, starred.

Supported, though to bootstrap you'd still have to rely on C++ and assembly…
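The scope fix described above can be sketched as a small Scope class with a parent pointer: lookups walk up the chain, while declarations stay local. This is an illustrative design of my own, not code taken from the vm.js repository:

```javascript
// Minimal lexical scope with parent-chain lookup (illustrative sketch).
class Scope {
  constructor(parent = null) {
    this.parent = parent;
    this.vars = Object.create(null);
  }
  declare(name, value) {
    // `var`-style declaration: always lands in the current scope
    this.vars[name] = value;
  }
  get(name) {
    if (name in this.vars) return this.vars[name];
    if (this.parent) return this.parent.get(name);
    throw new ReferenceError(`${name} is not defined`);
  }
  has(name) {
    return name in this.vars || (this.parent !== null && this.parent.has(name));
  }
}

// A FunctionExpression handler would create a child scope per call:
const globalScope = new Scope();
globalScope.declare("a", 1);

const fnScope = new Scope(globalScope); // child scope for the call
fnScope.declare("b", 2);

console.log(fnScope.get("a"));    // 1, resolved via the parent chain
console.log(globalScope.has("b")); // false, `b` no longer leaks out
```

With this in place, a handler for function or for...in nodes would pass `new Scope(currentScope)` into the recursive evaluate call, so inner declarations stop leaking into the outer scope.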
Create token waves with acronym?

How do I assign an acronym (ticker) to a token when creating it? Example: bitcoin = btc

If your token has a high rating, WCT weight, and votes sum, you can create a ticker for that token for the exchange. https://docs.wavesplatform.com/en/waves-token-rating/token-management.html#ticker

Great, that solves this topic, thank you very much
MyBatis: returning a one-to-many collection whose objects contain another object

java | 2021-03-22 11:16:48 | read 142 times | comments (0)

Object 1

package com.project.dto;

import lombok.Data;

import java.util.List;

@Data
public class DeptDTO {
    private String deptCode;
    private String deptName;
    private String companyName;
    private List<UserDTO> children;
}

Object 2

package com.project.dto;

import lombok.Data;

@Data
public class UserDTO {
    private String id;
    private String name;
}

resultMap mapping

<resultMap id="DeptUserResultMap" type="com.project.dto.DeptDTO">
    <result column="deptCode" jdbcType="VARCHAR" property="deptCode"/>
    <result column="companyName" jdbcType="VARCHAR" property="companyName"/>
    <result column="deptName" jdbcType="VARCHAR" property="deptName"/>
    <collection property="children" ofType="com.project.dto.UserDTO">
        <result column="userId" property="id"/>
        <result column="userName" property="name"/>
    </collection>
</resultMap>

MyBatis SQL

<select id="selectByCompanyId" resultMap="DeptUserResultMap">
    select a.user_id as userId, d.name as userName,
           b.id as deptCode, b.department_name as deptName, b.parent_id,
           c.company_name as companyName
    from employee_detail a
    left join base_department b on FIND_IN_SET(b.id, a.dept_id)
    left join base_company c on a.company_id = c.id
    inner join base_people d on a.people_id = d.id and d.del_flag = 0
    where 1=1
    <if test="companyId != null">
        and a.company_id = #{companyId}
    </if>
    <if test="userName != null">
        and a.user_name like concat('%', #{userName}, '%')
    </if>
    group by a.user_id, b.id
</select>

Returned result

{
  "code": 0,
  "msg": null,
  "count": 0,
  "data": [
    {
      "deptCode": null,
      "deptName": null,
      "companyName": "***公司",
      "children": [
        {
          "id": "2222",
          "name": "test"
        }
      ]
    }
  ]
}

Note: a typical use of this query is fetching a company's members, where company and member form a one-to-many relation, and it can query several companies at once; the same pattern also works for directory permission queries.

For further extensions, see https://blog.csdn.net/BushQiang/article/details/100707245

Please credit the source when reposting, thanks.
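Conceptually, what the <collection> mapping above does is group the flat rows produced by the JOIN into nested parent objects. A hypothetical JavaScript sketch of that grouping (field names mirror the resultMap, but the function itself is only an illustration, not something MyBatis exposes):

```javascript
// Group flat JOIN rows into one department object per deptCode,
// each with a `children` array of users (illustrative sketch).
function groupRows(rows) {
  const byDept = new Map();
  for (const row of rows) {
    let dept = byDept.get(row.deptCode);
    if (!dept) {
      dept = {
        deptCode: row.deptCode,
        deptName: row.deptName,
        companyName: row.companyName,
        children: [],
      };
      byDept.set(row.deptCode, dept);
    }
    dept.children.push({ id: row.userId, name: row.userName });
  }
  return [...byDept.values()];
}

const rows = [
  { deptCode: "D1", deptName: "R&D", companyName: "Acme", userId: "u1", userName: "Alice" },
  { deptCode: "D1", deptName: "R&D", companyName: "Acme", userId: "u2", userName: "Bob" },
];
console.log(groupRows(rows)); // one department object with two children
```

MyBatis performs the same kind of de-duplication when it applies the resultMap: the department columns identify the parent, and each row contributes one element to the children collection.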
# Importing 'engine' module
import engine

# Adding global variables:
ATIH_PATH = engine.macro.ProgramFiles_x86 + r"\Acronis\TrueImageHome\TrueImage.exe"
ATIH_TITLE = "[REGEXPTITLE:Acronis.True.Image]"
ATIH_BACKUP_WIZARD_TITLE = "[CLASS:ArchiveWizard]"

# Create new 'engine' object
myTC = engine(logfile=r".\results.log")

# Creating new 'application' and 'window' objects:
atih = myTC.application(ATIH_PATH)
atih_main_window = atih.window(title=ATIH_TITLE)
atih_backup_wizard_window = atih.window(title=ATIH_BACKUP_WIZARD_TITLE)

# Run created application and wait for main window to appear
atih_pid = atih.run(wait=False, show=engine.MAXIMIZE)
atih_main_window.wait(60)

# Run Backup wizard window and create its screen shot, then close it
atih_main_window.click(x=445, y=322)
atih_main_window.click(image=r".\extra\mydiskdrives.png", click=1, button="left", timeout=60)
atih_backup_wizard_window.wait(60)
atih_backup_wizard_window.capture(filename=r".\screenshots\backup_wiz.png")
atih_backup_wizard_window.close()
atih_main_window.close()

# Exit
myTC.exit(0)
$include

The $include projection operator is used in queries to specify the fields that should be returned in the result documents. By using $include, you can choose to retrieve only the fields of interest, making your query more efficient by minimizing the amount of data returned.

The syntax for $include is as follows:

{ field: 1 }

Here, field is the name of the field to include, and 1 indicates that you want the field included in the result documents. You can include multiple fields by specifying them in a comma-separated list:

{ field1: 1, field2: 1, field3: 1 }

Example

Suppose we have a collection called books with the following documents:

[
  {
    title: 'The Catcher in the Rye',
    author: 'J.D. Salinger',
    year: 1951,
    genre: 'Literary fiction',
  },
  {
    title: 'To Kill a Mockingbird',
    author: 'Harper Lee',
    year: 1960,
    genre: 'Southern Gothic',
  },
  {
    title: 'Of Mice and Men',
    author: 'John Steinbeck',
    year: 1937,
    genre: 'Novella',
  },
];

If you want to retrieve only the title and author fields from the documents in the books collection, you can use the $include projection operator as follows:

db.books.find({}, { title: 1, author: 1, _id: 0 });

The result will be:

[
  {
    title: 'The Catcher in the Rye',
    author: 'J.D. Salinger',
  },
  {
    title: 'To Kill a Mockingbird',
    author: 'Harper Lee',
  },
  {
    title: 'Of Mice and Men',
    author: 'John Steinbeck',
  },
];

Note that we have also excluded the _id field (which is included by default) by setting it to 0.

Keep in mind that you cannot combine $include and $exclude (or 1 and 0) in the same query, except for the _id field, which can be excluded even when other fields are being included.
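To make the inclusion semantics concrete, here is a small JavaScript function that mimics how an inclusion projection selects fields from a document. This is only an illustration of the behavior, not MongoDB's actual implementation, and the `project` helper is a made-up name:

```javascript
// Apply an inclusion-style projection: keep only fields marked 1,
// plus _id unless it is explicitly set to 0 (illustrative sketch).
function project(doc, spec) {
  const out = {};
  if ("_id" in doc && spec._id !== 0) out._id = doc._id;
  for (const [field, flag] of Object.entries(spec)) {
    if (field !== "_id" && flag === 1 && field in doc) {
      out[field] = doc[field];
    }
  }
  return out;
}

const book = {
  _id: 1,
  title: "Of Mice and Men",
  author: "John Steinbeck",
  year: 1937,
};

console.log(project(book, { title: 1, author: 1, _id: 0 }));
// { title: 'Of Mice and Men', author: 'John Steinbeck' }
```

Note how _id is kept by default unless the spec explicitly sets it to 0, matching the rule described above.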
Super User is a question and answer site for computer enthusiasts and power users.

Possible Duplicate: Alternative Windows command shell and console?

Is there an alternative to the standard Windows command prompt (cmd.exe)? It is sometimes difficult reading debug data; I'm curious to know if there are any alternatives that could offer tabbed windows etc.

marked as duplicate by Tom Wijsman, Diago Aug 24 '11 at 12:26

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

    cmd.exe is a command shell, it only handles command input. The interface (tabs, toolbars, etc) would be part of a terminal program (such as Console). – grawity Aug 11 '10 at 9:53

    @Nelson - I just flagged it as a dup, so a mod can close it. – ripper234 Aug 24 '11 at 12:04

1 Answer (accepted)

Try Console. Console is a Windows console window enhancement. Console features include: multiple tabs, text editor-like text selection, different background types, alpha and color-key transparency, configurable font, different window styles.
When to Use Graphics for Your Site

Graphics can help seize and maintain viewer interest by adding aesthetic appeal to your site. In addition, the use of graphics can be greatly beneficial if you wish to express your ideas visually. Graphics have the ability to completely transform a monotonous site into a striking platform.

However, implementing graphics effectively isn't trivial, considering the host of factors that must be taken into account. That being said, we dedicate this post to helping you grasp the impact of graphics and when it's appropriate to use them.

When Should You Use Graphics?

While graphics can provide a great deal of value, it's often quite easy to overuse them. Graphics should only be used if it makes sense to use them. Put differently, your use of visual content has to serve a purpose, or at least be relevant to the rest of the content on your web page.

Occasionally, you may find that a written description isn't quite as telling as a visual one, and in that case, the use of graphics is highly recommended. For example, if you're selling a product and you'd like your audience to view the product's finer details, graphics are the way to go.

In some cases, you may find that you don't need graphics at all. Should you just leave your web page image-less? No; you should still add one or two images as a means of capturing your audience's attention or breaking up the text content on your web page.

Three Factors to Consider Before Using Graphics

As we've already mentioned, using graphics haphazardly can do more harm than good, which is why we highly recommend taking the following factors into consideration beforehand.

1. File Format

There's an extensive array of image file formats available, with the most common being JPG, PNG, and GIF.

It's important to note the advantages and disadvantages of each format so that you choose the one that will offer the results you're looking for.

If the image that you'd like to add to your website doesn't belong to any of the three formats mentioned above, we highly recommend that you convert the file into a JPG, PNG, or GIF file. You can do so using website-building tools or third-party software.

Image File Formats:

• JPG – This format is intended primarily for photographs, and it does a great job of handling numerous colors manageably and effectively, resulting in optimal quality at a reasonable file size. Tools that export JPEG files let you adjust the image quality and its trade-off against file size.

• PNG – This format is relatively new compared to JPG and GIF. It is excellent at handling large fields of color and supports various levels of transparency, so semi-opaque parts of your image show through cleanly.

• GIF – This format is the oldest of the three, and it's used for flat-color graphics. Today, GIF isn't the favored format for standard web graphics, but its ability to hold numerous frames in one file is the reason it's still popular in a lot of applications, especially animated GIFs or images with solid color or text.

2. Image Size

Generally speaking, image files that are smaller in size tend to be much better for web graphics. The reason is that smaller files load more quickly, allowing your web page to load in its entirety a lot faster, which helps significantly increase your website's usability.
However, you should keep in mind that decreasing the size of an image file can decrease its quality quite notably, so you need to adjust your images in such a way that yields a reasonable file size without compromising the quality of the image, which can be done using various tools.

3. Resolution

Adjusting the resolution of an image can help you achieve a smaller file size without taking away from the image's quality. How can you do that? It all begins with understanding that the resolution of an image is measured in dots per inch (DPI), and most monitors can only display 72dpi.

In other words, if you have an image with a resolution higher than 72dpi, the extra resolution just takes up space without adding to the displayed quality, so you'll need to reduce the image's resolution to 72dpi or less, depending on the size you're aiming for.

Five Steps to Utilize Web Graphics Effectively

There are a number of best practices that you should be aware of before employing web graphics on your website. By sticking to these practices, you help ensure maximum effectiveness.

1. Graphics Should Serve a Purpose

Using graphics that don't align properly with the purpose and style of your page is pointless. The images should go hand in hand with the design of your page as well as its content. If the images you'd like to add don't serve a definite purpose that benefits your site, refrain from adding them.

2. Refrain From Using Larger Images

We've already talked about the importance of favoring smaller file sizes over larger ones, but we feel the need to stress how significant this is. The smaller the images on your web page are, the faster your web page is going to load, which helps reduce the bounce rate exponentially.

3. Don't Prioritize Images Over Text

We understand that it can be quite tempting to utilize graphics to convey textual information, but the overuse of graphics-based text is associated with a broad range of problems, including the inability to resize such graphics to accommodate the viewer's needs and longer loading times.

4. Always Provide Textual Alternatives

Whenever you decide to use images, you should always incorporate textual alternatives for them, especially if the images are used as navigation buttons, since not every viewer will be able to see images, for whatever reason. You can do that by using the Alt Text feature.

5. Make Sure There's Ample Contrast

This is especially important if you're using text within graphics. You must make sure that there's enough contrast between the background of the image and the text so that it's easy for viewers to read. You should also take color deficiency into consideration.

Final Thoughts

A picture speaks a thousand words, but if you misuse or overuse graphics on your website, you're going to find it tough to reap any benefits from them. Hopefully, the information we've shared in this article has given you the insight you need to implement web graphics effectively.
Learning the break and continue Statements in C++

The break statement

The break statement terminates execution of the nearest enclosing loop, or of the conditional statement in which it appears. Control passes to the statement that follows the terminated statement, if any.

break;

Remarks

The break statement is used with the switch conditional statement and with the do, for, and while loop statements.

In a switch statement, the break statement causes the program to execute the next statement outside the switch. Without a break statement, every statement from the matched case label to the end of the switch statement, including the default clause, is executed.

In loops, the break statement terminates execution of the nearest enclosing do, for, or while statement. Control passes to the statement that follows the terminated statement, if any.

Within nested statements, the break statement terminates only the do, for, switch, or while statement that immediately encloses it. You can use a return or goto statement to transfer control out of more deeply nested structures.

Examples

The following code shows how to use the break statement in a for loop.

#include <iostream>
using namespace std;

int main() {
    // An example of a standard for loop
    for (int i = 1; i < 10; i++) {
        if (i == 4) {
            break;
        }
        cout << i << '\n';
    }

    // An example of a range-based for loop
    int nums []{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    for (int i : nums) {
        if (i == 4) {
            break;
        }
        cout << i << '\n';
    }
}

In each case, the output is:

1
2
3

The following code shows how to use break in while and do loops.

#include <iostream>
using namespace std;

int main() {
    int i = 0;
    while (i < 10) {
        if (i == 4) {
            break;
        }
        cout << i << '\n';
        i++;
    }

    i = 0;
    do {
        if (i == 4) {
            break;
        }
        cout << i << '\n';
        i++;
    } while (i < 10);
}

In each case, the output is:

0
1
2
3

The following code shows how to use break in a switch statement. You must use break in every case if you want to handle each case separately; without break, execution falls through to the code of the next case.

#include <iostream>
using namespace std;

enum Suit{ Diamonds, Hearts, Clubs, Spades };

int main() {
    Suit hand;
    . . .
    // Assume that some enum value is set for hand

    // In this example, each case is handled separately
    switch (hand) {
    case Diamonds:
        cout << "got Diamonds \n";
        break;
    case Hearts:
        cout << "got Hearts \n";
        break;
    case Clubs:
        cout << "got Clubs \n";
        break;
    case Spades:
        cout << "got Spades \n";
        break;
    default:
        cout << "didn't get card \n";
    }

    // In this example, Diamonds and Hearts are handled one way, and
    // Clubs, Spades, and the default value are handled another way
    switch (hand) {
    case Diamonds:
    case Hearts:
        cout << "got a red card \n";
        break;
    case Clubs:
    case Spades:
    default:
        cout << "didn't get a red card \n";
    }
}

The continue statement

The continue statement forces transfer of control to the controlling expression of the smallest enclosing do, for, or while loop.

Syntax

continue;

Remarks

Any remaining statements in the current iteration are not executed. The next iteration of the loop is determined as follows:

- In a do or while loop, the next iteration starts by re-evaluating the controlling expression of the do or while statement.
- In a for loop (using the syntax for(init-expr; cond-expr; loop-expr)), the loop-expr clause is executed. Then the cond-expr clause is re-evaluated and, depending on the result, the loop either terminates or performs another iteration.

The following example shows how to use the continue statement to skip the rest of the loop body and start the next iteration.

// continue_statement.cpp
#include <stdio.h>

int main() {
    int i = 0;
    do {
        i++;
        printf_s("before the continue\n");
        continue;
        printf("after the continue; never printed\n");
    } while (i < 3);
    printf_s("after the do loop\n");
}

Output:

before the continue
before the continue
before the continue
after the do loop

Date: 2016-01-13
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project. If you do not yet have a customized template for new projects, you must first create one.

Modifying the template for new projects

As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.

To create your own custom project template:

Procedure

1. Log in as a user with cluster-admin privileges.

2. Generate the default project template:

   $ oc adm create-bootstrap-project-template -o yaml > template.yaml

3. Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.

4. The project template must be created in the openshift-config namespace. Load your modified template:

   $ oc create -f template.yaml -n openshift-config

5. Edit the project configuration resource using the web console or CLI.

   • Using the web console:
     1. Navigate to the Administration → Cluster Settings page.
     2. Click Global Configuration to view all configuration resources.
     3. Find the entry for Project and click Edit YAML.

   • Using the CLI:
     1. Edit the project.config.openshift.io/cluster resource:

        $ oc edit project.config.openshift.io/cluster

6. Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.

   Project configuration resource with custom project template:

   apiVersion: config.openshift.io/v1
   kind: Project
   metadata:
     ...
   spec:
     projectRequestTemplate:
       name: <template_name>

7. After you save your changes, create a new project to verify that your changes were successfully applied.

Adding network policy objects to the new project template

As a cluster administrator, you can add network policy objects to the default template for new projects.
OpenShift Container Platform will automatically create all the NetworkPolicy CRs specified in the template in the project.

Prerequisites

• Your cluster is using a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
• You installed the OpenShift CLI (oc).
• You must log in to the cluster with a user with cluster-admin privileges.
• You must have created a custom default project template for new projects.

Procedure

1. Edit the default template for a new project by running the following command:

   $ oc edit template <project_template> -n openshift-config

   Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.

2. In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.

   In the following example, the objects parameter collection includes several NetworkPolicy objects:

   objects:
   - apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: allow-from-same-namespace
     spec:
       podSelector:
       ingress:
       - from:
         - podSelector: {}
   - apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: allow-from-openshift-ingress
     spec:
       ingress:
       - from:
         - namespaceSelector:
             matchLabels:
               network.openshift.io/policy-group: ingress
       podSelector: {}
       policyTypes:
       - Ingress
   ...

3. Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:

   1. Create a new project:

      $ oc new-project <project> (1)

      (1) Replace <project> with the name for the project you are creating.

   2. Confirm that the network policy objects in the new project template exist in the new project:

      $ oc get networkpolicy
      NAME                           POD-SELECTOR   AGE
      allow-from-openshift-ingress   <none>         7s
      allow-from-same-namespace      <none>         7s
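Alongside the two allow policies shown above, some setups also add an explicit deny-by-default object to the same objects collection. The following entry is an illustration of that pattern, not part of the procedure above:

```yaml
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: deny-by-default
  spec:
    podSelector: {}   # an empty selector matches every pod in the project
    ingress: []       # no ingress rules listed, so all inbound traffic is denied
```

Because NetworkPolicy rules are additive, this object combines with the allow policies: traffic matching either allow rule is admitted, and everything else is blocked.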
SQL Performance Tuning

This article outlines basic and advanced optimization techniques for Ignite SQL queries. Some of the sections are also useful for debugging and troubleshooting.

Basic Considerations: Ignite vs RDBMS

Ignite is frequently compared to relational databases for their SQL capabilities with an expectation that existing SQL queries, created for an RDBMS, will work out of the box and perform faster in Ignite without any changes. Usually, such an assumption is based on the fact that Ignite stores and processes data in-memory. However, it's not enough just to put data in RAM and expect an order of magnitude increase in performance. Generally, extra tuning is required. Below you can see a standard checklist of best practices to consider before you benchmark Ignite against an RDBMS or do any performance testing:

• Ignite is optimized for multi-node deployments with RAM as the primary storage. Don't try to compare a single-node Ignite cluster to a relational database. You should deploy a multi-node Ignite cluster with the whole copy of data in RAM.

• Be ready to adjust your data model and existing SQL queries. Use the affinity colocation concept during the data modelling phase for proper data distribution. Remember, it's not enough just to put data in RAM. If your data is properly colocated, you can run SQL queries with JOINs at massive scale and expect significant performance benefits.

• Define secondary indexes and use other standard, and Ignite-specific, tuning techniques described below.

• Keep in mind that relational databases leverage local caching techniques and, depending on the total data size, an RDBMS can complete some queries even faster than Ignite, even in a multi-node configuration.
If your data set is around 10-100GB and the RDBMS has enough RAM for caching data locally, then it can outperform a multi-node Ignite cluster because the latter will be utilizing the network. Store much more data in Ignite to see the difference.

Using the EXPLAIN Statement

Ignite supports the EXPLAIN statement, which can be used to read the execution plan of a query. Use this command to analyse your queries for possible optimization. Note that the plan will contain multiple rows: the last one will contain a query for the reducing side (usually your application), the others are for the map nodes (usually server nodes). Read the Distributed Queries section to learn how queries are executed in Ignite.

EXPLAIN SELECT name FROM Person WHERE age = 26;

The execution plan is generated by H2 as described here.

OR Operator and Selectivity

If a query contains an OR operator, then indexes may not be used as expected depending on the complexity of the query. For example, for the query select name from Person where gender='M' and (age = 20 or age = 30), an index on the gender field will be used instead of an index on the age field, although the latter is a more selective index. As a workaround for this issue, you can rewrite the query with UNION ALL (notice that UNION without ALL will return DISTINCT rows, which will change the query semantics and will further penalize your query performance):

SELECT name FROM Person WHERE gender='M' and age = 20
UNION ALL
SELECT name FROM Person WHERE gender='M' and age = 30

Avoid Having Too Many Columns

Avoid having too many columns in the result set of a SELECT query. Due to limitations of the H2 query parser, queries with 100+ columns may perform worse than expected.

Lazy Loading

By default, Ignite attempts to load the whole result set to memory and send it back to the query initiator (which is usually your application). This approach provides optimal performance for queries of small or medium result sets.
However, if the result set is too big to fit in the available memory, it can lead to prolonged GC pauses and even OutOfMemoryError exceptions. To minimize memory consumption, at the cost of a moderate performance hit, you can load and process the result sets lazily by passing the lazy parameter to the JDBC and ODBC connection strings or use a similar method available for Java, .NET, and C++ APIs:

SqlFieldsQuery query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10");

// Result set will be loaded lazily.
query.setLazy(true);

jdbc:ignite:thin://192.168.0.15?lazy=true

var query = new SqlFieldsQuery("SELECT * FROM Person WHERE id > 10")
{
    // Result set will be loaded lazily.
    Lazy = true
};

Querying Colocated Data

When Ignite executes a distributed query, it sends sub-queries to individual cluster nodes to fetch the data and groups the results on the reducer node (usually your application). If you know in advance that the data you are querying is colocated by the GROUP BY condition, you can use SqlFieldsQuery.collocated = true to tell the SQL engine to do the grouping on the remote nodes. This will reduce network traffic between the nodes and query execution time. When this flag is set to true, the query is executed on individual nodes first and the results are sent to the reducer node for final calculation.

Consider the following example, in which we assume that the data is colocated by department_id (in other words, the department_id field is configured as the affinity key).

SELECT SUM(salary) FROM Employee GROUP BY department_id

Because of the nature of the SUM operation, Ignite will sum the salaries across the elements stored on individual nodes, and then send these sums to the reducer node where the final result will be calculated. This operation is already distributed, and enabling the collocated flag will only slightly improve performance.
Let's take a slightly different example:

SELECT AVG(salary) FROM Employee GROUP BY department_id

In this example, Ignite has to fetch all (salary, department_id) pairs to the reducer node and calculate the results there. However, if employees are colocated by the department_id field, i.e. employee data for the same department is stored on the same node, setting SqlFieldsQuery.collocated = true will reduce query execution time because Ignite will calculate the averages for each department on the individual nodes and send the results to the reducer node for final calculation.

Enforcing Join Order

When this flag is set, the query optimizer will not reorder tables in joins. In other words, the order in which joins are applied during query execution will be the same as specified in the query. Without this flag, the query optimizer can reorder joins to improve performance. However, sometimes it might make an incorrect decision. This flag helps to control and explicitly specify the order of joins instead of relying on the optimizer.

Consider the following example:

SELECT * FROM Person p
JOIN Company c ON p.company = c.name
where p.name = 'John Doe'
AND p.age > 20
AND p.id > 5000
AND p.id < 100000
AND c.name NOT LIKE 'O%';

This query contains a join between two tables: Person and Company. To get the best performance, we should understand which join will return the smallest result set. The table with the smaller result set size should be given first in the join pair. To get the size of each result set, let's test each part.

Q1:

SELECT count(*)
FROM Person p
where p.name = 'John Doe'
AND p.age > 20
AND p.id > 5000
AND p.id < 100000;

Q2:

SELECT count(*)
FROM Company c
where c.name NOT LIKE 'O%';

After running Q1 and Q2, we can get two different outcomes:

Case 1:

Q1: 30000
Q2: 100000

Q2 returns more entries than Q1. In this case, we don't need to modify the original query, because the smaller subset is already located on the left side of the join.
Case 2:

Q1: 50000
Q2: 10000

Q1 returns more entries than Q2. So we need to change the initial query as follows:

SELECT * FROM Company c
JOIN Person p ON p.company = c.name
where p.name = 'John Doe'
AND p.age > 20
AND p.id > 5000
AND p.id < 100000
AND c.name NOT LIKE 'O%';

The force join order hint can be specified via the enforceJoinOrder parameter of the JDBC/ODBC connection string or the corresponding query flag in the Java, .NET, and C++ APIs.

Increasing Index Inline Size

Every entry in the index has a constant size which is calculated during index creation. This size is called the index inline size. Ideally this size should be enough to store the full indexed entry in serialized form. When values are not fully included in the index, Ignite may need to perform additional data page reads during index lookup, which can impair performance if persistence is enabled.

Here is how values are stored in the index:

int
| tag (1 byte) | value (4 bytes) |
Total: 5 bytes

long
| tag (1 byte) | value (8 bytes) |
Total: 9 bytes

String
| tag (1 byte) | size (2 bytes) | UTF-8 value (N bytes) |
Total: 3 + string length

POJO (BinaryObject)
| tag (1 byte) | BO hash (4 bytes) |
Total: 5 bytes

For primitive data types (bool, byte, short, int, etc.), Ignite automatically calculates the index inline size so that the values are included in full. For example, for int fields, the inline size is 5 (1 byte for the tag and 4 bytes for the value itself). For long fields, the inline size is 9 (1 byte for the tag + 8 bytes for the value).

For binary objects, the index includes the hash of each object, which is enough to avoid collisions. The inline size is 5.

For variable length data, indexes include only the first several bytes of the value. Therefore, when indexing fields with variable-length data, we recommend that you estimate the length of your field values and set the inline size to a value that includes most (about 95%) or all values. For example, if you have a String field with 95% of the values containing 10 characters or fewer, you can set the inline size for the index on that field to 13.

The inline sizes explained above apply to single field indexes.
However, when you define an index on a field in the value object or on a non-primary-key column, Ignite creates a composite index by appending the primary key to the indexed value. Therefore, when calculating the inline size for composite indexes, add up the inline size of the primary key.

Below is an example of index inline size calculation for a cache where both key and value are complex objects.

public class Key {
    @QuerySqlField
    private long id;

    @QuerySqlField
    @AffinityKeyMapped
    private long affinityKey;
}

public class Value {
    @QuerySqlField(index = true)
    private long longField;

    @QuerySqlField(index = true)
    private int intField;

    @QuerySqlField(index = true)
    private String stringField; // we suppose that 95% of the values are 10 symbols
}

The following table summarizes the inline index sizes for the indexes defined in the example above.

Index               | Kind               | Recommended Inline Size | Comment
(_key)              | Primary key index  | 5                       | Inlined hash of a binary object (5)
(affinityKey, _key) | Affinity key index | 14                      | Inlined long (9) + binary object's hash (5)
(longField, _key)   | Secondary index    | 14                      | Inlined long (9) + binary object's hash (5)
(intField, _key)    | Secondary index    | 10                      | Inlined int (5) + binary object's hash (5)
(stringField, _key) | Secondary index    | 18                      | Inlined string (13) + binary object's hash (5), assuming the string is ~10 symbols

Note that you will only have to set the inline size for the index on stringField. For the other indexes, Ignite will calculate the inline size automatically. Refer to the Configuring Index Inline Size section for information on how to change the inline size. You can check the inline size of an existing index in the INDEXES system view.

Warning: since Ignite encodes strings as UTF-8, some characters use more than 1 byte.

Query Parallelism

By default, a SQL query is executed in a single thread on each participating Ignite node. This approach is optimal for queries returning small result sets involving index search.
For example:

SELECT * FROM Person WHERE p.id = ?;

Certain queries might benefit from being executed in multiple threads. This relates to queries with table scans and aggregations, which is often the case for HTAP and OLAP workloads. For example:

SELECT SUM(salary) FROM Person;

The number of threads created on a single node for query execution is configured per cache and by default equals 1. You can change the value by setting the CacheConfiguration.queryParallelism parameter. If you create SQL tables using the CREATE TABLE command, you can use a cache template to set this parameter. If a query contains JOINs, then all the participating caches must have the same degree of parallelism.

Index Hints

Index hints are useful in scenarios when you know that one index is more suitable for certain queries than another. You can use them to instruct the query optimizer to choose a more efficient execution plan. To do this, you can use the USE INDEX(indexA,…​,indexN) statement as shown in the following example.

SELECT * FROM Person USE INDEX(index_age) WHERE salary > 150000 AND age < 35;

Partition Pruning

Partition pruning is a technique that optimizes queries that use affinity keys in the WHERE condition. When executing such a query, Ignite will scan only those partitions where the requested data is stored. This will reduce query time because the query will be sent only to the nodes that store the requested partitions.
In the following example, the employee objects are colocated by the id field (if an affinity key is not set explicitly then the primary key is used as the affinity key):

CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR)

/* This query is sent to the node where the requested key is stored */
SELECT * FROM employee WHERE id=10;

/* This query is sent to all nodes */
SELECT * FROM employee WHERE department_id=10;

In the next example, the affinity key is set explicitly and, therefore, will be used to colocate data and direct queries to the nodes that keep primary copies of the data:

CREATE TABLE employee (id BIGINT PRIMARY KEY, department_id INT, name VARCHAR) WITH "AFFINITY_KEY=department_id"

/* This query is sent to all nodes */
SELECT * FROM employee WHERE id=10;

/* This query is sent to the node where the requested key is stored */
SELECT * FROM employee WHERE department_id=10;

Note: refer to the affinity colocation page for more details on how data gets colocated and how it helps boost performance in distributed storages like Ignite.

Skip Reducer on Update

When Ignite executes a DML operation, it first fetches all the affected intermediate rows for analysis to the reducer node (usually your application), and only then prepares batches of updated values that will be sent to remote nodes. This approach might affect performance and saturate the network if a DML operation has to move many entries. Use this flag as a hint for the SQL engine to do all intermediate rows analysis and updates "in-place" on the server nodes. The hint is supported for JDBC and ODBC connections.

// jdbc connection string
jdbc:ignite:thin://192.168.0.15/skipReducerOnUpdate=true

SQL On-heap Row Cache

Ignite stores data and indexes in its own memory space outside of the Java heap.
This means that with every data access, a part of the data is copied from the off-heap space to the Java heap, potentially deserialized, and kept in the heap as long as your application or server node references it.

The SQL on-heap row cache is intended to store hot rows (key-value objects) in the Java heap, minimizing the resources spent on data copying and deserialization. Each cached row refers to an entry in the off-heap region and can be invalidated when one of the following happens:

• The master entry stored in the off-heap region is updated or removed.
• The data page that stores the master entry is evicted from RAM.

The on-heap row cache can be enabled for a specific cache/table (if you use CREATE TABLE to create SQL tables and caches, the parameter can be passed via a cache template):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="myCache"/>
            <property name="sqlOnheapCacheEnabled" value="true"/>
        </bean>
    </property>
</bean>

If the row cache is enabled, you might be able to trade RAM for performance: by allocating more RAM for row caching, you might get up to a 2x performance increase for some SQL queries and use cases.

Warning: SQL On-Heap Row Cache Size

Presently, the cache is unlimited and can occupy as much RAM as is allocated to your memory data regions. Make sure to:

• Set the JVM max heap size equal to the total size of all the data regions that store caches for which this on-heap row cache is enabled.
• Tune JVM garbage collection accordingly.

Using TIMESTAMP instead of DATE

Use the TIMESTAMP type instead of DATE whenever possible. Presently, the DATE type is serialized/deserialized very inefficiently, resulting in performance degradation.
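The two invalidation rules for the on-heap row cache can be sketched with a small toy model. This is not Ignite's implementation; the class, the page-id bookkeeping, and the method names are invented purely to illustrate when a cached row must be dropped.

```python
# Toy model of the SQL on-heap row cache invalidation rules described above.
class OnHeapRowCache:
    def __init__(self):
        self._rows = {}     # key -> deserialized row kept on-heap
        self._page_of = {}  # key -> id of the data page holding the master entry

    def cache_row(self, key, row, page_id):
        self._rows[key] = row
        self._page_of[key] = page_id

    def get(self, key):
        # A hit avoids the off-heap copy and deserialization.
        return self._rows.get(key)

    def on_entry_updated_or_removed(self, key):
        # Rule 1: the master off-heap entry changed -> invalidate the cached row.
        self._rows.pop(key, None)
        self._page_of.pop(key, None)

    def on_page_evicted(self, page_id):
        # Rule 2: the data page left RAM -> invalidate every row it backed.
        stale = [k for k, p in self._page_of.items() if p == page_id]
        for k in stale:
            self.on_entry_updated_or_removed(k)

cache = OnHeapRowCache()
cache.cache_row(1, {"name": "Alice"}, page_id=7)
cache.cache_row(2, {"name": "Bob"}, page_id=7)
cache.on_entry_updated_or_removed(1)  # rule 1
assert cache.get(1) is None
cache.on_page_evicted(7)              # rule 2
assert cache.get(2) is None
```

The point of the sketch is that a row cache over off-heap storage is only safe if every mutation and every page eviction reaches the cache, which is why the feature is wired into the storage engine rather than layered on top.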
Markup, discount, and tax

A lot of "real-life" math deals with percents and money. You will need to know how to figure out the price of something in a store after a discount. You will also need to know how to add tax to your items to make sure you brought enough money! If you do these problems enough, you can find shortcuts or "tricks" and use mental math to estimate, but there is also a fail-proof procedure that will get you the exact result every time. Here it is:

\(\frac{{IS}}{{OF}} = \frac{\% }{{100}}\)

This handy proportion can solve any percent problem. Let me explain how it works.

Example 1: Let's start off with a basic percent problem. What is 12% of 155? The percent always goes over the 100. The 155 is going to be our "OF" and we are looking for our "IS". So the proportion will look like this:

\(\frac{x}{{155}} = \frac{{12}}{{100}}\)

Now, cross multiply.

\(x \bullet 100 = 12 \bullet 155\)
\(100x = 1860\)
\(x = 18.6\)

18.6 is 12% of 155.

Example 2: Let's try another. 9 is what percent of 215? 9 will be our "IS", 215 is our "OF", and we are looking for our percent.

\(\frac{9}{{215}} = \frac{x}{{100}}\)
\(9 \bullet 100 = x \bullet 215\)
\(900 = 215x\)
\(x = 4.19\)

So 9 is about 4.19% of 215.

Now let's apply this to shopping and figuring out discounts and taxes. Discounts are subtracted off the price and tax is added on.

Example 3: The original price of a parrot is $194.50. There is a 5% discount. Tax is 3%. Let's first figure this out: What is 5% of 194.50?

\(\frac{x}{{194.50}} = \frac{5}{{100}}\)
\(x \bullet 100 = 5 \bullet 194.50\)
\(100x = 972.50\)
\(x = 9.725\)

Let's subtract this off the original price:

  $194.50
 -  9.725   (5% discount)
  184.775

Next, let's figure this out: What is 3% of 184.775?

\(\frac{x}{{184.775}} = \frac{3}{{100}}\)
\(x \bullet 100 = 3 \bullet 184.775\)
\(100x = 554.325\)
\(x = 5.54325\)

Let's add this to the discounted price:
  184.775
+   5.54325   (3% tax)
  190.31825

Let's round off to two decimal places since we are dealing with money. Our parrot will cost $190.32.

So this seems like a lot of work for one problem. There is a shorter way! You can also change the percents to decimals and use multiplication. Let's try one that way.

Example 4: The original price of a hat is $28.50. There is a discount of 40%. Tax is 5%. When you change a percent to a decimal, you divide by 100, or just simply move the decimal point two places to the left.

40% = .40
5% = .05

It actually won't matter if you subtract the discount first or add the tax first. Let's deal with the discount first.

\({\rm{Price }} - ({\rm{discount}} \bullet {\rm{price}})\)
\(28.50 - (.40 \bullet 28.50) = 17.10\)

Now, using this new price, let's add the tax.

\({\rm{New price }} + ({\rm{tax}} \bullet {\rm{new price}})\)
\(17.10 + (.05 \bullet 17.10) = 17.955\)

Rounded to two decimal places, the hat will cost $17.96.

Below you can download some free math worksheets and practice.

Worksheet 1 (easy): Find the selling price of each item. This free worksheet contains 10 assignments, each with 24 questions, with answers. Example of one question: Percents-Markup,-discount,-and-tax-easy

Worksheet 2 (medium): Find the selling price of each item. This free worksheet contains 10 assignments, each with 24 questions, with answers. Example of one question: Percents-Markup,-discount,-and-tax-medium

Worksheet 3 (hard): Find the selling price of each item. This free worksheet contains 10 assignments, each with 24 questions, with answers.
Example of one question: Percents-Markup,-discount,-and-tax-hard
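The arithmetic in Examples 3 and 4 can be checked with a short script. Python is used here purely to verify the numbers from the lesson; the proportion method and the decimal-multiplication method give the same results.

```python
# Verifying Examples 3 and 4 with both methods from the lesson.

def is_of_percent(of_value, percent):
    """Solve IS/OF = %/100 for the 'IS' part (the proportion method)."""
    return of_value * percent / 100

# Example 3: $194.50 parrot, 5% discount, then 3% tax on the discounted price.
price = 194.50
discounted = price - is_of_percent(price, 5)       # 184.775
total = discounted + is_of_percent(discounted, 3)  # 190.31825
assert round(total, 2) == 190.32                   # the parrot costs $190.32

# Example 4: $28.50 hat, 40% discount, 5% tax (decimal-multiplication method).
hat = 28.50
hat_discounted = hat - 0.40 * hat                   # 17.10
hat_total = hat_discounted + 0.05 * hat_discounted  # 17.955
assert abs(hat_total - 17.955) < 1e-9               # about $17.96 at the register
```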
What is Data Security Posture Management (DSPM)?

Aug 01, 2023 | The Hacker News | Data Security / DSPM

Data Security Posture Management is an approach to securing cloud data by ensuring that sensitive data always has the correct security posture – regardless of where it's been duplicated or moved to.

So, what is DSPM? Here's a quick example: Let's say you've built an excellent security posture for your cloud data. For the sake of this example, your data is in production, it's protected behind a firewall, it's not publicly accessible, and your IAM controls have limited access properly.

Now along comes a developer and replicates that data into a lower environment. What happens to that fine security posture you've built? Well, it's gone – and now the data is only protected by the security posture of that lower environment. So if that environment is exposed or improperly secured, so is all the sensitive data you've been trying to protect. Security postures just don't travel with their data. Data Security Posture Management (DSPM) was created to solve this problem.

How Does Data Security Posture Management Work?

If we want a data security posture that travels with the data and helps you remediate issues, we need a solution that does three things:

• Discovers all the data in your public cloud – including shadow data that's been created but isn't used or monitored.
• Understands what security posture the data is supposed to have.
• Prioritizes alerts based on data sensitivity and offers contextualized remediation plans.

Data discovery and classification tools have been around for years, but they've lacked the ability to offer any business context. If you can find sensitive data but don't know whether it's business-critical, and don't understand its security posture, that's not much help to a security team trying to prioritize thousands of alerts from different tools. For example, let's say a data discovery tool finds PII data.
You wouldn't need an alert if it has the proper security posture, and a good DSPM solution wouldn't waste your time with one.

Why is Data Security Posture Management So Critical Now?

It's an answer you've heard before: the cloud. Before widespread adoption of public cloud infrastructure, securing data meant securing your data center with a firewall. Even if your data was copied or moved, it still stayed inside your organization's data center. There wasn't a difference between your infrastructure security and your data security. But for cloud-first companies, sensitive data travels constantly across your cloud, to environments with different security postures. So the need arose for a product that makes sure all this traveling data has the right security posture.

Wait, Doesn't Cloud Security Posture Management (CSPM) Already Do This?

CSPM solutions are built to secure cloud infrastructure, while DSPM is focused on cloud data. The difference is significant. A CSPM is built to find vulnerabilities in cloud resources, like VMs and VPC networks. Some may also be able to provide very basic insights on the data, like identifying PII in text files in VMs and S3 buckets. Beyond these basic abilities, CSPM products are often data-agnostic and don't prioritize remediation based on data sensitivity.

DSPM, on the other hand, is about the data itself. This includes identifying data vulnerabilities like overexposure, access controls, data flows, and anomalies. A DSPM solution connects the dots between the data and the infrastructure security, allowing security teams to understand what sensitive data is at risk instead of showing them a list of vulnerabilities to remediate. Essentially, DSPM adds a layer of data security and data context over the infrastructure security.

How Does Data Security Posture Management Understand What Data is Sensitive?

Some data is obviously sensitive – social security numbers, credit card information, and healthcare data, for example.
These need to be protected not only for security reasons, but to stay compliant with regulations like PCI-DSS, HIPAA, and more. But a good DSPM solution needs to go beyond this. To truly provide value, it should be able to autonomously draw conclusions about the type of sensitive data it's finding, and be able to find data that isn't structured as simply as a credit card number. By understanding and clustering metadata and leveraging ML technologies, DSPMs can find intellectual property, customer data, and more that can't be discovered just by using regular expressions.

Another critical factor is data ownership. DSPM should integrate with data catalogs to understand who is responsible for the data.

Finally, there's the issue of scale. One of the major weaknesses of legacy data discovery and classification solutions is that they aren't able to scan and classify at the scale of modern cloud infrastructures. DSPM must be able to scan petabytes of data effectively and efficiently, to ensure everything is discovered, without breaking your cloud bill.

Conclusion: DSPM = Security that Travels with Your Data

Data Security Posture Management is new, and with that comes the natural skepticism of "do we really need another security acronym?" But DSPM is solving real security problems caused by the move to the cloud and can help prevent major data breaches. Customer information, company secrets, and source code leaks aren't caused by initial failures to protect sensitive data. They're caused by the ease with which data is replicated and moved around, without the security posture following. Data Security Posture Management promises to make sure that wherever your data travels in the cloud, your security posture follows and data risks are minimized.

To learn more about DSPM and how Sentra can help find, classify, and secure your cloud data, get a demo here.
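As a toy illustration of the regex-based scanning the article says legacy tools rely on: structured values like SSNs or card numbers are easy to match, but unstructured intellectual property simply has no pattern to match, which is the gap context-aware classification tries to close. The patterns below are deliberately simplified and hypothetical, not any product's detection rules.

```python
# A toy regex-based sensitive-data scanner of the legacy-discovery kind.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def scan(text):
    """Return the names of the patterns that match anywhere in `text`."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

assert scan("customer SSN 123-45-6789 on file") == {"ssn"}
assert "credit_card" in scan("card 4111 1111 1111 1111 on file")
# Unstructured IP goes completely undetected -- no regex can describe it:
assert scan("our Q3 product roadmap and source code") == set()
```

This is exactly the limitation the article points at: the scanner knows *shapes*, not business context or ownership, so it cannot tell a test fixture from production customer data.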
The post "What is Data Security Posture Management (DSPM)?" appeared first on The Hacker News.