Inside the try block we get the lambda function that is appropriate to the requested action. We use a lock to protect access to the Call dictionary, although arguably we are being overly cautious. As always, we do as little as possible within the scope of the lock; in this case we just do a dictionary lookup to get a reference to a function. Once we have the function we call it, passing self as the first argument and the rest of the data tuple as the other arguments. Here we are doing a function call, so no self is passed by Python. This does not matter, since we pass self in ourselves, and inside the lambda the passed-in self is used to call the method in the normal way. The outcome is that the call self.method(*data[1:]) is made, where method is the method corresponding to the action given in data[0]. If the action is to shut down, a custom Finish exception is raised in the shutdown() method, in which case we know that the client cannot expect a reply, so we just return. But for any other action we pickle the result of calling the action's corresponding method (using the agreed pickle protocol version), and write the size of the pickle and then the pickled data itself.

    def get_car_details(self, license):
        with self.CarsLock:
            car = copy.copy(self.Cars.get(license, None))
        if car is not None:
            return (True, car.seats, car.mileage, car.owner)
        return (False, "This license is not registered")

This method begins by trying to acquire the car data lock, and blocks until it gets the lock. It then uses the dict.get() method with a second argument of None to get the car with the given license, or to get None. The car is immediately copied and the with statement is finished; this ensures that the lock is in force for the shortest possible time. Although reading does not change the data being read, because we are dealing with a mutable collection it is possible that another method in another thread wants to change the dictionary at the same time as we want to read it; using a lock prevents this from happening. Outside the scope of the lock we now have a copy of the car object (or None) which we can deal with at our leisure without blocking any other threads. Like all the car registration action-handling methods, we return a tuple whose first item is a Boolean success/failure flag and whose other items vary. None of these methods has to worry about, or even know, how its data is returned to the client beyond the "tuple with Boolean first item" convention, since all the network interaction is encapsulated in the handle() method.

    def change_mileage(self, license, mileage):
        if mileage < 0:
            return (False, "Cannot set a negative mileage")
        with self.CarsLock:
            car = self.Cars.get(license, None)
            if car is not None:
                if car.mileage < mileage:
                    car.mileage = mileage
                    return (True, None)
                return (False, "Cannot wind the odometer back")
        return (False, "This license is not registered")
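The pattern just described, holding a lock only long enough to copy a mutable value out of a shared dictionary and doing all other work on the copy, can be sketched in isolation. The Car class and the sample data here are stand-ins for illustration, not the book's actual classes:

```python
import copy
import threading

class Car:
    """A minimal stand-in for the server's Car record (hypothetical)."""
    def __init__(self, seats, mileage, owner):
        self.seats = seats
        self.mileage = mileage
        self.owner = owner

cars = {"ABC123": Car(5, 42000, "Jane Doe")}   # shared mutable state
cars_lock = threading.Lock()                   # protects every access to cars

def get_car_details(license):
    # Hold the lock only for the lookup and the shallow copy; everything
    # after this point works on the copy and cannot block other threads.
    with cars_lock:
        car = copy.copy(cars.get(license, None))
    if car is not None:
        return (True, car.seats, car.mileage, car.owner)
    return (False, "This license is not registered")

print(get_car_details("ABC123"))  # (True, 5, 42000, 'Jane Doe')
```

Note that copy.copy() is a shallow copy, which is sufficient here because the record's fields are immutable values.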
In this method we can do one check without acquiring a lock at all. But if the mileage is non-negative we must acquire a lock and get the relevant car, and if we have a car (i.e., if the license is valid), we must stay within the scope of the lock to change the mileage as requested, or to return an error tuple. If no car has the given license (car is None), we drop out of the with statement and return an error tuple.

It would seem that if we did the validation in the client we could avoid some network traffic entirely; for example, the client could give an error message for (or simply prevent) negative mileages. Even though the client ought to do this, we must still have the check in the server, since we cannot assume that the client is bug-free. And although the client gets the car's mileage to use as the default mileage, we cannot assume that the mileage entered by the user (even if it is greater than the current mileage) is valid, because some other client could have increased the mileage in the meantime. So we can only do the definitive validation at the server, and only within the scope of a lock. The change_owner() method is very similar, so we won't reproduce it here.

    def new_registration(self, license, seats, mileage, owner):
        if not license:
            return (False, "Cannot set an empty license")
        if seats not in {2, 4, 5, 6, 7, 8, 9}:
            return (False, "Cannot register car with invalid seats")
        if mileage < 0:
            return (False, "Cannot set a negative mileage")
        if not owner:
            return (False, "Cannot set an empty owner")
        with self.CarsLock:
            if license not in self.Cars:
                self.Cars[license] = Car(seats, mileage, owner)
                return (True, None)
        return (False, "Cannot register duplicate license")

Again we are able to do a lot of error checking before accessing the registration data, but if all the data is valid we acquire a lock. If the license is not in the RequestHandler.Cars dictionary (and it shouldn't be, since a new registration should have an unused license), we create a new Car object and store it in the dictionary. This must all be done within the scope of the same lock, because we must not allow any other client to add a car with this license in the time between the check for the license's existence in the RequestHandler.Cars dictionary and adding the new car to the dictionary.

    def shutdown(self, *ignore):
        self.server.shutdown()
        raise Finish()

If the action is to shut down we call the server's shutdown() method; this will stop it from accepting any further requests, although it will continue running while it is still servicing any existing requests. We then raise a custom exception to notify handle() that we are finished; this causes handle() to return without sending any reply to the client.

Summary

This chapter showed that creating network clients and servers can be quite straightforward in Python, thanks to the standard library's networking modules and the struct and pickle modules. In the first section we developed a client program and gave it a single function, handle_request(), to send and receive arbitrary picklable data to and from a server using a generic data format of "length plus pickle". In the second section we saw how to create a server subclass using the classes from the socketserver module, and how to implement a request handler class to service the server's client requests. Here the heart of the network interaction was confined to a single method, handle(), that can receive and send arbitrary picklable data from and to clients.

The socket and socketserver modules, and many other modules in the standard library, such as asyncore, asynchat, and ssl, provide far more functionality than we have used here. But if the networking facilities provided by the standard library are not sufficient, or are not high-level enough, it is worth looking at the third-party Twisted networking framework (www.twistedmatrix.com) as a possible alternative.

Exercises

The exercises involve modifying the client and server programs covered in this chapter. The modifications don't involve a lot of typing, but will need a little bit of care to get right.

1. Copy car_registration_server.py and car_registration.py and modify them so that they exchange data using a protocol versioned at the network level. This could be done, for example, by passing two integers in the struct (length, protocol version) instead of one.
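The two-integer header the exercise describes can be sketched with the struct module. The "!II" format (two unsigned 32-bit integers in network byte order) and the version number used here are assumptions for illustration, not the exercise's official solution:

```python
import pickle
import struct

PROTOCOL_VERSION = 1            # hypothetical version number
HEADER = struct.Struct("!II")   # (payload length, protocol version)

def encode(data):
    # Pickle the payload, then prepend a fixed-size binary header.
    payload = pickle.dumps(data, pickle.HIGHEST_PROTOCOL)
    return HEADER.pack(len(payload), PROTOCOL_VERSION) + payload

def decode(message):
    # Read the header first; reject mismatched protocol versions
    # before attempting to unpickle anything.
    length, version = HEADER.unpack(message[:HEADER.size])
    if version != PROTOCOL_VERSION:
        raise ValueError("unsupported protocol version: {0}".format(version))
    return pickle.loads(message[HEADER.size:HEADER.size + length])

message = encode(("GET_CAR_DETAILS", "ABC123"))
print(decode(message))  # ('GET_CAR_DETAILS', 'ABC123')
```

In the real programs the header and payload would be written to and read from a socket rather than concatenated into one bytes object, but the packing and version check are the same.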
This involves adding or modifying about ten lines in the client program's handle_request() function, and adding or modifying about sixteen lines in the server program's handle() method, including code to handle the case where the protocol version read does not match the one expected. Solutions to this and to the following exercises are provided in car_registration_ans.py and car_registration_server_ans.py.

2. Copy the car_registration_server.py program (or use the one developed in Exercise 1), and modify it so that it offers a new action, get_licenses_starting_with. The action should accept a single parameter, a string. The method implementing the action should always return a 2-tuple of (True, list of licenses); there is no error (False) case, since no matches is not an error and simply results in True and an empty list being returned. Retrieve the licenses (the RequestHandler.Cars dictionary's keys) within the scope of a lock, but do all the other work outside the lock to minimize blocking. One efficient way to find matching licenses is to sort the keys and then use the bisect module to find the first matching license and then iterate from there. Another possible approach is to iterate over the licenses, picking out those that start with the given string, perhaps using a list comprehension. Apart from the additional import, the Call dictionary will need an extra couple of lines for the action, and the method to implement the action can be done in fewer than ten lines. This is not difficult, although care is required. A solution that uses the bisect module is provided in car_registration_server_ans.py.

3. Copy the car_registration.py program (or use the one developed in Exercise 1), and modify it to take advantage of the new server (car_registration_server_ans.py). This means changing the retrieve_car_details() function so that if the user enters an invalid license they get prompted to enter the start of a license and then get a list to choose from. In a sample interaction with the new function (with the server already running, and the menu edited slightly to fit on the page), the user asks for a car's details, enters a license that is not registered, and is prompted for the start of a license. If no license starts with the given string they are told so and can try again; if two or more licenses match, each match is shown with a number beside it and the user chooses one by entering its number (or 0 to cancel), after which that car's details are displayed. The change involves deleting one line and adding about twenty more lines. It is slightly tricky because the user must be allowed to get out or to go on at each stage. Make sure that you test the new functionality for all cases (no license starts with the given string, one license starts with it, and two or more start with it). A solution is provided in car_registration_ans.py.
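The bisect-based approach suggested in Exercise 2 can be sketched as follows. bisect_left() finds, in O(log n), where the prefix would be inserted in the sorted keys, and we then iterate forward while keys still start with that prefix. The license values are invented for illustration:

```python
import bisect

def licenses_starting_with(licenses, start):
    # Sort the keys once, then locate the first possible match.
    keys = sorted(licenses)
    index = bisect.bisect_left(keys, start)
    matches = []
    while index < len(keys) and keys[index].startswith(start):
        matches.append(keys[index])
        index += 1
    # No match is not an error: the exercise specifies (True, []).
    return (True, matches)

licenses = {"ABK1234", "AHE2245", "BZX9999"}
print(licenses_starting_with(licenses, "A"))  # (True, ['ABK1234', 'AHE2245'])
print(licenses_starting_with(licenses, "Z"))  # (True, [])
```

In the server, the keys would be copied out of the RequestHandler.Cars dictionary inside the lock, and the sorting and bisecting done outside it.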
Database Programming

For most software developers the term database is usually taken to mean an RDBMS (relational database management system). These systems use tables (spreadsheet-like grids), with rows equating to records and columns equating to fields. The tables and the data they hold are created and manipulated using statements written in SQL (Structured Query Language). Python provides an API (application programming interface) for working with SQL databases, and it is normally distributed with the SQLite database as standard.

Another kind of database is a DBM (Database Manager), which stores any number of key-value items. Python's standard library comes with interfaces to several DBMs, including some that are Unix-specific. DBMs work just like Python dictionaries except that they are normally held on disk rather than in memory, and their keys and values are always bytes objects and may be subject to length constraints. The shelve module covered in this chapter's first section provides a convenient DBM interface that allows us to use string keys and any (picklable) objects as values.

If the available DBMs and the SQLite database are insufficient, the Python Package Index, pypi.python.org/pypi, has a large number of database-related packages, including the bsddb3 DBM ("Berkeley DB"), and interfaces to popular client/server databases such as DB2, Informix, Ingres, MySQL, ODBC, and PostgreSQL.

Using SQL databases requires knowledge of the SQL language and the manipulation of strings of SQL statements. This is fine for those experienced with SQL, but is not very Pythonic. There is another way to interact with SQL databases: use an ORM (Object Relational Mapper). Two of the most popular ORMs for Python are available as third-party libraries: SQLAlchemy (www.sqlalchemy.org) and SQLObject (www.sqlobject.org). One particularly nice feature of using an ORM is that it allows us to use Python syntax, creating objects and calling methods, rather than using raw SQL.
In this chapter we will implement two versions of a program that maintains a list of DVDs and keeps track of each DVD's title, year of release, length in minutes, and director. The first version uses a DBM (via the shelve module) to store its data, and the second version uses the SQLite database. Both programs can also load and save a simple XML format, making it possible, for example, to export DVD data from one program and import it into the other. The SQL-based version offers slightly more functionality than the DBM one, and has a slightly cleaner data design.

DBM Databases

The shelve module provides a wrapper around a DBM that allows us to interact with the DBM as though it were a dictionary, providing that we use only string keys and picklable values. Behind the scenes the shelve module converts the keys and values to and from bytes objects. The shelve module uses the best underlying DBM available, so it is possible that a DBM file saved on one machine won't be readable on another, if the other machine doesn't have the same DBM. One solution is to provide XML import and export for files that must be transportable between machines, something we've done for this section's DVD program, dvds-dbm.py.

For the keys we use the DVDs' titles, and for the values we use tuples holding the director, year, and duration. Thanks to the shelve module we don't have to do any data conversion, and can just treat the DBM object as a dictionary. Since the structure of the program is similar to interactive menu-driven programs that we have seen before, we will focus just on those aspects that are specific to DBM programming.

Here is an extract from the program's main() function, with the menu handling omitted:

    db = None
    try:
        db = shelve.open(filename, protocol=pickle.HIGHEST_PROTOCOL)
        ...
    finally:
        if db is not None:
            db.close()

Here we have opened (or created if it does not exist) the specified DBM file for both reading and writing. Each item's value is saved as a pickle using the specified pickle protocol; existing items can be read even if they were saved using a lower protocol, since Python can figure out the correct protocol to use for reading pickles. At the end the DBM is closed; this has the effect of clearing the DBM's internal cache and ensuring that the disk file reflects any changes that have been made, as well as closing the file.
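The open/use/sync/close life cycle just described can be seen in a minimal standalone sketch. The temporary filename and the sample DVD entry are invented for illustration:

```python
import os
import pickle
import shelve
import tempfile

# An invented throwaway path; a real program would use a user-chosen filename.
filename = os.path.join(tempfile.mkdtemp(), "dvds.dbm")

db = None
try:
    db = shelve.open(filename, protocol=pickle.HIGHEST_PROTOCOL)
    # String key, picklable tuple value: title -> (director, year, duration).
    db["Brazil"] = ("Terry Gilliam", 1985, 142)
    db.sync()  # flush the shelve's internal cache to disk
    director, year, duration = db["Brazil"]
    print(director, year, duration)
finally:
    if db is not None:
        db.close()
```

Reopening the same filename later retrieves the stored tuple unchanged, which is what makes converting a dictionary-based program to a shelve-based one so painless.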
The program offers options to add, edit, list, remove, import, and export DVD data. We will skip importing and exporting the data from and to XML format, since it is very similar to what we have done in earlier chapters. And apart from adding, we will omit most of the user interface code, again because we have seen it before in other contexts.

    def add_dvd(db):
        title = Console.get_string("Title", "title")
        if not title:
            return
        director = Console.get_string("Director", "director")
        if not director:
            return
        year = Console.get_integer("Year", "year", minimum=1896,
                                   maximum=datetime.date.today().year)
        duration = Console.get_integer("Duration (minutes)", "minutes",
                                       minimum=0, maximum=60 * 48)
        db[title] = (director, year, duration)
        db.sync()

This function, like all the functions called by the program's menu, is passed the DBM object (db) as its sole parameter. Most of the function is concerned with getting the DVD's details, and in the penultimate line we store the key-value item in the DBM file, with the DVD's title as the key and the director, year, and duration (pickled together by shelve) as the value. In keeping with Python's usual consistency, DBMs provide the same API as dictionaries, so we don't have to learn any new syntax beyond the shelve.open() function that we saw earlier, and the shelve.Shelf.sync() method that is used to clear the shelve's internal cache and synchronize the disk file's data with the changes that have been applied; in this case, just adding a new item.

    def edit_dvd(db):
        old_title = find_dvd(db, "edit")
        if old_title is None:
            return
        title = Console.get_string("Title", "title", old_title)
        if not title:
            return
        director, year, duration = db[old_title]
        ...
        db[title] = (director, year, duration)
        if title != old_title:
            del db[old_title]
        db.sync()
To be able to edit a DVD, the user must first choose the DVD to work on. This is just a matter of getting the title, since titles are used as keys, with the values holding the other data. Since the necessary functionality is needed elsewhere (e.g., when removing a DVD), we have factored it out into a separate find_dvd() function that we will look at next. If the DVD is found, we get the user's changes, using the existing values as defaults to speed up the interaction. (We have omitted most of the user interface code for this function, since it is almost the same as that used when adding a DVD.) At the end we store the data just as we did when adding. If the title is unchanged this will have the effect of overwriting the associated value, and if the title is different this has the effect of creating a new key-value item, in which case we delete the original item.

    def find_dvd(db, message):
        message = "(Start of) title to " + message
        while True:
            matches = []
            start = Console.get_string(message, "title")
            if not start:
                return None
            for title in db:
                if title.lower().startswith(start.lower()):
                    matches.append(title)
            if len(matches) == 0:
                print("There are no dvds starting with", start)
                continue
            elif len(matches) == 1:
                return matches[0]
            elif len(matches) > DISPLAY_LIMIT:
                print("Too many dvds start with {0}; try entering "
                      "more of the title".format(start))
                continue
            else:
                matches = sorted(matches, key=str.lower)
                for i, match in enumerate(matches):
                    print("{0}: {1}".format(i + 1, match))
                which = Console.get_integer("Number (or 0 to cancel)",
                                            "number", minimum=1,
                                            maximum=len(matches))
                return matches[which - 1] if which != 0 else None

To make finding a DVD as quick and easy as possible, we require the user to type in only one or the first few characters of its title. Once we have the start of the title, we iterate over the DBM and create a list of matches. If there is one match we return it, and if there are several matches (but fewer than DISPLAY_LIMIT, an integer set elsewhere in the program), we display them all in case-insensitive order with a number beside each one, so that the user can choose the title just by entering its number. (The Console.get_integer() function accepts 0 even if the minimum is greater than zero, so that 0 can be used as a cancelation value; this behavior can be switched off by passing allow_zero=False. We can't use Enter, that is, nothing, to mean cancel, since entering nothing means accepting the default.)

    def list_dvds(db):
        start = ""
        if len(db) > DISPLAY_LIMIT:
            start = Console.get_string("List those starting with "
                                       "[Enter=all]", "start")
        print()
        for title in sorted(db, key=str.lower):
            if not start or title.lower().startswith(start.lower()):
                director, year, duration = db[title]
                print("{title} ({year}) {duration} minute{0}, by "
                      "{director}".format(Util.s(duration), **locals()))

Listing all the DVDs (or those whose title starts with a particular substring) is simply a matter of iterating over the DBM's items. The Util.s() function is simply lambda x: "" if x == 1 else "s", so here it returns an "s" if the duration is not one minute.

    def remove_dvd(db):
        title = find_dvd(db, "remove")
        if title is None:
            return
        ans = Console.get_bool("Remove {0}?".format(title), "no")
        if ans:
            del db[title]
            db.sync()

Removing a DVD is a matter of finding the one the user wants to remove, asking for confirmation, and, if we get it, deleting the item from the DBM.

We have now seen how to open (or create) a DBM file using the shelve module, and how to add items to it, edit its items, iterate over its items, and remove items. Unfortunately, there is a flaw in our data design: director names are duplicated, and this could easily lead to inconsistencies; for example, the director Danny DeVito might be entered as "Danny De Vito" for one movie and "Danny DeVito" for another. One solution would be to have two DBM files: the main DVD file with title keys and (year, duration, director id) values, and a director file with director id (an integer) keys and director name values. We avoid this flaw in the next section's SQL database version of the program by using two tables, one for DVDs and another for directors.
SQL Databases

Interfaces to most popular SQL databases are available from third-party modules, and out of the box Python comes with the sqlite3 module (and with the SQLite database), so database programming can be started right away. SQLite is a lightweight SQL database, lacking many of the features of, say, PostgreSQL, but it is very convenient for prototyping, and may prove sufficient in many cases.

To make it as easy as possible to switch between database backends, PEP 249 (Python Database API Specification v2.0) provides an API specification, called the DB-API, that database interfaces ought to honor; the sqlite3 module, for example, complies with the specification, but not all the third-party modules do. There are two major objects specified by the API, the connection object and the cursor object, and the APIs they must support are shown in the two tables that follow. In the case of the sqlite3 module, its connection and cursor objects both provide many additional attributes and methods beyond those required by the DB-API specification.

The SQL version of the DVDs program is dvds-sql.py. The program stores directors separately from the DVD data to avoid duplication, and offers one more menu option that lets the user list the directors. The two tables are shown in the figure below. The program is somewhat longer than the previous section's dvds-dbm.py program, with most of the difference due to the fact that we must use SQL queries rather than perform simple dictionary-like operations, and because we must create the database's tables the first time the program runs.

    dvds:      id | title | year | duration | director_id
    directors: id | name

    Figure: The DVD program's database design

The main() function is similar to before, only this time we call a custom connect() function to make the connection.

    def connect(filename):
        create = not os.path.exists(filename)
        db = sqlite3.connect(filename)
        if create:
            cursor = db.cursor()
            cursor.execute("CREATE TABLE directors ("
                           "id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL, "
                           "name TEXT UNIQUE NOT NULL)")
            cursor.execute("CREATE TABLE dvds ("
                           "id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL, "
                           "title TEXT NOT NULL, "
                           "year INTEGER NOT NULL, "
                           "duration INTEGER NOT NULL, "
                           "director_id INTEGER NOT NULL, "
                           "FOREIGN KEY (director_id) REFERENCES directors)")
            db.commit()
        return db
Table: DB-API Connection Object Methods

    db.close()     Closes the connection to the database (represented by
                   the db object, which is obtained by calling a
                   connect() function)
    db.commit()    Commits any pending transaction to the database; does
                   nothing for databases that don't support transactions
    db.cursor()    Returns a database cursor object through which
                   queries can be executed
    db.rollback()  Rolls back any pending transaction to the state that
                   existed before the transaction began; does nothing
                   for databases that don't support transactions

The sqlite3.connect() function returns a database object, having opened the database file it is given, and created an empty database file if the file did not exist. In view of this, prior to calling sqlite3.connect(), we note whether the database is going to be created from scratch, because if it is, we must create the tables that the program relies on. All queries are executed through a database cursor, available from the database object's cursor() method. Notice that both tables are created with an id field that has an AUTOINCREMENT constraint; this means that SQLite will automatically populate the ids with unique numbers, so we can leave these fields to SQLite when inserting new records.

SQLite supports a limited range of data types, essentially just Booleans, numbers, and strings, but this can be extended using data "adaptors", either the predefined ones such as those for dates and datetimes, or custom ones that we can use to represent any data types we like. The DVDs program does not need this functionality, but if it were required, the sqlite3 module's documentation explains the details. The FOREIGN KEY syntax we have used may not be the same as the syntax for other databases, and in any case it is merely documenting our intention, since SQLite, unlike many other databases, does not enforce relational integrity. (However, SQLite does have a workaround based on sqlite3's genfkey command.) One other SQLite-specific quirk is that its default behavior is to support implicit transactions, so there is no explicit "start transaction" method.

Table: DB-API Cursor Object Attributes and Methods

    c.arraysize         The (readable/writable) number of rows that
                        fetchmany() will return if no size is specified
    c.close()           Closes the cursor, c; this is done automatically
                        when the cursor goes out of scope
    c.description       A read-only sequence of 7-tuples (name,
                        type_code, display_size, internal_size,
                        precision, scale, null_ok), describing each
                        successive column of cursor c
    c.execute(sql,      Executes the SQL query in string sql, replacing
      params)           each placeholder with the corresponding
                        parameter from the params sequence or mapping,
                        if given
    c.executemany(sql,  Executes the SQL query once for each item in the
      seq_of_params)    seq_of_params sequence of sequences or mappings;
                        this method should not be used for operations
                        that create result sets (such as SELECT
                        statements)
    c.fetchall()        Returns a sequence of all the rows that have not
                        yet been fetched (which could be all of them)
    c.fetchmany(size)   Returns a sequence of rows (each row itself
                        being a sequence); size defaults to arraysize
    c.fetchone()        Returns the next row of the query result set as
                        a sequence, or None when the results are
                        exhausted; raises an exception if there is no
                        result set
    c.rowcount          The read-only row count for the last operation
                        (e.g., SELECT, INSERT, UPDATE, or DELETE), or -1
                        if not available or not applicable
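The table-creation logic in connect() can be exercised without touching the disk by using SQLite's special ":memory:" filename (mentioned again at the end of this section). This sketch creates the same two tables and confirms they exist by querying sqlite_master:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # a throwaway in-memory database
cursor = db.cursor()
cursor.execute("CREATE TABLE directors ("
               "id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL, "
               "name TEXT UNIQUE NOT NULL)")
cursor.execute("CREATE TABLE dvds ("
               "id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL, "
               "title TEXT NOT NULL, "
               "year INTEGER NOT NULL, "
               "duration INTEGER NOT NULL, "
               "director_id INTEGER NOT NULL, "
               "FOREIGN KEY (director_id) REFERENCES directors)")
db.commit()

# List the user-visible tables (AUTOINCREMENT also creates an internal
# sqlite_sequence bookkeeping table, which we filter out here).
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' "
               "AND name NOT LIKE 'sqlite_%' ORDER BY name")
print([row[0] for row in cursor.fetchall()])  # ['directors', 'dvds']
```

An in-memory database like this is also convenient for unit-testing code that normally runs against a file-based database.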
    def add_dvd(db):
        title = Console.get_string("Title", "title")
        if not title:
            return
        director = Console.get_string("Director", "director")
        if not director:
            return
        year = Console.get_integer("Year", "year", minimum=1896,
                                   maximum=datetime.date.today().year)
        duration = Console.get_integer("Duration (minutes)", "minutes",
                                       minimum=0, maximum=60 * 48)
        director_id = get_and_set_director(db, director)
        cursor = db.cursor()
        cursor.execute("INSERT INTO dvds "
                       "(title, year, duration, director_id) "
                       "VALUES (?, ?, ?, ?)",
                       (title, year, duration, director_id))
        db.commit()

This function starts with the same code as the equivalent function from the dvds-dbm.py program, but once we have gathered the data, it is quite different. The director the user entered may or may not be in the directors table, so we have a get_and_set_director() function that inserts the director if they are not already in the database, and in either case returns the director's id, ready for it to be inserted into the dvds table. With all the data available, we execute an SQL INSERT statement. We don't need to specify a record id, since SQLite will automatically provide one for us.

In the query we have used question marks (?) as placeholders; each one is replaced by the corresponding value in the sequence that follows the string containing the SQL statement. Named placeholders can also be used, as we will see when we look at editing a record. Although it is possible to avoid using placeholders and simply format the SQL string with the data embedded into it, we recommend always using placeholders and leaving the burden of correctly encoding and escaping the data items to the database module. Another benefit of using placeholders is that they improve security, since they prevent arbitrary SQL from being maliciously injected into a query.

    def get_and_set_director(db, director):
        director_id = get_director_id(db, director)
        if director_id is not None:
            return director_id
        cursor = db.cursor()
        cursor.execute("INSERT INTO directors (name) VALUES (?)",
                       (director,))
        db.commit()
        return get_director_id(db, director)

This function returns the id of the given director, inserting a new director record if necessary. If a record is inserted, we retrieve its id using the same get_director_id() function that we tried in the first place.

    def get_director_id(db, director):
        cursor = db.cursor()
        cursor.execute("SELECT id FROM directors WHERE name=?",
                       (director,))
        fields = cursor.fetchone()
        return fields[0] if fields is not None else None
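The get_and_set_director()/get_director_id() pair is a common "get or create" pattern, and it can be tried in isolation against an in-memory database. The table mirrors the program's directors table; the sample name is invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE directors ("
           "id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "name TEXT UNIQUE NOT NULL)")

def get_director_id(db, director):
    cursor = db.cursor()
    cursor.execute("SELECT id FROM directors WHERE name=?", (director,))
    fields = cursor.fetchone()  # a row tuple, or None if there is no match
    return fields[0] if fields is not None else None

def get_and_set_director(db, director):
    # Try the lookup first; only insert when the director is unknown.
    director_id = get_director_id(db, director)
    if director_id is not None:
        return director_id
    cursor = db.cursor()
    cursor.execute("INSERT INTO directors (name) VALUES (?)", (director,))
    db.commit()
    return get_director_id(db, director)

a = get_and_set_director(db, "Danny DeVito")
b = get_and_set_director(db, "Danny DeVito")  # second call reuses the row
print(a == b)  # True
```

The UNIQUE constraint on name is what guarantees that fetchone() is sufficient: there can never be more than one matching row.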
The get_director_id() function returns the id of the given director, or None if there is no such director in the database. We use the fetchone() method because there is either zero or one matching record. (We know that there are no duplicate directors because the directors table's name field has a UNIQUE constraint, and in any case we always check for the existence of a director before adding a new one.) The fetch methods always return a sequence of fields (or None if there are no more records), even if, as here, we have asked to retrieve only a single field.

    def edit_dvd(db):
        title, identity = find_dvd(db, "edit")
        if title is None:
            return
        title = Console.get_string("Title", "title", title)
        if not title:
            return
        cursor = db.cursor()
        cursor.execute("SELECT dvds.year, dvds.duration, directors.name "
                       "FROM dvds, directors "
                       "WHERE dvds.director_id = directors.id AND "
                       "dvds.id=:id", dict(id=identity))
        year, duration, director = cursor.fetchone()
        director = Console.get_string("Director", "director", director)
        if not director:
            return
        year = Console.get_integer("Year", "year", year, minimum=1896,
                                   maximum=datetime.date.today().year)
        duration = Console.get_integer("Duration (minutes)", "minutes",
                                       duration, minimum=0,
                                       maximum=60 * 48)
        director_id = get_and_set_director(db, director)
        cursor.execute("UPDATE dvds SET title=:title, year=:year, "
                       "duration=:duration, director_id=:director_id "
                       "WHERE id=:identity", locals())
        db.commit()

To edit a DVD record we must first find the record the user wants to work on. If a record is found, we begin by giving the user the opportunity to change the title. Then we retrieve the other fields so that we can provide the existing values as defaults, to minimize what the user must type, since they can just press Enter to accept a default. Here we have used named placeholders (of the form :name), and must therefore provide the corresponding values using a mapping. For the SELECT statement we have used a freshly created dictionary, and for the UPDATE statement we have used the dictionary returned by locals(). We could use a fresh dictionary for both, in which case for the UPDATE we would pass dict(title=title, year=year, duration=duration, director_id=director_id, identity=identity) instead of locals().

Once we have all the fields and the user has entered any changes they want, we retrieve the corresponding director id (inserting a new director record if necessary), and then update the database with the new data. We have taken the simplistic approach of updating all the record's fields rather than only those which have actually been changed. When we used a DBM file, the DVD title was used as the key, so if the title changed, we created a new key-value item and deleted the original. But here every DVD record has a unique id which is set when the record is first inserted, so we are free to change the value of any other field with no further work necessary.

    def find_dvd(db, message):
        message = "(Start of) title to " + message
        cursor = db.cursor()
        while True:
            start = Console.get_string(message, "title")
            if not start:
                return (None, None)
            cursor.execute("SELECT title, id FROM dvds "
                           "WHERE title LIKE ? ORDER BY title",
                           (start + "%",))
            records = cursor.fetchall()
            if len(records) == 0:
                print("There are no dvds starting with", start)
                continue
            elif len(records) == 1:
                return records[0]
            elif len(records) > DISPLAY_LIMIT:
                print("Too many dvds ({0}) start with {1}; try entering "
                      "more of the title".format(len(records), start))
                continue
            else:
                for i, record in enumerate(records):
                    print("{0}: {1}".format(i + 1, record[0]))
                which = Console.get_integer("Number (or 0 to cancel)",
                                            "number", minimum=1,
                                            maximum=len(records))
                return records[which - 1] if which != 0 else (None, None)

This function performs the same service as the find_dvd() function in the dvds-dbm.py program, and returns a 2-tuple of (title, DVD id) or (None, None), depending on whether a record was found. Instead of iterating over all the data, we have used the SQL wildcard operator (%), so only the relevant records are retrieved.
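Both techniques just shown, the LIKE prefix query used by find_dvd() and the named-placeholder UPDATE used by edit_dvd(), can be seen together in a small sketch against an in-memory database. The table is simplified and the rows are invented sample data (one row deliberately has the wrong year so there is something to update):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dvds (id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "title TEXT NOT NULL, year INTEGER NOT NULL)")
db.executemany("INSERT INTO dvds (title, year) VALUES (?, ?)",
               [("Brazil", 1984),  # wrong year on purpose; fixed below
                ("Bride of Frankenstein", 1935),
                ("Casablanca", 1942)])

# Question-mark placeholder plus the SQL % wildcard: the database,
# not Python, does the prefix matching and the sorting.
cursor = db.cursor()
cursor.execute("SELECT title, id FROM dvds WHERE title LIKE ? "
               "ORDER BY title", ("Br" + "%",))
records = cursor.fetchall()
print([title for title, identity in records])  # ['Brazil', 'Bride of Frankenstein']

# Named placeholders (:name) take their values from a mapping; the
# program passes locals(), but any dict with the right keys works.
identity = records[0][1]
cursor.execute("UPDATE dvds SET year=:year WHERE id=:identity",
               dict(year=1985, identity=identity))
db.commit()
```

Because the placeholder values are passed separately from the SQL string, a title containing a quote character needs no special escaping anywhere in this code.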
And since we expect the number of matching records to be small, we fetch them all at once into a sequence of sequences. If there is more than one matching record, and few enough to display, we print the records with a number beside each one so that the user can choose the one they want, in much the same way as they could in the dvds-dbm.py program.

    def list_dvds(db):
        cursor = db.cursor()
        sql = ("SELECT dvds.title, dvds.year, dvds.duration, "
               "directors.name FROM dvds, directors "
               "WHERE dvds.director_id = directors.id")
        start = None
        if dvd_count(db) > DISPLAY_LIMIT:
            start = Console.get_string("List those starting with "
                                       "[Enter=all]", "start")
            sql += " AND dvds.title LIKE ?"
        sql += " ORDER BY dvds.title"
        print()
        if start is None:
            cursor.execute(sql)
        else:
            cursor.execute(sql, (start + "%",))
        for record in cursor:
            print("{0[0]} ({0[1]}) {0[2]} minutes, by {0[3]}".format(
                  record))

To list the details of each DVD we do a SELECT query that joins the two tables, adding a second element to the WHERE clause if there are more records (returned by our dvd_count() function) than the display limit. We then execute the query and iterate over the results. Each record is a sequence whose fields are those matching the SELECT query.

    def dvd_count(db):
        cursor = db.cursor()
        cursor.execute("SELECT COUNT(*) FROM dvds")
        return cursor.fetchone()[0]

We factored these lines out into a separate function because we need them in several different functions. We have omitted the code for the list_directors() function, since it is structurally very similar to the list_dvds() function, only simpler, because it lists only one field (name).

    def remove_dvd(db):
        title, identity = find_dvd(db, "remove")
        if title is None:
            return
        ans = Console.get_bool("Remove {0}?".format(title), "no")
        if ans:
            cursor = db.cursor()
            cursor.execute("DELETE FROM dvds WHERE id=?", (identity,))
            db.commit()
9,017 | return ans console get_bool("remove { }?format(title)"no"if anscursor db cursor(cursor execute("delete from dvds where id=?"(identity,)db commit(this function is called when the user asks to delete recordand it is very similar to the equivalent function in the dvds-dbm py program we have now completed our review of the dvds-sql py program and seen how to create database tablesselect recordsiterate over the selected recordsand insertupdateand delete records using the execute(method we can execute any arbitrary sql statement that the underlying database supports sqlite offers much more functionality than we needed hereincluding an auto-commit mode (and other kinds of transaction control)and the ability to create functions that can be executed inside sql queries it is also possible to provide factory function to control what is returned for each fetched record ( dictionary or custom type instead of sequence of fieldsadditionallyit is possible to create in-memory sqlite databases by passing ":memory:as the filename summary ||back in we saw several different ways of saving and loading data from diskand in this we have seen how to interact with data types that hold their data on disk rather than in memory for dbm files the shelve module is very convenient since it stores string-object items if we want complete control we can of course use any of the underlying dbms directly one nice feature of the shelve module and of the dbms generally is that they use the dictionary apimaking it easy to retrieveaddeditand remove itemsand to convert programs that use dictionaries to use dbms instead one small inconvenience of dbms is that for relational data we must use separate dbm file for each key-value tablewhereas sqlite stores all the data in single file for sql databasessqlite is useful for prototypingand in many cases in its own rightand it has the advantage of being supplied with python as standard we have seen how to obtain database object using the connect(function 
and how to execute SQL queries (such as CREATE TABLE, SELECT, INSERT, UPDATE, and DELETE) using the database cursor's execute() method. Python offers a complete range of choices for disk-based and in-memory data storage, from binary files, text files, XML files, and pickles, to DBMs and SQL
9,018 | database programming databasesand this makes it possible to choose exactly the right approach for any given situation exercise ||write an interactive console program to maintain list of bookmarks for each bookmark keep two pieces of informationthe url and name here is an example of the program in actionbookmarks (bookmarks dbm( programming in python ( pyqt ( python ( qtrac ltd ( scientific tools for python ( )dd ( )dit ( )ist ( )emove ( )uit [ ] number of bookmark to edit url [name [pyqt]pyqt (python bindings for gui librarythe program should allow the user to addeditlistand remove bookmarks to make identifying bookmark for editing or removing as easy as possiblelist the bookmarks with numbers and ask the user to specify the number of the bookmark they want to edit or remove store the data in dbm file using the shelve module and with names as keys and urls as values structurally the program is very similar to dvds-dbm pyexcept for the find_bookmark(function which is much simpler than find_dvd(since it only has to get an integer from the user and use that to find the corresponding bookmark' name as courtesy to usersif no protocol is specifiedprepend the url the user adds or edits with the entire program can be written in fewer than lines (assuming the use of the console module for console get_string(and similara solution is provided in bookmarks py |
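The exercise's storage layer can be sketched with shelve. This is a minimal sketch under stated assumptions: the filename is a throwaway temporary path, and only the add-and-retrieve path (with the protocol-prepending courtesy) is shown:

```python
import os
import shelve
import tempfile

# Minimal sketch of the bookmarks exercise's storage layer: names as
# keys, URLs as values, with "http://" prepended when no protocol is
# given. The temporary filename is illustrative.
filename = os.path.join(tempfile.mkdtemp(), "bookmarks")
db = shelve.open(filename)
try:
    name, url = "Python", "www.python.org"
    if "://" not in url:
        url = "http://" + url   # the courtesy prepend described above
    db[name] = url
    stored = db[name]
finally:
    db.close()
```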
9,019 | python' regular expression language the regular expression module regular expressions ||| regular expression is compact notation for representing collection of strings what makes regular expressions so powerful is that single regular expression can represent an unlimited number of strings--providing they meet the regular expression' requirements regular expressions (which we will mostly call "regexesfrom now onare defined using mini-language that is completely different from python--but python includes the re module through which we can seamlessly create and use regexes regexes are used for five main purposesparsingidentifying and extracting pieces of text that match certain criteria--regexes are used for creating ad hoc parsers and also by traditional parsing tools searchinglocating substrings that can have more than one formfor examplefinding any of "pet png""pet jpg""pet jpeg"or "pet svgwhile avoiding "carpet pngand similar searching and replacingreplacing everywhere the regex matches with stringfor examplefinding "bicycleor "human powered vehicleand replacing either with "bikesplitting stringssplitting string at each place the regex matchesfor examplesplitting everywhere colon-space or equals ("or "="occurs validationchecking whether piece of text meets some criteriafor examplecontains currency symbol followed by digits the regexes used for searchingsplittingand validation are often fairly small and understandable,making them ideal for these purposes howeveralthough good book on regular expressions is mastering regular expressions by jeffrey friedlisbn it does not explicitly cover pythonbut python' re module offers very similar functionality to the perl regular expression engine that the book covers in depth |
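Four of the five purposes listed above can be sketched in a few lines each; the sample strings are made up for illustration:

```python
import re

# searching: any of pet.png/pet.jpg/pet.jpeg/pet.svg, but not carpet.png
pet_re = re.compile(r"\bpet\.(?:png|jpe?g|svg)\b")
found = bool(pet_re.search("see pet.jpg here"))
avoided = bool(pet_re.search("see carpet.png here"))

# searching and replacing: "bicycle" or "human powered vehicle" -> "bike"
replaced = re.sub(r"bicycle|human powered vehicle", "bike",
                  "a bicycle is a human powered vehicle")

# splitting: at colon-space or equals
parts = re.split(r": |=", "key1=value1")

# validation: a currency symbol followed by digits
valid = bool(re.search(r"[$\u00a3\u20ac]\d+", "price: $250"))
```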
9,020 | parsing xml files regular expressions regexes are widely and successfully used to create parsersthey do have limitation in that areathey are only able to deal with recursively structured text if the maximum level of recursion is known alsolarge and complex regexes can be difficult to read and maintain so apart from simple casesfor parsing the best approach is to use tool designed for the purpose--for exampleuse dedicated xml parser for xml if such parser isn' availablethen an alternative to using regexes is to use generic parsing toolan approach that is covered in at its simplest regular expression is an expression ( literal character)optionally followed by quantifier more complex regexes consist of any number of quantified expressions and may include assertions and may be influenced by flags this first section introduces and explains all the key regular expression concepts and shows pure regular expression syntax--it makes minimal reference to python itself then the second section shows how to use regular expressions in the context of python programmingdrawing on all the material covered in the earlier sections readers familiar with regular expressions who just want to learn how they work in python could skip to the second section the covers the complete regex language offered by the re moduleincluding all the assertions and flags we indicate regular expressions in the text using boldshow where they match using underliningand show captures using shading python' regular expression language ||in this section we look at the regular expression language in four subsections the first subsection shows how to match individual characters or groups of charactersfor examplematch aor match bor match either or the second subsection shows how to quantify matchesfor examplematch onceor match at least onceor match as many times as possible the third subsection shows how to group subexpressions and how to capture matching textand the final subsection shows how to use the 
language's assertions and flags to affect how regular expressions work.

Characters and Character Classes

The simplest expressions are just literal characters, such as a letter or a digit, and if no quantifier is explicitly given it is taken to be "match one occurrence". For example, the regex tune consists of four expressions, each implicitly quantified to match once, so it matches one t followed by one u followed by one n followed by one e, and hence matches the strings tune and attuned.
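A minimal sketch of this implicit one-occurrence rule, using Python's re module (covered in detail later in the chapter):

```python
import re

# The literal regex tune matches inside both "tune" and "attuned",
# but not where the four characters don't appear consecutively.
tune_re = re.compile(r"tune")
hits = [bool(tune_re.search(word)) for word in ("tune", "attuned", "tenu")]
```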
9,021 | string escapes although most characters can be used as literalssome are "special characters"--these are symbols in the regex language and so must be escaped by preceding them with backslash (\to use them as literals the special characters are ^$?+*{}[]()most of python' standard string escapes can also be used within regexesfor example\ for newline and \ for tabas well as hexadecimal escapes for characters using the \xhh\uhhhhand \uhhhhhhhh syntaxes in many casesrather than matching one particular character we want to match any one of set of characters this can be achieved by using character class--one or more characters enclosed in square brackets (this has nothing to do with python classand is simply the regex term for "set of charactersa character class is an expressionand like any other expressionif not explicitly quantified it matches exactly one character (which can be any of the characters in the character classfor examplethe regex [ea] matches both red and radarbut not read similarlyto match single digit we can use the regex [ for convenience we can specify range of characters using hyphenso the regex [ - also matches digit it is possible to negate the meaning of character class by following the opening bracket with caretso [^ - matches any character that is not digit note that inside character classapart from \the special characters lose their special meaningalthough in the case of it acquires new meaning (negationif it is the first character in the character classand otherwise is simply literal caret alsosignifies character range unless it is the first characterin which case it is literal hyphen since some sets of characters are required so frequentlyseveral have shorthand forms--these are shown in table with one exception the shorthands can be used inside character setsso for examplethe regex [\da-fa-fmatches any hexadecimal digit the exception is which is shorthand outside character class but matches literal inside character class quantifiers | 
quantifier has the form { ,nwhere and are the minimum and maximum times the expression the quantifier applies to must match for exampleboth { , } { , and { , match feelbut neither matches felt writing quantifier after every expression would soon become tediousand is certainly difficult to read fortunatelythe regex language supports several convenient shorthands if only one number is given in the quantifier it is taken to be both the minimum and the maximumso { is the same as { , and as we noted in the preceding sectionif no quantifier is explicitly givenit is assumed to be one ( { , or { })thereforeee is the same as { , } { , and { } { }so both { and ee match feel but not felt |
Table: character class shorthands

.     Matches any character except newline (or any character at all with the re.DOTALL flag); inside a character class it matches a literal .
\\    Matches a literal \
\d    Matches a Unicode digit, or [0-9] with the re.ASCII flag
\D    Matches a Unicode nondigit, or [^0-9] with the re.ASCII flag
\s    Matches Unicode whitespace, or [ \t\n\r\f\v] with the re.ASCII flag
\S    Matches Unicode nonwhitespace, or [^ \t\n\r\f\v] with the re.ASCII flag
\w    Matches a Unicode "word" character, or [a-zA-Z0-9_] with the re.ASCII flag
\W    Matches a Unicode non-"word" character, or [^a-zA-Z0-9_] with the re.ASCII flag

Having a different minimum and maximum is often convenient. For example, to match travelled and traveled (both legitimate spellings), we could use either travel{1,2}ed or travell{0,1}ed. The {0,1} quantification is so often used that it has its own shorthand form, ?, so another way of writing the regex (and the one most likely to be used in practice) is travell?ed.

Two other quantification shorthands are provided: +, which stands for {1,n} ("at least one"), and *, which stands for {0,n} ("any number of"); in both cases n is the maximum number allowed for a quantifier, a large implementation-defined value. All the quantifiers are shown in the quantifiers table.

The + quantifier is very useful. For example, to match integers we could use \d+, since this matches one or more digits; in a string such as 4588.91 it would match in two places (4588 and 91). Sometimes typos are the result of pressing a key for too long. We could use the regex bevel+ed to match the legitimate beveled and bevelled, and the incorrect bevellled. If we wanted to standardize on the single-l spelling, and match only occurrences that had two or more ls, we could use bevell+ed to find them.

The * quantifier is less useful, simply because it can so often lead to unexpected results. For example, supposing that we want to find lines that contain comments in Python files, we might try searching for #*, but this regex will match any line whatsoever, including blank lines, because the meaning is "match any number of #s"--and
that includes none. As a rule of thumb for those new to regexes, avoid using * at all, and if you do use it (or if you use ?), make sure there is at least one other expression in the regex that has a nonzero quantifier--so at least one quantifier other than * or ?, since both of these can match their expression zero times.
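These quantifiers, and the earlier character-class examples, can be tried directly. A minimal sketch (the sample strings are illustrative):

```python
import re

# ? is shorthand for {0,1}: matches both accepted spellings, not three ls
spellings = [bool(re.fullmatch(r"travell?ed", word))
             for word in ("traveled", "travelled", "travellled")]

# + is shorthand for {1,n}: \d+ matches each run of one or more digits
numbers = re.findall(r"\d+", "4588.91")

# character class: r[ea]d matches "red" and inside "radar", but not "read"
reds = [bool(re.search(r"r[ea]d", word))
        for word in ("red", "radar", "read")]
```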
9,023 | table regular expression quantifiers syntax meaning eor { , greedily match zero or one occurrence of expression ?or { , }nongreedily match zero or one occurrence of expression eor { ,greedily match one or more occurrences of expression +or { ,}nongreedily match one or more occurrences of expression eor { ,greedily match zero or more occurrences of expression *or { ,}nongreedily match zero or more occurrences of expression {mmatch exactly occurrences of expression { ,greedily match at least occurrences of expression { ,}nongreedily match at least occurrences of expression {,ngreedily match at most occurrences of expression {, }nongreedily match at most occurrences of expression { ,ngreedily match at least and at most occurrences of expression nongreedily match at least and at most occurrences of expression { , }it is often possible to convert uses to uses and vice versa for examplewe could match "tasselledwith at least one using tassell*ed or tassel+edand match those with two or more ls using tasselll*ed or tassell+ed if we use the regex \dit will match but why does it match all the digitsrather than just the first oneby defaultall quantifiers are greedy--they match as many characters as they can we can make any quantifier nongreedy (also called minimalby following it with symbol (the question mark has two different meanings--on its own it is shorthand for the { , quantifierand when it follows quantifier it tells the quantifier to be nongreedy for example\ +can match the string in three different places and here is another example\ ?matches zero or one digitsbut prefers to match none since it is nongreedy--on its own it suffers the same problem as in that it will match nothingthat isany text at all nongreedy quantifiers can be useful for quick and dirty xml and html parsing for exampleto match all the image tagswriting (match one "<"then one " "then one " "then one " "then zero or more of any character apart from newlinethen one ">"will not work because the 
part is greedy and will match everything, including the tag's closing >, and will keep going until it reaches the last > in the entire text.
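The difference between the greedy pattern and its corrections can be seen directly. A small sketch with a made-up two-tag string:

```python
import re

# .* swallows past the first >, while [^>]* and the nongreedy .*? stop at it.
text = '<img src=a.png> and <img src=b.png>'
greedy = re.search(r"<img.*>", text).group()
stop_at_gt = re.search(r"<img[^>]*>", text).group()
nongreedy = re.search(r"<img.*?>", text).group()
```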
9,024 | regular expressions three solutions present themselves (apart from using proper parserone is ]*(match characters and then the tag' closing character)another is (match <imgthen any number of charactersbut nongreedilyso it will stop immediately before the tag' closing >and then the >)and third combines bothas in ]*?none of them is correctthoughsince they can all match which is not valid since we know that an image tag must have src attributea more accurate regex is ]*?src=\ +[^>]*?this matches the literal characters <imgthen one or more whitespace charactersthen nongreedily zero or more of anything except (to skip any other attributes such as alt)then the src attribute (the literal characters srcthen at least one "wordcharacter)and then any other non-characters (including noneto account for any other attributesand finally the closing grouping and capturing |in practical applications we often need regexes that can match any one of two or more alternativesand we often need to capture the match or some part of the match for further processing alsowe sometimes want quantifier to apply to several expressions all of these can be achieved by grouping with ()and in the case of alternatives using alternation with alternation is especially useful when we want to match any one of several quite different alternatives for examplethe regex aircraft|airplane|jet will match any text that contains "aircraftor "airplaneor "jetthe same thing can be achieved using the regex air(craft|plane)|jet herethe parentheses are used to group expressionsso we have two outer expressionsair(craft|planeand jet the first of these has an inner expressioncraft|planeand because this is preceded by air the first outer expression can match only "aircraftor "airplaneparentheses serve two different purposes--to group expressions and to capture the text that matches an expression we will use the term group to refer to grouped expression whether it captures or notand capture and capture group to refer 
to a captured group. If we used the regex (aircraft|airplane|jet) it would not only match any of the three expressions, but would also capture whichever one was matched, for later reference. Compare this with the regex (air(craft|plane)|jet), which has two captures if the first expression matches ("aircraft" or "airplane" as the first capture and "craft" or "plane" as the second capture), and one capture if the second expression matches ("jet"). We can switch off the capturing effect by following an opening parenthesis with ?:, so for example, (air(?:craft|plane)|jet) will have only one capture if it matches ("aircraft" or "airplane" or "jet").

A grouped expression is an expression and so can be quantified like any other expression; the quantity is assumed to be one unless explicitly given. For
9,025 | exampleif we have read text file with lines of the form key=valuewhere each key is alphanumericthe regex (\ +)=+will match every line that has nonempty key and nonempty value (recall that matches anything except newlines and for every line that matchestwo captures are madethe first being the key and the second being the value for examplethe key=value regular expression will match the entire line topicphysical geography with the two captures shown shaded notice that the second capture includes some whitespaceand that whitespace before the is not accepted we could refine the regex to be more flexible in accepting whitespaceand to strip off unwanted whitespace using somewhat longer version\ ]*(\ +)\ ]*=\ ]*+this matches the same line as before and also lines that have whitespace around the signbut with the first capture having no leading or trailing whitespaceand the second capture having no leading whitespace for exampletopic physical geography we have been careful to keep the whitespace matching parts outside the capturing parenthesesand to allow for lines that have no whitespace at all we did not use \ to match whitespace because that matches newlines (\nwhich could lead to incorrect matches that span lines ( if the re multiline flag is usedand for the value we did not use \ to match nonwhitespace because we want to allow for values that contain whitespace ( english sentencesto avoid the second capture having trailing whitespace we would need more sophisticated regexwe will see this in the next subsection captures can be referred to using backreferencesthat isby referring back to an earlier capture group one syntax for backreferences inside regexes themselves is \ where is the capture number captures are numbered starting from one and increasing by one going from left to right as each new (capturingleft parenthesis is encountered for exampleto simplistically match duplicated words we can use the regex (\ +)\ +\ which matches "word"then at least one 
whitespace, and then the same word as was captured. (Capture number 0 is created automatically, without the need for parentheses; it holds the entire match, that is, what we show underlined.) We will see a more sophisticated way to match duplicate words later.

In long or complicated regexes it is often more convenient to use names rather than numbers for captures. This can also make maintenance easier, since adding or removing capturing parentheses may change the numbers but won't affect names. To name a capture we follow the opening parenthesis with ?P<name>. For example, (?P<key>\w+)=(?P<value>.+) has two captures called "key" and "value". (Note that backreferences cannot be used inside character classes, that is, inside [].) The syntax for backreferences to named captures inside a
9,026 | regular expressions regex is (? =namefor example(? \ +)\ +(? =wordmatches duplicate words using capture called "wordassertions and flags |one problem that affects many of the regexes we have looked at so far is that they can match more or different text than we intended for examplethe regex aircraft|airplane|jet will match "waterjetand "jetskias well as "jetthis kind of problem can be solved by using assertions an assertion does not match any textbut instead says something about the text at the point where the assertion occurs one assertion is \ (word boundary)which asserts that the character that precedes it must be "word(\wand the character that follows it must be non"word(\ )or vice versa for examplealthough the regex jet can match twice in the text the jet and jetski are noisythat isthe jet and jetski are noisythe regex \bjet\ will match only oncethe jet and jetski are noisy in the context of the original regexwe could write it either as \baircraft\ |\bairplane\ |\bjet\ or more clearly as \ (?:aircraft|airplane|jet)\bthat isword boundarynoncapturing expressionword boundary many other assertions are supportedas shown in table we could use assertions to improve the clarity of key=value regexfor exampleby changing it to ^(\ +)=([^\ ]+and setting the re multiline flag to ensure that each key=value is taken from single line with no possibility of spanning lines-providing no part of the regex matches newlineso we can' usesay\ (the flags are shown in table their syntaxes are described at the end of this subsectionand examples are given in the next section and if we want to strip whitespace from the ends and use named capturesthe regex becomes^\ ]*(? \ +)\ ]*=\ ]*(? 
[^\ ]+)(?<!\ ]even though this regex is designed for fairly simple taskit looks quite complicated one way to make it more maintainable is to include comments in it this can be done by adding inline comments using the syntax (?#the comment)but in practice comments like this can easily make the regex even more difficult to read much nicer solution is to use the re verbose flag--this allows us to freely use whitespace and normal python comments in regexeswith the one constraint that if we need to match whitespace we must either use \ or character class such as here' the key=value regex with comments^\ ](? \ +\ ]*=\ ](? [^\ ]+(?<!\ ]start of line and optional leading whitespace the key text the equals with optional surrounding whitespace the value text negative lookbehind to avoid trailing whitespace regex flags |
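Here is a runnable sketch of this commented regex, using re.VERBOSE, named captures, and the negative lookbehind; the input line is illustrative:

```python
import re

# The key=value regex with comments: the negative lookbehind ensures the
# value capture does not end in a space or tab.
key_value_re = re.compile(r"""
    ^[ \t]*             # start of line and optional leading whitespace
    (?P<key>\w+)        # the key text
    [ \t]*=[ \t]*       # the equals with optional surrounding whitespace
    (?P<value>[^\n]+    # the value text ...
        (?<![ \t]))     # ... with trailing whitespace excluded
    """, re.VERBOSE)

match = key_value_re.match("topic = physical geography  ")
key = match.group("key")
value = match.group("value")
```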
9,027 | table regular expression assertions symbo meaning matches at the startalso matches after each newline with the re multiline flag matches at the endalso matches before each newline with the re multiline flag \ matches at the start \ \ matches at "wordboundaryinfluenced by the re ascii flag--inside character class this is the escape for the backspace character matches at non-"wordboundaryinfluenced by the re ascii flag \ matches at the end (?=ematches if the expression matches at this assertion but does not advance over it--called lookahead or positive lookahead (?!ematches if the expression does not match at this assertion and does not advance over it--called negative lookahead (?<=ematches if the expression matches immediately before this assertion--called positive lookbehind (?<!ematches if the expression does not match immediately before this assertion--called negative lookbehind raw strings in the context of python program we would normally write regex like this inside raw triple quoted string--raw so that we don' have to double up the backslashesand triple quoted so that we can spread it over multiple lines in addition to the assertions we have discussed so farthere are additional assertions which look at the text in front of (or behindthe assertion to see whether it matches (or does not matchan expression we specify the expressions that can be used in lookbehind assertions must be of fixed length (so the quantifiers ?+and cannot be usedand numeric quantifiers must be of fixed sizefor example{ }in the case of the key=value regexthe negative lookbehind assertion means that at the point it occurs the preceding character must not be space or tab this has the effect of ensuring that the last character captured into the "valuecapture group is not space or tab (yet without preventing spaces or tabs from appearing inside the captured textlet' consider another example suppose we are reading multiline text that contains the names "helen patricia sharman""jim 
sharman""sharman joshi""helen kelly"and so onand we want to match "helen patricia"but only when referring to "helen patricia sharmanthe easiregex flags |
9,028 | regular expressions est way is to use the regex \ (helen\ +patricia)\ +sharman\ but we could also achieve the same thing using lookahead assertionfor example\ (helen\ +patricia)(?=\ +sharman\bthis will match "helen patriciaonly if it is preceded by word boundary and followed by whitespace and "sharmanending at word boundary to capture the particular variation of the forenames that is used ("helen""helen "or "helen patricia")we could make the regex slightly more sophisticatedfor example\ (helen(?:\ +(?: |patricia))?)\ +(?=sharman\bthis matches word boundary followed by one of the forename forms--but only if this is followed by some whitespace and then "sharmanand word boundary note that only two syntaxes perform capturing(eand (?penone of the other parenthesized forms captures this makes perfect sense for the lookahead and lookbehind assertions since they only make statement about what follows or precedes them--they are not part of the matchbut rather affect whether match is made it also makes sense for the last two parenthesized forms that we will now consider we saw earlier how we can backreference capture inside regex either by number ( \ or by name ( (? 
=name)it is also possible to match conditionally depending on whether an earlier match occurred the syntaxes are (?(id)yes_expand (?(id)yes_exp|no_expthe id is the name or number of an earlier capture that we are referring to if the capture succeeded the yes_exp will be matched here if the capture failed the no_exp will be matched if it is given let' consider an example suppose we want to extract the filenames referred to by the src attribute in html img tags we will begin just by trying to match the src attributebut unlike our earlier attempt we will account for the three forms that the attribute' value can takesingle quoteddouble quotedand unquoted here is an initial attemptsrc=(["'])([^"'>]+)\ the ([^"'>]+part captures greedy match of at least one character that isn' quote or this regex works fine for quoted filenamesand thanks to the \ matches only when the opening and closing quotes are the same but it does not allow for unquoted filenames to fix this we must make the opening quote optional and therefore match only if it is present here is revised regexsrc=(["'])?([^"'>]+)(?( )\ we did not provide no_exp since there is nothing to match if no quote is given unfortunatelythis doesn' work quite right it will work fine for quoted filenamesbut for unquoted filenames it will work only if the src attribute is the last attribute in the tagotherwise it will incorrectly match text into the next attribute the solution is to treat the two cases (quoted and unquotedseparatelyand to use alternationsrc=((["'])([^\ >]+?)\ |([^">]+)now let' see the regex in contextcomplete with named groupsnonmatching parenthesesand comments |
9,029 | <img\ [^>]*src(?(? ["'](? [^\ >]+?(? =quote(? [^">]+[^>]* start of the tag any attributes that precede the src start of the src attribute opening quote image filename closing quote matching the opening quote ---or alternatively--unquoted image filename any attributes that follow the src end of the tag the indentation is just for clarity the noncapturing parentheses are used for alternation the first alternative matches quote (either single or double)then the image filename (which may contain any characters except for the quote that matched or >)and finallyanother quote which must be the same as the matching quote we also had to use minimal matching+?for the filenameto ensure that the match doesn' extend beyond the first matching closing quote this means that filename such as " ' herepngwill match correctly note also that to refer to the matching quote inside the character class we had to use numbered backreference\ instead of (? =quote)since only numbered backreferences work inside character classes the second alternative matches an unquoted filename-- string of characters that don' include quotesspacesor due to the alternationthe filename is captured in "qimage(capture number or in "uimage(capture number since (? 
=quote) matches but doesn't capture), so we must check for both.

The final piece of regex syntax that Python's regular expression engine offers is a means of setting the flags. Usually the flags are set by passing them as additional parameters when calling the re.compile() function, but sometimes it is more convenient to set them as part of the regex itself. The syntax is simply (?flags), where flags is one or more of a (the same as passing re.ASCII), i (re.IGNORECASE), m (re.MULTILINE), s (re.DOTALL), and x (re.VERBOSE). (The letters used for the flags are the same as the ones used by Perl's regex engine, which is why s is used for re.DOTALL and x is used for re.VERBOSE.) If the flags are set this way they should be put at the start of the regex; they match nothing, so their effect on the regex is only to set the flags.

The Regular Expression Module

The re module provides two ways of working with regexes. One is to use the module's functions, where each function is given a regex as its first argument. Each function converts the regex into an internal format--a
Regular Expressions

Regular expressions are prepared for use by a process called compiling, and then do their work. This is very convenient for one-off uses, but if we need to use the same regex repeatedly we can avoid the cost of compiling it at each use by compiling it once using the re.compile() function. We can then call methods on the compiled regex object as many times as we like. The compiled regex methods are listed in the regex object methods table below.

    match = re.search(r"#[\dA-Fa-f]{6}\b", text)

This code snippet shows the use of an re module function. The regex matches HTML-style colors (such as #C0C0AB). If a match is found, the re.search() function returns a match object; otherwise, it returns None. The methods provided by match objects are listed in the match object table below. If we were going to use this regex repeatedly, we could compile it once and then use the compiled regex whenever we needed it:

    color_re = re.compile(r"#[\dA-Fa-f]{6}\b")
    match = color_re.search(text)

As we noted earlier, we use raw strings to avoid having to escape backslashes. Another way of writing this regex would be to use the character class [\da-f] and pass the re.IGNORECASE flag as the last argument to the re.compile() call, or to use the regex (?i)#[\da-f]{6}\b, which starts with the ignore case flag embedded. If more than one flag is required, they can be combined using the OR operator (|), for example, re.MULTILINE|re.DOTALL, or (?ms) if embedded in the regex itself.

We will round off this section by reviewing some examples, starting with some of the regexes shown in earlier sections, so as to illustrate the most commonly used functionality that the re module provides. Let's start with a regex to spot duplicate words:

    double_word_re = re.compile(r"\b(?P<word>\w+)\s+(?P=word)(?!\w)",
                                re.IGNORECASE)
    for match in double_word_re.finditer(text):
        print("{0} is duplicated".format(match.group("word")))

The regex is slightly more sophisticated than the version we made earlier. It starts at a word boundary (to ensure that each match starts at the beginning of a word), then greedily matches one or more "word" characters, then one or more whitespace characters, then the same word again, but only if the second occurrence of the word is not followed by a word character.

If the input text was "win in vain", without the first assertion there would be one match and one capture ("in in", with the first "in" taken from inside "win"). There aren't two matches because while (?P<word>\w+) matches and captures, the \s+ and (?P=word) parts only match. The use of the word boundary assertion ensures that the first word matched is a whole word, so we end up with no match or capture, since there is no duplicate whole word. Similarly, if the input text was "one and and two, let's say", without the last assertion there would be two matches and two captures. The use of the lookahead assertion means that the second word matched must be a whole word, so we end up with one match and one capture, "and and".

The for loop iterates over every match object returned by the finditer() method, and we use the match object's group() method to retrieve the captured group's text. We could just as easily (but less maintainably) have used group(1), in which case we need not have named the capture group at all and just used the regex \b(\w+)\s+\1(?!\w). Another point to note is that we could have used a word boundary \b at the end, instead of (?!\w).

Another example we presented earlier was a regex for finding the filenames in HTML image tags. Here is how we would compile the regex, adding flags so that it is not case-sensitive, and allowing us to include comments:

    image_re = re.compile(r"""
        <img\s                  # start of the tag
        [^>]*?                  # non-src attributes
        src=                    # start of the src attribute
        (?:
            (?P<quote>["'])     # opening quote
            (?P<qimage>[^>]+?)  # quoted image filename
            (?P=quote)          # closing quote
        |                       # ---or alternatively---
            (?P<uimage>[^">]+)  # unquoted image filename
        )
        [^>]*                   # non-src attributes
        >                       # end of the tag
        """, re.IGNORECASE | re.VERBOSE)

    image_files = []
    for match in image_re.finditer(text):
        image_files.append(match.group("qimage") or match.group("uimage"))

Again we use the finditer() method to retrieve each match and the match object's group() method to retrieve the captured texts. Each time a match is made we don't know which of the image groups ("qimage" or "uimage") has matched, but using the or operator provides a neat solution for this. Since the case insensitivity applies only to img and src, we could drop the re.IGNORECASE flag and use [Ii][Mm][Gg] and [Ss][Rr][Cc] instead. Although this would make the regex less clear, it might make it faster, since it would not require the text being matched to be set to upper- or lower-case; but it is likely to make a difference only if the regex was being used on a very large amount of text.
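To see these two regexes in action on concrete input, here is a small self-contained demonstration; the sample strings are our own, and the patterns follow the ones just discussed:

```python
import re

# Duplicate-word finder: a word boundary, a captured word, whitespace,
# then the same word again provided it is not followed by a word character.
double_word_re = re.compile(r"\b(?P<word>\w+)\s+(?P=word)(?!\w)",
                            re.IGNORECASE)
duplicates = [m.group("word")
              for m in double_word_re.finditer("one and and two, win in vain")]

# Image-filename extractor: handles quoted and unquoted src attribute values.
image_re = re.compile(r"""
    <img\s                # start of the tag
    [^>]*?                # non-src attributes
    src=                  # start of the src attribute
    (?:
        (?P<quote>["'])       # opening quote
        (?P<qimage>[^>]+?)    # quoted image filename
        (?P=quote)            # matching closing quote
      |
        (?P<uimage>[^"'\s>]+) # unquoted image filename
    )
    [^>]*>                # non-src attributes, end of the tag
    """, re.IGNORECASE | re.VERBOSE)

html = '<IMG alt="logo" src="images/logo.png"> <img src=icon.gif>'
image_files = [m.group("qimage") or m.group("uimage")
               for m in image_re.finditer(html)]
```

Here duplicates ends up holding just "and" (the word boundary and lookahead assertions reject "win in" and "in vain"), and image_files holds both filenames regardless of quoting style.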
Table: The regular expression module's functions

    re.compile(r, f)       Returns a compiled regex with its flags set to f if specified (the flags are described in the flags table below)
    re.escape(s)           Returns string s with all nonalphanumeric characters backslash-escaped; therefore, the returned string has no special regex characters
    re.findall(r, s, f)    Returns all nonoverlapping matches of regex r in string s (influenced by the flags f if given); if the regex has captures, each match is returned as a tuple of captures
    re.finditer(r, s, f)   Returns a match object for each nonoverlapping match of regex r in string s (influenced by the flags f if given)
    re.match(r, s, f)      Returns a match object if the regex matches at the start of string s (influenced by the flags f if given); otherwise, returns None
    re.search(r, s, f)     Returns a match object if the regex matches anywhere in string s (influenced by the flags f if given); otherwise, returns None
    re.split(r, s, m, f)   Returns the list of strings that results from splitting string s on every occurrence of regex r, doing up to m splits (or as many as possible if no m is given; from Python 3.1, also influenced by the flags f if given); if the regex has captures, these are included in the list between the parts they split
    re.sub(r, x, s, m, f)  Returns a copy of string s with every (or up to m, if given; from Python 3.1, also influenced by the flags f if given) match of regex r replaced with x; x can be a string or a function (see text)
    re.subn(r, x, s, m, f) The same as re.sub() except that it returns a 2-tuple of the resultant string and the number of substitutions that were made

Table: The regular expression module's flags

    re.A or re.ASCII       Makes \b, \B, \s, \S, \w, and \W assume that strings are ASCII; the default is for these character class shorthands to depend on the Unicode specification
    re.I or re.IGNORECASE  Makes the regex match case-insensitively
    re.M or re.MULTILINE   Makes ^ match at the start and after each newline, and $ match before each newline and at the end
    re.S or re.DOTALL      Makes . match every character, including newlines
    re.X or re.VERBOSE     Allows whitespace and comments to be included in the regex
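A quick illustration of the two ways of applying the flags in the table above, combined with the OR operator at compile time versus embedded inline in the pattern itself (our own snippet):

```python
import re

text = "first line\nSECOND LINE"

# Flags combined with the OR operator when compiling...
combined = re.compile(r"^\w+ line$", re.IGNORECASE | re.MULTILINE)

# ...behave the same as the equivalent inline flags inside the pattern.
embedded = re.compile(r"(?im)^\w+ line$")

matches = combined.findall(text)
same = matches == embedded.findall(text)
```

Both compiled regexes find the same two lines, so matches holds ["first line", "SECOND LINE"] and same is True.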
Table: Regular expression object methods and attributes

    rx.findall(s, start, end)   Returns all nonoverlapping matches of the regex in string s (or in the start:end slice of s); if the regex has captures, each match is returned as a tuple of captures
    rx.finditer(s, start, end)  Returns a match object for each nonoverlapping match in string s (or in the start:end slice of s)
    rx.flags                    The flags that were set when the regex was compiled
    rx.groupindex               A dictionary whose keys are capture group names and whose values are group numbers; empty if no names are used
    rx.match(s, start, end)     Returns a match object if the regex matches at the start of string s (or at the start of the start:end slice of s); otherwise, returns None
    rx.pattern                  The string from which the regex was compiled
    rx.search(s, start, end)    Returns a match object if the regex matches anywhere in string s (or in the start:end slice of s); otherwise, returns None
    rx.split(s, m)              Returns the list of strings that results from splitting string s on every occurrence of the regex, doing up to m splits (or as many as possible if no m is given); if the regex has captures, these are included in the list between the parts they split
    rx.sub(x, s, m)             Returns a copy of string s with every (or up to m, if given) match replaced with x; x can be a string or a function (see text)
    rx.subn(x, s, m)            The same as rx.sub() except that it returns a 2-tuple of the resultant string and the number of substitutions that were made

One common task is to take an HTML text and output just the plain text that it contains. Naturally we could do this using one of Python's parsers, but a simple tool can be created using regexes. There are three tasks that need to be done: delete any tags, replace entities with the characters they represent, and insert blank lines to separate paragraphs. Here is a function (taken from the html2text.py program) that does the job:

    def html2text(html_text):
        def char_from_entity(match):
            code = html.entities.name2codepoint.get(match.group(1), 0xFFFD)
            return chr(code)

        text = re.sub(r"<!--(?:.|\n)*?-->", "", html_text)              # 1
        text = re.sub(r"<[Pp][^>]*?>", "\n\n", text)                    # 2
        text = re.sub(r"<[^>]*?>", "", text)                            # 3
        text = re.sub(r"&#(\d+);", lambda m: chr(int(m.group(1))), text)
        text = re.sub(r"&([A-Za-z]+);", char_from_entity, text)         # 4
        text = re.sub(r"\n(?:[ \xA0\t]+\n)+", "\n", text)               # 5
        return re.sub(r"\n\n+", "\n\n", text.strip())                   # 6

The first regex, <!--(?:.|\n)*?-->, matches HTML comments, including those with other HTML tags nested inside them. The re.sub() function replaces as many matches as it finds with the replacement, deleting the matches if the replacement is an empty string, as it is here. (We can specify a maximum number of matches by giving an additional integer argument at the end.) We are careful to use nongreedy (minimal) matching to ensure that we delete one comment for each match; if we did not do this we would delete from the start of the first comment to the end of the last comment. In Python 3.0 the re.sub() function does not accept any flags as arguments, and since . means "any character except newline", we must look for . or \n, and we must look for these using alternation rather than a character class, since inside a character class . has its literal meaning, that is, period. An alternative would be to begin the regex with the flag embedded, for example, (?s), or we could compile a regex object with the re.DOTALL flag, in which case the regex would simply be <!--.*?-->. From Python 3.1, re.split(), re.sub(), and re.subn() can all accept a flags argument, so we could simply use <!--.*?--> and pass the re.DOTALL flag.

The second regex, <[Pp][^>]*?>, matches opening paragraph tags, including any attributes they may have. It matches the opening <p (or <P), then any attributes (using nongreedy matching), and finally the closing >. The second call to the re.sub() function uses this regex to replace opening paragraph tags with two newline characters (the standard way to delimit a paragraph in a plain text file). The third regex, <[^>]*?>, matches any tag and is used in the third re.sub() call to delete all the remaining tags.

HTML entities are a way of specifying non-ASCII characters using ASCII characters. They come in two forms: &name;, where name is the name of the character, for example, &copy; for the copyright symbol; and &#digits;, where digits are decimal digits identifying the Unicode code point. The fourth call to re.sub() uses the regex &#(\d+);, which matches the digits form and captures the digits into capture group 1. Instead of literal replacement text we have passed a lambda function. When a function is passed to re.sub(), it calls the function once for each time it matches, passing the match object as the function's sole argument. Inside the lambda function we retrieve the digits (as a string), convert them to an integer using the built-in int() function, and then use the built-in chr() function to obtain the Unicode character for the given code point. The function's return value (or in the case of a lambda expression, the result of the expression) is used as the replacement text.

The fifth re.sub() call uses the regex &([A-Za-z]+); to capture named entities. The standard library's html.entities module contains dictionaries of entities, including name2codepoint, whose keys are entity names and whose values are integer code points. The re.sub() function calls the local char_from_entity() function every time it has a match. The char_from_entity() function uses dict.get() with a default argument of 0xFFFD (the code point of the standard Unicode replacement character, often depicted as a question mark inside a diamond). This ensures that a code point is always retrieved, and it is used with the chr() function to return a suitable character to replace the named entity with, using the Unicode replacement character if the entity name is invalid.

The sixth re.sub() call's regex, \n(?:[ \xA0\t]+\n)+, is used to delete lines that contain only whitespace. The character class we have used contains a space, a nonbreaking space (which &nbsp; entities are replaced with in the preceding regex), and a tab. The regex matches a newline (the one at the end of the line that precedes one or more whitespace-only lines), then at least one (and as many as possible) lines that contain only whitespace. Since the match includes the newline from the line preceding the whitespace-only lines, we must replace the match with a single newline; otherwise, we would delete not just the whitespace-only lines but also the newline of the line that preceded them.

The result of the seventh and last re.sub() call is returned to the caller. This regex, \n\n+, is used to replace sequences of two or more newlines with exactly two newlines, that is, to ensure that each paragraph is separated by just one blank line.

In the HTML example none of the replacements were directly taken from the match (although HTML entity names and
numbers were used), but in some situations the replacement might need to include all or some of the matching text. For example, if we have a list of names, each of the form Forename Middlename1 ... MiddlenameN Surname, where there may be any number of middle names (including none), and we want to produce a new version of the list with each item of the form Surname, Forename Middlename1 ... MiddlenameN, we can easily do so using a regex:

    new_names = []
    for name in names:
        name = re.sub(r"(\w+(?:\s+\w+)*)\s+(\w+)", r"\2, \1", name)
        new_names.append(name)

The first part of the regex, (\w+(?:\s+\w+)*), matches the forename with the first \w+ expression and zero or more middle names with the (?:\s+\w+)* expression. The middle name expression matches zero or more occurrences of whitespace followed by a word. The second part of the regex, \s+(\w+), matches the whitespace that follows the forename (and middle names) and the surname.

If the regex looks a bit too much like line noise, we can use named capture groups to improve legibility and make it more maintainable:

    name = re.sub(r"(?P<forenames>\w+(?:\s+\w+)*)"
                  r"\s+(?P<surname>\w+)",
                  r"\g<surname>, \g<forenames>", name)

Captured text can be referred to in a sub() or subn() function or method by using the syntax \i or \g<id>, where i is the number of the capture group and id is the name or number of the capture group; so \1 is the same as \g<1>, and in this example, the same as \g<forenames>. This syntax can also be used in the string passed to a match object's expand() method.

Why doesn't the first part of the regex grab the entire name? After all, it is using greedy matching. In fact it will, but then the match will fail, because although the middle names part can match zero or more times, the surname part must match exactly once, and the greedy middle names part has grabbed everything. Having failed, the regular expression engine then backtracks, giving up the last "middle name" and thus allowing the surname to match. Although greedy matches match as much as possible, they stop if matching more would make the match fail.

For example, if the name is "John le Carre", the regex will first match the entire name, that is, with the forenames part matching "John le Carre". This satisfies the first part of the regex but leaves nothing for the surname part to match, and since the surname is mandatory (it has an implicit quantifier of exactly once), the regex has failed. Since the middle names part is quantified by *, it can match zero or more times (currently it is matching twice, "le" and "Carre"), so the regular expression engine can make it give up some of its match without causing it to fail. Therefore, the regex backtracks, giving up the last \s+\w+ ("Carre"), so the match becomes "John le" for the forenames and "Carre" for the surname, with the match satisfying the whole regex and with the two match groups containing the correct texts.
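A runnable version of this name-reordering substitution, using sample names of our own, shows the backtracking behavior in practice:

```python
import re

# The greedy forenames group initially grabs the whole name, then
# backtracks just enough to leave one word for the mandatory surname group.
name_re = re.compile(r"(\w+(?:\s+\w+)*)\s+(\w+)")

names = ["Fredrica Mendoza", "John le Carre", "Hans Christian Andersen"]
new_names = [name_re.sub(r"\2, \1", name) for name in names]
```

The result is ["Mendoza, Fredrica", "Carre, John le", "Andersen, Hans Christian"]; in each case exactly one trailing word is given up to the surname group.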
There's one weakness in the regex as written: it doesn't cope correctly with forenames that are written using an initial, such as "James W. Loewen" or "J. R. R. Tolkein". This is because \w matches word characters, and these don't include a period. One obvious, but incorrect, solution is to change the forenames part of the regex's \w+ expressions to [\w.]+ in both places where they occur. A period in a character class is taken to be a literal period, and character class shorthands retain their meanings inside character classes, so the new expression matches word characters or periods. But this would allow for "names" made up entirely of periods, or containing runs of periods, and so on. In view of this, a more subtle approach is required.
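One such subtler pattern allows at most one optional period after each forename part, so initials like "J." match without accepting bare runs of periods. A quick demonstration, with a sample name of our own:

```python
import re

# Each forename part is one or more word characters with at most one
# trailing period, so "J." is accepted but "...." is not.
name_re = re.compile(r"(?P<forenames>\w+\.?(?:\s+\w+\.?)*)"
                     r"\s+(?P<surname>\w+)")

name = name_re.sub(r"\g<surname>, \g<forenames>", "J. R. R. Tolkein")
```

Here name becomes "Tolkein, J. R. R.": the forenames group keeps the initials (periods included) and backtracks to leave "Tolkein" for the surname group.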
Table: Match object attributes and methods

    m.end(g)         Returns the end position of the match in the text for group g if given (or for group 0, the whole match); returns -1 if the group did not participate in the match
    m.endpos         The search's end position (the end of the text, or the end given to match() or search())
    m.expand(s)      Returns string s with capture markers (\1, \2, \g<name>, and similar) replaced by the corresponding captures
    m.group(g, ...)  Returns the numbered or named capture group g; if more than one is given, a tuple of the corresponding capture groups is returned (the whole match is group 0)
    m.groupdict(default)  Returns a dictionary of all the named capture groups with the names as keys and the captures as values; if a default is given, this is the value used for capture groups that did not participate in the match
    m.groups(default)     Returns a tuple of all the capture groups starting from 1; if a default is given, this is the value used for capture groups that did not participate in the match
    m.lastgroup      The name of the highest-numbered capturing group that matched, or None if there isn't one or if no names are used
    m.lastindex      The number of the highest capturing group that matched, or None if there isn't one
    m.pos            The start position to look from (the start of the text, or the start given to match() or search())
    m.re             The regex object which produced this match object
    m.span(g)        Returns the start and end positions of the match in the text for group g if given (or for group 0, the whole match); returns (-1, -1) if the group did not participate in the match
    m.start(g)       Returns the start position of the match in the text for group g if given (or for group 0, the whole match); returns -1 if the group did not participate in the match
    m.string         The string that was passed to match() or search()

    name = re.sub(r"(?P<forenames>\w+\.?(?:\s+\w+\.?)*)"
                  r"\s+(?P<surname>\w+)",
                  r"\g<surname>, \g<forenames>", name)

Here we have changed the forenames part of the regex (the first line). The first part of the forenames regex matches one or more word characters optionally followed by a period. The second part matches at least one whitespace character, then one or more word characters optionally followed by a period, with the whole of this second part itself matching zero or more times.

When we use alternation (|) with two or more alternatives capturing, we don't know which alternative matched, so we don't know which capture group to retrieve the captured text from. We can of course iterate over all the groups to find the nonempty one, but quite often in this situation the match object's lastindex attribute can give us the number of the group we want. We will look at one last example to illustrate this and to give us a little bit more regex practice.

Suppose we want to find out what encoding an HTML, XML, or Python file is using. We could open the file in binary mode and read, say, the first 1000 bytes into a bytes object. We could then close the file, look for an encoding in the bytes, and reopen the file in text mode using the encoding we found, or using a fallback encoding (such as UTF-8). The regex engine expects regexes to be supplied as strings, but the text the regex is applied to can be a str, bytes, or bytearray object, and when bytes or bytearray objects are used, all the functions and methods return bytes instead of strings, and the re.ASCII flag is implicitly switched on.

For HTML files the encoding is normally specified in a meta tag (if it is specified at all), for example, <meta http-equiv='Content-Type' content='text/html; charset=ISO-8859-1'/>. XML files are UTF-8 by default, but this can be overridden in the XML declaration, for example, <?xml version='1.0' encoding='ISO-8859-1'?>. Python files are also UTF-8 by default, but again this can be overridden by including a line such as # encoding: latin1 or # -*- coding: latin1 -*- immediately after the shebang line. Here is how we would find the encoding, assuming that the variable binary is a bytes object containing the first bytes of an HTML, XML, or Python file:

    match = re.search(r"""(?<![-\w])                   # 1
                          (?:(?:en)?coding|charset)    # 2
                          (?:=(["'])?([-\w]+)(?(1)\1)  # 3
                          |:\s*([-\w]+))""".encode("utf8"),
                      binary, re.IGNORECASE | re.VERBOSE)
    encoding = match.group(match.lastindex) if match else "utf8"

To search a bytes object we must specify a pattern
that is also a bytes object. In this case we want the convenience of using a raw string, so we use one and convert it to a bytes object as the re.search() function's first argument.

The first part of the regex is a lookbehind assertion that says that the match cannot be preceded by a hyphen or a word character. The second part matches "encoding", "coding", or "charset", and could have been written as (?:encoding|coding|charset). We have made the third part span two lines to emphasize the fact that it has two alternating parts, =(["'])?([-\w]+)(?(1)\1) and :\s*([-\w]+), only one of which can match. The first of these matches an equals sign followed by one or more word or hyphen characters, optionally enclosed in matching quotes using a conditional match; the second matches a colon and then optional whitespace followed by one or more word or hyphen characters. (Recall that a hyphen inside a character class is taken to be a literal hyphen if it is the first character; otherwise, it means a range of characters, for example, [0-9].)

We have used the re.IGNORECASE flag to avoid having to write (?:(?:[Ee][Nn])?[Cc][Oo][Dd][Ii][Nn][Gg]|[Cc][Hh][Aa][Rr][Ss][Ee][Tt]), and we have used the re.VERBOSE flag so that we can lay out the regex neatly and include comments (in this case just numbers to make the parts easy to refer to in this text).

There are three capturing match groups, all in the third part: (["']), which captures the optional opening quote; ([-\w]+), which captures an encoding that follows an equals sign; and the second ([-\w]+) (on the following line), which captures an encoding that follows a colon. We are only interested in the encoding, so we want to retrieve either the second or third capture group, only one of which can match, since they are alternatives. The lastindex attribute holds the index of the last matching capture group (either 2 or 3 when a match occurs in this example), so we retrieve whichever matched, or use a default encoding if no match was made.

We have now seen all of the most frequently used re module functionality in action, so we will conclude this section by mentioning one last function. The re.split() function (or the regex object's split() method) can split strings based on a regex. One common requirement is to split a text on whitespace to get a list of words. This can be done using re.split(r"\s+", text), which returns a list of words (or more precisely, a list of strings, each delimited by whitespace that matches \s+).

Regular expressions are very powerful and useful, and once they are learned, it is easy to see all text problems as requiring a regex solution. But sometimes using string methods is both
sufficient and more appropriate. For example, we can just as easily split on whitespace by using text.split(), since the str.split() method's default behavior (or with a first argument of None) is to split on whitespace.

Summary

Regular expressions offer a powerful way of searching texts for strings that match a particular pattern, and for replacing such strings with other strings, which themselves can depend on what was matched. In this chapter we saw that most characters are matched literally and are implicitly quantified to match exactly once. We also learned how to specify character classes (sets of characters to match) and how to negate such sets and include
9,040 | regular expressions ranges of characters in them without having to write each character individually we learned how to quantify expressions to match specific number of times or to match from given minimum to given maximum number of timesand how to use greedy and nongreedy matching we also learned how to group one or more expressions together so that they can be quantified (and optionally capturedas unit the also showed how what is matched can be affected by using various assertionssuch as positive and negative lookahead and lookbehindand by various flagsfor exampleto control the interpretation of the period and whether to use case-insensitive matching the final section showed how to put regexes to use within the context of python programs in this section we learned how to use the functions provided by the re moduleand the methods available from compiled regexes and from match objects we also learned how to replace matches with literal stringswith literal strings that contain backreferencesand with the results of function calls or lambda expressionsand how to make regexes more maintainable by using named captures and comments exercises || in many contexts ( in some web forms)users must enter phone numberand some of these irritate users by accepting only specific format write program that reads phone numbers with the three-digit area and seven-digit local codes accepted as ten digitsor separated into blocks using hyphens or spacesand with the area code optionally enclosed in parentheses for exampleall of these are valid( ( and read the phone numbers from sys stdin and for each one echo the number in the form "( or report an error for any that are invalidor that don' have exactly ten digits the regex to match these phone numbers is about ten lines long (in verbose modeand is quite straightforward solution is provided in phone pywhich is about twenty-five lines long write small program that reads an xml or html file specified on the command line and for each 
tag that has attributesoutputs the name of the tag with its attributes shown underneath for examplehere is an extract from the program' output when given one of the python documentation' index html fileshtml xmlns |
9,041 | meta http-equiv content-type content text/htmlcharset=utf- li class right style margin-right px one approach is to use two regexesone to capture tags with their attributes and another to extract the name and value of each attribute attribute values might be quoted using single or double quotes (in which case they may contain whitespace and the quotes that are not used to enclose them)or they may be unquoted (in which case they cannot contain whitespace or quotesit is probably easiest to start by creating regex to handle quoted and unquoted values separatelyand then merging the two regexes into single regex to cover both cases it is best to use named groups to make the regex more readable this is not easyespecially since backreferences cannot be used inside character classes solution is provided in extract_tags pywhich is less than lines long the tag and attributes regex is just one line the attribute name-value regex is half dozen lines and uses alternationconditional matching (twicewith one nested inside the other)and both greedy and nongreedy quantifiers |
9,042 | bnf syntax and parsing terminology writing handcrafted parsers pythonic parsing with pyparsing lex/yacc-style parsing with ply introduction to parsing |||parsing is fundamental activity in many programsand for all but the most trivial casesit is challenging topic parsing is often done when we need to read data that is stored in custom format so that we can process it or perform queries on it or we may be required to parse dsl (domain-specific language)--these are mini task-specific languages that appear to be growing in popularity whether we need to read data in custom format or code written using dslwe will need to create suitable parser this can be done by handcraftingor by using one of python' generic parsing modules python can be used to write parsers using any of the standard computer science techniquesusing regexesusing finite state automatausing recursive descent parsersand so on all of these approaches can work quite wellbut for data or dsls that are complex--for examplerecursively structured and featuring operators that have different precedences and associativities--they can be challenging to get right alsoif we need to parse many different data formats or dslshandcrafting each parser can be time-consuming and tedious to maintain fortunatelyfor some data formatswe don' have to write parser at all for examplewhen it comes to parsing xmlpython' standard library comes with domsaxand element tree parserswith other xml parsers available as third-party add-ons file formats dynamic code execution in factpython has built-in support for reading and writing wide range of data formatsincluding delimiter-separated data with the csv modulewindows-style ini files with the configparser modulejson data with the json moduleand also few othersas mentioned in python does not provide any built-in support for parsing other languagesalthough it does provide the shlex module which can be used to create lexer for unix shelllike mini-languages (dsls)and the tokenize 
module that provides lexer for python source code and of coursepython can execute python code using the built-in eval(and exec(functions |
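As a minimal sketch of that last point, dynamic code execution with exec() and eval() can be confined to an explicit namespace dictionary (the function name used here is our own example):

```python
# exec() executes arbitrary statements (here, a function definition);
# eval() evaluates a single expression and returns its value.
# Passing an explicit dictionary keeps the executed code's names contained.
namespace = {}
exec("def square(x):\n    return x * x", namespace)
result = eval("square(7)", namespace)
```

After this runs, result holds 49 and square exists only in the namespace dictionary, not in the module's own global scope.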
9,043 | introduction to parsing in generalif python already has suitable parser in the standard libraryor as third-party add-onit is usually best to use it rather than to write our own when it comes to parsing data formats or dsls for which no parser is availablerather than handcrafting parserwe can use one of python' third-party general-purpose parsing modules in this we will introduce two of the most popular third-party parsers one of these is paul mcguire' pyparsing modulewhich takes unique and very pythonic approach the other is david beazley' ply (python lex yacc)which is closely modeled on the classic unix lex and yacc toolsand that makes extensive use of regexes many other parsers are availablewith many listed at www dabeaz com/ply (at the bottom of the page)and of coursein the python package indexpypi python org/pypi this first section provides brief introduction to the standard bnf (backus-naur formsyntax used to describe the grammars of data formats and dsls in that section we will also explain the basic terminology the remaining sections all cover parsing itselfwith the second section covering handcrafted parsersusing regexesand using recursive descentas natural follow-on from the regular expressions the third section introduces the pyparsing module the initial examples are the same as those for which handcrafted parsers are created in the second section--this is to help learn the pyparsing approachand also to provide the opportunity to compare and contrast the section' last example has more ambitious grammar and is new in this section the last section introduces the ply moduleand shows the same examples we used in the pyparsing sectionagain for ease of learning and to provide basis for comparison note that with one exceptionthe handcrafted parsers section is where each data format and dsl is describedits bnf givenand an example of the data or dsl shownwith the other sections providing backreferences to these where appropriate the exception is the 
first-order logic parser whose details are given in the pyparsing sectionwith corresponding backreferences in the ply section bnf syntax and parsing terminology ||parsing is means of transforming data that is in some structured format--whether the data represents actual dataor statements in programming languageor some mixture of both--into representation that reflects the data' structure and that can be used to infer the meaning that the data represents the parsing process is most often done in two phaseslexing (also called lexical analysistokenizingor scanning)and parsing proper (also called syntactic analysis |
9,044 | for examplegiven sentence in the english languagesuch as "the dog barked"we might transform the sentence into sequence of (part-of-speechword -tuples((definite_article"the")(noun"dog")(verb"barked")we would then perform syntactic analysis to see if this is valid english sentence in this case it isbut our parser would have to rejectsay"the barked dogthe lexing phase is used to convert the data into stream of tokens in typical caseseach token holds at least two pieces of informationthe token' type (the kind of data or language construct being represented)and the token' value (which may be empty if the type stands for itself--for examplea keyword in programming languagethe parsing phase is where parser reads each token and performs some semantic action the parser operates according to predefined set of grammar rules that define the syntax that the data is expected to follow (if the data doesn' follow the syntax rules the parser will correctly fail in multiphase parsersthe semantic action consists of building up an internal representation of the input in memory (called an abstract syntax tree--ast)which serves as input to the next phase once the ast has been constructedit can be traversedfor exampleto query the dataor to write the data out in different formator to perform computations that correspond to the meanings encoded in the data data formats and dsls (and programming languages generallycan be described using grammar-- set of syntax rules that define what is valid syntax for the data or language of coursejust because statement is syntactically valid doesn' mean that it makes sense--for example"the cat ate democracyis syntactically valid englishbut meaningless nonethelessbeing able to define the grammar is very usefulso much so that there is commonly used syntax for describing grammars--bnf (backus-naur formcreating bnf is the first step to creating parserand although not formally necessaryfor all but the most trivial grammars it should be considered 
essential here we will describe very simple subset of bnf syntax that is sufficient for our needs in bnf there are two kinds of itemterminals and nonterminals terminal is an item which is in its final formfor examplea literal number or string nonterminal is an item that is defined in terms of zero or more other items (which themselves may be terminals or nonterminalsevery nonterminal must ultimately be defined in terms of zero or more terminals figure shows an example bnf that defines the syntax of file of "attributes"to put things into perspective in practiceparsing english and other natural languages is very difficult problemseefor examplethe natural language toolkit (www nltk orgfor more information |
ATTRIBUTE_FILE ::= (ATTRIBUTE '\n')+
ATTRIBUTE     ::= NAME '=' VALUE
NAME          ::= [a-zA-Z]\w*
VALUE         ::= 'true' | 'false' | \d+ | [a-zA-Z]\w*

Figure: BNF for a file of attributes

The ::= symbol means is defined as. Nonterminals are written in uppercase italics (e.g., VALUE). Terminals are either literal strings enclosed in quotes (such as '=' and 'true') or regular expressions (such as \d+). The definitions (on the right of the ::=) are made up of one or more terminals or nonterminals; these must be encountered in the sequence given to meet the definition. However, the vertical bar (|) is used to indicate alternatives, so instead of matching in sequence, matching any one of the alternatives is sufficient to meet the definition. Terminals and nonterminals can be quantified with ? (zero or one, i.e., optional), + (one or more), or * (zero or more); without an explicit quantifier they are quantified to match exactly once. Parentheses can be used for grouping two or more terminals or nonterminals that we want to treat as a unit, for example, to group alternatives or for quantification.

A BNF always has a "start symbol": the nonterminal that must be matched by the entire input. We have adopted the convention that the first nonterminal is always the start symbol. In this example there are four nonterminals: ATTRIBUTE_FILE (the start symbol), ATTRIBUTE, NAME, and VALUE. An ATTRIBUTE_FILE is defined as one or more of an ATTRIBUTE followed by a newline. An ATTRIBUTE is defined as a NAME followed by a literal = (a terminal), followed by a VALUE. Since both the NAME and VALUE parts are nonterminals, they must themselves be defined. The NAME is defined by a regular expression (a terminal). The VALUE is defined by any of four alternatives: two literals and two regular expressions (all of which are terminals). Since all the nonterminals are defined in terms of terminals (or in terms of nonterminals which themselves are ultimately defined in terms of terminals), the BNF is complete.

There is generally more than one way to write a BNF. The figure below shows an alternative version of the ATTRIBUTE_FILE BNF.

ATTRIBUTE_FILE ::= ATTRIBUTE+
ATTRIBUTE     ::= NAME '=' VALUE '\n'
NAME          ::= [a-zA-Z]\w*
VALUE         ::= 'true' | 'false' | \d+ | NAME

Figure: An alternative BNF for a file of attributes
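To make the grammar concrete, here is a small hand-rolled matcher for the (first) attribute grammar above. This is our own illustrative sketch, not code from the book's examples; the names NAME_RE, VALUE_RE, and match_attribute() are assumptions for the purposes of the demonstration.

```python
import re

# One regex per terminal in the attribute grammar; note that NAME and the
# last VALUE alternative happen to share the same pattern.
NAME_RE = re.compile(r"[a-zA-Z]\w*")
VALUE_RE = re.compile(r"true|false|\d+|[a-zA-Z]\w*")

def match_attribute(line):
    """Return (name, value) if line matches ATTRIBUTE, else None."""
    name = NAME_RE.match(line)
    if name is None:
        return None
    rest = line[name.end():]
    if not rest.startswith("="):          # the literal '=' terminal
        return None
    value = VALUE_RE.fullmatch(rest[1:])  # VALUE must consume the rest
    if value is None:
        return None
    return name.group(), value.group()

print(match_attribute("depth=37"))    # ('depth', '37')
print(match_attribute("depth = 37"))  # None: spaces are not in the grammar
```

Notice that the matcher rejects whitespace around the = sign, exactly as the grammar does; making it accept such lines would require the same kind of grammar change discussed in the text.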
Here we have moved the newline to the end of the ATTRIBUTE nonterminal, thus simplifying the definition of ATTRIBUTE_FILE. We have also reused the NAME nonterminal in the VALUE, although this is a dubious change since it is mere coincidence that they can both match the same regex. This version of the BNF should match exactly the same text as the first one.

Once we have a BNF we can "test" it mentally or on paper. For example, given the text "depth = 37\n", we can work through the BNF to see if the text matches, starting with the first nonterminal, ATTRIBUTE_FILE. This nonterminal begins by matching another nonterminal, ATTRIBUTE. And the ATTRIBUTE nonterminal begins by matching yet another nonterminal, NAME, which in turn must match the terminal regex [a-zA-Z]\w*. The regex does indeed match the beginning of the text, matching "depth". The next thing that ATTRIBUTE must match is a terminal, the literal =, and here the match fails because "depth" is followed by a space. At this point the parser should report that the given text does not match the grammar. In this particular case we must either fix the data by eliminating the space before and after the =, or opt to change the grammar, for example, changing the (first) definition of ATTRIBUTE to NAME \s* '=' \s* VALUE. After doing a few paper tests and refining the grammar like this we should have a much clearer idea of what our BNF will and won't match.

A BNF must be complete to be valid, but a valid BNF is not necessarily correct. One problem is ambiguity: in the example shown here the literal value true matches the VALUE nonterminal's first alternative ('true'), and also its last alternative ([a-zA-Z]\w*). This doesn't stop the BNF from being valid, but it is something that a parser implementing the BNF must account for. And as we will see later, BNFs can become quite tricky, since sometimes we define things in terms of themselves; this can be another source of ambiguity, and can result in unparseable grammars.

Precedence and associativity are used to decide the order in which operators should be applied in expressions that don't have parentheses. Precedence is used when there are different operators, and associativity is used when the operators are the same. For an example of precedence, the Python expression 2 + 3 * 4 evaluates to 14. This means that * has higher precedence in Python than +, because the expression behaved as if it were written 2 + (3 * 4). Another way of saying this is "in Python, * binds more tightly than +".

For an example of associativity, the expression 16 / 4 / 2 evaluates to 2.0. This means that / is left-associative, that is, when an expression contains two or more /s they will be evaluated from left to right. Here 16 / 4 was evaluated first to produce 4.0, and then 4.0 / 2 to produce 2.0. By contrast, the = operator is right-associative, which is why we can write a = b = 5. When there are two or more =s they are evaluated from right to left, so b = 5 is evaluated first, giving b a value, and then a = b, giving a a value. If = was not right-associative the expression would fail (assuming that b didn't exist before), since it would start by trying to assign the value of a nonexistent variable b to a. Precedence and associativity can sometimes work together. For example, if two different operators have the same precedence (this is commonly the case with + and -), without the use of parentheses their associativities are all that can be used to determine the evaluation order.

Expressing precedence and associativity in BNF can be done by composing factors into terms and terms into expressions. For example, the BNF in the figure below defines the four basic arithmetic operations over integers, as well as parenthesized subexpressions, all with the correct precedences and (left to right) associativities.

INTEGER        ::= \d+
ADD_OPERATOR   ::= '+' | '-'
SCALE_OPERATOR ::= '*' | '/'
EXPRESSION     ::= TERM (ADD_OPERATOR TERM)*
TERM           ::= FACTOR (SCALE_OPERATOR FACTOR)*
FACTOR         ::= '-'? (INTEGER | '(' EXPRESSION ')')

Figure: BNF for arithmetic operations

The precedence relationships are set up by the way we combine expressions, terms, and factors, while the associativities are set up by the structure of each of the EXPRESSION, TERM, and FACTOR nonterminals' definitions. If we need right to left associativity, we can use the following structure:

POWER_EXPRESSION ::= FACTOR ('**' POWER_EXPRESSION)?

The recursive use of POWER_EXPRESSION forces the parser to work right to left.

Dealing with precedence and associativity can be avoided altogether: we can simply insist that the data or DSL uses parentheses to make all the relationships explicit. Although this is easy to do, it isn't doing any favors for the users of our data format or of our DSL, so we prefer to incorporate precedence and associativity where they are appropriate. (Another way to avoid precedence and associativity, and one which doesn't require parentheses, is to use Polish or reverse Polish notation; see wikipedia.org/wiki/Polish_notation.)

There is a lot more to parsing than we have mentioned here; see, for example, the book Parsing Techniques: A Practical Guide, mentioned in the bibliography. Nonetheless, this should be sufficient to get started, although additional reading is recommended for those planning to create complex and sophisticated parsers.
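The arithmetic BNF above maps directly onto a recursive descent evaluator: one function per nonterminal, with EXPRESSION calling TERM and TERM calling FACTOR, which is what gives * and / their higher precedence. The following sketch is our own illustration of this technique (not code from the book's examples); the trivial regex lexer and the error handling are deliberately minimal.

```python
import re

def evaluate(text):
    tokens = re.findall(r"\d+|[-+*/()]", text)  # trivial lexer
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1

    def expression():           # EXPRESSION ::= TERM (ADD_OPERATOR TERM)*
        value = term()
        while peek() in ("+", "-"):
            op = peek(); advance()
            value = value + term() if op == "+" else value - term()
        return value

    def term():                 # TERM ::= FACTOR (SCALE_OPERATOR FACTOR)*
        value = factor()
        while peek() in ("*", "/"):
            op = peek(); advance()
            value = value * factor() if op == "*" else value / factor()
        return value

    def factor():               # FACTOR ::= '-'? (INTEGER | '(' EXPRESSION ')')
        if peek() == "-":
            advance()
            return -factor()
        if peek() == "(":
            advance()
            value = expression()
            advance()           # skip the ')'
            return value
        value = int(peek())
        advance()
        return value

    return expression()

print(evaluate("2+3*4"))     # 14: * binds more tightly than +
print(evaluate("16/4/2"))    # 2.0: / is applied left to right
print(evaluate("-(1+2)*3"))  # -9
```

The left-to-right while loops in expression() and term() produce left associativity; to make an operator right-associative, the function would instead call itself recursively on its right-hand side, exactly as the POWER_EXPRESSION rule does.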
Now that we have a passing familiarity with BNF syntax and with some of the terminology used in parsing, we will write some parsers, starting with ones written by hand.

Writing Handcrafted Parsers

In this section we will develop three handcrafted parsers. The first is little more than an extension of the key-value regex seen in the previous chapter, but shows the infrastructure needed to use such a regex. The second is also regex-based, but is actually a finite state automaton since it has two states. Both the first and second examples are data parsers. The third example is a parser for a DSL and uses recursive descent, since the DSL allows expressions to be nested. In later sections we will develop new versions of these parsers using PyParsing and PLY, and for the DSL in particular we will see how much easier it is to use a generic parser generator than to handcraft a parser.

Simple Key-Value Data Parsing

The book's examples include a program called playlists.py. This program can read a playlist in .m3u (extended Moving Picture Experts Group Audio Layer 3 Uniform Resource Locator) format and output an equivalent playlist in .pls (PLayList) format, or vice versa. In this subsection we will write a parser for the .pls format, and in the following subsection we will write a parser for the .m3u format. Both parsers are handcrafted and both use regexes.

The .pls format is essentially the same as the Windows .ini format, so we ought to use the standard library's configparser module to parse it. However, the .pls format is ideal for creating a first data parser since its simplicity leaves us free to focus on the parsing aspects, so for the sake of example we won't use the configparser module in this case.

We will begin by looking at a tiny extract from a .pls file to get a feel for the data, then we will create a BNF, and then we will create a parser to read the data. The extract is shown in the figure below; we have omitted most of the data, as indicated by the ellipsis. There is only one .ini-style header line, [playlist], with all the other entries in simple key=value format. One unusual aspect is that key names are repeated, but with numbers appended to keep them all unique. Three pieces of data are maintained for each song: the filename (in this example using Windows path separators), the title, and the duration (called "length") in seconds. In this particular example, the first song has a known duration, but the last entry's duration is unknown, which is signified by a negative number.
[playlist]
File1=Blondie\Atomic\01-Atomic.ogg
Title1=Blondie - Atomic
Length1=230
...
File18=Blondie\Atomic\18-I'm Gonna Love You Too.ogg
Title18=Blondie - I'm Gonna Love You Too
Length18=-1
NumberOfEntries=18
Version=2

Figure: An extract from a .pls file

The BNF we have created can handle .pls files, and is actually generic enough to handle similar key-value formats too. The BNF is shown below.

PLS        ::= (LINE '\n')+
LINE       ::= INI_HEADER | KEY_VALUE | COMMENT | BLANK
INI_HEADER ::= '[' [^]]+ ']'
KEY_VALUE  ::= KEY \s* '=' \s* VALUE?
KEY        ::= \w+
VALUE      ::= .+
COMMENT    ::= '#' .*
BLANK      ::= ^$

Figure: BNF for the .pls file format

The BNF defines a PLS as one or more of a LINE followed by a newline. Each LINE can be an INI_HEADER, a KEY_VALUE, a COMMENT, or BLANK. The INI_HEADER is defined to be an open bracket, followed by one or more characters (excluding a close bracket), followed by a close bracket; we will skip these. The KEY_VALUE is subtly different from the ATTRIBUTE in the ATTRIBUTE_FILE example shown in the previous section in that the VALUE is optional; also, here we allow whitespace before and after the =. This means that a line such as "Title2=\n" is valid in this BNF, as well as the ones that we would expect to be valid such as "Length2=126\n". The KEY is a sequence of one or more alphanumeric characters, and the VALUE is any sequence of characters. Comments are Python-style # comments and we will skip them; similarly, blank lines (BLANK) are allowed but will be skipped.

The purpose of our parser is to populate a dictionary with key-value items matching those in the file, but with lowercase keys. The playlists.py program uses the parser to obtain a dictionary of playlist data which it then outputs in the requested format. We won't cover the playlists.py program itself since it isn't relevant to parsing as such, and in any case it can be downloaded from the book's web site.

The parsing is done in a single function that accepts an open file object (file) and a Boolean (lowercase_keys) that has a default value of False. The function uses two regexes and populates a dictionary (key_values) that it returns. We will look at the regexes and then the code that parses the file's lines and that populates the dictionary.

    ini_header = re.compile(r"^\[[^]]+\]$")

Although we want to ignore .ini headers we still need to identify them. The regex makes no allowance for leading or trailing whitespace; this is because we will be stripping whitespace from each line that is read, so there will never be any. The regex itself matches the start of the line, then an open bracket, then one or more characters (but not close brackets), then a close bracket, and finally the end of the line.

    key_value_re = re.compile(r"^(?P<key>\w+)\s*=\s*(?P<value>.*)$")

The key_value_re regex allows for whitespace around the = sign, but we only capture the actual key and value. The value is quantified by * so it can be empty. Also, we use named captures since these are clearer to read and easier to maintain, because they are not affected by new capture groups being added or removed, something that would affect us if we used numbers to identify the capture groups.

    key_values = {}
    for lino, line in enumerate(file, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key_value = key_value_re.match(line)
        if key_value:
            key = key_value.group("key")
            if lowercase_keys:
                key = key.lower()
            key_values[key] = key_value.group("value")
        else:
            if not ini_header.match(line):
                print("Failed to parse line {0}: {1}".format(lino, line))

We process the file's contents line by line, using the built-in enumerate() function to return 2-tuples of the line number (starting from 1, as is traditional when dealing with text files) and the line itself. We strip off whitespace so that we can immediately skip blank lines (and use slightly simpler regexes); we also skip comment lines.

Since we expect most lines to be key=value lines, we always try to match the key_value_re regex first. If this succeeds we extract the key, and lowercase it if necessary; then we add the key and the value to the dictionary. If the line is not a key=value line, we try to match an .ini header, and if we get a match we simply ignore it and continue to the next line; otherwise we report an error. (It would be quite straightforward to create a dictionary whose keys are .ini headers and whose values are dictionaries of the headers' key-values; but if we want to go that far, we really ought to use the configparser module.)

The regexes and the code are quite straightforward, but they are dependent on each other. For example, if we didn't strip whitespace from each line we would have to change the regexes to allow for leading and trailing whitespace. Here we found it more convenient to strip the whitespace, but there may be occasions where we do things the other way round; there is no one single correct approach. At the end (not shown), we simply return the key_values dictionary.

One disadvantage of using a dictionary in this particular case is that every key-value pair is distinct, whereas in fact items with keys that end in the same number (e.g., "Title7", "File7", and "Length7") are logically related. The playlists.py program has a function (songs_from_dictionary(), not shown, but in the book's source code) that reads in a key-value dictionary of the kind returned by the code shown here and returns a list of song tuples, something we will do directly in the next subsection.

Playlist Data Parsing

The playlists.py program mentioned in the previous subsection can read and write .pls format files. In this subsection we will write a parser that can read files in .m3u format and that returns its results in the form of a list of collections.namedtuple() objects, each of which holds a title, a duration in seconds, and a filename. As usual, we will begin by looking at an extract of the data we want to parse, then we will create a suitable BNF, and finally we will create a parser to parse the data.

The data extract is shown in the figure below; we have omitted most of the data, as indicated by the ellipsis. The file must begin with the line #EXTM3U. Each entry occupies two lines. The first line of an entry starts with #EXTINF: and provides the duration in seconds and the title. The second line of an entry has the filename. Just as with the .pls format, a negative duration signifies that the duration is unknown.
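Before turning to the .m3u data, the .pls parsing fragments shown in the previous subsection can be assembled into one self-contained function. This is a sketch of our own; the function name parse_pls and the io.StringIO demonstration data are assumptions, not part of the book's examples.

```python
import io
import re

ini_header = re.compile(r"^\[[^]]+\]$")
key_value_re = re.compile(r"^(?P<key>\w+)\s*=\s*(?P<value>.*)$")

def parse_pls(file, lowercase_keys=False):
    """Parse an open .pls-style key=value file into a dict (a sketch)."""
    key_values = {}
    for lino, line in enumerate(file, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blank lines and comments
        key_value = key_value_re.match(line)
        if key_value:
            key = key_value.group("key")
            if lowercase_keys:
                key = key.lower()
            key_values[key] = key_value.group("value")
        elif not ini_header.match(line):  # ignore [playlist]-style headers
            print("Failed to parse line {0}: {1}".format(lino, line))
    return key_values

demo = io.StringIO("[playlist]\nFile1=Atomic.ogg\nTitle1=Atomic\nLength1=-1\n")
d = parse_pls(demo, lowercase_keys=True)
print(d["title1"], d["length1"])  # Atomic -1
```

The function works on any object that yields lines, so a real file object, an io.StringIO, or even a list of strings can be passed in.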
#EXTM3U
#EXTINF:230,Blondie - Atomic
Blondie\Atomic\01-Atomic.ogg
...
#EXTINF:-1,Blondie - I'm Gonna Love You Too
Blondie\Atomic\18-I'm Gonna Love You Too.ogg

Figure: An extract from an .m3u file

The BNF is shown below. It defines an M3U as the literal text #EXTM3U followed by a newline and then one or more ENTRYs. Each ENTRY consists of an INFO followed by a newline, then a FILENAME followed by a newline. An INFO starts with the literal text #EXTINF:, followed by the duration specified by SECONDS, then a comma, and then the TITLE. The SECONDS is defined as an optional minus sign followed by one or more digits. Both the TITLE and FILENAME are loosely defined as sequences of any characters except newlines.

M3U      ::= '#EXTM3U\n' ENTRY+
ENTRY    ::= INFO '\n' FILENAME '\n'
INFO     ::= '#EXTINF:' SECONDS ',' TITLE
SECONDS  ::= '-'? \d+
TITLE    ::= [^\n]+
FILENAME ::= [^\n]+

Figure: BNF for the .m3u format

Before reviewing the parser itself, we will first look at the named tuple that we will use to store each result:

    Song = collections.namedtuple("Song", "title seconds filename")

This is much more convenient than using a dictionary with keys like "File1", "Title1", and so on, where we have to write code to match up all those keys that end in the same number. We will review the parser's code in four very short parts for ease of explanation.

    if fh.readline() != "#EXTM3U\n":
        print("This is not a .m3u file")
        return []
    songs = []
    INFO_RE = re.compile(r"#EXTINF:(?P<seconds>-?\d+),(?P<title>.+)")
    WANT_INFO, WANT_FILENAME = range(2)
    state = WANT_INFO

The open file object is in the variable fh. If the file doesn't start with the correct text for an .m3u file we output an error message and return an empty list. The Song named tuples will be stored in the songs list. The regex is for matching the BNF's INFO nonterminal. The parser itself is always in one of two states: either WANT_INFO (the start state) or WANT_FILENAME. In the WANT_INFO state the parser tries to get the title and seconds, and in the WANT_FILENAME state the parser creates a new Song and adds it to the songs list.

    for lino, line in enumerate(fh, start=2):
        line = line.strip()
        if not line:
            continue

We iterate over each line in the given open file object in a similar way to what we did for the .pls parser in the previous subsection, only this time we start the line numbers from 2 since we handle line 1 before entering the loop. We strip whitespace and skip blank lines, and do further processing depending on which state we are in.

        if state == WANT_INFO:
            info = INFO_RE.match(line)
            if info:
                title = info.group("title")
                seconds = int(info.group("seconds"))
                state = WANT_FILENAME
            else:
                print("Failed to parse line {0}: {1}".format(lino, line))

If we are expecting an INFO line we attempt to match the INFO_RE regex to extract the title and the number of seconds; then we change the parser's state so that it expects the next line to be the corresponding filename. We don't have to check that the int() conversion works (e.g., by using try ... except), since the text used in the conversion always matches a valid integer because of the regex pattern (-?\d+).

        elif state == WANT_FILENAME:
            songs.append(Song(title, seconds, line))
            title = seconds = None
            state = WANT_INFO

If we are expecting a filename line we simply append a new Song with the previously set title and seconds, and with the current line as the filename. We then restore the parser's state to its start state, ready to parse another song's details. At the end (not shown), we return the songs list to the caller, and thanks to the use of named tuples, each song's attributes can be conveniently accessed by name, for example, songs[0].title.

Keeping track of state using a variable as we have done here works well in many simple cases. But in general this approach is insufficient for dealing with data or DSLs that can contain nested expressions. In the next subsection we will see how to maintain state in the face of nesting.

Parsing the Blocks Domain-Specific Language

The blocks.py program is provided as one of the book's examples. It reads one or more .blk files, specified on the command line, that use a custom text format (Blocks format, a made-up language), and for each one creates an SVG (Scalable Vector Graphics) file with the same name, but with its suffix changed to .svg. While the rendered SVG files could not be accused of being pretty, they provide a good visual representation that makes it easy to see mistakes in the .blk files, as well as showing the potentiality that even a simple DSL can make possible.

[][lightblue: Director][]
//
[][lightgreen: Secretary][]
//
[Minion #1][][Minion #2]

Figure: The hierarchy.blk file

The figure above shows the complete hierarchy.blk file, and the next figure shows how the hierarchy.svg file that the blocks.py program produces is rendered. The Blocks format has essentially two elements: blocks and new row markers. Blocks are enclosed in brackets. Blocks may be empty, in which case they are used as spacers occupying one cell of a notional grid. Blocks may also contain text and optionally a color. New row markers are forward slashes, and they indicate where a new row should begin. In the hierarchy.blk file, two new row markers are used each time, and this is what creates the two blank rows that are visible in the rendered output. The Blocks format also allows blocks to be nested inside one another, simply by including blocks and new row markers inside a block's brackets, after the block's text.
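Returning for a moment to the .m3u parser just described, its fragments can be assembled into one self-contained function. This is a sketch of our own; the function name parse_m3u and the demonstration data are assumptions, not part of the book's examples.

```python
import collections
import io
import re

Song = collections.namedtuple("Song", "title seconds filename")
INFO_RE = re.compile(r"#EXTINF:(?P<seconds>-?\d+),(?P<title>.+)")
WANT_INFO, WANT_FILENAME = range(2)

def parse_m3u(fh):
    """Parse an open .m3u file into a list of Songs (a sketch)."""
    if fh.readline() != "#EXTM3U\n":
        print("This is not a .m3u file")
        return []
    songs = []
    title = seconds = None
    state = WANT_INFO
    for lino, line in enumerate(fh, start=2):
        line = line.strip()
        if not line:
            continue
        if state == WANT_INFO:
            info = INFO_RE.match(line)
            if info:
                title = info.group("title")
                seconds = int(info.group("seconds"))
                state = WANT_FILENAME
            else:
                print("Failed to parse line {0}: {1}".format(lino, line))
        elif state == WANT_FILENAME:
            songs.append(Song(title, seconds, line))
            title = seconds = None
            state = WANT_INFO
    return songs

data = io.StringIO("#EXTM3U\n#EXTINF:230,Blondie - Atomic\nAtomic.ogg\n")
songs = parse_m3u(data)
print(songs[0].title, songs[0].seconds)  # Blondie - Atomic 230
```

The two-state machine is visible at a glance here: each iteration either consumes an INFO line or a filename line, and the state variable decides which.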
Figure: The hierarchy.svg file (rendered image not reproduced)

The figure below shows the complete messagebox.blk file, in which blocks are nested, and the figure after it shows how the messagebox.svg file is rendered.

[#ccccde: MessageBox window
    [lightgray: Frame
        [][white: Message text][]
        /
        /
        [goldenrod: OK button][][#ff0505: Cancel button]
        []
    ]
]

Figure: The messagebox.blk file

Colors can be specified using the names supported by the SVG format, or as hexadecimal values (indicated by a leading #). The Blocks file shown in the figure has one outer block ("MessageBox window"), an inner block ("Frame"), and several blocks and new row markers inside the inner block. The whitespace is used purely to make the structure clearer to human readers; it is ignored by the Blocks format.

Figure: The messagebox.svg file (rendered image not reproduced)
Now that we have seen a couple of Blocks files, we will look at the Blocks BNF, both to more formally understand what constitutes a valid Blocks file and as preparation for parsing this recursive format. The BNF is shown below.

BLOCKS  ::= NODES+
NODES   ::= NEW_ROW* \s* NODE+
NODE    ::= '[' \s* (COLOR ':')? \s* NAME? \s* NODES* \s* ']'
COLOR   ::= '#' [\dA-Fa-f]{6} | [a-zA-Z]\w*
NAME    ::= [^][/]+
NEW_ROW ::= '/'

Figure: BNF for the .blk format

The BNF defines a Blocks file as having one or more NODES. A NODES consists of zero or more NEW_ROWs followed by one or more NODEs. A NODE is a left bracket, followed by an optional COLOR, followed by an optional NAME, followed by zero or more NODES, followed by a right bracket. The COLOR is simply a hash (pound) symbol followed by six hexadecimal digits and a colon, or a sequence of one or more alphanumeric characters that begins with an alphabetic character and is followed by a colon. The NAME is a sequence of any characters excluding brackets and forward slashes. A NEW_ROW is a literal forward slash. As the many occurrences of \s* suggest, whitespace is allowed anywhere between terminals and nonterminals and is of no significance.

The definition of the NODE nonterminal is recursive because it contains the NODES nonterminal, which is itself defined in terms of the NODE nonterminal. Recursive definitions like this are easy to get wrong and can lead to parsers that loop endlessly, so it might be worthwhile doing some paper-based testing to make sure the grammar does terminate, that is, that given valid input the grammar will reach all terminals rather than endlessly looping from one nonterminal to another.

Previously, once we had a BNF, we dived straight into creating a parser and did the processing as we parsed. This isn't practical for recursive grammars because of the potential for elements to be nested. What we will need to do is create a class to represent each block (or new row), one that can hold a list of nested child blocks, which themselves might contain children, and so on. We can then retrieve the parser's results as a list (which will contain lists within lists as necessary to represent nested blocks), and we can convert this list into a tree with an "empty" root block and all the other blocks as its children.

In the case of the hierarchy.blk example, the root block has a list of new rows and of child blocks (including empty blocks), none of which have any children; the hierarchy.blk file was shown earlier. The messagebox.blk example has a root block that has one child block (the "MessageBox window"), which itself has one child block (the "Frame").
An empty block has no name and the color white. The children list contains Blocks and Nones, the latter representing new row markers. Rather than rely on users of the Block class remembering all of these conventions, we have provided some module functions that abstract them away.

    class Block:

        def __init__(self, name, color="white"):
            self.name = name
            self.color = color
            self.children = []

        def has_children(self):
            return bool(self.children)

The Block class is very simple. The has_children() method is provided as a convenience for the BlockOutput.py module. We haven't provided any explicit API for adding children, since clients are expected to work directly with the children list attribute.

    get_root_block = lambda: Block(None, None)
    get_empty_block = lambda: Block("")
    get_new_row = lambda: None
    is_new_row = lambda x: x is None

These four tiny helper functions provide abstractions for the Block class's conventions. They mean that programmers using the Block module don't have to remember the conventions, just the functions, and also give us a little bit of wiggle room should we decide to change the conventions later on.

Now that we have the Block class and supporting functions (all defined in the Block.py module file imported by the blocks.py program that contains the parser), we are ready to write a .blk parser. The parser will create a root block and populate it with children (and children's children, etc.) to represent the parsed .blk file; the result can then be passed to the BlockOutput.save_blocks_as_svg() function.

The parser is a recursive descent parser; this is necessary because the Blocks format can contain nested blocks. The parser consists of a Data class that is initialized with the text of the file to be parsed, keeps track of the current parse position, and provides methods for advancing through the text. In addition, the parser has a group of parse functions that operate on an instance of the Data class, advancing through the data and populating a stack of blocks. Some of these functions call each other recursively, reflecting the recursive nature of the data, which is also reflected in the BNF.
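As a quick check of the conventions just described, the following sketch (our own, not part of blocks.py) repeats the Block class and helper functions from above and then builds a tiny tree by hand, the same way the parser will do automatically:

```python
class Block:
    """The Block class as shown above."""
    def __init__(self, name, color="white"):
        self.name = name
        self.color = color
        self.children = []

    def has_children(self):
        return bool(self.children)

get_root_block = lambda: Block(None, None)
get_empty_block = lambda: Block("")
get_new_row = lambda: None
is_new_row = lambda x: x is None

# Build a simplified messagebox-style tree by hand.
root = get_root_block()
window = Block("MessageBox window", "lightgray")
root.children.append(window)
window.children.append(get_empty_block())  # a spacer cell
window.children.append(get_new_row())      # a new row marker (None)
window.children.append(Block("OK button", "goldenrod"))

print(root.has_children())                       # True
print([is_new_row(c) for c in window.children])  # [False, True, False]
```

Working directly with the children list, as we do here, is exactly what the parse functions will do; the helper functions hide the facts that the root block is Block(None, None), an empty block is Block(""), and a new row is None.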
We will begin by looking at the Data class, then we will see how the class is used and the parsing started, and then we will review each parsing function as we encounter it.

    class Data:

        def __init__(self, text):
            self.text = text
            self.pos = 0
            self.line = 1
            self.column = 1
            self.brackets = 0
            self.stack = [Block.get_root_block()]

The Data class holds the text of the file we are parsing, the position we are up to (self.pos), and the (1-based) line and column this position represents. It also keeps track of the brackets, adding one to the count for every open bracket and subtracting one for every close bracket. The stack is a list of Blocks, initialized with an empty root block. At the end we will return the root block; if the parse was successful this block will have child blocks (which may have their own child blocks, etc.) representing the Blocks data.

        def location(self):
            return "line {0}, column {1}".format(self.line, self.column)

This is a tiny convenience method that returns the current location as a string containing the line and column numbers.

        def advance_by(self, amount):
            for x in range(amount):
                self._advance_by_one()

The parser needs to advance through the text as it parses. For convenience, several advancing methods are provided; this one advances by the given number of characters.

        def _advance_by_one(self):
            self.pos += 1
            if (self.pos < len(self.text) and
                    self.text[self.pos] == "\n"):
                self.line += 1
                self.column = 1
            else:
                self.column += 1

All the advancing methods use this private method to actually advance the parser's position. This means that the code that keeps the line and column numbers up-to-date is kept in one place.

        def advance_to_position(self, position):
            while self.pos < position:
                self._advance_by_one()

This method advances to a given index position in the text, again using the private _advance_by_one() method.

        def advance_up_to(self, characters):
            while (self.pos < len(self.text) and
                   self.text[self.pos] not in characters and
                   self.text[self.pos].isspace()):
                self._advance_by_one()
            if not self.pos < len(self.text):
                return False
            if self.text[self.pos] in characters:
                return True
            raise LexError("expected '{0}' but got '{1}'".format(
                           characters, self.text[self.pos]))

This method advances over whitespace until the character at the current position is one of those in the given string of characters. It differs from the other advance methods in that it can fail (since it might reach a nonwhitespace character that is not one of the expected characters); it returns a Boolean to indicate whether it succeeded.

    class LexError(Exception):
        pass

This exception class is used internally by the parser. We prefer to use a custom exception rather than, say, ValueError, because it makes it easier to distinguish our own exceptions from Python's when debugging.

    data = Data(text)
    try:
        parse(data)
    except LexError as err:
        raise ValueError("Error {{0}}: {0}: {1}".format(
                         data.location(), err))
    return data.stack[0]

The top-level parsing is quite simple. We create an instance of the Data class based on the text we want to parse and then we call the parse() function (which we will see in a moment) to perform the parsing. If an error occurs a custom LexError is raised; we simply convert this to a ValueError to insulate any caller from the internal exceptions. Unusually, the error message contains an escaped str.format() field name; the caller is expected to use this to insert the filename, something we cannot do here because we are only given the file's text, not the filename or file object. At the end we return the root block, which should have children (and their children) representing the parsed blocks.

    def parse(data):
        while data.pos < len(data.text):
            if not data.advance_up_to("[]/"):
                break
            if data.text[data.pos] == "[":
                data.brackets += 1
                parse_block(data)
            elif data.text[data.pos] == "/":
                parse_new_row(data)
            elif data.text[data.pos] == "]":
                data.brackets -= 1
                data.advance_by(1)
            else:
                raise LexError("expecting '[', ']', or '/'; "
                               "but got '{0}'".format(data.text[data.pos]))
        if data.brackets:
            raise LexError("ran out of text when expecting '{0}'".format(
                           "]" if data.brackets > 0 else "["))

This function is the heart of the recursive descent parser. It iterates over the text looking for the start or end of a block or a new row marker. If it reaches the start of a block it increments the brackets count and calls parse_block(); if it reaches a new row marker it calls parse_new_row(); and if it reaches the end of a block it decrements the brackets count and advances to the next character. If any other character is encountered it is an error and is reported accordingly. Similarly, when all the data has been parsed, if the brackets count is not zero the function reports the error.

    def parse_block(data):
        data.advance_by(1)
        nextBlock = data.text.find("[", data.pos)
        endOfBlock = data.text.find("]", data.pos)
        if nextBlock == -1 or endOfBlock < nextBlock:
            parse_block_data(data, endOfBlock)
        else:
            block = parse_block_data(data, nextBlock)
            data.stack.append(block)
            parse(data)
            data.stack.pop()

This function begins by advancing by one character (to skip the start-of-block open bracket). It then looks for the next start of a block and the next end of a block. If there is no following block, or if the next end of block is before the start of another block, then this block does not have any nested blocks, so we can simply call parse_block_data() and give it an end position of the end of this block. If this block does have one or more nested blocks inside it, we parse this block's data up to where its first nested block begins. We then push this block onto the stack of blocks and recursively call the parse() function to parse the nested block (or blocks, and their nested blocks, etc.). And at the end we pop this block off the stack, since all the nesting has been handled by the recursive calls.

    def parse_block_data(data, end):
        color = None
        colon = data.text.find(":", data.pos)
        if -1 < colon < end:
            color = data.text[data.pos:colon]
            data.advance_to_position(colon + 1)
        name = data.text[data.pos:end].strip()
        data.advance_to_position(end)
        if not name and color is None:
            block = Block.get_empty_block()
        else:
            block = Block.Block(name, color)
        data.stack[-1].children.append(block)
        return block

This function is used to parse one block's data, up to the given end point in the text, and to add a corresponding Block object to the stack of blocks. We start by trying to find a color, and if we find one, we advance over it. Next we try to find the block's text (its name), although this can legitimately be empty. If we have a block with no name or color we create an empty block; otherwise we create a Block with the given name and color. Once the block has been created we add it as the last child of the stack of blocks' top block. (Initially the top block is the root block, but if we have nested blocks it could be some other block that has been pushed on top.) At the end we return the block so that it can be pushed onto the stack of blocks, something we do only if the block has other blocks nested inside it.

    def parse_new_row(data):
        data.stack[-1].children.append(Block.get_new_row())
        data.advance_by(1)
This is the easiest of the parsing functions: it simply adds a new row as the last child of the stack's top block, and advances over the new-row character.

This completes the review of the blocks recursive descent parser. The parser does not require a huge amount of code, but it still needs noticeably more lines than the pyparsing version, and more again than the PLY version. And as we will see, using pyparsing or PLY is much easier than handcrafting a recursive descent parser, and they also lead to parsers that are much easier to maintain. The conversion into an SVG file using the BlockOutput.save_blocks_as_svg() function is the same for all the blocks parsers, since they all produce the same root block and children structures. We won't review the function's code since it isn't relevant to parsing as such; it is in the BlockOutput.py module file that comes with the book's examples.

We have now finished reviewing the handcrafted parsers. In the following two sections we will show pyparsing and PLY versions of these parsers. In addition, we will show a parser for a DSL that would need a quite sophisticated recursive descent parser if we did it by hand, and that really shows that as our needs grow, using a generic parser scales much better than a handcrafted solution.

Pythonic Parsing with PyParsing

Writing recursive descent parsers by hand can be quite tricky to get right, and if we need to create many parsers it can soon become tedious both to write them and especially to maintain them. One obvious solution is to use a generic parsing module, and those experienced with BNFs or with the Unix lex and yacc tools will naturally gravitate to similar tools. In the section following this one we cover PLY (Python Lex Yacc), a tool that exemplifies this classic approach. But in this section we will look at a very different kind of parsing tool: pyparsing.

Pyparsing is described by its author, Paul McGuire, as "an alternative approach to creating and executing simple grammars, vs. the traditional lex/yacc approach, or the use of regular expressions". (Although in fact, regexes can be used with pyparsing.) For those used to the traditional approach, pyparsing requires some reorientation in thinking. The payback is the ability to develop parsers that do not require a lot of code, thanks to pyparsing providing many high-level elements that can match common constructs, and which are easy to understand and maintain. Pyparsing is available under an open source license and can be used in both noncommercial and commercial contexts. However, pyparsing is not
9,063 | included in python' standard libraryso it must be downloaded and installed separately--although for linux users it is almost certainly available through the package management system it can be obtained from pyparsing wikispaces com--click the page' download link it comes in the form of an executable installation program for windows and in source form for unix-like systems such as linux and mac os the download page explains how to install it pyparsing is contained in single module filepyparsing_py pyso it can easily be distributed with any program that uses it quick introduction to pyparsing |pyparsing makes no real distinction between lexing and parsing insteadit provides functions and classes to create parser elements--one element for each thing to be matched some parser elements are provided predefined by pyparsingothers can be created by calling pyparsing functions or by instantiating pyparsing classes parser elements can also be created by combining other parser elements together--for exampleconcatenating them with to form sequence of parser elementsor or-ing them with to form set of parser element alternatives ultimatelya pyparsing parser is simply collection of parser elements (which themselves may be made up of parser elementsetc )composed together if we want to process what we parsewe can process the results that pyparsing returnsor we can add parse actions (code snippetsto particular parser elementsor some combination of both pyparsing provides wide range of parser elementsof which we will briefly describe some of the most commonly used the literal(parser element matches the literal text it is givenand caselessliteral(does the same thing but ignores case if we are not interested in some part of the grammar we can use suppress()this matches the literal text (or parser elementit is givenbut does not add it to the results the keyword(element is almost the same as literal(except that it must be followed by nonkeyword character--this prevents match 
where keyword is prefix of something else for examplegiven the data text"filename"literal("file"will match filenamewith the name part left for the next parser element to matchbut keyword("file"won' match at all another important parser element is word(this element is given string that it treats as set of charactersand will match any sequence of any of the given characters for examplegiven the data text"abacus"word("abc"will match abacus if the word(element is given two stringsthe first is taken to contain those characters that are valid for the first character of the match and the second to contain those characters that are valid for the remaining characters this is typically used to match identifiers--for exampleword(alphas |
9,064 | introduction to parsing alphanumsmatches text that starts with an alphabetic character and that is followed by zero or more alphanumeric characters (both alphas and alphanums are predefined strings of characters provided by the pyparsing module less frequently used alternative to word(is charsnotin(this element is given string that it treats as set of charactersand will match all the characters from the current parse position onward until it reaches character from the given set of characters it does not skip whitespace and it will fail if the current parse character is in the given setthat isif there are no characters to accumulate two other alternatives to word(are also used one is skipto()this is similar to charsnotin(except that it skips whitespace and it always succeeds--even if it accumulates nothing (an empty stringthe other is regex(which is used to specify regex to match pyparsing also has various predefined parser elementsincluding restofline that matches any characters from the point the parser has reached until the end of the linepythonstylecomment which matches python-style commentquotedstring that matches string that' enclosed in single or double quotes (with the start and end quotes matching)and many others there are also many helper functions provided to cater for common cases for examplethe delimitedlist(function returns parser element that matches list of items with given delimiterand makehtmltags(returns pair of parser elements to match given html tag' start and endand for the start also matches any attributes the tag may have parsing elements can be quantified in similar way to regexesusing optional()zeroormore()oneormore()and some others if no quantifier is specifiedthe quantity defaults to elements can be grouped using group(and combined using combine()--we'll see what these do further on once we have specified all of our individual parser elements and their quantitieswe can start to combine them to make parser we can specify parser 
elements that must follow each other in sequence by creating new parser element that concatenates two or more existing parser elements together--for exampleif we have parser elements key and value we can create key_value parser element by writing key_value key suppress("="value we can specify parser elements that can match any one of two or more alternatives by creating new parser element that ors two or more existing parser elements together--for exampleif we have parser elements true and false we can create boolean parser element by writing boolean true false notice that for the key_value parser element we did not need to say anything about whitespace around the by defaultpyparsing will accept any amount of whitespace (including nonebetween parser elementsso for examplepyparsing treats the bnf definition key '=value as if it were written \skey \ '=\svalue \ (this default behavior can be switched offof course |
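Since the predefined parser elements map closely onto regular expressions, the behavior described above can be illustrated with nothing but the standard library. The following is a rough sketch of my own (the helper name word and the sample elements key and equals are invented for the illustration), not how pyparsing is actually implemented: Word(alphas, alphanums) corresponds to the regex [A-Za-z][A-Za-z0-9]*, and the implicit whitespace skipping between elements corresponds to a \s* prefix.

```python
import re

# Hypothetical stand-ins, not pyparsing's real implementation: each
# "element" is a compiled regex with an implicit \s* prefix, mimicking
# pyparsing's default whitespace skipping between parser elements.
def word(initial, body=None):
    body = body if body is not None else initial
    return re.compile(r"\s*([" + initial + r"][" + body + r"]*)")

key = word("A-Za-z", "A-Za-z0-9")       # like Word(alphas, alphanums)
equals = re.compile(r"\s*(=)")          # like Literal("=")

text = "  key = value"
m1 = key.match(text)                    # skips "  ", matches "key"
m2 = equals.match(text, m1.end())       # skips " ", matches "="
print(m1.group(1), m2.group(1))         # prints: key =
```

The sketch also shows why leading whitespace never has to be mentioned in the pyparsing grammar itself.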
Note that here, and in the subsections that follow, we import each pyparsing name that we need individually, for example:

    from pyparsing_py import (alphanums, alphas, CharsNotIn,
        Forward, Group, hexnums, OneOrMore, Optional, ParseException,
        ParseSyntaxException, Suppress, Word, ZeroOrMore)

This avoids using the import * syntax, which can pollute our namespace with unwanted names, but at the same time affords us the convenience of writing alphanums and Word() rather than pyparsing_py.alphanums and pyparsing_py.Word(), and so on.

Before we finish this quick introduction to pyparsing and look at the examples in the following subsections, it is worth noting a couple of important ideas relating to how we translate a BNF into a pyparsing parser. Pyparsing has many predefined elements that can match common constructs; we should always use these elements wherever possible to ensure the best possible performance. Also, translating BNFs directly into pyparsing syntax is not always the right approach: pyparsing has certain idiomatic ways of handling particular BNF constructs, and we should always follow these to ensure that our parser runs efficiently. Here we'll very briefly review a few of the predefined elements and idioms.

One common BNF definition is where we have an optional item, for example:

    OPTIONAL_ITEM ::= ITEM | empty

If we translated this directly into pyparsing we would write:

    optional_item = item | Empty()    # WRONG!

This assumes that item is some parser element defined earlier. The Empty() class provides a parser element that can match nothing. Although syntactically correct, this goes against the grain of how pyparsing works. The correct pyparsing idiom is much simpler and involves using a predefined element:

    optional_item = Optional(item)

Some BNF statements involve defining an item in terms of itself. For example, to represent a list of variables (perhaps the arguments to a function), we might have the BNF:

    VAR_LIST ::= VARIABLE | VARIABLE ',' VAR_LIST
    VARIABLE ::= [a-zA-Z]\w*

At first sight we might be tempted to translate this directly into pyparsing syntax:

    variable = Word(alphas, alphanums)
    var_list = variable | variable + Suppress(",") + var_list    # WRONG!

The problem seems to be simply a matter of Python syntax: we can't refer to var_list before we have defined it. Pyparsing offers a solution to this: we can create an "empty" parser element using Forward(), and then later on we can append parser elements, including itself, to it. So now we can try again:

    var_list = Forward()
    var_list << (variable | variable + Suppress(",") + var_list)    # WRONG!

This second version is syntactically valid, but again, it goes against the grain of how pyparsing works, and as part of a larger parser its use could lead to a parser that is very slow, or that simply doesn't work. (Note that we must use parentheses to ensure that the whole right-hand expression is appended and not just the first part, because << has a higher precedence level than |, that is, it binds more tightly than |.) Although its use is not appropriate here, the Forward() class is very useful in other contexts, and we will use it in a couple of the examples in the following subsections.

Instead of using Forward() in situations like this, there are alternative coding patterns that go with the pyparsing grain. Here is the simplest and most literal version:

    var_list = variable + ZeroOrMore(Suppress(",") + variable)

This pattern is ideal for handling binary operators, for example:

    plus_expression = operand + ZeroOrMore(Suppress("+") + operand)

Both of these kinds of usage are so common that pyparsing offers convenience functions that provide suitable parser elements. We will look at the operatorPrecedence() function that is used to create parser elements for unary, binary, and ternary operators in the example in the last of the following subsections. For delimited lists, the convenience function to use is delimitedList(), which we will show now, and which we will use in an example in the following subsections:

    var_list = delimitedList(variable)

The delimitedList() function takes a parser element and an optional delimiter; we didn't need to specify the delimiter in this case because the default is a comma, the delimiter we happen to be using.

So far the discussion has been fairly abstract. In the following four subsections we will create four parsers, each of increasing sophistication, that demonstrate how to make the best use of the pyparsing module. The first three parsers are pyparsing versions of the handcrafted parsers we created in the previous
section; the fourth parser is new and much more complex, and is shown in this section, and in lex/yacc form in the following section.

Simple Key-Value Data Parsing

In the previous section's first subsection we created a handcrafted regex-based key-value parser that was used by the playlists.py program to read .pls files. In this subsection we will create a parser to do the same job, but this time using the pyparsing module. As before, the purpose of our parser is to populate a dictionary with key-value items matching those in the file, but with lowercase keys. An extract from a .pls file, and the corresponding BNF, were shown in the previous section's figures. We will look at the code in three parts: first, the creation of the parser itself; second, a helper function used by the parser; and third, the call to the parser to parse a .pls file. All the code is quoted from the ReadKeyValue.py module file that is imported by the playlists.py program.

    key_values = {}
    left_bracket, right_bracket, equals = map(Suppress, "[]=")
    ini_header = left_bracket + CharsNotIn("]") + right_bracket
    key_value = Word(alphanums) + equals + restOfLine
    key_value.setParseAction(accumulate)
    comment = "#" + restOfLine
    parser = OneOrMore(ini_header | key_value)
    parser.ignore(comment)

For this particular parser, instead of reading the results at the end we will accumulate results as we go, populating the key_values dictionary with each key=value we encounter. The left and right brackets and the equals signs are important elements of the grammar, but are of no interest in themselves. So for each of them we create a Suppress() parser element: this will match the appropriate character, but won't include the character in the results. (We could have written each of them individually, for example, as left_bracket = Suppress("["), and so on, but using the built-in map() function is more convenient.) The definition of the ini_header parser element follows quite
naturally from the bnfa left bracketthen any characters except right bracketand then right bracket we haven' defined parse action for this parser elementso although the parser will match any occurrences that it encountersnothing will be done with themwhich is what we want ply keyvalue parser |
The key_value parser element is the one we are really interested in. This matches a "word" (a sequence of alphanumeric characters), followed by an equals sign, followed by the rest of the line (which may be empty). The restOfLine is a predefined parser element supplied by pyparsing. Since we want to accumulate results as we go, we add a parse action (a function reference) to the key_value parser element; this function will be called for every key=value that is matched. Although pyparsing provides a predefined pythonStyleComment parser element, here we prefer the simpler Literal("#") followed by the rest of the line. (And thanks to pyparsing's smart operator overloading we were able to write the literal as a string, because when we concatenated it with another parser element to produce the comment parser element, pyparsing promoted the "#" to be a Literal("#").) The parser itself is a parser element that matches one or more ini_header or key_value parser elements, and that ignores comment parser elements.

    def accumulate(tokens):
        key, value = tokens
        key = key.lower() if lowercase_keys else key
        key_values[key] = value

This function is called once for each key=value match. The tokens parameter is a tuple of the matched parser elements. In this case we would have expected the tuple to have the key, the equals sign, and the value, but since we used Suppress() on the equals sign we get only the key and the value, which is exactly what we want. The lowercase_keys variable is a Boolean created in an outer scope, and that for .pls files is set to True. (Note that for ease of explanation we have shown this function after the creation of the parser, although in fact it must be defined before we create the parser, since the parser refers to it.)

    try:
        parser.parseFile(file)
    except ParseException as err:
        print("parse error: {0}".format(err))
        return {}
    return key_values

With the parser set up we are ready to call the parseFile() method, which in this example takes the name of a .pls file and attempts to parse it. If the parse fails we output a simple error message based on what pyparsing tells us. At the end we return the key_values dictionary, or an empty dictionary if the parsing failed, and we ignore the parseFile() method's return value since we did all our processing in the parse action.
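For comparison, the same job can be approximated without pyparsing at all. The following stdlib-only sketch is my own approximation (the function name and sample data are invented), not the book's ReadKeyValue.py: it reads key=value lines into a dictionary with lowercased keys while skipping [section] headers and # comments.

```python
import re

def read_key_values(text, lowercase_keys=True):
    """Approximate the pyparsing key-value reader using only re."""
    key_values = {}
    key_value_re = re.compile(r"^(\w+)\s*=\s*(.*)$")
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and [section] headers.
        if not line or line.startswith("#") or line.startswith("["):
            continue
        match = key_value_re.match(line)
        if match:
            key, value = match.groups()
            key_values[key.lower() if lowercase_keys else key] = value
    return key_values

pls = """[playlist]
File1=Blondie.mp3
Title1=Blondie
# a comment
NumberOfEntries=1"""
print(read_key_values(pls))
```

The pyparsing version wins on clarity as the grammar grows, but for a format this simple the two are roughly the same length.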
Playlist Data Parsing

In the previous section's second subsection we created a handcrafted regex-based parser for .m3u files. In this subsection we will create a parser to do the same thing, but this time using the pyparsing module. An extract from an .m3u file, and the corresponding BNF, were shown in the previous section's figures. As we did when reviewing the previous subsection's .pls parser, we will review the parser in three parts: first the creation of the parser, then the helper function, and finally the call to the parser. Just as with the .pls parser, we are ignoring the parser's return value and instead populating our data structure as the parsing progresses. (In the following two subsections we will create parsers whose return values are used.)

    songs = []
    title = restOfLine("title")
    filename = restOfLine("filename")
    seconds = Combine(Optional("-") + Word(nums)).setParseAction(
        lambda tokens: int(tokens[0]))("seconds")
    info = Suppress("#EXTINF:") + seconds + Suppress(",") + title
    entry = info + LineEnd() + filename + LineEnd()
    entry.setParseAction(add_song)
    parser = Suppress("#EXTM3U") + OneOrMore(entry)

We begin by creating an empty list that will hold the Song named tuples. Although the BNF is quite simple, some of the parser elements are more complex than those we have seen so far. Notice also that we create the parser elements in the reverse order to the order used in the BNF. This is because in Python we can only refer to things that already exist, so, for example, we cannot create a parser element for an entry before we have created one for an info, since the former refers to the latter.

The title and filename parser elements are ones that match every character from the parse position where they are tried until the end of the line. This means that they can match any characters, including whitespace, but not including newline, which is where they stop. We also give these parser elements names, for example, "title"; this allows us to conveniently access them by name as an attribute of the tokens object that is given to parse action
functions. The seconds parser element matches an optional minus sign followed by digits. (nums is a predefined pyparsing string that contains the digits.) We use Combine() to ensure that the sign (if present) and the digits are returned as a single string. (It is possible to specify a separator for Combine(), but there is no need in this case, since the default of an empty string is exactly what we want.)
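As an aside, the #EXTINF entry structure being matched here can be approximated with a standard-library regex. This is a hedged sketch of my own (the sample playlist data and the example.com URL are invented), not the pyparsing grammar itself:

```python
import collections
import re

Song = collections.namedtuple("Song", "title seconds filename")

# Approximation of:  "#EXTINF:" SECONDS "," TITLE NEWLINE FILENAME
# SECONDS may carry an optional minus sign, just like Optional("-") + Word(nums).
entry_re = re.compile(r"#EXTINF:(?P<seconds>-?\d+),(?P<title>[^\n]+)\n"
                      r"(?P<filename>[^\n]+)")

m3u = """#EXTM3U
#EXTINF:140,Boadicea
Enya-Boadicea.mp3
#EXTINF:-1,Live Stream
http://example.com/stream"""
songs = [Song(m.group("title"), int(m.group("seconds")), m.group("filename"))
         for m in entry_re.finditer(m3u)]
print(songs[0].seconds)  # prints: 140
```

The int() conversion plays the same role as the lambda parse action attached to the seconds element.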
The parse action is so simple that we have used a lambda. The Combine() ensures that there is always precisely one token in the tokens tuple, and we use int() to convert this to an integer. If a parse action returns a value, that value becomes the value associated with the token rather than the text that was matched. We have also given a name to the token for convenience of access later on.

The info parser element consists of the literal string that indicates an entry, followed by the seconds, followed by a comma, followed by the title; all this is defined very simply and naturally, in a way that matches the BNF. Notice also that we use Suppress() for the literal string and for the comma, since although both are essential for the grammar, they are of no interest to us in terms of the data itself.

The entry parser element is very easy to define: simply an info followed by a newline, then a filename followed by a newline. The LineEnd() is a predefined pyparsing parser element that matches a newline. And since we are populating our list of songs as we parse rather than at the end, we give the entry parser element a parse action that will be called whenever an entry is matched. The parser itself is a parser element that matches the literal string that indicates an .m3u file, followed by one or more entrys.

    def add_song(tokens):
        songs.append(Song(tokens.title, tokens.seconds, tokens.filename))

The add_song() function is simple, especially since we named the parser elements we are interested in and are therefore able to access them as attributes of the tokens object. And of course, we could have written the function even more compactly by converting the tokens to a dictionary and using mapping unpacking, for example: songs.append(Song(**tokens.asDict())).

    try:
        parser.parseFile(fh)
    except ParseException as err:
        print("parse error: {0}".format(err))
        return []
    return songs

The code for calling ParserElement.parseFile() is almost identical to the code we used for the .pls parser, although in this case instead of passing a filename we opened a file
in text mode and passed in the io textiowrapper returned by the built-in open(function as the fh ("file handle"variable we have now finished reviewing two simple pyparsing parsersand seen many of the most commonly used parts of the pyparsing api in the following two subsections we will look at more complex parsersboth of which are recursivethat isthey have nonterminals whose definition includes themselvesand in |
the final example we will also see how to handle operators and their precedences and associativities.

Parsing the Blocks Domain-Specific Language

In the previous section's third subsection we created a recursive descent parser for .blk files. In this subsection we will create a pyparsing implementation of a blocks parser that should be easier to understand and be more maintainable. Two example .blk files, and the BNF for the blocks format, were shown in the previous section's figures. We will look at the creation of the parser elements in two parts; then we will look at the helper function; then we will see how the parser is called; and at the end we will see how the parser's results are transformed into a root block with child blocks (which themselves may contain child blocks, etc.) that is our required output.

    left_bracket, right_bracket = map(Suppress, "[]")
    new_rows = Word("/")("new_rows").setParseAction(
        lambda tokens: len(tokens.new_rows))
    name = CharsNotIn("[]/\n")("name").setParseAction(
        lambda tokens: tokens.name.strip())
    color = (Word("#", hexnums, exact=7) |
             Word(alphas, alphanums))("color")
    empty_node = (left_bracket + right_bracket).setParseAction(
        lambda: EmptyBlock)

As always with pyparsing parsers, we create parser elements to match the BNF from last to first, so that for every parser element we create that depends on one or more other parser elements, the elements it depends on already exist. The brackets are an important part of the BNF, but are of no interest to us for the results, so we create suitable Suppress() parser elements for them. For the new_rows parser element it might be tempting to use Literal("/"), but that must match the given text exactly, whereas we want to match as many /s as are present. Having created the new_rows parser element, we give a name to its results and add a parsing action that replaces the string of one or more /s with an integer count of how many /s there were. Notice also that because we gave a name to the result, we can access the result (i.e., the matched text) by using the name as an
attribute of the tokens object in the lambda the name parser element is slightly different from that specified in the bnf in that we have chosen to disallow not only brackets and forward slashesbut also newlines againwe give the result name we also set parse actionthis ply blocks parser |
time to strip whitespace, since whitespace (apart from newlines) is allowed as part of a name, yet we don't want any leading or trailing whitespace. For the color parser element we have specified that the first character must be a # followed by exactly six hexadecimal digits (seven characters in all), or a sequence of alphanumeric characters with the first character alphabetic.

We have chosen to handle empty nodes specially. We define an empty node as a left bracket followed by a right bracket, and replace the brackets with the value EmptyBlock, which earlier in the file is defined as EmptyBlock = 0. This means that in the parser's results list we represent empty blocks with 0, and as noted earlier, we represent new rows by an integer row count.

    nodes = Forward()
    node_data = Optional(color + Suppress(":")) + Optional(name)
    node_data.setParseAction(add_block)
    node = left_bracket + node_data + nodes + right_bracket
    nodes << Group(ZeroOrMore(Optional(new_rows) +
                              OneOrMore(node | empty_node)))

We define nodes to be a Forward() parser element, since we need to use it before we specify what it matches. We have also introduced a new parser element that isn't in the BNF: node_data, which matches the optional color and the optional name. We give this parser element a parse action that will create a new Block, so each time a node_data is encountered a Block will be added to the parser's results list. The node parser element is defined very naturally as a direct translation of the BNF. Notice that both the node_data and nodes parser elements are optional (the former consisting of two optional elements, the latter quantified by zero or more), so empty nodes are correctly allowed.

Finally, we can define the nodes parser element. Since it was originally created as a Forward() we must append parser elements to it using <<. Here we have set nodes to be zero or more of an optional new row and one or more nodes. Notice that we put node before empty_node; since pyparsing matches left to right, we normally order parser elements that have common
prefixes from longest to shortest matching we have also grouped the nodes parser element' results using group()--this ensures that each nodes is created as list in its own right this means that node that contains nodes will be represented by block for the nodeand by list for the contained nodes--and which in turn may contain blocksor integers for empty nodes or new rowsand so on it is because of this recursive structure that we had to create nodes as forward()and also why we must use the <operator (which in pyparsing is used to append)to add the group(parser element and the elements it contains to the nodes element |
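The self-reference that forces the use of Forward() is easier to see in a plain-Python sketch. The toy grammar below is my own simplification (named nodes in brackets, with no colors or new rows), not the full blocks BNF; the point is that the function that parses nodes must call itself, exactly the recursion that nodes expresses by referring back to node.

```python
def parse_nodes(text, pos=0):
    """Parse nested [name ...] nodes into (name, children) tuples.

    A toy grammar of my own devising, used only to show the recursion
    that Forward() expresses in pyparsing.
    """
    children = []
    while pos < len(text) and text[pos] == "[":
        pos += 1                                   # skip '['
        name = ""
        while pos < len(text) and text[pos] not in "[]":
            name += text[pos]
            pos += 1
        child_nodes, pos = parse_nodes(text, pos)  # the recursive step
        assert pos < len(text) and text[pos] == "]", "unbalanced brackets"
        pos += 1                                   # skip ']'
        children.append((name.strip(), child_nodes))
    return children, pos

tree, _ = parse_nodes("[messagebox[row[ok][cancel]]]")
print(tree)
```

In pyparsing the same shape is declared rather than coded: create the element with Forward(), then append a definition that mentions the element itself.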
One important but subtle point to note is that we used the - operator rather than the + operator in the definition of the node parser element. We could just as easily have used +, since both - (ParserElement.__sub__()) and + (ParserElement.__add__()) do the same job: they return a parser element that represents the concatenation of the two parser elements that are the operator's operands. The reason we chose to use - rather than + is due to a subtle but important difference between them: the - operator will stop parsing and raise a ParseSyntaxException as soon as an error is encountered, something that the + operator doesn't do. If we had used + throughout, all errors would have a line number of 1 and a column of 1, but by using -, any errors have the correct line and column numbers. In general, using + is the right approach, but if our tests show that we are getting incorrect error locations, then we can start to change + into -, as we have done here; and in this case only a single change was necessary.

    def add_block(tokens):
        return Block.Block(tokens.name,
                           tokens.color if tokens.color else "white")

Whenever a node_data is parsed, instead of the text being returned and added to the parser's results list, we create and return a Block. We also always set the color to white unless a color is explicitly specified.

In the previous examples we parsed a file and an open file handle (an opened io.TextIOWrapper); here we will parse a string. It makes no difference to pyparsing whether we give it a string or a file, so long as we use ParserElement.parseFile() or ParserElement.parseString() as appropriate. In fact, pyparsing offers other parsing methods, including ParserElement.scanString(), which searches a string for matches, and ParserElement.transformString(), which returns a copy of the string it is given, but with matched texts transformed into new texts by returning new text from parse actions.

    stack = [Block.get_root_block()]
    try:
        results = nodes.parseString(text, parseAll=True)
        assert len(results) == 1
        items = results.asList()[0]
        populate_children(items, stack)
    except (ParseException, ParseSyntaxException) as err:
        raise ValueError("Error {{0}}: syntax error, line "
                         "{0}".format(err.lineno))
    return stack[0]

This is the first pyparsing parser where we have used the parser's results rather than created the data structures ourselves during the parsing process. We expect the results to be returned as a list containing a single ParseResults
object. We convert this object into a standard Python list, so now we have a list containing a single item (a list of our results), which we assign to the items variable, and that we then further process via the populate_children() call.

Before discussing the handling of the results, we will briefly mention the error handling. If the parser fails it will raise an exception. We don't want pyparsing's exceptions to leak out to clients, since we may choose to change the parser generator later on. So, if an exception occurs, we catch it and then raise our own exception (a ValueError) with the relevant details.

In the case of a successful parse of the hierarchy.blk example, the items list is a flat sequence in which each parsed Block is followed by its own (empty) child list, interspersed with integers. Whenever we parsed an empty block we returned 0 to the parser's results list; whenever we parsed new rows we returned the number of rows; and whenever we encountered a node_data, we created a Block to represent it. In the case of Blocks they always have an empty child list (i.e., the children attribute is set to []), since at this point we don't know if the block will have children or not. So here the outer list represents the root block, the 0s represent empty blocks, the other integers represent new rows, and the []s are empty child lists, since none of the hierarchy.blk file's blocks contain other blocks.

The messagebox.blk example's items list has more structure: the outer list (representing the root block) contains a Block that has a child list of one Block that contains its own child list, and where these children are Blocks (with their own empty child lists), new rows, and empty blocks (i.e., 0s).

One problem with the list results representation is that every Block's children list is empty; each Block's children are in a list that follows the Block in the parser's results list. We need to convert this structure into a single root block with child blocks. To this end we have created a stack, a list containing a single root block. We then call the populate_children() function, which takes the list of items returned by the parser and the list with the root block, and populates the root block's children (and their children, and so on) as appropriate, with the items.
The populate_children() function is quite short, but also rather subtle:

    def populate_children(items, stack):
        for item in items:
            if isinstance(item, Block.Block):
                stack[-1].children.append(item)
            elif isinstance(item, list) and item:
                stack.append(stack[-1].children[-1])
                populate_children(item, stack)
                stack.pop()
            elif isinstance(item, int):
                if item == EmptyBlock:
                    stack[-1].children.append(Block.get_empty_block())
                else:
                    for _ in range(item):
                        stack[-1].children.append(Block.get_new_row())

We iterate over every item in the results list. If the item is a Block we append it to the stack's last (top) block's child list. (Recall that the stack is initialized with a single root block item.) If the item is a nonempty list, then it is a child list that belongs to the previous block, so we append the previous block (i.e., the top block's last child) to the stack to make it the top of the stack, and then recursively call populate_children() on the list item and the stack. This ensures that the list item (i.e., its child items) is appended to the correct item's child list. Once the recursive call is finished, we pop the top of the stack, ready for the next item. If the item is an integer then it is either an empty block (i.e., EmptyBlock) or a count of new rows. If it is an empty block, we append an empty block to the stack's top block's list of children; if the item is a new row count, we append that number of new rows to the stack's top block's list of children. And if the item is an empty list, this signifies an empty child list and we do nothing, since by default all blocks are initialized to have an empty child list.

At the end, the stack's top item is still the root block, but now it has children (which may have their own children, and so on). For the hierarchy.blk and messagebox.blk examples, the populate_children() function produces the structures illustrated in the corresponding figures. The conversion into an SVG file using the BlockOutput.save_blocks_as_svg() function is the same for all the blocks parsers, since they all produce the same root block and children structures.
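To see the flat-results-to-tree conversion in isolation, here is the same stack-based algorithm restated with a minimal stand-in Block class and a hand-built results list. The Block class, the EMPTY_BLOCK constant, and the placeholder names "&lt;row&gt;" and "&lt;empty&gt;" are assumptions made for this sketch, not the book's Block module:

```python
class Block:
    """Minimal stand-in for the book's Block: a name plus children."""
    def __init__(self, name):
        self.name = name
        self.children = []

EMPTY_BLOCK = 0  # integer stand-in for empty blocks, as in the parser

def populate_children(items, stack):
    for item in items:
        if isinstance(item, Block):
            stack[-1].children.append(item)
        elif isinstance(item, list) and item:
            # The list holds the children of the most recently added block.
            stack.append(stack[-1].children[-1])
            populate_children(item, stack)
            stack.pop()
        elif isinstance(item, int):
            if item == EMPTY_BLOCK:
                stack[-1].children.append(Block("<empty>"))
            else:
                for _ in range(item):       # item is a new-row count
                    stack[-1].children.append(Block("<row>"))

root = Block("<root>")
# Flat results: a block, then the list of its children
# (a block, one new row, and an empty block).
items = [Block("messagebox"), [Block("ok"), 1, 0]]
populate_children(items, [root])
print([child.name for child in root.children[0].children])
```

Running this shows the "messagebox" block acquiring the three children that were flattened after it in the results list.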
Parsing First-Order Logic

In this last pyparsing subsection we will create a parser for a DSL for expressing formulas in first-order logic. This has the most complex BNF of all the examples, and the implementation requires us to handle operators, including their precedences and associativities, something we have not needed to do so far. There is no handcrafted version of this parser; once we have reached this level of complexity it is better to use a parser generator. But in addition to the pyparsing version shown here, in the following section's last subsection there is an equivalent PLY parser for comparison.

Here are a few examples of the kind of first-order logical formulas that we want to be able to parse:

    a = b
    forall x: a = b
    exists y: a = b
    a = b -> true
    true & true -> true | true
    true & true & true -> forall x: exists y: true
    true & true & true -> (forall x: exists y: true)
    forall x: x = a & true
    (forall x: x = a) & true

We have opted to use ASCII characters rather than the proper logical operator symbols, to avoid any distraction from the parser itself. So, we have used forall for ∀, exists for ∃, -> for → (implies), | for ∨ (logical or), & for ∧ (logical and), and ~ for ¬ (logical not). Since Python strings are Unicode it would be easy to use the real symbols, or we could adapt the parser to accept both the ASCII forms shown here and the real symbols. In the formulas shown here, the parentheses make a difference in the last two formulas, so those formulas are different, but not for the two above them (those starting with true), which are the same despite the parentheses. Naturally, the parser must get these details right.

One surprising aspect of first-order logic is that not (~) has a lower precedence than equals (=), so ~a = b is actually ~(a = b). This is why logicians usually put a space after ~. The BNF for our first-order logic DSL is given in the figure that follows. For the sake of clarity the BNF does not include any explicit mention of whitespace (no \n or \s elements), but we will assume that whitespace is allowed between all terminals and nonterminals.
Although our subset of BNF syntax has no provision for expressing precedence or associativity, we have added comments to indicate associativities for the formulas.
9,077 | formula ::('forall'exists'symbol ':formula formula '->formula right associative formula '|formula left associative formula '&formula left associative '~formula '(formula ')term '=term 'true'falseterm ::symbol symbol '(term_list ')term_list ::term term ',term_list symbol ::[ -za- ]\wfigure bnf for first-order logic binary operators as for precedencethe order is from lowest to highest in the order shown in the bnf for the first few alternativesthat isforall and exists have the lowest precedencethen ->then |then and the remaining alternatives all have higher precedence than those mentioned here before looking at the parser itselfwe will look at the import and the line that follows it since they are different than before from pyparsing_py import (alphanumsalphasdelimitedlistforwardgroupkeywordliteralopassocoperatorprecedenceparserelementparseexceptionparsesyntaxexceptionsuppresswordparserelement enablepackrat(memoizing the import brings in some things we haven' seen before and that we will cover when we encounter them in the parser the enablepackrat(call is used to switch on an optimization (based on memoizingthat can produce considerable speedup when parsing deep operator hierarchies if we do this at all it is best to do it immediately after importing the pyparsing_py module--and before creating any parser elements although the parser is shortwe will review it in three parts for ease of explanationand then we will see how it is called we don' have any parser actions since all we want to do is to get an ast (abstract syntax tree)-- list representing what we have parsed--that we can post-process later on if we wish left_parenthesisright_parenthesiscolon map(suppress"():"forall keyword("forall"for more on packrat parsingsee bryan ford' master' thesis at pdos csail mit edu/~bafordpackrat |
9,078 | introduction to parsing exists keyword("exists"implies literal("->"or_ literal("|"and_ literal("&"not_ literal("~"equals literal("="boolean keyword("false"keyword("true"symbol word(alphasalphanumsall the parser elements created here are straightforwardalthough we had to add underscores to the end of few names to avoid conflicts with python keywords if we wanted to give users the choice of using ascii or the proper unicode symbolswe could change some of the definitions for exampleforall keyword("forall"literal(""if we are using non-unicode editor we could use the appropriate escaped unicode code pointsuch as literal("\ ")instead of the symbol term forward(term <(group(symbol group(left_parenthesis delimitedlist(termright_parenthesis)symbola term is defined in terms of itselfwhich is why we begin by creating it as forward(and rather than using straight translation of the bnf we use one of pyparsing' coding patterns recall that the delimitedlist(function returns parser element that can match list of one or more occurrences of the given parser elementseparated by commas (or by something else if we explicitly specify the separatorso here we have defined the term parser element as being either symbol followed by comma-separated list of terms or symbol--and since both start with the same parser element we must put the one with the longest potential match first formula forward(forall_expression group(forall symbol colon formulaexists_expression group(exists symbol colon formulaoperand forall_expression exists_expression boolean term formula <operatorprecedence(operand(equals opassoc left)(not_ opassoc right)(and_ opassoc left)(or_ opassoc left)(implies opassoc right)]although the formula looks quite complicated in the bnfit isn' so bad in pyparsing syntax first we define formula as forward(since it is defined in terms of itself the forall_expression and exists_expression parser elements |
9,079 | are straightforward to definewe've just used group(to make them sublists within the results list to keep their components together and at the same time distinct as unit the operatorprecedence(function (which really ought to have been called something like createoperators()creates parser element that matches one or more unarybinaryand ternary operators before calling itwe first specify what our operands are--in this case forall_expression or an exists_expression or boolean or term the operatorprecedence(function takes parser element that matches valid operandsand then list of parser elements that must be treated as operatorsalong with their arities (how many operands they take)and their associativities the resultant parser element (in this caseformulawill match the specified operators and their operands each operator is specified as threeor four-item tuple the first item is the operator' parser elementthe second is the operator' arity as an integer ( for unary operator for binary operatorand for ternary operator)the third is the associativityand the fourth is an optional parse action pyparsing infers the operatorsorder of precedence from their relative positions in the list given to the operatorprecedence(functionwith the first operator having the highest precedence and the last the lowestso the order of the items in the list we pass is important in this examplehas the highest precedence (and has no associativityso we have made it left-associative)and -has the lowest precedence and is right-associative this completes the parserso we can now look at how it is called tryresult formula parsestring(textparseall=trueassert len(result= return result[ aslist(except (parseexceptionparsesyntaxexceptionas errprint("syntax error:\ { line}\ { }^format(err(err column ))this code is similar to what we used for the blocks example in the previous subsectiononly here we have tried to give more sophisticated error handling in particularif an error occurs we print the line 
that had the error and on the line below it we print spaces followed by caret (^to indicate where the error was detected for exampleif we parse the invalid formulaforall xx truewe will getsyntax errorforall xx true |
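The line-plus-caret error display described above is easy to reproduce with plain str.format(); here is a stdlib-only sketch (the function name and the sample invalid formula are our own, and the column is assumed to be 1-based, as pyparsing's ParseException.column is).

```python
# Sketch of a caret-style syntax-error report: print the offending line,
# then spaces followed by ^ under the (1-based) column of the error.
def syntax_error_report(line, column):
    return "Syntax error:\n{0}\n{1}^".format(line, " " * (column - 1))

print(syntax_error_report("forall x: x = & true", 15))
```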
9,080 | introduction to parsing in this case the error location is slightly off--the error is that should have the form xbut it is still pretty good in the case of successful parse we get list of parseresults which has single result--as before we convert this to python list earlier we saw some example formulasnow we will look at some of them againthis time with the result lists produced by the parserpretty printed to help reveal their structure we mentioned before that the operator has lower precedence than the operator--so let' see if this is handled correctly by the parser ~true -~ ['~''true']'->'['~'[' ''='' '~true -~( ['~''true']'->'['~'[' ''='' 'here we get exactly the same results for both formulaswhich demonstrates that has higher precedence than of coursewe would need to write several more test formulas to check all the casesbut this at least looks promising two of the formulas that we saw earlier were forall xx true and (forall xx xtrueand we pointed out that although the only difference between them is the parenthesesthis is sufficient to make them different formulas here are the lists the parser produces for themforall xx true 'forall'' '[' ''='' ']'&''true(forall xx xtrue 'forall'' '[' ''='' ']'&''truethe parser is clearly able to distinguish between these two formulasand creates quite different parse trees (nested listswithout the parenthesesforall' formula is everything right of the colonbut with the parenthesesforall' scope is limited to within the parentheses but what about the two formulas that again are different only in that one has parenthesesbut where the parentheses don' matterso that the formulas are |
9,081 | actually the samethese two formulas are true forall xx and true (forall xx )and fortunatelywhen parsed they both produce exactly the same list'true''&''forall'' '[' ''='' 'the parentheses don' matter here because only one valid parse is possible we have now completed the pyparsing first-order logic parserand in factall of the book' pyparsing examples if pyparsing is of interestthe pyparsing web site (pyparsing wikispaces comhas many other examples and extensive documentationand there is also an active wiki and mailing list in the next section we will look at the same examples as we covered in this sectionbut this time using the ply parser which works in very different way from pyparsing lex/yacc-style parsing with ply ||ply (python lex yaccis pure python implementation of the classic unix toolslex and yacc lex is tool that creates lexersand yacc is tool that creates parsers--often using lexer created by lex ply is described by its authordavid beazleyas "reasonably efficient and well suited for larger grammars [itprovides most of the standard lex/yacc features including support for empty productionsprecedence ruleserror recoveryand support for ambiguous grammars ply is straightforward to use and provides very extensive error checking ply is available under the lgpl open source license and so can be used in most contexts like pyparsingply is not included in python' standard libraryso it must be downloaded and installed separately--although for linux users it is almost certainly available through the package management system and from ply version the same ply modules work with both python and python if it is necessary to obtain and install ply manuallyit is available as tarball from www dabeaz com/ply on unix-like systems such as linux and mac os xthe tarball can be unpacked by executing tar xvfz ply- tar gz in console (of coursethe exact ply version may be different windows users can use |
9,082 | introduction to parsing the untar py example program that comes with this book' examples for instanceassuming the book' examples are located in :\py egthe command to execute in the console is :\python \python exe :\py eg\untar py ply tar gz once the tarball is unpackedchange directory to ply' directory--this directory should contain file called setup py and subdirectory called ply ply can be installed automatically or manually to do it automaticallyin the console execute python setup py installor on windows execute :\python \python exe setup py install alternativelyjust copy or move the ply directory and its contents to python' site-packages directory (or to your local site-packages directoryonce installedply' modules are available as ply lex and ply yacc ply makes clear distinction between lexing (tokenizingand parsing and in factply' lexer is so powerful that it is sufficient on its own to handle all the examples shown in this except for the first-order logic parser for which we use both the ply lex and ply yacc modules when we discussed the pyparsing module we began by first reviewing various pyparsing-specific conceptsand in particular how to convert certain bnf constructs into pyparsing syntax this isn' necessary with ply since it is designed to work directly with regexes and bnfsso rather than give any conceptual overviewwe will summarize few key ply conventions and then dive straight into the examples and explain the details as we go along ply makes extensive use of naming conventions and introspectionso it is important to be aware of these when we create lexers and parsers using ply every ply lexer and parser depends on variable called tokens this variable must hold tuple or list of token names--they are usually uppercase strings corresponding to nonterminals in the bnf every token must have corresponding variable or function whose name is of the form t_token_name if variable is defined it must be set to string containing regex--so normally raw 
string is used for convenience. If a function is defined, it must have a docstring that contains a regex, again usually a raw string. In either case, the regex specifies a pattern that matches the corresponding token. One name that is special to PLY is t_error(): if a lexing error occurs and a function with this name is defined, it will be called. If we want the lexer to match a token but discard it from the results (e.g., a comment in a programming language), we can do this in one of two ways. If we are using a variable, then we make its name t_ignore_TOKEN_NAME; if we are using a function, then we use the normal name t_TOKEN_NAME, but ensure that it returns None. The PLY parser follows a similar convention to the lexer in that for each BNF rule we create a function with the prefix p_ and whose docstring contains the BNF rule we're matching (only with ::= replaced with :). Whenever a
rule matches, its corresponding function is called with a parameter (called p, following the PLY documentation's examples); this parameter can be indexed, with p[0] corresponding to the nonterminal that the rule defines, and p[1] and so on corresponding to the parts on the right-hand side of the BNF. Precedence and associativity can be set by creating a variable called precedence and giving it a tuple of tuples, in precedence order, that indicate the tokens' associativities. Similarly to the lexer, if there is a parsing error and we have created a function called p_error(), it will be called. We will make use of all the conventions described here, and more, when we review the examples.

To avoid duplicating information from earlier in the chapter, the examples and explanations given here focus purely on parsing with PLY. It is assumed that you are familiar with the formats to be parsed and their contexts of use. This means that either you have read at least this chapter's earlier coverage of those formats, or that you skip back using the backreferences provided when necessary.

Simple Key-Value Data Parsing

PLY's lexer is sufficient to handle the key-value data held in .pls files. Every PLY lexer (and parser) has a list of tokens which must be stored in the tokens variable. PLY makes extensive use of introspection, so the names of variables and functions, and even the contents of docstrings, must follow PLY's conventions. Here are the tokens and their regexes and functions for the PLY .pls parser:

    tokens = ("INI_HEADER", "COMMENT", "KEY", "VALUE")

    t_ignore_INI_HEADER = r"\[[^]]+\]"
    t_ignore_COMMENT = r"\#.*"

    def t_KEY(t):
        r"\w+"
        if lowercase_keys:
            t.value = t.value.lower()
        return t

    def t_VALUE(t):
        r"=.*"
        t.value = t.value[1:].strip()
        return t
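PLY is not in the standard library, so as a point of comparison here is a rough stdlib-only equivalent of the two token functions above, using the same regexes. The parse_line() name is our own, and lowercase_keys is modeled as a parameter rather than the outer-scope variable the PLY version assumes.

```python
import re

# Stdlib-only sketch of the same tokenization: keys are runs of word
# characters, values are everything after "=" with whitespace stripped.
KEY_RE = re.compile(r"\w+")
VALUE_RE = re.compile(r"=.*")

def parse_line(line, lowercase_keys=True):
    key_match = KEY_RE.match(line)
    if key_match is None:
        return None                      # e.g., an INI header or comment
    key = key_match.group()
    if lowercase_keys:
        key = key.lower()
    value_match = VALUE_RE.match(line, key_match.end())
    if value_match is None:
        return None
    value = value_match.group()[1:].strip()   # drop "=", trim whitespace
    return key, value

print(parse_line("NumberOfEntries=10"))
```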
Both the INI_HEADER and COMMENT tokens' matchers are simple regexes, and since both use the t_ignore_ prefix, both will be correctly matched and then discarded. An alternative approach to ignoring matches is to define a function that just uses the t_ prefix (e.g., t_COMMENT()), and that has a suite of pass (or return None), since if the return value is None the token is discarded. For the KEY and VALUE tokens we have used functions rather than regexes. In such cases the regex to match must be specified in the function's docstring, and here the docstrings are raw strings since that is our practice for regexes, and it means we don't have to escape backslashes. When a function is used, the token is passed as a token object t (following the PLY examples' naming conventions) of type ply.lex.LexToken. The matched text is held in the ply.lex.LexToken.value attribute, and we are permitted to change this if we wish. We must always return t from the function if we want the token included in the results. In the case of the t_KEY() function, we lowercase the matching key if the lowercase_keys variable (from an outer scope) is True. And for the t_VALUE() function, we strip off the = and any leading or trailing whitespace.

In addition to our own custom tokens, it is conventional to define a couple of PLY-specific functions to provide error reporting:

    def t_newline(t):
        r"\n+"
        t.lexer.lineno += len(t.value)

    def t_error(t):
        line = t.value.lstrip()
        i = line.find("\n")
        line = line if i == -1 else line[:i]
        print("Failed to parse line {0}: {1}".format(t.lineno + 1, line))

The token's lexer attribute (of type ply.lex.Lexer) provides access to the lexer itself. Here we have updated the lexer's lineno attribute by the number of newlines that have been matched. Notice that we don't have to specifically account for blank lines since the t_newline() matching function effectively does that for us. If an error occurs, the t_error() function is called; we print an error message and at most one line of the input. We add 1 to the line number when reporting so that the user sees a conventional 1-based line number. With all the token definitions in place we are ready to lex some data and create a corresponding key-value dictionary:
counting from with all the token definitions in place we are ready to lex some data and create corresponding key-value dictionary |
9,085 | key_values {lexer ply lex lex(lexer input(file read()key none for token in lexerif token type ="key"key token value elif token type ="value"if key is noneprint("failed to parsevalue '{ }without keyformat(token value)elsekey_values[keytoken value key none the lexer reads the entire input text and can be used as an iterator that produces one token at each iteration the token type attribute holds the name of the current token--this is one of the names from the tokens list--and the token value holds the matched text--or whatever we replaced it with for each tokenif the token is key we hold it and wait for its valueand if it is value we add it using the current key to the key_values dictionary at the end (not shown)we return the dictionary to the caller just as we did with the playlists py pls regex and pyparsing parsers playlist data parsing handcrafted parser pyparsing parser |in this subsection we will develop ply parser for the format and just as we did in the previous implementationsthe parser will return its results in the form of list of song (collections namedtuple()objectseach of which holds titlea duration in secondsand filename since the format is so simpleply' lexer is sufficient to do all the parsing as before we will create list of tokenseach one corresponding to nonterminal in the bnftokens (" ""info""seconds""title""filename" bnf we haven' got an entry token--this nonterminal is made up of seconds and title instead we define two statescalled entry and filename when the lexer is in the entry state we will try to read the seconds and the titlethat isan entryand when the lexer is in the filename state we will try to read the filename to make ply understand states we must create states variable that is set to list of one or more -tuples the first item in each of the tuples is state name and the second item is the state' typeeither inclusive ( this state is in addition to the current stateor exclusive ( this state is the only active |
9,086 | introduction to parsing stateply predefines the initial state which all lexers start in here is the definition of the states variable for the ply parserstates (("entry""exclusive")("filename""exclusive")now that we have defined our tokens and our states we can define the regexes and functions to match the bnf t_m "\#extm udef t_info( ) "\#extinf: lexer begin("entry"return none def t_entry_seconds( ) "-?\ +, value int( value[:- ]return def t_entry_title( ) "[^\ ]+ lexer begin("filename"return def t_filename_filename( ) "[^\ ]+ lexer begin("initial"return by defaultthe tokensregexesand functions operate in the initial state howeverwe can specify that they are active in only one particular state by embedding the state' name after the t_ prefix so in this case the t_m regex and the t_info(function will match only in the initial statethe t_entry_seconds(and t_entry_title(functions will match only in the entry stateand the t_filename_filename(function will match only in the filename state the lexer' state is changed by calling the lexer object' begin(method with the new state' name as its argument so in this examplewhen we match the info token we switch to the entry statenow only the seconds and title tokens can match once we have matched title we switch to the filename stateand once we have matched filename we switch back to the initial state ready to match the next info token notice that in the case of the t_info(function we return nonethis means that the token will be discardedwhich is correct since although we must match #extinffor each entrywe don' need that text for the t_entry_seconds(functionwe strip off the trailing comma and replace the token' value with the integer number of seconds |
9,087 | in this parser we want to ignore spurious whitespace that may occur between tokensand we want to do so regardless of the state the lexer is in this can be achieved by creating t_ignore variableand by giving it state of any which means it is active in any statet_any_ignore \ \nthis will ensure that any whitespace between tokens is safely and conveniently ignored we have also defined two functionst_any_newline(and t_any_error()these have exactly the same bodies as the t_newline(and t_error(functions defined in the previous subsection ( )--so neither are shown here--but include the state of any in their names so that they are active no matter what state the lexer is in songs [title seconds none lexer ply lex lex(lexer input(fh read()for token in lexerif token type ="seconds"seconds token value elif token type ="title"title token value elif token type ="filename"if title is not none and seconds is not nonesongs append(song(titlesecondstoken value)title seconds none elseprint("failedfilename '{ }without title/durationformat(token value)we use the lexer in the same way as we did for the pls lexeriterating over the tokensaccumulating values (for the seconds and title)and whenever we get filename to go with the seconds and titleadding new song to the song list as beforeat the end (not shown)we return the key_values dictionary to the caller parsing the blocks domain-specific language handcrafted blocks parser |the blocks format is more sophisticated than the key-value-based pls format or the format since it allows blocks to be nested inside each other this presents no problems to plyand in fact the definitions of the tokens can be done wholly using regexes without requiring any functions or states at all |
    tokens = ("NODE_START", "NODE_END", "COLOR", "NAME", "NEW_ROWS",
              "EMPTY_NODE")

    t_NODE_START = r"\["
    t_NODE_END = r"\]"
    t_COLOR = r"(?:\#[\dA-Fa-f]{6}|[a-zA-Z]\w*):"
    t_NAME = r"[^][/\n]+"
    t_NEW_ROWS = r"/+"
    t_EMPTY_NODE = r"\[\]"

The regexes are taken directly from the BNF, except that we have chosen to disallow newlines in names. In addition, we have defined a t_ignore regex to skip spaces and tabs, and t_newline() and t_error() functions that are the same as before, except that t_error() raises a custom LexError with its error message rather than printing the error message. With the tokens set up, we are ready to prepare for lexing and then to do the lexing:

    stack = [Block.get_root_block()]
    block = None
    brackets = 0
    lexer = ply.lex.lex()
    try:
        lexer.input(text)
        for token in lexer:

As with the previous blocks parsers, we begin by creating a stack (a list) with an empty root block. This will be populated with child blocks (and the child blocks with child blocks, etc.) to reflect the blocks that are parsed; at the end we will return the root block with all its children. The block variable is used to hold a reference to the block that is currently being parsed so that it can be updated as we go. We also keep a count of the brackets, purely to improve the error reporting. One difference from before is that we do the lexing and the parsing of the tokens inside a try ... except suite; this is so that we can catch any LexError exceptions and convert them to ValueErrors.

            if token.type == "NODE_START":
                brackets += 1
                block = Block.get_empty_block()
                stack[-1].children.append(block)
                stack.append(block)
            elif token.type == "NODE_END":
                brackets -= 1
                if brackets < 0:
                    raise LexError("too many ']'s")
                block = None
                stack.pop()

Whenever we start a new node we increment the brackets count and create a new empty block. This block is added as the last child of the stack's top block's list of children, and is itself pushed onto the stack. If the block has a color or name we will be able to set it because we keep a reference to the block in the block variable. The logic used here is slightly different from the logic used in the recursive descent parser: there we pushed new blocks onto the stack only if we knew that they had nested blocks; here we always push new blocks onto the stack, safe in the knowledge that they'll be popped straight off again if they don't contain any nested blocks. This also makes the code simpler and more regular. When we reach the end of a block we decrement the brackets count, and if it is negative we know that we have had too many close brackets and can report the error immediately. Otherwise, we set block to None, since we now have no current block, and pop the top of the stack (which should never be empty).

            elif token.type == "COLOR":
                if block is None or Block.is_new_row(block):
                    raise LexError("syntax error")
                block.color = token.value[:-1]
            elif token.type == "NAME":
                if block is None or Block.is_new_row(block):
                    raise LexError("syntax error")
                block.name = token.value

If we get a color or name, we set the corresponding attribute of the current block, which should refer to a block rather than being None or denoting a new row.

            elif token.type == "EMPTY_NODE":
                stack[-1].children.append(Block.get_empty_block())
            elif token.type == "NEW_ROWS":
                for x in range(len(token.value)):
                    stack[-1].children.append(Block.get_new_row())

If we get an empty node or one or more new rows, we add them as the last child of the stack's top block's list of children.

        if brackets:
            raise LexError("unbalanced brackets []")
    except LexError as err:
        raise ValueError("Error {{0}}: line {0}: {1}".format(
                         token.lineno, err))
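The always-push stack logic described above can be seen in isolation in this stdlib-only sketch, which parses nested [...] groups into nested Python lists; the parse_nested() name is ours, and the Block class and the rest of the blocks grammar (colors, new rows) are deliberately left out.

```python
def parse_nested(text):
    """Parse e.g. "[a[b][c]]" into nested lists using an explicit stack."""
    root = []
    stack = [root]
    name = ""
    for ch in text:
        if ch == "[":
            if name:
                stack[-1].append(name)
                name = ""
            child = []
            stack[-1].append(child)
            stack.append(child)      # always push; popped at the matching ]
        elif ch == "]":
            if name:
                stack[-1].append(name)
                name = ""
            if len(stack) == 1:      # more ] than [ so far
                raise ValueError("too many ']'s")
            stack.pop()
        else:
            name += ch
    if len(stack) != 1:              # some [ never got its ]
        raise ValueError("unbalanced brackets []")
    return root

print(parse_nested("[a[b][c]]"))
```

The structure [a[b][c]] comes back as [['a', ['b'], ['c']]]: the named block with its two nested children, just as the blocks parser builds its tree of Block children.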
Once lexing has finished we check that the brackets have balanced, and if not we raise a LexError. If a LexError occurred during lexing, parsing, or when we checked the brackets, we raise a ValueError that contains an escaped str.format() field name; the caller is expected to use this to insert the filename, something we cannot do here because we are given only the file's text, not the filename or file object. At the end (not shown), we return stack[0]; this is the root block that should now have children (and which in turn might have children), representing the .blk file we have parsed. This block is suitable for passing to the BlockOutput.save_blocks_as_svg() function, just as we did with the recursive descent and pyparsing blocks parsers.

Parsing First-Order Logic

In the last pyparsing subsection we created a parser for first-order logic; in this subsection we will create a PLY version that is designed to produce identical output to the pyparsing version. Setting up the lexer is very similar to what we did earlier. The only novel aspect is that we keep a dictionary of "keywords", which we check whenever we have matched a SYMBOL (the equivalent of an identifier in a programming language). Here is the lexer code, complete except for the t_ignore regex and the t_newline() and t_error() functions, which are not shown because they are the same as ones we have seen before.

    keywords = {"exists": "EXISTS", "forall": "FORALL",
                "true": "TRUE", "false": "FALSE"}
    tokens = (["SYMBOL", "COLON", "COMMA", "LPAREN", "RPAREN", "EQUALS",
               "NOT", "AND", "OR", "IMPLIES"] + list(keywords.values()))

    def t_SYMBOL(t):
        r"[a-zA-Z]\w*"
        t.type = keywords.get(t.value, "SYMBOL")
        return t

    t_EQUALS = r"="
    t_NOT = r"~"
    t_AND = r"&"
    t_OR = r"\|"
    t_IMPLIES = r"->"
    t_COLON = r":"
    t_COMMA = r","
    t_LPAREN = r"\("
    t_RPAREN = r"\)"
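The keyword trick used above (match an identifier, then retype it via dict.get()) works just as well outside PLY. Here is a stdlib sketch with our own classify() name; note that re.fullmatch() used here is a newer addition to the re module than the Python versions this chapter originally targeted.

```python
import re

# Stdlib sketch of the keyword trick: match an identifier, then use
# dict.get() to retype it as a keyword token name if appropriate.
KEYWORDS = {"exists": "EXISTS", "forall": "FORALL",
            "true": "TRUE", "false": "FALSE"}
SYMBOL_RE = re.compile(r"[a-zA-Z]\w*")

def classify(word):
    if SYMBOL_RE.fullmatch(word) is None:
        raise ValueError("not a symbol: {0!r}".format(word))
    return KEYWORDS.get(word, "SYMBOL"), word

print(classify("forall"))   # a keyword
print(classify("x"))        # a plain symbol
```

Because the keyword lookup happens after the identifier regex has matched, "forallx" is correctly classified as a plain SYMBOL rather than as the FORALL keyword followed by x, the same benefit the t_SYMBOL() function gets for free.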
The t_SYMBOL() function is used to match both symbols (identifiers) and keywords. If the key given to dict.get() isn't in the dictionary, the default value (in this case "SYMBOL") is returned; otherwise the key's corresponding token name is returned. Notice also that unlike in previous lexers, we don't change the ply.lex.LexToken's value attribute, but we do change its type attribute to be either "SYMBOL" or the appropriate keyword token name. All the other tokens are matched by simple regexes, all of which happen to match one or two literal characters.

In all the previous PLY examples the lexer alone has been sufficient for our parsing needs, but for the first-order logic BNF we need to use PLY's parser as well as its lexer to do the parsing. Setting up a PLY parser is quite straightforward, and unlike pyparsing we don't have to reformulate our BNF to match certain patterns but can use the BNF directly. For each BNF definition, we create a function with a name prefixed by p_ and whose docstring contains the BNF statement the function is designed to process. As the parser parses, it calls the function with the matching BNF statement and passes it a single argument of type ply.yacc.YaccProduction. The argument is given the name p (following the PLY examples' naming conventions). When a BNF statement includes alternatives, it is possible to create just one function to handle them all, although in most cases it is clearer to create one function per alternative or set of structurally similar alternatives. We will look at each of the parser functions, starting with the one for handling quantifiers:

    def p_formula_quantifier(p):
        """FORMULA : FORALL SYMBOL COLON FORMULA
                   | EXISTS SYMBOL COLON FORMULA"""
        p[0] = [p[1], p[2], p[4]]

The docstring contains the BNF statement that the function corresponds to, but using : rather than ::= to mean "is defined by". Note that the words in the BNF are either tokens that the lexer matches or nonterminals (e.g., FORMULA) that the BNF matches. One PLY quirk to be aware of is that if we have alternatives, as we have
here, each one must be on a separate line in the docstring. The BNF's definition of the formula nonterminal involves many alternatives, but here we have used just the parts that are concerned with quantifiers; we will handle the other alternatives in other functions. The p argument of type ply.yacc.YaccProduction supports Python's sequence API, with each item corresponding to an item in the BNF. So in all cases p[0] corresponds to the nonterminal that is being defined (in this case FORMULA), with the other items matching the parts on the right-hand side. Here, p[1] matches one of the symbols "exists" or "forall", p[2] matches the quantified identifier (typically x or y), p[3] matches the COLON token (a literal which we ignore), and p[4] matches the FORMULA that is quantified. This is a recursive definition, so the p[4] item
is itself a FORMULA which may contain FORMULAs, and so on. We don't have to concern ourselves with whitespace between tokens since we created a t_ignore regex which told the lexer to ignore (i.e., skip) whitespace. In this example, we could just as easily have created two separate functions, say, p_formula_forall() and p_formula_exists(), giving them one alternative of the BNF each and the same suite. We chose to combine them, and some of the others, simply because they have the same suites.

Formulas in the BNF have three binary operators involving formulas. Since these can be handled by the same suite, we have chosen to parse them using a single function and BNF with alternatives:

    def p_formula_binary(p):
        """FORMULA : FORMULA IMPLIES FORMULA
                   | FORMULA OR FORMULA
                   | FORMULA AND FORMULA"""
        p[0] = [p[1], p[2], p[3]]

The result, that is, the FORMULA stored in p[0], is simply a list containing the left operand, the operator, and the right operand. This code says nothing about precedence and associativity, and yet we know that IMPLIES is right-associative, that the other two are left-associative, and that IMPLIES has lower precedence than the others. We will see how to handle these aspects once we have finished reviewing the parser's functions.

    def p_formula_not(p):
        "FORMULA : NOT FORMULA"
        p[0] = [p[1], p[2]]

    def p_formula_boolean(p):
        """FORMULA : FALSE
                   | TRUE"""
        p[0] = p[1]

    def p_formula_group(p):
        "FORMULA : LPAREN FORMULA RPAREN"
        p[0] = p[2]

    def p_formula_symbol(p):
        "FORMULA : SYMBOL"
        p[0] = p[1]

All these FORMULA alternatives are unary, but even though the suites for p_formula_boolean() and p_formula_symbol() are the same, we have given each one its own function since they are all logically different from each other. One slightly surprising aspect of the p_formula_group() function is that we set its value to be p[2] rather than [p[2]]; this works because we already use lists to
embody all the operators, so while it would be harmless to use a list here (and might be essential for other parsers), in this example it isn't necessary.

    def p_formula_equals(p):
        "FORMULA : TERM EQUALS TERM"
        p[0] = [p[1], p[2], p[3]]

This is the part of the BNF that relates formulas and terms. The implementation is straightforward, and we could have included this with the other binary operators since the function's suite is the same. We chose to handle this separately purely because it is logically different from the other binary operators.

    def p_term(p):
        """TERM : SYMBOL LPAREN TERMLIST RPAREN
                | SYMBOL"""
        p[0] = p[1] if len(p) == 2 else [p[1], p[3]]

    def p_termlist(p):
        """TERMLIST : TERM COMMA TERMLIST
                    | TERM"""
        p[0] = p[1] if len(p) == 2 else [p[1], p[3]]

Terms can either be a single symbol or a symbol followed by a parenthesized term list (a comma-separated list of terms), and these two functions between them handle both cases.

    def p_error(p):
        if p is None:
            raise ValueError("Unknown error")
        raise ValueError("Syntax error, line {0}: {1}".format(
                         p.lineno, p.type))

If a parser error occurs, the p_error() function is called. Although we have treated the ply.yacc.YaccProduction argument as a sequence up to now, it also has attributes, and here we have used the lineno attribute to indicate where the problem occurred.

    precedence = (("nonassoc", "FORALL", "EXISTS"),
                  ("right", "IMPLIES"),
                  ("left", "OR"),
                  ("left", "AND"),
                  ("right", "NOT"),
                  ("nonassoc", "EQUALS"))

To set the precedences and associativities of operators in a PLY parser, we must create a precedence variable and give it a list of tuples where each tuple's first item is the required associativity and where each tuple's second and subsequent items are the tokens concerned. PLY will honor the specified associativities and will set the precedences from lowest (first tuple in the list) to highest (last tuple in the list). For unary operators, associativity isn't really an issue for PLY (although it can be for pyparsing), so for NOT we could have used "nonassoc" and the parsing results would not be affected.

At this point we have the tokens, the lexer's functions, the parser's functions, and the precedence variable all set up. Now we can create a PLY lexer and parser and parse some text:

    lexer = ply.lex.lex()
    parser = ply.yacc.yacc()
    try:
        return parser.parse(text, lexer=lexer)
    except ValueError as err:
        print(err)
        return []

This code parses the formula it is given and returns a list that has exactly the same format as the lists returned by the pyparsing version. (See the end of the subsection on the pyparsing first-order logic parser for examples of the kind of lists that the parser returns.) PLY tries very hard to give useful and comprehensive error messages, although in some cases it can be overzealous. For example, when PLY creates the first-order logic parser for the first time, it warns that there are shift/reduce conflicts. In practice, PLY defaults to shifting in such cases, since that's usually the right thing to do, and it is certainly the right action for the first-order logic parser. The PLY documentation explains this and many other issues that can arise, and the parser.out file, which is produced whenever a parser is created, contains all the information necessary to analyze what is going on. As a rule of thumb, shift/reduce warnings may be benign, but any other kind of warning should be eliminated by correcting the parser.

We have now completed our coverage of the PLY examples. The PLY documentation (www.dabeaz.com/ply) provides much more information than we have had space to convey here, including complete coverage of all of PLY's features, many of which were not needed for these examples.

Summary

For the simplest situations and for nonrecursive grammars, using regexes is a good choice, at least
for those who are comfortable with regex syntax. Another approach is to create a finite state automaton, for example, by reading the text character by character and maintaining one or more state variables,

(Footnote: in PyParsing, precedences are set the other way up, from highest to lowest.)
although this can lead to if statements with lots of elifs, and nested if ... elifs, that can be difficult to maintain.

For more complex grammars, and those that are recursive, PyParsing, PLY, and other generic parser generators are a better choice than using regexes or finite state automata, or writing a handcrafted recursive descent parser. Of all the approaches, PyParsing seems to require the least amount of code, although it can be tricky to get recursive grammars right, at least at first. PyParsing works at its best when we take full advantage of its predefined functionality, of which there is quite a lot more than we covered in this chapter, and use the programming patterns that suit it. This means that in more complex cases we cannot simply translate a BNF directly into PyParsing syntax, but must adapt the implementation of the BNF to fit in with the PyParsing philosophy. PyParsing is an excellent module, and it is used in many programming projects.

PLY not only supports the direct translation of BNFs, it requires that we do this, at least for the ply.yacc module. It also has a powerful and flexible lexer which is sufficient in its own right for handling many simple grammars. PLY also has excellent error reporting. PLY uses a table-driven algorithm that makes its speed independent of the size or complexity of the grammar, so it tends to run faster than parsers that use recursive descent such as PyParsing. One aspect of PLY that may take some getting used to is its heavy reliance on introspection, where both docstrings and function names have significance. Nonetheless, PLY is an excellent module, and has been used to create some complex parsers, including ones for the C and zxBasic programming languages.

Although it is generally straightforward to create a parser that accepts valid input, creating one that accepts all valid input and rejects all invalid input can be quite a challenge. For example, do the first-order logic parsers in this chapter's last section accept all valid formulas and reject all invalid ones? And even if we do manage to
reject invalid input, do we provide error messages that correctly identify what the problem is and where it occurred? Parsing is a large and fascinating topic, and this chapter is designed to introduce only the very basics, so further reading and practical experience are essential for those wanting to go further.

One other point that this chapter hints at is that, as large and wide-ranging as Python's standard library is, many high-quality, third-party packages and modules that provide very useful additional functionality are also available. Most of these are available through the Python Package Index, pypi.python.org/pypi, but some can only be discovered using a search engine. In general, when you have some specialized need that is not met by Python's standard library, it is always worth looking for a third-party solution before writing your own.
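To make the summary's mention of handcrafted recursive descent parsing concrete, here is a small, self-contained sketch of the precedence-climbing technique. It is entirely our own and not taken from the chapter's parsers: the operator spellings ("->", "|", "&"), the precedence table, and the parse() and parse_expr() names are invented for illustration. Only the nested-list result format follows the chapter, and the sketch shows how right-associativity (for implies) and left-associativity (for and/or) both fall out of a single rule.

```python
# Hypothetical precedence-climbing parser; not the chapter's PLY parser.
# Operands are bare symbols; operators and precedences are invented.
BINARIES = {"->": ("right", 1), "|": ("left", 2), "&": ("left", 3)}

def parse(tokens):
    tree, rest = parse_expr(list(tokens), 0)
    if rest:
        raise ValueError("trailing tokens: {0}".format(rest))
    return tree

def parse_expr(tokens, min_prec):
    left = tokens.pop(0)               # an operand is just a symbol here
    while tokens and tokens[0] in BINARIES:
        assoc, prec = BINARIES[tokens[0]]
        if prec < min_prec:
            break
        op = tokens.pop(0)
        # A left-associative operator's right-hand side may only contain
        # operators of strictly higher precedence; a right-associative
        # operator's right-hand side may contain the same precedence again.
        right, tokens = parse_expr(
            tokens, prec + 1 if assoc == "left" else prec)
        left = [left, op, right]       # same [left, op, right] list format
    return left, tokens
```

For instance, parse(["a", "->", "b", "->", "c"]) yields ["a", "->", ["b", "->", "c"]], grouping to the right, whereas parse(["a", "&", "b", "&", "c"]) yields [["a", "&", "b"], "&", "c"], grouping to the left.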
Exercise

Create a suitable BNF and then write a simple program for parsing basic BibTeX book references that produces output in the form of a dictionary of dictionaries. For example, given input like this:

    @Book{blanchette+summerfield,
        author = "Jasmin Blanchette and Mark Summerfield",
        title = "C++ GUI Programming with Qt 4, Second Edition",
        year = 2008,
        publisher = "Prentice Hall"
    }

the expected output would be a dictionary like this (here, pretty printed):

    {'blanchette+summerfield': {
        'author': 'Jasmin Blanchette and Mark Summerfield',
        'publisher': 'Prentice Hall',
        'title': 'C++ GUI Programming with Qt 4, Second Edition',
        'year': 2008}}

Each book has an identifier and this should be used as the key for the outer dictionary; the value should itself be a dictionary of key-value items. Each book's identifier can contain any characters except whitespace, and each key=value field's value can be either an integer or a double-quoted string. String values can include arbitrary whitespace, including newlines, so replace every internal sequence of whitespace (including newlines) with a single space, and of course strip whitespace from the ends. Note that the last key=value for a given book is not followed by a comma.

Create the parser using either PyParsing or PLY. If using PyParsing, the Regex() class will be useful for the identifier and the QuotedString() class will be useful when defining the value; use the delimitedList() function for handling the list of key=values. If using PLY, the lexer is sufficient, providing you use separate tokens for integer and string values. A solution using PyParsing should take somewhat fewer lines than one using PLY. A solution that includes both PyParsing and PLY functions is provided in bibtex.py.
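The exercise calls for PyParsing or PLY; purely as a hedged, standard-library-only sketch of the same idea, a regex-based parser for this simplified BibTeX subset might look like the following. The function and pattern names are our own, and the patterns assume that string values never contain embedded double quotes or braces and that identifiers contain no commas.

```python
import re

# One @Book entry: an identifier (no whitespace, no comma), a comma,
# then the body of key = value fields up to the closing brace.
ENTRY_RE = re.compile(r"@Book\{(?P<identifier>[^\s,]+),(?P<body>.*?)\}",
                      re.DOTALL | re.IGNORECASE)
# One key = value field: the value is an integer or a double-quoted string.
FIELD_RE = re.compile(r'(?P<key>\w+)\s*=\s*(?:"(?P<string>[^"]*)"'
                      r'|(?P<integer>\d+))', re.DOTALL)

def parse_bibtex(text):
    books = {}
    for entry in ENTRY_RE.finditer(text):
        fields = {}
        for field in FIELD_RE.finditer(entry.group("body")):
            if field.group("integer") is not None:
                value = int(field.group("integer"))
            else:  # collapse internal whitespace runs and strip the ends
                value = " ".join(field.group("string").split())
            fields[field.group("key")] = value
        books[entry.group("identifier")] = fields
    return books
```

The whitespace normalization required by the exercise is handled by " ".join(s.split()), which splits on any whitespace run (including newlines) and rejoins with single spaces.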
Dialog-Style Programs
Main-Window-Style Programs

Introduction to GUI Programming

Python has no native support for GUI (graphical user interface) programming, but this isn't a problem since many GUI libraries written in other languages can be used by Python programmers. This is possible because many GUI libraries have Python wrappers or bindings; these are packages and modules that are imported and used like any other Python packages and modules, but which access functionality that is in non-Python libraries under the hood.

Python's standard library includes Tcl/Tk. Tcl is an almost syntax-free scripting language and Tk is a GUI library written in Tcl; Python's tkinter module provides Python bindings for the Tk GUI library. Tk has three advantages compared with the other GUI libraries that are available for Python. First, it is installed as standard with Python, so it is always available; second, it is small (even including Tcl); and third, it comes with IDLE, which is very useful for experimenting with Python and for editing and debugging Python programs. Unfortunately, prior to Tk 8.5, Tk had a very dated look and a very limited set of widgets ("controls" or "containers" in Windows-speak). Although it is fairly easy to create custom widgets in Tk by composing other widgets together in a layout, Tk does not provide any direct way of creating custom widgets from scratch with the programmer able to draw whatever they want. Additional Tk-compatible widgets are available using the Ttk library (only with Python 3.1 and Tk 8.5 and later) and the Tix library; these are also part of Python's standard library. Note that Tix is not always provided on non-Windows platforms, most notably Ubuntu, which at the time of this writing offers it only as an unsupported add-on, so for maximum portability it is best to avoid using Tix altogether. The Python-oriented documentation for tkinter, Ttk, and Tix is rather sparse; most of the documentation for these libraries is written for Tcl/Tk programmers and may not be easy for non-Tcl programmers to
decipher.
For developing GUI programs that must run on any or all Python desktop platforms (i.e., Windows, Mac OS X, and Linux), using only a standard Python installation with no additional libraries, there is just one choice: Tk. If it is possible to use third-party libraries, the number of options opens up considerably. One route is to get the WCK (Widget Construction Kit, www.effbot.org/zone/wck.htm), which provides additional Tk-compatible functionality, including the ability to create custom widgets whose contents are drawn in code. The other choices don't use Tk and fall into two categories: those that are specific to a particular platform and those that are cross-platform. Platform-specific GUI libraries can give us access to platform-specific features, but at the price of locking us in to that platform. The three most well-established cross-platform GUI libraries with Python bindings are PyGtk (www.pygtk.org), PyQt (www.riverbankcomputing.com/software/pyqt), and wxPython (www.wxpython.org). All three of these offer far more widgets than Tk, produce better-looking GUIs (although the gap has narrowed with Tk 8.5, and even more with Ttk), and make it possible to create custom widgets drawn in code. All of them are easier to learn and use than Tk, and all have more and much better Python-oriented documentation than Tk. And in general, programs that use PyGtk, PyQt, or wxPython need less code and produce better results than programs written using Tk. (At the time of this writing, PyQt had already been ported to Python 3, but the ports of both wxPython and PyGtk were still being done.)

Yet despite its limitations and frustrations, Tk can be used to build useful GUI programs, IDLE being the most well known in the Python world. Furthermore, Tk development seems to have picked up lately, with Tk 8.5 offering theming, which makes Tk programs look much more native, as well as the welcome addition of many new widgets. The purpose of this chapter is to give just a flavor of Tk programming; for serious GUI development it is best to
skip this chapter (since it shows the vintage Tk approach to GUI programming) and to use one of the alternative libraries. But if Tk is your only option, for example, if your users have only a standard Python installation and cannot or will not install a third-party GUI library, then realistically you will need to learn enough of the Tcl language to be able to read Tk's documentation.

In the following sections we will use Tk to create two GUI programs. The first is a very small dialog-style program that does compound interest calculations. The second is a more elaborate main-window-style program that manages a list of bookmarks (names and URLs). By using such simple data we can

(Footnote: the only Python/Tk book known to the author is Python and Tkinter Programming by John Grayson, published in 2000; it is out of date in some areas. A good Tcl/Tk book is Practical Programming in Tcl and Tk by Brent Welch and Ken Jones. All the Tcl/Tk documentation is online at www.tcl.tk, and tutorials can be found at www.tkdocs.com.)
[Figure: console programs versus GUI programs. A classic console program is invoked, reads its input, processes, and writes its output. A classic GUI program is invoked, creates its GUI, and then starts an event loop, processing each event as it arrives and terminating only when a request to terminate is received.]

concentrate on the GUI programming aspects without distraction. In the coverage of the bookmarks program we will see how to create a custom dialog and how to create a main window with menus and toolbars, as well as how to combine them all together to create a complete working program.

Both of the example programs use pure Tk, making no use of the Ttk and Tix libraries, so as to ensure compatibility with Python 3.0. It isn't difficult to convert them to use Ttk, but at the time of this writing, some of the Ttk widgets provide less support for keyboard users than their Tk cousins, so while Ttk programs might look better, they may also be less convenient to use. But before diving into the code, we must review some of the basics of GUI programming, since it is a bit different from writing console programs.

Python console programs and module files always have a .py extension, but for Python GUI programs we use a .pyw extension (module files always use .py, though). Both .py and .pyw work fine on Linux, but on Windows, .pyw ensures that Windows uses the pythonw.exe interpreter instead of python.exe, and this in turn ensures that when we execute a Python GUI program, no unnecessary console window will appear. Mac OS X works similarly to Windows, using the .pyw extension for GUI programs.
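The contrast the figure draws can be sketched without any GUI library at all. The following is a hypothetical simulation, not how tkinter is implemented: events arrive on a queue, and the loop dispatches each one until a terminate request is seen, which is the overall structure that tkinter's mainloop() provides for us.

```python
import collections

def run_event_loop(events):
    # The deque stands in for the stream of events (clicks, key presses,
    # and so on) that a real GUI toolkit would deliver to the program.
    # The event names ("click", "quit", ...) are invented for illustration.
    queue = collections.deque(events)
    handled = []
    while queue:                      # "start event loop"
        event = queue.popleft()       # fetch the next event to process
        if event == "quit":           # "request to terminate? -> yes"
            break
        handled.append("processed " + event)   # dispatch to a handler
    return handled
```

Note that, just as in the figure, events arriving after the terminate request are never processed; the loop has already exited.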