You can set a book's content via its format attributes too. Given a dictionary of two dimensional arrays, book_dict:

    other_book = pyexcel.Book()
    other_book.bookdict = book_dict
    print(other_book.plain)

You can set it via the 'xls' attribute too:

    another_book = pyexcel.Book()
    another_book.xls = other_book.xls
    print(another_book.mediawiki)

The mediawiki attribute renders each sheet as a wikitable ({| class="wikitable" style="text-align: left;" ... |}), one table per sheet.

How about setting content via a url?

    another_book.url = ".../multiple-sheets-example.xls"

and the downloaded book's sheets print just like a locally loaded book.
Getters and setters

You can pass source specific parameters to the getter and setter functions. For example, csv delimiters:

    content = "1-2-3\n4-5-6"      # values are illustrative; fields separated by "-"
    sheet = pyexcel.Sheet()
    sheet.set_csv(content, delimiter="-")
    sheet.csv                      # '1,2,3\r\n4,5,6\r\n'
    sheet.get_csv(delimiter="|")   # '1|2|3\r\n4|5|6\r\n'

Read partial data

When you are dealing with a huge amount of data, obviously you would not like to fill up your memory with it. What you may want to do is: record data from the nth line, take a number of records and stop, and only use your memory for those records, not for the beginning part nor for the tail part. Hence the partial read feature was developed to read partial data into memory for processing.

You can paginate by row, by column and by both; hence you dictate what portion of the data to read back. But remember: only the row limit features help you save memory. Let's say you use this feature to record data from the nth column, take a number of columns and skip the rest: you are not going to reduce your memory footprint. Why did I not see the above benefit? This feature depends heavily on the implementation details of the underlying reader:

'pyexcel-xls' (xlrd), 'pyexcel-xlsx' (openpyxl), 'pyexcel-ods' (odfpy) and 'pyexcel-ods3' (pyexcel-ezodf) will read all data into memory, because xls, xlsx and ods files are effectively zipped folders: all four will unzip the folder and read the content in xml format in full, so as to make sense of all details. Hence, while the partial data is being returned, the memory consumption won't differ from reading the whole data back. Only after the partial data is returned does the memory consumption curve jump off the cliff. So the pagination code here only limits the data returned to your program.

With that said, 'pyexcel-xlsxr', 'pyexcel-odsr' and 'pyexcel-htmlr' do read partial data into memory. Those three are implemented in such a way that they consume the xml (html) only when needed; when they have read the designated portion of the data, they stop, even if they are half way through. In addition, pyexcel's csv readers can read partial data into memory too.

Let's assume the following file is a huge csv file (built here from a small stand-in array):

    import pyexcel as pe

    data = [[i, i * 10, i * 100] for i in range(1, 7)]   # stand-in values
    pe.save_as(array=data, dest_file_name="your_file.csv")

And let's pretend to read partial data:

    pe.get_sheet(file_name="your_file.csv", start_row=2, row_limit=3)

which returns only the selected rows of your_file.csv. And you could as well do the same for columns:

    pe.get_sheet(file_name="your_file.csv", start_column=1, column_limit=2)

Obviously, you could do both at the same time:

    pe.get_sheet(file_name="your_file.csv",
                 start_row=2, row_limit=3,
                 start_column=1, column_limit=2)

The pagination support is available across all pyexcel plugins.

Note: there is no column pagination support for query sets as a data source.

Formatting while transcoding a big data file

If you are transcoding a big data set, the conventional formatting method would not help unless on-demand free RAM is available. However, there is a way to minimize the memory footprint of pyexcel while the formatting is performed.

Let's continue from the previous example. Suppose we want to transcode "your_file.csv" to "your_file.xlsx" but increase each element by one. What we can do is to define a row renderer function as the following:

    def increment_by_one(row):
        for element in row:
            yield element + 1

Then pass it on to the isave_as function using row_renderer:

    pe.isave_as(file_name="your_file.csv",
                row_renderer=increment_by_one,
                dest_file_name="your_file.xlsx")

Note: if the data content is from a generator, isave_as has to be used.

We can verify if it was done correctly:

    pe.get_sheet(file_name="your_file.xlsx")

which shows every value incremented by one.
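If you prefer a generator over a materialised Sheet for this kind of partial read, the streaming functions accept the same pagination keywords. A minimal sketch, assuming your_file.csv from the example above exists:

    import pyexcel

    # iget_array() returns a generator of rows instead of a Sheet instance,
    # so only one row needs to live in memory at a time
    rows = pyexcel.iget_array(file_name="your_file.csv",
                              start_row=2, row_limit=3)
    for row in rows:
        print(row)

    # the 'i' family of functions leaves the file handle open; close it explicitly
    pyexcel.free_resources()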
Sheet: Data Access

Iterate a csv file

Here is the way to read a csv file and iterate through each row:

    sheet = pyexcel.get_sheet(file_name='tutorial.csv')
    for row in sheet:
        print("%s %s" % (row[0], row[1]))

which prints, for this tutorial file, the header line ("name age") followed by the names and ages of Chu Chu and Mo Mo.

Often people want to use a csv dict reader to read it, because it has a header. Here is how you do it with pyexcel:

    sheet = pyexcel.get_sheet(file_name='tutorial.csv')
    sheet.name_columns_by_row(0)
    for row in sheet:
        print("%s %s" % (row[0], row[1]))

This time only the data rows are printed. The name_columns_by_row(0) call removes the header from the actual content; the removed header can then be used to access its columns by name, for example:

    sheet.column['age'][0]

Random access to individual cell

The top left corner of a sheet is (0, 0), meaning both the row index and the column index start from 0. To randomly access a cell of a Sheet instance, two syntaxes are available:

    sheet[row, column]

This syntax helps you iterate the data by row and by column. If you use excel positions, the syntax below helps you get the cell instantly, without converting an alphabetical column index to an integer:

    sheet['A1']

Please note that with excel positions, the top left corner is 'A1'.

For example, suppose you have the data sheet example.xls. Here is the example code showing how you can randomly access a cell:

    sheet = pyexcel.get_sheet(file_name="example.xls")
    sheet.content
    print(sheet[0, 0])
    print(sheet["A1"])    # the same cell
    sheet[0, 0] = 100     # overwrite it (the new value is illustrative)
    print(sheet[0, 0])

Note: in order to set a value to a cell, please use sheet[row_index, column_index] = new_value.

Random access to rows and columns

Continuing with the previous excel file, you can access a row and a column separately:

    sheet.row[1]
    sheet.column[2]

Use custom names instead of index

Alternatively, it is possible to use the first row to refer to each column:

    sheet.name_columns_by_row(0)
    print(sheet[0, "Column 1"])   # assuming the header row contains "Column 1", ...
    sheet[0, "Column 1"] = 1000
    print(sheet[0, "Column 1"])

You will have noticed that the row index has changed. It is because the first row is taken as the column names; hence all rows after the first row are shifted. Accessing the columns changes too:

    sheet.column['Column 1'][0]

which now refers to the same cell as sheet[0, 'Column 1'].
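Here is a self-contained sketch of column naming, using in-memory data (the header text and values below are illustrative) so it runs without example.xls:

    import pyexcel

    sheet = pyexcel.Sheet([["Column 1", "Column 2"], [1, 2], [3, 4]])
    sheet.name_columns_by_row(0)
    print(sheet.colnames)              # ['Column 1', 'Column 2']
    print(sheet.column["Column 2"])    # [2, 4]
    print(sheet[0, "Column 2"])        # 2 -- row indices now start after the header
    sheet[1, "Column 1"] = 30          # set a cell via its column name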
Furthermore, it is possible to use the first column to refer to each row:

    sheet.name_rows_by_column(0)

To access the same cell, we can use this line:

    sheet.row["Row 1"][1]   # assuming the first column contains "Row 1", ...

For the same reason, the row index has been reduced by one. Since we have named columns and rows, it is possible to access the same cell like this:

    print(sheet["Row 1", "Column 1"])
    sheet["Row 1", "Column 1"] = 10000
    print(sheet["Row 1", "Column 1"])

Note: when you have named your rows and columns, in order to set a value to a cell, please use sheet[row_name, column_name] = new_value.

For a multiple sheet file, you can regard it as a three dimensional array if you use Book. So you access each cell via this syntax:

    book[sheet_index][row, column]

or:

    book["sheet_name"][row, column]

Suppose you have a multi-sheet file; you can randomly access a cell in any of its sheets:

    book = pyexcel.get_book(file_name="example.xls")
    print(book["Sheet 1"][0, 0])
    print(book[0][0, 0])   # the same cell

Tip: with pyexcel, you can regard a single sheet reader as a two dimensional array and a multi-sheet excel book reader as an ordered dictionary of two dimensional arrays.

Reading a single sheet excel file

Suppose you have a csv, xls or xlsx file ("example.csv", "example.xlsx" and "example.xlsm" work equally well). The following code will give you the data in json:

    import json

    sheet = pyexcel.get_sheet(file_name="example.xls")
    print(json.dumps(sheet.to_array()))

which prints the sheet as a list of lists.
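As a self-contained illustration of the book access syntax shown above (sheet names and values are made up), the same three dimensional access works on a Book built in memory:

    import pyexcel

    book = pyexcel.Book({
        "Sheet 1": [[1, 2], [3, 4]],
        "Sheet 2": [[5, 6], [7, 8]],
    })
    print(book["Sheet 2"][0, 1])   # 6
    print(book[1][0, 1])           # the same cell, addressed by sheet index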
Read the sheet as a dictionary

Suppose you have a csv, xls or xlsx file with a header row ("example.csv", "example.xlsx" and "example.xlsm" work the same way). The following code will give you the data series in a dictionary:

    sheet = pyexcel.get_sheet(file_name="example_series.xls",
                              name_columns_by_row=0)
    sheet.to_dict()

which returns an OrderedDict keyed by the column names ('Column 1', 'Column 2', 'Column 3'), each mapping to the list of values in that column.

Can I get an array of dictionaries, one per row? Suppose you have the same data; the following code will produce what you want:

    sheet = pyexcel.get_sheet(file_name="example.xls",
                              name_columns_by_row=0)
    records = sheet.to_records()
    for record in records:
        keys = sorted(record.keys())
        print("{")
        for key in keys:
            print("'%s': %s" % (key, record[key]))
        print("}")

Each printed block is one row rendered as a dictionary keyed by the column names.
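The same record view is available straight from the top-level API. A minimal sketch with made-up data, so it runs without any example file:

    import pyexcel

    # write a tiny csv with a header row, then read it back as records
    pyexcel.save_as(array=[["name", "age"], ["Adam", 28], ["Beatrice", 29]],
                    dest_file_name="tutorial.csv")

    for record in pyexcel.get_records(file_name="tutorial.csv"):
        print(record["name"], record["age"])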
Writing a single sheet excel file

Suppose you have an array as the following; the code below will write it as an excel file of your choice ("output.xls", "output.xlsx", "output.ods" and "output.xlsm" all work, given the matching plugin):

    array = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # values are illustrative
    sheet = pyexcel.Sheet(array)
    sheet.save_as("output.csv")

Suppose you have a dictionary as the following; the code below will write it as an excel file of your choice:

    example_dict = {"Column 1": [1, 2, 3],
                    "Column 2": [4, 5, 6],
                    "Column 3": [7, 8, 9]}
    sheet = pyexcel.get_sheet(adict=example_dict)
    sheet.save_as("output.csv")

Write a multiple sheet excel file

Suppose you have the previous data as a dictionary and you want to save it as a multiple sheet excel file:

    content = {
        'Sheet 1': [[1, 2], [3, 4]],
        'Sheet 2': [['X', 'Y', 'Z'], [1, 2, 3], [4, 5, 6]],
        'Sheet 3': [['O', 'P', 'Q'], [3, 2, 1], [4, 3, 2]],
    }
    book = pyexcel.get_book(bookdict=content)
    book.save_as("output.xls")

You shall get an xls file.

Read a multiple sheet excel file

Let's read the previous file back:

    book = pyexcel.get_book(file_name="output.xls")
    sheets = book.to_dict()
    for name in sheets.keys():
        print(name)

which prints the three sheet names: Sheet 1, Sheet 2 and Sheet 3.
Work with data series in a single sheet

Suppose you have the following data in any of the supported excel formats again:

    sheet = pyexcel.get_sheet(file_name="example_series.xls",
                              name_columns_by_row=0)

Play with data

You can get the headers:

    print(list(sheet.colnames))
    # ['Column 1', 'Column 2', 'Column 3']

You can use a utility function to get it all in a dictionary:

    sheet.to_dict()
    # OrderedDict([('Column 1', [...]), ('Column 2', [...]), ('Column 3', [...])])

Maybe you want to get only the data without the column headers. You can call rows() instead:

    list(sheet.rows())

You can get the data from the bottom to the top by calling rrows() instead:

    list(sheet.rrows())

You might want the data arranged vertically. You can call columns() instead:

    list(sheet.columns())

You can get the columns in reverse sequence as well, by calling rcolumns() instead:

    list(sheet.rcolumns())

Do you want to flatten the data? You can get the content in a one dimensional array. If you are interested in playing with one dimensional enumeration, you can check out these functions: enumerate(), reverse(), vertical() and rvertical():

    list(sheet.enumerate())
    list(sheet.reverse())
    list(sheet.vertical())
    list(sheet.rvertical())
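Since the outputs above depend on example_series.xls, here is an equivalent sketch on an in-memory 3x3 sheet, with the expected results shown as comments:

    import pyexcel

    sheet = pyexcel.Sheet([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    print(list(sheet.rows()))      # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(list(sheet.rrows()))     # rows from bottom to top
    print(list(sheet.columns()))   # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
    print(list(sheet.rcolumns()))  # columns from right to left
    print(list(sheet.enumerate())) # [1, 2, 3, 4, 5, 6, 7, 8, 9]
    print(list(sheet.rvertical())) # column-major traversal, reversed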
Sheet: Data Manipulation

The data in a sheet is represented by Sheet, which maintains the data as a list of lists. You can regard Sheet as a two dimensional array with additional iterators. Random access to individual columns and rows is exposed by column and row.

Column manipulation

Suppose you have a data file as the following:

    sheet = pyexcel.get_sheet(file_name="example.xls",
                              name_columns_by_row=0)

with three columns named Column 1, Column 2 and Column 3, and you want to update Column 2 with new data. Either of these works:

    sheet.column["Column 2"] = [11, 12, 13]   # values are illustrative
    sheet.column[1] = [11, 12, 13]

Remove one column of a data file

If you want to remove Column 2, you can just call:

    del sheet.column["Column 2"]

after which only Column 1 and Column 3 remain.

Append more columns to a data file

Continuing from the previous example, suppose you want to add two more columns, Column 4 and Column 5. Here is the example code to append two extra columns:

    extra_data = [
        ["Column 4", "Column 5"],
        [10, 11],
        [12, 13],
        [14, 15],
    ]
    sheet3 = pyexcel.Sheet(extra_data)
    sheet.column += sheet3

After the append, the sheet has Column 1, Column 3, Column 4 and Column 5, and the two new columns can be read back via sheet.column["Column 4"] and sheet.column["Column 5"]. See also the self-contained sketch after this section.

Cherry-pick some columns to be removed

Suppose you have the following data:

    data = [
        ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'],
        [1, 2, 3, 4, 5, 6, 7, 8],
    ]
    sheet = pyexcel.Sheet(data, name_columns_by_row=0)

and you want to remove several columns by name, ending with 'h'. This is how you do it:

    del sheet.column['a', 'c', 'e', 'h']   # the exact selection is illustrative

leaving a sheet with only the remaining columns.

What if the headers are in a different row?

Suppose you have data where the column headers (Column 1, Column 2, Column 3) sit in a row other than the first one. The way to name your columns is to pass the index of that row, for example:

    sheet.name_columns_by_row(1)   # if the headers sit in the second row

After this, the sheet's column names are Column 1, Column 2 and Column 3, and only the remaining rows are kept as content.
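Returning to the append and delete operations above, here is a self-contained sketch with made-up headers and values:

    import pyexcel

    sheet = pyexcel.Sheet([["Column 1", "Column 2"], [1, 2], [3, 4]],
                          name_columns_by_row=0)

    # the extra sheet's first row becomes the new column names on append
    extra = pyexcel.Sheet([["Column 3"], [5], [6]])
    sheet.column += extra
    print(sheet.colnames)          # ['Column 1', 'Column 2', 'Column 3']

    del sheet.column["Column 2"]
    print(sheet.colnames)          # ['Column 1', 'Column 3']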
Row manipulation

Suppose you have data where the first column holds the row names Row 1, Row 2 and Row 3. You can name your rows by column index 0:

    sheet.name_rows_by_column(0)

Then you can access rows by name:

    sheet.row["Row 1"]

which returns that row as a list.

Sheet: Data Filtering

Use the filter() function to apply a filter immediately; the content is modified in place. Suppose you have the following data in any of the supported excel formats:

    import pyexcel

    sheet = pyexcel.get_sheet(file_name="example_series.xls",
                              name_columns_by_row=0)
    sheet.content   # three columns: Column 1, Column 2, Column 3

Filter out some data

You may want to filter out some rows by index:

    sheet.filter(row_indices=[0, 2])   # indices are illustrative
    sheet.content

Let's try to further filter out a column:

    sheet.filter(column_indices=[1])
    sheet.content

Save the data

Let's save the previously filtered data:

    sheet.save_as("example_series_filter.xls")

When you open example_series_filter.xls, you will find only the remaining rows and columns.

How do I filter out empty rows in my sheet?

Suppose you have the following data in a sheet and you want to remove the rows with blanks:

    import pyexcel as pe

    sheet = pe.Sheet([[1, 2, 3], ['', '', ''], ['', '', ''], [1, 2, 3]])

You can use pyexcel.filters.RowValueFilter, which examines each row and returns True if the row should be filtered out. So, let's define a filter function:

    def filter_row(row_index, row):
        result = [element for element in row if element != '']
        return len(result) == 0

and then apply the filter on the sheet:

    del sheet.row[filter_row]

after which only the two non-empty rows remain.
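For completeness, here is a self-contained sketch of dropping rows and columns by index on in-memory data (values are made up), which achieves the same effect as the filter() calls above:

    import pyexcel

    sheet = pyexcel.Sheet([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
    del sheet.row[2]         # drop the third row
    del sheet.row[0]         # then the first one
    del sheet.column[1]      # and the middle column
    print(sheet.to_array())  # [[4, 6], [10, 12]]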
Sheet: Formatting

The previous sections assumed the data was already in the format you want. In reality, you often have to manipulate the data types a bit to suit your needs. Hence, formatters come into the scene. Use format() to apply a formatter immediately.

Note: int, float and datetime values are automatically detected in csv files in recent pyexcel versions.

Convert a column of numbers to strings

Suppose you have the following data:

    import pyexcel

    data = [
        ["userid", "name"],
        [10120, "Adam"],     # ids are illustrative
        [10121, "Bella"],
        [10122, "Cedar"],
    ]
    sheet = pyexcel.Sheet(data)
    sheet.name_columns_by_row(0)
    sheet.column["userid"]

As you can see, the userid column is of int type. Next, let's convert the column to string format:

    sheet.column.format("userid", str)
    sheet.column["userid"]
    # ['10120', '10121', '10122']

Cleanse the cells in a spreadsheet

Sometimes the data in a spreadsheet may have unwanted strings in all or some cells. Let's take an example. Suppose we have a spreadsheet that contains all strings, but it has random spaces before and after the text values, and some fields have weird characters such as non-breaking spaces:

    data = [
        ["  version", "comments", "   author  "],        # strings are illustrative
        ["  v0.1   ", " release versions", "  Eda"],
        [" v0.2  ", "useful updates \u00a0\u00a0", " Freud"],
    ]
    sheet = pyexcel.Sheet(data)
    sheet.content

Now try to create a custom cleanse function:

    def cleanse_func(v):
        v = v.replace("\u00a0", "")
        v = v.rstrip().strip()
        return v

Then let's apply it to every cell with map():

    sheet.map(cleanse_func)

So in the end you get clean values: the stray spaces and non-breaking spaces are gone from every cell.
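Formatters are not limited to str; any one-argument callable works. A small sketch with made-up data, converting a text column to float so it can be summed:

    import pyexcel

    sheet = pyexcel.Sheet([["price"], ["1.5"], ["2.25"]], name_columns_by_row=0)
    sheet.column.format("price", float)       # coerce every cell in the column
    print(sum(sheet.column["price"]))         # 3.75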
Book: Sheet Operations

Access individual sheets

You can access an individual sheet of a book as an attribute named after the sheet:

    book = pyexcel.get_book(file_name="book.xls")
    book.sheet1            # for a sheet named "sheet1"

or via array notation, which is needed when there is a space in the sheet name:

    book["Sheet 1"]

Merge excel books

Suppose you have two excel books, each with three sheets. You can merge them to get a new book; you can also merge individual sheets:

    book1 = pyexcel.get_book(file_name="book1.xls")
    book2 = pyexcel.get_book(file_name="book2.xlsx")
    merged_book = book1 + book2
    merged_book = book1["Sheet 1"] + book2["Sheet 2"]
    merged_book = book1["Sheet 1"] + book2
    merged_book = book1 + book2["Sheet 2"]

Manipulate individual sheets

Merge sheets into a single sheet

Suppose you want to merge many csv files row by row into a new sheet:

    import glob

    merged = pyexcel.Sheet()
    for file in glob.glob("*.csv"):
        merged.row += pyexcel.get_sheet(file_name=file)
    merged.save_as("merged.csv")

How do I read a book, process it and save it to a new book?

Yes, you can do that. The code looks like this:

    import pyexcel

    book = pyexcel.get_book(file_name="yourfile.xls")
    for sheet in book:
        # do your processing with sheet
        # do filtering?
        pass
    book.save_as("output.xls")

What would happen if I save a multi sheet book into a "csv" file?

Well, you will get one csv file per sheet. Suppose you have this code:

    content = {
        'Sheet 1': [[1, 2], [3, 4]],
        'Sheet 2': [['X', 'Y', 'Z'], [1, 2, 3], [4, 5, 6]],
        'Sheet 3': [['O', 'P', 'Q'], [3, 2, 1], [4, 3, 2]],
    }
    book = pyexcel.Book(content)
    book.save_as("myfile.csv")

You will end up with three csv files:

    import glob

    outputfiles = glob.glob("myfile_*.csv")
    for file in sorted(outputfiles):
        print(file)

    # myfile__Sheet 1__0.csv
    # myfile__Sheet 2__1.csv
    # myfile__Sheet 3__2.csv

and their content is the value of the dictionary at the corresponding key. Alternatively, you could use the save_book_as() function:

    pyexcel.save_book_as(bookdict=content, dest_file_name="myfile.csv")

After I have saved my multiple sheet book in csv format, how do I get it back?

First of all, you can read them back individually as csv files using the get_sheet() method. Secondly, pyexcel can do the magic to load all of them back into a book. You just need to provide the common name before the separator "__":

    book = pyexcel.get_book(file_name="myfile.csv")

and the resulting book contains Sheet 1, Sheet 2 and Sheet 3 again.
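A compact way to check that the csv round trip preserved everything, assuming the myfile.csv family from above has just been written:

    import pyexcel

    book = pyexcel.get_book(file_name="myfile.csv")
    print(book.sheet_names())          # ['Sheet 1', 'Sheet 2', 'Sheet 3']
    print(book["Sheet 2"].to_array())  # the same rows that went in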
Cook Book

Recipes

Warning: pyexcel does not consider fonts, styles and charts at all. In the resulting excel files, fonts, styles and charts will not be transferred.

These recipes give one-stop utility functions for known use cases. Similar functionality can be achieved using other application interfaces.

Update one column of a data file

Suppose you have a data file, example.xls, with columns Column 1, Column 2 and Column 3, and you want to update Column 2 with new data. Here is the code:

    from pyexcel.cookbook import update_columns

    custom_column = {"Column 2": [11, 12, 13]}   # values are illustrative
    update_columns("example.xls", custom_column, "output.xls")

Your output.xls will have the updated Column 2.

Update one row of a data file

Suppose you have the same data file, example.xls, with rows named Row 1, Row 2 and Row 3 in its first column, and you want to update one of the rows. Here is the code:

    from pyexcel.cookbook import update_rows

    custom_row = {"Row 1": [11, 12, 13]}   # row name and values are illustrative
    update_rows("example.xls", custom_row, "output.xls")
    pyexcel.get_sheet(file_name="output.xls")

Merge two files into one

Suppose you want to merge the following two data files: example.csv (Column 1, Column 2, Column 3) and example.xls (Column 4, Column 5). The following code will merge the two into one file, say "output.xls":

    from pyexcel.cookbook import merge_two_files

    merge_two_files("example.csv", "example.xls", "output.xls")

The output.xls would have the columns of both inputs side by side, Column 1 through Column 5.

Select candidate columns of two files and form a new one

Suppose you have these two files (created below for illustration), one with Column 1 to Column 5 and the other with Column 6 to Column 10, and you want to drop some columns from each before merging the rest:

    data = [
        ["Column 1", "Column 2", "Column 3", "Column 4", "Column 5"],
        [1, 2, 3, 4, 5],
        [6, 7, 8, 9, 10],
    ]
    pyexcel.Sheet(data).save_as("example.csv")

    data = [
        ["Column 6", "Column 7", "Column 8", "Column 9", "Column 10"],
        [11, 12, 13, 14, 15],
    ]
    pyexcel.Sheet(data).save_as("example.xls")

The following code will do the job:
    from pyexcel.cookbook import merge_two_readers

    sheet1 = pyexcel.get_sheet(file_name="example.csv", name_columns_by_row=0)
    sheet2 = pyexcel.get_sheet(file_name="example.xls", name_columns_by_row=0)
    del sheet1.column[1]      # drop an unwanted column from the first file
    del sheet2.column[1]      # and from the second (indices are illustrative)
    merge_two_readers(sheet1, sheet2, "output.xls")

Merge two files into a book where each file becomes a sheet

Suppose you want to merge the following two data files, created here for illustration:

    data = [
        ["Column 1", "Column 2", "Column 3"],
        [1, 2, 3],
        [4, 5, 6],
    ]
    pyexcel.Sheet(data).save_as("example.csv")

    data = [
        ["Column 4", "Column 5"],
        [7, 8],
        [9, 10],
    ]
    pyexcel.Sheet(data).save_as("example.xls")

The following code will merge the two into one file, say "output.xls":

    from pyexcel.cookbook import merge_all_to_a_book

    merge_all_to_a_book(["example.csv", "example.xls"], "output.xls")

The output.xls uses each input file name as a sheet name, with that file's data inside the sheet.

Merge all excel files in a directory into a book where each file becomes a sheet

The following code will merge every csv file in a directory into one file, say "output.xls":

    from pyexcel.cookbook import merge_all_to_a_book
    import glob

    merge_all_to_a_book(glob.glob("your_csv_directory/*.csv"), "output.xls")

You can mix and match with other excel formats: xls, xlsm and ods. For example, if you are sure you have only xls, xlsm, xlsx, ods and csv files in your_excel_file_directory, you can do the following:

    from pyexcel.cookbook import merge_all_to_a_book
    import glob

    merge_all_to_a_book(glob.glob("your_excel_file_directory/*.*"), "output.xls")

Split a book into single sheet files

Suppose you have many sheets in a work book and you would like to separate each into a single sheet excel file. You can easily do this:

    from pyexcel.cookbook import split_a_book

    split_a_book("megabook.xls", "output.xls")

    import glob
    outputfiles = glob.glob("*_output.xls")
    for file in sorted(outputfiles):
        print(file)

    # Sheet 1_output.xls
    # Sheet 2_output.xls
    # Sheet 3_output.xls

For the output file, you can specify any of the supported formats.
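Since the snippet above assumes megabook.xls already exists, here is a self-contained sketch (requires the pyexcel-xls plugin; sheet names and values are made up) that creates it first and then splits it:

    import pyexcel
    from pyexcel.cookbook import split_a_book

    pyexcel.save_book_as(
        bookdict={
            "Sheet 1": [[1, 2], [3, 4]],
            "Sheet 2": [[5, 6], [7, 8]],
            "Sheet 3": [[9, 10], [11, 12]],
        },
        dest_file_name="megabook.xls",
    )
    split_a_book("megabook.xls", "output.xls")
    # produces "Sheet 1_output.xls", "Sheet 2_output.xls", "Sheet 3_output.xls"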
Extract just one sheet from a book

Suppose you just want to extract one sheet from the many sheets that exist in a work book and you would like to separate it into a single sheet excel file. You can easily do this:

    import os
    from pyexcel.cookbook import extract_a_sheet_from_a_book

    extract_a_sheet_from_a_book("megabook.xls", "Sheet 1", "output.xls")
    if os.path.exists("Sheet 1_output.xls"):
        print("Sheet 1_output.xls exists")

For the output file, you can specify any of the supported formats.

Loading from other sources

Get back into pyexcel

list

    import pyexcel as p

    two_dimensional_list = [
        [1, 2, 3, 4],          # values are illustrative
        [5, 6, 7, 8],
        [9, 10, 11, 12],
    ]
    sheet = p.get_sheet(array=two_dimensional_list)
    sheet

dict

    a_dictionary_of_key_value_pair = {
        "IE": 0.2,             # values are illustrative
        "Firefox": 0.3,
    }
    sheet = p.get_sheet(adict=a_dictionary_of_key_value_pair)
    sheet                      # one column per key: Firefox, IE

    a_dictionary_of_one_dimensional_arrays = {
        "Column 1": [1, 2, 3, 4],
        "Column 2": [5, 6, 7, 8],
        "Column 3": [9, 10, 11, 12],
    }
    sheet = p.get_sheet(adict=a_dictionary_of_one_dimensional_arrays)
    sheet

records

    a_list_of_dictionaries = [
        {"name": "Adam",     "age": 28},   # ages are illustrative
        {"name": "Beatrice", "age": 29},
        {"name": "Ceri",     "age": 30},
        {"name": "Dean",     "age": 26},
    ]
    sheet = p.get_sheet(records=a_list_of_dictionaries)
    sheet                      # an age and a name column, one row per record
book dict

    a_dictionary_of_two_dimensional_arrays = {
        'Sheet 1': [[1, 2], [3, 4]],
        'Sheet 2': [['X', 'Y', 'Z'], [1, 2, 3], [4, 5, 6]],
        'Sheet 3': [['O', 'P', 'Q'], [3, 2, 1], [4, 3, 2]],
    }
    book = p.get_book(bookdict=a_dictionary_of_two_dimensional_arrays)
    book

and the book contains the three sheets, rendered one after another.

How to load a sheet from a url

Suppose you have an excel file hosted somewhere:

    sheet = p.get_sheet(url="...")   # the address of the hosted file goes here
    sheet.csv

For a sheet

To get content into an existing Sheet instance, set its url attribute:

    another_sheet = p.Sheet()
    another_sheet.url = ".../multiple-sheets-example.xls"
    another_sheet.content

For a book

How about setting content via a url?

    another_book = p.Book()
    another_book.url = ".../multiple-sheets-example.xls"

and the book's three sheets are populated from the downloaded file.
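A minimal sketch of loading from a url. The address below is a placeholder only; point it at any csv or excel file you can reach over http(s), and note that the matching format plugin must be installed:

    import pyexcel

    sheet = pyexcel.get_sheet(url="https://example.com/some-data.csv")
    print(sheet)

    # the same works on an existing instance via the url attribute
    another_sheet = pyexcel.Sheet()
    another_sheet.url = "https://example.com/some-data.csv"
    print(another_sheet.content)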
Real World Cases

Questions and answers

- Python Flask: writing to a csv file and reading it
- PyQt: import an xls file and populate a QTableWidget
- How do I write data to a csv file in columns and rows from a list in Python?
- How to write dictionary values to a csv file using Python
- Python: convert csv to xlsx
- How to read data from excel and set the data type
- Remove or keep specific columns in a csv file
- How can I put a csv file in an array?
- How to inject csv data into a database

Here is a real case from Stack Overflow. Due to the author's ignorance, the user would have liked the code in Matlab rather than Python. Hence, I am sharing my pyexcel solution here.

Problem definition

Here is my csv file: a table keyed by pdb_id, where each row holds a protein id followed by a series of numeric values. I want to put this data in a mysql table in the form:

    protein_id | protein_key | value_of_key

I have created a table with the following code:

    sql = """CREATE TABLE allproteins (
        Protein_id CHAR(10),      -- the column length is illustrative
        Protein_Key INT,
        value_of_key INT)"""

I need the code for the insert.

pyexcel solution

If you could insert an id field to act as the primary key, it can be mapped using sqlalchemy's ORM:

    $ sqlite3 /tmp/stack.db
    sqlite> CREATE TABLE allproteins (
       ...>   id INT,
       ...>   protein_id CHAR(10),
       ...>   protein_key INT,
       ...>   value_of_key INT);

Here is the data mapping script via sqlalchemy:

    # mapping your database via sqlalchemy
    from sqlalchemy import create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import sessionmaker

    # check sqlalchemy's documentation for other database servers
    engine = create_engine("sqlite:////tmp/stack.db")
    Base = declarative_base()

    class Proteins(Base):
        __tablename__ = 'allproteins'
        id = Column(Integer, primary_key=True, autoincrement=True)  # <- appended 'id' field
        protein_id = Column(String(10))
        protein_key = Column(Integer)
        value_of_key = Column(Integer)

    Session = sessionmaker(bind=engine)

Here is the short script to get the data inserted into the database:

    import pyexcel as p
    from itertools import product

    # data insertion code starts here
    sheet = p.get_sheet(file_name="csv-to-mysql-in-matlab-code.csv", delimiter='\t')
    sheet.name_columns_by_row(0)
    sheet.name_rows_by_column(0)
    print(sheet)

    results = []
    for protein_id, protein_key in product(sheet.rownames, sheet.colnames):
        results.append([protein_id, protein_key, sheet[str(protein_id), protein_key]])

    sheet = p.get_sheet(array=results)
    sheet.colnames = ['protein_id', 'protein_key', 'value_of_key']
    print(sheet)

    sheet.save_to_database(session=Session(), table=Proteins)

Here is the data inserted:

    $ sqlite3 /tmp/stack.db
    sqlite> SELECT * FROM allproteins;

which lists one row per (protein_id, protein_key) pair with its value.
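Because the walkthrough above depends on an external tab-separated file and a pre-created sqlite database, here is a condensed, self-contained sketch of Sheet.save_to_database() against an in-memory sqlite database (table name, column names and values are made up):

    import pyexcel
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Protein(Base):
        __tablename__ = "allproteins"
        id = Column(Integer, primary_key=True, autoincrement=True)
        protein_id = Column(String(10))
        protein_key = Column(Integer)
        value_of_key = Column(Integer)

    engine = create_engine("sqlite://")      # in-memory database
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)

    # the sheet's column names match the model's column names, so no mapping
    # dictionary is needed
    sheet = pyexcel.Sheet(
        [["protein_id", "protein_key", "value_of_key"],
         ["A123", 1, 10],
         ["A123", 2, 20]],
        name_columns_by_row=0,
    )
    sheet.save_to_database(session=Session(), table=Protein)

    print(Session().query(Protein).count())  # 2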
API Documentation

API Reference

This is intended for users of pyexcel.

Signature functions

Obtaining data from an excel file:

- get_array(**keywords) — obtain an array from an excel source
- get_dict(name_columns_by_row=0, **keywords) — obtain a dictionary from an excel source
- get_records(name_columns_by_row=0, **keywords) — obtain a list of records from an excel source
- get_book_dict(**keywords) — obtain a dictionary of two dimensional arrays
- get_book(**keywords) — get an instance of Book from an excel source
- get_sheet(**keywords) — get an instance of Sheet from an excel source
- iget_book(**keywords) — get an instance of BookStream from an excel source
- iget_array(**keywords) — obtain a generator of a two dimensional array from an excel source
- iget_records(custom_headers=None, **keywords) — obtain a generator of records from an excel source
- free_resources() — close file handles opened by the signature functions that start with 'i'

pyexcel.get_array

pyexcel.get_array(**keywords)

Obtain an array from an excel source. It accepts the same parameters as get_sheet() but returns an array instead. The parameters below are shared by all of the sheet-oriented signature functions; not all parameters are needed for every source. Here is a table:

- loading from file: file_name, sheet_name, keywords
- loading from string: file_content, file_type, sheet_name, keywords
- loading from stream: file_stream, file_type, sheet_name, keywords
- loading from sql: session, table
- loading from sql in django: model
- loading from query sets: any query sets (sqlalchemy or django)
- loading from dictionary: adict, with_keys
- loading from records: records
- loading from array: array
- loading from an url: url

Parameters:

- file_name: a file with a supported file extension
- file_content: the file content
- file_stream: the file stream
- file_type: the file type in file_content or file_stream
- session: database session
- table: database table
- model: a django model
- adict: a dictionary of one dimensional arrays
- url: a download http url for your excel file
- with_keys: load with the previous dictionary's keys; default is True
- records: a list of dictionaries that have the same keys
- array: a two dimensional array, a list of lists
- sheet_name: sheet name. If sheet_name is not given, the default sheet at index 0 is loaded
- start_row: int, defaults to 0. It allows you to skip rows at the beginning
- row_limit: int, defaults to -1, meaning till the end of the whole sheet. It allows you to skip the tailing rows
- start_column: int, defaults to 0. It allows you to skip columns on your left hand side
- column_limit: int, defaults to -1, meaning till the end of the columns. It allows you to skip the tailing columns
- skip_row_func: allows you to write your own row skipping function. The protocol is to return pyexcel_io.constants.SKIP_DATA if skipping data, pyexcel_io.constants.TAKE_DATA to read data, and pyexcel_io.constants.STOP_ITERATION to exit the reading procedure
- skip_column_func: allows you to write your own column skipping function, with the same protocol as above
- skip_empty_rows: bool, defaults to False. Toggle it to True if the trailing empty rows are useless, but note it does affect the number of rows
- row_renderer: you could choose to write a custom row renderer applied while the data is being read
- auto_detect_float: defaults to True
- auto_detect_int: defaults to True
- auto_detect_datetime: defaults to True
- ignore_infinity: defaults to True
- library: choose a specific pyexcel-io plugin for reading
- source_library: choose a specific data source plugin for reading
- parser_library: choose a pyexcel parser plugin for reading
- skip_hidden_sheets: default is True. Please toggle it to read hidden sheets

Parameters related to the csv file format (for csv, fmtparams are accepted):

- delimiter: field separator
- lineterminator: line terminator
- encoding: csv specific. Specify the file encoding of the csv file, for example encoding='latin1'. Especially, encoding='utf-8-sig' would add a utf-8 BOM header if used in the renderer, or would parse csv with a utf-8 BOM header if used in the parser
- escapechar: a one-character string used by the writer to escape the delimiter if quoting is set to QUOTE_NONE, and the quotechar if doublequote is False
- quotechar: a one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters. It defaults to '"'
- quoting: controls when quotes should be generated by the writer and recognised by the reader. It can take on any of the QUOTE_* constants (see the csv module documentation) and defaults to QUOTE_MINIMAL
- skipinitialspace: when True, whitespace immediately following the delimiter is ignored. The default is False
- pep_0515_off: controls the handling of PEP 515 underscores in numeric literals on Python 3.6 and above. The default is False

Parameters related to the xls file format (please note the following parameters apply to pyexcel-xls; more details can be found in xlrd.open_workbook()):

- logfile: an open file to which messages and diagnostics are written
- verbosity: increases the volume of trace material written to the logfile
- use_mmap: whether to use the mmap module is determined heuristically. Use this argument to override the result. Current heuristic: mmap is used if it exists
- encoding_override: used to overcome missing or bad codepage information in older-version files
- formatting_info: the default is False, which saves memory. When True, formatting information will be read from the spreadsheet file. This provides all cells, including empty and blank cells; formatting information is available for each cell
- ragged_rows: the default of False means all rows are padded out with empty cells so that all rows have the same size as found in ncols. True means that there are no empty cells at the ends of rows. This can result in substantial memory savings if rows are of widely varying sizes. See also the row_len() method

pyexcel.get_dict

pyexcel.get_dict(name_columns_by_row=0, **keywords)

Obtain a dictionary from an excel source. It accepts the same parameters as get_sheet() but returns a dictionary instead. Specifically, name_columns_by_row specifies the row to be used as dictionary keys; it defaults to 0, i.e. the first row. If you would use a column index instead, you should do:

    get_dict(name_columns_by_row=-1, name_rows_by_column=0)

The start_row/start_column pagination examples and the "formatting while transcoding a big data file" example given in the tutorial above apply to get_dict() unchanged, as does the full parameter list under get_array().
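A minimal sketch of a few of the keywords above in combination (the file name, delimiter and encoding are assumptions for illustration):

    import pyexcel

    # read a semicolon-delimited, latin1-encoded csv, skipping its header row
    rows = pyexcel.get_array(
        file_name="data.csv",
        delimiter=";",
        encoding="latin1",
        start_row=1,
        row_limit=100,
    )
    print(len(rows))

    # the same keywords work for get_dict(); here the header row is kept as keys
    columns = pyexcel.get_dict(file_name="data.csv", delimiter=";", encoding="latin1")
    print(list(columns.keys()))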
pyexcel.get_records

pyexcel.get_records(name_columns_by_row=0, **keywords)

Obtain a list of records from an excel source. It accepts the same parameters as get_sheet() but returns a list of dictionaries (records) instead. Specifically, name_columns_by_row specifies the row to be used as dictionary keys; it defaults to 0, i.e. the first row. If you would use a column index instead, you should do:

    get_records(name_columns_by_row=-1, name_rows_by_column=0)

The pagination and transcoding examples, as well as the full parameter list (including the csv- and xls-specific keywords), are the same as for get_array() and get_sheet() above.
pyexcel.get_book_dict

pyexcel.get_book_dict(**keywords)

Obtain a dictionary of two dimensional arrays. It accepts the same parameters as get_book() but returns a dictionary instead, where the dictionary has the sheet names as text keys and two dimensional arrays as values. Here is a table of parameters:

- loading from file: file_name, keywords
- loading from string: file_content, file_type, keywords
- loading from stream: file_stream, file_type, keywords
- loading from sql: session, tables
- loading from django models: models
- loading from dictionary: bookdict
- loading from an url: url

Parameters:

- file_name: a file with a supported file extension
- file_content: the file content
- file_stream: the file stream
- file_type: the file type in file_content or file_stream
- session: database session
- tables: a list of database tables
- models: a list of django models
- bookdict: a dictionary of two dimensional arrays
- url: a download http url for your excel file
- sheets: a list of mixed sheet names and sheet indices to be read. This is done to keep compatibility with pandas. With this parameter, more than one sheet can be read, and you have the control to read the sheets of your interest instead of all available sheets
- auto_detect_float: defaults to True
- auto_detect_int: defaults to True
- auto_detect_datetime: defaults to True
- ignore_infinity: defaults to True
- library: choose a specific pyexcel-io plugin for reading
- source_library: choose a specific data source plugin for reading
- parser_library: choose a pyexcel parser plugin for reading
- skip_hidden_sheets: default is True. Please toggle it to read hidden sheets

The csv-specific keywords (delimiter, lineterminator, encoding, escapechar, quotechar, quoting, skipinitialspace, pep_0515_off) listed under get_array() apply here as well.

pyexcel.get_book

pyexcel.get_book(**keywords)

Get an instance of Book from an excel source. It takes exactly the same source table and parameters as get_book_dict() above, including the csv-specific keywords.
pyexcel.get_sheet

pyexcel.get_sheet(**keywords)

Get an instance of Sheet from an excel source. get_sheet() is the reference signature for all of the single-sheet functions: the start_row/start_column pagination examples and the "formatting while transcoding a big data file" example shown in the tutorial apply here directly, and the full parameter list, including the csv- and xls-specific keywords, is the one given under get_array() above.

pyexcel.iget_book

pyexcel.iget_book(**keywords)

Get an instance of BookStream from an excel source. The first use case is to get all sheet names without extracting the sheets into memory. It takes the same source table and parameters as get_book() above, including the csv-specific keywords.
6,349 | file_stream the file stream file_type the file type in file_content or file_stream session database session tables list of database table models list of django models bookdict dictionary of two dimensional arrays url download http url for your excel file sheetsa list of mixed sheet names and sheet indices to be read this is done to keep pandas compactibility with this parametermore than one sheet can be read and you have the control to read the sheets of your interest instead of all available sheets auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false when you use this function to work on physical filesthis function will leave its file handle open when you finish the operation on its datayou need to call pyexcel free_resources(to close file hande(sfor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles support the project |
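A hedged sketch of using iget_book to list sheet names without loading sheet data into memory. The file name below is an assumption, and so is the sheet_names() accessor on the returned book stream (the text above only states the use case, not the exact call), so treat this as illustrative rather than definitive:

import pyexcel as pe

# sketch only: peek at the sheet names without extracting the sheets into memory;
# "big_book.xlsx" is an assumed input file, and sheet_names() on the returned
# BookStream is assumed from the use case described above
book_stream = pe.iget_book(file_name="big_book.xlsx")
print(book_stream.sheet_names())

# iget_book leaves its file handle open, so release it when done
pe.free_resources()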
6,350 | pyexcel iget_array pyexcel iget_array(**keywordsobtain generator of an two dimensional array from an excel source it is similiar to pyexcel get_array(but it has less memory footprint not all parameters are needed here is table source loading from file loading from string loading from stream loading from sql loading from sql in django loading from query sets loading from dictionary loading from records loading from array loading from an url parameters file_namesheet_namekeywords file_contentfile_typesheet_namekeywords file_streamfile_typesheet_namekeywords sessiontable model any query sets(sqlalchemy or djangoadictwith_keys records array url parameters file_name file with supported file extension file_content the file content file_stream the file stream file_type the file type in file_content or file_stream session database session table database table modela django model adicta dictionary of one dimensional arrays url download http url for your excel file with_keys load with previous dictionary' keysdefault is true records list of dictionaries that have the same keys array two dimensional arraya list of lists sheet_name sheet name if sheet_name is not giventhe default sheet at index is loaded start_row [intdefaults to it allows you to skip rows at the begginning row_limitint defaults to - meaning till the end of the whole sheet it allows you to skip the tailing rows start_column [intdefaults to it allows you to skip columns on your left hand side column_limitint defaults to - meaning till the end of the columns it allows you to skip the tailing columns skip_row_funcit allows you to write your own row skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure api documentation |
6,351 | skip_column_funcit allows you to write your own column skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_empty_rowsbool defaults to false toggle it to true if the rest of empty rows are uselessbut it does affect the number of rows row_rendereryou could choose to write custom row renderer when the data is being read auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false parameters related to xls file formatplease note the following parameters apply to pyexcel-xls more details can be found in xlrd open_workbook(logfilean open file to which messages and diagnostics are written verbosityincreases the volume of trace material written to the logfile use_mmapwhether to use the mmap module is determined heuristically use this arg to override the result current heuristicmmap is used if it exists encoding_overrideused to overcome missing or bad codepage information in older-version files support the project |
6,352 | formatting_infothe default is falsewhich saves memory when trueformatting information will be read from the spreadsheet file this provides all cellsincluding empty and blank cells formatting information is available for each cell ragged_rowsthe default of false means all rows are padded out with empty cells so that all rows have the same size as found in ncols true means that there are no empty cells at the ends of rows this can result in substantial memory savings if rows are of widely varying sizes see also the row_len(method when you use this function to work on physical filesthis function will leave its file handle open when you finish the operation on its datayou need to call pyexcel free_resources(to close file hande(sfor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles pyexcel iget_records pyexcel iget_records(custom_headers=none**keywordsobtain generator of list of records from an excel source it is similiar to pyexcel get_records(but it has less memory footprint but requires the headers to be in the first row and the data matrix should be of equal length it should consume less memory and should work well with large files examples on start_rowstart_column let' assume the following file is huge csv fileimport datetime import pyexcel as pe data [ ][ ][ ][ ][ ][ pe save_as(array=datadest_file_name="your_file csv"and let' pretend to read partial datape get_sheet(file_name="your_file csv"start_row= row_limit= your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+----and you could as well do the same for columns api documentation |
6,353 | pe get_sheet(file_name="your_file csv"start_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+---- +----+---- +----+---- +----+----obviousyou could do both at the same timepe get_sheet(file_name="your_file csv"start_row= row_limit= start_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+----the pagination support is available across all pyexcel plugins noteno column pagination support for query sets as data source formatting while transcoding big data file if you are transcoding big data setconventional formatting method would not help unless on-demand free ram is available howeverthere is way to minimize the memory footprint of pyexcel while the formatting is performed let' continue from previous example suppose we want to transcode "your_file csvto "your_file xlsbut increase each element by what we can do is to define row renderer function as the followingdef increment_by_one(row)for element in rowyield element then pass it onto save_as function using row_rendererpe isave_as(file_name="your_file csv"row_renderer=increment_by_onedest_file_name="your_file xlsx" support the project |
6,354 | noteif the data content is from generatorisave_as has to be used we can verify if it was done correctlype get_sheet(file_name="your_file xlsx"your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+----not all parameters are needed here is table source loading from file loading from string loading from stream loading from sql loading from sql in django loading from query sets loading from dictionary loading from records loading from array loading from an url parameters file_namesheet_namekeywords file_contentfile_typesheet_namekeywords file_streamfile_typesheet_namekeywords sessiontable model any query sets(sqlalchemy or djangoadictwith_keys records array url parameters file_name file with supported file extension file_content the file content file_stream the file stream file_type the file type in file_content or file_stream session database session table database table modela django model adicta dictionary of one dimensional arrays url download http url for your excel file with_keys load with previous dictionary' keysdefault is true records list of dictionaries that have the same keys array two dimensional arraya list of lists api documentation |
6,355 | sheet_name sheet name if sheet_name is not giventhe default sheet at index is loaded start_row [intdefaults to it allows you to skip rows at the begginning row_limitint defaults to - meaning till the end of the whole sheet it allows you to skip the tailing rows start_column [intdefaults to it allows you to skip columns on your left hand side column_limitint defaults to - meaning till the end of the columns it allows you to skip the tailing columns skip_row_funcit allows you to write your own row skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_column_funcit allows you to write your own column skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_empty_rowsbool defaults to false toggle it to true if the rest of empty rows are uselessbut it does affect the number of rows row_rendereryou could choose to write custom row renderer when the data is being read auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false support the project |
6,356 | parameters related to xls file formatplease note the following parameters apply to pyexcel-xls more details can be found in xlrd open_workbook(logfilean open file to which messages and diagnostics are written verbosityincreases the volume of trace material written to the logfile use_mmapwhether to use the mmap module is determined heuristically use this arg to override the result current heuristicmmap is used if it exists encoding_overrideused to overcome missing or bad codepage information in older-version files formatting_infothe default is falsewhich saves memory when trueformatting information will be read from the spreadsheet file this provides all cellsincluding empty and blank cells formatting information is available for each cell ragged_rowsthe default of false means all rows are padded out with empty cells so that all rows have the same size as found in ncols true means that there are no empty cells at the ends of rows this can result in substantial memory savings if rows are of widely varying sizes see also the row_len(method when you use this function to work on physical filesthis function will leave its file handle open when you finish the operation on its datayou need to call pyexcel free_resources(to close file hande(sfor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles pyexcel free_resources pyexcel free_resources(close file handles opened by signature functions that starts with 'ifor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles saving data to excel file save_as(**keywordsisave_as(**keywordssave_book_as(**keywordsisave_book_as(**keywordssave sheet from data source to another one save sheet from data source to another one with less memory save book from data source to another one save book from data source to another one pyexcel save_as pyexcel save_as(**keywordssave sheet from data source to another one it accepts two sets of keywords why two setsone set is sourcethe other set is destination in order to distinguish the two setssource set will be exactly the same as the ones for pyexcel get_sheet()destination set are exactly the same as the ones for pyexcel sheet save_as but require 'destprefix api documentation |
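A minimal sketch of that split, with illustrative file names: the source keywords are the ones accepted by pyexcel get_sheet(), while every destination keyword carries the dest prefix:

import pyexcel as pe

# source side uses get_sheet() keywords; destination side uses the same ideas
# with a "dest_" prefix ("input.csv" is an assumed existing file)
pe.save_as(
    file_name="input.csv",          # source keyword
    dest_file_name="output.xlsx",   # destination keyword
    dest_sheet_name="converted",    # destination keyword
)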
6,357 | saving to source file memory sql django model parameters dest_file_namedest_sheet_name,dest_force_file_type keywords with prefix 'destdest_file_typedest_contentdest_sheet_namekeywords with prefix 'destdest_sessiondest_tabledest_initializerdest_mapdict dest_modeldest_initializerdest_mapdictdest_batch_size examples on start_rowstart_column let' assume the following file is huge csv fileimport datetime import pyexcel as pe data [ ][ ][ ][ ][ ][ pe save_as(array=datadest_file_name="your_file csv"and let' pretend to read partial datape get_sheet(file_name="your_file csv"start_row= row_limit= your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+----and you could as well do the same for columnspe get_sheet(file_name="your_file csv"start_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+---- +----+---- +----+---- +----+----obviousyou could do both at the same timepe get_sheet(file_name="your_file csv"start_row= row_limit= (continues on next page support the project |
6,358 | (continued from previous pagestart_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+----the pagination support is available across all pyexcel plugins noteno column pagination support for query sets as data source formatting while transcoding big data file if you are transcoding big data setconventional formatting method would not help unless on-demand free ram is available howeverthere is way to minimize the memory footprint of pyexcel while the formatting is performed let' continue from previous example suppose we want to transcode "your_file csvto "your_file xlsbut increase each element by what we can do is to define row renderer function as the followingdef increment_by_one(row)for element in rowyield element then pass it onto save_as function using row_rendererpe isave_as(file_name="your_file csv"row_renderer=increment_by_onedest_file_name="your_file xlsx"noteif the data content is from generatorisave_as has to be used we can verify if it was done correctlype get_sheet(file_name="your_file xlsx"your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- api documentation |
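Besides file-to-file transcoding, dest_file_type (described below) asks save_as to render into memory instead of onto disk. This sketch assumes the call then returns a stream whose getvalue() exposes the rendered content:

import pyexcel as pe

data = [[1, 2, 3], [4, 5, 6]]
# no dest_file_name here: dest_file_type requests an in-memory destination
csv_stream = pe.save_as(array=data, dest_file_type="csv")
# assumption: the returned object is a stream carrying the rendered csv text
print(csv_stream.getvalue())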
6,359 | not all parameters are needed here is table source loading from file loading from string loading from stream loading from sql loading from sql in django loading from query sets loading from dictionary loading from records loading from array loading from an url parameters file_namesheet_namekeywords file_contentfile_typesheet_namekeywords file_streamfile_typesheet_namekeywords sessiontable model any query sets(sqlalchemy or djangoadictwith_keys records array url parameters file_name file with supported file extension file_content the file content file_stream the file stream file_type the file type in file_content or file_stream session database session table database table modela django model adicta dictionary of one dimensional arrays url download http url for your excel file with_keys load with previous dictionary' keysdefault is true records list of dictionaries that have the same keys array two dimensional arraya list of lists sheet_name sheet name if sheet_name is not giventhe default sheet at index is loaded start_row [intdefaults to it allows you to skip rows at the begginning row_limitint defaults to - meaning till the end of the whole sheet it allows you to skip the tailing rows start_column [intdefaults to it allows you to skip columns on your left hand side column_limitint defaults to - meaning till the end of the columns it allows you to skip the tailing columns skip_row_funcit allows you to write your own row skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_column_funcit allows you to write your own column skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_empty_rowsbool defaults to false toggle it to true if the rest of empty rows are uselessbut it does affect the number of rows support the project |
6,360 | row_rendereryou could choose to write custom row renderer when the data is being read auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false parameters related to xls file formatplease note the following parameters apply to pyexcel-xls more details can be found in xlrd open_workbook(logfilean open file to which messages and diagnostics are written verbosityincreases the volume of trace material written to the logfile use_mmapwhether to use the mmap module is determined heuristically use this arg to override the result current heuristicmmap is used if it exists encoding_overrideused to overcome missing or bad codepage information in older-version files formatting_infothe default is falsewhich saves memory when trueformatting information will be read from the spreadsheet file this provides all cellsincluding empty and blank cells formatting information is available for each cell ragged_rowsthe default of false means all rows are padded out with empty cells so that all rows have the same size as found in ncols true means that there are no empty cells at the ends of rows this can result in substantial memory savings if rows are of widely varying sizes see also the row_len(method dest_file_nameanother file name api documentation |
6,361 | dest_file_typethis is needed if you want to save to memory dest_sessionthe target database session dest_tablethe target destination table dest_modelthe target django model dest_mapdicta mapping dictionary see pyexcel sheet save_to_memory(dest_initializera custom initializer function for table or model dest_mapdictnominate headers dest_batch_sizeobject creation batch size it is django specific dest_librarychoose specific pyexcel-io plugin for writing dest_source_librarychoose specific data source plugin for writing dest_renderer_librarychoose pyexcel parser plugin for writing if csv file is destination formatpython csv fmtparams are accepted for exampledest_lineterminator will replace default to the one you specified in additionthis function use pyexcel sheet to render the data which could have performance penalty in exchangeparameters for pyexcel sheet can be passed one name_columns_by_row pyexcel isave_as pyexcel isave_as(**keywordssave sheet from data source to another one with less memory it is simliar to pyexcel save_as(except that it does not accept parameters for pyexcel sheet and it read when it writes it accepts two sets of keywords why two setsone set is sourcethe other set is destination in order to distinguish the two setssource set will be exactly the same as the ones for pyexcel get_sheet()destination set are exactly the same as the ones for pyexcel sheet save_as but require 'destprefix saving to source file memory sql django model parameters dest_file_namedest_sheet_name,dest_force_file_type keywords with prefix 'destdest_file_typedest_contentdest_sheet_namekeywords with prefix 'destdest_sessiondest_tabledest_initializerdest_mapdict dest_modeldest_initializerdest_mapdictdest_batch_size examples on start_rowstart_column let' assume the following file is huge csv fileimport datetime import pyexcel as pe data [ ][ ][ ][ ][ ][ (continues on next page support the project |
6,362 | (continued from previous pagepe save_as(array=datadest_file_name="your_file csv"and let' pretend to read partial datape get_sheet(file_name="your_file csv"start_row= row_limit= your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+----and you could as well do the same for columnspe get_sheet(file_name="your_file csv"start_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+---- +----+---- +----+---- +----+----obviousyou could do both at the same timepe get_sheet(file_name="your_file csv"start_row= row_limit= start_column= column_limit= your_file csv+----+---- +----+---- +----+---- +----+----the pagination support is available across all pyexcel plugins noteno column pagination support for query sets as data source formatting while transcoding big data file if you are transcoding big data setconventional formatting method would not help unless on-demand free ram is available howeverthere is way to minimize the memory footprint of pyexcel while the formatting is performed api documentation |
6,363 | let' continue from previous example suppose we want to transcode "your_file csvto "your_file xlsbut increase each element by what we can do is to define row renderer function as the followingdef increment_by_one(row)for element in rowyield element then pass it onto save_as function using row_rendererpe isave_as(file_name="your_file csv"row_renderer=increment_by_onedest_file_name="your_file xlsx"noteif the data content is from generatorisave_as has to be used we can verify if it was done correctlype get_sheet(file_name="your_file xlsx"your_file csv+---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+---- +---+----+----not all parameters are needed here is table source loading from file loading from string loading from stream loading from sql loading from sql in django loading from query sets loading from dictionary loading from records loading from array loading from an url parameters file_namesheet_namekeywords file_contentfile_typesheet_namekeywords file_streamfile_typesheet_namekeywords sessiontable model any query sets(sqlalchemy or djangoadictwith_keys records array url parameters file_name file with supported file extension file_content the file content file_stream the file stream support the project |
6,364 | file_type the file type in file_content or file_stream session database session table database table modela django model adicta dictionary of one dimensional arrays url download http url for your excel file with_keys load with previous dictionary' keysdefault is true records list of dictionaries that have the same keys array two dimensional arraya list of lists sheet_name sheet name if sheet_name is not giventhe default sheet at index is loaded start_row [intdefaults to it allows you to skip rows at the begginning row_limitint defaults to - meaning till the end of the whole sheet it allows you to skip the tailing rows start_column [intdefaults to it allows you to skip columns on your left hand side column_limitint defaults to - meaning till the end of the columns it allows you to skip the tailing columns skip_row_funcit allows you to write your own row skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_column_funcit allows you to write your own column skipping functions the protocol is to return pyexcel_io constants skip_data if skipping datapyexcel_io constants take_data to read datapyexcel_io constants stop_iteration to exit the reading procedure skip_empty_rowsbool defaults to false toggle it to true if the rest of empty rows are uselessbut it does affect the number of rows row_rendereryou could choose to write custom row renderer when the data is being read auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator api documentation |
6,365 | encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false parameters related to xls file formatplease note the following parameters apply to pyexcel-xls more details can be found in xlrd open_workbook(logfilean open file to which messages and diagnostics are written verbosityincreases the volume of trace material written to the logfile use_mmapwhether to use the mmap module is determined heuristically use this arg to override the result current heuristicmmap is used if it exists encoding_overrideused to overcome missing or bad codepage information in older-version files formatting_infothe default is falsewhich saves memory when trueformatting information will be read from the spreadsheet file this provides all cellsincluding empty and blank cells formatting information is available for each cell ragged_rowsthe default of false means all rows are padded out with empty cells so that all rows have the same size as found in ncols true means that there are no empty cells at the ends of rows this can result in substantial memory savings if rows are of widely varying sizes see also the row_len(method dest_file_nameanother file name dest_file_typethis is needed if you want to save to memory dest_sessionthe target database session dest_tablethe target destination table dest_modelthe target django model dest_mapdicta mapping dictionary see pyexcel sheet save_to_memory(dest_initializera custom initializer function for table or model dest_mapdictnominate headers dest_batch_sizeobject creation batch size it is django specific dest_librarychoose specific pyexcel-io plugin for writing dest_source_librarychoose specific data source plugin for writing dest_renderer_librarychoose pyexcel parser plugin for writing if csv file is destination formatpython csv fmtparams are accepted for exampledest_lineterminator will replace default to the one you specified support the project |
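For example, a hedged sketch of forwarding a csv fmtparam to the destination side of isave_as; dest_delimiter is assumed to be passed through the same way as the documented dest_lineterminator, and the input file name is illustrative:

import pyexcel as pe

# csv is the destination format, so csv fmtparams travel with the "dest_" prefix
pe.isave_as(
    file_name="big_book.xlsx",       # assumed existing source file
    dest_file_name="big_book.csv",
    dest_delimiter=";",              # assumption: forwarded like dest_lineterminator
)

# isave_as reads as it writes and leaves file handles open
pe.free_resources()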
6,366 | in additionthis function use pyexcel sheet to render the data which could have performance penalty in exchangeparameters for pyexcel sheet can be passed one name_columns_by_row when you use this function to work on physical filesthis function will leave its file handle open when you finish the operation on its datayou need to call pyexcel free_resources(to close file hande(sfor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles pyexcel save_book_as pyexcel save_book_as(**keywordssave book from data source to another one here is table of parameterssource loading from file loading from string loading from stream loading from sql loading from django models loading from dictionary loading from an url parameters file_namekeywords file_contentfile_typekeywords file_streamfile_typekeywords sessiontables models bookdict url where the dictionary should have text as keys and two dimensional array as values parameters file_name file with supported file extension file_content the file content file_stream the file stream file_type the file type in file_content or file_stream session database session tables list of database table models list of django models bookdict dictionary of two dimensional arrays url download http url for your excel file sheetsa list of mixed sheet names and sheet indices to be read this is done to keep pandas compactibility with this parametermore than one sheet can be read and you have the control to read the sheets of your interest instead of all available sheets auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading api documentation |
6,367 | source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false dest_file_nameanother file name dest_file_typethis is needed if you want to save to memory dest_session the target database session dest_tables the list of target destination tables dest_models the list of target destination django models dest_mapdicts list of mapping dictionaries dest_initializers table initialization functions dest_mapdicts to nominate model or table fields optional dest_batch_size batch creation size optional where the dictionary should have text as keys and two dimensional array as values saving to source file memory sql django model parameters dest_file_namedest_sheet_namekeywords with prefix 'destdest_file_typedest_contentdest_sheet_namekeywords with prefix 'destdest_sessiondest_tablesdest_table_init_funcdest_mapdict dest_modelsdest_initializersdest_mapdictdest_batch_size pyexcel isave_book_as pyexcel isave_book_as(**keywordssave book from data source to another one it is simliar to pyexcel save_book_as(but it read when it writes this function provide some speedup but the output data is not made uniform support the project |
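A minimal sketch of that streaming transcode, assuming a local multi-sheet workbook named big_book.xlsx:

import pyexcel as pe

# transcode every sheet, reading as it writes rather than loading the whole book
pe.isave_book_as(file_name="big_book.xlsx", dest_file_name="big_book.ods")

# like the other "i" functions, it leaves file handles open until freed
pe.free_resources()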
6,368 | here is table of parameterssource loading from file loading from string loading from stream loading from sql loading from django models loading from dictionary loading from an url parameters file_namekeywords file_contentfile_typekeywords file_streamfile_typekeywords sessiontables models bookdict url where the dictionary should have text as keys and two dimensional array as values parameters file_name file with supported file extension file_content the file content file_stream the file stream file_type the file type in file_content or file_stream session database session tables list of database table models list of django models bookdict dictionary of two dimensional arrays url download http url for your excel file sheetsa list of mixed sheet names and sheet indices to be read this is done to keep pandas compactibility with this parametermore than one sheet can be read and you have the control to read the sheets of your interest instead of all available sheets auto_detect_float defaults to true auto_detect_int defaults to true auto_detect_datetime defaults to true ignore_infinity defaults to true library choose specific pyexcel-io plugin for reading source_library choose specific data source plugin for reading parser_library choose pyexcel parser plugin for reading skip_hidden_sheetsdefault is true please toggle it to read hidden sheets parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom header used in parser api documentation |
6,369 | escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false dest_file_nameanother file name dest_file_typethis is needed if you want to save to memory dest_session the target database session dest_tables the list of target destination tables dest_models the list of target destination django models dest_mapdicts list of mapping dictionaries dest_initializers table initialization functions dest_mapdicts to nominate model or table fields optional dest_batch_size batch creation size optional where the dictionary should have text as keys and two dimensional array as values saving to source file memory sql django model parameters dest_file_namedest_sheet_namekeywords with prefix 'destdest_file_typedest_contentdest_sheet_namekeywords with prefix 'destdest_sessiondest_tablesdest_table_init_funcdest_mapdict dest_modelsdest_initializersdest_mapdictdest_batch_size when you use this function to work on physical filesthis function will leave its file handle open when you finish the operation on its datayou need to call pyexcel free_resources(to close file hande(sfor csvcsvz file formatsfile handles will be left open for xlsods file formatsthe file is read all into memory and is close afterwards for xlsxfile handles will be left open in python by pyexcel-xlsx(openpyxlin other wordspyexcel-xlspyexcel-odspyexcel-ods won' leak file handles these flags can be passed on all signature functionsauto_detect_int automatically convert float values to integers if the float number has no decimal values( by defaultit does the detection setting it to false will turn on this behavior it has no effect on pyexcel-xlsx because it does that by default auto_detect_float automatically convert text to float values if possible this applies only pyexcel-io where csvtsvcsvz and tsvz formats are supported by defaultit does the detection setting it to false will turn on this behavior support the project |
6,370 | auto_detect_datetime automatically convert text to python datetime if possible this applies only pyexcel-io where csvtsvcsvz and tsvz formats are supported by defaultit does the detection setting it to false will turn on this behavior library name pyexcel plugin to handle file format in the situation where multiple plugins were pip installedit is confusing for pyexcel on which plugin to handle the file format for exampleboth pyexcel-xlsx and pyexcel-xls reads xlsx format now since version you can pass on library="pyexcel-xlsto handle xlsx in specific function call it is better to uninstall the unwanted pyexcel plugin using pip if two plugins for the same file type are not absolutely necessary cookbook merge_csv_to_a_book(filelist[outfilename]merge_all_to_a_book(filelist[outfilename]split_a_book(file_name[outfilename]extract_a_sheet_from_a_book(file_namesheetnamemerge list of csv files into excel book merge list of excel files into excel book split file into separate sheets extract sheet from excel book pyexcel merge_csv_to_a_book pyexcel merge_csv_to_a_book(filelistoutfilename='merged xls'merge list of csv files into excel book parameters filelist (lista list of accessible file path outfilename (strsave the sheet as pyexcel merge_all_to_a_book pyexcel merge_all_to_a_book(filelistoutfilename='merged xls'merge list of excel files into excel book parameters filelist (lista list of accessible file path outfilename (strsave the sheet as pyexcel split_a_book pyexcel split_a_book(file_nameoutfilename=nonesplit file into separate sheets parameters file_name (stran accessible file name api documentation |
6,371 | outfilename (str) - save the sheets with the given file suffix

pyexcel extract_a_sheet_from_a_book
pyexcel extract_a_sheet_from_a_book(file_name, sheetname, outfilename=None) - extract a sheet from an excel book

parameters:
- file_name (str) - an accessible file name
- sheetname (str) - a valid sheet name
- outfilename (str) - save the extracted sheet as the given file name

book
here is the entity relationship between book, sheet, row and column:

constructor
book([sheets, filename, path]) - read an excel book that has one or more sheets |
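A short sketch of obtaining a book from in-memory data and inspecting it with the methods documented below:

import pyexcel as pe

book = pe.get_book(bookdict={"Sheet 1": [[1, 2], [3, 4]],
                             "Sheet 2": [["a", "b"]]})
print(book.number_of_sheets())        # 2
print(book.sheet_names())             # ['Sheet 1', 'Sheet 2']
print(book.sheet_by_name("Sheet 2"))  # the sheet named "Sheet 2"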
6,372 | pyexcel book class pyexcel book(sheets=nonefilename='memory'path=noneread an excel book that has one or more sheets for csv filethere will be just one sheet __init__(sheets=nonefilename='memory'path=nonebook constructor selecting specific book according to filename extension parameters sheets dictionary of data filename the physical file path the relative path or absolute path keywords additional parameters to be passed on methods __init__([sheetsfilenamepath]get_array(**keywordsget_bookdict(**keywordsget_csv(**keywordsget_csvz(**keywordsget_dict(**keywordsget_fods(**__book get_grid get_handsontable(**keywordsget_handsontable_html(**keywordsget_html(**__book get_json book get_latex book get_latex_booktabs book get_mediawiki book get_ndjson get_ods(**keywordsbook get_orgtbl book get_pipe book get_plain get_records(**keywordsbook get_rst book get_simple get_svg(**keywordsget_texttable(**keywordsget_tsv(**keywordsget_tsvz(**keywordsget_url(**__get_xls(**keywordsget_xlsm(**keywords api documentation book constructor get data in array format get data in bookdict format get data in csv format get data in csvz format get data in dict format fods getter is not defined get data in handsontable format get data in handsontable html format html getter is not defined get data in ods format get data in records format get data in svg format get data in texttable format get data in tsv format get data in tsvz format url getter is not defined get data in xls format get data in xlsm format continued on next page |
6,373 | table continued from previous page get data in xlsx format indpendent function so that it could be called multiple times load_from_sheets(sheetsload content from existing sheets number_of_sheets(return the number of sheets plot([file_type]visualize the data register_input(file_type*[]partial(func*args**keywordsnew function with partial application of the given arguments and keywords register_io(file_type*[instance_name]partial(func*args**keywordsnew function with partial application of the given arguments and keywords register_presentation(file_type*[]partial(func*args**keywordsnew function with partial application of the given arguments and keywords remove_sheet(sheetremove sheet save_as(filename**keywordssave the content to new file save_to_database(sessiontables[]save data in sheets to database tables save_to_django_models(models[]save to database table through django model save_to_memory(file_type[stream]save the content to memory stream set_array(content**keywordsset data in array format set_bookdict(content**keywordsset data in bookdict format set_csv(content**keywordsset data in csv format set_csvz(content**keywordsset data in csvz format set_dict(content**keywordsset data in dict format set_fods(content**keywordsset data in fods format book set_grid set_handsontable(_y**_zhandsontable setter is not defined set_handsontable_html(_y**_zhandsontable html setter is not defined set_html(content**keywordsset data in html format book set_json book set_latex book set_latex_booktabs book set_mediawiki book set_ndjson set_ods(content**keywordsset data in ods format book set_orgtbl book set_pipe book set_plain set_records(content**keywordsset data in records format book set_rst book set_simple set_svg(_y**_zsvg setter is not defined set_texttable(_y**_ztexttable setter is not defined set_tsv(content**keywordsset data in tsv format set_tsvz(content**keywordsset data in tsvz format set_url(content**keywordsset data in url format set_xls(content**keywordsset data in xls format set_xlsm(content**keywordsset data in xlsm format set_xlsx(content**keywordsset data in xlsx format continued on next page get_xlsx(**keywordsinit([sheetsfilenamepath] support the project |
6,374 | table continued from previous page sheet_by_index(indexget the sheet with the specified index sheet_by_name(nameget the sheet with the specified name sheet_names(return all sheet names sort_sheets([keyreverse]to_dict(convert the book to dictionary attributes array bookdict csv csvz dict fods book grid handsontable handsontable_html html book json book latex book latex_booktabs book mediawiki book ndjson ods book orgtbl book pipe book plain records book rst book simple stream svg texttable tsv tsvz url xls xlsm xlsx get/set data in/from array format get/set data in/from bookdict format get/set data in/from csv format get/set data in/from csvz format get/set data in/from dict format set data in fods format get data in handsontable format get data in handsontable html format set data in html format get/set data in/from ods format get/set data in/from records format return stream in which the content is properly encoded get data in svg format get data in texttable format get/set data in/from tsv format get/set data in/from tsvz format set data in url format get/set data in/from xls format get/set data in/from xlsm format get/set data in/from xlsx format attribute book number_of_sheets(book sheet_names( api documentation return the number of sheets return all sheet names |
6,375 | pyexcel book number_of_sheets
book number_of_sheets() - return the number of sheets

pyexcel book sheet_names
book sheet_names() - return all sheet names

conversions
book bookdict - get/set data in/from bookdict format
book url - set data in url format
book csv - get/set data in/from csv format
book tsv - get/set data in/from tsv format
book csvz - get/set data in/from csvz format
book tsvz - get/set data in/from tsvz format
book xls - get/set data in/from xls format
book xlsm - get/set data in/from xlsm format
book xlsx - get/set data in/from xlsx format
book ods - get/set data in/from ods format
book stream - return stream in which the content is properly encoded

pyexcel book bookdict
book bookdict - get/set data in/from bookdict format
you could obtain content in bookdict format by dot notation: book.bookdict
and you could as well set content by dot notation: book.bookdict = the_io_stream_in_bookdict_format
if you need to pass on more parameters, you could use: book.get_bookdict(**keywords) and book.set_bookdict(the_io_stream_in_bookdict_format, **keywords)

pyexcel book url
book url - set data in url format
you could set content in url format by dot notation: |
6,376 | book url if you need to pass on more parametersyou could usebook set_url(the_io_stream_in_url_format**keywordspyexcel book csv book csv get/set data in/from csv format you could obtain content in csv format by dot notationbook csv and you could as well set content by dot notationbook csv the_io_stream_in_csv_format if you need to pass on more parametersyou could usebook get_csv(**keywordsbook set_csv(the_io_stream_in_csv_format**keywordspyexcel book tsv book tsv get/set data in/from tsv format you could obtain content in tsv format by dot notationbook tsv and you could as well set content by dot notationbook tsv the_io_stream_in_tsv_format if you need to pass on more parametersyou could usebook get_tsv(**keywordsbook set_tsv(the_io_stream_in_tsv_format**keywordspyexcel book csvz book csvz get/set data in/from csvz format you could obtain content in csvz format by dot notationbook csvz and you could as well set content by dot notation api documentation |
6,377 | book csvz the_io_stream_in_csvz_format if you need to pass on more parametersyou could usebook get_csvz(**keywordsbook set_csvz(the_io_stream_in_csvz_format**keywordspyexcel book tsvz book tsvz get/set data in/from tsvz format you could obtain content in tsvz format by dot notationbook tsvz and you could as well set content by dot notationbook tsvz the_io_stream_in_tsvz_format if you need to pass on more parametersyou could usebook get_tsvz(**keywordsbook set_tsvz(the_io_stream_in_tsvz_format**keywordspyexcel book xls book xls get/set data in/from xls format you could obtain content in xls format by dot notationbook xls and you could as well set content by dot notationbook xls the_io_stream_in_xls_format if you need to pass on more parametersyou could usebook get_xls(**keywordsbook set_xls(the_io_stream_in_xls_format**keywordspyexcel book xlsm book xlsm get/set data in/from xlsm format you could obtain content in xlsm format by dot notationbook xlsm and you could as well set content by dot notation support the project |
6,378 | book xlsm the_io_stream_in_xlsm_format if you need to pass on more parametersyou could usebook get_xlsm(**keywordsbook set_xlsm(the_io_stream_in_xlsm_format**keywordspyexcel book xlsx book xlsx get/set data in/from xlsx format you could obtain content in xlsx format by dot notationbook xlsx and you could as well set content by dot notationbook xlsx the_io_stream_in_xlsx_format if you need to pass on more parametersyou could usebook get_xlsx(**keywordsbook set_xlsx(the_io_stream_in_xlsx_format**keywordspyexcel book ods book ods get/set data in/from ods format you could obtain content in ods format by dot notationbook ods and you could as well set content by dot notationbook ods the_io_stream_in_ods_format if you need to pass on more parametersyou could usebook get_ods(**keywordsbook set_ods(the_io_stream_in_ods_format**keywordspyexcel book stream book stream return stream in which the content is properly encoded example api documentation |
6,379 | import pyexcel as get_book(bookdict={" "[[ ]]}csv_stream stream texttable print(csv_stream getvalue() +--- +---where stream xls getvalue(is equivalent to xls in some situation stream xls is prefered than xls sheet examplesimport pyexcel as sheet([[ ]]' 'csv_stream stream texttable print(csv_stream getvalue() +--- +---where stream xls getvalue(is equivalent to xls in some situation stream xls is prefered than xls it is similar to save_to_memory(save changes book save_as(filename**keywordsbook save_to_memory(file_type[stream]book save_to_database(sessiontables[]book save_to_django_models(models[]save the content to new file save the content to memory stream save data in sheets to database tables save to database table through django model pyexcel book save_as book save_as(filename**keywordssave the content to new file keywords may vary depending on your file typebecause the associated file type employs different library parameters filenamea file path librarychoose specific pyexcel-io plugin for writing renderer_librarychoose pyexcel parser plugin for writing parameters related to csv file format for csvfmtparams are accepted delimiter field separator lineterminator line terminator encodingcsv specific specify the file encoding the csv file for exampleencoding='latin especiallyencoding='utf- -sigwould add utf bom header if used in rendereror would parse csv with utf brom support the project |
6,380 | header used in parser escapechar one-character string used by the writer to escape the delimiter if quoting is set to quote_none and the quotechar if doublequote is false quotechar one-character string used to quote fields containing special characterssuch as the delimiter or quotecharor which contain new-line characters it defaults to '"quoting controls when quotes should be generated by the writer and recognised by the reader it can take on any of the quote_constants (see section module contentsand defaults to quote_minimal skipinitialspace when truewhitespace immediately following the delimiter is ignored the default is false pep_ _off when true in python version pep- is turned on the default is false pyexcel book save_to_memory book save_to_memory(file_typestream=none**keywordssave the content to memory stream parameters file_type what format the stream is in stream memory stream note in python for csv and tsv formatplease pass an instance of stringio for xlsxlsxand odsan instance of bytesio pyexcel book save_to_database book save_to_database(sessiontablesinitializers=nonemapdicts=noneauto_commit=truesave data in sheets to database tables parameters session database session tables list of database tablesthat is accepted by sheet save_to_database(the sequence of tables matters when there is dependencies in between the tables for examplecar is made by car maker car maker table should be specified before car table initializers list of intialization functions for your tables and the sequence should match tablesmapdicts custom map dictionary for your data columns and the sequence should match tables auto_commit by defaultdata is committed pyexcel book save_to_django_models book save_to_django_models(modelsinitializers=nonemapdicts=none**keywordssave to database table through django model parameters models list of database modelsthat is accepted by sheet save_to_django_model(the sequence of tables matters when there is dependencies in between the tables for examplecar is made by car maker car maker table should be specified before car table api documentation |
6,381 | pyexcel.Book.save_to_django_models
Book.save_to_django_models(models, initializers=None, mapdicts=None, **keywords)
Save to database tables through django models.
Parameters:
    models: a list of database models, as accepted by Sheet.save_to_django_model(). The sequence of tables matters when there are dependencies between the tables. For example, a car is made by a car maker, so the car maker table should be specified before the car table.
    initializers: a list of initialization functions for your tables; the sequence should match tables.
    mapdicts: custom map dictionaries for your data columns; the sequence should match tables.
Optional parameters:
    batch_size: django bulk_create batch size
    bulk_save: whether to use bulk_create or to use a single save per record

Sheet

Constructor
    Sheet([sheet, name, name_columns_by_row, ...]): two dimensional data container for filtering, formatting and iteration

pyexcel.Sheet
class pyexcel.Sheet(sheet=None, name='pyexcel sheet', name_columns_by_row=-1, name_rows_by_column=-1, colnames=None, rownames=None, transpose_before=False, transpose_after=False)
Two dimensional data container for filtering, formatting and iteration.

Sheet is a container for a two dimensional array, where individual cells can be any Python type. Other than numbers, values of these types: string, datetime and boolean can be mixed in the array. This differs from numpy's matrix, where each cell is of the same number type.

In order to prepare two dimensional data for your computation, formatting functions help convert array cells to required types. Formatting can be applied not only to the whole sheet but also to selected rows or columns. A custom conversion function can be passed to these formatting functions. For example, to remove extra spaces surrounding the content of a cell, a custom function is required.

Filtering functions are used to reduce the information contained in the array.

Variables:
    name: sheet name; use it to change the sheet name
    row: access data row by row
    column: access data column by column

Example:
    import pyexcel as p
    content = {' ': [[ ]]}
    b = p.get_book(bookdict=content)
    b
    +---+
    +---+
    b[ ].name
    'a'
    b[ ].name
    'b'
    b[ ]
    +---+
    +---+ |
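The constructor parameters summarized above (and detailed below) can be exercised like this. A small sketch with illustrative names and values, assuming the signature reads as reconstructed here:

    import pyexcel

    # two equivalent ways of naming the columns at construction time
    s1 = pyexcel.Sheet([["a", "b"], [1, 2]], name="demo", name_columns_by_row=0)
    s2 = pyexcel.Sheet([[1, 2]], name="demo", colnames=["a", "b"])

    print(s1.colnames)   # expected: ['a', 'b']
    print(s2.colnames)   # expected: ['a', 'b']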
6,382 | __init__(sheet=None, name='pyexcel sheet', name_columns_by_row=-1, name_rows_by_column=-1, colnames=None, rownames=None, transpose_before=False, transpose_after=False)
Constructor.
Parameters:
    sheet: two dimensional array
    name: this becomes the sheet name
    name_columns_by_row: use a row to name all columns
    name_rows_by_column: use a column to name all rows
    colnames: use an external list of strings to name the columns
    rownames: use an external list of strings to name the rows

Methods
    __init__([sheet, name, name_columns_by_row, ...]): constructor
    cell_value(row, column[, new_value]): random access to table cells
    clone()
    column_at(index): gets the data at the specified column
    column_range(): utility function to get column range
    columns(): returns a left to right column iterator
    contains(predicate): has something in the table
    cut(topleft_corner, bottomright_corner): get rectangle shaped data out and clear them in position
    delete_columns(column_indices): delete one or more columns
    delete_named_column_at(name): works only after you named columns by row
    delete_named_row_at(name): take the first column as row names
    delete_rows(row_indices): delete one or more rows
    enumerate(): iterate cell by cell from top to bottom and from left to right
    extend_columns(columns): take ordereddict to extend named columns
    extend_columns_with_rows(rows): put rows on the right most side of the data
    extend_rows(rows): take ordereddict to extend named rows
    filter([column_indices, row_indices]): apply the filter with immediate effect
    format(formatter): apply formatting action for the whole sheet
    get_array(**keywords): get data in array format
    get_bookdict(**keywords): get data in bookdict format
    get_csv(**keywords): get data in csv format
    get_csvz(**keywords): get data in csvz format
    get_dict(**keywords): get data in dict format
    get_fods(**_): fods getter is not defined
    Sheet.get_grid |
6,383 | Methods (continued)
    get_handsontable(**keywords): get data in handsontable format
    get_handsontable_html(**keywords): get data in handsontable html format
    get_html(**_): html getter is not defined
    get_internal_array(): present internal array
    Sheet.get_json
    Sheet.get_latex
    Sheet.get_latex_booktabs
    Sheet.get_mediawiki
    Sheet.get_ndjson
    get_ods(**keywords): get data in ods format
    Sheet.get_orgtbl
    Sheet.get_pipe
    Sheet.get_plain
    get_records(**keywords): get data in records format
    Sheet.get_rst
    Sheet.get_simple
    get_svg(**keywords): get data in svg format
    get_texttable(**keywords): get data in texttable format
    get_tsv(**keywords): get data in tsv format
    get_tsvz(**keywords): get data in tsvz format
    get_url(**_): url getter is not defined
    get_xls(**keywords): get data in xls format
    get_xlsm(**keywords): get data in xlsm format
    get_xlsx(**keywords): get data in xlsx format
    group_rows_by_column(column_index_or_name): group rows with a similar column into a two dimensional array
    init([sheet, name, name_columns_by_row, ...]): custom initialization functions
    map(custom_function): execute a function across all cells of the sheet
    name_columns_by_row(row_index): use the elements of the specified row to represent individual columns
    name_rows_by_column(column_index): use the elements of the specified column to represent individual rows
    named_column_at(name): get a column by its name
    named_columns(): iterate rows using column names
    named_row_at(name): get a row by its name
    named_rows(): iterate rows using row names
    number_of_columns(): the number of columns
    number_of_rows(): the number of rows
    paste(topleft_corner[, rows, columns]): paste rectangle shaped data after position
    plot([file_type]): visualize the data
    project(new_ordered_columns[, exclusion]): rearrange the sheet
    rcolumns(): returns a right to left column iterator
    region(topleft_corner, bottomright_corner): get rectangle shaped data out
    register_input(file_type[, instance_name]): partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords
    register_io(file_type[, instance_name]): partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords |
6,384 | Methods (continued)
    register_presentation(file_type[, ...]): partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords
    reverse(): opposite to enumerate
    row_at(index): gets the data at the specified row
    row_range(): utility function to get row range
    rows(): returns a top to bottom row iterator
    rrows(): returns a bottom to top row iterator
    rvertical(): default iterator to go through each cell one by one, from the rightmost column to the leftmost row and from bottom to top
    save_as(filename, **keywords): save the content to a named file
    save_to_database(session, table[, ...]): save data in the sheet to a database table
    save_to_django_model(model[, initializer, ...]): save to a database table through a django model
    save_to_memory(file_type[, stream]): save the content to memory
    set_array(content, **keywords): set data in array format
    set_bookdict(content, **keywords): set data in bookdict format
    set_column_at(column_index, data_array[, starting]): updates column data range
    set_csv(content, **keywords): set data in csv format
    set_csvz(content, **keywords): set data in csvz format
    set_dict(content, **keywords): set data in dict format
    set_fods(content, **keywords): set data in fods format
    Sheet.set_grid
    set_handsontable(_y, **_z): handsontable setter is not defined
    set_handsontable_html(_y, **_z): handsontable html setter is not defined
    set_html(content, **keywords): set data in html format
    Sheet.set_json
    Sheet.set_latex
    Sheet.set_latex_booktabs
    Sheet.set_mediawiki
    set_named_column_at(name, column_array): take the first row as column names
    set_named_row_at(name, row_array): take the first column as row names
    Sheet.set_ndjson
    set_ods(content, **keywords): set data in ods format
    Sheet.set_orgtbl
    Sheet.set_pipe
    Sheet.set_plain
    set_records(content, **keywords): set data in records format
    set_row_at(row_index, data_array): update row data range
    Sheet.set_rst
    Sheet.set_simple
    set_svg(_y, **_z): svg setter is not defined
    set_texttable(_y, **_z): texttable setter is not defined
    set_tsv(content, **keywords): set data in tsv format
    set_tsvz(content, **keywords): set data in tsvz format
    set_url(content, **keywords): set data in url format
    set_xls(content, **keywords): set data in xls format
    set_xlsm(content, **keywords): set data in xlsm format |
6,385 | Methods (continued)
    set_xlsx(content, **keywords): set data in xlsx format
    to_array(): returns an array after filtering
    to_dict([row]): returns a dictionary
    to_records([custom_headers]): make an array of dictionaries
    top([lines]): preview the top most rows
    top_left([rows, columns]): preview the top corner
    transpose(): rotate the data table by 90 degrees
    vertical(): default iterator to go through each cell one by one, from the leftmost column to the rightmost row and from top to bottom

Attributes
    array: get/set data in/from array format
    bookdict: get/set data in/from bookdict format
    colnames: return column names if any
    content: plain representation without headers
    csv: get/set data in/from csv format
    csvz: get/set data in/from csvz format
    dict: get/set data in/from dict format
    fods: set data in fods format
    Sheet.grid
    handsontable: get data in handsontable format
    handsontable_html: get data in handsontable html format
    html: set data in html format
    Sheet.json
    Sheet.latex
    Sheet.latex_booktabs
    Sheet.mediawiki
    Sheet.ndjson
    ods: get/set data in/from ods format
    Sheet.orgtbl
    Sheet.pipe
    Sheet.plain
    records: get/set data in/from records format
    rownames: return row names if any
    Sheet.rst
    Sheet.simple
    stream: return stream in which the content is properly encoded
    svg: get data in svg format
    texttable: get data in texttable format
    tsv: get/set data in/from tsv format
    tsvz: get/set data in/from tsvz format
    url: set data in url format
    xls: get/set data in/from xls format
    xlsm: get/set data in/from xlsm format
    xlsx: get/set data in/from xlsx format |
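To give the summary above some shape, here is a small sketch exercising a few of the listed methods and attributes. The data is illustrative only, and the printed record shape may vary slightly between pyexcel versions (plain dicts versus ordered dicts):

    import pyexcel

    sheet = pyexcel.Sheet([["name", "age"], ["Alice", 30], ["Bob", 25]], name="people")
    sheet.name_columns_by_row(0)      # row 0 supplies the column names and is removed from the data
    print(sheet.colnames)             # ['name', 'age']
    print(sheet.number_of_rows())     # 2
    print(sheet.to_records())         # one dict-like record per row, e.g. {'name': 'Alice', 'age': 30}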
6,386 | Attributes
    Sheet.content: plain representation without headers
    Sheet.number_of_rows(): the number of rows
    Sheet.number_of_columns(): the number of columns
    Sheet.row_range(): utility function to get row range
    Sheet.column_range(): utility function to get column range

pyexcel.Sheet.content
Sheet.content
Plain representation without headers.

pyexcel.Sheet.number_of_rows
Sheet.number_of_rows()
The number of rows.

pyexcel.Sheet.number_of_columns
Sheet.number_of_columns()
The number of columns.

pyexcel.Sheet.row_range
Sheet.row_range()
Utility function to get row range.

pyexcel.Sheet.column_range
Sheet.column_range()
Utility function to get column range.

Cell access
    Sheet.cell_value(row, column[, new_value]): random access to table cells
    Sheet.__getitem__(aset): by default, this class recognizes from top to bottom and from left to right

pyexcel.Sheet.cell_value
Sheet.cell_value(row, column, new_value=None)
Random access to table cells.
Parameters:
    row (int): row index, which starts from 0
    column (int): column index, which starts from 0
    new_value (any): new value, if this is to set the value

pyexcel.Sheet.__getitem__
Sheet.__getitem__(aset)
By default, this class recognizes from top to bottom and from left to right. |
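A short sketch of the two cell access styles documented above, using illustrative data:

    import pyexcel

    sheet = pyexcel.Sheet([[1, 2], [3, 4]])
    print(sheet.cell_value(0, 1))    # 2: read the cell at row 0, column 1
    sheet.cell_value(0, 1, 20)       # pass new_value to overwrite the same cell
    print(sheet[0, 1])               # 20: __getitem__ offers the same random access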
6,387 | Row access
    Sheet.row_at(index): gets the data at the specified row
    Sheet.set_row_at(row_index, data_array): update row data range
    Sheet.delete_rows(row_indices): delete one or more rows
    Sheet.extend_rows(rows): take ordereddict to extend named rows

pyexcel.Sheet.row_at
Sheet.row_at(index)
Gets the data at the specified row.

pyexcel.Sheet.set_row_at
Sheet.set_row_at(row_index, data_array)
Update row data range.

pyexcel.Sheet.delete_rows
Sheet.delete_rows(row_indices)
Delete one or more rows.
Parameters:
    row_indices (list): a list of row indices

pyexcel.Sheet.extend_rows
Sheet.extend_rows(rows)
Take ordereddict to extend named rows.
Parameters:
    rows (ordereddict/list): a list of rows |
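The row access functions above can be combined as in the following sketch. The data is illustrative; for a plain (unnamed) sheet, extend_rows is assumed to accept a list of rows:

    import pyexcel

    sheet = pyexcel.Sheet([[1, 2], [3, 4]])
    print(sheet.row_at(1))           # [3, 4]
    sheet.set_row_at(1, [30, 40])    # replace the data in row 1
    sheet.extend_rows([[5, 6]])      # append a new row at the bottom
    sheet.delete_rows([0])           # remove row 0
    print(sheet.to_array())          # expected: [[30, 40], [5, 6]]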
6,388 | Column access
    Sheet.column_at(index): gets the data at the specified column
    Sheet.set_column_at(column_index, data_array): updates column data range
    Sheet.delete_columns(column_indices): delete one or more columns
    Sheet.extend_columns(columns): take ordereddict to extend named columns

pyexcel.Sheet.column_at
Sheet.column_at(index)
Gets the data at the specified column.

pyexcel.Sheet.set_column_at
Sheet.set_column_at(column_index, data_array, starting=0)
Updates a column data range. The update begins at the given starting row index of the specified column; this function will not set elements outside the current table range.
Parameters:
    column_index (int): which column is to be modified
    data_array (list): one dimensional array
    starting (int): from which index the update happens
Raises IndexError: if column_index exceeds the column range or starting exceeds the row range.

pyexcel.Sheet.delete_columns
Sheet.delete_columns(column_indices)
Delete one or more columns.
Parameters:
    column_indices (list): a list of column indices

pyexcel.Sheet.extend_columns
Sheet.extend_columns(columns)
Take ordereddict to extend named columns.
Parameters:
    columns (ordereddict/list): a list of columns |
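A minimal sketch of the column access functions above, with illustrative data:

    import pyexcel

    sheet = pyexcel.Sheet([[1, 2], [3, 4], [5, 6]])
    print(sheet.column_at(1))             # [2, 4, 6]
    sheet.set_column_at(1, [20, 40, 60])  # overwrite column 1, starting at row 0
    sheet.delete_columns([0])             # remove column 0
    print(sheet.to_array())               # expected: [[20], [40], [60]]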
6,389 | Data series

Any column as row name
    Sheet.name_columns_by_row(row_index): use the elements of the specified row to represent individual columns
    Sheet.rownames: return row names if any
    Sheet.named_column_at(name): get a column by its name
    Sheet.set_named_column_at(name, column_array): take the first row as column names
    Sheet.delete_named_column_at(name): works only after you named columns by row

pyexcel.Sheet.name_columns_by_row
Sheet.name_columns_by_row(row_index)
Use the elements of the specified row to represent individual columns. The specified row will be deleted from the data.
:param row_index: the index of the row that has the column names

pyexcel.Sheet.rownames
Sheet.rownames
Return row names if any.

pyexcel.Sheet.named_column_at
Sheet.named_column_at(name)
Get a column by its name.

pyexcel.Sheet.set_named_column_at
Sheet.set_named_column_at(name, column_array)
Take the first row as column names. Given a name to identify the column index, set the column to the given array, except the column name.

pyexcel.Sheet.delete_named_column_at
Sheet.delete_named_column_at(name)
Works only after you named columns by row. Given a name to identify the column index, set the column to the given array, except the column name.
:param str name: a column name |
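A short sketch of working with named columns, as documented above; the column names and values are illustrative only:

    import pyexcel

    sheet = pyexcel.Sheet([["name", "age"], ["Alice", 30], ["Bob", 25]])
    sheet.name_columns_by_row(0)             # row 0 becomes the column names
    print(sheet.named_column_at("age"))      # [30, 25]
    sheet.set_named_column_at("age", [31, 26])
    sheet.delete_named_column_at("age")
    print(sheet.colnames)                    # expected: ['name']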
6,390 | Any row as column name
    Sheet.name_rows_by_column(column_index): use the elements of the specified column to represent individual rows
    Sheet.colnames: return column names if any
    Sheet.named_row_at(name): get a row by its name
    Sheet.set_named_row_at(name, row_array): take the first column as row names
    Sheet.delete_named_row_at(name): take the first column as row names

pyexcel.Sheet.name_rows_by_column
Sheet.name_rows_by_column(column_index)
Use the elements of the specified column to represent individual rows. The specified column will be deleted from the data.
:param column_index: the index of the column that has the row names

pyexcel.Sheet.colnames
Sheet.colnames
Return column names if any.

pyexcel.Sheet.named_row_at
Sheet.named_row_at(name)
Get a row by its name.

pyexcel.Sheet.set_named_row_at
Sheet.set_named_row_at(name, row_array)
Take the first column as row names. Given a name to identify the row index, set the row to the given array, except the row name.

pyexcel.Sheet.delete_named_row_at
Sheet.delete_named_row_at(name)
Take the first column as row names. Given a name to identify the row index, set the row to the given array, except the row name.

Conversion
    Sheet.array: get/set data in/from array format
    Sheet.records: get/set data in/from records format
    Sheet.dict: get/set data in/from dict format
    Sheet.url: set data in url format
    Sheet.csv: get/set data in/from csv format
    Sheet.tsv: get/set data in/from tsv format
    Sheet.csvz: get/set data in/from csvz format
    Sheet.tsvz: get/set data in/from tsvz format
    Sheet.xls: get/set data in/from xls format
    Sheet.xlsm: get/set data in/from xlsm format |
6,391 | sheet xlsx sheet ods sheet stream table continued from previous page get/set data in/from xlsx format get/set data in/from ods format return stream in which the content is properly encoded pyexcel sheet array sheet array get/set data in/from array format you could obtain content in array format by dot notationsheet array and you could as well set content by dot notationsheet array the_io_stream_in_array_format if you need to pass on more parametersyou could usesheet get_array(**keywordssheet set_array(the_io_stream_in_array_format**keywordspyexcel sheet records sheet records get/set data in/from records format you could obtain content in records format by dot notationsheet records and you could as well set content by dot notationsheet records the_io_stream_in_records_format if you need to pass on more parametersyou could usesheet get_records(**keywordssheet set_records(the_io_stream_in_records_format**keywordspyexcel sheet dict sheet dict get/set data in/from dict format you could obtain content in dict format by dot notationsheet dict and you could as well set content by dot notation support the project |
6,392 | sheet dict the_io_stream_in_dict_format if you need to pass on more parametersyou could usesheet get_dict(**keywordssheet set_dict(the_io_stream_in_dict_format**keywordspyexcel sheet url sheet url set data in url format you could set content in url format by dot notationsheet url if you need to pass on more parametersyou could usesheet set_url(the_io_stream_in_url_format**keywordspyexcel sheet csv sheet csv get/set data in/from csv format you could obtain content in csv format by dot notationsheet csv and you could as well set content by dot notationsheet csv the_io_stream_in_csv_format if you need to pass on more parametersyou could usesheet get_csv(**keywordssheet set_csv(the_io_stream_in_csv_format**keywordspyexcel sheet tsv sheet tsv get/set data in/from tsv format you could obtain content in tsv format by dot notationsheet tsv and you could as well set content by dot notationsheet tsv the_io_stream_in_tsv_format if you need to pass on more parametersyou could use api documentation |
6,393 | sheet get_tsv(**keywordssheet set_tsv(the_io_stream_in_tsv_format**keywordspyexcel sheet csvz sheet csvz get/set data in/from csvz format you could obtain content in csvz format by dot notationsheet csvz and you could as well set content by dot notationsheet csvz the_io_stream_in_csvz_format if you need to pass on more parametersyou could usesheet get_csvz(**keywordssheet set_csvz(the_io_stream_in_csvz_format**keywordspyexcel sheet tsvz sheet tsvz get/set data in/from tsvz format you could obtain content in tsvz format by dot notationsheet tsvz and you could as well set content by dot notationsheet tsvz the_io_stream_in_tsvz_format if you need to pass on more parametersyou could usesheet get_tsvz(**keywordssheet set_tsvz(the_io_stream_in_tsvz_format**keywordspyexcel sheet xls sheet xls get/set data in/from xls format you could obtain content in xls format by dot notationsheet xls and you could as well set content by dot notationsheet xls the_io_stream_in_xls_format if you need to pass on more parametersyou could use support the project |
6,394 | sheet get_xls(**keywordssheet set_xls(the_io_stream_in_xls_format**keywordspyexcel sheet xlsm sheet xlsm get/set data in/from xlsm format you could obtain content in xlsm format by dot notationsheet xlsm and you could as well set content by dot notationsheet xlsm the_io_stream_in_xlsm_format if you need to pass on more parametersyou could usesheet get_xlsm(**keywordssheet set_xlsm(the_io_stream_in_xlsm_format**keywordspyexcel sheet xlsx sheet xlsx get/set data in/from xlsx format you could obtain content in xlsx format by dot notationsheet xlsx and you could as well set content by dot notationsheet xlsx the_io_stream_in_xlsx_format if you need to pass on more parametersyou could usesheet get_xlsx(**keywordssheet set_xlsx(the_io_stream_in_xlsx_format**keywordspyexcel sheet ods sheet ods get/set data in/from ods format you could obtain content in ods format by dot notationsheet ods and you could as well set content by dot notationsheet ods the_io_stream_in_ods_format if you need to pass on more parametersyou could use api documentation |
6,395 | sheet get_ods(**keywordssheet set_ods(the_io_stream_in_ods_format**keywordspyexcel sheet stream sheet stream return stream in which the content is properly encoded exampleimport pyexcel as get_book(bookdict={" "[[ ]]}csv_stream stream texttable print(csv_stream getvalue() +--- +---where stream xls getvalue(is equivalent to xls in some situation stream xls is prefered than xls sheet examplesimport pyexcel as sheet([[ ]]' 'csv_stream stream texttable print(csv_stream getvalue() +--- +---where stream xls getvalue(is equivalent to xls in some situation stream xls is prefered than xls it is similar to save_to_memory(formatting sheet format(formatterapply formatting action for the whole sheet pyexcel sheet format sheet format(formatterapply formatting action for the whole sheet exampleimport pyexcel as pe given dictinoary as the following data " "[ ]" "[ ]" "[ ](continues on next page support the project |
6,396 | (continued from previous page" "[ '',sheet pe get_sheet(adict=datasheet row[ [ sheet format(strsheet row[ [' '' '' '' 'sheet format(intsheet row[ [ filtering sheet filter([column_indicesrow_indices]apply the filter with immediate effect pyexcel sheet filter sheet filter(column_indices=nonerow_indices=noneapply the filter with immediate effect transformation sheet project(new_ordered_columns[exclusion]sheet transpose(sheet map(custom_functionsheet region(topleft_cornerbottomright_cornersheet cut(topleft_cornerbottomright_cornersheet paste(topleft_corner[rowscolumns]rearrange the sheet rotate the data table by degrees execute function across all cells of the sheet get rectangle shaped data out get rectangle shaped data out and clear them in position paste rectangle shaped data after position pyexcel sheet project sheet project(new_ordered_columnsexclusion=falserearrange the sheet variables new_ordered_columns new columns exclusion to exlucde named column or not defaults to false examplesheet sheet[[" "" "" "][ ][ ][ ]]name_columns_by_row= sheet project([" "" "" "]pyexcel sheet(continues on next page api documentation |
6,397 | (continued from previous page+++ +=====+=====+===== +++ +++ +++sheet project([" "" "]pyexcel sheet++ +=====+===== ++ ++ ++sheet project([" "" "]exclusion=truepyexcel sheet+ +===== + + +pyexcel sheet transpose sheet transpose(rotate the data table by degrees reference transpose(pyexcel sheet map sheet map(custom_functionexecute function across all cells of the sheet exampleimport pyexcel as pe given dictinoary as the following data " "[ ]" "[ ]" "[ ]" "[ '',(continues on next page support the project |
6,398 | (continued from previous pagesheet pe get_sheet(adict=datasheet row[ [ inc lambda value(float(valueif value !'else )+ sheet map(incsheet row[ [ pyexcel sheet region sheet region(topleft_cornerbottomright_cornerget rectangle shaped data out parameters topleft_corner (slicethe top left corner of the rectangle bottomright_corner (slicethe bottom right corner of the rectangle pyexcel sheet cut sheet cut(topleft_cornerbottomright_cornerget rectangle shaped data out and clear them in position parameters topleft_corner (slicethe top left corner of the rectangle bottomright_corner (slicethe bottom right corner of the rectangle pyexcel sheet paste sheet paste(topleft_cornerrows=nonecolumns=nonepaste rectangle shaped data after position parameters topleft_corner (slicethe top left corner of the rectangle exampleimport pyexcel as pe data [ ] [ ][ ][ ][ pe sheet(datacut <row <column data cut([ ][ ] paste([ , ]rows=datas pyexcel sheet(continues on next page api documentation |
6,399 | (continued from previous page+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+---- paste([ , ]columns=datas pyexcel sheet+----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+---- +----+----+----+----+----+----+----+----+----+----+----+----save changes sheet save_as(filename**keywordssheet save_to_memory(file_type[stream]sheet save_to_database(sessiontable[]sheet save_to_django_model(model[]save the content to named file save the content to memory save data in sheet to database table save to database table through django model pyexcel sheet save_as sheet save_as(filename**keywordssave the content to named file keywords may vary depending on your file typebecause the associated file type employs different library support the project |
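To round off, here is a minimal sketch of Sheet.save_as as introduced above. The file names and data are illustrative only; a writer plugin such as pyexcel-xlsx is assumed for the xlsx destination, and csv-specific keywords are forwarded to the csv writer:

    import pyexcel

    sheet = pyexcel.Sheet([["a", "b"], [1, 2]], name="demo")

    # transcode the sheet to xlsx (requires an xlsx writer plugin)
    sheet.save_as("demo.xlsx")

    # write a pipe-delimited csv by passing a csv fmtparam through save_as
    sheet.save_as("demo.csv", delimiter="|")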