markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
## Connect to Db2

The db2_doConnect routine is called when a connection needs to be established to a Db2 database. The command does not require any parameters since it relies on the settings variable, which contains all of the information it needs to connect to a Db2 database.

```
db2_doConnect()
```

There are four additional variables that are used throughout the routines to stay connected with the Db2 database. These variables are:

- hdbc - The connection handle to the database
- hstmt - A statement handle used for executing SQL statements
- connected - A flag that tells the program whether or not we are currently connected to a database
- runtime - Used to tell %sql the length of time (default 1 second) to run a statement when timing it

The only database driver that is used in this program is the IBM DB2 ODBC DRIVER. This driver needs to be loaded on the system that is connecting to Db2. The Jupyter notebook that is built by this system installs the driver for you, so you shouldn't have to do anything other than build the container.

If the connection is successful, the connected flag is set to True. Any subsequent %sql call will check to see if you are connected and initiate another prompted connection if you do not have a connection to a database.
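As a quick illustration, this is the DSN connection string that db2_doConnect() assembles from the settings. The values below are only the notebook defaults (not real credentials) and would normally come from the saved settings:

```
# Build the same DSN string that db2_doConnect() constructs, using
# placeholder settings (the notebook defaults, not real credentials).
example_settings = {"database": "SAMPLE", "hostname": "localhost", "port": "50000",
                    "uid": "DB2INST1", "pwd": "password", "ssl": ""}

dsn = ("DRIVER={{IBM DB2 ODBC DRIVER}};DATABASE={0};HOSTNAME={1};PORT={2};"
       "PROTOCOL=TCPIP;UID={3};PWD={4};{5}").format(
          example_settings["database"], example_settings["hostname"],
          example_settings["port"], example_settings["uid"],
          example_settings["pwd"], example_settings["ssl"])

print(dsn)
# DRIVER={IBM DB2 ODBC DRIVER};DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=DB2INST1;PWD=password;
```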
|
def db2_doConnect():
global _hdbc, _hdbi, _connected, _runtime
global _settings
if _connected == False:
if len(_settings["database"]) == 0:
return False
dsn = (
"DRIVER={{IBM DB2 ODBC DRIVER}};"
"DATABASE={0};"
"HOSTNAME={1};"
"PORT={2};"
"PROTOCOL=TCPIP;"
"UID={3};"
"PWD={4};{5}").format(_settings["database"],
_settings["hostname"],
_settings["port"],
_settings["uid"],
_settings["pwd"],
_settings["ssl"])
# Get a database handle (hdbc) and a DBI connection handle (hdbi) for subsequent access to Db2
try:
_hdbc = ibm_db.connect(dsn, "", "")
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
try:
_hdbi = ibm_db_dbi.Connection(_hdbc)
except Exception as err:
db2_error(False,True) # errormsg(str(err))
_connected = False
_settings["database"] = ''
return False
_connected = True
# Save the values for future use
save_settings()
success("Connection successful.")
return True
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
## Load/Save Settings

There are two routines that load and save settings between Jupyter notebooks. These routines are called without any parameters.

```
load_settings()
save_settings()
```

There is a global structure called settings which contains the following fields:

```
_settings = {
    "maxrows"  : 10,
    "maxgrid"  : 5,
    "runtime"  : 1,
    "display"  : "TEXT",
    "database" : "",
    "hostname" : "localhost",
    "port"     : "50000",
    "protocol" : "TCPIP",
    "uid"      : "DB2INST1",
    "pwd"      : "password"
}
```

The information in the settings structure is used for re-connecting to a database when you start up a Jupyter notebook. When the session is established for the first time, the load_settings() function is called to get the contents of the pickle file (db2connect.pickle, a Jupyter session file) that will be used for the first connection to the database. Whenever a new connection is made, the file is updated with the save_settings() function.
|
def load_settings():
# This routine will load the settings from the previous session if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'rb') as f:
_settings = pickle.load(f)
# Reset runtime to 1 since it would be unexpected to keep the same value between connections
_settings["runtime"] = 1
_settings["maxgrid"] = 5
except:
pass
return
def save_settings():
# This routine will save the current settings if they exist
global _settings
fname = "db2connect.pickle"
try:
with open(fname,'wb') as f:
pickle.dump(_settings,f)
except:
errormsg("Failed trying to write Db2 Configuration Information.")
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
## Error and Message Functions

There are three types of messages that are thrown by the %db2 magic command. The first routine will print out a success message with no special formatting:

```
success(message)
```

The second message is used for displaying an error message that is not associated with a SQL error. This type of error message is surrounded with a red box to highlight the problem. Note that the success message has code that has been commented out that could also show a successful return code with a green box.

```
errormsg(message)
```

The final error message is based on an error occurring in the SQL code that was executed. This code will parse the message returned from the ibm_db interface and return only the error message portion (and not all of the wrapper text from the driver).

```
db2_error(quiet,connect=False)
```

The quiet flag is passed to the db2_error routine so that messages can be suppressed if the user wishes to ignore them with the -q flag. A good example of this is dropping a table that does not exist. We know that an error will be thrown, so we can ignore it. The information that the db2_error routine gets is from the stmt_errormsg() function within the ibm_db driver. The db2_error function should only be called after a SQL failure; otherwise there will be no diagnostic information returned from stmt_errormsg().

If the connect flag is True, the routine will get the SQLSTATE and SQLCODE from the connection error message rather than a statement error message.
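The sketch below shows the style of parsing that db2_error() performs, using a hypothetical error string of the kind returned by ibm_db.stmt_errormsg(); the exact driver text varies, and only the SQLSTATE= and SQLCODE= tokens are assumed here:

```
# Extract SQLSTATE/SQLCODE from a sample (made-up) driver error message.
errmsg = 'SQL0204N  "DB2INST1.MISSING" is an undefined name.  SQLSTATE=42704 SQLCODE=-204'

def extract(token, text):
    # Find "token=" and return everything up to the next blank (or end of string)
    start = text.find(token + "=")
    if start == -1:
        return None
    end = text.find(" ", start)
    if end == -1:
        end = len(text)
    return text[start + len(token) + 1:end]

print(extract("SQLSTATE", errmsg))   # 42704
print(extract("SQLCODE", errmsg))    # -204
```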
|
def db2_error(quiet,connect=False):
global sqlerror, sqlcode, sqlstate, _environment
try:
if (connect == False):
errmsg = ibm_db.stmt_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
else:
errmsg = ibm_db.conn_errormsg().replace('\r',' ')
errmsg = errmsg[errmsg.rfind("]")+1:].strip()
sqlerror = errmsg
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
except:
errmsg = "Unknown error."
sqlcode = -99999
sqlstate = "-99999"
sqlerror = errmsg
return
msg_start = errmsg.find("SQLSTATE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlstate = errmsg[msg_start+9:msg_end]
else:
sqlstate = "0"
msg_start = errmsg.find("SQLCODE=")
if (msg_start != -1):
msg_end = errmsg.find(" ",msg_start)
if (msg_end == -1):
msg_end = len(errmsg)
sqlcode = errmsg[msg_start+8:msg_end]
try:
sqlcode = int(sqlcode)
except:
pass
else:
sqlcode = 0
if quiet == True: return
if (errmsg == ""): return
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html+errmsg+"</p>"))
else:
print(errmsg)
# Print out an error message
def errormsg(message):
global _environment
if (message != ""):
html = '<p><p style="border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + message + "</p>"))
else:
print(message)
def success(message):
if (message != ""):
print(message)
return
def debug(message,error=False):
global _environment
if (_environment["jupyter"] == True):
spacer = "<br>" + " "
else:
spacer = "\n "
if (message != ""):
lines = message.split('\n')
msg = ""
indent = 0
for line in lines:
delta = line.count("(") - line.count(")")
if (msg == ""):
msg = line
indent = indent + delta
else:
if (delta < 0): indent = indent + delta
msg = msg + spacer * (indent*2) + line
if (delta > 0): indent = indent + delta
if (indent < 0): indent = 0
if (error == True):
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#FF0000; background-color:#ffe6e6; padding: 1em;">'
else:
html = '<p><pre style="font-family: monospace; border:2px; border-style:solid; border-color:#008000; background-color:#e6ffe6; padding: 1em;">'
if (_environment["jupyter"] == True):
pdisplay(pHTML(html + msg + "</pre></p>"))
else:
print(msg)
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
## Macro Processor

A macro is used to generate SQL to be executed by overriding or creating a new keyword. For instance, the base `%sql` command does not understand the `LIST TABLES` command which is usually used in conjunction with the `CLP` processor. Rather than specifically code this in the base `db2.ipynb` file, we can create a macro that can execute this code for us.

There are four routines that deal with macros:

- checkMacro is used to find a macro call in a string; any macro that is found is passed to runMacro.
- runMacro will evaluate the macro body and return the generated string to the parser.
- subvars is used to substitute the variables used as part of a macro call.
- setMacro is used to catalog a macro.

### Set Macro

This code will catalog a macro call.
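As a quick end-to-end sketch of how these routines fit together (the macro name, body, and invocation below are invented, and this assumes the setMacro, checkMacro, runMacro and parseArgs cells that follow have been run, together with the notebook's _macros dictionary):

```
# Catalog a made-up macro named EMPDEPT; setMacro() only looks at the second
# token of the parms string, so "define" here is just a placeholder keyword.
macro_body = """
SELECT LASTNAME, WORKDEPT FROM EMPLOYEE
WHERE WORKDEPT = '{1}'
"""
setMacro(macro_body, "define empdept")

# Every %sql statement is first passed through checkMacro(); because the
# first token matches a cataloged macro, the body is expanded with {1}
# replaced by the first argument.
print(checkMacro("empdept D11"))
# SELECT LASTNAME, WORKDEPT FROM EMPLOYEE
# WHERE WORKDEPT = 'D11'
```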
|
def setMacro(inSQL,parms):
global _macros
names = parms.split()
if (len(names) < 2):
errormsg("No command name supplied.")
return None
macroName = names[1].upper()
_macros[macroName] = inSQL
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Check Macro

This code will check to see if there is a macro command in the SQL. It will take the SQL that is supplied and strip out three values: the first and second keywords, and the remainder of the parameters.

For instance, consider the following statement:

```
CREATE DATABASE GEORGE options....
```

The name of the macro that we want to run is called `CREATE`. We know that there is a SQL command called `CREATE`, but this code will call the macro first to see if it needs to run any special code. For instance, `CREATE DATABASE` is not part of the `db2.ipynb` syntax, but we can add it in by using a macro.

The check macro logic will strip out the subcommand (`DATABASE`) and place the remainder of the string after `DATABASE` in options.
|
def checkMacro(in_sql):
global _macros
if (len(in_sql) == 0): return(in_sql) # Nothing to do
tokens = parseArgs(in_sql,None) # Take the string and reduce into tokens
macro_name = tokens[0].upper() # Uppercase the name of the token
if (macro_name not in _macros):
return(in_sql) # No macro by this name so just return the string
result = runMacro(_macros[macro_name],in_sql,tokens) # Execute the macro using the tokens we found
return(result) # Runmacro will either return the original SQL or the new one
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Split Assignment

This routine will return the name of a variable and its value when the format is x=y. If y is enclosed in quotes, the quotes are removed.
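A brief usage sketch, assuming the splitassign() cell below has been run (the option strings are only examples):

```
print(splitassign("display='GRID'"))   # ('display', 'GRID')
print(splitassign("maxrows=20"))       # ('maxrows', '20')
print(splitassign("standalone"))       # ('null', 'standalone')
```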
|
def splitassign(arg):
var_name = "null"
var_value = "null"
arg = arg.strip()
eq = arg.find("=")
if (eq != -1):
var_name = arg[:eq].strip()
temp_value = arg[eq+1:].strip()
if (temp_value != ""):
ch = temp_value[0]
if (ch in ["'",'"']):
if (temp_value[-1:] == ch):
var_value = temp_value[1:-1]
else:
var_value = temp_value
else:
var_value = temp_value
else:
var_value = arg
return var_name, var_value
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Parse Args

The commands that are used in the macros need to be parsed into their separate tokens. The tokens are separated by blanks, and strings that are enclosed in quotes are kept together.
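A short usage sketch, assuming the parseArgs() cell below has been run; the second argument is the variable dictionary used for substitution (None means no substitution is performed):

```
print(parseArgs("LIST TABLES FOR SCHEMA DB2INST1", None))
# ['LIST', 'TABLES', 'FOR', 'SCHEMA', 'DB2INST1']

print(parseArgs('option "quoted string stays together" 42', None))
# ['option', '"quoted string stays together"', '42']
```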
|
def parseArgs(argin,_vars):
quoteChar = ""
inQuote = False
inArg = True
args = []
arg = ''
for ch in argin.lstrip():
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
arg = arg + ch #z
else:
arg = arg + ch
elif (ch == "\"" or ch == "\'"): # Do we have a quote
quoteChar = ch
arg = arg + ch #z
inQuote = True
elif (ch == " "):
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
else:
args.append("null")
arg = ""
else:
arg = arg + ch
if (arg != ""):
arg = subvars(arg,_vars)
args.append(arg)
return(args)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Run Macro

This code will execute the body of the macro and return the results for that macro call.
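The hypothetical macro body below illustrates the keywords that runMacro() understands: comment lines (#), var assignments, if/else/endif blocks, and {n}, {^n}, {name} and {argc} substitution. The macro text and invocation are invented, and the parseArgs, subvars and runMacro cells are assumed to have been run:

```
list_macro = """
# Expand "list tables [schema]" into a catalog query
var syscat SYSCAT.TABLES
if {argc} = 2
SELECT TABNAME FROM {syscat} WHERE TABSCHEMA = '{^2}'
else
SELECT TABSCHEMA, TABNAME FROM {syscat}
endif
"""

tokens = parseArgs("list tables db2inst1", None)
print(runMacro(list_macro, "list tables db2inst1", tokens))
# SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = 'DB2INST1'
```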
|
def runMacro(script,in_sql,tokens):
result = ""
runIT = True
code = script.split("\n")
level = 0
runlevel = [True,False,False,False,False,False,False,False,False,False]
ifcount = 0
_vars = {}
for i in range(0,len(tokens)):
vstr = str(i)
_vars[vstr] = tokens[i]
if (len(tokens) == 0):
_vars["argc"] = "0"
else:
_vars["argc"] = str(len(tokens)-1)
for line in code:
line = line.strip()
if (line == "" or line == "\n"): continue
if (line[0] == "#"): continue # A comment line starts with a # in the first position of the line
args = parseArgs(line,_vars) # Get all of the arguments
if (args[0] == "if"):
ifcount = ifcount + 1
if (runlevel[level] == False): # You can't execute this statement
continue
level = level + 1
if (len(args) < 4):
print("Macro: Incorrect number of arguments for the if clause.")
return in_sql
arg1 = args[1]
arg2 = args[3]
if (len(arg2) > 2):
ch1 = arg2[0]
ch2 = arg2[-1:]
if (ch1 in ['"',"'"] and ch1 == ch2):
arg2 = arg2[1:-1].strip()
op = args[2]
if (op in ["=","=="]):
if (arg1 == arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<=","=<"]):
if (arg1 <= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">=","=>"]):
if (arg1 >= arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<>","!="]):
if (arg1 != arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in ["<"]):
if (arg1 < arg2):
runlevel[level] = True
else:
runlevel[level] = False
elif (op in [">"]):
if (arg1 > arg2):
runlevel[level] = True
else:
runlevel[level] = False
else:
print("Macro: Unknown comparison operator in the if statement:" + op)
continue
elif (args[0] in ["exit","echo"] and runlevel[level] == True):
msg = ""
for msgline in args[1:]:
if (msg == ""):
msg = subvars(msgline,_vars)
else:
msg = msg + " " + subvars(msgline,_vars)
if (msg != ""):
if (args[0] == "echo"):
debug(msg,error=False)
else:
debug(msg,error=True)
if (args[0] == "exit"): return ''
elif (args[0] == "pass" and runlevel[level] == True):
pass
elif (args[0] == "var" and runlevel[level] == True):
value = ""
for val in args[2:]:
if (value == ""):
value = subvars(val,_vars)
else:
value = value + " " + subvars(val,_vars)
value = value.strip()
_vars[args[1]] = value
elif (args[0] == 'else'):
if (ifcount == level):
runlevel[level] = not runlevel[level]
elif (args[0] == 'return' and runlevel[level] == True):
return(result)
elif (args[0] == "endif"):
ifcount = ifcount - 1
if (ifcount < level):
level = level - 1
if (level < 0):
print("Macro: Unmatched if/endif pairs.")
return ''
else:
if (runlevel[level] == True):
if (result == ""):
result = subvars(line,_vars)
else:
result = result + "\n" + subvars(line,_vars)
return(result)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Substitute Vars

This routine is used by the runMacro program to substitute the contents of the variables ({name}) that are used within macros. These variables are kept separate from the rest of the code.
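A usage sketch, assuming the subvars() cell below has been run; {name} inserts a variable, {^name} uppercases it, and {*n} concatenates variable n and everything numbered after it (the variable values are only examples):

```
macro_vars = {"0": "list", "1": "tables", "2": "for", "3": "schema", "owner": "db2inst1"}

print(subvars("SELECT * FROM {^owner}.EMPLOYEE", macro_vars))
# SELECT * FROM DB2INST1.EMPLOYEE

print(subvars("arguments: {*1}", macro_vars))
# arguments: tables for schema
```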
|
def subvars(script,_vars):
if (_vars == None): return script
remainder = script
result = ""
done = False
while done == False:
bv = remainder.find("{")
if (bv == -1):
done = True
continue
ev = remainder.find("}")
if (ev == -1):
done = True
continue
result = result + remainder[:bv]
vvar = remainder[bv+1:ev]
remainder = remainder[ev+1:]
upper = False
allvars = False
if (vvar[0] == "^"):
upper = True
vvar = vvar[1:]
elif (vvar[0] == "*"):
vvar = vvar[1:]
allvars = True
else:
pass
if (vvar in _vars):
if (upper == True):
items = _vars[vvar].upper()
elif (allvars == True):
try:
iVar = int(vvar)
except:
return(script)
items = ""
sVar = str(iVar)
while sVar in _vars:
if (items == ""):
items = _vars[sVar]
else:
items = items + " " + _vars[sVar]
iVar = iVar + 1
sVar = str(iVar)
else:
items = _vars[vvar]
else:
if (allvars == True):
items = ""
else:
items = "null"
result = result + items
if (remainder != ""):
result = result + remainder
return(result)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### SQL Timer

The calling format of this routine is:

```
count = sqlTimer(hdbc, runtime, inSQL)
```

This code runs the SQL string multiple times for one second (by default). The accuracy of the clock is not that great when you are running just one statement, so instead this routine will run the code multiple times for a second to give you an execution count. If you need to run the code for more than one second, the runtime value needs to be set to the number of seconds you want the code to run.

The return result is always the number of times that the code executed. Note that the program skips reading the data if it is a SELECT statement, so the count does not include fetch time for the answer set.
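A usage sketch, assuming the notebook has been loaded and a connection handle _hdbc exists (see db2_doConnect); SYSIBM.SYSDUMMY1 is a standard one-row Db2 table:

```
count = sqlTimer(_hdbc, 1, "SELECT 1 FROM SYSIBM.SYSDUMMY1")
print(f"Executed {count} times in 1 second")
```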
|
def sqlTimer(hdbc, runtime, inSQL):
count = 0
t_end = time.time() + runtime
while time.time() < t_end:
try:
stmt = ibm_db.exec_immediate(hdbc,inSQL)
if (stmt == False):
db2_error(flag(["-q","-quiet"]))
return(-1)
ibm_db.free_result(stmt)
except Exception as err:
db2_error(False)
return(-1)
count = count + 1
return(count)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Split Args

This routine takes as an argument a string and then splits the arguments according to the following logic:

* If the string starts with a `(` character, it will check the last character in the string and see if it is a `)` and then remove those characters
* Every parameter is separated by a comma `,` and commas within quotes are ignored
* Each parameter returned will have three values returned - one for the value itself, an indicator which will be either True if it was quoted, or False if not, and True or False if it is numeric.

Example:

```
"abcdef",abcdef,456,"856"
```

Four values would be returned:

```
[abcdef,True,False],[abcdef,False,False],[456,False,True],[856,True,False]
```

Any quoted string will be False for numeric. The way that the parameters are handled is up to the calling program. However, in the case of Db2, quoted strings must be in single quotes, so any quoted parameter using the double quotes `"` must be wrapped with single quotes. There is always a possibility that a string contains single quotes (e.g. O'Connor), so any substituted text should use `''` so that Db2 can properly interpret the string. This routine does not adjust the strings with quotes, and depends on the variable substitution routine to do that.
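A usage sketch, assuming the splitargs() cell below has been run; each returned entry is [value, was_quoted, is_numeric], and the argument string is only an example:

```
print(splitargs('("abcdef",ghijk,456,"856")'))
# [['abcdef', True, False], ['ghijk', False, False], [456, False, True], ['856', True, False]]
```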
|
def splitargs(arguments):
import types
# Strip the string and remove the ( and ) characters if they are at the beginning and end of the string
results = []
step1 = arguments.strip()
if (len(step1) == 0): return(results) # Not much to do here - no args found
if (step1[0] == '('):
if (step1[-1:] == ')'):
step2 = step1[1:-1]
step2 = step2.strip()
else:
step2 = step1
else:
step2 = step1
# Now we have a string without brackets. Start scanning for commas
quoteCH = ""
pos = 0
arg = ""
args = []
while pos < len(step2):
ch = step2[pos]
if (quoteCH == ""): # Are we in a quote?
if (ch in ('"',"'")): # Check to see if we are starting a quote
quoteCH = ch
arg = arg + ch
pos += 1
elif (ch == ","): # Are we at the end of a parameter?
arg = arg.strip()
args.append(arg)
arg = ""
inarg = False
pos += 1
else: # Continue collecting the string
arg = arg + ch
pos += 1
else:
if (ch == quoteCH): # Are we at the end of a quote?
arg = arg + ch # Add the quote to the string
pos += 1 # Increment past the quote
quoteCH = "" # Stop quote checking (maybe!)
else:
pos += 1
arg = arg + ch
if (quoteCH != ""): # So we didn't end our string
arg = arg.strip()
args.append(arg)
elif (arg != ""): # Something left over as an argument
arg = arg.strip()
args.append(arg)
else:
pass
results = []
for arg in args:
result = []
if (len(arg) > 0):
if (arg[0] in ('"',"'")):
value = arg[1:-1]
isString = True
isNumber = False
else:
isString = False
isNumber = False
try:
value = eval(arg)
if (type(value) == int):
isNumber = True
elif (isinstance(value,float) == True):
isNumber = True
else:
value = arg
except:
value = arg
else:
value = ""
isString = False
isNumber = False
result = [value,isString,isNumber]
results.append(result)
return results
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### DataFrame Table Creation

When using dataframes, it is sometimes useful to use the definition of the dataframe to create a Db2 table. The format of the command is:

```
%sql using <df> create table <name> [with data | columns asis]
```

The `<df>` value is the name of the dataframe, not its contents (`:df`). The definition of the data types in the dataframe will be used to create the Db2 table using typical Db2 data types rather than generic CLOBs and FLOAT for numeric objects. The two options are used to handle how the conversion is done. If you supply `with data`, the contents of the df will be inserted into the table, otherwise the table is defined only. The column names will be uppercased and special characters (like blanks) will be replaced with underscores. If `columns asis` is specified, the column names will remain the same as in the dataframe, with each name using quotes to guarantee the same spelling as in the DF. If the table already exists, the command will not run and an error message will be produced.
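As a hypothetical illustration (the dataframe and table names are invented, and this assumes the %sql magic has been loaded and a connection exists), a small dataframe could be turned into a populated Db2 table like this:

```
import pandas

sales_df = pandas.DataFrame({"REGION": ["EAST", "WEST"], "AMOUNT": [1000.0, 1250.0]})

%sql using sales_df create table SALES_SUMMARY with data
```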
|
def createDF(hdbc,sqlin,local_ns):
import datetime
import ibm_db
global sqlcode
# Strip apart the command into tokens based on spaces
tokens = sqlin.split()
token_count = len(tokens)
if (token_count < 5): # Not enough parameters
errormsg("Insufficient arguments for USING command. %sql using df create table name [with data | columns asis]")
return
keyword_command = tokens[0].upper()
dfName = tokens[1]
keyword_create = tokens[2].upper()
keyword_table = tokens[3].upper()
table = tokens[4]
if (keyword_create not in ("CREATE","REPLACE") or keyword_table != "TABLE"):
errormsg("Incorrect syntax: %sql using <df> create table <name> [options]")
return
if (token_count % 2 != 1):
errormsg("Insufficient arguments for USING command. %sql using df create table name [with data | columns asis | keep float]")
return
flag_withdata = False
flag_asis = False
flag_float = False
flag_integer = False
limit = -1
if (keyword_create == "REPLACE"):
%sql -q DROP TABLE {table}
for token_idx in range(5,token_count,2):
option_key = tokens[token_idx].upper()
option_val = tokens[token_idx+1].upper()
if (option_key == "WITH" and option_val == "DATA"):
flag_withdata = True
elif (option_key == "COLUMNS" and option_val == "ASIS"):
flag_asis = True
elif (option_key == "KEEP" and option_val == "FLOAT64"):
flag_float = True
elif (option_key == "KEEP" and option_val == "INT64"):
flag_integer = True
elif (option_key == "LIMIT"):
if (option_val.isnumeric() == False):
errormsg("The LIMIT must be a valid number from -1 (unlimited) to the maximun number of rows to insert")
return
limit = int(option_val)
else:
errormsg("Invalid options. Must be either WITH DATA | COLUMNS ASIS | KEEP FLOAT64 | KEEP FLOAT INT64")
return
dfName = tokens[1]
if (dfName not in local_ns):
errormsg("The variable ({dfName}) does not exist in the local variable list.")
return
try:
df_value = eval(dfName,None,local_ns) # globals()[varName] # eval(varName)
except:
errormsg("The variable ({dfName}) does not contain a value.")
return
if (isinstance(df_value,pandas.DataFrame) == False): # Not a Pandas dataframe
errormsg("The variable ({dfName}) is not a Pandas dataframe.")
return
sql = []
columns = dict(df_value.dtypes)
sql.append(f'CREATE TABLE {table} (')
datatypes = []
comma = ""
for column in columns:
datatype = columns[column]
if (datatype == "object"):
datapoint = df_value[column][0]
if (isinstance(datapoint,datetime.datetime)):
type = "TIMESTAMP"
elif (isinstance(datapoint,datetime.time)):
type = "TIME"
elif (isinstance(datapoint,datetime.date)):
type = "DATE"
elif (isinstance(datapoint,float)):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (isinstance(datapoint,int)):
if (flag_integer == True):
type = "BIGINT"
else:
type = "INTEGER"
elif (isinstance(datapoint,str)):
maxlength = df_value[column].apply(str).apply(len).max()
type = f"VARCHAR({maxlength})"
else:
type = "CLOB"
elif (datatype == "int64"):
if (flag_integer == True):
type = "BIGINT"
else:
type = "INTEGER"
elif (datatype == "float64"):
if (flag_float == True):
type = "FLOAT"
else:
type = "DECFLOAT"
elif (datatype == "datetime64"):
type = "TIMESTAMP"
elif (datatype == "bool"):
type = "BINARY"
else:
type = "CLOB"
datatypes.append(type)
if (flag_asis == False):
if (isinstance(column,str) == False):
column = str(column)
identifier = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"
column_name = column.strip().upper()
new_name = ""
for ch in column_name:
if (ch not in identifier):
new_name = new_name + "_"
else:
new_name = new_name + ch
new_name = new_name.lstrip('_').rstrip('_')
if (new_name == "" or new_name[0] not in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
new_name = f'"{column}"'
else:
new_name = f'"{column}"'
sql.append(f" {new_name} {type}")
sql.append(")")
sqlcmd = ""
for i in range(0,len(sql)):
if (i > 0 and i < len(sql)-2):
comma = ","
else:
comma = ""
sqlcmd = "{}\n{}{}".format(sqlcmd,sql[i],comma)
print(sqlcmd)
%sql {sqlcmd}
if (sqlcode != 0):
return
if (flag_withdata == True):
autocommit = ibm_db.autocommit(hdbc)
ibm_db.autocommit(hdbc,False)
row_count = 0
insert_sql = ""
rows, cols = df_value.shape
for row in range(0,rows):
insert_row = ""
for col in range(0, cols):
value = df_value.iloc[row][col]
if (datatypes[col] == "CLOB" or "VARCHAR" in datatypes[col]):
value = str(value)
value = addquotes(value,True)
elif (datatypes[col] in ("TIME","DATE","TIMESTAMP")):
value = str(value)
value = addquotes(value,True)
elif (datatypes[col] in ("INTEGER","DECFLOAT","FLOAT","BINARY")):
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
else:
value = str(value)
value = addquotes(value,True)
if (insert_row == ""):
insert_row = f"{value}"
else:
insert_row = f"{insert_row},{value}"
if (insert_sql == ""):
insert_sql = f"INSERT INTO {table} VALUES ({insert_row})"
else:
insert_sql = f"{insert_sql},({insert_row})"
row_count += 1
if (row_count % 1000 == 0 or row_count == limit):
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
if (result == False): # Error executing the code
db2_error(False)
return
ibm_db.commit(hdbc)
print(f"\r{row_count} of {rows} rows inserted.",end="")
insert_sql = ""
if (row_count == limit):
break
if (insert_sql != ""):
result = ibm_db.exec_immediate(hdbc, insert_sql) # Run it
if (result == False): # Error executing the code
db2_error(False)
ibm_db.commit(hdbc)
ibm_db.autocommit(hdbc,autocommit)
print("\nInsert completed.")
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### SQL Parser

The calling format of this routine is:

```
sql_cmd, encoded_sql = sqlParser(sql_input)
```

This code will look at the SQL string that has been passed to it and parse it into two values:

- sql_cmd: First command in the list (so this may not be the actual SQL command)
- encoded_sql: the SQL with any :variable references replaced by the contents of the corresponding Python variables
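A usage sketch, assuming the sqlParser() cell below and its helper cells (getContents, addquotes) have been run; the :dept marker is replaced by the contents of the Python variable dept, with quotes added because it is a string:

```
dept = "D11"
cmd, encoded = sqlParser("SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT = :dept", locals())
print(cmd)       # SELECT
print(encoded)   # SELECT LASTNAME FROM EMPLOYEE WHERE WORKDEPT = 'D11'
```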
|
def sqlParser(sqlin,local_ns):
sql_cmd = ""
encoded_sql = sqlin
firstCommand = "(?:^\s*)([a-zA-Z]+)(?:\s+.*|$)"
findFirst = re.match(firstCommand,sqlin)
if (findFirst == None): # We did not find a match so we just return the empty string
return sql_cmd, encoded_sql
cmd = findFirst.group(1)
sql_cmd = cmd.upper()
#
# Scan the input string looking for variables in the format :var. If no : is found just return.
# Var must be alpha+number+_ to be valid
#
if (':' not in sqlin): # A quick check to see if parameters are in here, but not fool-proof!
return sql_cmd, encoded_sql
inVar = False
inQuote = ""
varName = ""
encoded_sql = ""
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
PANDAS = 5
for ch in sqlin:
if (inVar == True): # We are collecting the name of a variable
if (ch.upper() in "@_ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789[]"):
varName = varName + ch
continue
else:
if (varName == ""):
encoded_sql = encoded_sql + ":"
elif (varName[0] in ('[',']')):
encoded_sql = encoded_sql + ":" + varName
else:
if (ch == '.'): # If the variable name is stopped by a period, assume no quotes are used
flag_quotes = False
else:
flag_quotes = True
varValue, varType = getContents(varName,flag_quotes,local_ns)
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == RAW):
encoded_sql = encoded_sql + varValue
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
flag_quotes = True
try:
if (v.find('0x') == 0): # Just guessing this is a hex value at beginning
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
encoded_sql = encoded_sql + ch
varName = ""
inVar = False
elif (inQuote != ""):
encoded_sql = encoded_sql + ch
if (ch == inQuote): inQuote = ""
elif (ch in ("'",'"')):
encoded_sql = encoded_sql + ch
inQuote = ch
elif (ch == ":"): # This might be a variable
varName = ""
inVar = True
else:
encoded_sql = encoded_sql + ch
if (inVar == True):
varValue, varType = getContents(varName,True,local_ns) # We assume the end of a line is quoted
if (varType != PANDAS and varValue == None):
encoded_sql = encoded_sql + ":" + varName
else:
if (varType == STRING):
encoded_sql = encoded_sql + varValue
elif (varType == NUMBER):
encoded_sql = encoded_sql + str(varValue)
elif (varType == PANDAS):
insertsql = ""
coltypes = varValue.dtypes
rows, cols = varValue.shape
for row in range(0,rows):
insertrow = ""
for col in range(0, cols):
value = varValue.iloc[row][col]
if (coltypes[col] == "object"):
value = str(value)
value = addquotes(value,True)
else:
strvalue = str(value)
if ("NAN" in strvalue.upper()):
value = "NULL"
if (insertrow == ""):
insertrow = f"{value}"
else:
insertrow = f"{insertrow},{value}"
if (insertsql == ""):
insertsql = f"({insertrow})"
else:
insertsql = f"{insertsql},({insertrow})"
encoded_sql = encoded_sql + insertsql
elif (varType == LIST):
flag_quotes = True
start = True
for v in varValue:
if (start == False):
encoded_sql = encoded_sql + ","
if (isinstance(v,int) == True): # Integer value
encoded_sql = encoded_sql + str(v)
elif (isinstance(v,float) == True):
encoded_sql = encoded_sql + str(v)
else:
try:
if (v.find('0x') == 0): # Just guessing this is a hex value
encoded_sql = encoded_sql + v
else:
encoded_sql = encoded_sql + addquotes(v,flag_quotes) # String
except:
encoded_sql = encoded_sql + addquotes(str(v),flag_quotes)
start = False
return sql_cmd, encoded_sql
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Variable Contents Function

The calling format of this routine is:

```
value, var_type = getContents(varName, flag_quotes, local_ns)
```

This code will take the name of a variable as input and return the contents of that variable along with an indicator of its type. If the variable is not found then the program will return None, which is the equivalent to empty or null. Note that this function looks at the global variable pool for Python, so it is possible that the wrong version of a variable is returned if it is used in different functions. For this reason, any variables used in SQL statements should use a unique naming convention if possible.

The other thing that this function does is replace single quotes with two quotes. The reason for doing this is that Db2 will convert two single quotes into one quote when dealing with strings. This avoids problems when dealing with text that contains multiple quotes within the string. Note that this substitution is done only for single quote characters since the double quote character is used by Db2 for naming columns that are case sensitive or contain special characters.

If the flag_quotes value is True, the field will have quotes around it. The local_ns value contains the variables currently registered in Python.
|
def getContents(varName,flag_quotes,local_ns):
#
# Get the contents of the variable name that is passed to the routine. Only simple
# variables are checked, i.e. arrays and lists are not parsed
#
STRING = 0
NUMBER = 1
LIST = 2
RAW = 3
DICT = 4
PANDAS = 5
try:
value = eval(varName,None,local_ns) # globals()[varName] # eval(varName)
except:
return(None,STRING)
if (isinstance(value,dict) == True): # Check to see if this is JSON dictionary
return(addquotes(value,flag_quotes),STRING)
elif(isinstance(value,list) == True): # List - tricky
return(value,LIST)
elif (isinstance(value,pandas.DataFrame) == True): # Pandas dataframe
return(value,PANDAS)
elif (isinstance(value,int) == True): # Integer value
return(value,NUMBER)
elif (isinstance(value,float) == True): # Float value
return(value,NUMBER)
else:
try:
# The pattern needs to be in the first position (0 in Python terms)
if (value.find('0x') == 0): # Just guessing this is a hex value
return(value,RAW)
else:
return(addquotes(value,flag_quotes),STRING) # String
except:
return(addquotes(str(value),flag_quotes),RAW)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Add Quotes

Quotes are a challenge when dealing with dictionaries and Db2. Db2 wants strings delimited with single quotes, while dictionaries use double quotes. That wouldn't be a problem except that embedded single quotes within these dictionaries will cause things to fail. This routine escapes the single quotes within the dictionary by doubling them.
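A usage sketch, assuming the addquotes() cell below has been run (and that json has been imported by the notebook); single quotes are doubled so that Db2 parses the literal correctly:

```
print(addquotes("O'Connor", True))
# 'O''Connor'

print(addquotes({"name": "O'Connor"}, True))
# '{"name": "O''Connor"}'
```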
|
def addquotes(inString,flag_quotes):
if (isinstance(inString,dict) == True): # Check to see if this is JSON dictionary
serialized = json.dumps(inString)
else:
serialized = inString
# Replace single quotes with '' (two quotes) and wrap everything in single quotes
if (flag_quotes == False):
return(serialized)
else:
return("'"+serialized.replace("'","''")+"'") # Convert single quotes to two single quotes
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Create the SAMPLE Database Tables

The calling format of this routine is:

```
db2_create_sample(quiet)
```

There are a lot of examples that depend on the data within the SAMPLE database. If you are running these examples and the connection is not to the SAMPLE database, then this code will create the two (EMPLOYEE, DEPARTMENT) tables that are used by most examples. If the function finds that these tables already exist, then nothing is done. If the tables are missing then they will be created with the same data as in the SAMPLE database.

The quiet flag tells the program not to print any messages when the creation of the tables is complete.
|
def db2_create_sample(quiet):
create_department = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='DEPARTMENT' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE DEPARTMENT(DEPTNO CHAR(3) NOT NULL, DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6),ADMRDEPT CHAR(3) NOT NULL)');
EXECUTE IMMEDIATE('INSERT INTO DEPARTMENT VALUES
(''A00'',''SPIFFY COMPUTER SERVICE DIV.'',''000010'',''A00''),
(''B01'',''PLANNING'',''000020'',''A00''),
(''C01'',''INFORMATION CENTER'',''000030'',''A00''),
(''D01'',''DEVELOPMENT CENTER'',NULL,''A00''),
(''D11'',''MANUFACTURING SYSTEMS'',''000060'',''D01''),
(''D21'',''ADMINISTRATION SYSTEMS'',''000070'',''D01''),
(''E01'',''SUPPORT SERVICES'',''000050'',''A00''),
(''E11'',''OPERATIONS'',''000090'',''E01''),
(''E21'',''SOFTWARE SUPPORT'',''000100'',''E01''),
(''F22'',''BRANCH OFFICE F2'',NULL,''E01''),
(''G22'',''BRANCH OFFICE G2'',NULL,''E01''),
(''H22'',''BRANCH OFFICE H2'',NULL,''E01''),
(''I22'',''BRANCH OFFICE I2'',NULL,''E01''),
(''J22'',''BRANCH OFFICE J2'',NULL,''E01'')');
END IF;
END"""
%sql -d -q {create_department}
create_employee = """
BEGIN
DECLARE FOUND INTEGER;
SET FOUND = (SELECT COUNT(*) FROM SYSIBM.SYSTABLES WHERE NAME='EMPLOYEE' AND CREATOR=CURRENT USER);
IF FOUND = 0 THEN
EXECUTE IMMEDIATE('CREATE TABLE EMPLOYEE(
EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1),
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
HIREDATE DATE,
JOB CHAR(8),
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1),
BIRTHDATE DATE,
SALARY DECIMAL(9,2),
BONUS DECIMAL(9,2),
COMM DECIMAL(9,2)
)');
EXECUTE IMMEDIATE('INSERT INTO EMPLOYEE VALUES
(''000010'',''CHRISTINE'',''I'',''HAAS'' ,''A00'',''3978'',''1995-01-01'',''PRES '',18,''F'',''1963-08-24'',152750.00,1000.00,4220.00),
(''000020'',''MICHAEL'' ,''L'',''THOMPSON'' ,''B01'',''3476'',''2003-10-10'',''MANAGER '',18,''M'',''1978-02-02'',94250.00,800.00,3300.00),
(''000030'',''SALLY'' ,''A'',''KWAN'' ,''C01'',''4738'',''2005-04-05'',''MANAGER '',20,''F'',''1971-05-11'',98250.00,800.00,3060.00),
(''000050'',''JOHN'' ,''B'',''GEYER'' ,''E01'',''6789'',''1979-08-17'',''MANAGER '',16,''M'',''1955-09-15'',80175.00,800.00,3214.00),
(''000060'',''IRVING'' ,''F'',''STERN'' ,''D11'',''6423'',''2003-09-14'',''MANAGER '',16,''M'',''1975-07-07'',72250.00,500.00,2580.00),
(''000070'',''EVA'' ,''D'',''PULASKI'' ,''D21'',''7831'',''2005-09-30'',''MANAGER '',16,''F'',''2003-05-26'',96170.00,700.00,2893.00),
(''000090'',''EILEEN'' ,''W'',''HENDERSON'' ,''E11'',''5498'',''2000-08-15'',''MANAGER '',16,''F'',''1971-05-15'',89750.00,600.00,2380.00),
(''000100'',''THEODORE'' ,''Q'',''SPENSER'' ,''E21'',''0972'',''2000-06-19'',''MANAGER '',14,''M'',''1980-12-18'',86150.00,500.00,2092.00),
(''000110'',''VINCENZO'' ,''G'',''LUCCHESSI'' ,''A00'',''3490'',''1988-05-16'',''SALESREP'',19,''M'',''1959-11-05'',66500.00,900.00,3720.00),
(''000120'',''SEAN'' ,'' '',''O`CONNELL'' ,''A00'',''2167'',''1993-12-05'',''CLERK '',14,''M'',''1972-10-18'',49250.00,600.00,2340.00),
(''000130'',''DELORES'' ,''M'',''QUINTANA'' ,''C01'',''4578'',''2001-07-28'',''ANALYST '',16,''F'',''1955-09-15'',73800.00,500.00,1904.00),
(''000140'',''HEATHER'' ,''A'',''NICHOLLS'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''000150'',''BRUCE'' ,'' '',''ADAMSON'' ,''D11'',''4510'',''2002-02-12'',''DESIGNER'',16,''M'',''1977-05-17'',55280.00,500.00,2022.00),
(''000160'',''ELIZABETH'',''R'',''PIANKA'' ,''D11'',''3782'',''2006-10-11'',''DESIGNER'',17,''F'',''1980-04-12'',62250.00,400.00,1780.00),
(''000170'',''MASATOSHI'',''J'',''YOSHIMURA'' ,''D11'',''2890'',''1999-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',44680.00,500.00,1974.00),
(''000180'',''MARILYN'' ,''S'',''SCOUTTEN'' ,''D11'',''1682'',''2003-07-07'',''DESIGNER'',17,''F'',''1979-02-21'',51340.00,500.00,1707.00),
(''000190'',''JAMES'' ,''H'',''WALKER'' ,''D11'',''2986'',''2004-07-26'',''DESIGNER'',16,''M'',''1982-06-25'',50450.00,400.00,1636.00),
(''000200'',''DAVID'' ,'' '',''BROWN'' ,''D11'',''4501'',''2002-03-03'',''DESIGNER'',16,''M'',''1971-05-29'',57740.00,600.00,2217.00),
(''000210'',''WILLIAM'' ,''T'',''JONES'' ,''D11'',''0942'',''1998-04-11'',''DESIGNER'',17,''M'',''2003-02-23'',68270.00,400.00,1462.00),
(''000220'',''JENNIFER'' ,''K'',''LUTZ'' ,''D11'',''0672'',''1998-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',49840.00,600.00,2387.00),
(''000230'',''JAMES'' ,''J'',''JEFFERSON'' ,''D21'',''2094'',''1996-11-21'',''CLERK '',14,''M'',''1980-05-30'',42180.00,400.00,1774.00),
(''000240'',''SALVATORE'',''M'',''MARINO'' ,''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''2002-03-31'',48760.00,600.00,2301.00),
(''000250'',''DANIEL'' ,''S'',''SMITH'' ,''D21'',''0961'',''1999-10-30'',''CLERK '',15,''M'',''1969-11-12'',49180.00,400.00,1534.00),
(''000260'',''SYBIL'' ,''P'',''JOHNSON'' ,''D21'',''8953'',''2005-09-11'',''CLERK '',16,''F'',''1976-10-05'',47250.00,300.00,1380.00),
(''000270'',''MARIA'' ,''L'',''PEREZ'' ,''D21'',''9001'',''2006-09-30'',''CLERK '',15,''F'',''2003-05-26'',37380.00,500.00,2190.00),
(''000280'',''ETHEL'' ,''R'',''SCHNEIDER'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1976-03-28'',36250.00,500.00,2100.00),
(''000290'',''JOHN'' ,''R'',''PARKER'' ,''E11'',''4502'',''2006-05-30'',''OPERATOR'',12,''M'',''1985-07-09'',35340.00,300.00,1227.00),
(''000300'',''PHILIP'' ,''X'',''SMITH'' ,''E11'',''2095'',''2002-06-19'',''OPERATOR'',14,''M'',''1976-10-27'',37750.00,400.00,1420.00),
(''000310'',''MAUDE'' ,''F'',''SETRIGHT'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''000320'',''RAMLAL'' ,''V'',''MEHTA'' ,''E21'',''9990'',''1995-07-07'',''FIELDREP'',16,''M'',''1962-08-11'',39950.00,400.00,1596.00),
(''000330'',''WING'' ,'' '',''LEE'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''M'',''1971-07-18'',45370.00,500.00,2030.00),
(''000340'',''JASON'' ,''R'',''GOUNOT'' ,''E21'',''5698'',''1977-05-05'',''FIELDREP'',16,''M'',''1956-05-17'',43840.00,500.00,1907.00),
(''200010'',''DIAN'' ,''J'',''HEMMINGER'' ,''A00'',''3978'',''1995-01-01'',''SALESREP'',18,''F'',''1973-08-14'',46500.00,1000.00,4220.00),
(''200120'',''GREG'' ,'' '',''ORLANDO'' ,''A00'',''2167'',''2002-05-05'',''CLERK '',14,''M'',''1972-10-18'',39250.00,600.00,2340.00),
(''200140'',''KIM'' ,''N'',''NATZ'' ,''C01'',''1793'',''2006-12-15'',''ANALYST '',18,''F'',''1976-01-19'',68420.00,600.00,2274.00),
(''200170'',''KIYOSHI'' ,'' '',''YAMAMOTO'' ,''D11'',''2890'',''2005-09-15'',''DESIGNER'',16,''M'',''1981-01-05'',64680.00,500.00,1974.00),
(''200220'',''REBA'' ,''K'',''JOHN'' ,''D11'',''0672'',''2005-08-29'',''DESIGNER'',18,''F'',''1978-03-19'',69840.00,600.00,2387.00),
(''200240'',''ROBERT'' ,''M'',''MONTEVERDE'',''D21'',''3780'',''2004-12-05'',''CLERK '',17,''M'',''1984-03-31'',37760.00,600.00,2301.00),
(''200280'',''EILEEN'' ,''R'',''SCHWARTZ'' ,''E11'',''8997'',''1997-03-24'',''OPERATOR'',17,''F'',''1966-03-28'',46250.00,500.00,2100.00),
(''200310'',''MICHELLE'' ,''F'',''SPRINGER'' ,''E11'',''3332'',''1994-09-12'',''OPERATOR'',12,''F'',''1961-04-21'',35900.00,300.00,1272.00),
(''200330'',''HELENA'' ,'' '',''WONG'' ,''E21'',''2103'',''2006-02-23'',''FIELDREP'',14,''F'',''1971-07-18'',35370.00,500.00,2030.00),
(''200340'',''ROY'' ,''R'',''ALONZO'' ,''E21'',''5698'',''1997-07-05'',''FIELDREP'',16,''M'',''1956-05-17'',31840.00,500.00,1907.00)');
END IF;
END"""
%sql -d -q {create_employee}
if (quiet == False): success("Sample tables [EMPLOYEE, DEPARTMENT] created.")
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Check Option

This function will return the original string with the option removed, and a flag of True or False indicating whether the option was found.

```
args, flag = checkOption(option_string, option, false_value, true_value)
```

Options are specified with a -x where x is the character that we are searching for. It may actually be more than one character long like -pb/-pi/etc... The false and true values are optional. By default these are the boolean values of T/F, but for some options it could be a character string like ';' versus '@' for delimiters.
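A usage sketch, assuming the checkOption() cell below has been run; the option strings are only examples:

```
args, quiet = checkOption("-q DROP TABLE TEST_TABLE", "-q")
print(args)    # DROP TABLE TEST_TABLE
print(quiet)   # True

args, delim = checkOption("SELECT 1@", "@", ";", "@")
print(args)    # SELECT 1
print(delim)   # @
```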
|
def checkOption(args_in, option, vFalse=False, vTrue=True):
args_out = args_in.strip()
found = vFalse
if (args_out != ""):
if (args_out.find(option) >= 0):
args_out = args_out.replace(option," ")
args_out = args_out.strip()
found = vTrue
return args_out, found
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Plot Data

This function will plot the data that is returned from the answer set. The type of plot is determined by the flag that was supplied on the %sql command: -pb/-bar for a bar chart, -pp/-pie for a pie chart, and -pl/-line for a line chart.

```
plotData(hdbi, sql)
```

The hdbi value is the ibm_db_dbi connection handle that pandas uses to run the SQL.
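For example (assuming the %sql magic is loaded, a connection exists, and the sample EMPLOYEE table has been created), a two-column result can be drawn as a bar chart with the -pb flag:

```
%sql -pb SELECT WORKDEPT, COUNT(*) AS EMPLOYEES FROM EMPLOYEE GROUP BY WORKDEPT
```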
|
def plotData(hdbi, sql):
try:
df = pandas.read_sql(sql,hdbi)
except Exception as err:
db2_error(False)
return
if df.empty:
errormsg("No results returned")
return
col_count = len(df.columns)
if flag(["-pb","-bar"]): # Plot 1 = bar chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='bar');
_ = plt.plot();
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
df.plot(kind='bar',x=xlabel,y=ylabel);
_ = plt.plot();
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot.bar();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pp","-pie"]): # Plot 2 = pie chart
if (col_count in (1,2)):
if (col_count == 1):
df.index = df.index + 1
yname = df.columns.values[0]
_ = df.plot(kind='pie',y=yname);
else:
xlabel = df.columns.values[0]
xname = df[xlabel].tolist()
yname = df.columns.values[1]
_ = df.plot(kind='pie',y=yname,labels=xname);
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
elif flag(["-pl","-line"]): # Plot 3 = line chart
if (col_count in (1,2,3)):
if (col_count == 1):
df.index = df.index + 1
_ = df.plot(kind='line');
elif (col_count == 2):
xlabel = df.columns.values[0]
ylabel = df.columns.values[1]
_ = df.plot(kind='line',x=xlabel,y=ylabel) ;
else:
values = df.columns.values[2]
columns = df.columns.values[0]
index = df.columns.values[1]
pivoted = pandas.pivot_table(df, values=values, columns=columns, index=index)
_ = pivoted.plot();
plt.show();
else:
errormsg("Can't determine what columns to plot")
return
else:
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Find a Procedure

This routine will check to see if a procedure exists with the SCHEMA/NAME (or just NAME if no schema is supplied) and returns the number of answer sets returned. Possible values are 0, 1 (or greater) or None. If None is returned then we can't find the procedure anywhere.
|
def findProc(procname):
global _hdbc, _hdbi, _connected, _runtime
# Split the procedure name into schema.procname if appropriate
upper_procname = procname.upper()
schema, proc = split_string(upper_procname,".") # Expect schema.procname
if (proc == None):
proc = schema
# Call ibm_db.procedures to see if the procedure does exist
schema = "%"
try:
stmt = ibm_db.procedures(_hdbc, None, schema, proc)
if (stmt == False): # Error executing the code
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
result = ibm_db.fetch_tuple(stmt)
resultsets = result[5]
if (resultsets >= 1): resultsets = 1
return resultsets
except Exception as err:
errormsg("Procedure " + procname + " not found in the system catalog.")
return None
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Parse Call Arguments

This code will parse a SQL call of the form name(parm1,...) and return the name and the parameters in the call.
|
def parseCallArgs(macro):
quoteChar = ""
inQuote = False
inParm = False
ignore = False
name = ""
parms = []
parm = ''
sqlin = macro.replace("\n","")
sqlin = sqlin.lstrip()
for ch in sqlin:
if (inParm == False):
# We hit a blank in the name, so ignore everything after the procedure name until a ( is found
if (ch == " "):
ignore = True
elif (ch == "("): # Now we have parameters to send to the stored procedure
inParm = True
else:
if (ignore == False): name = name + ch # The name of the procedure (and no blanks)
else:
if (inQuote == True):
if (ch == quoteChar):
inQuote = False
else:
parm = parm + ch
elif (ch in ("\"","\'","[")): # Do we have a quote
if (ch == "["):
quoteChar = "]"
else:
quoteChar = ch
inQuote = True
elif (ch == ")"):
if (parm != ""):
parms.append(parm)
parm = ""
break
elif (ch == ","):
if (parm != ""):
parms.append(parm)
else:
parms.append("null")
parm = ""
else:
parm = parm + ch
if (inParm == True):
if (parm != ""):
parms.append(parm)
return(name,parms)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Get Columns

Given a statement handle, determine the column names and their data types.
|
def getColumns(stmt):
columns = []
types = []
colcount = 0
try:
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
while (colname != False):
columns.append(colname)
types.append(coltype)
colcount += 1
colname = ibm_db.field_name(stmt,colcount)
coltype = ibm_db.field_type(stmt,colcount)
return columns,types
except Exception as err:
db2_error(False)
return None
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Call a Procedure

The CALL statement is used for execution of a stored procedure. The format of the CALL statement is:

```
CALL PROC_NAME(x,y,z,...)
```

Procedures allow for the return of answer sets (cursors) as well as changing the contents of the parameters being passed to the procedure. In this implementation, the CALL function is limited to returning one answer set (or nothing). If you want to use more complex stored procedures then you will have to use the native Python libraries.
|
def parseCall(hdbc, inSQL, local_ns):
global _hdbc, _hdbi, _connected, _runtime, _environment
# Check to see if we are connected first
if (_connected == False): # Check if you are connected
db2_doConnect()
if _connected == False: return None
remainder = inSQL.strip()
procName, procArgs = parseCallArgs(remainder[5:]) # Assume that CALL ... is the format
resultsets = findProc(procName)
if (resultsets == None): return None
argvalues = []
if (len(procArgs) > 0): # We have arguments to consider
for arg in procArgs:
varname = arg
if (len(varname) > 0):
if (varname[0] == ":"):
checkvar = varname[1:]
varvalue, vartype = getContents(checkvar,True,local_ns)
if (varvalue == None):
errormsg("Variable " + checkvar + " is not defined.")
return None
argvalues.append(varvalue)
else:
if (varname.upper() == "NULL"):
argvalues.append(None)
else:
argvalues.append(varname)
else:
argvalues.append(None)
try:
if (len(procArgs) > 0):
argtuple = tuple(argvalues)
result = ibm_db.callproc(_hdbc,procName,argtuple)
stmt = result[0]
else:
result = ibm_db.callproc(_hdbc,procName)
stmt = result
if (resultsets != 0 and stmt != None):
columns, types = getColumns(stmt)
if (columns == None): return None
rows = []
rowlist = ibm_db.fetch_tuple(stmt)
while ( rowlist ) :
row = []
colcount = 0
for col in rowlist:
try:
if (types[colcount] in ["int","bigint"]):
row.append(int(col))
elif (types[colcount] in ["decimal","real"]):
row.append(float(col))
elif (types[colcount] in ["date","time","timestamp"]):
row.append(str(col))
else:
row.append(col)
except:
row.append(col)
colcount += 1
rows.append(row)
rowlist = ibm_db.fetch_tuple(stmt)
if flag(["-r","-array"]):
rows.insert(0,columns)
if len(procArgs) > 0:
allresults = []
allresults.append(rows)
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return rows
else:
df = pandas.DataFrame.from_records(rows,columns=columns)
if flag("-grid") or _settings['display'] == 'GRID':
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
pdisplay(df)
else:
return df
else:
if len(procArgs) > 0:
allresults = []
for x in result[1:]:
allresults.append(x)
return allresults # rows,returned_results
else:
return None
except Exception as err:
db2_error(False)
return None
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
### Parse Prepare/Execute

The PREPARE statement is used for repeated execution of a SQL statement. The PREPARE statement has the format:

```
stmt = PREPARE SELECT EMPNO FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY<?
```

The SQL statement that you want executed is placed after the PREPARE statement with the location of variables marked with ? (parameter) markers. The variable stmt contains the prepared statement that needs to be passed to the EXECUTE statement. The EXECUTE statement has the format:

```
EXECUTE :x USING z, y, s
```

The first variable (:x) is the name of the variable that you assigned the results of the prepare statement. The values after the USING clause are substituted into the prepare statement where the ? markers are found. If the values in the USING clause are variable names (z, y, s), a **link** is created to these variables as part of the execute statement. If you use the variable substitution form of variable name (:z, :y, :s), the **contents** of the variable are placed into the USING clause. Normally this would not make much of a difference except when you are dealing with binary strings or JSON strings where the quote characters may cause some problems when substituted into the statement.
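A hypothetical end-to-end sketch (assuming the %sql magic is loaded, a connection exists, and the sample EMPLOYEE table has been created; the Python variables dept and minsalary are invented for the example):

```
dept = "D11"
minsalary = 50000

stmt = %sql PREPARE SELECT EMPNO, LASTNAME FROM EMPLOYEE WHERE WORKDEPT=? AND SALARY>?
%sql EXECUTE :stmt USING dept, minsalary
```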
|
def parsePExec(hdbc, inSQL):
import ibm_db
global _stmt, _stmtID, _stmtSQL, sqlcode
cParms = inSQL.split()
parmCount = len(cParms)
if (parmCount == 0): return(None) # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "PREPARE"): # Prepare the following SQL
uSQL = inSQL.upper()
found = uSQL.find("PREPARE")
sql = inSQL[found+7:].strip()
try:
pattern = "\?\*[0-9]+"
findparm = re.search(pattern,sql)
while findparm != None:
found = findparm.group(0)
count = int(found[2:])
markers = ('?,' * count)[:-1]
sql = sql.replace(found,markers)
findparm = re.search(pattern,sql)
stmt = ibm_db.prepare(hdbc,sql) # Check error code here
if (stmt == False):
db2_error(False)
return(False)
stmttext = str(stmt).strip()
stmtID = stmttext[33:48].strip()
if (stmtID in _stmtID) == False:
_stmt.append(stmt) # Prepare and return STMT to caller
_stmtID.append(stmtID)
else:
stmtIX = _stmtID.index(stmtID)
_stmt[stmtIX] = stmt
return(stmtID)
except Exception as err:
print(err)
db2_error(False)
return(False)
if (keyword == "EXECUTE"): # Execute the prepare statement
if (parmCount < 2): return(False) # No stmtID available
stmtID = cParms[1].strip()
if (stmtID in _stmtID) == False:
errormsg("Prepared statement not found or invalid.")
return(False)
stmtIX = _stmtID.index(stmtID)
stmt = _stmt[stmtIX]
try:
if (parmCount == 2): # Only the statement handle available
result = ibm_db.execute(stmt) # Run it
elif (parmCount == 3): # Not quite enough arguments
errormsg("Missing or invalid USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
else:
using = cParms[2].upper()
if (using != "USING"): # Bad syntax again
errormsg("Missing USING clause on EXECUTE statement.")
sqlcode = -99999
return(False)
uSQL = inSQL.upper()
found = uSQL.find("USING")
parmString = inSQL[found+5:].strip()
parmset = splitargs(parmString)
if (len(parmset) == 0):
errormsg("Missing parameters after the USING clause.")
sqlcode = -99999
return(False)
parms = []
parm_count = 0
CONSTANT = 0
VARIABLE = 1
const = [0]
const_cnt = 0
for v in parmset:
parm_count = parm_count + 1
if (v[1] == True or v[2] == True): # v[1] true if string, v[2] true if num
parm_type = CONSTANT
const_cnt = const_cnt + 1
if (v[2] == True):
if (isinstance(v[0],int) == True): # Integer value
sql_type = ibm_db.SQL_INTEGER
elif (isinstance(v[0],float) == True): # Float value
sql_type = ibm_db.SQL_DOUBLE
else:
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
const.append(v[0])
else:
parm_type = VARIABLE
# See if the variable has a type associated with it varname@type
varset = v[0].split("@")
parm_name = varset[0]
parm_datatype = "char"
# Does the variable exist?
if (parm_name not in globals()):
errormsg("SQL Execute parameter " + parm_name + " not found")
sqlcode = -99999
return(False)
if (len(varset) > 1): # Type provided
parm_datatype = varset[1]
if (parm_datatype == "dec" or parm_datatype == "decimal"):
sql_type = ibm_db.SQL_DOUBLE
elif (parm_datatype == "bin" or parm_datatype == "binary"):
sql_type = ibm_db.SQL_BINARY
elif (parm_datatype == "int" or parm_datatype == "integer"):
sql_type = ibm_db.SQL_INTEGER
else:
sql_type = ibm_db.SQL_CHAR
try:
if (parm_type == VARIABLE):
result = ibm_db.bind_param(stmt, parm_count, globals()[parm_name], ibm_db.SQL_PARAM_INPUT, sql_type)
else:
result = ibm_db.bind_param(stmt, parm_count, const[const_cnt], ibm_db.SQL_PARAM_INPUT, sql_type)
except:
result = False
if (result == False):
errormsg("SQL Bind on variable " + parm_name + " failed.")
sqlcode = -99999
return(False)
result = ibm_db.execute(stmt) # ,tuple(parms))
if (result == False):
errormsg("SQL Execute failed.")
return(False)
if (ibm_db.num_fields(stmt) == 0): return(True) # Command successfully completed
return(fetchResults(stmt))
except Exception as err:
db2_error(False)
return(False)
return(False)
return(False)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Fetch Result SetThis code will take the stmt handle and then produce a result set of rows as either an array (`-r`,`-array`) or as an array of JSON records (`-json`).
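For instance, assuming a two-column answer set (the column names and values here are invented for illustration), the two shapes would look roughly like this:
```
# -r / -array : a list of rows, with the column names as the first row
[['EMPNO', 'SALARY'], ['000010', 152750.0], ['000020', 94250.0]]

# -json : a list of dictionaries, one per row, keyed by lower-cased column names
[{'empno': '000010', 'salary': 152750.0}, {'empno': '000020', 'salary': 94250.0}]
```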
|
def fetchResults(stmt):
global sqlcode
rows = []
columns, types = getColumns(stmt)
# By default we assume that the data will be an array
is_array = True
# Check what type of data we want returned - array or json
if (flag(["-r","-array"]) == False):
# See if we want it in JSON format, if not it remains as an array
if (flag("-json") == True):
is_array = False
# Set column names to lowercase for JSON records
if (is_array == False):
columns = [col.lower() for col in columns] # Convert to lowercase for ease of access
# First row of an array has the column names in it
if (is_array == True):
rows.append(columns)
result = ibm_db.fetch_tuple(stmt)
rowcount = 0
while (result):
rowcount += 1
if (is_array == True):
row = []
else:
row = {}
colcount = 0
for col in result:
try:
if (types[colcount] in ["int","bigint"]):
if (is_array == True):
row.append(int(col))
else:
row[columns[colcount]] = int(col)
elif (types[colcount] in ["decimal","real"]):
if (is_array == True):
row.append(float(col))
else:
row[columns[colcount]] = float(col)
elif (types[colcount] in ["date","time","timestamp"]):
if (is_array == True):
row.append(str(col))
else:
row[columns[colcount]] = str(col)
else:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
except:
if (is_array == True):
row.append(col)
else:
row[columns[colcount]] = col
colcount += 1
rows.append(row)
result = ibm_db.fetch_tuple(stmt)
if (rowcount == 0):
sqlcode = 100
else:
sqlcode = 0
return rows
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Parse CommitThere are three possible COMMIT verbs that can be used:- COMMIT [WORK] - Commit the work in progress - The WORK keyword is not checked for- ROLLBACK - Roll back the unit of work- AUTOCOMMIT ON/OFF - Turn autocommit on or offThe statement is passed to this routine and then checked.
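Based on the parsing code below, typical notebook usage might look like the following sketch; COMMIT HOLD commits the work but keeps any prepared statements allocated, while a plain COMMIT or ROLLBACK also clears them:
```
%sql COMMIT
%sql COMMIT HOLD
%sql ROLLBACK
%sql AUTOCOMMIT OFF
```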
|
def parseCommit(sql):
global _hdbc, _hdbi, _connected, _runtime, _stmt, _stmtID, _stmtSQL
if (_connected == False): return # Nothing to do if we are not connected
cParms = sql.split()
if (len(cParms) == 0): return # Nothing to do but this shouldn't happen
keyword = cParms[0].upper() # Upper case the keyword
if (keyword == "COMMIT"): # Commit the work that was done
try:
result = ibm_db.commit (_hdbc) # Commit the connection
if (len(cParms) > 1):
keyword = cParms[1].upper()
if (keyword == "HOLD"):
return
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "ROLLBACK"): # Rollback the work that was done
try:
result = ibm_db.rollback(_hdbc) # Rollback the connection
del _stmt[:]
del _stmtID[:]
except Exception as err:
db2_error(False)
return
if (keyword == "AUTOCOMMIT"): # Is autocommit on or off
if (len(cParms) > 1):
op = cParms[1].upper() # Need ON or OFF value
else:
return
try:
if (op == "OFF"):
ibm_db.autocommit(_hdbc, False)
elif (op == "ON"):
ibm_db.autocommit (_hdbc, True)
return
except Exception as err:
db2_error(False)
return
return
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Set FlagsThis code will take the input SQL block and update the global flag list. The global flag list is just a list of options that are set at the beginning of a code block. The absence of a flag means it is false. If it exists it is true.
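As a minimal sketch of the behavior implemented below (the SQL text is just an example), flags are only collected up to the first non-flag token; everything after that point is passed through untouched:
```
sql = setFlags("-e -grid SELECT * FROM EMPLOYEE")
# _flags is now ['-e', '-grid'] and sql is 'SELECT * FROM EMPLOYEE'
```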
|
def setFlags(inSQL):
global _flags
_flags = [] # Delete all of the current flag settings
pos = 0
end = len(inSQL)-1
inFlag = False
ignore = False
outSQL = ""
flag = ""
while (pos <= end):
ch = inSQL[pos]
if (ignore == True):
outSQL = outSQL + ch
else:
if (inFlag == True):
if (ch != " "):
flag = flag + ch
else:
_flags.append(flag)
inFlag = False
else:
if (ch == "-"):
flag = "-"
inFlag = True
elif (ch == ' '):
outSQL = outSQL + ch
else:
outSQL = outSQL + ch
ignore = True
pos += 1
if (inFlag == True):
_flags.append(flag)
return outSQL
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Check to see if flag ExistsThis function determines whether or not a flag exists in the global flag array. Absence of a value means it is false. The parameter can be a single value, or an array of values.
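For example, assuming the flags were set by setFlags as shown above:
```
flag("-grid")           # True only if -grid was specified in the cell
flag(["-r", "-array"])  # True if either synonym was specified
```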
|
def flag(inflag):
global _flags
if isinstance(inflag,list):
for x in inflag:
if (x in _flags):
return True
return False
else:
if (inflag in _flags):
return True
else:
return False
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Generate a list of SQL lines based on a delimiterNote that this function will make sure that quotes are properly maintained so that delimiters inside of quoted strings do not cause errors.
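A quick sketch of the expected behavior (the statements are placeholders): a delimiter inside a quoted string does not split the text.
```
splitSQL("SELECT ';' FROM T1; SELECT * FROM T2", ";")
# returns ["SELECT ';' FROM T1", " SELECT * FROM T2"]
```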
|
def splitSQL(inputString, delimiter):
pos = 0
arg = ""
results = []
quoteCH = ""
inSQL = inputString.strip()
if (len(inSQL) == 0): return(results) # Not much to do here - no args found
while pos < len(inSQL):
ch = inSQL[pos]
pos += 1
if (ch in ('"',"'")): # Is this a quote characters?
arg = arg + ch # Keep appending the characters to the current arg
if (ch == quoteCH): # Is this quote character we are in
quoteCH = ""
elif (quoteCH == ""): # Create the quote
quoteCH = ch
else:
None
elif (quoteCH != ""): # Still in a quote
arg = arg + ch
elif (ch == delimiter): # Is there a delimiter?
results.append(arg)
arg = ""
else:
arg = arg + ch
if (arg != ""):
results.append(arg)
return(results)
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Main %sql Magic DefinitionThe main %sql Magic logic is found in this section of code. This code will register the Magic command and allow Jupyter notebooks to interact with Db2 by using this extension.
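Once the extension is registered, a hypothetical cell might look like the following (the table and column names are placeholders). The line magic runs a single statement and returns a pandas DataFrame; the cell magic form runs the whole cell and accepts flags such as -t (time the statement) or -grid (display the result in a scrollable grid):
```
%sql SELECT * FROM EMPLOYEE

%%sql -grid
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE GROUP BY WORKDEPT
```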
|
@magics_class
class DB2(Magics):
@needs_local_scope
@line_cell_magic
def sql(self, line, cell=None, local_ns=None):
# Before we even get started, check to see if you have connected yet. Without a connection we
# can't do anything. You may have a connection request in the code, so if that is true, we run those,
# otherwise we connect immediately
# If your statement is not a connect, and you haven't connected, we need to do it for you
global _settings, _environment
global _hdbc, _hdbi, _connected, _runtime, sqlstate, sqlerror, sqlcode, sqlelapsed
# If you use %sql (line) we just run the SQL. If you use %%SQL the entire cell is run.
flag_cell = False
flag_output = False
sqlstate = "0"
sqlerror = ""
sqlcode = 0
sqlelapsed = 0
start_time = time.time()
end_time = time.time()
# Macros get expanded before anything else is done
SQL1 = setFlags(line.strip())
SQL1 = checkMacro(SQL1) # Update the SQL if any macros are in there
SQL2 = cell
if flag("-sampledata"): # Check if you only want sample data loaded
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
db2_create_sample(flag(["-q","-quiet"]))
return
if SQL1 == "?" or flag(["-h","-help"]): # Are you asking for help
sqlhelp()
return
if len(SQL1) == 0 and SQL2 == None: return # Nothing to do here
# Check for help
if SQL1.upper() == "? CONNECT": # Are you asking for help on CONNECT
connected_help()
return
sqlType,remainder = sqlParser(SQL1,local_ns) # What type of command do you have?
if (sqlType == "CONNECT"): # A connect request
parseConnect(SQL1,local_ns)
return
elif (sqlType == "USING"): # You want to use a dataframe to create a table?
createDF(_hdbc,SQL1,local_ns)
return
elif (sqlType == "DEFINE"): # Create a macro from the body
result = setMacro(SQL2,remainder)
return
elif (sqlType == "OPTION"):
setOptions(SQL1)
return
elif (sqlType == 'COMMIT' or sqlType == 'ROLLBACK' or sqlType == 'AUTOCOMMIT'):
parseCommit(remainder)
return
elif (sqlType == "PREPARE"):
pstmt = parsePExec(_hdbc, remainder)
return(pstmt)
elif (sqlType == "EXECUTE"):
result = parsePExec(_hdbc, remainder)
return(result)
elif (sqlType == "CALL"):
result = parseCall(_hdbc, remainder, local_ns)
return(result)
else:
pass
sql = SQL1
if (sql == ""): sql = SQL2
if (sql == ""): return # Nothing to do here
if (_connected == False):
if (db2_doConnect() == False):
errormsg('A CONNECT statement must be issued before issuing SQL statements.')
return
if _settings["maxrows"] == -1: # Set the return result size
pandas.reset_option('display.max_rows')
else:
pandas.options.display.max_rows = _settings["maxrows"]
runSQL = re.sub('.*?--.*$',"",sql,flags=re.M)
remainder = runSQL.replace("\n"," ")
if flag(["-d","-delim"]):
sqlLines = splitSQL(remainder,"@")
else:
sqlLines = splitSQL(remainder,";")
flag_cell = True
# For each line figure out if you run it as a command (db2) or select (sql)
for sqlin in sqlLines: # Run each command
sqlin = checkMacro(sqlin) # Update based on any macros
sqlType, sql = sqlParser(sqlin,local_ns) # Parse the SQL
if (sql.strip() == ""): continue
if flag(["-e","-echo"]): debug(sql,False)
if flag("-t"):
cnt = sqlTimer(_hdbc, _settings["runtime"], sql) # Given the sql and parameters, clock the time
if (cnt >= 0): print("Total iterations in %s second(s): %s" % (_settings["runtime"],cnt))
return(cnt)
elif flag(["-pb","-bar","-pp","-pie","-pl","-line"]): # We are plotting some results
plotData(_hdbi, sql) # Plot the data and return
return
else:
try: # See if we have an answer set
stmt = ibm_db.prepare(_hdbc,sql)
if (ibm_db.num_fields(stmt) == 0): # No, so we just execute the code
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
continue
rowcount = ibm_db.num_rows(stmt)
if (rowcount == 0 and flag(["-q","-quiet"]) == False):
errormsg("No rows found.")
continue # Continue running
elif flag(["-r","-array","-j","-json"]): # raw, json, format json
row_count = 0
resultSet = []
try:
result = ibm_db.execute(stmt) # Run it
if (result == False): # Error executing the code
db2_error(flag(["-q","-quiet"]))
return
if flag("-j"): # JSON single output
row_count = 0
json_results = []
while( ibm_db.fetch_row(stmt) ):
row_count = row_count + 1
jsonVal = ibm_db.result(stmt,0)
jsonDict = json.loads(jsonVal)
json_results.append(jsonDict)
flag_output = True
if (row_count == 0): sqlcode = 100
return(json_results)
else:
return(fetchResults(stmt))
except Exception as err:
db2_error(flag(["-q","-quiet"]))
return
else:
try:
df = pandas.read_sql(sql,_hdbi)
except Exception as err:
db2_error(False)
return
if (len(df) == 0):
sqlcode = 100
if (flag(["-q","-quiet"]) == False):
errormsg("No rows found")
continue
flag_output = True
if flag("-grid") or _settings['display'] == 'GRID': # Check to see if we can display the results
if (_environment['qgrid'] == False):
with pandas.option_context('display.max_rows', None, 'display.max_columns', None):
print(df.to_string())
else:
try:
pdisplay(qgrid.show_grid(df))
except:
errormsg("Grid cannot be used to display data with duplicate column names. Use option -a or %sql OPTION DISPLAY PANDAS instead.")
return
else:
if flag(["-a","-all"]) or _settings["maxrows"] == -1 : # All of the rows
pandas.options.display.max_rows = None
pandas.options.display.max_columns = None
return df # print(df.to_string())
else:
pandas.options.display.max_rows = _settings["maxrows"]
pandas.options.display.max_columns = None
return df # pdisplay(df) # print(df.to_string())
except:
db2_error(flag(["-q","-quiet"]))
continue # return
end_time = time.time()
sqlelapsed = end_time - start_time
if (flag_output == False and flag(["-q","-quiet"]) == False): print("Command completed.")
# Register the Magic extension in Jupyter
ip = get_ipython()
ip.register_magics(DB2)
load_settings()
success("Db2 Extensions Loaded.")
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Pre-defined MacrosThese macros are used to simulate the LIST TABLES and DESCRIBE commands that are available from within the Db2 command line.
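Hypothetical invocations of these macros (the schema and table names are placeholders):
```
%sql LIST TABLES
%sql LIST TABLES FOR ALL
%sql LIST TABLES FOR SCHEMA DB2INST1
%sql DESCRIBE TABLE EMPLOYEE
```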
|
%%sql define LIST
#
# The LIST macro is used to list all of the tables in the current schema or for all schemas
#
var syntax Syntax: LIST TABLES [FOR ALL | FOR SCHEMA name]
#
# Only LIST TABLES is supported by this macro
#
if {^1} <> 'TABLES'
exit {syntax}
endif
#
# This SQL is a temporary table that contains the description of the different table types
#
WITH TYPES(TYPE,DESCRIPTION) AS (
VALUES
('A','Alias'),
('G','Created temporary table'),
('H','Hierarchy table'),
('L','Detached table'),
('N','Nickname'),
('S','Materialized query table'),
('T','Table'),
('U','Typed table'),
('V','View'),
('W','Typed view')
)
SELECT TABNAME, TABSCHEMA, T.DESCRIPTION FROM SYSCAT.TABLES S, TYPES T
WHERE T.TYPE = S.TYPE
#
# Case 1: No arguments - LIST TABLES
#
if {argc} == 1
AND OWNER = CURRENT USER
ORDER BY TABNAME, TABSCHEMA
return
endif
#
# Case 2: Need 3 arguments - LIST TABLES FOR ALL
#
if {argc} == 3
if {^2}&{^3} == 'FOR&ALL'
ORDER BY TABNAME, TABSCHEMA
return
endif
exit {syntax}
endif
#
# Case 3: Need FOR SCHEMA something here
#
if {argc} == 4
if {^2}&{^3} == 'FOR&SCHEMA'
AND TABSCHEMA = '{^4}'
ORDER BY TABNAME, TABSCHEMA
return
else
exit {syntax}
endif
endif
#
# Nothing matched - Error
#
exit {syntax}
%%sql define describe
#
# The DESCRIBE command can either use the syntax DESCRIBE TABLE <name> or DESCRIBE SELECT ...
#
var syntax Syntax: DESCRIBE [TABLE name | SELECT statement]
#
# Check the number of arguments... There must be at least 2 items: DESCRIBE TABLE x or DESCRIBE SELECT x
#
if {argc} < 2
exit {syntax}
endif
CALL ADMIN_CMD('{*0}');
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Set the table formatting to left align a table in a cell. By default, tables are centered in a cell. Remove this cell if you don't want to change Jupyter notebook formatting for tables. In addition, we skip this code if you are running in a shell environment rather than a Jupyter notebook.
|
#%%html
#<style>
# table {margin-left: 0 !important; text-align: left;}
#</style>
|
_____no_output_____
|
Apache-2.0
|
db2.ipynb
|
Db2-DTE-POC/Db2-Openshift-11.5.4
|
Profiling TensorFlow Multi GPU Multi Node Training Job with Amazon SageMaker DebuggerThis notebook will walk you through creating a TensorFlow training job with the SageMaker Debugger profiling feature enabled. It will create a multi-GPU, multi-node training job using Horovod. (Optional) Install SageMaker and SMDebug Python SDKsTo use the new Debugger profiling features released in December 2020, ensure that you have the latest versions of the SageMaker and SMDebug SDKs installed. Use the following cell to update the libraries and restart the Jupyter kernel to apply the updates.
|
import sys
import IPython
install_needed = False # should only be True once
if install_needed:
print("installing deps and restarting kernel")
!{sys.executable} -m pip install -U sagemaker smdebug
IPython.Application.instance().kernel.do_shutdown(True)
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
1. Create a Training Job with Profiling EnabledYou will use the standard [SageMaker Estimator API for Tensorflow](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html#tensorflow-estimator) to create training jobs. To enable profiling, create a `ProfilerConfig` object and pass it to the `profiler_config` parameter of the `TensorFlow` estimator. Define parameters for distributed training This parameter tells SageMaker how to configure and run Horovod. If you want to use more than 4 GPUs per node, change the processes_per_host parameter accordingly.
|
distributions = {
"mpi": {
"enabled": True,
"processes_per_host": 4,
"custom_mpi_options": "-verbose -x HOROVOD_TIMELINE=./hvd_timeline.json -x NCCL_DEBUG=INFO -x OMPI_MCA_btl_vader_single_copy_mechanism=none",
}
}
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Configure rulesWe specify the following rules:- loss_not_decreasing: checks if the loss is decreasing and triggers if the loss has not decreased by a certain percentage in the last few iterations- LowGPUUtilization: checks if the GPU is under-utilized - ProfilerReport: runs the entire set of performance rules and creates a final output report with further insights and recommendations.
|
from sagemaker.debugger import Rule, ProfilerRule, rule_configs
rules = [
Rule.sagemaker(rule_configs.loss_not_decreasing()),
ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()),
ProfilerRule.sagemaker(rule_configs.ProfilerReport()),
]
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Specify a profiler configurationThe following configuration will capture system metrics every 500 milliseconds. The system metrics include utilization per CPU and GPU, memory utilization per CPU and GPU, as well as I/O and network.Debugger will capture detailed profiling information from step 5 to step 15. This information includes Horovod metrics, data loading, preprocessing, and operators running on CPU and GPU.
|
from sagemaker.debugger import ProfilerConfig, FrameworkProfile
profiler_config = ProfilerConfig(
system_monitor_interval_millis=500,
framework_profile_params=FrameworkProfile(
local_path="/opt/ml/output/profiler/", start_step=5, num_steps=10
),
)
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Get the image URIThe image that we will use depends on the region in which you are running this notebook.
|
import boto3
session = boto3.session.Session()
region = session.region_name
image_uri = f"763104351884.dkr.ecr.{region}.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04"
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Define estimatorTo enable profiling, you need to pass the Debugger profiling configuration (`profiler_config`), a list of Debugger rules (`rules`), and the image URI (`image_uri`) to the estimator. Debugger enables monitoring and profiling while the SageMaker estimator requests a training job.
|
import sagemaker
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
role=sagemaker.get_execution_role(),
image_uri=image_uri,
instance_count=2,
instance_type="ml.p3.8xlarge",
entry_point="tf-hvd-train.py",
source_dir="entry_point",
profiler_config=profiler_config,
distribution=distributions,
rules=rules,
)
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Start training jobThe following `estimator.fit()` with `wait=False` argument initiates the training job in the background. You can proceed to run the dashboard or analysis notebooks.
|
estimator.fit(wait=False)
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
2. Analyze Profiling DataCopy outputs of the following cell (`training_job_name` and `region`) to run the analysis notebooks `profiling_generic_dashboard.ipynb`, `analyze_performance_bottlenecks.ipynb`, and `profiling_interactive_analysis.ipynb`.
|
training_job_name = estimator.latest_training_job.name
print(f"Training jobname: {training_job_name}")
print(f"Region: {region}")
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
While the training is still in progress you can visualize the performance data in SageMaker Studio or in the notebook.Debugger provides utilities to plot system metrics in the form of timeline charts or heatmaps. Check out the notebook [profiling_interactive_analysis.ipynb](analysis_tools/profiling_interactive_analysis.ipynb) for more details. In the following code cell we plot the total CPU and GPU utilization as time-series charts. To visualize other metrics such as I/O, memory, and network, you simply need to extend the lists passed to `select_dimensions` and `select_events`. Install the SMDebug client library to use Debugger analysis tools
|
import pip
def import_or_install(package):
try:
__import__(package)
except ImportError:
pip.main(["install", package])
import_or_install("smdebug")
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Access the profiling data using the SMDebug `TrainingJob` utility class
|
from smdebug.profiler.analysis.notebook_utils.training_job import TrainingJob
tj = TrainingJob(training_job_name, region)
tj.wait_for_sys_profiling_data_to_be_available()
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Plot time line chartsThe following code shows how to use the SMDebug `TrainingJob` object, refresh the object if new event files are available, and plot time line charts of CPU and GPU usage.
|
from smdebug.profiler.analysis.notebook_utils.timeline_charts import TimelineCharts
system_metrics_reader = tj.get_systems_metrics_reader()
system_metrics_reader.refresh_event_file_list()
view_timeline_charts = TimelineCharts(
system_metrics_reader,
framework_metrics_reader=None,
select_dimensions=["CPU", "GPU"],
select_events=["total"],
)
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
3. Download Debugger Profiling Report The `ProfilerReport()` rule creates an HTML report `profiler-report.html` with a summary of the built-in rules and recommendations for next steps. You can find this report in your S3 bucket.
|
rule_output_path = estimator.output_path + estimator.latest_training_job.job_name + "/rule-output"
print(f"You will find the profiler report in {rule_output_path}")
|
_____no_output_____
|
Apache-2.0
|
sagemaker-debugger/tensorflow_profiling/tf-resnet-profiling-multi-gpu-multi-node.ipynb
|
Amirosimani/amazon-sagemaker-examples
|
Lab 2 - Linear and Polynomial Regression. One of the many problems that modern physics deals with is the search for a material that can be used to build a superconductor operating at room temperature. Besides theoretical methods, there is also a statistical approach, which involves analyzing a database of materials in order to find how the critical temperature depends on other physical characteristics. This is exactly what you will be doing.
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
|
_____no_output_____
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
The file **data.csv** contains the entire dataset.
|
data = pd.read_csv('data.csv')
data
|
_____no_output_____
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
In total we have 21 thousand rows and 169 columns, of which the first 167 are features; the **critical_temp** column contains the value that has to be predicted. The **material** column contains the chemical formula of the material and can be discarded. Let us preprocess the data and split it into training and test sets:
|
# X - last two columns cut.
# Y - pre last column.
x, y = data.values[:, :-2].astype(np.float32), data.values[:, -2:-1].astype(np.float32)
np.random.seed(1337)
is_train = np.random.uniform(size=(x.shape[0],)) < 0.95
x_train, y_train = x[is_train], y[is_train]
x_test, y_test = x[~is_train], y[~is_train]
print(f'Train samples: {len(x_train)}')
print(f'Test samples: {len(x_test)}')
|
Train samples: 20210
Test samples: 1053
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
Implement the methods marked `TODO` in the PolynomialRegression class. The `preprocess` method must perform the following transformation:$$\begin{array}{l}X=\begin{bmatrix}x_{i,j}\end{bmatrix}_{m\times n}\\preprocess( X) =\begin{bmatrix}1 & x_{1,1} & \dotsc & x_{1,n} & x^{2}_{1,1} & \dotsc & x^{2}_{1,n} & \dotsc & x^{p}_{1,1} & \dotsc & x^{p}_{1,n}\\1 & x_{2,1} & \dotsc & x_{2,n} & x^{2}_{2,1} & \dotsc & x^{2}_{2,n} & \dotsc & x^{p}_{2,1} & \dotsc & x^{p}_{2,n}\\\vdots & & & & & & & & & & \\1 & x_{m,1} & \dotsc & x_{m,n} & x^{2}_{m,1} & \dotsc & x^{2}_{m,n} & \dotsc & x^{p}_{m,1} & \dotsc & x^{p}_{m,n}\end{bmatrix}_{m,N}\end{array}$$where p is the degree of the polynomial (`self.poly_deg` in the code). In other words, preprocess adds polynomial features to $X$. The `J` method must compute the regression cost function:$$J( \theta ) =MSE( Y,\ h_{\theta }( X)) +\alpha _{1}\sum ^{N}_{i=1}\sum ^{k}_{j=1} |\hat{\theta }_{i,j} |+\alpha _{2}\sum ^{N}_{i=1}\sum ^{k}_{j=1}\hat{\theta }^{2}_{i,j}$$The `grad` method must compute the gradient $\frac{\partial J}{\partial \theta }$:$${\displaystyle \frac{\partial J}{\partial \theta }} =-{\displaystyle \frac{2}{m}} X^{T} (Y-X\theta )+\begin{bmatrix}0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1\end{bmatrix} \times ( \alpha _{1} sign(\theta )+2\alpha _{2} \theta )$$The `moments` method must return row vectors $\mu,\sigma$ containing the mean and standard deviation of each column. Remember that the column of ones must not be normalized, so set its mean and standard deviation to 0 and 1 respectively. You may use the [np.mean](https://numpy.org/doc/stable/reference/generated/numpy.mean.html) and [np.std](https://numpy.org/doc/stable/reference/generated/numpy.std.html) functions. The `normalize` method must normalize $X$ using the statistics $\mu,\sigma$ returned by the **moments** method. To avoid division by zero, you may add a small value to $\sigma$, for example $10^{-8}$. The `get_batch` method must return matrices $X_b, Y_b$ built from $b$ randomly chosen elements of the sample ($b$ is `self.batch_size` in the code). The `fit` method optimizes $J(\theta)$. For better convergence, implement the **Momentum** optimization algorithm:$$\begin{array}{l}v_t = \gamma v_{t-1} + \alpha\nabla J(\theta_{t-1})\\\theta_t = \theta_{t-1} - v_t\end{array}$$where $\gamma$ should be set to $0.9$ (you can experiment with other values) and $v_1=[0]_{N,k}$.
|
class PolynomialRegression:
def __init__(
self,
alpha1,
alpha2,
poly_deg,
learning_rate,
batch_size,
train_steps
):
self.alpha1 = alpha1
self.alpha2 = alpha2
self.poly_deg = poly_deg
self.learning_rate = learning_rate
self.batch_size = batch_size
self.train_steps = train_steps
def preprocess(self, x):
# Create first one column.
ones = [np.ones(shape=(x.shape[0], 1))]
# Polynomic scale.
powers = [x ** i for i in range(1, self.poly_deg + 1)]
# Unite into one.
result = np.concatenate(ones + powers, axis=1)
return result
def normalize(self, x):
return (x - self.mu) / (self.sigma + 1e-8)
def moments(self, x):
# Arithmetic mean (a + b + ... + z) / n.
mu = np.mean(x, axis=0)
# Standard deviation.
sigma = np.std(x, axis=0)
mu[0] = 0
sigma[0] = 1
return mu, sigma
def J(self, x, y, theta):
# The first element of theta (for the ones column) is excluded from the regularization terms.
circumcized_theta = theta[1::]
# Mean squared error.
mse = ((y - np.dot(x, theta)) ** 2).mean(axis=None)
# L1 penalty: sum of absolute values of theta, weighted by alpha1.
l1 = self.alpha1 * np.sum(np.abs(circumcized_theta), axis=None)
# L2 penalty: sum of squared values of theta, weighted by alpha2.
l2 = self.alpha2 * np.sum(circumcized_theta ** 2, axis=None)
return mse + l1 + l2
def grad(self, x, y, theta):
# Create ones matrix.
diag = np.eye(x.shape[1], x.shape[1])
# Init first element as 0.
diag[0][0] = 0
# Left assign.
l1l2 = self.alpha1 * np.sign(theta) + 2 * self.alpha2 * theta
return (-2/x.shape[0]) * x.T @ (y - (x @ theta)) + (diag @ l1l2)
def get_batch(self, x, y):
# Return random values.
i = np.random.default_rng().choice(x.shape[0], self.batch_size, replace=False)
return x[i], y[i]
def fit(self, x, y):
# Transform the source data into polynomial features.
x = self.preprocess(x)
(m, N), (_, k) = x.shape, y.shape
# Calculate mu and standart deviation.
self.mu, self.sigma = self.moments(x)
# Normalize using average values.
x = self.normalize(x)
try:
assert np.allclose(x[:, 1:].mean(axis=0), 0, atol=1e-3)
assert np.all((np.abs(x[:, 1:].std(axis=0)) < 1e-2) | (np.abs(x[:, 1:].std(axis=0) - 1) < 1e-2))
except AssertionError as e:
print('Something wrong with normalization')
raise e
# Random x & y.
x_batch, y_batch = self.get_batch(x, y)
try:
assert x_batch.shape[0] == self.batch_size
assert y_batch.shape[0] == self.batch_size
except AssertionError as e:
print('Something wrong with get_batch')
raise e
theta = np.zeros(shape=(N, k))
v_1 = np.zeros(shape=(N, k))
v_t = v_1
for step in range(self.train_steps):
x_batch, y_batch = self.get_batch(x, y)
theta_grad = self.grad(x_batch, y_batch, theta)
v_t = 0.9 * v_t + self.learning_rate * theta_grad
theta = theta - v_t
self.theta = theta
return self
def predict(self, x):
x = self.preprocess(x)
x = self.normalize(x)
return x @ self.theta
def score(self, x, y):
y_pred = self.predict(x)
return np.abs(y - y_pred).mean()
reg = PolynomialRegression(0, 0, 1, 1e-3, 1024, 1000).fit(x_train, y_train)
print(f'Test MAE: {reg.score(x_test, y_test)}')
|
Test MAE: 12.593122309572655
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
The MAE obtained on the test set should be approximately equal to $12.5$. Search for the optimal regularization parameters $\alpha_1,\alpha_2$ separately (that is, set one parameter to zero and search for the other, then vice versa) and for the highest degree of the polynomial regression (`poly_deg`). Note that the regularization parameter should be searched on a logarithmic scale. For example, the list of candidates can be specified as `10 ** np.linspace(-5, -1, 5)`, which gives you the values $10^{-5},10^{-4},10^{-3},10^{-2},10^{-1}$. If necessary, you can also tune the optimal `batch_size`, `learning_rate`, and `training_steps`. Present the results as plots following the example below. Extra credit will be given for searching for the optimal parameters $\alpha_1,\alpha_2$ jointly. In that case, present the results using [plt.matshow](https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.pyplot.matshow.html).
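For the extra-credit joint search, a rough sketch (using the PolynomialRegression class defined above; the candidate grids and hyperparameters are only an example) might look like this:
```
a1 = 10 ** np.linspace(-5, -1, 5)
a2 = 10 ** np.linspace(-5, -1, 5)
scores = np.zeros((len(a1), len(a2)))
for i, a1i in enumerate(a1):
    for j, a2j in enumerate(a2):
        # Train a degree-1 model for every (alpha1, alpha2) pair and record the test MAE
        scores[i, j] = PolynomialRegression(a1i, a2j, 1, 1e-3, 1024, 1000).fit(x_train, y_train).score(x_test, y_test)
plt.matshow(scores)
plt.colorbar()
plt.xlabel('alpha2 index')
plt.ylabel('alpha1 index')
plt.show()
```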
|
a1 = 10 ** np.linspace(-9, -1, 9)
a2 = 10 ** np.linspace(-9, -1, 9)
fig, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(20, 10))
fig.suptitle('Poly deg. = 5')
ax1.set_xlabel('Alpha 1')
ax1.set_ylabel('Score')
ax1.set_xscale('log')
ax1.plot([a1i for a1i in a1], [PolynomialRegression(a1i, 0, 1, 1e-3, 1024, 1000).fit(x_train, y_train).score(x_test, y_test) for a1i in a1])
ax2.set_xlabel('Alpha 2')
ax2.set_ylabel('Score')
ax2.set_xscale('log')
ax2.plot([a2i for a2i in a2], [PolynomialRegression(0, a2i, 1, 1e-3, 1024, 1000).fit(x_train, y_train).score(x_test, y_test) for a2i in a2])
plt.show()
|
_____no_output_____
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
Visualize how the predicted critical temperature depends on the true one for the best model:
|
reg = PolynomialRegression(1e-5, 1e-5, 5, 1e-3, 1024, 1000).fit(x_train, y_train)
y_test_pred = reg.predict(x_test)
print(f'Test MAE: {reg.score(x_test, y_test)}')
plt.figure(figsize=(10, 10))
plt.scatter(y_test[:, 0], y_test_pred[:, 0], marker='.', c='r')
plt.xlabel('True Y')
plt.ylabel('Predicted Y')
plt.show()
|
Test MAE: 11.26531342526687
|
MIT
|
lab2/Lab 2.ipynb
|
KHYehor/MachineLearning
|
Define paths for the model and data of interest
|
model_type = "profile"
# Shared paths/constants
reference_fasta = "/users/amtseng/genomes/hg38.fasta"
chrom_sizes = "/users/amtseng/genomes/hg38.canon.chrom.sizes"
data_base_path = "/users/amtseng/att_priors/data/processed/"
model_base_path = "/users/amtseng/att_priors/models/trained_models/%s/" % model_type
chrom_set = ["chr1"]
input_length = 1346 if model_type == "profile" else 1000
profile_length = 1000
# SPI1
condition_name = "SPI1"
files_spec_path = os.path.join(data_base_path, "ENCODE_TFChIP/%s/config/SPI1/SPI1_training_paths.json" % model_type)
num_tasks = 4
num_strands = 2
task_index = None
controls = "matched"
if model_type == "profile":
model_class = profile_models.ProfilePredictorWithMatchedControls
else:
model_class = binary_models.BinaryPredictor
noprior_model_base_path = os.path.join(model_base_path, "SPI1/")
prior_model_base_path = os.path.join(model_base_path, "SPI1_prior/")
peak_retention = "all"
# GATA2
condition_name = "GATA2"
files_spec_path = os.path.join(data_base_path, "ENCODE_TFChIP/%s/config/GATA2/GATA2_training_paths.json" % model_type)
num_tasks = 3
num_strands = 2
task_index = None
controls = "matched"
if model_type == "profile":
model_class = profile_models.ProfilePredictorWithMatchedControls
else:
model_class = binary_models.BinaryPredictor
noprior_model_base_path = os.path.join(model_base_path, "GATA2/")
prior_model_base_path = os.path.join(model_base_path, "GATA2_prior/")
peak_retention = "all"
# K562
condition_name = "K562"
files_spec_path = os.path.join(data_base_path, "ENCODE_DNase/%s/config/K562/K562_training_paths.json" % model_type)
num_tasks = 1
num_strands = 1
task_index = None
controls = "shared"
if model_type == "profile":
model_class = profile_models.ProfilePredictorWithSharedControls
else:
model_class = binary_models.BinaryPredictor
noprior_model_base_path = os.path.join(model_base_path, "K562/")
prior_model_base_path = os.path.join(model_base_path, "K562_prior/")
peak_retention = "all"
# BPNet
condition_name = "BPNet"
reference_fasta = "/users/amtseng/genomes/mm10.fasta"
chrom_sizes = "/users/amtseng/genomes/mm10.canon.chrom.sizes"
files_spec_path = os.path.join(data_base_path, "BPNet_ChIPseq/%s/config/BPNet_training_paths.json" % model_type)
num_tasks = 3
num_strands = 2
task_index = None
controls = "shared"
if model_type == "profile":
model_class = profile_models.ProfilePredictorWithSharedControls
else:
model_class = binary_models.BinaryPredictor
noprior_model_base_path = os.path.join(model_base_path, "BPNet/")
prior_model_base_path = os.path.join(model_base_path, "BPNet_prior/")
peak_retention = "all"
|
_____no_output_____
|
MIT
|
notebooks/consistency_of_importance_over_different_initializations.ipynb
|
atseng95/fourier_attribution_priors
|
Get all runs/epochs with random initializations
|
def import_metrics_json(model_base_path, run_num):
"""
Looks in {model_base_path}/{run_num}/metrics.json and returns the contents as a
Python dictionary. Returns None if the path does not exist.
"""
path = os.path.join(model_base_path, str(run_num), "metrics.json")
if not os.path.exists(path):
return None
with open(path, "r") as f:
return json.load(f)
def get_model_paths(
model_base_path, metric_name="val_prof_corr_losses",
reduce_func=(lambda values: np.mean(values)), compare_func=(lambda x, y: x < y),
print_found_values=True
):
"""
Looks in `model_base_path` and for each run, returns the full path to
the best epoch. By default, the best epoch in a run is determined by
the lowest validation profile loss.
"""
# Get the metrics, ignoring empty or nonexistent metrics.json files
metrics = {run_num : import_metrics_json(model_base_path, run_num) for run_num in os.listdir(model_base_path)}
metrics = {key : val for key, val in metrics.items() if val} # Remove empties
model_paths, metric_vals = [], []
for run_num in sorted(metrics.keys(), key=lambda x: int(x)):
try:
# Find the best epoch within that run
best_epoch_in_run, best_val_in_run = None, None
for i, subarr in enumerate(metrics[run_num][metric_name]["values"]):
val = reduce_func(subarr)
if best_val_in_run is None or compare_func(val, best_val_in_run):
best_epoch_in_run, best_val_in_run = i + 1, val
model_path = os.path.join(model_base_path, run_num, "model_ckpt_epoch_%d.pt" % best_epoch_in_run)
model_paths.append(model_path)
metric_vals.append(best_val_in_run)
if print_found_values:
print("\tRun %s, epoch %d: %6.2f" % (run_num, best_epoch_in_run, best_val_in_run))
except Exception:
print("Warning: Was not able to compute values for run %s" % run_num)
continue
return model_paths, metric_vals
metric_name = "val_prof_corr_losses" if model_type == "profile" else "val_corr_losses"
noprior_model_paths, noprior_metric_vals = get_model_paths(noprior_model_base_path, metric_name=metric_name)
prior_model_paths, prior_metric_vals = get_model_paths(prior_model_base_path, metric_name=metric_name)
torch.set_grad_enabled(True)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
def restore_model(model_path):
model = model_util.restore_model(model_class, model_path)
model.eval()
model = model.to(device)
return model
|
_____no_output_____
|
MIT
|
notebooks/consistency_of_importance_over_different_initializations.ipynb
|
atseng95/fourier_attribution_priors
|
Data preparationCreate an input data loader that maps coordinates or bin indices to the data needed for the model
|
if model_type == "profile":
input_func = data_loading.get_profile_input_func(
files_spec_path, input_length, profile_length, reference_fasta
)
pos_examples = data_loading.get_positive_profile_coords(
files_spec_path, chrom_set=chrom_set
)
else:
input_func = data_loading.get_binary_input_func(
files_spec_path, input_length, reference_fasta
)
pos_examples = data_loading.get_positive_binary_bins(
files_spec_path, chrom_set=chrom_set
)
|
_____no_output_____
|
MIT
|
notebooks/consistency_of_importance_over_different_initializations.ipynb
|
atseng95/fourier_attribution_priors
|
Compute importances
|
# Pick a sample of 100 random coordinates/bins
num_samples = 100
rng = np.random.RandomState(20200318)
sample = pos_examples[rng.choice(len(pos_examples), size=num_samples, replace=False)]
# For profile models, add a random jitter to avoid center-bias
if model_type == "profile":
jitters = np.random.randint(-128, 128 + 1, size=len(sample))
sample[:, 1] = sample[:, 1] + jitters
sample[:, 2] = sample[:, 2] + jitters
def compute_gradients(model_paths, sample):
"""
Given a list of paths to M models and a list of N coordinates or bins, computes
the input gradients over all models, returning an M x N x I x 4 array of
gradient values and an N x I x 4 array of one-hot encoded sequence.
"""
num_models, num_samples = len(model_paths), len(sample)
all_input_grads = np.empty((num_models, num_samples, input_length, 4))
all_one_hot_seqs = np.empty((num_samples, input_length, 4))
for i in tqdm.notebook.trange(num_models):
model = restore_model(model_paths[i])
if model_type == "profile":
results = compute_predictions.get_profile_model_predictions(
model, sample, num_tasks, input_func, controls=controls,
return_losses=False, return_gradients=True, show_progress=False
)
else:
results = compute_predictions.get_binary_model_predictions(
model, sample, input_func,
return_losses=False, return_gradients=True, show_progress=False
)
all_input_grads[i] = results["input_grads"]
if i == 0:
all_one_hot_seqs = results["input_seqs"]
return all_input_grads, all_one_hot_seqs
def compute_shap_scores(model_paths, sample, batch_size=128):
"""
Given a list of paths to M models and a list of N coordinates or bins, computes
the SHAP scores over all models, returning an M x N x I x 4 array of
SHAP scores and an N x I x 4 array of one-hot encoded sequence.
"""
num_models, num_samples = len(model_paths), len(sample)
num_batches = int(np.ceil(num_samples / batch_size))
all_shap_scores = np.empty((num_models, num_samples, input_length, 4))
all_one_hot_seqs = np.empty((num_samples, input_length, 4))
for i in tqdm.notebook.trange(num_models):
model = restore_model(model_paths[i])
if model_type == "profile":
shap_explainer = compute_shap.create_profile_explainer(
model, input_length, profile_length, num_tasks, num_strands, controls,
task_index=task_index
)
else:
shap_explainer = compute_shap.create_binary_explainer(
model, input_length, task_index=task_index
)
for j in range(num_batches):
batch_slice = slice(j * batch_size, (j + 1) * batch_size)
batch = sample[batch_slice]
if model_type == "profile":
input_seqs, profiles = input_func(batch)
shap_scores = shap_explainer(
input_seqs, cont_profs=profiles[:, num_tasks:], hide_shap_output=True
)
else:
input_seqs, _, _ = input_func(batch)
shap_scores = shap_explainer(
input_seqs, hide_shap_output=True
)
all_shap_scores[i, batch_slice] = shap_scores
if i == 0:
all_one_hot_seqs[batch_slice] = input_seqs
return all_shap_scores, all_one_hot_seqs
# Compute the importance scores and 1-hot seqs
imp_type = ("DeepSHAP scores", "input gradients")[0]
imp_func = compute_shap_scores if imp_type == "DeepSHAP scores" else compute_gradients
noprior_scores, _ = imp_func(noprior_model_paths, sample)
prior_scores, one_hot_seqs = imp_func(prior_model_paths, sample)
|
_____no_output_____
|
MIT
|
notebooks/consistency_of_importance_over_different_initializations.ipynb
|
atseng95/fourier_attribution_priors
|
Compute similarity
|
def cont_jaccard(seq_1, seq_2):
"""
Takes two gradient sequences (I x 4 arrays) and computes a similarity between
them, using a continuous Jaccard metric.
"""
# L1-normalize
norm_1 = np.sum(np.abs(seq_1), axis=1, keepdims=True)
norm_2 = np.sum(np.abs(seq_2), axis=1, keepdims=True)
norm_1[norm_1 == 0] = 1
norm_2[norm_2 == 0] = 1
seq_1 = seq_1 / norm_1
seq_2 = seq_2 / norm_2
ab_1, ab_2 = np.abs(seq_1), np.abs(seq_2)
inter = np.sum(np.minimum(ab_1, ab_2) * np.sign(seq_1) * np.sign(seq_2), axis=1)
union = np.sum(np.maximum(ab_1, ab_2), axis=1)
zero_mask = union == 0
inter[zero_mask] = 0
union[zero_mask] = 1
return np.sum(inter / union)
def cosine_sim(seq_1, seq_2):
"""
Takes two gradient sequences (I x 4 arrays) and computes a similarity between
them, using a cosine similarity.
"""
seq_1, seq_2 = np.ravel(seq_1), np.ravel(seq_2)
dot = np.sum(seq_1 * seq_2)
mag_1, mag_2 = np.sqrt(np.sum(seq_1 * seq_1)), np.sqrt(np.sum(seq_2 * seq_2))
return dot / (mag_1 * mag_2) if mag_1 * mag_2 else 0
def compute_similarity_matrix(imp_scores, sim_func=cosine_sim):
"""
Given the M x N x I x 4 importance scores returned by `compute_gradients`
or `compute_shap_scores`, computes an N x M x M similarity matrix of
similarity across models (i.e. each coordinate gets a similarity matrix
across different models). By default uses cosine similarity.
"""
num_models, num_coords = imp_scores.shape[0], imp_scores.shape[1]
sim_mats = np.empty((num_coords, num_models, num_models))
for i in tqdm.notebook.trange(num_coords):
for j in range(num_models):
sim_mats[i, j, j] = 0
for k in range(j):
sim_score = sim_func(imp_scores[j][i], imp_scores[k][i])
sim_mats[i, j, k] = sim_score
sim_mats[i, k, j] = sim_score
return sim_mats
sim_type = ("Cosine", "Continuous Jaccard")[1]
sim_func = cosine_sim if sim_type == "Cosine" else cont_jaccard
noprior_sim_matrix = compute_similarity_matrix(noprior_scores, sim_func=sim_func)
prior_sim_matrix = compute_similarity_matrix(prior_scores, sim_func=sim_func)
# Plot some examples of poor consistency, particularly ones that showed an improvement
num_to_show = 100
center_view_length = 200
plot_zoom = True
midpoint = input_length // 2
start = midpoint - (center_view_length // 2)
end = start + center_view_length
center_slice = slice(550, 800)
noprior_sim_matrix_copy = noprior_sim_matrix.copy()
for i in range(len(noprior_sim_matrix_copy)):
noprior_sim_matrix_copy[i][np.diag_indices(noprior_sim_matrix.shape[1])] = np.inf # Put infinity in diagonal
diffs = np.max(prior_sim_matrix, axis=(1, 2)) - np.min(noprior_sim_matrix_copy, axis=(1, 2))
best_example_inds = np.flip(np.argsort(diffs))[:num_to_show]
best_example_inds = [7] #, 38]
for sample_index in best_example_inds:
noprior_model_ind_1, noprior_model_ind_2 = np.unravel_index(np.argmin(np.ravel(noprior_sim_matrix_copy[sample_index])), noprior_sim_matrix[sample_index].shape)
prior_model_ind_1, prior_model_ind_2 = np.unravel_index(np.argmax(np.ravel(prior_sim_matrix[sample_index])), prior_sim_matrix[sample_index].shape)
noprior_model_ind_1, noprior_model_ind_2 = 5, 17
prior_model_ind_1, prior_model_ind_2 = 13, 17
print("Sample index: %d" % sample_index)
if model_type == "binary":
bin_index = sample[sample_index]
coord = input_func(np.array([bin_index]))[2][0]
print("Coordinate: %s (bin %d)" % (str(coord), bin_index))
else:
coord = sample[sample_index]
print("Coordinate: %s" % str(coord))
print("Model indices without prior: %d vs %d" % (noprior_model_ind_1, noprior_model_ind_2))
plt.figure(figsize=(20, 2))
plt.plot(np.sum(noprior_scores[noprior_model_ind_1, sample_index] * one_hot_seqs[sample_index], axis=1), color="coral")
plt.show()
if plot_zoom:
viz_sequence.plot_weights(noprior_scores[noprior_model_ind_1, sample_index, center_slice], subticks_frequency=1000)
viz_sequence.plot_weights(noprior_scores[noprior_model_ind_1, sample_index, center_slice] * one_hot_seqs[sample_index, center_slice], subticks_frequency=1000)
plt.figure(figsize=(20, 2))
plt.plot(np.sum(noprior_scores[noprior_model_ind_2, sample_index] * one_hot_seqs[sample_index], axis=1), color="coral")
plt.show()
if plot_zoom:
viz_sequence.plot_weights(noprior_scores[noprior_model_ind_2, sample_index, center_slice], subticks_frequency=1000)
viz_sequence.plot_weights(noprior_scores[noprior_model_ind_2, sample_index, center_slice] * one_hot_seqs[sample_index, center_slice], subticks_frequency=1000)
print("Model indices with prior: %d vs %d" % (prior_model_ind_1, prior_model_ind_2))
plt.figure(figsize=(20, 2))
plt.plot(np.sum(prior_scores[prior_model_ind_1, sample_index] * one_hot_seqs[sample_index], axis=1), color="slateblue")
plt.show()
if plot_zoom:
viz_sequence.plot_weights(prior_scores[prior_model_ind_1, sample_index, center_slice], subticks_frequency=1000)
viz_sequence.plot_weights(prior_scores[prior_model_ind_1, sample_index, center_slice] * one_hot_seqs[sample_index, center_slice], subticks_frequency=1000)
plt.figure(figsize=(20, 2))
plt.plot(np.sum(prior_scores[prior_model_ind_2, sample_index] * one_hot_seqs[sample_index], axis=1), color="slateblue")
plt.show()
if plot_zoom:
viz_sequence.plot_weights(prior_scores[prior_model_ind_2, sample_index, center_slice], subticks_frequency=1000)
viz_sequence.plot_weights(prior_scores[prior_model_ind_2, sample_index, center_slice] * one_hot_seqs[sample_index, center_slice], subticks_frequency=1000)
sample_index = 7
for i in range(30):
print(i)
plt.figure(figsize=(20, 2))
plt.plot(np.sum(noprior_scores[i, sample_index] * one_hot_seqs[sample_index], axis=1), color="coral")
plt.show()
for i in range(30):
print(i)
plt.figure(figsize=(20, 2))
plt.plot(np.sum(prior_scores[i, sample_index] * one_hot_seqs[sample_index], axis=1), color="coral")
plt.show()
noprior_avg_sims, prior_avg_sims = [], []
bin_num = 30
for i in range(num_samples):
noprior_avg_sims.append(np.mean(noprior_sim_matrix[i][np.tril_indices(len(noprior_model_paths), k=-1)]))
prior_avg_sims.append(np.mean(prior_sim_matrix[i][np.tril_indices(len(prior_model_paths), k=-1)]))
noprior_avg_sims, prior_avg_sims = np.array(noprior_avg_sims), np.array(prior_avg_sims)
all_vals = np.concatenate([noprior_avg_sims, prior_avg_sims])
bins = np.linspace(np.min(all_vals), np.max(all_vals), bin_num)
fig, ax = plt.subplots(figsize=(16, 8))
ax.hist(noprior_avg_sims, bins=bins, color="coral", label="No prior", alpha=0.7)
ax.hist(prior_avg_sims, bins=bins, color="slateblue", label="With Fourier prior", alpha=0.7)
plt.legend()
plt.title(
("Mean pairwise similarities of %s between different random initializations" % imp_type) +
("\n%s %s models" % (condition_name, model_type)) +
"\nComputed over %d/%d models without/with Fourier prior on %d randomly drawn test peaks" % (len(noprior_model_paths), len(prior_model_paths), num_samples)
)
plt.xlabel("%s similarity" % sim_type)
print("Average similarity without priors: %f" % np.nanmean(noprior_avg_sims))
print("Average similarity with priors: %f" % np.nanmean(prior_avg_sims))
print("Standard error without priors: %f" % scipy.stats.sem(noprior_avg_sims, nan_policy="omit"))
print("Standard error with priors: %f" % scipy.stats.sem(prior_avg_sims, nan_policy="omit"))
w, p = scipy.stats.wilcoxon(noprior_avg_sims, prior_avg_sims, alternative="less")
print("One-sided Wilcoxon test: w = %f, p = %f" % (w, p))
avg_sim_diffs = prior_avg_sims - noprior_avg_sims
plt.figure(figsize=(16, 8))
plt.hist(avg_sim_diffs, bins=30, color="mediumorchid")
plt.title(
("Paired difference of %s similarity between different random initializations" % imp_type) +
("\n%s %s models" % (condition_name, model_type)) +
"\nComputed over %d/%d models without/with Fourier prior on %d randomly drawn test peaks" % (len(noprior_model_paths), len(prior_model_paths), num_samples)
)
plt.xlabel("Average similarity difference: with Fourier prior - no prior")
def get_bias(sim_matrix):
num_examples, num_models, _ = sim_matrix.shape
bias_vals = []
for i in range(num_models):
avg = np.sum(sim_matrix[:, i]) / (num_examples * (num_models - 1))
bias_vals.append(avg)
print("%d: %f" % (i + 1, avg))
return bias_vals
print("Model-specific bias without priors")
noprior_bias_vals = get_bias(noprior_sim_matrix)
print("Model-specific bias with priors")
prior_bias_vals = get_bias(prior_sim_matrix)
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
fig.suptitle("Model-specific average Jaccard similarity vs model performance")
ax[0].scatter(noprior_bias_vals, np.array(noprior_metric_vals))
ax[0].set_title("No priors")
ax[1].scatter(prior_bias_vals, np.array(prior_metric_vals))
ax[1].set_title("With priors")
plt.grid(False)
fig.text(0.5, 0.04, "Average Jaccard similarity with other models over all samples", ha="center", va="center")
fig.text(0.06, 0.5, "Model profile validation loss", ha="center", va="center", rotation="vertical")
# Compute some simple bounds on the expected consistency using
# the "no-prior" scores
rng = np.random.RandomState(1234)
def shuf_none(track):
# Do nothing
return track
def shuf_bases(track):
# Shuffle the importances across each base dimension separately,
# but keep positions intact
inds = np.random.rand(*track.shape).argsort(axis=1) # Each row is 0,1,2,3 in random order
return np.take_along_axis(track, inds, axis=1)
def shuf_pos(track):
# Shuffle the importances across the positions, but keep the base
# importances at each position intact
shuf = np.copy(track)
rng.shuffle(shuf)
return shuf
def shuf_all(track):
# Shuffle the importances across positions and bases
return np.ravel(track)[rng.permutation(track.size)].reshape(track.shape)
for shuf_type, shuf_func in [
("no", shuf_none), ("base", shuf_bases), ("position", shuf_pos), ("all", shuf_all)
]:
sims = []
for i in tqdm.notebook.trange(noprior_scores.shape[0]):
for j in range(noprior_scores.shape[1]):
track = noprior_scores[i, j]
track_shuf = shuf_func(track)
sims.append(sim_func(track, track_shuf))
fig, ax = plt.subplots()
ax.hist(sims, bins=30)
ax.set_title("%s similarity with %s shuffing" % (sim_type, shuf_type))
plt.show()
print("Mean: %f" % np.mean(sims))
print("Standard deviation: %f" % np.std(sims))
|
_____no_output_____
|
MIT
|
notebooks/consistency_of_importance_over_different_initializations.ipynb
|
atseng95/fourier_attribution_priors
|
Table of Contents1 init2 Modeling2.1 Base model2.2 Grid search3 Production model4 Random forest5 work init
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pwd
cd ../
cd data
cd raw
pdf_train = pd.read_csv("train.tsv", sep="\t")
pdf_train.T
pdf_test = pd.read_csv("test.tsv", sep="\t")
pdf_test.T
pdf_test.describe()
for str_col in ["C%s"%i for i in range(1,7)]:
print("# "+str_col)
v_cnt = pdf_test[str_col].value_counts(dropna=None)
print(v_cnt)
print()
def get_C_dummie(df):
dct_dummie = {}
for tgt_col in lst_srt_tgt_cols:
psr_tgt_col = df[tgt_col]
dct_dummie[tgt_col] = {}
for tgt_val in dct_C_vals[tgt_col]:
dummie = psr_tgt_col.apply(lambda x: 1 if x == tgt_val else 0)
dct_dummie[tgt_col][tgt_col + "_%s"%tgt_val] = dummie
_df = df.copy()
for tgt_col in dct_dummie.keys():
dummies = pd.DataFrame(dct_dummie[tgt_col])
_df = pd.concat([_df, dummies], axis=1)
else:
lst_str_drop_tgt_str = ["C%s"%i for i in range(1,7)]
# lst_str_drop_tgt_int = ["I%s"%i for i in range(11,15)]
_df = _df.drop(lst_str_drop_tgt_str + ["id"],1)
# _df = _df.drop(lst_str_drop_tgt_str + lst_str_drop_tgt_int + ["id"],1)
return _df
pdf_clns_train = get_C_dummie(pdf_train)
pdf_clns_test = get_C_dummie(pdf_test)
pdf_clns_train.T
pdf_clns_test.T
|
_____no_output_____
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
Modeling
|
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterGrid, KFold, train_test_split
import lightgbm as lgb
|
_____no_output_____
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
Base model
|
def cross_valid_lgb(X,y,param,n_splits,random_state=1234, num_boost_round=1000,early_stopping_rounds=5):
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split
import lightgbm as lgb
kf = StratifiedKFold(n_splits=n_splits, shuffle=True,random_state=random_state)
lst_auc = []
for train_index, test_index in kf.split(X,y):
learn_X, learn_y, test_X, test_y = X.iloc[train_index], y.iloc[train_index], X.iloc[test_index], y.iloc[test_index]
train_X, valid_X, train_y, valid_y = train_test_split(learn_X, learn_y,test_size=0.3, random_state=random_state)
lgb_train = lgb.Dataset(train_X, train_y)
lgb_valid = lgb.Dataset(valid_X, valid_y)
gbm = lgb.train(
param,
lgb_train,
num_boost_round=num_boost_round,
valid_sets=lgb_valid,
early_stopping_rounds=early_stopping_rounds,
verbose_eval = False
)
pred = gbm.predict(test_X)
auc = roc_auc_score(y_true=test_y, y_score=pred)
lst_auc.append(auc)
auc_mean = np.mean(lst_auc)
return auc_mean
def cross_lgb(X,y,param,n_splits,random_state=1234, num_boost_round=1000,early_stopping_rounds=5):
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split
import lightgbm as lgb
kf = StratifiedKFold(n_splits=n_splits, shuffle=True,random_state=random_state)
lst_model = []
for train_index, test_index in kf.split(X,y):
learn_X, learn_y, test_X, test_y = X.iloc[train_index], y.iloc[train_index], X.iloc[test_index], y.iloc[test_index]
# train_X, valid_X, train_y, valid_y = train_test_split(learn_X, learn_y,test_size=0.3, random_state=random_state)
lgb_train = lgb.Dataset(learn_X, learn_y)
lgb_valid = lgb.Dataset(test_X, test_y)
gbm = lgb.train(
param,
lgb_train,
num_boost_round=num_boost_round,
valid_sets=lgb_valid,
early_stopping_rounds=early_stopping_rounds,
verbose_eval = False
)
lst_model.append(gbm)
return lst_model
|
_____no_output_____
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
Grid search
|
grid = {
'boosting_type': ['goss'],
'objective': ['binary'],
'metric': ['auc'],
# 'num_leaves':[31 + i for i in range(-10, 11)],
# 'min_data_in_leaf':[20 + i for i in range(-10, 11)],
'num_leaves':[31],
'min_data_in_leaf':[20],
'max_depth':[-1]
}
X = pdf_clns_train.drop("click",1)
y = pdf_clns_train.click
score_grid = []
best_param = {}
best_auc = 0
for param in list(ParameterGrid(grid)):
auc_mean = cross_valid_lgb(
X,
y,
param=param,n_splits=3,
random_state=1234,
num_boost_round=1000,
early_stopping_rounds=5
)
score_grid.append([param,auc_mean])
if auc_mean >= best_auc:
best_auc = auc_mean
best_param = param
print(best_auc,best_param)
|
0.8083715191616564 {'boosting_type': 'goss', 'max_depth': -1, 'metric': 'auc', 'min_data_in_leaf': 20, 'num_leaves': 31, 'objective': 'binary'}
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
Production model
|
lst_model = cross_lgb(X,y,param=best_param,n_splits=5,random_state=1234, num_boost_round=1000,early_stopping_rounds=5)
lst_model
lst_pred = []
for mdl in lst_model:
pred = mdl.predict(pdf_clns_test)
lst_pred.append(pred)
nparr_preds = np.array(lst_pred)
mean_pred = nparr_preds.mean(0)
mean_pred
pdf_submit = pd.DataFrame({
"id":pdf_test.id,
"score":mean_pred
})
pdf_submit.T
pdf_submit.to_csv("submit_v02_lgb5.csv", index=False, header=False)
|
_____no_output_____
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
Random forest
|
from sklearn.ensemble import RandomForestClassifier
X = X.fillna(0)
clf = RandomForestClassifier()
clf.fit(X,y)
pdf_clns_test = pdf_clns_test.fillna(0)
pred = clf.predict_proba(pdf_clns_test)
pred
pdf_submit_rf = pd.DataFrame({
"id":pdf_test.id,
"score":pred[:,1]
})
pdf_submit_rf.T
pdf_submit_rf.to_csv("submit_rf.csv", index=False, header=False)
|
_____no_output_____
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
work
|
params = {
'boosting_type': 'goss',
'objective': 'binary',
'metric': 'auc',
'learning_rate': 0.1,
'num_leaves': 23,
'min_data_in_leaf': 1,
}
gbm = lgb.train(
params,
lds_train,
num_boost_round=1000,
valid_sets=lds_test,
early_stopping_rounds=5,
)
dct_C_vals
pdf_train.C1
params = {
'boosting_type': 'goss',
'objective': 'binary',
'metric': 'auc',
'verbose': 0,
'learning_rate': 0.1,
'num_leaves': 23,
'min_data_in_leaf': 1
}
kf = KFold(n_splits=3, shuffle=True,random_state=1234)
# score_grid = []
lst_auc = []
for train_index, test_index in kf.split(pdf_train):
pdf_train_kf, pdf_test_kf = pdf_clns_train.iloc[train_index], pdf_clns_train.iloc[test_index]
train, valid = train_test_split(pdf_train_kf,test_size=0.3, random_state=1234)
lgb_train = lgb.Dataset(train.drop("click",1), train["click"])
lgb_valid = lgb.Dataset(valid.drop("click",1), valid["click"])
# lgb_test = lgb.Dataset(pdf_test_kf.drop("click",1), pdf_test_kf["click"])
pdf_test_X = pdf_test_kf.drop("click",1)
pdf_test_y = pdf_test_kf["click"]
gbm = lgb.train(
params,
lgb_train,
num_boost_round=10,
valid_sets=lgb_valid,
early_stopping_rounds=5,
)
pred = gbm.predict(pdf_test_X)
auc = roc_auc_score(y_true=pdf_test_y, y_score=pred)
lst_auc.append(auc)
auc_mean = np.mean(lst_auc)
|
[1] valid_0's auc: 0.720192
Training until validation scores don't improve for 5 rounds.
[2] valid_0's auc: 0.726811
[3] valid_0's auc: 0.731667
[4] valid_0's auc: 0.737238
[5] valid_0's auc: 0.739277
[6] valid_0's auc: 0.740883
[7] valid_0's auc: 0.742216
[8] valid_0's auc: 0.743704
[9] valid_0's auc: 0.744488
[10] valid_0's auc: 0.745019
Did not meet early stopping. Best iteration is:
[10] valid_0's auc: 0.745019
[1] valid_0's auc: 0.718608
Training until validation scores don't improve for 5 rounds.
[2] valid_0's auc: 0.728747
[3] valid_0's auc: 0.730248
[4] valid_0's auc: 0.733139
[5] valid_0's auc: 0.735163
[6] valid_0's auc: 0.736818
[7] valid_0's auc: 0.739096
[8] valid_0's auc: 0.740416
[9] valid_0's auc: 0.739959
[10] valid_0's auc: 0.741348
Did not meet early stopping. Best iteration is:
[10] valid_0's auc: 0.741348
[1] valid_0's auc: 0.721096
Training until validation scores don't improve for 5 rounds.
[2] valid_0's auc: 0.729421
[3] valid_0's auc: 0.736833
[4] valid_0's auc: 0.738976
[5] valid_0's auc: 0.739032
[6] valid_0's auc: 0.739811
[7] valid_0's auc: 0.741738
[8] valid_0's auc: 0.745557
[9] valid_0's auc: 0.745884
[10] valid_0's auc: 0.748055
Did not meet early stopping. Best iteration is:
[10] valid_0's auc: 0.748055
|
MIT
|
notebooks/lightGBM_v01-Copy1.ipynb
|
mnm-signate/analysys_titanic
|
import tensorflow as tf
print(tf.__version__)
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
|
The Fashion MNIST data is available directly in the tf.keras datasets API. You load it like this:
|
mnist = tf.keras.datasets.fashion_mnist
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Calling load_data on this object will give you two sets of two lists: the training and testing values for the graphics that contain the clothing items and their labels.
|
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
What do these values look like? Let's print a training image and a training label to see... Experiment with different indices in the array. For example, also take a look at index 42... that's a different boot than the one at index 0.
|
import numpy as np
np.set_printoptions(linewidth=200)
import matplotlib.pyplot as plt
plt.imshow(training_images[0])
print(training_labels[0])
print(training_images[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
You'll notice that all of the values are between 0 and 255. If we are training a neural network, for various reasons it's easier if we treat all values as between 0 and 1, a process called '**normalizing**'... and fortunately in Python it's easy to normalize a list like this without looping. You do it like this:
|
training_images = training_images / 255.0
test_images = test_images / 255.0
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Now you might be wondering why there are 2 sets...training and testing -- remember we spoke about this in the intro? The idea is to have 1 set of data for training, and then another set of data...that the model hasn't yet seen...to see how good it would be at classifying values. After all, when you're done, you're going to want to try it out with data that it hadn't previously seen! Let's now design the model. There's quite a few new concepts here, but don't worry, you'll get the hang of them.
|
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
**Sequential**: That defines a SEQUENCE of layers in the neural network.

**Flatten**: Remember earlier where our images were a square, when you printed them out? Flatten just takes that square and turns it into a 1-dimensional set.

**Dense**: Adds a layer of neurons.

Each layer of neurons needs an **activation function** to tell them what to do. There are lots of options, but just use these for now.

**Relu** effectively means "If X>0 return X, else return 0" -- so what it does is pass only values 0 or greater to the next layer in the network.

**Softmax** takes a set of values, and effectively picks the biggest one, so, for example, if the output of the last layer looks like [0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05], it saves you from fishing through it looking for the biggest value, and turns it into [0,0,0,0,1,0,0,0,0] -- the goal is to save a lot of coding!

The next thing to do, now the model is defined, is to actually build it. You do this by compiling it with an optimizer and loss function as before -- and then you train it by calling **model.fit**, asking it to fit your training data to your training labels -- i.e. have it figure out the relationship between the training data and its actual labels, so in future if you have data that looks like the training data, then it can make a prediction for what its label should be.
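To make relu and softmax concrete, here is a small illustrative NumPy sketch (not part of the lesson code, just a demo on a made-up vector):

```
import numpy as np

x = np.array([-2.0, 0.5, 3.0])

# Relu: negative values become 0, everything else passes through unchanged.
relu = np.maximum(x, 0)                 # -> [0.  0.5 3. ]

# Softmax: exponentiate and normalize so the values sum to 1;
# the largest input ends up with by far the largest probability.
softmax = np.exp(x) / np.exp(x).sum()   # -> roughly [0.006 0.075 0.919]
print(relu, softmax, softmax.sum())
```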
|
model.compile(optimizer = tf.optimizers.Adam(),
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Once it's done training -- you should see an accuracy value at the end of the final epoch. It might look something like 0.9098. This tells you that your neural network is about 91% accurate in classifying the training data: it figured out a pattern match between the images and the labels that worked 91% of the time. Not great, but not bad considering it was only trained for 5 epochs and done quite quickly.

But how would it work with unseen data? That's why we have the test images. We can call model.evaluate, pass in the two sets (test images and test labels), and it will report back the loss for the test data. Let's give it a try:
|
model.evaluate(test_images, test_labels)
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
For me, that returned an accuracy of about 0.8838, which means it was about 88% accurate. As expected, it probably would not do as well with *unseen* data as it did with data it was trained on! As you go through this course, you'll look at ways to improve this.

To explore further, try the exercises below:

Exercise 1: For this first exercise, run the code below: it creates a set of classifications for each of the test images, and then prints the first entry in the classifications. The output, after you run it, is a list of numbers. Why do you think this is, and what do those numbers represent?
|
classifications = model.predict(test_images)
print(classifications[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Hint: try running print(test_labels[0]) -- and you'll get a 9. Does that help you understand why this list looks the way it does?
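As a small follow-up sketch (assuming `classifications` and `test_labels` from the cells above are still in scope), the index of the largest probability is the predicted class, and for this image it should match the label 9:

```
import numpy as np

predicted_class = np.argmax(classifications[0])   # index of the highest probability
print(predicted_class, test_labels[0])            # both should print 9 here
print(classifications[0][predicted_class])        # the model's confidence in that class
```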
|
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 2: Let's now look at the layers in your model. Experiment with different values for the dense layer with 512 neurons. What different results do you get for loss, training time etc? Why do you think that's the case?
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 3: What would happen if you removed the Flatten() layer? Why do you think that's the case?

You get an error about the shape of the data. It may seem vague right now, but it reinforces the rule of thumb that the first layer in your network should be the same shape as your data. Right now our data is 28x28 images, and 28 layers of 28 neurons would be infeasible, so it makes more sense to 'flatten' that 28x28 into a 784x1. Instead of writing all the code to handle that ourselves, we add the Flatten() layer at the beginning, and when the arrays are loaded into the model later, they'll automatically be flattened for us.
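As a minimal illustration (assuming `training_images` has already been loaded as above), flattening a 28x28 image is simply reshaping it into a 784-element vector, which is what the Flatten() layer does for every image in the batch:

```
print(training_images[0].shape)          # (28, 28)
flat = training_images[0].reshape(784)   # same pixel values, now a 1-dimensional vector
print(flat.shape)                        # (784,)
```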
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([#tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 4: Consider the final (output) layers. Why are there 10 of them? What would happen if you had a different number than 10? For example, try training the network with 5.

You get an error as soon as it finds an unexpected value. Another rule of thumb -- the number of neurons in the last layer should match the number of classes you are classifying for. In this case it's the digits 0-9, so there are 10 of them, hence you should have 10 neurons in your final layer.
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dense(5, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 5: Consider the effects of additional layers in the network. What will happen if you add another layer between the one with 512 and the final layer with 10?

Ans: There isn't a significant impact -- because this is relatively simple data. For far more complex data (including color images to be classified as flowers that you'll see in the next lesson), extra layers are often necessary.
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 6: Consider the impact of training for more or fewer epochs. Why do you think that would be the case?

Try 15 epochs -- you'll probably get a model with a much better loss than the one with 5.

Try 30 epochs -- you might see the loss value stops decreasing, and sometimes increases. This is a side effect of something called 'overfitting' which you can learn about [somewhere], and it's something you need to keep an eye out for when training neural networks. There's no point in wasting your time training if you aren't improving your loss, right! :)
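One way to keep an eye on overfitting, sketched below as an optional variation (not part of the original exercise), is to hold out part of the training data for validation and stop automatically when the validation loss stops improving:

```
# Assumes a compiled `model` and the normalized data from the cells above.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)

history = model.fit(training_images, training_labels,
                    epochs=30,
                    validation_split=0.1,     # hold out 10% of the training data
                    callbacks=[early_stop])
```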
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=30)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[34])
print(test_labels[34])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 7: Before you trained, you normalized the data, going from values that were 0-255 to values that were 0-1. What would be the impact of removing that? Here's the complete code to give it a try. Why do you think you get different results?
|
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images/255.0
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Exercise 8: Earlier when you trained for extra epochs you had an issue where your loss might change. It might have taken a bit of time for you to wait for the training to do that, and you might have thought 'wouldn't it be nice if I could stop the training when I reach a desired value?' -- i.e. 95% accuracy might be enough for you, and if you reach that after 3 epochs, why sit around waiting for it to finish a lot more epochs....So how would you fix that? Like any other program...you have callbacks! Let's see them in action...
|
import tensorflow as tf
print(tf.__version__)
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('loss')<0.4):
print("\nReached 60% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images/255.0
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
|
_____no_output_____
|
MIT
|
Course_1_Part_4_Lesson_2_Notebook.ipynb
|
poojan-dalal/fashion-MNIST
|
Configure through code. Restart the kernel here.
|
import logging
import sys
root_logger = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(levelname)s: %(message)s")
handler.setFormatter(formatter)
root_logger.addHandler(handler)
root_logger.setLevel("INFO")
logging.info("Hello logging world")
|
INFO: Hello logging world
|
MIT
|
01_Workshop-master/Chapter06/Exercise94/Exercise94.ipynb
|
Anna-MarieTomm/Learn_Python_with_Anna-Marie
|
Configure with dictConfig. Restart the kernel here.
|
import logging
from logging.config import dictConfig
dictConfig({
"version": 1,
"formatters": {
"short":{
"format": "%(levelname)s: %(message)s",
}
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "short",
"stream": "ext://sys.stdout",
"level": "DEBUG",
}
},
"loggers": {
"": {
"handlers": ["console"],
"level": "INFO"
}
}
})
logging.info("Hello logging world")
|
INFO: Hello logging world
|
MIT
|
01_Workshop-master/Chapter06/Exercise94/Exercise94.ipynb
|
Anna-MarieTomm/Learn_Python_with_Anna-Marie
|
Configure with basicConfig. Restart the kernel here.
|
import sys
import logging
logging.basicConfig(
level="INFO",
format="%(levelname)s: %(message)s",
stream=sys.stdout
)
logging.info("Hello there!")
|
INFO: Hello there!
|
MIT
|
01_Workshop-master/Chapter06/Exercise94/Exercise94.ipynb
|
Anna-MarieTomm/Learn_Python_with_Anna-Marie
|
Configure with fileConfig. Restart the kernel here.
|
import logging
from logging.config import fileConfig
fileConfig("logging-config.ini")
logging.info("Hello there!")
|
INFO: Hello there!
|
MIT
|
01_Workshop-master/Chapter06/Exercise94/Exercise94.ipynb
|
Anna-MarieTomm/Learn_Python_with_Anna-Marie
|
[Advent of Code 2020: Day 10](https://adventofcode.com/2020/day/10)

--- Day 10: Adapter Array ---

Patched into the aircraft's data port, you discover weather forecasts of a massive tropical storm. Before you can figure out whether it will impact your vacation plans, however, your device suddenly turns off!

Its battery is dead.

You'll need to plug it in. There's only one problem: the charging outlet near your seat produces the wrong number of **jolts**. Always prepared, you make a list of all of the joltage adapters in your bag.

Each of your joltage adapters is rated for a specific **output joltage** (your puzzle input). Any given adapter can take an input `1`, `2`, or `3` jolts **lower** than its rating and still produce its rated output joltage.

In addition, your device has a built-in joltage adapter rated for **`3` jolts higher** than the highest-rated adapter in your bag. (If your adapter list were `3`, `9`, and `6`, your device's built-in adapter would be rated for `12` jolts.)

Treat the charging outlet near your seat as having an effective joltage rating of `0`.

Since you have some time to kill, you might as well test all of your adapters. Wouldn't want to get to your resort and realize you can't even charge your device!

If you *use every adapter in your bag* at once, what is the distribution of joltage differences between the charging outlet, the adapters, and your device?

For example, suppose that in your bag, you have adapters with the following joltage ratings:

```
16
10
15
5
1
11
7
19
6
12
4
```

With these adapters, your device's built-in joltage adapter would be rated for `19 + 3 = `**`22`** jolts, 3 higher than the highest-rated adapter.

Because adapters can only connect to a source 1-3 jolts lower than its rating, in order to use every adapter, you'd need to choose them like this:

* The charging outlet has an effective rating of `0` jolts, so the only adapters that could connect to it directly would need to have a joltage rating of `1`, `2`, or `3` jolts. Of these, the only one you have is an adapter rated `1` jolt (difference of **`1`**).
* From your `1`-jolt rated adapter, the only choice is your `4`-jolt rated adapter (difference of **`3`**).
* From the `4`-jolt rated adapter, the adapters rated `5`, `6`, or `7` are valid choices. However, in order to not skip any adapters, you have to pick the adapter rated `5` jolts (difference of **`1`**).
* Similarly, the next choices would need to be the adapter rated `6` and then the adapter rated `7` (with difference of **`1`** and **`1`**).
* The only adapter that works with the `7`-jolt rated adapter is the one rated `10` jolts (difference of **`3`**).
* From `10`, the choices are `11` or `12`; choose `11` (difference of **`1`**) and then `12` (difference of **`1`**).
* After `12`, the only valid adapter has a rating of `15` (difference of **`3`**), then `16` (difference of **`1`**), then `19` (difference of **`3`**).
* Finally, your device's built-in adapter is always 3 higher than the highest adapter, so its rating is `22` jolts (always a difference of **`3`**).

In this example, when using every adapter, there are **`7`** differences of 1 jolt and **`5`** differences of 3 jolts.

Here is a larger example:

```
28
33
18
42
31
14
46
20
48
47
24
23
49
45
19
38
39
11
1
32
25
35
8
17
7
9
4
2
34
10
3
```

In this larger example, in a chain that uses all of the adapters, there are **`22`** differences of 1 jolt and **`10`** differences of 3 jolts.

Find a chain that uses all of your adapters to connect the charging outlet to your device's built-in adapter and count the joltage differences between the charging outlet, the adapters, and your device. **What is the number of 1-jolt differences multiplied by the number of 3-jolt differences?**
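The worked example boils down to sorting the adapters, prepending the outlet (0), appending the device (max + 3), and counting the pairwise differences. A minimal standalone sketch of that idea (separate from the class-based solution below):

```
from collections import Counter

adapters = sorted([16, 10, 15, 5, 1, 11, 7, 19, 6, 12, 4])
chain = [0] + adapters + [adapters[-1] + 3]            # outlet ... adapters ... device
diffs = Counter(b - a for a, b in zip(chain, chain[1:]))
print(diffs[1], diffs[3], diffs[1] * diffs[3])         # 7 5 35
```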
|
import unittest
from IPython.display import Markdown, display
from aoc_puzzle import AocPuzzle
class AdapterArray(AocPuzzle):
def parse_data(self, raw_data):
self.adapter_list = list(map(int, raw_data.split('\n')))
self.adapter_list.sort()
self.adapter_list.insert(0,0)
self.adapter_list.append(self.adapter_list[-1]+3)
def calc_jolt_diff(self, output=False):
jolt_diffs = {}
for i in range(1,len(self.adapter_list)):
adapter = self.adapter_list[i]
prev_adapter = self.adapter_list[i-1]
jdiff = adapter - prev_adapter
if jdiff not in jolt_diffs:
jolt_diffs[jdiff] = 1
else:
jolt_diffs[jdiff] += 1
jolt_diff_product = jolt_diffs[1] * jolt_diffs[3]
if output:
display(Markdown(f'### Jolt diff product: `{jolt_diff_product}`'))
return jolt_diff_product
class TestBasic(unittest.TestCase):
def test_parse_data(self):
in_data = '16\n10\n15\n5\n1\n11\n7\n19\n6\n12\n4'
exp_out = [0, 1, 4, 5, 6, 7, 10, 11, 12, 15, 16, 19, 22]
aa = AdapterArray(in_data)
self.assertEqual(aa.adapter_list, exp_out)
def test_puzzle(self):
input_data = ['16\n10\n15\n5\n1\n11\n7\n19\n6\n12\n4','28\n33\n18\n42\n31\n14\n46\n20\n48\n47\n24\n23\n49\n45\n19\n38\n39\n11\n1\n32\n25\n35\n8\n17\n7\n9\n4\n2\n34\n10\n3']
exp_output = [35,220]
for in_data, exp_out in tuple(zip(input_data, exp_output)):
aa = AdapterArray(in_data)
self.assertEqual(aa.calc_jolt_diff(), exp_out)
unittest.main(argv=[""], exit=False)
aa = AdapterArray("input/d10.txt")
aa.calc_jolt_diff(output=True)
|
_____no_output_____
|
MIT
|
AoC 2020/AoC 2020 - Day 10.ipynb
|
RubenFixit/AoC
|
--- Part Two ---

To completely determine whether you have enough adapters, you'll need to figure out how many different ways they can be arranged. Every arrangement needs to connect the charging outlet to your device. The previous rules about when adapters can successfully connect still apply.

The first example above (the one that starts with `16`, `10`, `15`) supports the following arrangements:

```
(0), 1, 4, 5, 6, 7, 10, 11, 12, 15, 16, 19, (22)
(0), 1, 4, 5, 6, 7, 10, 12, 15, 16, 19, (22)
(0), 1, 4, 5, 7, 10, 11, 12, 15, 16, 19, (22)
(0), 1, 4, 5, 7, 10, 12, 15, 16, 19, (22)
(0), 1, 4, 6, 7, 10, 11, 12, 15, 16, 19, (22)
(0), 1, 4, 6, 7, 10, 12, 15, 16, 19, (22)
(0), 1, 4, 7, 10, 11, 12, 15, 16, 19, (22)
(0), 1, 4, 7, 10, 12, 15, 16, 19, (22)
```

(The charging outlet and your device's built-in adapter are shown in parentheses.) Given the adapters from the first example, the total number of arrangements that connect the charging outlet to your device is **`8`**.

The second example above (the one that starts with `28`, `33`, `18`) has many arrangements. Here are a few:

```
(0), 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 17, 18, 19, 20, 23, 24, 25, 28, 31, 32, 33, 34, 35, 38, 39, 42, 45, 46, 47, 48, 49, (52)
(0), 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 17, 18, 19, 20, 23, 24, 25, 28, 31, 32, 33, 34, 35, 38, 39, 42, 45, 46, 47, 49, (52)
(0), 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 17, 18, 19, 20, 23, 24, 25, 28, 31, 32, 33, 34, 35, 38, 39, 42, 45, 46, 48, 49, (52)
(0), 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 17, 18, 19, 20, 23, 24, 25, 28, 31, 32, 33, 34, 35, 38, 39, 42, 45, 46, 49, (52)
(0), 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 17, 18, 19, 20, 23, 24, 25, 28, 31, 32, 33, 34, 35, 38, 39, 42, 45, 47, 48, 49, (52)
(0), 3, 4, 7, 10, 11, 14, 17, 20, 23, 25, 28, 31, 34, 35, 38, 39, 42, 45, 46, 48, 49, (52)
(0), 3, 4, 7, 10, 11, 14, 17, 20, 23, 25, 28, 31, 34, 35, 38, 39, 42, 45, 46, 49, (52)
(0), 3, 4, 7, 10, 11, 14, 17, 20, 23, 25, 28, 31, 34, 35, 38, 39, 42, 45, 47, 48, 49, (52)
(0), 3, 4, 7, 10, 11, 14, 17, 20, 23, 25, 28, 31, 34, 35, 38, 39, 42, 45, 47, 49, (52)
(0), 3, 4, 7, 10, 11, 14, 17, 20, 23, 25, 28, 31, 34, 35, 38, 39, 42, 45, 48, 49, (52)
```

In total, this set of adapters can connect the charging outlet to your device in **`19208`** distinct arrangements.

You glance back down at your bag and try to remember why you brought so many adapters; there must be **more than a trillion** valid ways to arrange them! Surely, there must be an efficient way to count the arrangements.

**What is the total number of distinct ways you can arrange the adapters to connect the charging outlet to your device?**
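The efficient approach hinted at above is a small dynamic program: the number of ways to reach an adapter is the sum of the ways to reach any adapter 1-3 jolts below it. A minimal dictionary-based sketch on the first example (the class below implements the same idea with a list):

```
adapters = sorted([16, 10, 15, 5, 1, 11, 7, 19, 6, 12, 4])
chain = [0] + adapters + [adapters[-1] + 3]

ways = {0: 1}                                          # one way to "reach" the outlet
for a in chain[1:]:
    ways[a] = ways.get(a - 1, 0) + ways.get(a - 2, 0) + ways.get(a - 3, 0)
print(ways[chain[-1]])                                 # 8 arrangements for this example
```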
|
class AdapterArray2(AdapterArray):
def count_all_arrangements(self, output=False):
arrangements_list = [1]
for a_index in range(1,len(self.adapter_list)):
adapter = self.adapter_list[a_index]
arrangements = 0
for pa_index in range(a_index):
prev_adapter = self.adapter_list[pa_index]
jdiff = adapter - prev_adapter
if jdiff <= 3:
arrangements += arrangements_list[pa_index]
arrangements_list.append(arrangements)
all_arrangements = arrangements_list[-1]
if output:
display(Markdown(f'### Total possible ways to arrange the adapters: `{all_arrangements}`'))
return all_arrangements
class TestBasic(unittest.TestCase):
def test_puzzle2(self):
input_data = ['28\n33\n18\n42\n31\n14\n46\n20\n48\n47\n24\n23\n49\n45\n19\n38\n39\n11\n1\n32\n25\n35\n8\n17\n7\n9\n4\n2\n34\n10\n3','16\n10\n15\n5\n1\n11\n7\n19\n6\n12\n4']
exp_output = [19208, 8]
for in_data, exp_out in tuple(zip(input_data, exp_output)):
aa = AdapterArray2(in_data)
self.assertEqual(aa.count_all_arrangements(), exp_out)
unittest.main(argv=[""], exit=False)
aa = AdapterArray2("input/d10.txt")
aa.count_all_arrangements(output=True)
|
_____no_output_____
|
MIT
|
AoC 2020/AoC 2020 - Day 10.ipynb
|
RubenFixit/AoC
|
A Jupyter notebook for running PCSE/WOFOST on a CGMS8 database

This Jupyter notebook will demonstrate how to connect and read data from a CGMS8 database for a single grid. Next the data will be used to run a PCSE/WOFOST simulation for potential and water-limited conditions, the latter for all soil types present in the selected grid. Results are visualized and exported to an Excel file.

Note that no attempt is made to *write* data to a CGMS8 database, as writing data to a CGMS database can be tricky and slow. In our experience it is better to first dump simulation results to a CSV file and use specialized loading tools for loading data into the database, such as [SQLLoader](http://www.oracle.com/technetwork/database/enterprise-edition/sql-loader-overview-095816.html) for ORACLE or [pgloader](http://pgloader.io/) for PostgreSQL databases.

A dedicated package is now available for running WOFOST simulations using a CGMS database: [pyCGMS](https://github.com/ajwdewit/pycgms). The steps demonstrated in this notebook are implemented in the pyCGMS package as well, which provides a nicer interface for running simulations using a CGMS database.

**Prerequisites for running this notebook**

Several packages need to be installed for running PCSE/WOFOST on a CGMS8 database:
1. PCSE and its dependencies. See the [PCSE user guide](http://pcse.readthedocs.io/en/stable/installing.html) for more information;
2. The database client software for the database that will be used; this depends on your database of choice. For SQLite no client software is needed as it is included with python. For Oracle you will need the [Oracle client software](http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html) as well as the [python bindings for the Oracle client (cx_Oracle)](http://sourceforge.net/projects/cx-oracle/files/). See [here](https://wiki.python.org/moin/DatabaseInterfaces) for an overview of database connectors for python;
3. The `pandas` module for processing and visualizing WOFOST output;
4. The `matplotlib` module, although we will mainly use it through pandas.

Importing the relevant modules

First the required modules need to be imported. These include the CGMS8 data providers for PCSE as well as other relevant modules.
|
%matplotlib inline
import os, sys
data_dir = os.path.join(os.getcwd(), "data")
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import sqlalchemy as sa
import pandas as pd
import pcse
from pcse.db.cgms8 import GridWeatherDataProvider, AgroManagementDataProvider, SoilDataIterator, \
CropDataProvider, STU_Suitability, SiteDataProvider
from pcse.models import Wofost71_WLP_FD, Wofost71_PP
from pcse.util import DummySoilDataProvider, WOFOST71SiteDataProvider
from pcse.base_classes import ParameterProvider
print("This notebook was built with:")
print("python version: %s " % sys.version)
print("PCSE version: %s" % pcse.__version__)
|
This notebook was built with:
python version: 3.6.7 |Anaconda, Inc.| (default, Oct 24 2018, 09:45:24) [MSC v.1912 64 bit (AMD64)]
PCSE version: 5.4.1
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Building the connection to a CGMS8 database

The connection to the database will be made using SQLAlchemy. This requires a database URL to be provided; the format of this URL depends on the database of choice. See the SQLAlchemy documentation on [database URLs](http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls) for the different database URL formats.

For this example we will use a database that was created for Anhui province in China. This database can be downloaded [here](https://wageningenur4-my.sharepoint.com/:u:/g/personal/allard_dewit_wur_nl/EdwuayKW2IhOp6zCYElA0zsB3NGxcKjZc2zE_JGfVPv89Q?e=oEgI9R).
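For reference, here are a few hedged examples of what database URLs for other backends could look like; the host names, user names and passwords below are placeholders, not values from this notebook:

```
# SQLite: just a path to the database file (four slashes for an absolute path).
dbURL_sqlite = "sqlite:////home/user/CGMS_Anhui_complete.db"

# PostgreSQL: dialect://user:password@host:port/database
dbURL_pg = "postgresql://cgms_user:secret@dbserver:5432/cgms8"

# Oracle: requires the Oracle client software and cx_Oracle to be installed.
dbURL_ora = "oracle://cgms_user:secret@tnsname"

# engine = sa.create_engine(dbURL_pg)  # pick whichever URL matches your database
```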
|
cgms8_db = "d:/nobackup/CGMS8_Anhui/CGMS_Anhui_complete.db"
dbURL = "sqlite:///%s" % cgms8_db
engine = sa.create_engine(dbURL)
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Defining what should be simulated

For the simulation to run, some IDs must be provided that refer to the location (`grid_no`), crop type (`crop_no`) and year (`campaign_year`) for which the simulation should be carried out. These IDs refer to columns in the CGMS database that are used to define the relationships.
|
grid_no = 81159
crop_no = 1 # Winter-wheat
campaign_year = 2008
# if input/output should be printed set show_results=True
show_results = True
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Retrieving data for the simulation from the database

Weather data

Weather data will be derived from the GRID_WEATHER table in the database. By default, the entire time-series of weather data available for this grid cell will be fetched from the database.
|
weatherdata = GridWeatherDataProvider(engine, grid_no)
print(weatherdata)
|
Weather data provided by: GridWeatherDataProvider
--------Description---------
Weather data derived for grid_no: 81159
----Site characteristics----
Elevation: 29.0
Latitude: 33.170
Longitude: 116.334
Data available for 1990-01-01 - 2013-11-03
Number of missing days: 0
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Agromanagement information

Agromanagement in CGMS mainly refers to the cropping calendar for the given crop and location.
|
agromanagement = AgroManagementDataProvider(engine, grid_no, crop_no, campaign_year)
agromanagement
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Soil information

A CGMS grid cell can contain multiple soils which may or may not be suitable for a particular crop. Moreover, a complicating factor is the arrangement of soils in many soil maps, which consist of *Soil Mapping Units* `(SMUs)`: soil associations whose location on the map is known. Within an SMU, the actual soil types are known as *Soil Typological Units* `(STUs)`, whose spatial delineation is not known; only their percentage area within the SMU is known. Therefore, fetching soil information works in two steps:
1. First of all the `SoilDataIterator` will fetch all soil information for the given grid cell. It presents it as a list which contains all the SMUs that are present in the grid cell with their internal STU representation. The soil information is organized in such a way that the system can iterate over the different soils, including information on soil physical properties as well as SMU area and STU percentage within the SMU.
2. Second, the `STU_Suitability` will contain all soils that are suitable for a given crop. The 'STU_NO' of each STU can be used to check if a particular STU is suitable for that crop.

The example grid cell used here only contains a single SMU/STU combination.
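Because water-limited results are produced per STU, a common post-processing step is to aggregate them to the grid cell using area weights. The sketch below is only an illustration under two assumptions: that `summary_results` has been filled by the water-limited loop further down, and that the summary output contains a `TWSO` (total weight of storage organs) entry:

```
weighted_twso = 0.0
total_weight = 0.0
for smu_no, area, stu_no, percentage, soild in soil_iterator:
    runid = "smu_%s-stu_%s" % (smu_no, stu_no)
    if runid not in summary_results:      # e.g. the STU was not suitable for this crop
        continue
    weight = area * percentage / 100.     # STU area within the grid cell
    weighted_twso += weight * summary_results[runid][0]["TWSO"]
    total_weight += weight
print("Area-weighted TWSO:", weighted_twso / total_weight)
```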
|
soil_iterator = SoilDataIterator(engine, grid_no)
soil_iterator
suitable_stu = STU_Suitability(engine, crop_no)
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Crop parameters

Crop parameters are needed for parameterizing the crop simulation model. The `CropDataProvider` will retrieve them from the database for the given crop_no, grid_no and campaign_year.
|
cropd = CropDataProvider(engine, grid_no, crop_no, campaign_year)
if show_results:
print(cropd)
|
Crop parameter values for grid_no=81159, crop_no=1 (winter wheat), variety_no=55, campaign_year=2008 derived from sqlite:///d:/nobackup/CGMS8_Anhui/CGMS_Anhui_complete.db
{'CFET': 1.0, 'CVL': 0.685, 'CVO': 0.709, 'CVR': 0.694, 'CVS': 0.662, 'DEPNR': 4.5, 'DLC': 10.0, 'DLO': 13.5, 'DVSEND': 2.0, 'EFFTB': [0.0, 0.45, 40.0, 0.45], 'IAIRDU': 0.0, 'IDSL': 1.0, 'KDIFTB': [0.0, 0.6, 2.0, 0.6], 'LAIEM': 0.138, 'PERDL': 0.03, 'Q10': 2.0, 'RDI': 10.0, 'RDMCR': 125.0, 'RGRLAI': 0.0082, 'RML': 0.03, 'RMO': 0.01, 'RMR': 0.015, 'RMS': 0.015, 'RRI': 1.2, 'SPA': 0.0, 'SPAN': 23.5, 'SSATB': [0.0, 0.0, 2.0, 0.0], 'TBASE': 0.0, 'TBASEM': 0.0, 'TDWI': 195.0, 'TEFFMX': 30.0, 'TSUM1': 794.0, 'TSUM2': 715.0, 'TSUMEM': 100.0, 'AMAXTB': [0.0, 29.4766, 1.0, 29.4766, 1.3, 29.4766, 2.0, 3.6856, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'DTSMTB': [0.0, 0.0, 25.0, 25.0, 45.0, 25.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'FLTB': [0.0, 0.65, 0.1, 0.65, 0.25, 0.7, 0.5, 0.5, 0.646, 0.3, 0.95, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'FOTB': [0.0, 0.0, 0.95, 0.0, 1.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'FRTB': [0.0, 0.5, 0.1, 0.5, 0.2, 0.4, 0.35, 0.22, 0.4, 0.17, 0.5, 0.13, 0.7, 0.07, 0.9, 0.03, 1.2, 0.0, 2.0, 0.0], 'FSTB': [0.0, 0.35, 0.1, 0.35, 0.25, 0.3, 0.5, 0.5, 0.646, 0.7, 0.95, 1.0, 1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'RDRRTB': [0.0, 0.0, 1.5, 0.0, 1.5001, 0.02, 2.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'RDRSTB': [0.0, 0.0, 1.5, 0.0, 1.5001, 0.02, 2.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'RFSETB': [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'SLATB': [0.0, 0.0021, 0.5, 0.0015, 2.0, 0.0015, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'TMNFTB': [-5.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'TMPFTB': [0.0, 0.01, 10.0, 0.6, 15.0, 1.0, 25.0, 1.0, 35.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'VERNRTB': [], 'DVSI': 0.0, 'IOX': 0, 'CRPNAM': 'winter wheat'}
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Site parameters

Site parameters are an ancillary class of parameters that are related to a given site. For example, important parameters are the initial amount of moisture in the soil profile (WAV) and the atmospheric CO$_2$ concentration (CO2). Site parameters will be fetched for each soil type within the soil iteration loop.

Simulating with WOFOST

Placeholders for storing simulation results
|
daily_results = {}
summary_results = {}
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Potential production
|
# For potential production we can provide site data directly
sited = WOFOST71SiteDataProvider(CO2=360, WAV=25)
# We do not need soildata for potential production so we provide some dummy values here
soild = DummySoilDataProvider()
# Start WOFOST, run the simulation
parameters = ParameterProvider(sitedata=sited, soildata=soild, cropdata=cropd)
wofost = Wofost71_PP(parameters, weatherdata, agromanagement)
wofost.run_till_terminate()
# convert output to Pandas DataFrame and store it
daily_results['Potential'] = pd.DataFrame(wofost.get_output()).set_index("day")
summary_results['Potential'] = wofost.get_summary_output()
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Water-limited production

Water-limited simulations will be carried out for each soil type. First we will check that the soil type is suitable. Next we will retrieve the site data and run the simulation. Finally, we will collect the output and store the results.
|
for smu_no, area, stu_no, percentage, soild in soil_iterator:
# Check if this is a suitable STU
if stu_no not in suitable_stu:
continue
# retrieve the site data for this soil type
sited = SiteDataProvider(engine, grid_no, crop_no, campaign_year, stu_no)
# Start WOFOST, run the simulation
parameters = ParameterProvider(sitedata=sited, soildata=soild, cropdata=cropd)
wofost = Wofost71_WLP_FD(parameters, weatherdata, agromanagement)
wofost.run_till_terminate()
# Store simulation results
runid = "smu_%s-stu_%s" % (smu_no, stu_no)
daily_results[runid] = pd.DataFrame(wofost.get_output()).set_index("day")
summary_results[runid] = wofost.get_summary_output()
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Visualizing and exporting simulation results We can visualize the simulation results using pandas and matplotlib
|
# Generate a figure with 10 subplots
fig, axes = plt.subplots(nrows=5, ncols=2, figsize=(12, 30))
# Plot results
for runid, results in daily_results.items():
for var, ax in zip(results, axes.flatten()):
results[var].plot(ax=ax, title=var, label=runid)
ax.set_title(var)
fig.autofmt_xdate()
axes[0][0].legend(loc='upper left')
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Exporting the simulation results

A pandas DataFrame or Panel can easily be exported to a [variety of formats](http://pandas.pydata.org/pandas-docs/stable/io.html) including CSV, Excel or HDF5. First we convert the results to a Panel, next we will export to an Excel file.
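Note that `pd.Panel` was removed in pandas 1.0, so the cell below only works with the older pandas used when this notebook was written. A hedged alternative for recent pandas versions (assuming the `openpyxl` Excel writer is installed) is to write one sheet per run:

```
excel_fname = os.path.join(data_dir, "output", "cgms8_wofost_results.xlsx")
with pd.ExcelWriter(excel_fname) as writer:
    for runid, df in daily_results.items():
        df.to_excel(writer, sheet_name=str(runid)[:31])  # Excel limits sheet names to 31 characters
```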
|
excel_fname = os.path.join(data_dir, "output", "cgms8_wofost_results.xls")
panel = pd.Panel(daily_results)
panel.to_excel(excel_fname)
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Simulating with a different start date of the water balance

By default CGMS starts the simulation when the crop is planted. Particularly in dry climates this can be problematic because the results become very sensitive to the initial value of the soil water balance. In such scenarios, it is more realistic to start the water balance with a dry soil profile well before the crop is planted and let the soil 'fill up' as a result of rainfall. To enable this option, the column `GIVEN_STARTDATE_WATBAL` in the table `INITIAL_SOIL_WATER` should be set to the right starting date for each grid_no, crop_no, year and stu_no. Moreover, the other parameters in the table should be set to the appropriate values (particularly the initial soil moisture `WAV`).

The start date of the water balance should then be used to update the agromanagement data during the simulation loop, see the example below.
|
for smu_no, area, stu_no, percentage, soild in soil_iterator:
# Check if this is a suitable STU
if stu_no not in suitable_stu:
continue
# retrieve the site data for this soil type
sited = SiteDataProvider(engine, grid_no, crop_no, campaign_year, stu_no)
# update the campaign start date in the agromanagement data
agromanagement.set_campaign_start_date(sited.start_date_waterbalance)
# Start WOFOST, run the simulation
parameters = ParameterProvider(sitedata=sited, soildata=soild, cropdata=cropd)
wofost = Wofost71_WLP_FD(parameters, weatherdata, agromanagement)
wofost.run_till_terminate()
# Store simulation results
runid = "smu_%s-stu_%s" % (smu_no, stu_no)
daily_results[runid] = pd.DataFrame(wofost.get_output()).set_index("day")
summary_results[runid] = wofost.get_summary_output()
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
Let's show the results As you can see, the results from the simulation are slightly different because of a different start date of the water balance.NOTE: the dates on the x-axis are the same except for the soil moisture chart 'SM' where the water-limited simulation results start before potential results. This is a matplotlib problem.
|
# Generate a figure with 10 subplots
fig, axes = plt.subplots(nrows=5, ncols=2, figsize=(12, 30))
# Plot results
for runid, results in daily_results.items():
for var, ax in zip(results, axes.flatten()):
results[var].plot(ax=ax, title=var, label=runid)
fig.autofmt_xdate()
axes[0][0].legend(loc='upper left')
|
_____no_output_____
|
MIT
|
05 Using PCSE WOFOST with a CGMS8 database.ipynb
|
CHNEWUAF/pcse_notebooks
|
7.6 Implementation of a Transformer model (for classification tasks)

- In this file, we implement a Transformer model for class classification.

Note: all files in this chapter assume they are run on Ubuntu. Be careful when running them in environments with a different character encoding, such as Windows.

7.6 Learning goals
1. Understand the module structure of the Transformer
2. Understand why natural language processing is possible with a CNN-based Transformer, without using LSTMs or RNNs
3. Become able to implement a Transformer

Preparation

Prepare the data used in this chapter by following the instructions in the book.
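For reference, the single-head Attention class implemented below computes scaled dot-product attention (with a mask applied to the padded positions before the softmax):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$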
|
import math
import numpy as np
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchtext
# Setup seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
class Embedder(nn.Module):
'''Converts the words given as IDs into embedding vectors'''
def __init__(self, text_embedding_vectors):
super(Embedder, self).__init__()
self.embeddings = nn.Embedding.from_pretrained(
embeddings=text_embedding_vectors, freeze=True)
# freeze=True keeps the embeddings fixed; they are not updated by backpropagation
def forward(self, x):
x_vec = self.embeddings(x)
return x_vec
# Check that the module works
# Get the DataLoaders and TEXT object from the previous section
from utils.dataloader import get_IMDb_DataLoaders_and_TEXT
train_dl, val_dl, test_dl, TEXT = get_IMDb_DataLoaders_and_TEXT(
max_length=256, batch_size=24)
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
# Input and output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
print("Input tensor size:", x.shape)
print("Output tensor size:", x1.shape)
class PositionalEncoder(nn.Module):
'''Adds vector information that encodes the position of each input word'''
def __init__(self, d_model=300, max_seq_len=256):
super().__init__()
self.d_model = d_model  # number of dimensions of the word vectors
# Build a table pe whose values are uniquely determined by the word position (pos) and the embedding dimension index (i)
pe = torch.zeros(max_seq_len, d_model)
# If a GPU is available, send pe to the GPU; omitted here but used during actual training
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# pe = pe.to(device)
for pos in range(max_seq_len):
for i in range(0, d_model, 2):
pe[pos, i] = math.sin(pos / (10000 ** ((2 * i)/d_model)))
pe[pos, i + 1] = math.cos(pos /
(10000 ** ((2 * (i + 1))/d_model)))
# Add a leading dimension to the table pe that will serve as the mini-batch dimension
self.pe = pe.unsqueeze(0)
# Make sure no gradients are computed for pe
self.pe.requires_grad = False
def forward(self, x):
# Add the positional encoding to the input x
# x is smaller than pe, so scale it up first
ret = math.sqrt(self.d_model)*x + self.pe
return ret
# Check that the module works
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
# Input and output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)
print("Input tensor size:", x1.shape)
print("Output tensor size:", x2.shape)
class Attention(nn.Module):
'''The real Transformer uses multi-head attention, but for clarity
we implement single-head attention here'''
def __init__(self, d_model=300):
super().__init__()
# SAGAN used 1d convolutions; here fully connected layers transform the features
self.q_linear = nn.Linear(d_model, d_model)
self.v_linear = nn.Linear(d_model, d_model)
self.k_linear = nn.Linear(d_model, d_model)
# Fully connected layer used for the output
self.out = nn.Linear(d_model, d_model)
# Variable used to scale the attention scores
self.d_k = d_model
def forward(self, q, k, v, mask):
# Transform the features with the fully connected layers
k = self.k_linear(k)
q = self.q_linear(q)
v = self.v_linear(v)
# Compute the attention values
# Summing everything directly would get too large, so divide by sqrt(d_k) to rescale
weights = torch.matmul(q, k.transpose(1, 2)) / math.sqrt(self.d_k)
# Apply the mask here
mask = mask.unsqueeze(1)
weights = weights.masked_fill(mask == 0, -1e9)
# Normalize with softmax
normlized_weights = F.softmax(weights, dim=-1)
# Multiply the attention weights with the values
output = torch.matmul(normlized_weights, v)
# Transform the features with the output fully connected layer
output = self.out(output)
return output, normlized_weights
class FeedForward(nn.Module):
def __init__(self, d_model, d_ff=1024, dropout=0.1):
'''A simple unit that just transforms the output of the Attention layer with two fully connected layers'''
super().__init__()
self.linear_1 = nn.Linear(d_model, d_ff)
self.dropout = nn.Dropout(dropout)
self.linear_2 = nn.Linear(d_ff, d_model)
def forward(self, x):
x = self.linear_1(x)
x = self.dropout(F.relu(x))
x = self.linear_2(x)
return x
class TransformerBlock(nn.Module):
def __init__(self, d_model, dropout=0.1):
super().__init__()
# LayerNormalization layers
# https://pytorch.org/docs/stable/nn.html?highlight=layernorm
self.norm_1 = nn.LayerNorm(d_model)
self.norm_2 = nn.LayerNorm(d_model)
# Attention layer
self.attn = Attention(d_model)
# The two fully connected layers applied after Attention
self.ff = FeedForward(d_model)
# Dropout
self.dropout_1 = nn.Dropout(dropout)
self.dropout_2 = nn.Dropout(dropout)
def forward(self, x, mask):
# Normalization and Attention
x_normlized = self.norm_1(x)
output, normlized_weights = self.attn(
x_normlized, x_normlized, x_normlized, mask)
x2 = x + self.dropout_1(output)
# Normalization and the fully connected layers
x_normlized2 = self.norm_2(x2)
output = x2 + self.dropout_2(self.ff(x_normlized2))
return output, normlized_weights
# Check that the module works
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
net3 = TransformerBlock(d_model=300)
# Create the mask
x = batch.Text[0]
input_pad = 1  # in the word IDs, '<pad>' is 1
input_mask = (x != input_pad)
print(input_mask[0])
# Input and output
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)  # add positional information
x3, normlized_weights = net3(x2, input_mask)  # transform the features with self-attention
print("Input tensor size:", x2.shape)
print("Output tensor size:", x3.shape)
print("Attention size:", normlized_weights.shape)
class ClassificationHead(nn.Module):
'''Uses the output of the Transformer blocks to perform the final class classification'''
def __init__(self, d_model=300, output_dim=2):
super().__init__()
# Fully connected layer
self.linear = nn.Linear(d_model, output_dim)  # output_dim is 2: positive / negative
# Weight initialization
nn.init.normal_(self.linear.weight, std=0.02)
nn.init.normal_(self.linear.bias, 0)
def forward(self, x):
x0 = x[:, 0, :]  # take the features (300 dims) of the first word of each sentence in the mini-batch
out = self.linear(x0)
return out
# Check that the module works
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
net3 = TransformerBlock(d_model=300)
net4 = ClassificationHead(output_dim=2, d_model=300)
# Input and output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)  # add positional information
x3, normlized_weights = net3(x2, input_mask)  # transform the features with self-attention
x4 = net4(x3)  # use the first word of the final output to produce the classification scores
print("Input tensor size:", x3.shape)
print("Output tensor size:", x4.shape)
# The final Transformer model class
class TransformerClassification(nn.Module):
'''Class classification with a Transformer'''
def __init__(self, text_embedding_vectors, d_model=300, max_seq_len=256, output_dim=2):
super().__init__()
# Build the model
self.net1 = Embedder(text_embedding_vectors)
self.net2 = PositionalEncoder(d_model=d_model, max_seq_len=max_seq_len)
self.net3_1 = TransformerBlock(d_model=d_model)
self.net3_2 = TransformerBlock(d_model=d_model)
self.net4 = ClassificationHead(output_dim=output_dim, d_model=d_model)
def forward(self, x, mask):
x1 = self.net1(x)  # convert the words to vectors
x2 = self.net2(x1)  # add positional information
x3_1, normlized_weights_1 = self.net3_1(
x2, mask)  # transform the features with self-attention
x3_2, normlized_weights_2 = self.net3_2(
x3_1, mask)  # transform the features with self-attention
x4 = self.net4(x3_2)  # use the first word of the final output to produce the classification scores
return x4, normlized_weights_1, normlized_weights_2
# Check that the model works
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net = TransformerClassification(
text_embedding_vectors=TEXT.vocab.vectors, d_model=300, max_seq_len=256, output_dim=2)
# Input and output
x = batch.Text[0]
input_mask = (x != input_pad)
out, normlized_weights_1, normlized_weights_2 = net(x, input_mask)
print("Output tensor size:", out.shape)
print("Softmax of the output tensor:", F.softmax(out, dim=1))
|
Output tensor size: torch.Size([24, 2])
Softmax of the output tensor: tensor([[0.6980, 0.3020],
[0.7318, 0.2682],
[0.7244, 0.2756],
[0.7135, 0.2865],
[0.7022, 0.2978],
[0.6974, 0.3026],
[0.6831, 0.3169],
[0.6487, 0.3513],
[0.7096, 0.2904],
[0.7221, 0.2779],
[0.7213, 0.2787],
[0.7046, 0.2954],
[0.6738, 0.3262],
[0.7069, 0.2931],
[0.7217, 0.2783],
[0.6837, 0.3163],
[0.7011, 0.2989],
[0.6944, 0.3056],
[0.6860, 0.3140],
[0.7183, 0.2817],
[0.7256, 0.2744],
[0.7288, 0.2712],
[0.6678, 0.3322],
[0.7253, 0.2747]], grad_fn=<SoftmaxBackward>)
|
MIT
|
7_nlp_sentiment_transformer/7-6_Transformer.ipynb
|
Jinx-git/pytorch_advanced
|
Run model module locally
|
import os
# Import os environment variables for file hyperparameters.
os.environ["TRAIN_FILE_PATTERN"] = "gs://machine-learning-1234-bucket/gan/data/cifar10/train*.tfrecord"
os.environ["EVAL_FILE_PATTERN"] = "gs://machine-learning-1234-bucket/gan/data/cifar10/test*.tfrecord"
os.environ["OUTPUT_DIR"] = "gs://machine-learning-1234-bucket/gan/cdcgan/trained_model2"
# Import os environment variables for train hyperparameters.
os.environ["TRAIN_BATCH_SIZE"] = str(100)
os.environ["TRAIN_STEPS"] = str(50000)
os.environ["SAVE_SUMMARY_STEPS"] = str(100)
os.environ["SAVE_CHECKPOINTS_STEPS"] = str(5000)
os.environ["KEEP_CHECKPOINT_MAX"] = str(10)
os.environ["INPUT_FN_AUTOTUNE"] = "False"
# Import os environment variables for eval hyperparameters.
os.environ["EVAL_BATCH_SIZE"] = str(16)
os.environ["EVAL_STEPS"] = str(10)
os.environ["START_DELAY_SECS"] = str(6000)
os.environ["THROTTLE_SECS"] = str(6000)
# Import os environment variables for image hyperparameters.
os.environ["HEIGHT"] = str(32)
os.environ["WIDTH"] = str(32)
os.environ["DEPTH"] = str(3)
# Import os environment variables for label hyperparameters.
num_classes = 10
os.environ["NUM_CLASSES"] = str(num_classes)
os.environ["LABEL_EMBEDDING_DIMENSION"] = str(10)
# Import os environment variables for generator hyperparameters.
os.environ["LATENT_SIZE"] = str(512)
os.environ["GENERATOR_PROJECTION_DIMS"] = "4,4,256"
os.environ["GENERATOR_USE_LABELS"] = "True"
os.environ["GENERATOR_EMBED_LABELS"] = "True"
os.environ["GENERATOR_CONCATENATE_LABELS"] = "True"
os.environ["GENERATOR_NUM_FILTERS"] = "128,128,128"
os.environ["GENERATOR_KERNEL_SIZES"] = "4,4,4"
os.environ["GENERATOR_STRIDES"] = "2,2,2"
os.environ["GENERATOR_FINAL_NUM_FILTERS"] = str(3)
os.environ["GENERATOR_FINAL_KERNEL_SIZE"] = str(3)
os.environ["GENERATOR_FINAL_STRIDE"] = str(1)
os.environ["GENERATOR_LEAKY_RELU_ALPHA"] = str(0.2)
os.environ["GENERATOR_FINAL_ACTIVATION"] = "tanh"
os.environ["GENERATOR_L1_REGULARIZATION_SCALE"] = str(0.)
os.environ["GENERATOR_L2_REGULARIZATION_SCALE"] = str(0.)
os.environ["GENERATOR_OPTIMIZER"] = "Adam"
os.environ["GENERATOR_LEARNING_RATE"] = str(0.0002)
os.environ["GENERATOR_ADAM_BETA1"] = str(0.5)
os.environ["GENERATOR_ADAM_BETA2"] = str(0.999)
os.environ["GENERATOR_ADAM_EPSILON"] = str(1e-8)
os.environ["GENERATOR_CLIP_GRADIENTS"] = "None"
os.environ["GENERATOR_TRAIN_STEPS"] = str(1)
# Import os environment variables for discriminator hyperparameters.
os.environ["DISCRIMINATOR_USE_LABELS"] = "True"
os.environ["DISCRIMINATOR_EMBED_LABELS"] = "True"
os.environ["DISCRIMINATOR_CONCATENATE_LABELS"] = "True"
os.environ["DISCRIMINATOR_NUM_FILTERS"] = "64,128,128,256"
os.environ["DISCRIMINATOR_KERNEL_SIZES"] = "3,3,3,3"
os.environ["DISCRIMINATOR_STRIDES"] = "1,2,2,2"
os.environ["DISCRIMINATOR_DROPOUT_RATES"] = "0.3,0.3,0.3,0.3"
os.environ["DISCRIMINATOR_LEAKY_RELU_ALPHA"] = str(0.2)
os.environ["DISCRIMINATOR_L1_REGULARIZATION_SCALE"] = str(0.)
os.environ["DISCRIMINATOR_L2_REGULARIZATION_SCALE"] = str(0.)
os.environ["DISCRIMINATOR_OPTIMIZER"] = "Adam"
os.environ["DISCRIMINATOR_LEARNING_RATE"] = str(0.0002)
os.environ["DISCRIMINATOR_ADAM_BETA1"] = str(0.5)
os.environ["DISCRIMINATOR_ADAM_BETA2"] = str(0.999)
os.environ["DISCRIMINATOR_ADAM_EPSILON"] = str(1e-8)
os.environ["DISCRIMINATOR_CLIP_GRADIENTS"] = "None"
os.environ["DISCRIMINATOR_TRAIN_STEPS"] = str(1)
os.environ["LABEL_SMOOTHING"] = str(0.9)
|
_____no_output_____
|
Apache-2.0
|
machine_learning/gan/cdcgan/tf_cdcgan/tf_cdcgan_run_module_local.ipynb
|
ryangillard/artificial_intelligence
|