Dataset columns (string lengths): markdown (0–1.02M), code (0–832k), output (0–1.02M), license (3–36), path (6–265), repo_name (6–127)
Do you see the "elbow"? At what value of $k$ does it occur? Evaluate the results of our clustering algorithm for the best $k$. Use the slider below to choose the "best" $k$ that you determined from looking at the elbow plot. Evaluate the results in the PCA plot. Does this look like a good value of $k$ to separate the data into meaningful clusters?
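For reference, here is a minimal sketch of how such an elbow plot is typically produced. It is not part of the original notebook; it assumes `KMeans`, the scaled feature matrix `scaled_df`, and the random seed `rs` are already defined in earlier cells, as in the slider cell below.

```python
# Hypothetical elbow-plot sketch: inertia (within-cluster sum of squares) vs. k.
import matplotlib.pyplot as plt

inertias = []
k_values = range(1, 11)
for k in k_values:
    km = KMeans(n_clusters=k, n_init=25, random_state=rs).fit(scaled_df)
    inertias.append(km.inertia_)  # lower inertia means tighter clusters

plt.plot(k_values, inertias, marker='o')
plt.xlabel('k (number of clusters)')
plt.ylabel('Inertia')
plt.title('Elbow plot')
plt.show()
```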
best_k = 1 #@param {type:"slider", min:1, max:10, step:1}

# Cluster the data using the k-means algorithm
best_cluster_results = KMeans(n_clusters=best_k, n_init=25, random_state=rs).fit(scaled_df)

# Save the cluster labels in our dataframe
tracks_df['best_cluster'] = ['Cluster ' + str(i) for i in best_cluster_results.labels_]

# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['best_cluster'])
_____no_output_____
MIT
Session_2_Practical_Data_Science.ipynb
MattFinney/practical_data_science_in_python
How did we do? In addition to the mathematical ways to validate the selection of the best $k$ parameter for our model and the quality of the resulting clusters, there's another very important way to evaluate our results: listening to the tracks! Let's listen to the tracks in each cluster. What do you notice about the attributes that tracks in each cluster have in common? What do you notice about how the clusters differ? What makes each cluster unique?
play_cluster_tracks(tracks_df, cluster_column='best_cluster')
Cluster 0 Track Name: Needy Bees Artist Name(s): Nick Hakim
MIT
Session_2_Practical_Data_Science.ipynb
MattFinney/practical_data_science_in_python
What is Python? Python is a high-level, general-purpose programming language widely used in a variety of technical and non-technical fields. Python is an interpreted, object-oriented, high-level language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for rapid application development, as well as for use as a scripting or "glue" language to connect existing components. Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages modular code reuse.
print('hello world')
hello world
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
A Brief History of Python

In 1989, to pass the Christmas holidays, Guido began writing an interpreter for the Python language. The name Python comes from Guido's favourite TV series, Monty Python's Flying Circus. He hoped the new language would fulfil his vision of a full-featured, easy-to-learn, extensible language sitting between C and Shell. Python was invented by the Dutchman Guido van Rossum in 1989, and the first public release appeared in 1991.

- Granddaddy of Python web frameworks, Zope 1 was released in 1999
- Python 1.0 - January 1994: added lambda, map, filter and reduce
- Python 2.0 - October 16, 2000: added garbage collection, forming the basis of today's Python language
- Python 2.4 - November 30, 2004: the same year Django, currently the most popular web framework, was born
- Python 2.5 - September 19, 2006
- Python 2.6 - October 1, 2008
- Python 2.7 - July 3, 2010
- Python 3.0 - December 3, 2008
- Python 3.1 - June 27, 2009
- Python 3.2 - February 20, 2011
- Python 3.3 - September 29, 2012
- Python 3.4 - March 16, 2014
- Python 3.5 - September 13, 2015
- Python 3.6 - December 23, 2016
- Python 3.7 - June 15, 2018

Python's main application areas:
- Cloud computing: the hottest language in cloud computing; OpenStack is a typical application
- Web development: many excellent web frameworks; many large sites such as YouTube, Dropbox and Douban are built in Python; typical frameworks include Django
- Scientific computing and artificial intelligence: typical libraries include NumPy, SciPy, Matplotlib and pandas
- System operations and maintenance: the essential language for operations engineers
- Finance: quantitative trading and financial analysis; in financial engineering Python is not only the most widely used language, its importance also grows year by year

![jupyter](./tiobe_rank.png)

Some companies using Python:
- Google: Google App Engine, code.google.com, the Google crawler, Google Ads and other projects make extensive use of Python
- CIA: the website of the US Central Intelligence Agency was developed in Python
- NASA: NASA uses Python extensively for data analysis and computation
- YouTube: the world's largest video site was developed in Python
- Dropbox: the largest online cloud-storage service in the US, implemented entirely in Python, handling a billion file uploads and downloads per day
- Instagram: the largest photo-sharing social network in the US, with more than 30 million photos shared every day, developed in Python
- Facebook: a large number of its base libraries are implemented in Python
- Red Hat: the Yum package manager of the world's most popular Linux distribution is written in Python
- Douban: almost all of the company's business is developed in Python
- Zhihu: China's largest Q&A community (the counterpart of Quora abroad) is developed in Python

Beyond these, companies such as Sohu, Kingsoft, Tencent, Shanda, NetEase, Baidu, Alibaba, Taobao, Tudou, Sina and Guokr use Python for all kinds of tasks.

Python has the following characteristics:
1. Open source: Python and most of its available libraries and tools are open source, usually under fairly flexible and open licenses.
2. Multi-paradigm: Python supports different programming and implementation paradigms, such as object-oriented and imperative/functional or procedural programming.
3. Multi-purpose: Python can be used for rapid, interactive code development as well as for building large applications; it can be used for low-level system operations as well as for high-level analytics tasks.
4. Cross-platform: Python is available on most major operating systems, such as Windows, Linux and Mac OS; it is used to build both desktop and web applications.
5. Slow execution speed: compared with C and C++.

Commonly used Python standard libraries. The math module provides access to the underlying C library functions for floating-point math:
import math

print(math.pi)
print(math.log(1024, 2))
3.141592653589793 10.0
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The random module provides tools for generating random numbers.
import random

print(random.choice(['apple', 'pear', 'banana']))
print(random.random())
apple 0.0034954793658343863
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The datetime module provides both simple and complex methods for handling dates and times.
from datetime import date

now = date.today()
birthday = date(1999, 8, 20)
age = now - birthday
print(age.days)
7625
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
NumPy is the fundamental package for high-performance scientific computing and data analysis. Pandas incorporates a large number of libraries and some standard data models, and provides the tools needed to work efficiently with large datasets. Statsmodels is a Python module that provides classes and functions for estimating many different statistical models, as well as for running statistical tests and exploring statistical data. Matplotlib is a library for plotting data, very useful for data scientists and analysts. More: https://docs.python.org/zh-cn/3/library/

Infrastructure tools: installing Anaconda (https://www.anaconda.com/products/individual), using Spyder, creating and using GitHub.

GitHub is a hosting platform for open-source and private software projects; it is called GitHub because it supports only Git as the version-control format. GitHub officially launched on April 10, 2008. Besides Git repository hosting and a basic web management interface, it provides subscriptions, discussion groups, text rendering, an online file editor, collaboration graphs (reports), code snippet sharing (Gist) and other features. It has over 3.5 million registered users and hosts an enormous number of repositories, including well-known open-source projects such as Ruby on Rails, jQuery and Python. Last year GitHub paid out $166,000 in bug bounties. In June 2018 GitHub was acquired by Microsoft for $7.5 billion. https://github.com/

Basic Python syntax
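As a quick, hedged illustration of the scientific libraries mentioned above (NumPy, pandas, Matplotlib), here is a minimal sketch; it is not part of the original lecture, and the array values and column names are made up for the example.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# NumPy: fast numerical arrays
a = np.arange(6).reshape(2, 3)
print(a.mean(), a.sum(axis=0))

# pandas: labeled tabular data (hypothetical toy data)
df = pd.DataFrame({'city': ['Beijing', 'Shanghai'], 'population': [2154, 2428]})
print(df.describe())

# Matplotlib: plotting
plt.bar(df['city'], df['population'])
plt.ylabel('population (10k)')
plt.show()
```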
print ("Hello, Python!")
Hello, Python!
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Lines and indentation. Python's most distinctive feature is the use of indentation to define code blocks. The amount of indentation is up to you, but all statements within a block must be indented by the same amount; this is strictly enforced. The following example uses four spaces of indentation:
if 1>2:
    print ("True")
else:
    print ("False")

if True:
    print ("Answer")
    print ("True")
else:
    print ("Answer")
  # without consistent indentation, this raises an error at execution time
  print ("False")
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Multi-line statements. In Python, a new line normally marks the end of a statement, but we can use a backslash ( \ ) to split one statement across multiple lines, as shown below:
total = 1 + \
        2 + \
        3
print(total)
6
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Statements that contain [], {} or () brackets do not need the line-continuation character. For example:
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print(days)
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python quotes. Python can use single quotes ( ' ), double quotes ( " ), and triple quotes ( ''' or """ ) to represent strings; the opening and closing quotes must be of the same type. Triple-quoted strings can span multiple lines, a convenient syntax for multi-line text; they are commonly used for docstrings and, at specific places in a file, are treated as comments.
word = 'word'
sentence = "这是一个句子。"
paragraph = """这是一个段落。
包含了多个语句"""

print(word)
print(sentence)
print(paragraph)
word 这是一个句子。 这是一个段落。 包含了多个语句
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python comments. In Python, a single-line comment starts with #.
# first comment
print ("Hello, Python!")  # second comment
Hello, Python!
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python variable types. Standard data types: data stored in memory can be of many types; for example, a person's age can be stored as a number and their name as characters. Python defines some standard types for storing various kinds of data. Python has five standard data types: Numbers, String, List, Tuple, Dictionary. Python numbers: Python supports three different numeric types: int (signed integers), float (floating-point numbers) and complex (complex numbers).
int1 = 1
float2 = 2.0
complex3 = 1 + 2j

print(type(int1), type(float2), type(complex3))
<class 'int'> <class 'float'> <class 'complex'>
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python strings. A string is a sequence of characters made up of digits, letters and underscores.
st = '123asd_'
st1 = st[0:3]

print(st)
print(st1)
123asd_ 123
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python lists. The list is the most frequently used data type in Python. Lists can implement most collection-like data structures. They support characters, numbers, strings, and can even contain other lists (i.e., nesting). Lists are written with square brackets [ ] and are Python's most general compound data type.
list1 = ['runoob', 786, 2.23, 'john', 70.2]
tinylist = [123, 'john']

print (list1)             # print the complete list
print (list1[0])          # print the first element of the list
print (list1[1:3])        # print the second and third elements
print (list1[2:])         # print all elements from the third to the end
print (tinylist * 2)      # print the list twice
print (list1 + tinylist)  # print the concatenated lists

list1[0] = 0
print(list1)
['runoob', 786, 2.23, 'john', 70.2] runoob [786, 2.23] [2.23, 'john', 70.2] [123, 'john', 123, 'john'] ['runoob', 786, 2.23, 'john', 70.2, 123, 'john'] [0, 786, 2.23, 'john', 70.2]
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python tuples. A tuple is another data type, similar to a list. Tuples are written with parentheses ( ), with elements separated by commas. However, tuples cannot be reassigned; they are effectively read-only lists.
tuple1 = ('runoob', 786, 2.23, 'john', 70.2)
tinytuple = (123, 'john')

print(tuple1[0])
print(tuple1 + tinytuple)
runoob ('runoob', 786, 2.23, 'john', 70.2, 123, 'john')
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python dictionaries. The dictionary is the most flexible built-in data structure in Python besides the list. A list is an ordered collection of objects, whereas a dictionary is an unordered collection. The difference is that dictionary elements are accessed by key rather than by offset. Dictionaries are written with braces "{ }" and consist of keys and their corresponding values.
dict1 = {}
dict1['one'] = "This is one"

tinydict = {'name': 'john', 'code': 6734, 'dept': 'sales'}

print(tinydict['name'])
print(dict1)
print(tinydict.keys())
print(tinydict.values())
john {'one': 'This is one'} dict_keys(['name', 'code', 'dept']) dict_values(['john', 6734, 'sales'])
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python operators. Python arithmetic operators:
a = 21
b = 10
c = 0

c = a + b
print ("c 的值为:", c)

c = a - b
print ("c 的值为:", c)

c = a * b
print ("c 的值为:", c)

c = a / b
print ("c 的值为:", c)

c = a % b  # remainder
print ("c 的值为:", c)

# change the values of a, b and c
a = 2
b = 3
c = a**b
print ("c 的值为:", c)

a = 10
b = 5
c = a//b  # floor division
print ("c 的值为:", c)
c 的值为: 31 c 的值为: 11 c 的值为: 210 c 的值为: 2.1 c 的值为: 1 c 的值为: 8 c 的值为: 2
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python comparison operators:
a = 21
b = 10
c = 0

if a == b:
    print ("a 等于 b")
else:
    print ("a 不等于 b")

if a != b:
    print ("a 不等于 b")
else:
    print ("a 等于 b")

if a < b:
    print ("a 小于 b")
else:
    print ("a 大于等于 b")

if a > b:
    print ("a 大于 b")
else:
    print ("a 小于等于 b")

# change the values of a and b
a = 5
b = 20

if a <= b:
    print ("a 小于等于 b")
else:
    print ("a 大于 b")

if b >= a:
    print ("b 大于等于 a")
else:
    print ("b 小于 a")
a 不等于 b a 不等于 b a 大于等于 b a 大于 b a 小于等于 b b 大于等于 a
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python logical operators:
a = True
b = False

if a and b:
    print ("变量 a 和 b 都为 true")
else:
    print ("变量 a 和 b 有一个不为 true")

if a or b:
    print ("变量 a 和 b 都为 true,或其中一个变量为 true")
else:
    print ("变量 a 和 b 都不为 true")

if not( a and b ):
    print ("变量 a 和 b 都为 false,或其中一个变量为 false")
else:
    print ("变量 a 和 b 都为 true")
变量 a 和 b 有一个不为 true 变量 a 和 b 都为 true,或其中一个变量为 true 变量 a 和 b 都为 false,或其中一个变量为 false
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python assignment operators:
a = 21
b = 10
c = 0

c = a + b
print ("c 的值为:", c)

c += a
print ("c 的值为:", c)

c *= a
print ("c 的值为:", c)

c /= a
print ("c 的值为:", c)

c = 2
c %= a
print ("c 的值为:", c)

c **= a
print ("c 的值为:", c)

c //= a
print ("c 的值为:", c)
c 的值为: 31 c 的值为: 52 c 的值为: 1092 c 的值为: 52.0 c 的值为: 2 c 的值为: 2097152 c 的值为: 99864
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python conditional statements. A conditional statement decides which block of code to execute based on the result (True or False) of one or more expressions.
flag = False
name = 'luren'
if name == 'python':        # check whether the variable equals 'python'
    flag = True             # set the flag to True when the condition holds
    print ('welcome boss')  # and print a welcome message
else:
    print (name)            # otherwise print the variable's value

num = 5
if num == 3:                # check the value of num
    print ('boss')
elif num == 2:
    print ('user')
elif num == 1:
    print ('worker')
elif num < 0:               # printed when the value is below zero
    print ('error')
else:
    print ('roadman')       # printed when none of the conditions hold

num = 9
if num >= 0 and num <= 10:  # check whether the value is between 0 and 10
    print ('hello')

num = 10
if num < 0 or num > 10:     # check whether the value is below 0 or above 10
    print ('hello')
else:
    print ('undefine')

num = 8
# check whether the value is between 0 and 5 or between 10 and 15
if (num >= 0 and num <= 5) or (num >= 10 and num <= 15):
    print ('hello')
else:
    print ('undefine')
hello undefine undefine
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python loop statements. Python provides for loops and while loops. Python while loop: the while statement executes a block of code repeatedly as long as a condition holds, which is useful for tasks that need to be repeated.
count = 0
while (count < 9):
    print ('The count is:', count)
    count = count + 1
print ("Good bye!")

count = 0
while count < 5:
    print (count, " is less than 5")
    count = count + 1
else:
    print (count, " is not less than 5")
0 is less than 5 1 is less than 5 2 is less than 5 3 is less than 5 4 is less than 5 5 is not less than 5
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python for loop:
fruits = ['banana', 'apple', 'mango']
for index in range(len(fruits)):
    print ('当前水果 :', fruits[index])
print ("Good bye!")
当前水果 : banana 当前水果 : apple 当前水果 : mango Good bye!
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Nested loops in Python:
num = []
i = 2
for i in range(2, 100):
    j = 2
    for j in range(2, i):
        if (i % j == 0):
            break
    else:
        num.append(i)
print(num)
print ("Good bye!")
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] Good bye!
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python break statement:
for letter in 'Python':
    if letter == 'h':
        break
    print ('当前字母 :', letter)
当前字母 : P 当前字母 : y 当前字母 : t
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python continue statement. continue skips the rest of the current iteration of the loop, whereas break exits the entire loop.
for letter in 'Python':
    if letter == 'h':
        continue
    print ('当前字母 :', letter)
当前字母 : P 当前字母 : y 当前字母 : t 当前字母 : o 当前字母 : n
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Python pass statement. pass is a null statement, used to keep the program structure complete. It does nothing and is generally used as a placeholder.
# print every letter of 'Python'
for letter in 'Python':
    if letter == 'h':
        pass
        print ('这是 pass 块')
    print ('当前字母 :', letter)
print ("Good bye!")
当前字母 : P 当前字母 : y 当前字母 : t 这是 pass 块 当前字母 : h 当前字母 : o 当前字母 : n Good bye!
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
A Python application example (analysis of Lianjia second-hand housing data). 1. Based on partial second-hand housing listings in Shanghai, examine and analyze, from several angles, which factors the housing price relates to and the proportions of different housing conditions. 2. First preprocess the data, then build a model to predict the housing price and feed it parameters to make predictions. Note: the data comes from a CSDN download, 上海链家二手房.csv; because of file-reading issues it was renamed sh.csv. Part 1: import the data and do some simple preprocessing.
# import the packages we need
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from IPython.display import display

sns.set_style({'font.sans-serif': ['simhei', 'Arial']})
%matplotlib inline

shanghai = pd.read_csv('sh.csv')  # load the existing data
shanghai.head(n=1)                # show the first row to check that the import succeeded
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Every column has dtype object, which is inconvenient to work with, so for some columns we need to strip the units and convert to int or float. Some columns are redundant, such as house_img, and need to be dropped. Some columns, such as house_desc, contain several pieces of information that need to be extracted and processed separately.
shanghai.describe()

# check for missing values
shanghai.info()
# np.isnan(shanghai).any()

shanghai.dropna(inplace=True)  # data cleaning: drop rows containing NaN values

df = shanghai.copy()
house_desc = df['house_desc']
house_desc[0]
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
house_desc contains the layout (rooms and halls), the floor area, the floor, and the orientation; each needs to be extracted into its own column. The extraction is done below.
df['layout'] = df['house_desc'].map(lambda x: x.split('|')[0])
df['area'] = df['house_desc'].map(lambda x: x.split('|')[1])
df['temp'] = df['house_desc'].map(lambda x: x.split('|')[2])
# df['Dirextion'] = df['house_desc'].map(lambda x: x.split('|')[3])
df['floor'] = df['temp'].map(lambda x: x.split('/')[0])

df.head(n=1)
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Some columns contain units, which is inconvenient for later processing; remove the units and convert the data types to float or int.
df['area'] = df['area'].apply(lambda x: x.rstrip('平'))
df['singel_price'] = df['singel_price'].apply(lambda x: x.rstrip('元/平'))
df['singel_price'] = df['singel_price'].apply(lambda x: x.lstrip('单价'))
df['district'] = df['district'].apply(lambda x: x.rstrip('二手房'))
df['house_time'] = df['house_time'].apply(lambda x: str(x))
df['house_time'] = df['house_time'].apply(lambda x: x.rstrip('年建'))

df.head(n=1)
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Drop some columns we do not need, as well as house_desc and temp.
del df['house_img']
del df['s_cate_href']
del df['house_desc']
del df['zone_href']
del df['house_href']
del df['temp']
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Compute the price per square meter from the total price and the floor area. Extract keywords from house_title, the listing description: if it contains 交通便利 (convenient transport) or 地铁 (subway), treat the listing as having convenient transport; otherwise, inconvenient transport.
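The cell below only converts types and extracts the transport keyword. As a hedged sketch of the per-square-meter computation described above (not part of the original notebook), one could derive it directly; this assumes `house_price` is the total price and `area` is in square meters, and the unit scale (e.g. 万元 vs 元) may need adjusting.

```python
# Hypothetical sketch: price per square meter from total price and area.
# Multiply by 10000 first if house_price is recorded in units of 万元.
df['price_per_sqm'] = df['house_price'].astype(float) / df['area'].astype(float)
df[['house_price', 'area', 'price_per_sqm']].head()
```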
df.head(n=1)

df['singel_price'] = df['singel_price'].apply(lambda x: float(x))
df['area'] = df['area'].apply(lambda x: float(x))
df.head(n=1)

df.head(n=1)
df['house_title'] = df['house_title'].apply(lambda x: str(x))
df['trafic'] = df['house_title'].apply(
    lambda x: '交通便利' if x.find("交通便利") >= 0 or x.find("地铁") >= 0 else "交通不便")
df.head(n=1)
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Part 2: use the information in each column to visualize the relationship between the housing price and different factors such as district, floor area and floor.
df_house_count = df.groupby('district')['house_price'].count().sort_values(ascending=False).to_frame().reset_index()
df_house_mean = df.groupby('district')['singel_price'].mean().sort_values(ascending=False).to_frame().reset_index()

f, [ax1, ax2, ax3] = plt.subplots(3, 1, figsize=(20, 15))

sns.barplot(x='district', y='singel_price', palette="Reds_d", data=df_house_mean, ax=ax1)
ax1.set_title('上海各大区二手房每平米单价对比', fontsize=15)
ax1.set_xlabel('区域')
ax1.set_ylabel('每平米单价')

sns.countplot(df['district'], ax=ax2)

sns.boxplot(x='district', y='house_price', data=df, ax=ax3)
ax3.set_title('上海各大区二手房房屋总价', fontsize=15)
ax3.set_xlabel('区域')
ax3.set_ylabel('房屋总价')

plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The three plots above show how the unit price, the number of listings, and the total price relate to the district. The first plot shows that the unit price depends on the district, with Huangpu and Jing'an the most expensive; this relates to each district's level of development, transport convenience, and distance from the city center. The second plot directly shows the number of second-hand listings per district, with Pudong having the most. The third plot shows that Shanghai second-hand home prices are mostly around ten million yuan, with few above twenty million.
f, [ax1, ax2] = plt.subplots(1, 2, figsize=(15, 5))

# distribution of floor areas of second-hand homes
sns.distplot(df['area'], bins=20, ax=ax1, color='r')
sns.kdeplot(df['area'], shade=True, ax=ax1)

# relationship between floor area and price
sns.regplot(x='area', y='house_price', data=df, ax=ax2)
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The first (left) plot shows that most second-hand homes are between 60 and 200 square meters, with around 100 square meters being the most common. The second plot shows that the total price is roughly proportional to the floor area, which matches common sense.
areas = [len(df[df.area < 100]),
         len(df[(df.area > 100) & (df.area < 200)]),
         len(df[df.area > 200])]
labels = ['area<100', '100<area<200', 'area>200']

plt.pie(areas, labels=labels, autopct='%0f%%', shadow=True)
plt.show()  # draw a pie chart
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Splitting the floor area into three ranges (above 200, below 100, and between 100 and 200) and looking at their shares, we find that 69% of homes are under 100 square meters, only 25% are between 100 and 200, and only 4% are above 200.
df.loc[df['area'] > 1000]  # inspect samples with area > 1000; only one such sample exists

f, ax1 = plt.subplots(figsize=(20, 20))
sns.countplot(y='layout', data=df, ax=ax1)
ax1.set_title('房屋户型', fontsize=15)
ax1.set_xlabel('数量')
ax1.set_ylabel('户型')

f, ax2 = plt.subplots(figsize=(20, 20))
sns.barplot(y='layout', x='house_price', data=df, ax=ax2)
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The two plots above show the number and price of each layout. The first shows that two-bedroom one-living-room homes are the most common, with two-bedroom two-living-room and three-bedroom two-living-room also frequent; these are the mainstream layouts. The second shows that the price rises with the number of bedrooms and living rooms, but the ratio between them should be appropriate.
a1 = 0
a2 = 0
for x in df['trafic']:
    if x == '交通便利':
        a1 = a1 + 1
    else:
        a2 = a2 + 1

sizes = [a1, a2]
labels = ['交通便利', '交通不便']
plt.pie(sizes, labels=labels, autopct='%0f%%', shadow=True)
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The plot above shows the transport convenience of Shanghai second-hand homes: 61% are labelled as having inconvenient transport and 38% as convenient. Since the convenience label is extracted only from the listing description, the actual share of convenient-transport homes is probably higher.
f, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 10))

sns.countplot(df['trafic'], ax=ax1)
ax1.set_title('交通是否便利数量对比', fontsize=15)
ax1.set_xlabel('交通是否便利')
ax1.set_ylabel('数量')

sns.barplot(x='trafic', y='house_price', data=df, ax=ax2)
ax2.set_title('交通是否便利房价对比', fontsize=15)
ax2.set_xlabel('交通是否便利')
ax2.set_ylabel('总价')
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The left plot shows the number of listings with convenient and inconvenient transport, consistent with the pie chart above. The right plot shows the relationship between transport convenience and price: homes with convenient transport are more expensive.
f, ax1 = plt.subplots(figsize=(20, 5))
sns.countplot(x='floor', data=df, ax=ax1)
ax1.set_title('楼层', fontsize=15)
ax1.set_xlabel('楼层数')
ax1.set_ylabel('数量')

f, ax2 = plt.subplots(figsize=(20, 5))
sns.barplot(x='floor', y='house_price', data=df, ax=ax2)
ax2.set_title('楼层', fontsize=15)
ax2.set_xlabel('楼层数')
ax2.set_ylabel('总价')
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
The floor category (low, middle, high, basement) versus the number of listings and the price: high, middle and low floors are the most common. Part 3: build a simple Shanghai second-hand housing price prediction model from the existing data. Do some more simple preprocessing: split the layout column into bedroom (室) and living-room (厅) counts.
df[['室', '厅']] = df['layout'].str.extract(r'(\d+)室(\d+)厅')
df['室'] = df['室'].astype(float)
df['厅'] = df['厅'].astype(float)
del df['layout']

df.head()
df.dropna(inplace=True)
df.info()
df.columns
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Drop information we will not use, such as the textual description of the home.
del df['house_title']
del df['house_detail']
del df['s_cate']

from sklearn.linear_model import LinearRegression

linear = LinearRegression()

area = df['area']
price = df['house_price']
area = np.array(area).reshape(-1, 1)    # newer versions of sklearn require 2-D arrays for fitting
price = np.array(price).reshape(-1, 1)

# train the model
model = linear.fit(area, price)
# print the intercept and the regression coefficient
print(model.intercept_, model.coef_)

linear_p = model.predict(area)
plt.figure(figsize=(12, 6))
plt.scatter(area, price)
plt.plot(area, linear_p, 'red')
plt.xlabel("area")
plt.ylabel("price")
plt.show()
_____no_output_____
MIT
topic1_introduction_of_Python/Part1_Python_Introduction.ipynb
AbnerCui/Python_Lectures
Exercise: electric cars
# Imports:
import requests
from bs4 import BeautifulSoup
import re
import time

headers = {'user-agent': 'scrapingCourseBot'}

# Retrieve the number of electric Renaults built between 2010 and 2014 and
# print the tag containing the number of occasions:
pars = {'bmin': 2010, 'bmax': 2014}
r2 = requests.get('https://www.gaspedaal.nl/renault/elektrisch?', params=pars, headers=headers)

# The number of occasions is in the HTML as:
# <h1 class="listing__header__total_found">xxxxx occasions gevonden</h1>
#match = re.search(r'<h1 class="listing__header__total_found">.*</h1>', r2.text)
# or alternatively:
match = re.search(r'[\d\.]* occasions gevonden', r2.text)
if match:
    print(match.group())
else:
    print('not found')

### Retrieve the number of electric Renaults for every year between 2010 and 2019:
for year in range(2010, 2020):
    pars = {'bmin': year, 'bmax': year}
    r3 = requests.get('https://www.gaspedaal.nl/renault/elektrisch?', params=pars, headers=headers)
    match = re.search(r'[\d\.]* occasions gevonden', r3.text)
    if match:
        print(str(year) + ": " + match.group())
    else:
        print('not found')
    time.sleep(1)
_____no_output_____
CC-BY-4.0
20200907/Answers/Electrical_Cars.ipynb
SNStatComp/CBSAcademyBD
Concise Chit Chat

GitHub Repository: Code

Code TODO:
1. Create a DataLoader class for dataset preprocessing. (Use tf.data.Dataset inside?)
1. Create a PyPI package to easily load the Cornell Movie corpus dataset(?)
1. Use the PyPI module `embeddings` to load `GLOVES`, or use tfhub to load `GLOVES`?
1. How to do a `clip_norm` (or set `clip_value`) in Keras with Eager mode but without `tf.contrib`?
1. Better names for variables & functions
1. Code cleanup
1. Encapsulate all layers into Model classes:
   1. ChitChatEncoder
   1. ChitChatDecoder
   1. ChitChatModel
1. Re-style to follow the book
1. ...?

Book TODO:
1. Outlines
1. What's seq2seq
1. What's word embedding
1. Split code into snippets
1. Write for snippets
1. Content cleaning and optimizing
1. ...?

Other:
1. `keras.callbacks.TensorBoard` instead of `tf.contrib.summary`? - `model.fit(callbacks=[TensorBoard(...)])`
1. download url? - http://old.pep.com.cn/gzsx/jszx_1/czsxtbjxzy/qrzptgjzxjc/dzkb/dscl/

config.py
'''doc'''

# GO for start of the sentence
# DONE for end of the sentence
GO = '\b'
DONE = '\a'

# max words per sentence
MAX_LEN = 20
_____no_output_____
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
data_loader.py
''' data loader ''' import gzip import re from typing import ( # Any, List, Tuple, ) import tensorflow as tf import numpy as np # from .config import ( # GO, # DONE, # MAX_LEN, # ) DATASET_URL = 'https://github.com/huan/concise-chit-chat/releases/download/v0.0.1/dataset.txt.gz' DATASET_FILE_NAME = 'concise-chit-chat-dataset.txt.gz' class DataLoader(): '''data loader''' def __init__(self) -> None: print('DataLoader', 'downloading dataset from:', DATASET_URL) dataset_file = tf.keras.utils.get_file( DATASET_FILE_NAME, origin=DATASET_URL, ) print('DataLoader', 'loading dataset from:', dataset_file) # dataset_file = './data/dataset.txt.gz' # with open(path, encoding='iso-8859-1') as f: with gzip.open(dataset_file, 'rt') as f: self.raw_text = f.read().lower() self.queries, self.responses \ = self.__parse_raw_text(self.raw_text) self.size = len(self.queries) def get_batch( self, batch_size=32, ) -> Tuple[List[List[str]], List[List[str]]]: '''get batch''' # print('corpus_list', self.corpus) batch_indices = np.random.choice( len(self.queries), size=batch_size, ) batch_queries = self.queries[batch_indices] batch_responses = self.responses[batch_indices] return batch_queries, batch_responses def __parse_raw_text( self, raw_text: str ) -> Tuple[List[List[str]], List[List[str]]]: '''doc''' query_list = [] response_list = [] for line in raw_text.strip('\n').split('\n'): query, response = line.split('\t') query, response = self.preprocess(query), self.preprocess(response) query_list.append('{} {} {}'.format(GO, query, DONE)) response_list.append('{} {} {}'.format(GO, response, DONE)) return np.array(query_list), np.array(response_list) def preprocess(self, text: str) -> str: '''doc''' new_text = text new_text = re.sub('[^a-zA-Z0-9 .,?!]', ' ', new_text) new_text = re.sub(' +', ' ', new_text) new_text = re.sub( '([\w]+)([,;.?!#&-\'\"-]+)([\w]+)?', r'\1 \2 \3', new_text, ) if len(new_text.split()) > MAX_LEN: new_text = (' ').join(new_text.split()[:MAX_LEN]) match = re.search('[.?!]', new_text) if match is not None: idx = match.start() new_text = new_text[:idx+1] new_text = new_text.strip().lower() return new_text
_____no_output_____
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
vocabulary.py
'''doc'''

import re
from typing import (
    List,
)

import tensorflow as tf

# from .config import (
#     DONE,
#     GO,
#     MAX_LEN,
# )


class Vocabulary:
    '''voc'''
    def __init__(self, text: str) -> None:
        self.tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
        self.tokenizer.fit_on_texts(
            [GO, DONE] + re.split(
                r'[\s\t\n]',
                text,
            )
        )
        # additional 1 for the index 0
        self.size = 1 + len(self.tokenizer.word_index.keys())

    def texts_to_padded_sequences(
            self,
            text_list: List[List[str]]
    ) -> tf.Tensor:
        '''doc'''
        sequence_list = self.tokenizer.texts_to_sequences(text_list)
        padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(
            sequence_list,
            maxlen=MAX_LEN,
            padding='post',
            truncating='post',
        )
        return padded_sequences

    def padded_sequences_to_texts(self, sequence: List[int]) -> str:
        return 'tbw'
_____no_output_____
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
model.py
'''doc''' import tensorflow as tf import numpy as np from typing import ( List, ) # from .vocabulary import Vocabulary # from .config import ( # DONE, # GO, # MAX_LENGTH, # ) EMBEDDING_DIM = 300 LATENT_UNIT_NUM = 500 class ChitEncoder(tf.keras.Model): '''encoder''' def __init__( self, ) -> None: super().__init__() self.lstm_encoder = tf.keras.layers.CuDNNLSTM( units=LATENT_UNIT_NUM, return_state=True, ) def call( self, inputs: tf.Tensor, # shape: [batch_size, max_len, embedding_dim] training=None, mask=None, ) -> tf.Tensor: _, *state = self.lstm_encoder(inputs) return state # shape: ([latent_unit_num], [latent_unit_num]) class ChatDecoder(tf.keras.Model): '''decoder''' def __init__( self, voc_size: int, ) -> None: super().__init__() self.lstm_decoder = tf.keras.layers.CuDNNLSTM( units=LATENT_UNIT_NUM, return_sequences=True, return_state=True, ) self.dense = tf.keras.layers.Dense( units=voc_size, ) self.time_distributed_dense = tf.keras.layers.TimeDistributed( self.dense ) self.initial_state = None def set_state(self, state=None): '''doc''' # import pdb; pdb.set_trace() self.initial_state = state def call( self, inputs: tf.Tensor, # shape: [batch_size, None, embedding_dim] training=False, mask=None, ) -> tf.Tensor: '''chat decoder call''' # batch_size = tf.shape(inputs)[0] # max_len = tf.shape(inputs)[0] # outputs = tf.zeros(shape=( # batch_size, # batch_size # max_len, # max time step # LATENT_UNIT_NUM, # dimention of hidden state # )) # import pdb; pdb.set_trace() outputs, *states = self.lstm_decoder(inputs, initial_state=self.initial_state) self.initial_state = states outputs = self.time_distributed_dense(outputs) return outputs class ChitChat(tf.keras.Model): '''doc''' def __init__( self, vocabulary: Vocabulary, ) -> None: super().__init__() self.word_index = vocabulary.tokenizer.word_index self.index_word = vocabulary.tokenizer.index_word self.voc_size = vocabulary.size # [batch_size, max_len] -> [batch_size, max_len, voc_size] self.embedding = tf.keras.layers.Embedding( input_dim=self.voc_size, output_dim=EMBEDDING_DIM, mask_zero=True, ) self.encoder = ChitEncoder() # shape: [batch_size, state] self.decoder = ChatDecoder(self.voc_size) # shape: [batch_size, max_len, voc_size] def call( self, inputs: List[List[int]], # shape: [batch_size, max_len] teacher_forcing_targets: List[List[int]]=None, # shape: [batch_size, max_len] training=None, mask=None, ) -> tf.Tensor: # shape: [batch_size, max_len, embedding_dim] '''call''' batch_size = tf.shape(inputs)[0] inputs_embedding = self.embedding(tf.convert_to_tensor(inputs)) state = self.encoder(inputs_embedding) self.decoder.set_state(state) if training: teacher_forcing_targets = tf.convert_to_tensor(teacher_forcing_targets) teacher_forcing_embeddings = self.embedding(teacher_forcing_targets) # outputs[:, 0, :].assign([self.__go_embedding()] * batch_size) batch_go_embedding = tf.ones([batch_size, 1, 1]) * [self.__go_embedding()] batch_go_one_hot = tf.ones([batch_size, 1, 1]) * [tf.one_hot(self.word_index[GO], self.voc_size)] outputs = batch_go_one_hot output = self.decoder(batch_go_embedding) for t in range(1, MAX_LEN): outputs = tf.concat([outputs, output], 1) if training: target = teacher_forcing_embeddings[:, t, :] decoder_input = tf.expand_dims(target, axis=1) else: decoder_input = self.__indice_to_embedding(tf.argmax(output)) output = self.decoder(decoder_input) return outputs def predict(self, inputs: List[int], temperature=1.) 
-> List[int]: '''doc''' outputs = self([inputs]) outputs = tf.squeeze(outputs) word_list = [] for t in range(1, MAX_LEN): output = outputs[t] indice = self.__logit_to_indice(output, temperature=temperature) word = self.index_word[indice] if indice == self.word_index[DONE]: break word_list.append(word) return ' '.join(word_list) def __go_embedding(self) -> tf.Tensor: return self.embedding( tf.convert_to_tensor(self.word_index[GO])) def __logit_to_indice( self, inputs, temperature=1., ) -> int: ''' [vocabulary_size] convert one hot encoding to indice with temperature ''' inputs = tf.squeeze(inputs) prob = tf.nn.softmax(inputs / temperature).numpy() indice = np.random.choice(self.voc_size, p=prob) return indice def __indice_to_embedding(self, indice: int) -> tf.Tensor: tensor = tf.convert_to_tensor([[indice]]) return self.embedding(tensor)
_____no_output_____
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
Train

TensorBoard: [Quick guide to run TensorBoard in Google Colab](https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/)

`tensorboard` vs `tensorboard/` ?
LOG_DIR = '/content/data/tensorboard/' get_ipython().system_raw( 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &' .format(LOG_DIR) ) # Install ! npm install -g localtunnel # Tunnel port 6006 (TensorBoard assumed running) get_ipython().system_raw('lt --port 6006 >> url.txt 2>&1 &') # Get url ! cat url.txt '''train''' import tensorflow as tf # from chit_chat import ( # ChitChat, # DataLoader, # Vocabulary, # ) tf.enable_eager_execution() data_loader = DataLoader() vocabulary = Vocabulary(data_loader.raw_text) chitchat = ChitChat(vocabulary=vocabulary) def loss(model, x, y) -> tf.Tensor: '''doc''' weights = tf.cast( tf.not_equal(y, 0), tf.float32, ) prediction = model( inputs=x, teacher_forcing_targets=y, training=True, ) # implment the following contrib function in a loop ? # https://stackoverflow.com/a/41135778/1123955 # https://stackoverflow.com/q/48025004/1123955 return tf.contrib.seq2seq.sequence_loss( prediction, tf.convert_to_tensor(y), weights, ) def grad(model, inputs, targets): '''doc''' with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return tape.gradient(loss_value, model.variables) def train() -> int: '''doc''' learning_rate = 1e-3 num_batches = 8000 batch_size = 128 print('Dataset size: {}, Vocabulary size: {}'.format( data_loader.size, vocabulary.size, )) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) root = tf.train.Checkpoint( optimizer=optimizer, model=chitchat, optimizer_step=tf.train.get_or_create_global_step(), ) root.restore(tf.train.latest_checkpoint('./data/save')) print('checkpoint restored.') writer = tf.contrib.summary.create_file_writer('./data/tensorboard') writer.set_as_default() global_step = tf.train.get_or_create_global_step() for batch_index in range(num_batches): global_step.assign_add(1) queries, responses = data_loader.get_batch(batch_size) encoder_inputs = vocabulary.texts_to_padded_sequences(queries) decoder_outputs = vocabulary.texts_to_padded_sequences(responses) grads = grad(chitchat, encoder_inputs, decoder_outputs) optimizer.apply_gradients( grads_and_vars=zip(grads, chitchat.variables) ) if batch_index % 10 == 0: print("batch %d: loss %f" % (batch_index, loss( chitchat, encoder_inputs, decoder_outputs).numpy())) root.save('./data/save/model.ckpt') print('checkpoint saved.') with tf.contrib.summary.record_summaries_every_n_global_steps(1): # your model code goes here tf.contrib.summary.scalar('loss', loss( chitchat, encoder_inputs, decoder_outputs).numpy()) # print('summary had been written.') return 0 def main() -> int: '''doc''' return train() main() #! rm -fvr data/tensorboard # ! pwd # ! rm -frv data/save # ! rm -fr /content/data/tensorboard # ! kill 2823 # ! kill -9 2823 # ! ps axf | grep lt ! cat url.txt
your url is: https://bright-fox-51.localtunnel.me
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
chat.py
'''train''' # import tensorflow as tf # from chit_chat import ( # ChitChat, # DataLoader, # Vocabulary, # DONE, # GO, # ) # tf.enable_eager_execution() def main() -> int: '''chat main''' data_loader = DataLoader() vocabulary = Vocabulary(data_loader.raw_text) print('Dataset size: {}, Vocabulary size: {}'.format( data_loader.size, vocabulary.size, )) chitchat = ChitChat(vocabulary) checkpoint = tf.train.Checkpoint(model=chitchat) checkpoint.restore(tf.train.latest_checkpoint('./data/save')) print('checkpoint restored.') return cli(chitchat, vocabulary=vocabulary, data_loader=data_loader) def cli(chitchat: ChitChat, data_loader: DataLoader, vocabulary: Vocabulary): '''command line interface''' index_word = vocabulary.tokenizer.index_word word_index = vocabulary.tokenizer.word_index query = '' while True: try: # Get input sentence query = input('> ').lower() # Check if it is quit case if query == 'q' or query == 'quit': break # Normalize sentence query = data_loader.preprocess(query) query = '{} {} {}'.format(GO, query, DONE) # Evaluate sentence query_sequence = vocabulary.texts_to_padded_sequences([query])[0] response_sequence = chitchat.predict(query_sequence, 1) # Format and print response sentence response_word_list = [ index_word[indice] for indice in response_sequence if indice != 0 and indice != word_index[DONE] ] print('Bot:', ' '.join(response_word_list)) except KeyError: print("Error: Encountered unknown word.") main() ! cat /proc/cpuinfo
processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU @ 2.30GHz stepping : 0 microcode : 0x1 cpu MHz : 2299.998 cache size : 46080 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf bogomips : 4599.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU @ 2.30GHz stepping : 0 microcode : 0x1 cpu MHz : 2299.998 cache size : 46080 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf bogomips : 4599.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management:
Apache-2.0
Concise_Chit_Chat.ipynb
huan/python-concise-chit-chat
Scraping general characteristics. General team characteristics are obtained from the whoscored page. The statistics are from the most recent season. ![alt text](generales.png)
from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException from selenium.webdriver.support.ui import Select option = webdriver.ChromeOptions() option.add_argument(" — incognito") browser = webdriver.Chrome(executable_path='./chromedriver', chrome_options=option) team_links = [ 'https://es.whoscored.com/Teams/326/Archive/Rusia-Russia', 'https://es.whoscored.com/Teams/967/Archive/Uruguay-Uruguay', 'https://es.whoscored.com/Teams/340/Archive/Portugal-Portugal', 'https://es.whoscored.com/Teams/338/Archive/Espa%C3%B1a-Spain', 'https://es.whoscored.com/Teams/1293/Archive/Ir%C3%A1n-Iran', 'https://es.whoscored.com/Teams/341/Archive/Francia-France', 'https://es.whoscored.com/Teams/328/Archive/Australia-Australia', 'https://es.whoscored.com/Teams/416/Archive/Per%C3%BA-Peru', 'https://es.whoscored.com/Teams/425/Archive/Dinamarca-Denmark', 'https://es.whoscored.com/Teams/346/Archive/Argentina-Argentina', 'https://es.whoscored.com/Teams/770/Archive/Islandia-Iceland', 'https://es.whoscored.com/Teams/337/Archive/Croacia-Croatia', 'https://es.whoscored.com/Teams/977/Archive/Nigeria-Nigeria', 'https://es.whoscored.com/Teams/409/Archive/Brasil-Brazil', 'https://es.whoscored.com/Teams/423/Archive/Suiza-Switzerland', 'https://es.whoscored.com/Teams/970/Archive/Costa-Rica-Costa-Rica', 'https://es.whoscored.com/Teams/336/Archive/Alemania-Germany', 'https://es.whoscored.com/Teams/972/Archive/M%C3%A9xico-Mexico', 'https://es.whoscored.com/Teams/344/Archive/Suecia-Sweden', 'https://es.whoscored.com/Teams/1159/Archive/Corea-Del-sur-South-Korea', 'https://es.whoscored.com/Teams/339/Archive/B%C3%A9lgica-Belgium', 'https://es.whoscored.com/Teams/959/Archive/T%C3%BAnez-Tunisia', 'https://es.whoscored.com/Teams/345/Archive/Inglaterra-England', 'https://es.whoscored.com/Teams/342/Archive/Polonia-Poland', 'https://es.whoscored.com/Teams/957/Archive/Senegal-Senegal', 'https://es.whoscored.com/Teams/408/Archive/Colombia-Colombia', 'https://es.whoscored.com/Teams/986/Archive/Japan-Japan' ] def wait_browser(browser, load_xpath, timeout=20): try: WebDriverWait(browser, timeout).until( EC.visibility_of_element_located( (By.XPATH, load_xpath))) except TimeoutException: print("Timed out waiting for page to load") browser.quit() import pandas as pd data = [] for team_link in team_links: team_data = {} browser.get(team_link) wait_browser(browser, '//a[@class="team-link"]') team_data['Equipo'] = browser.find_element_by_xpath('//a[@class="team-link"]').text sidebox = browser.find_element_by_xpath('//div[@class="team-profile-side-box"]') team_data['rating'] = sidebox.find_element_by_xpath('//div[@class="rating"]').text stats_container = browser.find_element_by_xpath('//div[@class="stats-container"]') labels = stats_container.find_elements_by_tag_name('dt') values = stats_container.find_elements_by_tag_name('dd') for l, v in zip(labels, values): team_data[l.text] = v.text data.append(team_data) df = pd.DataFrame(data) df.head() df df.to_csv("características_equipos.csv", index=False)
_____no_output_____
MIT
CaracteristicasPorEquipo.ipynb
Howl24/Lineup-Prediction
Processing the team data
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("características_equipos.csv")
df['rating'] = df['rating'].apply(lambda x: x.replace(',', '.')).astype(float)

x = range(len(df['rating']))
df = df.sort_values('rating')
plt.barh(x, df['rating'])
plt.yticks(x, df['Equipo'])
_____no_output_____
MIT
CaracteristicasPorEquipo.ipynb
Howl24/Lineup-Prediction
Database Load
# Dependencies and Setup
import json
import os
import pandas as pd
import urllib.request
import requests

# from config import db_pwd, db_user
from config import URI
from sqlalchemy import create_engine
_____no_output_____
MIT
TEST/Database/Database - Project 3.ipynb
LoriWard/COVID-Trends-Website
Load Data into DataFrame
csv_file = "data/Merged_data.csv"
google_data_df = pd.read_csv(csv_file)
google_data_df.head()

google_df = google_data_df[["states", "dates", "SMA_retail_recreation", "SMA_grocery_pharmacy",
                            "SMA_parks", "SMA_transit", "SMA_workplaces", "SMA_residential",
                            "case_count", "new_case_count", "revenue_all", "revenue_ss60", "deaths"]]
google_df.head()

google_df = google_df.rename(columns={"SMA_retail_recreation": 'sma_retail_recreation',
                                      "SMA_grocery_pharmacy": 'sma_grocery_pharmacy',
                                      "SMA_parks": 'sma_parks',
                                      "SMA_transit": 'sma_transit',
                                      "SMA_workplaces": 'sma_workplaces',
                                      "SMA_residential": 'sma_residential'})
google_df

# Lisa Connection to the database
rds_connection_string = f"{db_user}:{{db_pwd}}@localhost:5432/mobility_db"
engine = create_engine(f'postgresql://{rds_connection_string}')

# Stojancho Connection to the database
# rds_connection_string = f"postgres:{pcode}@localhost:5432/mobility2_db"
engine = create_engine(f'{URI}')
_____no_output_____
MIT
TEST/Database/Database - Project 3.ipynb
LoriWard/COVID-Trends-Website
Check for tables
engine.table_names()
_____no_output_____
MIT
TEST/Database/Database - Project 3.ipynb
LoriWard/COVID-Trends-Website
Use pandas to load csv converted DataFrame into database
google_df.to_sql(name='merged_data', con=engine, if_exists='append', index=False)
_____no_output_____
MIT
TEST/Database/Database - Project 3.ipynb
LoriWard/COVID-Trends-Website
Confirm data has been added by querying the tables
pd.read_sql_query('select * from merged_data', con=engine).head(10)
pd.read_sql_query('select sma_workplaces from merged_data', con=engine).head(10)
_____no_output_____
MIT
TEST/Database/Database - Project 3.ipynb
LoriWard/COVID-Trends-Website
_*Quantum K-Means algorithm*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.

*** Contributors: Shan Jin, Xi He, Xiaokai Hou, Li Sun, Dingding Wen, Shaojun Wu and Xiaoting Wang$^{1}$ 1. Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China, 610051 ***

Introduction. A clustering algorithm is a typical unsupervised learning algorithm, mainly used to automatically group similar samples into one category. In a clustering algorithm, samples are divided into different categories according to the similarity between them; different similarity measures yield different clustering results. The most commonly used similarity measure is the Euclidean distance.

What we want to show is the quantum K-Means algorithm. K-Means is a distance-based clustering algorithm that uses distance as the evaluation index for similarity: the closer two objects are, the greater their similarity. The algorithm considers a cluster to be composed of objects that are close together, so compact and well-separated clusters are the ultimate target.

Experiment design. The implementation of the quantum K-Means algorithm mainly uses the swap test to compare the distances among the input data points. Select K points randomly from N data points as centroids, measure the distance from each point to each centroid and assign it to the class of the nearest centroid, recalculate the centroid of each class, and iterate steps 2–3 until the centroids change by no more than the specified threshold, at which point the algorithm ends. In our example, we selected 6 data points and 2 centroids, and used the swap test circuit to calculate the distance. Finally, we obtained two clusters of data points.

$|0\rangle$ is an auxiliary qubit; the left $H$ gate transforms it to $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Then, controlled by $|1\rangle$, the circuit swaps the two vectors $|x\rangle$ and $|y\rangle$. Finally, we get the result at the right end of the circuit: $$|0_{anc}\rangle |x\rangle |y\rangle \rightarrow \frac{1}{2}|0_{anc}\rangle(|xy\rangle + |yx\rangle) + \frac{1}{2}|1_{anc}\rangle(|xy\rangle - |yx\rangle)$$ If we measure the auxiliary qubit alone, the probability of finding it in the state $|1\rangle$ is: $$P(|1_{anc}\rangle) = \frac{1}{2} - \frac{1}{2}|\langle x | y \rangle|^2$$ The Euclidean distance (for normalized vectors) can then be estimated as: $$Euclidean \ distance = \sqrt{(2 - 2|\langle x | y \rangle|)}$$ So the probability of measuring $|1\rangle$ is positively correlated with the Euclidean distance. The schematic diagram of quantum K-Means is shown in the picture below. [[1]](cite) To make our algorithm runnable with Qiskit, we designed a more detailed circuit to implement it. (Quantum K-Means circuit diagram)

Data points:

| point num | theta | phi | lam | x | y |
|---|---|---|---|---|---|
| 1 | 0.01 | pi | pi | 0.710633 | 0.703562 |
| 2 | 0.02 | pi | pi | 0.714142 | 0.7 |
| 3 | 0.03 | pi | pi | 0.717633 | 0.696421 |
| 4 | 0.04 | pi | pi | 0.721107 | 0.692824 |
| 5 | 0.05 | pi | pi | 0.724562 | 0.689216 |
| 6 | 1.31 | pi | pi | 0.886811 | 0.462132 |
| 7 | 1.32 | pi | pi | 0.889111 | 0.457692 |
| 8 | 1.33 | pi | pi | 0.891388 | 0.453241 |
| 9 | 1.34 | pi | pi | 0.893643 | 0.448779 |
| 10 | 1.35 | pi | pi | 0.895876 | 0.444305 |

Quantum K-Means algorithm program
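As a small illustration of the formulas above, here is a hedged sketch of how the measured counts could be turned into a distance estimate. The helper below is not part of the original notebook, and the bit-ordering assumption (the ancilla is classical bit 2, i.e. third from the right in Qiskit's count strings) is mine.

```python
from math import sqrt

def counts_to_distance(counts, shots=1024):
    """Hypothetical helper: estimate the Euclidean distance between two
    normalized vectors from swap-test measurement counts on the ancilla.

    Uses P(|1>) = 1/2 - 1/2 * |<x|y>|^2, so |<x|y>| = sqrt(1 - 2 * P(|1>))
    and distance = sqrt(2 - 2 * |<x|y>|).
    """
    # count keys are 5-bit strings; assume the ancilla is bit 2 (k[-3])
    p1 = sum(v for k, v in counts.items() if k[-3] == '1') / shots
    overlap = sqrt(max(0.0, 1.0 - 2.0 * p1))
    return sqrt(max(0.0, 2.0 - 2.0 * overlap))
```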
# import math lib
from math import pi

# import Qiskit
from qiskit import Aer, IBMQ, execute
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister

# import basic plot tools
from qiskit.tools.visualization import plot_histogram

# To use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
_____no_output_____
Apache-2.0
community/awards/teach_me_qiskit_2018/quantum_machine_learning/1_K_Means/Quantum K-Means Algorithm.ipynb
Chibikuri/qiskit-tutorials
In this section, we first check the Python version and import the qiskit and math packages used by the following code. We run our algorithm on the ibm_qasm_simulator; if you need to run it on a real quantum computer, please uncomment the "import Qconfig" line.
theta_list = [0.01, 0.02, 0.03, 0.04, 0.05, 1.31, 1.32, 1.33, 1.34, 1.35]
_____no_output_____
Apache-2.0
community/awards/teach_me_qiskit_2018/quantum_machine_learning/1_K_Means/Quantum K-Means Algorithm.ipynb
Chibikuri/qiskit-tutorials
Here we take the number pi from the math library, because we need it for the u3 gate. We also define a list of theta parameters that we will pass to the u3 gate. As above, if you want to run on a real quantum computer, please uncomment the corresponding lines and configure your local Qconfig.py file.
# create Quantum Register called "qr" with 5 qubits
qr = QuantumRegister(5, name="qr")
# create Classical Register called "cr" with 5 bits
cr = ClassicalRegister(5, name="cr")
# Creating Quantum Circuit called "qc" involving your Quantum Register "qr"
# and your Classical Register "cr"
qc = QuantumCircuit(qr, cr, name="k_means")

# Define a loop to compute the distance between each pair of points
for i in range(9):
    for j in range(1, 10 - i):
        # Set the parameter theta for the two points
        theta_1 = theta_list[i]
        theta_2 = theta_list[i + j]

        # Build the swap-test circuit with qiskit
        qc.h(qr[2])
        qc.h(qr[1])
        qc.h(qr[4])
        qc.u3(theta_1, pi, pi, qr[1])
        qc.u3(theta_2, pi, pi, qr[4])
        qc.cswap(qr[2], qr[1], qr[4])
        qc.h(qr[2])

        qc.measure(qr[2], cr[2])
        qc.reset(qr)

        job = execute(qc, backend=backend, shots=1024)
        result = job.result()
        print(result)
        print('theta_1:' + str(theta_1))
        print('theta_2:' + str(theta_2))
        # print(result.get_data(qc))
        plot_histogram(result.get_counts())
COMPLETED theta_1:0.01 theta_2:0.02
Apache-2.0
community/awards/teach_me_qiskit_2018/quantum_machine_learning/1_K_Means/Quantum K-Means Algorithm.ipynb
Chibikuri/qiskit-tutorials
Seminar for Lecture 13 "VAE Vocoder"

In the lectures, we studied various approaches to creating vocoders. The problem of sound generation is solved by deep generative models. We've discussed autoregressive models that can be reduced to **MAF**. We've considered the reverse analogue of MAF – **IAF**. We've seen how **normalizing flows** can help us directly optimize likelihood without using autoregression. And we've also considered a vocoder built with the **GAN** paradigm.

At this seminar we will try to apply another popular generative model: the **variational autoencoder (VAE)**. We will try to build an encoder-decoder architecture with **MAF** as the encoder and **IAF** as the decoder. We will train this network by maximizing the ELBO with a couple of additional losses (in vocoders, you can't do without them yet 🤷‍♂️).

⚠️ In this seminar, by **"MAF"** we mean not the generative model discussed in the lecture, but a network whose architecture is like MAF's and which accepts audio as input. So we won't model the data distribution with our **"MAF"**.
# ! pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
# ! pip install numpy==1.17.5 matplotlib==3.3.3 tqdm==4.54.0

import torch
from torch import nn
from torch.nn import functional as F

from typing import Union
from math import log, pi, sqrt

from IPython.display import display, Audio
import numpy as np
import librosa

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

device = torch.device("cpu")
if False and torch.cuda.is_available():
    print('GPU found! 🎉')
    device = torch.device("cuda")
_____no_output_____
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
Introduce auxiliary modules: 1. causal convolution – a simple convolution with `kernel_size` and `dilation` hyper-parameters, but working in a causal way (it does not look into the future); 2. residual block – the main building component of the WaveNet architecture. Yes, WaveNet is everywhere. We can build MAF and IAF with any architecture, but WaveNet has established itself as a simple yet powerful architecture. We will use a WaveNet conditioned on mel spectrograms, because we are building a vocoder.
class CausalConv(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, dilation=1): super(CausalConv, self).__init__() self.padding = dilation * (kernel_size - 1) self.conv = nn.Conv1d( in_channels, out_channels, kernel_size, padding=self.padding, dilation=dilation) self.conv = nn.utils.weight_norm(self.conv) nn.init.kaiming_normal_(self.conv.weight) def forward(self, x): x = self.conv(x) x = x[:, :, :-self.padding] return x class ResBlock(nn.Module): def __init__(self, in_channels, out_channels, skip_channels, kernel_size, dilation, cin_channels): super(ResBlock, self).__init__() self.cin_channels = cin_channels self.filter_conv = CausalConv(in_channels, out_channels, kernel_size, dilation) self.gate_conv = CausalConv(in_channels, out_channels, kernel_size, dilation) self.res_conv = nn.Conv1d(out_channels, in_channels, kernel_size=1) self.skip_conv = nn.Conv1d(out_channels, skip_channels, kernel_size=1) self.res_conv = nn.utils.weight_norm(self.res_conv) self.skip_conv = nn.utils.weight_norm(self.skip_conv) nn.init.kaiming_normal_(self.res_conv.weight) nn.init.kaiming_normal_(self.skip_conv.weight) self.filter_conv_c = nn.Conv1d(cin_channels, out_channels, kernel_size=1) self.gate_conv_c = nn.Conv1d(cin_channels, out_channels, kernel_size=1) self.filter_conv_c = nn.utils.weight_norm(self.filter_conv_c) self.gate_conv_c = nn.utils.weight_norm(self.gate_conv_c) nn.init.kaiming_normal_(self.filter_conv_c.weight) nn.init.kaiming_normal_(self.gate_conv_c.weight) def forward(self, x, c=None): h_filter = self.filter_conv(x) h_gate = self.gate_conv(x) h_filter += self.filter_conv_c(c) h_gate += self.gate_conv_c(c) out = torch.tanh(h_filter) * torch.sigmoid(h_gate) res = self.res_conv(out) skip = self.skip_conv(out) return (x + res) * sqrt(0.5), skip
_____no_output_____
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
For WaveNet it doesn't matter what it is used for: MAF or IAF - it all depends on our interpretation of the input and output variables.Below is the WaveNet architecture that you are already familiar with from the last seminar. But this time, you will need to implement not inference but forward pass - and it's very simple 😉.
class WaveNet(nn.Module): def __init__(self, params): super(WaveNet, self). __init__() self.front_conv = nn.Sequential( CausalConv(1, params.residual_channels, params.front_kernel_size), nn.ReLU()) self.res_blocks = nn.ModuleList() for b in range(params.num_blocks): for n in range(params.num_layers): self.res_blocks.append(ResBlock( in_channels=params.residual_channels, out_channels=params.gate_channels, skip_channels=params.skip_channels, kernel_size=params.kernel_size, dilation=2 ** n, cin_channels=params.mel_channels)) self.final_conv = nn.Sequential( nn.ReLU(), nn.Conv1d(params.skip_channels, params.skip_channels, kernel_size=1), nn.ReLU(), nn.Conv1d(params.skip_channels, params.out_channels, kernel_size=1)) def forward(self, x, c): # x: input tensor with signal or noise [B, 1, T] # c: local conditioning [B, C_mel, T] out = 0 ################################################################################ x = self.front_conv(x) for b in range(len(self.res_blocks)): x, x_skip = self.res_blocks[b](x, c) out = out + x_skip out = self.final_conv(out) ################################################################################ return out # check that works and gives expected output size # full correctness we will check later, when the whole network will be assembled class Params: mel_channels: int = 80 num_blocks: int = 4 num_layers: int = 6 out_channels: int = 3 front_kernel_size: int = 2 residual_channels: int = 64 gate_channels: int = 64 skip_channels: int = 128 kernel_size: int = 2 net = WaveNet(Params()).to(device).eval() with torch.no_grad(): z = torch.FloatTensor(5, 1, 4096).normal_().to(device) c = torch.FloatTensor(5, 80, 4096).zero_().to(device) assert list(net(z, c).size()) == [5, 3, 4096]
_____no_output_____
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
Excellent 👍! Now we are ready to get started on more complex and interesting things.Do you remember our talks about vocoders built on IAF (Parallel WaveNet or ClariNet Vocoder)? We casually said that IAF we use not just one WaveNet (predicting mu and sigma), but a stack of WaveNets. Actually, let's implement this stack, but first, a few formulas that will help you.Consider transformations of random variable $z^{(0)} \sim \mathcal{N}(0, I)$: $$z^{(0)} \rightarrow z^{(1)} \rightarrow \dots \rightarrow z^{(n)}.$$Each transformation has the form: $$ z^{(k)} = f^{(k)}(z^{(k-1)}) = z{(k-1)} \cdot \sigma^{(k)} + \mu^{(k)},$$ where $\mu^{(k)}_t = \mu(z_{<t}^{(k-1)}; \theta_k)$ and $\sigma^{(k)}_t = \sigma(z_{<t}^{(k-1)}; \theta_k)$ – are shifting and scaling variables modeled by a Gaussan WaveNet. It is easy to deduce that the whole transformation $f^{(k)} \circ \dots \circ f^{(2)} \circ f^{(1)}$ can be represented as $f^{(\mathrm{total})}(z) = z \cdot \sigma^{(\mathrm{total})} + \mu^{(\mathrm{total})}$, where$$\sigma^{(\mathrm{total})} = \prod_{k=1}^n \sigma^{(k)}, ~ ~ ~ \mu^{(\mathrm{total})} = \sum_{k=1}^n \mu^{(k)} \prod_{j > k}^n \sigma^{(j)} $$$\mu^{(\mathrm{total})}$ and $\sigma^{(\mathrm{total})}$ we will need in the future for $p(\hat x | z) estimation$.You need to **implement** `forward` method of `WaveNetFlows` model.📝 Notes: 1. WaveNet outputs tensor `output` of size `[B, 2, T]`, where `output[:, 0, :]` is $\mu$ and `output[:, 1, :]` is $\log \sigma$. We model logarithms of $\sigma$ insead of $\sigma$ for stable gradients. 2. As we model $\mu(z_{<t}^{(k-1)}; \theta_k)$ and $\sigma(z_{<t}^{(k-1)}; \theta_k)$ – their output we have length `T - 1`. To keep constant length `T` of modelled noise variable we need to pad it on the left side (with zero).3. $\mu^{(\mathrm{total})}$ and $\sigma^{(\mathrm{total})}$ wil have length `T - 1`, because we do not pad distribution parameters.
class WaveNetFlows(nn.Module):
    def __init__(self, params):
        super(WaveNetFlows, self).__init__()
        self.device = params.device

        self.iafs = nn.ModuleList()
        for i in range(params.num_flows):
            self.iafs.append(WaveNet(params))

    def forward(self, z, c):
        # z: random sample from standard distribution [B, 1, T]
        # c: local conditioning for WaveNet [B, C_mel, T]
        mu_tot, logs_tot = 0., 0.
        ################################################################################
        mus, log_sigmas = [], []
        for iaf in self.iafs:
            out_i = iaf(z, c)
            # per-flow shift and log-scale are one sample shorter, so pad them on the left with zero
            mu = out_i[:, 0, :-1].unsqueeze(1)
            mu_padded = torch.cat(
                [torch.zeros((*(z.shape[:-1]), 1), dtype=torch.float32).to(self.device), mu], dim=-1)
            mus.append(mu)
            log_sigma = out_i[:, 1, :-1].unsqueeze(1)
            log_sigma_padded = torch.cat(
                [torch.zeros((*(z.shape[:-1]), 1), dtype=torch.float32).to(self.device), log_sigma], dim=-1)
            log_sigmas.append(log_sigma)
            # apply the flow: z_k = z_{k-1} * sigma_k + mu_k
            z = torch.exp(log_sigma_padded) * z + mu_padded

        # log sigma_total = sum_k log sigma_k
        logs_tot = torch.sum(torch.stack(log_sigmas, dim=0), dim=0)
        # mu_total = sum_k mu_k * prod_{j>k} sigma_j
        for i in range(len(self.iafs) - 1):
            mu_tot += mus[i] * torch.exp(torch.sum(torch.stack(log_sigmas[i + 1:], dim=0), dim=0))
        mu_tot += mus[-1]
        ################################################################################
        return z, mu_tot, logs_tot


class Params:
    num_flows: int = 4
    mel_channels: int = 80
    num_blocks: int = 1
    num_layers: int = 5
    out_channels: int = 2
    front_kernel_size: int = 2
    residual_channels: int = 64
    gate_channels: int = 64
    skip_channels: int = 64
    kernel_size: int = 3
    device = device

net = WaveNetFlows(Params()).to(device)

with torch.no_grad():
    z = torch.FloatTensor(3, 1, 4096).normal_().to(device)
    c = torch.FloatTensor(3, 80, 4096).zero_().to(device)
    z_hat, mu, log_sigma = net(z, c)
    assert list(z_hat.size()) == [3, 1, 4096]          # same length as input
    assert list(mu.size()) == [3, 1, 4096 - 1]         # shorter by one sample
    assert list(log_sigma.size()) == [3, 1, 4096 - 1]  # shorter by one sample
_____no_output_____
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
If you are not familiar with the VAE framework, please try to figure it out; for example, familiarize yourself with this [blog post](https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/).

In short, a VAE is a "modification" of the AutoEncoder, which consists of an encoder and a decoder. A VAE allows you to sample from the data distribution $p(x)$ as $p(x|z)$ via its decoder, where $p(z)$ is simple and known, e.g. $\mathcal{N}(0, I)$. The interesting part is that $p_{\theta}(x | z)$ cannot be fitted directly with Maximum Likelihood Estimation, because the marginal likelihood $p(x) = \int p_{\theta}(x | z) p(z) \, dz$ is not tractable. But we can maximize the Evidence Lower Bound (ELBO), which has the form:
$$\max_{\phi, \theta} \mathbb{E}_{q_{\phi}(z | x)} \log p_{\theta}(x | z) - \mathbb{D}_{KL}(q_{\phi}(z | x) || p(z))$$
where $p_{\theta}(x | z)$ is the VAE decoder and $q_{\phi}(z | x)$ is the VAE encoder. For more details please read the mentioned blog post or any other materials on this theme.

In our case $q_{\phi}(z | x)$ is represented by a MAF WaveNet, and $p_{\theta}(x | z)$ by an IAF built with a WaveNet stack. To be more precise, our decoder $p_{\theta}(x | z)$ is parametrised by the **one-step-ahead prediction** from an IAF.

🧑‍💻 **Let's practice...**

We will start from the easy part: generation (or sampling). **Implement** the `generate` method, which accepts a mel spectrogram as the conditioning tensor. Inside this method a random tensor from the standard distribution $\mathcal{N}(0, I)$ is sampled. This tensor is then transformed into a tensor from the audio distribution via the `decoder`. In the cell below you will see code for loading a pretrained model and a mel spectrogram. Listen to the result – it should sound passable, but MOS 5.0 is not expected. 😄
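To make the ELBO concrete before diving into WaveNets, here is a tiny sketch (an illustration on toy diagonal Gaussians, not part of the assignment) of how the two terms are usually estimated with the reparameterization trick:

# Sketch: one-sample ELBO estimate for a toy diagonal-Gaussian encoder/decoder (assumed example).
import math
import torch

x = torch.randn(4, 16)                                         # a batch of "data"
mu_q, log_sigma_q = torch.zeros_like(x), torch.zeros_like(x)   # pretend encoder outputs for q(z|x)

eps = torch.randn_like(x)
z = mu_q + torch.exp(log_sigma_q) * eps                        # reparameterization: differentiable sample from q(z|x)

# pretend decoder p(x|z) = N(x; z, 1): log-likelihood term
log_px_z = -0.5 * ((x - z) ** 2 + math.log(2 * math.pi)).sum(dim=1)

# closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians
kl = 0.5 * (torch.exp(2 * log_sigma_q) + mu_q ** 2 - 1 - 2 * log_sigma_q).sum(dim=1)

elbo = (log_px_z - kl).mean()
print(elbo)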
class WaveNetVAE(nn.Module):
    def __init__(self, encoder_params, decoder_params):
        super(WaveNetVAE, self).__init__()
        assert encoder_params.device == decoder_params.device
        self.device = encoder_params.device
        self.mse_loss = torch.nn.MSELoss()

        self.encoder = WaveNet(encoder_params)
        self.decoder = WaveNetFlows(decoder_params)
        self.log_eps = nn.Parameter(torch.zeros(1))

        self.upsample_conv = nn.ModuleList()
        for s in [16, 16]:
            conv = nn.ConvTranspose2d(1, 1, (3, 2 * s), padding=(1, s // 2), stride=(1, s))
            conv = nn.utils.weight_norm(conv)
            nn.init.kaiming_normal_(conv.weight)
            self.upsample_conv.append(conv)
            self.upsample_conv.append(nn.LeakyReLU(0.4))

    def forward(self, x, c):
        # x: audio signal [B, 1, T]
        # c: mel spectrogram [B, C_mel, T / HOP_SIZE]
        loss_rec = 0
        loss_kl = 0
        loss_frame_rec = 0
        loss_frame_prior = 0
        ################################################################################
        c_up = self.upsample(c)

        # encoder (MAF WaveNet) predicts mu and log sigma used to whiten the input x
        mu_log_sigma = self.encoder(x, c_up)
        mu = mu_log_sigma[:, 0, :].unsqueeze(1)
        log_sigma = mu_log_sigma[:, 1, :].unsqueeze(1)
        mu_whitened = (x - mu) / torch.exp(log_sigma)

        # sample z from the posterior and decode it back with the IAF stack
        eps = torch.randn_like(log_sigma).to(self.device)
        z = mu_whitened + torch.exp(log_sigma / 2) * eps
        x_rec, mu_tot, log_sigma_tot = self.decoder(z, c_up)
        x_prior = self.generate(c)

        # STFT-domain losses for the reconstruction and for a sample from the prior
        x_rec_stft = librosa.stft(x_rec.view(-1).detach().cpu().numpy())
        x_stft = librosa.stft(x.view(-1).detach().cpu().numpy())
        x_prior_stft = librosa.stft(x_prior.view(-1).detach().cpu().numpy())
        loss_frame_rec = self.mse_loss(torch.FloatTensor(x_stft), torch.FloatTensor(x_rec_stft))
        loss_frame_prior = self.mse_loss(torch.FloatTensor(x_stft), torch.FloatTensor(x_prior_stft))

        # KL divergence between the whitened posterior and the standard normal prior
        loss_kl = torch.sum(-self.log_eps + (1 / 2.0) * (torch.exp(self.log_eps) ** 2.0 - 1.0 + mu_whitened ** 2.0))
        loss_rec = torch.sum(torch.normal(mu_tot, torch.exp(log_sigma_tot)))
        ################################################################################
        alpha = 1e-9  # for annealing during training
        return loss_rec + alpha * loss_kl + loss_frame_rec + loss_frame_prior

    def generate(self, c):
        # c: mel spectrogram [B, 80, L] where L - number of mel frames
        # outputs: audio [B, 1, L * HOP_SIZE]
        ################################################################################
        c_up = self.upsample(c)
        frames_number = c_up.shape[-1]
        z = torch.randn(c_up.shape[0], 1, frames_number).to(self.device)
        x_sample, _, _ = self.decoder(z, c_up)
        ################################################################################
        return x_sample

    def upsample(self, c):
        c = c.unsqueeze(1)  # [B, 1, C, L]
        for f in self.upsample_conv:
            c = f(c)
        c = c.squeeze(1)  # [B, C, T], where T = L * HOP_SIZE
        return c


# saved checkpoint model has the following architecture parameters
class ParamsMAF:
    mel_channels: int = 80
    num_blocks: int = 2
    num_layers: int = 10
    out_channels: int = 2
    front_kernel_size: int = 32
    residual_channels: int = 128
    gate_channels: int = 256
    skip_channels: int = 128
    kernel_size: int = 2
    device: str = device

class ParamsIAF:
    num_flows: int = 6
    mel_channels: int = 80
    num_blocks: int = 1
    num_layers: int = 10
    out_channels: int = 2
    front_kernel_size: int = 32
    residual_channels: int = 64
    gate_channels: int = 128
    skip_channels: int = 64
    kernel_size: int = 3
    device: str = device


# load checkpoint
ckpt_path = 'data/checkpoint.pth'
net = WaveNetVAE(ParamsMAF(), ParamsIAF()).eval().to(device)
ckpt = torch.load(ckpt_path, map_location='cpu')
net.load_state_dict(ckpt['state_dict'])

# load original audio and its mel
x = torch.load('data/x.pth').to(device)
c = torch.load('data/c.pth').to(device)

# generate audio from the mel spectrogram
with torch.no_grad():
    x_prior = net.generate(c.unsqueeze(0)).squeeze()

display(Audio(x_prior.cpu(), rate=22050))
_____no_output_____
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
If it sounds plausible, **5 points** 🥉 are already yours 🎉! And here comes the most interesting and difficult part: the loss function implementation. The `forward` method will return the loss. But let's talk more precisely about our architecture and how it was trained.

The encoder of our model $q_{\phi}(z|x)$ is parameterized by a Gaussian autoregressive WaveNet, which maps the audio $x$ into a latent representation $z$ of the same length. Specifically, the Gaussian WaveNet (if we talk about a **real MAF**) models $x_t$ given the previous samples $x_{<t}$ as $x_t \sim \mathcal{N}(\mu(x_{<t}; \phi), \sigma(x_{<t}; \phi))$, where the mean $\mu(x_{<t}; \phi)$ and log-scale $\log \sigma(x_{<t}; \phi)$ are predicted by the WaveNet.

Our **encoder** posterior is constructed as
$$q_{\phi}(z | x) = \prod_{t} q_{\phi}(z_t | x_{\leq t})$$
where
$$q_{\phi}(z_t | x_{\leq t}) = \mathcal{N}(\frac{x_t - \mu(x_{<t}; \phi)}{\sigma(x_{<t}; \phi)}, \varepsilon)$$
We apply the mean $\mu(x_{<t}; \phi)$ and scale $\sigma(x_{<t}; \phi)$ to whiten the posterior, and introduce a trainable scalar $\varepsilon > 0$ to decouple the global variation, which will make the optimization process easier.

Substituting our model formulas into the $\mathbb{D}_{KL}$ formula gives:
$$\mathbb{D}_{KL}(q_{\phi}(z | x) || p(z)) = \sum_t \log\frac{1}{\varepsilon} + \frac{1}{2}(\varepsilon^2 - 1 + (\frac{x_t - \mu(x_{<t})}{\sigma(x_{<t})})^2)$$
**Implement** the calculation of `loss_kl` in the `forward` method as this KL divergence.

---

The other term in the ELBO formula can be interpreted as a reconstruction loss. It can be evaluated by sampling from $p_{\theta}(x | z)$, where $z$ comes from $q_{\phi}(z | \hat x)$ and $\hat x$ is our ground truth audio. But sampling is not a differentiable operation! 🤔 We can apply the reparameterization trick!

**Implement** the calculation of `loss_rec` in the `forward` method as the reconstruction loss – which is just the log-likelihood of the ground truth sample $x$ under the distribution $p_{\theta}(x | \hat z)$ predicted by the IAF, where $\hat z \sim q_{\phi}(z | \hat x)$.

---

Vocoders without MLE are still not able to train without auxiliary losses. We studied many of them, but the STFT loss is our favourite!

**Implement** the calculation of `loss_frame_rec`, which stands for the MSE loss in the STFT domain between the original audio and its reconstruction.

---

We can go even further and compute the STFT loss with a random sample from $p_\theta(x | z)$. Conditioning on the mel spectrogram allows us to do so.

**Implement** the calculation of `loss_frame_prior`, which stands for the MSE loss in the STFT domain between the original audio and a sample from the prior.
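For intuition, here is a minimal sketch (an assumed illustration, not the notebook's implementation, which computes spectra with `librosa.stft` inside `forward`) of a single-scale STFT-magnitude MSE written directly in PyTorch; the FFT size and hop length are arbitrary choices:

# Sketch: single-scale STFT-magnitude MSE between two waveforms (assumed helper, for illustration only).
import torch

def stft_mse(x, y, n_fft=1024, hop_length=256):
    # x, y: [B, T] waveforms
    window = torch.hann_window(n_fft, device=x.device)
    X = torch.stft(x, n_fft=n_fft, hop_length=hop_length, window=window, return_complex=True)
    Y = torch.stft(y, n_fft=n_fft, hop_length=hop_length, window=window, return_complex=True)
    return torch.mean((X.abs() - Y.abs()) ** 2)

# usage on dummy tensors
a = torch.randn(2, 16384)
b = torch.randn(2, 16384)
print(stft_mse(a, b))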
net = WaveNetVAE(ParamsMAF(), ParamsIAF()).to(device).train()

x = x[:64 * 256]
c = c[:, :64]

net.zero_grad()
loss = net.forward(x.unsqueeze(0).unsqueeze(0), c.unsqueeze(0))
loss.backward()
print(f"Initial loss: {loss.item():.2f}")

ckpt = torch.load(ckpt_path, map_location='cpu')
net.load_state_dict(ckpt['state_dict'])

net.zero_grad()
loss = net.forward(x.unsqueeze(0).unsqueeze(0), c.unsqueeze(0))
loss.backward()
print(f"Optimized loss: {loss.item():.2f}")
Initial loss: 6629.40 Optimized loss: 6.45
MIT
week_13/seminar.ipynb
ishalyminov/shad_speech
Trigger Examples

Triggers allow the user to specify a set of actions that are triggered by the result of a boolean expression. They provide flexibility to adapt what analysis and visualization actions are taken in situ. Triggers leverage Ascent's Query and Expression infrastructure. See Ascent's [Triggers](https://ascent.readthedocs.io/en/latest/Actions/Triggers.html) docs for deeper details on Triggers.
# cleanup any old results
!./cleanup.sh

# ascent + conduit imports
import conduit
import conduit.blueprint
import ascent

import numpy as np

# helpers we use to create tutorial data
from ascent_tutorial_jupyter_utils import img_display_width
from ascent_tutorial_jupyter_utils import tutorial_gyre_example

import matplotlib.pyplot as plt
_____no_output_____
BSD-3-Clause
src/examples/tutorial/ascent_intro/notebooks/08_ascent_trigger_examples.ipynb
goodbadwolf/ascent
Trigger Example 1

Using triggers to render when conditions occur
# Use triggers to render when conditions occur

a = ascent.Ascent()
a.open()

# setup actions
actions = conduit.Node()

# declare a question to ask
add_queries = actions.append()
add_queries["action"] = "add_queries"

# add our entropy query (q1)
queries = add_queries["queries"]
queries["q1/params/expression"] = "entropy(histogram(field('gyre'), num_bins=128))"
queries["q1/params/name"] = "entropy"

# declare triggers
add_triggers = actions.append()
add_triggers["action"] = "add_triggers"
triggers = add_triggers["triggers"]

# add a simple trigger (t1) that fires at cycle 500
triggers["t1/params/condition"] = "cycle() == 500"
triggers["t1/params/actions_file"] = "cycle_trigger_actions.yaml"

# add trigger (t2) that fires when the change in entropy exceeds 0.5
# the history function allows you to access query results of previous
# cycles. relative_index indicates how far back in history to look.
# Looking at the plot of gyre entropy in the previous notebook, we see a jump
# in entropy at cycle 200, so we expect the trigger to fire at cycle 200
triggers["t2/params/condition"] = "entropy - history(entropy, relative_index = 1) > 0.5"
triggers["t2/params/actions_file"] = "entropy_trigger_actions.yaml"

# view our full actions tree
print(actions.to_yaml())

# gyre time varying params
nsteps = 10
time = 0.0
delta_time = 0.5

for step in range(nsteps):
    # call helper that generates a double gyre time varying example mesh.
    # gyre ref: https://shaddenlab.berkeley.edu/uploads/LCS-tutorial/examples.html
    mesh = tutorial_gyre_example(time)

    # update the example cycle
    cycle = 100 + step * 100
    mesh["state/cycle"] = cycle
    print("time: {} cycle: {}".format(time, cycle))

    # publish mesh to ascent
    a.publish(mesh)

    # execute the actions
    a.execute(actions)

    # update time
    time = time + delta_time

# retrieve the info node that contains the trigger and query results
info = conduit.Node()
a.info(info)

# close ascent
a.close()

# we expect our cycle trigger to render only at cycle 500
! ls cycle_trigger*.png

# show the result image from the cycle trigger
ascent.jupyter.AscentImageSequenceViewer(["cycle_trigger_out_500.png"]).show()

# we expect our entropy trigger to render only at cycle 200
! ls entropy_trigger*.png

# show the result image from the entropy trigger
ascent.jupyter.AscentImageSequenceViewer(["entropy_trigger_out_200.png"]).show()

print(info["expressions"].to_yaml())
_____no_output_____
BSD-3-Clause
src/examples/tutorial/ascent_intro/notebooks/08_ascent_trigger_examples.ipynb
goodbadwolf/ascent
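The example above references `cycle_trigger_actions.yaml` and `entropy_trigger_actions.yaml` but never shows their contents. A trigger's `actions_file` is itself an ordinary list of Ascent actions. Below is a rough, hypothetical sketch of how such a file could be built with the same conduit API used above and written out as YAML; the scene/plot keys and the `gyre` field are assumptions for illustration, not taken from this notebook:

# Hypothetical sketch only: build a minimal "render a pseudocolor plot" action list.
# Saving the printed YAML to a file such as cycle_trigger_actions.yaml (assumed name)
# would give the trigger something to execute when its condition fires.
trigger_actions = conduit.Node()
add_scenes = trigger_actions.append()
add_scenes["action"] = "add_scenes"
add_scenes["scenes/s1/plots/p1/type"] = "pseudocolor"
add_scenes["scenes/s1/plots/p1/field"] = "gyre"
add_scenes["scenes/s1/image_prefix"] = "cycle_trigger_out_"

print(trigger_actions.to_yaml())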
These notebooks can be found at https://github.com/jaspajjr/pydata-visualisation if you want to follow along.

https://matplotlib.org/users/intro.html

Matplotlib is a library for making 2D plots of arrays in Python.
* It has its origins in emulating MATLAB, but it can also be used in a Pythonic, object oriented way.
* Easy stuff should be easy, difficult stuff should be possible.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

%matplotlib inline
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
Everything in matplotlib is organized in a hierarchy. At the top of the hierarchy is the matplotlib “state-machine environment” which is provided by the matplotlib.pyplot module. At this level, simple functions are used to add plot elements (lines, images, text, etc.) to the current axes in the current figure.

Pyplot’s state-machine environment behaves similarly to MATLAB and should be most familiar to users with MATLAB experience.

The next level down in the hierarchy is the first level of the object-oriented interface, in which pyplot is used only for a few functions such as figure creation, and the user explicitly creates and keeps track of the figure and axes objects. At this level, the user uses pyplot to create figures, and through those figures, one or more axes objects can be created. These axes objects are then used for most plotting actions.

Scatter Plot

To start with let's do a really basic scatter plot:
plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10])

x = [0, 1, 2, 3, 4, 5]
y = [0, 2, 4, 6, 8, 10]

plt.plot(x, y)
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
What if we don't want a line?
plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10], marker='o', linestyle='')
plt.xlabel('The X Axis')
plt.ylabel('The Y Axis')
plt.show();
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
Simple example from matplotlib: https://matplotlib.org/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py
def example_plot(ax, fontsize=12):
    ax.plot([1, 2])
    ax.locator_params(nbins=5)
    ax.set_xlabel('x-label', fontsize=fontsize)
    ax.set_ylabel('y-label', fontsize=fontsize)
    ax.set_title('Title', fontsize=fontsize)

fig, ax = plt.subplots()
example_plot(ax, fontsize=24)

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
# fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)

ax1.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10])
ax2.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 9, 16, 25])
ax3.plot([0, 1, 2, 3, 4, 5], [0, 13, 18, 21, 23, 25])
ax4.plot([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5])

plt.tight_layout()
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
Date Plotting
import pandas_datareader as pdr

df = pdr.get_data_fred('GS10')
df = df.reset_index()
print(df.info())
df.head()

fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot_date(df['DATE'], df['GS10'])
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
Bar Plot
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)

x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]

ax.bar(x_data, values)
ax.set_xticks(x_data)
ax.set_xticklabels(('A', 'B', 'C', 'D', 'E'));
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
Matplotlib basics

http://pbpython.com/effective-matplotlib.html

Behind the scenes
* matplotlib.backend_bases.FigureCanvas is the area onto which the figure is drawn
* matplotlib.backend_bases.Renderer is the object which knows how to draw on the FigureCanvas
* matplotlib.artist.Artist is the object that knows how to use a renderer to paint onto the canvas

The typical user will spend 95% of their time working with the Artists.

https://matplotlib.org/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py
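To see that hierarchy concretely, here is a small illustrative sketch (not part of the original notebook) that lists the Artists attached to an Axes — everything it returns, from the plotted line to the spines and tick labels, is an Artist:

# Sketch: inspect the Artist hierarchy of a simple plot (illustrative example).
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])

print(type(fig))    # the Figure is itself an Artist that contains the Axes
print(type(line))   # the Line2D we just created is an Artist too
for child in ax.get_children():
    print(child)    # spines, axis objects, the Line2D above, title Text, etc.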
fig, (ax1, ax2) = plt.subplots(
    nrows=1, ncols=2, sharey=True, figsize=(12, 8))
fig.suptitle("Main Title", fontsize=14, fontweight='bold');

x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]

ax1.barh(x_data, values);
ax1.set_xlim([0, 55])
#ax1.set(xlabel='Unit of measurement', ylabel='Groups')
ax1.set(title='Foo', xlabel='Unit of measurement')
ax1.grid()

ax2.barh(x_data, [y / np.sum(values) for y in values], color='r');
ax2.set_title('Transformed', fontweight='light')
ax2.axvline(x=.1, color='k', linestyle='--')
ax2.set(xlabel='Unit of measurement')
# Worth noticing this
ax2.set_axis_off();

fig.savefig('example_plot.png', dpi=80, bbox_inches="tight")
_____no_output_____
MIT
basic-matplotlib-plotting.ipynb
jaspajjr/pydata-visualisation
**[Machine Learning Course Home Page](https://www.kaggle.com/learn/machine-learning)**

---

This exercise will test your ability to read a data file and understand statistics about the data.

In later exercises, you will apply techniques to filter the data, build a machine learning model, and iteratively improve your model.

The course examples use data from Melbourne. To ensure you can apply these techniques on your own, you will have to apply them to a new dataset (with house prices from Iowa).

The exercises use a "notebook" coding environment. In case you are unfamiliar with notebooks, we have a [90-second intro video](https://www.youtube.com/watch?v=4C2qMnaIKL4).

Exercises

Run the following cell to set up code-checking, which will verify your work as you go.
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex2 import *
print("Setup Complete")
_____no_output_____
MIT
exercises/exercise-explore-your-data.ipynb
bobrokerson/kaggle
Step 1: Loading Data

Read the Iowa data file into a Pandas DataFrame called `home_data`.
import pandas as pd

# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'

# Fill in the line below to read the file into a variable home_data
home_data = pd.read_csv(iowa_file_path)

# Call line below with no argument to check that you've loaded the data correctly
step_1.check()
_____no_output_____
MIT
exercises/exercise-explore-your-data.ipynb
bobrokerson/kaggle
Step 2: Review The Data

Use the command you learned to view summary statistics of the data. Then fill in variables to answer the following questions.
# Print summary statistics in next line
#print(home_data.info())
home_data[['YearBuilt', 'YrSold']].describe()

# What is the average lot size (rounded to nearest integer)?
avg_lot_size_1 = home_data.LotArea.mean()
avg_lot_size = round(avg_lot_size_1)
print(avg_lot_size)

# As of today, how old is the newest home (current year - the date in which it was built)
import matplotlib.pyplot as plt

home_data['YrSold'].value_counts().plot(kind='bar');
plt.xlabel("Year Sold", labelpad=14)
plt.ylabel("Count of Houses", labelpad=14)
plt.title("Count of Houses Sold by Year", y=1.02);

newest_home_age = 2022 - home_data['YearBuilt'].max()
print(newest_home_age)

# Checks your answers
step_2.check()
_____no_output_____
MIT
exercises/exercise-explore-your-data.ipynb
bobrokerson/kaggle
Procedures and Functions Tutorial

MLDB is the Machine Learning Database, and all machine learning operations are done via Procedures and Functions. Training a model happens via Procedures, and applying a model happens via Functions. The notebook cells below use `pymldb`'s `Connection` class to make [REST API](../../../../doc/builtin/WorkingWithRest.md.html) calls. You can check out the [Using `pymldb` Tutorial](../../../../doc/nblink.html_tutorials/Using pymldb Tutorial) for more details.
from pymldb import Connection
mldb = Connection("http://localhost")
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
Loading a Dataset

The classic [Iris Flower Dataset](http://en.wikipedia.org/wiki/Iris_flower_data_set) isn't very big but it's well-known and easy to reason about so it's a good example dataset to use for machine learning examples. We can import it directly from a remote URL:
mldb.put('/v1/procedures/import_iris', {
    "type": "import.text",
    "params": {
        "dataFileUrl": "file://mldb/mldb_test_data/iris.data",
        "headers": ["sepal length", "sepal width", "petal length", "petal width", "class"],
        "outputDataset": "iris",
        "runOnCreation": True
    }
})
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
A quick look at the data

We can use the [Query API](../../../../doc/builtin/sql/QueryAPI.md.html) to get the data into a Pandas DataFrame to take a quick look at it.
df = mldb.query("select * from iris")
df.head()

%matplotlib inline
import seaborn as sns, pandas as pd

sns.pairplot(df, hue="class", size=2.5)
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
Unsupervised Machine Learning with a `kmeans.train` Procedure

We will create and run a [Procedure](../../../../doc/builtin/procedures/Procedures.md.html) of type [`kmeans.train`](../../../../doc/builtin/procedures/KmeansProcedure.md.html). This will train an unsupervised K-Means model and use it to assign each row in the input to a cluster, in the output dataset.
mldb.put('/v1/procedures/iris_train_kmeans', {
    'type' : 'kmeans.train',
    'params' : {
        'trainingData' : 'select * EXCLUDING(class) from iris',
        'outputDataset' : 'iris_clusters',
        'numClusters' : 3,
        'metric': 'euclidean',
        "runOnCreation": True
    }
})
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
Now we can look at the output dataset and compare the clusters the model learned with the three types of flower in the dataset.
mldb.query("""
    select pivot(class, num) as *
    from (
        select cluster, class, count(*) as num
        from merge(iris_clusters, iris)
        group by cluster, class
    )
    group by cluster
""")
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
As you can see, the K-means algorithm doesn't do a great job of clustering this data (as is mentioned in the Wikipedia article!).

Supervised Machine Learning with `classifier.train` and `.test` Procedures

We will now create and run a [Procedure](../../../../doc/builtin/procedures/Procedures.md.html) of type [`classifier.train`](../../../../doc/builtin/procedures/Classifier.md.html). The configuration below will use 20% of the data to train a decision tree to classify rows into the three classes of Iris. The output of this procedure is a [Function](../../../../doc/builtin/functions/Functions.md.html), which we will be able to call from REST or SQL.
mldb.put('/v1/procedures/iris_train_classifier', {
    'type' : 'classifier.train',
    'params' : {
        'trainingData' : """
            select
                {* EXCLUDING(class)} as features,
                class as label
            from iris
            where rowHash() % 5 = 0
        """,
        "algorithm": "dt",
        "modelFileUrl": "file://models/iris.cls",
        "mode": "categorical",
        "functionName": "iris_classify",
        "runOnCreation": True
    }
})
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
We can now test the classifier we just trained on the subset of the data we didn't use for training. To do so we use a procedure of type [`classifier.test`](../../../../doc/builtin/procedures/Accuracy.md.html).
rez = mldb.put('/v1/procedures/iris_test_classifier', {
    'type' : 'classifier.test',
    'params' : {
        'testingData' : """
            select
                iris_classify({ features: {* EXCLUDING(class)} }) as score,
                class as label
            from iris
            where rowHash() % 5 != 0
        """,
        "mode": "categorical",
        "runOnCreation": True
    }
})

runResults = rez.json()["status"]["firstRun"]["status"]
print rez
<Response [201]>
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
The procedure returns a confusion matrix, which you can compare with the one that resulted from the K-means procedure.
pd.DataFrame(runResults["confusionMatrix"])\
    .pivot_table(index="actual", columns="predicted", fill_value=0)
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
As you can see, the decision tree does a much better job of classifying the data than the K-means model, using 20% of the examples as training data.The procedure also returns standard classification statistics on how the classifier performed on the test set. Below are performance statistics for each label:
pd.DataFrame.from_dict(runResults["labelStatistics"]).transpose()
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
They are also available, averaged over all labels:
pd.DataFrame.from_dict({"weightedStatistics": runResults["weightedStatistics"]})
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
Scoring new examples

We can call the Function REST API endpoint to classify a never-before-seen set of measurements like this:
mldb.get('/v1/functions/iris_classify/application', input={
    "features": {
        "petal length": 1,
        "petal width": 2,
        "sepal length": 3,
        "sepal width": 4
    }
})
_____no_output_____
Apache-2.0
container_files/tutorials/Procedures and Functions Tutorial.ipynb
mldbai/mldb
Assignment 2 - Elementary Probability and Information Theory
Boise State University NLP - Dr. Kennington

Instructions and Hints:
* This notebook loads some data into a `pandas` dataframe, then does a small amount of preprocessing. Make sure your data can load by stepping through all of the cells up until question 1.
* Most of the questions require you to write some code. In many cases, you will write some kind of probability function like we did in class using the data.
* Some of the questions only require you to write answers, so be sure to change the cell type to markdown or raw text.
* Don't worry about normalizing the text this time (e.g., lowercase, etc.). Just focus on probabilities.
* Most questions can be answered in a single cell, but you can make as many additional cells as you need.
* Follow the instructions on the corresponding assignment Trello card for submitting your assignment.
import pandas as pd

data = pd.read_csv('pnp-train.txt', delimiter='\t', encoding='latin-1',  # utf8 encoding didn't work for this
                   names=['type', 'name'])  # supply the column names for the dataframe

# this next line creates a new column with the lower-cased first word
data['first_word'] = data['name'].map(lambda x: x.lower().split()[0])

data[:10]

data.describe()
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
1. Write a probability function/distribution $P(T)$ over the types.

Hints:
* The Counter library might be useful: `from collections import Counter`
* Write a function `def P(T='')` that returns the probability of the specific value for T
* You can access the types from the dataframe by calling `data['type']`
from collections import Counter

def P(T=''):
    global counts
    global data
    counts = Counter(data['type'])
    return counts[T] / len(data['type'])

counts
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
2. What is `P(T='movie')` ?
P(T='movie')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
3. Show that your probability distribution sums to one.
import numpy as np

round(np.sum([P(T=x) for x in set(data['type'])]), 4)
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
4. Write a joint distribution using the type and the first word of the name

Hints:
* The function is $P2(T,W_1)$
* You will need to count up types AND the first words, for example: ('person', 'bill')
* Using the [itertools.product](https://docs.python.org/2/library/itertools.html#itertools.product) function was useful for me here
def P2(T='', W1=''):
    global count
    count = data[['type', 'first_word']]
    return len(count.loc[(count['type'] == T) & (count['first_word'] == W1)]) / len(count)
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
5. What is P2(T='person', W1='bill')? What about P2(T='movie',W1='the')?
P2(T='person', W1='bill')

P2(T='movie', W1='the')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
6. Show that your probability distribution P(T,W1) sums to one.
types = Counter(data['type'])
words = Counter(data['first_word'])

retVal = 0
for x in types:
    for y in words:
        retVal = retVal + P2(T=x, W1=y)

print(round(retVal, 4))
1.0
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
7. Make a new function Q(T) from marginalizing over P(T,W1) and make sure that Q(T) sums to one.

Hints:
* Your Q function will call P(T,W1)
* Your check for the sum to one should be the same answer as Question 3, only it calls Q instead of P.
def Q(T=''):
    words = Counter(data['first_word'])
    retVal = 0
    for x in words:
        retVal = retVal + P2(T, W1=x)
    return retVal

Q('movie')

round(np.sum([Q(T=x) for x in set(data['type'])]), 4)
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
8. What is the KL Divergence of your Q function and your P function for Question 1?
* Even if you know the answer, you still need to write code that computes it.

Since Q(T) marginalizes P2(T,W1) back down to P(T), the divergence should be 0.0; the code below sums the KL terms over all of the types in the data to confirm it.
import math

# KL(P || Q) summed over all types; expected to be 0.0 since Q marginalizes back to P
sum(P(T=t) * math.log(P(T=t) / Q(T=t)) for t in set(data['type']))
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
9. Convert from P(T,W1) to P(W1|T)

Hints:
* Just write a comment cell, no code this time (try to use markdown math formatting; answer in this cell).
* Note that $P(T,W1) = P(W1,T)$

Given that $P(T,W_1) = P(W_1,T)$, we can infer that $P(W_1|T) = \frac{P(W_1,T)}{P(T)}$.

10. Write a function `Pwt` (that calls the functions you already have) to compute $P(W_1|T)$.
* This will be something like the multiplication rule, but you may need to change something
def Pwt(W1='', T=''):
    return P2(T=T, W1=W1) / P(T=T)
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff
11. What is P(W1='the'|T='movie')?
Pwt(W1='the',T='movie')
_____no_output_____
AFL-3.0
A2/A2-probability-information-theory.ipynb
Colmine/NLP_Stuff