markdown | code | output | license | path | repo_name
---|---|---|---|---|---
For example, suppose you have a single observation from the dataset, `[False, 4, b'goat', 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message: | # This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
To decode the message use the `tf.train.Example.FromString` method. | example_proto = tf.train.Example.FromString(serialized_example)
example_proto | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
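The `serialize_example` helper used above is defined earlier in the full tutorial and is not shown in this excerpt. A minimal sketch of what such a helper might look like, built from the standard `tf.train.Feature` wrappers (the `_bytes_feature`, `_float_feature`, and `_int64_feature` helpers mirror the ones the tutorial defines and that appear again later in this document):

```python
def _bytes_feature(value):
  # Wrap a byte-string in a tf.train.Feature.
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
  # Wrap a float in a tf.train.Feature.
  return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
  # Wrap a bool, enum, or int in a tf.train.Feature.
  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def serialize_example(feature0, feature1, feature2, feature3):
  # Map each value to a tf.train.Feature, wrap them in a Features message,
  # and serialize the resulting tf.train.Example to a byte-string.
  feature = {
      'feature0': _int64_feature(feature0),
      'feature1': _int64_feature(feature1),
      'feature2': _bytes_feature(feature2),
      'feature3': _float_feature(feature3),
  }
  example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
  return example_proto.SerializeToString()
```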
TFRecords format details: A TFRecord file contains a sequence of records and can only be read sequentially. Each record contains a byte-string for the data payload, plus the data length and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length, uint32 masked_crc32_of_length, byte data[length], uint32 masked_crc32_of_data. The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul. Note: There is no requirement to use `tf.Example` in TFRecord files; `tf.Example` is just one method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor`) can all be stored in a TFRecord file. See the `tf.io` module for more options. TFRecord files using `tf.data`: The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file: The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars: | tf.data.Dataset.from_tensor_slices(feature1) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Applied to a tuple of arrays, it returns a dataset of tuples: | features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode, which means it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable: | def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Apply this function to each element in the dataset: | serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
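# An alternative shown below: build the serialized dataset from a Python generator instead of map + tf.py_function.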
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
And write them to a TFRecord file: | filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance. | filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over it returns these as scalar string tensors. Use the `.take` method to only show the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled. | for raw_record in raw_dataset.take(10):
print(repr(raw_record)) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature: | # Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Alternatively, use `tf.io.parse_example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method: | parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
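As a sketch of the batch-wise alternative mentioned above (reusing the `feature_description` defined earlier), the raw records can be batched first and each batch parsed in a single call:

```python
# Parse whole batches of serialized examples at once with tf.io.parse_example.
batched_parsed_dataset = raw_dataset.batch(32).map(
    lambda records: tf.io.parse_example(records, feature_description))
batched_parsed_dataset
```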
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature: | for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record)) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Here, the `tf.io.parse_single_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created: | # Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename} | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`: | filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
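If you prefer the parsed message as a plain Python dictionary instead of printing the whole protocol buffer, a small sketch like the following walks the message's features using the protobuf `WhichOneof` API (operating on the `example` message parsed above):

```python
# Convert the tf.train.Example message into a dict of plain Python lists.
result = {}
for key, feature in example.features.feature.items():
  kind = feature.WhichOneof('kind')  # 'bytes_list', 'float_list', or 'int64_list'
  result[key] = list(getattr(feature, kind).value)
print(result)
```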
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images | cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>')) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image: | image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...') | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`: | # Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file} | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Read the TFRecord file You now have the file, `images.tfrecords`, and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge: | raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
Recover the images from the TFRecord file: | for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw)) | _____no_output_____ | Apache-2.0 | site/en/tutorials/load_data/tfrecord.ipynb | blueyi/docs |
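If you want the recovered image as a tensor rather than raw bytes (for example, to feed it to a model), you could decode it in the same loop; a short sketch:

```python
# Decode the raw JPEG bytes back into an image tensor.
for image_features in parsed_image_dataset:
  image_tensor = tf.io.decode_jpeg(image_features['image_raw'])
  print(image_tensor.shape)
```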
Yahoo Movies crawler exercise: practice scraping movie showtime information. The movie ID, screening area, and show date must be obtained step by step before the final query is sent to the server. | import requests
import re
from bs4 import BeautifulSoup | _____no_output_____ | MIT | .ipynb_checkpoints/Day_014_Yahoo_Movie_HW-checkpoint.ipynb | Ruila/PythonCrwalerMarathon_Day14 |
First, retrieve the ID information for all currently showing movies. | # Check which movies are currently showing and extract their ID information
url = 'https://movies.yahoo.com.tw/'
resp = requests.get(url)
resp.encoding = 'utf-8'
# Parse the HTML and locate the movie drop-down list
soup = BeautifulSoup(resp.text, 'lxml')
html = soup.find("select", attrs={'name':'movie_id'})
movie_item = html.find_all("option", attrs={'data-name':re.compile('.*')})
for p in movie_item:
print("Movie: %s, ID: %s" % (p["data-name"], p["value"])) | Movie: 空中謎航, ID: 11152
Movie: 致命天際線, ID: 11147
Movie: 廢青四重奏, ID: 11130
Movie: 妄想代理人:前篇, ID: 11102
Movie: 水漾的女人, ID: 11065
Movie: 午夜天鵝, ID: 11045
Movie: 拆彈專家2, ID: 10986
Movie: 杏林醫院, ID: 10781
Movie: 靈魂急轉彎, ID: 11089
Movie: 瑰麗卡萊爾:浮華紐約, ID: 11129
Movie: 愛是您・愛是我, ID: 11123
Movie: 來者弒客, ID: 11107
Movie: 真愛鄰距離, ID: 11101
Movie: 坑爹大作戰, ID: 11082
Movie: 生為女人, ID: 10977
Movie: 一家之煮, ID: 10955
Movie: 腿, ID: 10934
Movie: 鬼巢, ID: 11105
Movie: 高校棋蹟, ID: 11099
Movie: 戀愛進行弒, ID: 11080
Movie: 85年的夏天, ID: 11076
Movie: 順其自然的日子, ID: 11066
Movie: 總是有個人在愛你, ID: 11063
Movie: 神力女超人1984, ID: 10413
Movie: 新解釋・三國志, ID: 11050
Movie: 信用詐欺師JP:公主篇, ID: 11021
Movie: 求婚好意外, ID: 10796
Movie: 再見街貓BOB, ID: 11016
Movie: 愛在午夜希臘時, ID: 11054
Movie: 愛在黎明破曉時, ID: 11053
Movie: 愛在日落巴黎時, ID: 11052
Movie: 魔物獵人, ID: 10983
Movie: 親愛的殺手, ID: 10861
Movie: 未來的我們, ID: 11046
Movie: 十二夜2:回到第零天, ID: 11035
Movie: 尋找小魔女Doremi, ID: 11027
Movie: 緝毒風暴, ID: 11023
Movie: 古魯家族:新石代, ID: 10958
Movie: 同學麥娜絲, ID: 10935
Movie: 名偵探柯南:紅之校外旅行 鮮紅篇&戀紅篇, ID: 10887
Movie: 我的媽媽開GAYBAR, ID: 10973
Movie: 愛麗絲與夢幻島, ID: 11018
Movie: 孤味, ID: 10477
Movie: 聖荷西謀殺案, ID: 10990
Movie: 入魔, ID: 10989
Movie: 悄悄告訴她 經典數位修復, ID: 10911
Movie: 鬼滅之刃劇場版 無限列車篇, ID: 10816
Movie: 親愛的房客, ID: 10707
Movie: 地下弒的秘密, ID: 10984
Movie: 藥頭大媽, ID: 10951
Movie: 看不見的目擊者, ID: 10946
Movie: 無價之保, ID: 10959
Movie: 倒數反擊, ID: 10906
Movie: 阿公當家, ID: 10914
Movie: 刻在你心底的名字, ID: 10902
Movie: 急先鋒, ID: 10443
Movie: 殺戮荒村, ID: 10903
Movie: 消失的情人節, ID: 10870
Movie: 中央車站:數位修復版, ID: 10907
Movie: 海霧, ID: 10872
Movie: 花木蘭, ID: 8632
Movie: TENET天能, ID: 10433
Movie: 聖雅各的天空, ID: 10877
Movie: 看不見的證人, ID: 10873
Movie: 可不可以,你也剛好喜歡我, ID: 10473
Movie: 東京教父:4K數位修復版, ID: 10860
Movie: 劇場版 新幹線變形機器人—來自未來的神速ALFA-X, ID: 10823
Movie: 巴亞拉魔幻冒險, ID: 10851
Movie: 怪胎, ID: 10733
Movie: 魔王的女兒, ID: 10730
Movie: 藍色恐懼:數位修復版, ID: 10775
Movie: 角落小夥伴電影版:魔法繪本裡的新朋友, ID: 10647
Movie: 一首搖滾上月球, ID: 4887
Movie: 錢不夠用2, ID: 3026
| MIT | .ipynb_checkpoints/Day_014_Yahoo_Movie_HW-checkpoint.ipynb | Ruila/PythonCrwalerMarathon_Day14 |
Specify the ID of a movie you are interested in, then query its screening-area information. | # Refer to the IDs extracted in the previous step and specify one
movie_id = 10477
url = 'https://movies.yahoo.com.tw/api/v1/areas_by_movie_theater'
payload = {'movie_id':str(movie_id)}
# Simulate a browser request header
headers = {
'authority': 'movies.yahoo.com.tw',
'method': 'GET',
'path': '/api/v1/areas_by_movie_theater?movie_id=' + str(movie_id),
'scheme': 'https',
'accept': 'application/json, text/javascript, */*; q=0.01',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6',
'cookie': 'rxx=9s3x2fws06.1g16irnc&v=1; _ga=GA1.3.2056742944.1551651301; GUC=AQEBAQFczFpdm0IfmwSB&s=AQAAACoo4N5D&g=XMsVBw; BX=4hkdk1decm57t&b=3&s=mr; _ga=GA1.4.2056742944.1551651301; nexagesuid=82843256dd234e8e91aa73f2062f8218; browsed_movie=eyJpdiI6IlJXWWtiSWJaZlNGK2MxQnhscnVUYWc9PSIsInZhbHVlIjoiMXRhMmVHRXRIeUNjc1RBWDJzdGYwbnlIQURmWGsrcjJSMzhkbkcraDNJVUNIZEZsbzU3amlFcVZ1NzlmazJrTGpoMjVrbHk1YmpoRENXaHZTOUw1TmI2ZTZVWHdOejZQZm16RmVuMWlHTTJLaTZLVFZZVkFOMDlTd1wvSGltcytJIiwibWFjIjoiZWQ2ZjA4MmVjZmZlYjlmNjJmYmY2NGMyMDI0Njc0NWViYjVkOWE2NDg0N2RhODMxZjBjZDhiMmJhZTc2MDZhYiJ9; avi=eyJpdiI6Im1NeWFJRlVRWDR1endEcGRGUGJUbVE9PSIsInZhbHVlIjoickRpU3JuUytmcGl6cjF5OW0rNU9iZz09IiwibWFjIjoiY2VmY2NkNzZmM2NhNjY5YzlkOTcyNjE5OGEyMzU0NWYxOTdmMDRkMDY3OWNmMmZjOTMxYjc5MjI5N2Q5NGE5MiJ9; cmp=t=1559391030&j=0; _gid=GA1.4.779543841.1559391031; XSRF-TOKEN=eyJpdiI6IkhpS2hGcDRQaHlmWUJmaHdSS2Q2bHc9PSIsInZhbHVlIjoiOUVoNFk4OHI1UUZmUWRtYXhza0MyWjJSTlhlZ3RnT0VGeVJPN2JuczVRMGRFdWt2OUlsamVKeHRobFwvcHBGM0dhU3VyMXNGTHlsb2dVM2l0U1hpUGxBPT0iLCJtYWMiOiJkZWU4YzJhNjAxMTY3MzE4Y2ExNWIxYmE1ZjE1YWZlZTlhOTcyYjc4M2RlZGY4ZWNjZDYyMTA2NGYwZGViMzc2In0%3D; m_s=eyJpdiI6InpsZHZ2Tk1BZ0dxaHhETml1RjBnUXc9PSIsInZhbHVlIjoiSkNGeHUranRoXC85bDFiaDhySTJqNkJRcWdjWUxjeVRJSHVYZ1wvd2d4bWJZUTUrSHVDM0lUcW5KNHdETFZ4T1lieU81OUhzc1VoUXhZcWk0UDZSQXVFdz09IiwibWFjIjoiYmJkMDJkMDhlODIzMzcyMWY4M2NmYWNjNGVlOWRjMDIwZmVmNzAyMjE3Yzg3ZGY3ODBkZWEzZTI4MTI5ZWNmOSJ9; _gat=1; nexagesd=10',
'dnt': '1',
'mv-authorization': '21835b082e15b91a69b3851eec7b31b82ce82afb',
'referer': 'https://movies.yahoo.com.tw/',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
'x-requested-with': 'XMLHttpRequest',
}
resp = requests.get(url, params=payload, headers=headers)
#print(resp.json()) # if needed, print the raw JSON
# The response is JSON-formatted data, so parse the JSON to extract the fields
for p in resp.json():
print('放映地區: {0}, 代號(area_id): {1}'.format(p['title'], p['area_id'])) | 放映地區: 台北市, 代號(area_id): 28
放映地區: 新北市, 代號(area_id): 8
放映地區: 桃園, 代號(area_id): 16
放映地區: 新竹, 代號(area_id): 20
放映地區: 苗栗, 代號(area_id): 15
放映地區: 台中, 代號(area_id): 2
放映地區: 南投, 代號(area_id): 13
放映地區: 嘉義, 代號(area_id): 21
放映地區: 台南, 代號(area_id): 10
放映地區: 高雄, 代號(area_id): 17
放映地區: 宜蘭, 代號(area_id): 11
放映地區: 花蓮, 代號(area_id): 12
放映地區: 澎湖, 代號(area_id): 23
| MIT | .ipynb_checkpoints/Day_014_Yahoo_Movie_HW-checkpoint.ipynb | Ruila/PythonCrwalerMarathon_Day14 |
Specify the screening area you want to watch in, then query the dates on which the movie is showing. | # Specify the screening area
area_id = 28
# Send the request to the site
url = 'https://movies.yahoo.com.tw/movietime_result.html'
payload = {'movie_id':str(movie_id), 'area_id':str(area_id)}
resp = requests.get(url, params=payload)
resp.encoding = 'utf-8'
soup = BeautifulSoup(resp.text, 'lxml')
movie_date = soup.find_all("label", attrs={'for':re.compile("date_[\d]")})
# Print the available show dates
for date in movie_date:
print("%s %s" % (date.p.string, date.h3.string)) | 一月 1
一月 2
一月 3
一月 4
一月 5
| MIT | .ipynb_checkpoints/Day_014_Yahoo_Movie_HW-checkpoint.ipynb | Ruila/PythonCrwalerMarathon_Day14 |
Finally, specify the viewing date, then query and print the theaters, screening types (digital, 3D, IMAX 3D, ...), and showtimes. | # Choose the date to watch
date = "2019-08-21"
# Send the request to the site to get the theaters and showtime information
url = "https://movies.yahoo.com.tw/ajax/pc/get_schedule_by_movie"
payload = {'movie_id':str(movie_id),
'date':date,
'area_id':str(area_id),
'theater_id':'',
'datetime':'',
'movie_type_id':''}
resp = requests.get(url, params=payload)
#print(resp.json()['view']) # if needed, print the raw JSON
soup = BeautifulSoup(resp.json()['view'], 'lxml')
html = soup.find_all("ul", attrs={'data-theater_name':re.compile(".*")})
'''
Try to extract the theater name, screening type, and showtimes from the theater data returned in the previous step (one possible sketch is shown after the output below).
Your code here.
'''
| ----------------------------------------------------------------------
電影院: 台北美麗華大直影城
放映類型: 數位
2019-08-21 09:00:00
2019-08-21 11:10:00
2019-08-21 13:15:00
2019-08-21 15:20:00
2019-08-21 19:30:00
2019-08-21 21:40:00
2019-08-21 22:30:00
----------------------------------------------------------------------
電影院: 台北新光影城
放映類型: 數位
2019-08-21 10:00:00
2019-08-21 14:50:00
2019-08-21 19:30:00
----------------------------------------------------------------------
電影院: 台北in89豪華數位影城
放映類型: 數位
2019-08-21 09:30:00
2019-08-21 11:20:00
2019-08-21 13:15:00
2019-08-21 15:10:00
2019-08-21 16:10:00
2019-08-21 17:10:00
2019-08-21 18:05:00
2019-08-21 19:10:00
2019-08-21 21:10:00
2019-08-21 23:10:00
2019-08-22 01:10:00
----------------------------------------------------------------------
電影院: 台北日新威秀影城
放映類型: 數位
2019-08-21 09:00:00
2019-08-21 10:55:00
2019-08-21 12:50:00
2019-08-21 14:45:00
2019-08-21 16:40:00
2019-08-21 18:35:00
2019-08-21 20:35:00
----------------------------------------------------------------------
電影院: 喜滿客絕色影城
放映類型: 數位
2019-08-21 10:00:00
2019-08-21 11:55:00
2019-08-21 13:50:00
2019-08-21 15:45:00
2019-08-21 17:40:00
2019-08-21 19:35:00
2019-08-21 21:30:00
----------------------------------------------------------------------
電影院: 台北信義威秀影城
放映類型: 數位
2019-08-21 09:00:00
2019-08-21 11:00:00
2019-08-21 13:00:00
2019-08-21 15:00:00
2019-08-21 17:00:00
2019-08-21 19:00:00
2019-08-21 21:00:00
2019-08-21 23:00:00
----------------------------------------------------------------------
電影院: 喜滿客京華影城
放映類型: 數位
2019-08-21 10:30:00
2019-08-21 12:30:00
2019-08-21 14:30:00
2019-08-21 16:30:00
2019-08-21 18:30:00
2019-08-21 20:30:00
2019-08-21 22:30:00
----------------------------------------------------------------------
電影院: 京站威秀影城
放映類型: 數位
2019-08-21 09:00:00
2019-08-21 11:00:00
2019-08-21 13:00:00
2019-08-21 15:00:00
2019-08-21 17:00:00
2019-08-21 19:00:00
2019-08-21 21:00:00
----------------------------------------------------------------------
電影院: 喜樂時代影城南港店
放映類型: 數位
2019-08-21 10:20:00
2019-08-21 11:10:00
2019-08-21 12:20:00
2019-08-21 13:10:00
2019-08-21 14:20:00
2019-08-21 15:10:00
2019-08-21 16:20:00
2019-08-21 17:10:00
2019-08-21 18:20:00
2019-08-21 19:15:00
2019-08-21 20:20:00
2019-08-21 21:15:00
2019-08-21 22:20:00
| MIT | .ipynb_checkpoints/Day_014_Yahoo_Movie_HW-checkpoint.ipynb | Ruila/PythonCrwalerMarathon_Day14 |
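For reference, the exercise cell above is left blank ("Your code here"). One possible sketch of the extraction loop is shown below. The `data-theater_name` attribute comes from the query already in the code, but the inner tags used for the screening type and showtimes are assumptions about the page markup and may need adjusting against the real HTML:

```python
# Hypothetical sketch: walk each theater <ul> and print its name and the text of its items.
for theater in html:
    print('-' * 70)
    # The theater name is stored in the data-theater_name attribute selected above.
    print('電影院:', theater['data-theater_name'])
    # Assumption: the screening type and showtimes appear as <li> text inside each <ul>.
    for li in theater.find_all('li'):
        text = li.get_text(strip=True)
        if text:
            print(text)
```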
Flopy MODFLOW 6 (MF6) Support The Flopy library contains classes for creating, saving, running, loading, and modifying MF6 simulations. The MF6 portion of the flopy library is located in *flopy.mf6*. While there are a number of classes in flopy.mf6, to get started you only need to use the main classes summarized below: flopy.mf6.MFSimulation * MODFLOW Simulation Class, the entry point into any MODFLOW simulation. flopy.mf6.ModflowGwf * MODFLOW Groundwater Flow Model Class, representing a single model in a simulation. flopy.mf6.Modflow[pc] * MODFLOW package classes, where [pc] is the abbreviation of the package name. Each package is a separate class. For packages that are part of a groundwater flow model, the abbreviation begins with "Gwf". For example, "flopy.mf6.ModflowGwfdis" is the Discretization package. | import os
import sys
from shutil import copyfile
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__)) | 3.8.10 (default, May 19 2021, 11:01:55)
[Clang 10.0.0 ]
numpy version: 1.19.2
matplotlib version: 3.4.2
flopy version: 3.3.4
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Creating a MF6 Simulation A MF6 simulation is created by first creating a simulation object "MFSimulation". When you create the simulation object you can define the simulation's name, version, executable name, workspace path, and the name of the tdis file. All of these are optional parameters, and if not defined each one will default to the following: sim_name='modflowtest', version='mf6', exe_name='mf6.exe', sim_ws='.', sim_tdis_file='modflow6.tdis' | import os
import sys
from shutil import copyfile
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
sim_name = 'example_sim'
sim_path = os.path.join('data', 'example_project')
sim = flopy.mf6.MFSimulation(sim_name=sim_name, version='mf6', exe_name='mf6',
sim_ws=sim_path) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
The next step is to create a tdis package object "ModflowTdis". The first parameter of the ModflowTdis class is a simulation object, which ties a ModflowTdis object to a specific simulation. The other parameters and their definitions can be found in the docstrings. | tdis = flopy.mf6.ModflowTdis(sim, pname='tdis', time_units='DAYS', nper=2,
perioddata=[(1.0, 1, 1.0), (10.0, 5, 1.0)]) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Next one or more models are created using the ModflowGwf class. The first parameter of the ModflowGwf class is the simulation object that the model will be a part of. | model_name = 'example_model'
model = flopy.mf6.ModflowGwf(sim, modelname=model_name,
model_nam_file='{}.nam'.format(model_name)) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Next create one or more Iterative Model Solution (IMS) files. | ims_package = flopy.mf6.ModflowIms(sim, pname='ims', print_option='ALL',
complexity='SIMPLE', outer_hclose=0.00001,
outer_maximum=50, under_relaxation='NONE',
inner_maximum=30, inner_hclose=0.00001,
linear_acceleration='CG',
preconditioner_levels=7,
preconditioner_drop_tolerance=0.01,
number_orthogonalizations=2) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Each ModflowGwf object needs to be associated with an ModflowIms object. This is done by calling the MFSimulation object's "register_ims_package" method. The first parameter in this method is the ModflowIms object and the second parameter is a list of model names (strings) for the models to be associated with the ModflowIms object. | sim.register_ims_package(ims_package, [model_name]) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Next add packages to each model. The first package added needs to be a spatial discretization package, since flopy uses information from the spatial discretization package to help you build other packages. There are three spatial discretization packages to choose from: DIS (ModflowGwfdis) - structured discretization; DISV (ModflowGwfdisv) - discretization with vertices; DISU (ModflowGwfdisu) - unstructured discretization. | dis_package = flopy.mf6.ModflowGwfdis(model, pname='dis', length_units='FEET', nlay=2,
nrow=2, ncol=5, delr=500.0,
delc=500.0,
top=100.0, botm=[50.0, 20.0],
filename='{}.dis'.format(model_name)) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Accessing NamefilesNamefiles are automatically built for you by flopy. However, there are some options contained in the namefiles that you may want to set. To get the namefile object access the name_file attribute in either a simulation or model object to get the simulation or model namefile. | # set the nocheck property in the simulation namefile
sim.name_file.nocheck = True
# set the print_input option in the model namefile
model.name_file.print_input = True | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Specifying OptionsOption that appear alone are assigned a boolean value, like the print_input option above. Options that have additional optional parameters are assigned using a tuple, with the entries containing the names of the optional parameters to turn on. Use a tuple with an empty string to indicate no optional parameters and use a tuple with None to turn the option off. | # Turn Newton option on with under relaxation
model.name_file.newtonoptions = ('UNDER_RELAXATION')
# Turn Newton option on without under relaxation
model.name_file.newtonoptions = ('')
# Turn off Newton option
model.name_file.newtonoptions = (None) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
MFArray Templates Lastly, define all other packages needed. Note that flopy supports a number of ways to specify data for a package. A template, which defines the data array shape for you, can be used to specify the data. Templates are built by calling the empty method of the data type you are building. For example, to build a template for k in the npf package you would call: ModflowGwfnpf.k.empty(). The empty method for "MFArray" data templates (data templates whose size is based on the structure of the model grid) takes up to four parameters: * model - The model object that the data is a part of. A valid model object with a discretization package is required in order to build the proper array dimensions. This parameter is required. * layered - True or False, whether the data is layered or not. * data_storage_type_list - List of data storage types, one for each model layer. If the template is not layered, only one data storage type needs to be specified. There are three data storage types supported: internal_array, internal_constant, and external_file. * default_value - The initial value for the array. | # build a data template for k that stores the first layer as an internal array and the second
# layer as a constant with the default value of k for all layers set to 100.0
layer_storage_types = [flopy.mf6.data.mfdatastorage.DataStorageType.internal_array,
flopy.mf6.data.mfdatastorage.DataStorageType.internal_constant]
k_template = flopy.mf6.ModflowGwfnpf.k.empty(model, True, layer_storage_types, 100.0)
# change the value of the second layer to 50.0
k_template[0]['data'] = [65.0, 60.0, 55.0, 50.0, 45.0, 40.0, 35.0, 30.0, 25.0, 20.0]
k_template[0]['factor'] = 1.5
print(k_template)
# create npf package using the k template to define k
npf_package = flopy.mf6.ModflowGwfnpf(model, pname='npf', save_flows=True, icelltype=1, k=k_template) | [{'factor': 1.5, 'iprn': 1, 'data': [65.0, 60.0, 55.0, 50.0, 45.0, 40.0, 35.0, 30.0, 25.0, 20.0]}, 100.0]
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Specifying MFArray Data MFArray data can also be specified as a numpy array, a list of values, or a single value. Below strt (starting heads) is defined as a single value, 100.0, which is interpreted as an internal constant storage type of value 100.0. Strt could also be defined as a list defining a value for every model cell: strt=[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0] Or as a list defining a value or values for each model layer: strt=[100.0, 90.0] or: strt=[[100.0], [90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0]]. MFArray data can also be stored in an external file by using a dictionary with the key 'filename' to specify the file name relative to the model folder and 'data' to specify the data. The optional 'factor', 'iprn', and 'binary' keys may also be used: strt={'filename': 'strt.txt', 'factor':1.0, 'data':[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0], 'binary': 'True'} If the 'data' key is omitted from the dictionary flopy will try to read the data from an existing file 'filename'. Any relative paths for loading data from a file should be specified relative to the MF6 simulation folder. | strt={'filename': 'strt.txt', 'factor':1.0, 'data':[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0,
90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0, 90.0], 'binary': 'True'}
ic_package = flopy.mf6.ModflowGwfic(model, pname='ic', strt=strt,
filename='{}.ic'.format(model_name))
# move external file data into model folder
icv_data_path = os.path.join('..', 'data', 'mf6', 'notebooks', 'iconvert.txt')
copyfile(icv_data_path, os.path.join(sim_path, 'iconvert.txt'))
# create storage package
sto_package = flopy.mf6.ModflowGwfsto(model, pname='sto', save_flows=True, iconvert={'filename':'iconvert.txt'},
ss=[0.000001, 0.000002],
sy=[0.15, 0.14, 0.13, 0.12, 0.11, 0.11, 0.12, 0.13, 0.14, 0.15,
0.15, 0.14, 0.13, 0.12, 0.11, 0.11, 0.12, 0.13, 0.14, 0.15]) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
MFList Templates Flopy supports specifying record and recarray "MFList" data in a number of ways. Templates can be created that define the shape of the data. The empty method for "MFList" data templates takes up to seven parameters. * model - The model object that the data is a part of. A valid model object with a discretization package is required in order to build the proper array dimensions. This parameter is required. * maxbound - The number of rows in the recarray. If not specified one row is returned. * aux_vars - List of auxiliary variable names. If not specified auxiliary variables are not used. * boundnames - True/False if boundnames is to be used. * nseg - Number of segments (only relevant for a few data types). * timeseries - True/False indicates that time series data will be used. * stress_periods - List of integer stress periods to be used (transient MFList data only). If not specified for transient data, the template will only be defined for stress period 1. MFList transient data templates are numpy recarrays stored in a dictionary with the dictionary key an integer zero based stress period value (stress period - 1). In the code below the well package is set up using a transient MFList template to help build the well's stress_periods. | maxbound = 2
# build a stress_period_data template with 2 wells over stress periods 1 and 2 with boundnames
# and three aux variables
wel_periodrec = flopy.mf6.ModflowGwfwel.stress_period_data.empty(model, maxbound=maxbound, boundnames=True,
aux_vars=['var1', 'var2', 'var3'],
stress_periods=[0,1])
# define the two wells for stress period one
wel_periodrec[0][0] = ((0,1,2), -50.0, -1, -2, -3, 'First Well')
wel_periodrec[0][1] = ((1,1,4), -25.0, 2, 3, 4, 'Second Well')
# define the two wells for stress period two
wel_periodrec[1][0] = ((0,1,2), -200.0, -1, -2, -3, 'First Well')
wel_periodrec[1][1] = ((1,1,4), -4000.0, 2, 3, 4, 'Second Well')
# build the well package
wel_package = flopy.mf6.ModflowGwfwel(model, pname='wel', print_input=True, print_flows=True,
auxiliary=['var1', 'var2', 'var3'], maxbound=maxbound,
stress_period_data=wel_periodrec, boundnames=True, save_flows=True) | /Users/jdhughes/Documents/Development/flopy_git/flopy_fork/flopy/mf6/data/mfdatalist.py:1688: FutureWarning: elementwise == comparison failed and returning scalar instead; this will raise an error or perform elementwise comparison in the future.
if "check" in list_item:
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Cell IDs Cell IDs always appear as tuples in an MFList. For a structured grid, cell IDs appear as: (<layer>, <row>, <column>). For a vertex-based grid, cell IDs appear as: (<layer>, <intralayer_cell_id>). Unstructured grid cell IDs appear as: (<cell_id>). Specifying MFList Data MFList data can also be defined as a list of tuples, with each tuple being a row of the recarray. For transient data the list of tuples can be stored in a dictionary with the dictionary key an integer zero based stress period value. If only a list of tuples is specified for transient data, the data is assumed to apply to stress period 1. Additional stress periods can be added with the add_transient_key method. The code below defines saverecord and printrecord as a list of tuples. | # printrecord data as a list of tuples. since no stress
# period is specified it will default to stress period 1
printrec_tuple_list = [('HEAD', 'ALL'), ('BUDGET', 'ALL')]
# saverecord data as a dictionary of lists of tuples for
# stress periods 1 and 2.
saverec_dict = {0:[('HEAD', 'ALL'), ('BUDGET', 'ALL')],1:[('HEAD', 'ALL'), ('BUDGET', 'ALL')]}
# create oc package
oc_package = flopy.mf6.ModflowGwfoc(model, pname='oc',
budget_filerecord=[('{}.cbc'.format(model_name),)],
head_filerecord=[('{}.hds'.format(model_name),)],
saverecord=saverec_dict,
printrecord=printrec_tuple_list)
# add stress period two to the print record
oc_package.printrecord.add_transient_key(1)
# set the data for stress period two in the print record
oc_package.printrecord.set_data([('HEAD', 'ALL'), ('BUDGET', 'ALL')], 1) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Specifying MFList Data in an External File MFList data can be specified in an external file using a dictionary with the 'filename' key. If the 'data' key is also included in the dictionary and is not None, flopy will create the file with the data contained in the 'data' key. The 'binary' key can be used to save data to a binary file ('binary': True). The code below creates a chd package which creates and references an external file containing data for stress period 1 and stores the data internally in the chd package file for stress period 2. | stress_period_data = {0: {'filename': 'chd_sp1.dat', 'data': [[(0, 0, 0), 70.]]},
1: [[(0, 0, 0), 60.]]}
chd = flopy.mf6.ModflowGwfchd(model, maxbound=1, stress_period_data=stress_period_data) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Packages that Support both List-based and Array-based DataThe recharge and evapotranspiration packages can be specified using list-based or array-based input. The array packages have an "a" on the end of their name:ModflowGwfrch - list based recharge packageModflowGwfrcha - array based recharge packageModflowGwfevt - list based evapotranspiration packageModflowGwfevta - array based evapotranspiration package | rch_recarray = {0:[((0,0,0), 'rch_1'), ((1,1,1), 'rch_2')],
1:[((0,0,0), 'rch_1'), ((1,1,1), 'rch_2')]}
rch_package = flopy.mf6.ModflowGwfrch(model, pname='rch', fixed_cell=True, print_input=True,
maxbound=2, stress_period_data=rch_recarray) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
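For comparison, a minimal sketch of the array-based recharge package is shown below. This is an alternative to the list-based package created above rather than an addition to the same model, and the recharge values are purely illustrative (not taken from the model):

```python
# Array-based recharge (ModflowGwfrcha): a single recharge value applied across the
# top of the model, given per stress period. Illustrative values only.
rcha_package = flopy.mf6.ModflowGwfrcha(model, pname='rcha',
                                        recharge={0: 0.001, 1: 0.002})
```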
Utility Files (TS, TAS, OBS, TAB) Utility files, MF6-formatted files that are referenced by packages, include time series, time array series, observation, and tab files. The file names for utility files are specified using the package that references them. The utility files can be created in several ways. A simple case is demonstrated below. More detail is given in the flopy3_mf6_obs_ts_tas notebook. | # build a time series array for the recharge package
ts_data = [(0.0, 0.015, 0.0017), (1.0, 0.016, 0.0019), (2.0, 0.012, 0.0015),
(3.0, 0.020, 0.0014), (4.0, 0.015, 0.0021), (5.0, 0.013, 0.0012),
(6.0, 0.022, 0.0012), (7.0, 0.016, 0.0014), (8.0, 0.013, 0.0011),
(9.0, 0.021, 0.0011), (10.0, 0.017, 0.0016), (11.0, 0.012, 0.0015)]
rch_package.ts.initialize(time_series_namerecord=['rch_1', 'rch_2'],
timeseries=ts_data, filename='recharge_rates.ts',
interpolation_methodrecord=['stepwise', 'stepwise'])
# build an recharge observation package that outputs the western recharge to a binary file and the eastern
# recharge to a text file
obs_data = {('rch_west.csv', 'binary'): [('rch_1_1_1', 'RCH', (0, 0, 0)),
('rch_1_2_1', 'RCH', (0, 1, 0))],
'rch_east.csv': [('rch_1_1_5', 'RCH', (0, 0, 4)),
('rch_1_2_5', 'RCH', (0, 1, 4))]}
rch_package.obs.initialize(filename='example_model.rch.obs', digits=10,
print_input=True, continuous=obs_data) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Saving and Running a MF6 Simulation Saving and running a simulation are done with the MFSimulation class's write_simulation and run_simulation methods. | # write simulation to new location
sim.write_simulation()
# run simulation
sim.run_simulation() | writing simulation...
writing simulation name file...
writing simulation tdis package...
writing ims package ims...
writing model example_model...
writing model name file...
writing package dis...
writing package npf...
writing package ic...
writing package sto...
writing package wel...
writing package oc...
writing package chd_0...
writing package rch...
writing package ts_0...
writing package obs_0...
FloPy is using the following executable to run the model: /Users/jdhughes/.local/bin/mf6
MODFLOW 6
U.S. GEOLOGICAL SURVEY MODULAR HYDROLOGIC MODEL
VERSION 6.2.2 07/30/2021
MODFLOW 6 compiled Aug 01 2021 12:51:08 with IFORT compiler (ver. 19.10.3)
This software has been approved for release by the U.S. Geological
Survey (USGS). Although the software has been subjected to rigorous
review, the USGS reserves the right to update the software as needed
pursuant to further analysis and review. No warranty, expressed or
implied, is made by the USGS or the U.S. Government as to the
functionality of the software and related material nor shall the
fact of release constitute any such warranty. Furthermore, the
software is released on condition that neither the USGS nor the U.S.
Government shall be held liable for any damages resulting from its
authorized or unauthorized use. Also refer to the USGS Water
Resources Software User Rights Notice for complete use, copyright,
and distribution information.
Run start date and time (yyyy/mm/dd hh:mm:ss): 2021/08/06 16:21:19
Writing simulation list file: mfsim.lst
Using Simulation name file: mfsim.nam
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Exporting a MF6 Model Exporting a MF6 model to a shapefile or netcdf is the same as exporting a MF2005 model. | # make directory
pth = os.path.join('data', 'netCDF_export')
if not os.path.exists(pth):
os.makedirs(pth)
# export the dis package to a netcdf file
model.dis.export(os.path.join(pth, 'dis.nc'))
# export the botm array to a shapefile
model.dis.botm.export(os.path.join(pth, 'botm.shp')) | initialize_geometry::proj4_str = epsg:4326
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Loading an Existing MF6 Simulation Loading a simulation can be done with the flopy.mf6.MFSimulation.load static method. | # load the simulation
loaded_sim = flopy.mf6.MFSimulation.load(sim_name, 'mf6', 'mf6', sim_path) | loading simulation...
loading simulation name file...
loading tdis package...
loading model gwf6...
loading package dis...
loading package npf...
loading package ic...
loading package sto...
loading package wel...
loading package oc...
loading package chd...
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Retrieving Data and Modifying an Existing MF6 Simulation Data can be easily retrieved from a simulation. Data can be retrieved using two methods. One method is to retrieve the data object from a master simulation dictionary that keeps track of all the data. The master simulation dictionary is accessed through a simulation's "simulation_data" property and then the "mfdata" property: sim.simulation_data.mfdata[<data path>]. The data path is the path to the data, stored as a tuple containing the model name, package name, block name, and data name. The second method is to get the data from the package object. If you do not already have the package object, you can work your way down the simulation structure, from the simulation to the correct model, to the correct package, and finally to the data object. These methods are demonstrated in the code below. | # get hydraulic conductivity data object from the data dictionary
hk = sim.simulation_data.mfdata[(model_name, 'npf', 'griddata', 'k')]
# get specific yield data object from the storage package
sy = sto_package.sy
# get the model object from the simulation object using the get_model method,
# which takes a string with the model's name and returns the model object
mdl = sim.get_model(model_name)
# get the package object from the model object using the get_package method,
# which takes a string with the package's name or type
ic = mdl.get_package('ic')
# get the data object from the initial condition package object
strt = ic.strt | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Once you have the appropriate data object there are a number of methods to retrieve data from that object. Data retrieved can either be the data as it appears in the model file or the data with any factor specified in the model file applied to it. To get the raw data without applying a factor use the get_data method. To get the data with the factor already applied use .array. Note that MFArray data is always a copy of the data stored by flopy. Modifying the copy of the flopy data will have no effect on the data stored in flopy. Non-constant internal MFList data is returned as a reference to a numpy recarray. Modifying this recarray will modify the data stored in flopy. | # get the data without applying any factor
hk_data_no_factor = hk.get_data()
print('Data without factor:\n{}\n'.format(hk_data_no_factor))
# get data with factor applied
hk_data_factor = hk.array
print('Data with factor:\n{}\n'.format(hk_data_factor)) | Data without factor:
[[[ 65. 60. 55. 50. 45.]
[ 40. 35. 30. 25. 20.]]
[[100. 100. 100. 100. 100.]
[100. 100. 100. 100. 100.]]]
Data with factor:
[[[ 97.5 90. 82.5 75. 67.5]
[ 60. 52.5 45. 37.5 30. ]]
[[100. 100. 100. 100. 100. ]
[100. 100. 100. 100. 100. ]]]
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Data can also be retrieved from the data object using []. For unlayered data the [] can be used to slice the data. | # slice layer one row two
print('SY slice of layer on row two\n{}\n'.format(sy[0,:,2])) | SY slice of layer on row two
[0.13 0.13]
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
For layered data specify the layer number within the brackets. This will return a "LayerStorage" object which lets you change attributes of an individual layer. | # get layer one LayerStorage object
hk_layer_one = hk[0]
# change the print code and factor for layer one
hk_layer_one.iprn = '2'
hk_layer_one.factor = 1.1
print('Layer one data without factor:\n{}\n'.format(hk_layer_one.get_data()))
print('Data with new factor:\n{}\n'.format(hk.array)) | Layer one data without factor:
[[65. 60. 55. 50. 45.]
[40. 35. 30. 25. 20.]]
Data with new factor:
[[[ 71.5 66. 60.5 55. 49.5]
[ 44. 38.5 33. 27.5 22. ]]
[[100. 100. 100. 100. 100. ]
[100. 100. 100. 100. 100. ]]]
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Modifying DataData can be modified in several ways. One way is to set data for a given layer within a LayerStorage object, like the one accessed in the code above. Another way is to set the data attribute to the new data. Yet another way is to call the data object's set_data method. | # set data within a LayerStorage object
hk_layer_one.set_data([120.0, 100.0, 80.0, 70.0, 60.0, 50.0, 40.0, 30.0, 25.0, 20.0])
print('New HK data no factor:\n{}\n'.format(hk.get_data()))
# set data attribute to new data
ic_package.strt = 150.0
print('New strt values:\n{}\n'.format(ic_package.strt.array))
# call set_data
sto_package.ss.set_data([0.000003, 0.000004])
print('New ss values:\n{}\n'.format(sto_package.ss.array)) | New HK data no factor:
[[[120. 100. 80. 70. 60.]
[ 50. 40. 30. 25. 20.]]
[[100. 100. 100. 100. 100.]
[100. 100. 100. 100. 100.]]]
New strt values:
[[[150. 150. 150. 150. 150.]
[150. 150. 150. 150. 150.]]
[[150. 150. 150. 150. 150.]
[150. 150. 150. 150. 150.]]]
New ss values:
[[[3.e-06 3.e-06 3.e-06 3.e-06 3.e-06]
[3.e-06 3.e-06 3.e-06 3.e-06 3.e-06]]
[[4.e-06 4.e-06 4.e-06 4.e-06 4.e-06]
[4.e-06 4.e-06 4.e-06 4.e-06 4.e-06]]]
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Modifying the Simulation Path The simulation path folder can be changed by using the set_sim_path method in the MFFileMgmt object. The MFFileMgmt object can be obtained from the simulation object through properties: sim.simulation_data.mfpath | # create new path
save_folder = os.path.join(sim_path, 'sim_modified')
# change simulation path
sim.simulation_data.mfpath.set_sim_path(save_folder)
# create folder
if not os.path.isdir(save_folder):
os.makedirs(save_folder) | WARNING: MFFileMgt's set_sim_path has been deprecated. Please use MFSimulation's set_sim_path in the future.
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Adding a Model Relative Path A model relative path lets you put all of the files associated with a model in a folder relative to the simulation folder. Warning, this will override all of your file paths to model package files and will also override any relative file paths to external model data files. | # Change path of model files relative to the simulation folder
model.set_model_relative_path('model_folder')
# create folder
if not os.path.isdir(save_folder):
os.makedirs(os.path.join(save_folder,'model_folder'))
# write simulation to new folder
sim.write_simulation()
# run simulation from new folder
sim.run_simulation() | writing simulation...
writing simulation name file...
writing simulation tdis package...
writing ims package ims...
writing model example_model...
writing model name file...
writing package dis...
writing package npf...
writing package ic...
writing package sto...
writing package wel...
writing package oc...
writing package chd_0...
writing package rch...
writing package ts_0...
writing package obs_0...
FloPy is using the following executable to run the model: /Users/jdhughes/.local/bin/mf6
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Post-Processing the Results Results can be retrieved from the master simulation dictionary. Results are retrieved from the master simulation dictionary using a tuple key that identifies the data to be retrieved. For head data use the key ('<model name>', 'HDS', 'HEAD'), where <model name> is the name of your model. For cell by cell budget data use the key ('<model name>', 'CBC', '<flow data name>'), where <flow data name> is the name of the flow data to be retrieved (e.g. 'FLOW-JA-FACE'). All available output keys can be retrieved using the output_keys method. | keys = sim.simulation_data.mfdata.output_keys() | ('example_model', 'CBC', 'STO-SS')
('example_model', 'CBC', 'STO-SY')
('example_model', 'CBC', 'FLOW-JA-FACE')
('example_model', 'CBC', 'WEL')
('example_model', 'HDS', 'HEAD')
| CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
The entries in the list above are keys for data in the head file "HDS" and data in cell by cell flow file "CBC". Keys in this list are not guaranteed to be in any particular order. The code below uses the head file key to retrieve head data and then plots head data using matplotlib. | import matplotlib.pyplot as plt
import numpy as np
# get all head data
head = sim.simulation_data.mfdata['example_model', 'HDS', 'HEAD']
# get the head data from the end of the model run
head_end = head[-1]
# plot the head data from the end of the model run
levels = np.arange(160,162,1)
extent = (0.0, 1000.0, 2500.0, 0.0)
plt.contour(head_end[0, :, :],extent=extent)
plt.show() | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
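flopy also ships plotting utilities that understand the model grid. A sketch using PlotMapView to draw the same final heads, assuming the `model` and `head_end` objects from the cells above:

```python
# Plot the final heads on the model grid with flopy's PlotMapView utility.
pmv = flopy.plot.PlotMapView(model=model)
quadmesh = pmv.plot_array(head_end[0])
plt.colorbar(quadmesh)
plt.show()
```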
Results can also be retrieved using the existing binaryfile method. | # get head data using old flopy method
hds_path = os.path.join(sim_path, model_name + '.hds')
hds = flopy.utils.HeadFile(hds_path)
# get heads after 1.0 days
head = hds.get_data(totim=1.0)
# plot head data
plt.contour(head[0, :, :],extent=extent)
plt.show() | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_mf6_tutorial.ipynb | jdlarsen-UA/flopy |
Regression Regression is fundamentally a way to estimate a dependent variable based on its relationships to predictor variables. This can be done both linearly and non-linearly, with anywhere from a single predictor variable to many. However, there are certain assumptions that must be satisfied in order for these results to be trustworthy. | #Import Packages
import numpy as np
import pandas as pd
#Plotting
import seaborn as sns
sns.set(rc={'figure.figsize':(11,8)})
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.ticker as ticker
#Stats
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
import statistics | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Linear Regression Recall $y=mx+b$; in this case we can see that $y$ is defined by a pair of constants and $x$. Similarly in regression, we have our dependent (target) variable being estimated by a set of derived constants and the input variable(s). Since what comes out of the model is not necessarily what was observed, we call the prediction of the model $\hat{y}$. For $n$ predictors the prediction is $\hat{y}=\hat{\beta_0} + \hat{\beta_1}x_1 + ... + \hat{\beta_n}x_n$, where the $\hat{\beta_i}$ are the estimated coefficients and constant, and the residual $\epsilon = y-\hat{y}$ is the part the model does not explain. In the case of simple linear regression, the fitted line has the equation $\hat{y}=\hat{\beta_0}+\hat{\beta_1}x_1$, with $\epsilon$ being the distance between the observation and the prediction, i.e. $y-\hat{y}$. Now the question is "how do we determine what the $\hat{\beta_0}\ ...\ \hat{\beta_n}$ should be?" or rather "how can we manipulate the $\hat{\beta_0}\ ...\ \hat{\beta_n}$ to minimize error?" $\epsilon$ is defined as the difference between reality and prediction, so minimizing this distance by itself would be a good starting place. However, since we generally want to punish the algorithm more for missing points by a large margin, $punishment = (y-\hat{y})^2 = \epsilon^2$ is used. More formally, for a series of $m$ data points, the Sum of Squared Errors is defined as $SSE=\sum_{i=1}^{m}(y_i-\hat{y_i})^2$ for $m$ predictions. This makes the Mean Squared Error $MSE=\frac{SSE}{m}$. So, if we minimize the mean squared error ($MSE$) we find the optimal line. | #Create a function to predict our line
def predict(x,slope,intercept):
return (slope*x+intercept)
#Generate Data
X = np.random.uniform(10,size=10)
Y = list(map(lambda x: x+ 2*np.random.randn(1)[0], X)) #f(x) = x + 2N
#Graph Cloud
sns.relplot(x='X', y='Y', data=pd.DataFrame({'X':X,'Y':Y}),color='darkblue', zorder=10) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Now, with a regression line | #Get Regression Results
slope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)
#Create Regression Line
lx = [min(X)-1,max(X)+1] #X-Bounds
ly = [predict(lx[0],slope,intercept),predict(lx[1],slope,intercept)] #Predictions at Bounds
#Graph
sns.relplot(x='X', y='Y', data=pd.DataFrame({'X':X,'Y':Y}),color='darkblue', zorder=10)
sns.lineplot(lx, ly, color='r', zorder=5) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Then, find the $y_i-\hat{y_i}$ | #Plot Background
sns.relplot(x='X', y='Y', data=pd.DataFrame({'X':X,'Y':Y}),color='darkblue', zorder=10)
sns.lineplot(lx, ly, color='r', zorder=5)
#Plot Distances
for i in range(len(X)):
plt.plot([X[i],X[i]],[Y[i],predict(X[i],slope,intercept)],color = 'royalblue',linestyle=':',zorder=0) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Finally, calculate the MSE | #Setup
fig, axs = plt.subplots(ncols=3, figsize=(15,5))
#Calculations
xy = pd.DataFrame({'X':X,'Y':Y}).sort_values('Y')
xy['Y`'] = list(map(lambda x,y: ((predict(x,slope,intercept))), xy['X'], xy['Y']))
xy['E'] = abs(xy['Y'] - xy['Y`'])
xy = xy.sort_values(by=['E'])
#Plot First Graph
for i in range(len(xy)):
axs[0].plot([i,i],[0,xy.iloc[i,3]], color='royalblue',linestyle='-')
axs[0].set_title('Sorted Errors')
#Plot Second Graph
for i in range(len(xy)):
axs[1].plot([i,i],[0,xy.iloc[i,3]**2], color='royalblue',linestyle='-')
axs[1].set_title('Sorted Squared Errors')
#Plot Third Graph
total = min(xy['E'])
for i in range(len(xy)):
deltah = xy.iloc[i,3]**2
axs[2].plot([i,i],[0,xy.iloc[i,3]**2], color='royalblue',linestyle='-')
axs[2].plot([max(0,i-1),i],[total,total + deltah], color='red',linestyle='-',marker='o')
total+=deltah
axs[2].set_title('Running Sum of Squared Errors')
plt.show()
#Calculate MSE
MSE = statistics.mean(list(map(lambda e: e**2, xy['E'])))
print('Sum of Squared Errors:',round(total-min(xy['E']),3))
print('Total Observations:',len(xy))
print('Mean Squared Error:',round(MSE,4)) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Now that we have a measure of success, we would like to develop some intuition, and a function, for how well we are predicting relative to a baseline. Imagine a horizontal line drawn exactly through the mean of the Y-values (the target values) of the cloud: roughly half the points lie above it and half below. The squared error of that guess is $\sum_{i=1}^{m}(y_i-\bar{y})^2$, which from here on we will call the $SST$, or Total Sum of Squares. Guessing the mean is the simplest baseline guess you can make, and anything better is considered an improvement. As such, the ratio $\frac{SSE}{SST}$ measures error relative to that benchmark. Under the assumption that $0\leq SSE\leq SST$, the quantity $1-\frac{SSE}{SST}$ gives a [0,1] measure of success generally known as $R^2$.

Let's calculate. | #Create Data, Figures
X = np.random.uniform(10,size=10)
Y = list(map(lambda x: x+ len(X)/10*np.random.randn(1)[0], X))
avgy = statistics.mean(Y)
fig, axs = plt.subplots(ncols=2,figsize=(10,5))
#Calculate Regressions
slope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)
lx = [min(X)-1,max(X)+1]
ly = [predict(lx[0],slope,intercept),predict(lx[1],slope,intercept)]
avgy = statistics.mean(Y)
#Calculate and Format MSE
MSEr = 'MSE: '+str(round(statistics.mean(list(map(lambda x, y: (y-(predict(x,slope,intercept)))**2, X, Y))),3))
MSEa = 'MSE: '+str(round(statistics.mean(list(map(lambda y: (y-avgy)**2, Y))),3))
#Create Scatter And Lines
axs[0].scatter(X,Y, color='darkblue', zorder=10)
axs[1].scatter(X,Y, color='darkblue', zorder=10)
axs[0].plot(lx, ly, color='r', zorder=5, label=MSEr)
axs[1].plot(lx, [avgy,avgy], color='lightslategray', label=MSEa)
#Create Dotted Lines
for i in range(len(X)):
axs[0].plot([X[i],X[i]],[Y[i],predict(X[i],slope,intercept)],color = 'red',linestyle=':',zorder=0)
axs[1].plot([X[i],X[i]],[Y[i],avgy],color = 'dimgray',linestyle=':',zorder=0)
#Calculate R2
R2r = 'Linear Regression: R-squared = '+str(round(r_value**2,3))
SSTa = sum(list(map(lambda y: (y-statistics.mean(Y))**2, Y)))
SSEa = sum(list(map(lambda y: (y-avgy)**2, Y))) #For the mean-only model the residuals are y - mean(Y), so SSE equals SST
R2a = 'Mean Baseline: R-squared = '+str(round(1 - SSEa/SSTa,3))
#Format
axs[0].set(xlabel='X',ylabel='Y',title=R2r)
axs[1].set(xlabel='X',ylabel='Y',title=R2a)
axs[0].legend()
axs[1].legend()
#Paint Graphs
plt.show() | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
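The same idea packaged as a small helper, shown as a hedged sketch (not part of the original lecture code) that computes $R^2 = 1 - SSE/SST$ for any observed/predicted pair:

```python
import numpy as np

def r_squared(y, y_hat):
    # R^2 = 1 - SSE/SST: the model measured against the mean baseline
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    sse = np.sum((y - y_hat) ** 2)        # error left over after the model
    sst = np.sum((y - y.mean()) ** 2)     # error of simply guessing the mean
    return 1 - sse / sst

print(r_squared([1, 2, 3], [1, 2, 3]))    # perfect model -> 1.0
print(r_squared([1, 2, 3], [2, 2, 2]))    # mean baseline -> 0.0
```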
Thankfully, we don't have to do this manually. To repeat this process and obtain all of the same information automatically, we can use SciPy, a scientific computing library for Python. As you saw above, the linregress function solves for the line that gives the minimal MSE. Let's see how this would be implemented in practice. | X = np.random.randn(150).tolist()
Y = list(map(lambda x: x + np.random.randn(1)[0]/1.5, X)) #f(x) = x + N(0,2/3)
reg = stats.linregress(X,Y)
title = 'Linear Regression R-Squared: '+str(round(reg.rvalue**2,3))
fig = sns.lmplot(x='X',y='Y',data=pd.DataFrame({'X':X,'Y':Y}),ci=False).set(title=title)
fig.ax.get_lines()[0].set_color('red') | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
However, not all data can be regressed. There are four criteria that must be met for the assumptions underlying linear regression to be satisfied. These can generally be analyzed through the residuals, $y-\hat{y}$:

1) Linearity
2) Independence
3) Homoscedasticity
4) Normality

We will go through each one and describe how to check that your data is acceptable.

Linearity
This assumption is that the relationship between the variables is linear in nature.

Method 1: Residuals vs. Observed; what you are looking for is any pattern in the residuals (ideally there is none). | #Data
X = np.random.randn(300).tolist()
#Setup
fig, axs = plt.subplots(ncols=2,figsize=(10,5))
#Accepted
Y = list(map(lambda x: x + np.random.randn(1)[0]/1.5, X))
sns.residplot(x='X',y='Y',data=pd.DataFrame({'X':X,'Y':Y})
,ax=axs[0]).set_title('Residuals of Linear Regression: Accepted')
#Rejected
Y = list(map(lambda x: x**2 + np.random.randn(1)[0]/8, X))
sns.residplot(x='X',y='Y',data=pd.DataFrame({'X':X,'Y':Y})
,ax=axs[1], color='b').set_title('Residuals of Linear Regression: Rejected') | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Method 2: Observed vs. Predicted; again, you are looking for patterns. The points should scatter evenly around a straight line. | #Setup
fig, axs = plt.subplots(ncols=2,figsize=(10,5))
#Accept
Y = list(map(lambda x: x + np.random.randn(1)[0]/1.5, X))
slope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)
Yp = list(map(lambda x: predict(x,slope,intercept), X))
sns.scatterplot(x='Y',y='Yp',data=pd.DataFrame({'Y':Y,'Yp':Yp}), ax=axs[0])
axs[0].set(title='Predictions versus Observations: Accepted')
#Reject
Y = list(map(lambda x: x**2 + np.random.randn(1)[0]/1.5, X))
slope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)
Yp = list(map(lambda x: predict(x,slope,intercept), X))
sns.scatterplot(x='Y',y='Yp',data=pd.DataFrame({'Y':Y,'Yp':Yp}), ax=axs[1])
axs[1].set(title='Predictions versus Observations: Rejected') | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
Independence
This looks at whether the residuals $\epsilon$ are correlated with the order of the observations or with each other, and, when there are multiple predictors, whether the predictors themselves are highly correlated. This is a serious consideration for time-series data, but we will focus on non-time-series data for now.

Method 1) Same as above: compare the order of observations against the residuals; there should be no pattern.

Method 2) For more than one predictor, use VIF scores to check for multicollinearity amongst the variables. There is no hard-and-fast rule for kicking variables out due to VIF scores, but a good rule of thumb is that anything greater than 10 should likely be dealt with and anything greater than 5 should at least be looked at. | #Calculate Scores
A = np.random.randn(150).tolist()
B = list(map(lambda a: a + np.random.randn(1)[0]/20, A))
C = np.random.uniform(1,10,size=150).tolist()
data = pd.DataFrame({'A':A,'B':B,'C':C})
data = add_constant(data)
VIFs = pd.DataFrame([variance_inflation_factor(data.values, i) for i in range(data.shape[1])], index=data.columns,
columns=['VIF Score'])
print('Accept if less than 5, Reject if more than 10\n')
print(VIFs) | Accept if less than 5, Reject if more than 10
VIF Score
const 5.102457
A 431.508540
B 431.471557
C 1.005629
| MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
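To build intuition for what the table above reports, here is a hedged sketch of the idea behind VIF: each predictor is regressed on the other predictors, and VIF = 1/(1 - R^2) of that auxiliary regression. The approximation below uses only B to explain A (the real `variance_inflation_factor` uses all remaining predictors plus the constant), so it should roughly match the huge value reported for A:

```python
from scipy import stats

# A and B were generated above as near-duplicates, so R^2 of A regressed on B is close to 1
_, _, r_ab, _, _ = stats.linregress(B, A)
approx_vif_A = 1 / (1 - r_ab**2)   # near-collinear predictors blow this up
print('Approximate VIF of A (explained by B alone):', round(approx_vif_A, 1))
```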
Homoscedasticity
Check whether the spread of the residuals stays roughly constant across the predicted values (or across time/observations).

Method 1) Residuals vs. Predicted plot; the band of points should have roughly constant width. | #Setup
fig, axs = plt.subplots(ncols=2,figsize=(10,5))
#Accept
X = np.random.randn(150)
Y = np.random.randn(150)
slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y)
predicted = X*slope+intercept
residuals = np.subtract(Y,predicted)
sns.scatterplot(x='Predicted',y='Residuals', ax = axs[0],
data=pd.DataFrame({'Predicted':predicted,'Residuals':residuals})).set_title('Accept')
#Reject
X = np.random.randn(150)
Y = list(map(lambda x: x**2 + np.random.randn(1)[0], X))
slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y)
predicted = X*slope+intercept
residuals = np.subtract(Y,predicted)
sns.scatterplot(x='Predicted',y='Residuals', ax = axs[1],
data=pd.DataFrame({'Predicted':predicted,'Residuals':residuals})).set_title("Reject") | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
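Besides eyeballing the spread, homoscedasticity can also be tested numerically. A minimal, hedged sketch using statsmodels' Breusch-Pagan test on the residuals from the rejected example above (a small p-value is evidence of heteroscedasticity):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Reuse X and residuals from the rejected example in the previous cell
exog = sm.add_constant(np.array(X).reshape(-1, 1))
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(np.array(residuals), exog)
print('Breusch-Pagan p-value:', round(lm_pvalue, 4))
```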
Normality
The residuals should be approximately normally distributed.

Method 1) Q-Q Plot: points should fall close to the 45-degree line. | fig, axs = plt.subplots(ncols=2,figsize=(10,5))
#Accept
X = np.random.randn(150)
Y = np.random.randn(150)
slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y)
predicted = X*slope+intercept
residuals = np.subtract(Y,predicted)
sm.qqplot(residuals, line='45',ax=axs[0])
#Reject
X = np.random.randn(150)
Y = list(map(lambda x: x**2 + np.random.randn(1)[0], X))
slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y)
predicted = X*slope+intercept
residuals = np.subtract(Y,predicted)
sm.qqplot(residuals, line='45',ax=axs[1])
plt.show() | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
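The Q-Q plot can be backed up with a formal test. A small sketch using SciPy's Shapiro-Wilk test on a residual array (a small p-value suggests the residuals are not normally distributed):

```python
from scipy import stats

# `residuals` is whichever residual array you want to check, e.g. from the cell above
stat, p_value = stats.shapiro(residuals)
print('Shapiro-Wilk statistic:', round(stat, 4), ' p-value:', round(p_value, 4))
```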
Multiple Regression
This is exactly the same as before, but we can increase the number of predictors. For example, this is what it looks like to perform linear regression in three dimensions, fitting a plane to a cloud of points. | #Generate Data
X = np.random.randn(150)
Y = np.random.randn(150)
Z = list(map(lambda y, x: 4*(y + x) + np.random.randn(1)[0], Y, X))
#Regress
reg = smf.ols(formula='Z ~ X+Y', data=pd.DataFrame({'X':X,'Y':Y,'Z':Z})).fit()
#Calculate Plane
xx = np.linspace(min(X), max(X), 8)
yy = np.linspace(min(Y), max(Y), 8)
XX, YY = np.meshgrid(xx, yy)
ZZ = XX*reg.params[1]+YY*reg.params[2]+reg.params[0]
#Create the figure
fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot(111,projection='3d')
ax.set(xlabel='X',ylabel='Y',zlabel='Z')
ax.view_init(30, 315)
ax.set(title='Three Dimensional Visualization')
#Plot Plane and Points
ax.scatter(X, Y, Z, color='blue')
ax.plot_wireframe(X=XX,Y=YY,Z=ZZ) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
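As a brief aside before the diagnostic plots below: the fitted statsmodels object already bundles the coefficients, R-squared, confidence intervals, and p-values into one report, which is often the quickest sanity check:

```python
# Full fit report for the plane fitted above
print(reg.summary())
```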
We are also still able to run all of the same tests as before, looking at the diagnostic plots. | #1) Linearity: Residuals vs. Observed
residuals = list(map(lambda x, y, z: z-(x*reg.params[1]+y*reg.params[2]+reg.params[0]), X,Y,Z))
sns.relplot(x='Observed',y='Residuals',
data=pd.DataFrame({'Observed':Z,'Residuals':residuals}))
#2) Independence: VIF Scores
data = pd.DataFrame({'X':X,'Y':Y})
data = add_constant(data)
VIFs = pd.DataFrame([variance_inflation_factor(data.values, i) for i in range(data.shape[1])], index=data.columns,
columns=['VIF Score'])
print('Accept if less than 5, Reject if more than 10\n')
print(VIFs)
#3) Predicted vs. Residuals
predicted = X*reg.params[1]+Y*reg.params[2]+reg.params[0]
residuals = list(map(lambda x, y, z: z-(x*reg.params[1]+y*reg.params[2]+reg.params[0]), X,Y,Z))
sns.relplot(x='Predicted',y='Residuals',
data=pd.DataFrame({'Predicted':predicted,'Residuals':residuals}))
#4) Normality: QQ Plot
residuals = np.array(list(map(lambda x, y, z: z-(x*reg.params[1]+y*reg.params[2]+reg.params[0]), X,Y,Z)))
sm.qqplot(residuals, line='45')
plt.show() | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
This can be done in arbitrarily many dimensions, but because we cannot visualize above three dimensions (at least statically), you can only investigate higher-dimensional fits with the diagnostic plots.

Predictor Significance Analysis
Remembering back to how a t-test lets you say statistically whether two quantities differ, we would like to know whether each $\hat{\beta_i}$ is statistically different from zero. If it is, at whatever confidence level we choose, we can say that the predictor is useful in predicting the target variable. This is done automatically by any regression engine and reported as a p-value. The p-value is the probability of seeing a coefficient estimate at least as extreme as the one observed if the true coefficient were actually zero; so the lower the p-value, the stronger the evidence that the coefficient is different from zero. When you hear the term statistical significance, what you are hearing is "does the p-value fall below our chosen threshold?" That threshold is often taken as 0.1, 0.05, or 0.01 depending on the application. Each predictor has its own p-value, which says whether or not it is useful in predicting the target variable. Let's see. | #Generate Data
X = np.random.randn(150)
Y = np.random.randn(150)
Z = list(map(lambda y, x: 4*(y + x) + np.random.randn(1)[0], Y, X))
#Regress
reg = smf.ols(formula='Z ~ X+Y', data=pd.DataFrame({'X':X,'Y':Y,'Z':Z})).fit()
#Grab P-values and Coefficients
confint = pd.DataFrame(reg.conf_int())
pvalues = pd.DataFrame({'P-Value':reg.pvalues.round(7)})
print(pvalues)
print(confint) | P-Value
Intercept 0.240027
X 0.000000
Y 0.000000
0 1
Intercept -0.064758 0.256575
X 3.851395 4.147178
Y 3.821662 4.162070
| MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
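To make those p-values less of a black box, here is a hedged sketch of the calculation statsmodels performs for each coefficient: a t-statistic (the estimate divided by its standard error) compared against a t distribution with the residual degrees of freedom:

```python
from scipy import stats

# Recompute the two-sided p-value for the X coefficient of the model fitted above
coef = reg.params['X']
se = reg.bse['X']                       # standard error of the estimate
t_stat = coef / se                      # how many standard errors away from zero
p_value = 2 * stats.t.sf(abs(t_stat), reg.df_resid)
print(round(t_stat, 3), p_value)        # should match reg.pvalues['X']
```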
This is generally much easier to do in purely statistical engines like R. We will cover that soon.

Solving Regression
There are many ways to solve for the regression coefficients; one of them is gradient descent: adjusting the coefficients step by step until you find the minimum MSE. Let's visualize this. Note: this will take a minute or so to run, as it calculates the MSE of a regression line for every slope/intercept combination on a grid. What develops is a smooth visualization of the MSE surface. If you imagine placing a marble on this surface, the direction it rolls is the direction toward the best-fit line. It will roll, and reverse, and roll, and reverse until it finally settles at the lowest point. This is gradient descent. | def getMSE(slope, intercept, X, Y):
return statistics.mean(list(map(lambda x, y: (y-(predict(x,slope,intercept)))**2, X, Y)))
def getHMData(slope,intercept,X,Y,step=0.1,):
slopeDist = slope[1]-slope[0]
slopeSteps = int(slopeDist/step)
interceptDist = intercept[1]-intercept[0]
interceptSteps = int(interceptDist/step)
data = []
for i in range(slopeSteps):
row = []
for j in range(interceptSteps):
row.append(getMSE(slope=slope[0]+step*i,intercept=intercept[0]+step*j,X=X,Y=Y))
data.append(row)
return pd.DataFrame(data)
#Set Limits
increment = 0.1
slopes = [-10,10]
numslopes = (slopes[1]-slopes[0])/increment
intercepts = [-10,10]
numinter = (intercepts[1]-intercepts[0])/increment
#Get Data
X = np.random.randn(500)-2*np.random.randn(1)[0]
Y = list(map(lambda x: np.random.uniform(1,4)*x + np.random.randn(1)[0], X))
data = getHMData(slope=slopes,intercept=intercepts,X=X,Y=Y,step=increment)
#Format Labels
data = data.set_index(np.linspace(slopes[0],slopes[1],int(numslopes)).round(1))
data.columns = np.linspace(intercepts[0],intercepts[1],int(numinter)).round(1)
#Heat Map of MSE
fig, axs = plt.subplots(ncols=2,figsize=(20,10))
sns.heatmap(data.iloc[::-1,:],cmap="RdYlGn_r",ax = axs[1]).set(title='MSE over Slopes and Intercepts')
axs[1].set(xlabel='Intercept',ylabel='Slope')
#Regression Plot
sns.regplot(x='X',y='Y',data=pd.DataFrame({'X':X,'Y':Y}), ax = axs[0])
slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y)
title = ('Regression\nSlope: '+str(slope.round(3))+', Intercept: '+
str(intercept.round(3))+', R-Squared: '+str(r_value.round(3)))
axs[0].set_title(title)
#Plot Over Heatmap
axs[1].hlines([slope*-10+100], *axs[1].get_xlim(),linestyles='dashed',colors='indigo')
axs[1].vlines([intercept*10+100], *axs[1].get_ylim(),linestyles='dashed',colors='indigo')
axs[1].scatter(intercept*10+100,slope*-10+100,zorder=10,c='indigo',s=50) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
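The heatmap shows the MSE surface; here is a minimal, hedged sketch of the "marble rolling" itself: plain gradient descent on the slope and intercept using the gradients of the MSE (the learning rate and iteration count are arbitrary choices, not values from the lecture):

```python
import numpy as np

Xa, Ya = np.array(X), np.array(Y)        # reuse the cloud generated above
m, b = 0.0, 0.0                          # start the marble at slope 0, intercept 0
lr = 0.01                                # assumed learning rate; tune as needed

for _ in range(5000):
    error = m * Xa + b - Ya              # y_hat - y
    m -= lr * 2 * np.mean(error * Xa)    # d(MSE)/d(slope)
    b -= lr * 2 * np.mean(error)         # d(MSE)/d(intercept)

print('Gradient descent:', round(m, 3), round(b, 3))
print('Closed-form fit :', round(slope, 3), round(intercept, 3))   # should be close
```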
Your turn to solve a problem. The code below will generate three sets of values: 'X1' and 'X2' (predictors) and 'Y' (target). You will need to fit a regression. Then, check the predictors to see which one is significant. After this, re-run the regression with only the significant predictor and confirm that the data satisfies our regression assumptions. Finally, create a seaborn plot of your regression. | Xs, Y = make_regression(n_samples=300, n_features=2, n_informative=1, noise=1)
data = pd.DataFrame(np.column_stack([Xs,Y]), columns=['X1','X2','Y'])
reg = smf.ols(formula='Y~X1+X2', data=pd.DataFrame({'X1':data['X1'],'X2':data['X2'],'Y':data['Y']})).fit()
confint = pd.DataFrame(reg.conf_int())
pvalues = pd.DataFrame({'P-Value':reg.pvalues.round(7)})
print(confint)
print(pvalues)
reg = smf.ols(formula='Y ~ X2', data=pd.DataFrame({'X2':data['X2'],'Y':data['Y']})).fit() #Keep only the significant predictor (check the p-values above; X2 is assumed significant here)
sm.qqplot(reg.resid, line='45')
plt.show()
sns.relplot(x='Residuals',y='Predicted',data=pd.DataFrame({'Residuals':reg.resid,'Predicted':reg.fittedvalues})) | _____no_output_____ | MIT | Lectures/4Regression_Analysis_Student.ipynb | jacobswe/data-science-lectures |
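The Titanic SVM notebook that follows relies on an import cell not shown in this excerpt; judging from the functions it calls, it presumably begins with something like this (assumed, not taken from the source):

```python
import re
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn import metrics
```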
Data loading | df = pd.read_csv('data/train.csv')
df.head(10)
print('There are {} records in total.'.format(len(df)))
print('{} passengers survived.'.format(df.Survived.sum())) | There are 891 records in total.
342 passengers survived.
| MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Data analysis

Pclass represents socio-economic status; the smaller the number, the better the status. 1 = upper, 2 = mid, 3 = lower | df.Pclass.value_counts(dropna=False)
df[['Pclass', 'Survived']].groupby(['Pclass']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*The higher the social class, the higher the survival rate.* | df['P1'] = (df.Pclass == 1).astype('int')
df['P2'] = (df.Pclass == 2).astype('int')
df['P3'] = (df.Pclass == 3).astype('int')
df.head() | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Sex: male is encoded as 1, female as 0 | df.Sex.value_counts(dropna=False)
df[['Sex', 'Survived']].groupby(['Sex']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*Females had a noticeably higher survival rate.* | df.Sex.replace(['male', 'female'], [1, 0], inplace=True)
df.head() | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Age has missing values; we simply fill them with the median. | df.Age.isnull().sum()
df.Age.fillna(df.Age.median(), inplace=True)
df['Age_categories'] = pd.cut(df['Age'], 5)
df[['Age_categories', 'Survived']].groupby(['Age_categories']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*Children clearly had a higher survival rate.*

SibSp: the number of siblings and spouses aboard | df.SibSp.isnull().sum()
Parch: the number of parents and children aboard | df.Parch.isnull().sum()
df['Family_size'] = df['SibSp'] + df['Parch']
df[['Family_size', 'Survived']].groupby(['Family_size']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*The sibling/spouse count and the parent/child count are combined into a single Family_size feature.*

Fare: the ticket fare | df.Fare.isnull().sum()
df['Fare_categories'] = pd.qcut(df['Fare'], 5)
df[['Fare_categories', 'Survived']].groupby(['Fare_categories']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*The higher the fare, the wealthier the passenger, and the greater the chance of survival.*

Embarked: port of embarkation. C = Cherbourg, Q = Queenstown, S = Southampton | df.Embarked.value_counts(dropna=False)
df[['Embarked', 'Survived']].groupby(['Embarked']).mean().plot(kind='bar') | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
*Passengers who embarked at Cherbourg had a higher chance of survival.* | df['E1'] = (df['Embarked'] == 'S').astype('int')
df['E2'] = (df['Embarked'] == 'C').astype('int')
df['E3'] = (df['Embarked'] == 'Q').astype('int')
df['E4'] = (df['Embarked'].isnull()).astype('int')
df.head() | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
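The manual 0/1 flags above work fine; as a hedged aside, pandas can produce the same kind of one-hot columns in a single call (the column names will differ from E1-E4, and this is not what the original notebook does):

```python
# One-hot encode the embarkation port, with an extra column for missing values
embarked_dummies = pd.get_dummies(df['Embarked'], prefix='Embarked', dummy_na=True)
embarked_dummies.head()
```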
Title extracted from the name: Mr, Miss, Master, ... | def parse_title(name):
match = re.match(r'\w+\,\s(\w+)\.\s', name)
if match:
return match.group(1)
else:
return np.nan
df['Title'] = df.Name.apply(parse_title)
df.Title.fillna('NoTitle', inplace=True)
df.Title.value_counts(dropna=False)
title_set = set(df.Title)
for i, title in enumerate(title_set):
df['T'+str(i)] = (df.Title == title).astype('int')
df.head(3) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
The preprocessed data | df_processed = df[['Survived', 'Sex', 'Age', 'Family_size', 'Fare', 'Pclass',
'E1', 'E2', 'E3', 'E4', 'T0', 'T1', 'T2', 'T3', 'T4', 'T5',
'T6', 'T7', 'T8', 'T9', 'T10', 'T11', 'T12', 'T13', 'T14']]
df_processed.head(3) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Split into training and test sets | total_X = df_processed.iloc[:, 1:].values
total_Y = df_processed.iloc[:, 0].values
train_X, test_X, train_Y, test_Y = train_test_split(total_X, total_Y, test_size=0.25) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Data standardization | X_scaler = StandardScaler()
train_X_std = X_scaler.fit_transform(train_X)
test_X_std = X_scaler.transform(test_X) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Grid search over SVM hyperparameters | svm = SVC()
params = {
'C': np.logspace(1, (2 * np.random.rand()), 10),
'gamma':np.logspace(-4, (2 * np.random.rand()), 10)
}
grid_search = GridSearchCV(svm, params, cv=3)
grid_search.fit(train_X_std, train_Y)
best_score = grid_search.best_score_
best_params = grid_search.best_params_
C = best_params['C']
gamma = best_params['gamma']
C, gamma
grid_search.score(test_X_std, test_Y)
predicts = grid_search.predict(test_X_std)
print(metrics.classification_report(test_Y, predicts)) | precision recall f1-score support
0 0.78 0.95 0.85 129
1 0.89 0.63 0.74 94
avg / total 0.83 0.81 0.80 223
| MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
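A small hedged note: because `GridSearchCV` refits the best parameter combination on the full training set by default, the tuned model can also be pulled out directly rather than reconstructing an `SVC` by hand:

```python
# The refit best model is available straight from the search object
best_svm = grid_search.best_estimator_
print(best_svm)
print('Test accuracy:', best_svm.score(test_X_std, test_Y))
```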
Final prediction | total_X_std = X_scaler.transform(df_processed.iloc[:, 1:].values)
total_Y = df_processed.iloc[:, 0]
svm = SVC(C=C, gamma=gamma)
svm.fit(total_X_std, total_Y)
svm.score(total_X_std, total_Y)
test_df = pd.read_csv('data/test.csv')
test_df.head()
test_df.Sex.replace(['male', 'female'], [1, 0], inplace=True)
test_df.Age.fillna(df.Age.median(), inplace=True)
test_df['Family_size'] = test_df['SibSp'] + test_df['Parch']
test_df['P1'] = (test_df.Pclass == 1).astype('int')
test_df['P2'] = (test_df.Pclass == 2).astype('int')
test_df['P3'] = (test_df.Pclass == 3).astype('int')
test_df['E1'] = (test_df['Embarked'] == 'S').astype('int')
test_df['E2'] = (test_df['Embarked'] == 'C').astype('int')
test_df['E3'] = (test_df['Embarked'] == 'Q').astype('int')
test_df['E4'] = (test_df['Embarked'].isnull()).astype('int')
test_df['Title'] = test_df.Name.apply(parse_title)
test_df.Title.fillna('NoTitle', inplace=True)
title_set = set(df.Title)
for i, title in enumerate(title_set):
test_df['T'+str(i)] = (test_df.Title == title).astype('int')
test_df.head()
test_df.isnull().sum()
test_df.Fare.fillna(df.Fare.median(), inplace=True)
test_df_processed = test_df[['Sex', 'Age', 'Family_size', 'Fare', 'Pclass',
'E1', 'E2', 'E3', 'E4', 'T0', 'T1', 'T2', 'T3', 'T4', 'T5',
'T6', 'T7', 'T8', 'T9', 'T10', 'T11', 'T12', 'T13', 'T14']]
test_df_processed.head(3)
final_X = test_df_processed.values
final_X_std = X_scaler.transform(final_X) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Prediction | predicts = svm.predict(final_X_std)
ids = test_df['PassengerId'].values
result = pd.DataFrame({
'PassengerId': ids,
'Survived': predicts
})
result.head()
result.to_csv('./2018-8-23_6.csv', index=False) | _____no_output_____ | MIT | SVM_sklearn.ipynb | CoderSLZhang/Kaggle-Titanic-Machine-Learning-from-Disaster |
Goals

1. Learn to implement the Resnet V2 Bottleneck Block (Type - 1) using Monk
   - Monk's Keras
   - Monk's Pytorch
   - Monk's Mxnet
2. Use Monk's network debugger to create complex blocks
3. Understand how syntactically different it is to implement the same block using
   - Traditional Keras
   - Traditional Pytorch
   - Traditional Mxnet

Resnet V2 Bottleneck Block - Type 1
- Note: The block structure can have variations too; this is just an example. | from IPython.display import Image
Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png') | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Table of contents

[1. Install Monk](1)

[2. Block basic Information](2)
  - [2.1) Visual structure](2-1)
  - [2.2) Layers in Branches](2-2)

[3) Creating Block using monk visual debugger](3)
  - [3.1) Create the first branch](3-1)
  - [3.2) Create the second branch](3-2)
  - [3.3) Merge the branches](3-3)
  - [3.4) Debug the merged network](3-4)
  - [3.5) Compile the network](3-5)
  - [3.6) Visualize the network](3-6)
  - [3.7) Run data through the network](3-7)

[4) Creating Block Using MONK one line API call](4)
  - [Mxnet Backend](4-1)
  - [Pytorch Backend](4-2)
  - [Keras Backend](4-3)

[5) Appendix](5)
  - [Study Material](5-1)
  - [Creating block using traditional Mxnet](5-2)
  - [Creating block using traditional Pytorch](5-3)
  - [Creating block using traditional Keras](5-4)

Install Monk

Using pip (Recommended)
  - colab (gpu)
    - All backends: `pip install -U monk-colab`
  - kaggle (gpu)
    - All backends: `pip install -U monk-kaggle`
  - cuda 10.2
    - All backends: `pip install -U monk-cuda102`
    - Gluon backend: `pip install -U monk-gluon-cuda102`
    - Pytorch backend: `pip install -U monk-pytorch-cuda102`
    - Keras backend: `pip install -U monk-keras-cuda102`
  - cuda 10.1
    - All backends: `pip install -U monk-cuda101`
    - Gluon backend: `pip install -U monk-gluon-cuda101`
    - Pytorch backend: `pip install -U monk-pytorch-cuda101`
    - Keras backend: `pip install -U monk-keras-cuda101`
  - cuda 10.0
    - All backends: `pip install -U monk-cuda100`
    - Gluon backend: `pip install -U monk-gluon-cuda100`
    - Pytorch backend: `pip install -U monk-pytorch-cuda100`
    - Keras backend: `pip install -U monk-keras-cuda100`
  - cuda 9.2
    - All backends: `pip install -U monk-cuda92`
    - Gluon backend: `pip install -U monk-gluon-cuda92`
    - Pytorch backend: `pip install -U monk-pytorch-cuda92`
    - Keras backend: `pip install -U monk-keras-cuda92`
  - cuda 9.0
    - All backends: `pip install -U monk-cuda90`
    - Gluon backend: `pip install -U monk-gluon-cuda90`
    - Pytorch backend: `pip install -U monk-pytorch-cuda90`
    - Keras backend: `pip install -U monk-keras-cuda90`
  - cpu
    - All backends: `pip install -U monk-cpu`
    - Gluon backend: `pip install -U monk-gluon-cpu`
    - Pytorch backend: `pip install -U monk-pytorch-cpu`
    - Keras backend: `pip install -U monk-keras-cpu`

Install Monk Manually (Not recommended)

Step 1: Clone the library
  - git clone https://github.com/Tessellate-Imaging/monk_v1.git

Step 2: Install requirements
  - Linux
    - Cuda 9.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
    - Cuda 9.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
    - Cuda 10.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
    - Cuda 10.1 - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
    - Cuda 10.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system) - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
  - Windows
    - Cuda 9.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
    - Cuda 9.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
    - Cuda 10.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
    - Cuda 10.1 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
    - Cuda 10.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system) - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
  - Mac
    - CPU (Non gpu system) - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
  - Misc
    - Colab (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
    - Kaggle (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`

Step 3: Add to system path (Required for every terminal or kernel run)
  - `import sys`
  - `sys.path.append("monk_v1/");`

Imports | # Common
import numpy as np
import math
import netron
from collections import OrderedDict
from functools import partial
#Using mxnet-gluon backend
# When installed using pip
from monk.gluon_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Block Information Visual structure | from IPython.display import Image
Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png') | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Layers in Branches - Number of branches: 2 - Common Elements - batchnorm -> relu - Branch 1 - identity - Branch 2 - conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv1x1 - Branches merged using - Elementwise addition (See Appendix to read blogs on resnets) Creating Block using monk debugger | # Imports and setup a project
# To use pytorch backend - replace gluon_prototype with pytorch_prototype
# To use keras backend - replace gluon_prototype with keras_prototype
from monk.gluon_prototype import prototype
# Create a sample project
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1"); | Mxnet Version: 1.5.1
Experiment Details
Project: sample-project-1
Experiment: sample-experiment-1
Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Create the first branch | def first_branch():
network = [];
network.append(gtf.identity());
return network;
# Debug the branch
branch_1 = first_branch()
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network); | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Create the second branch | def second_branch(output_channels=128, stride=1):
network = [];
# Bottleneck convolution
network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride));
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#3x3 convolution (the middle layer of the bottleneck, per the block diagram above)
network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=3, stride=1));
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#Normal convolution
network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=1));
return network;
# Debug the branch
branch_2 = second_branch(output_channels=128, stride=1)
network = [];
network.append(branch_2);
gtf.debug_custom_model_design(network); | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Merge the branches |
def final_block(output_channels=128, stride=1):
network = [];
#Common Elements
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#Create subnetwork and add branches
subnetwork = [];
branch_1 = first_branch()
branch_2 = second_branch(output_channels=output_channels, stride=stride)
subnetwork.append(branch_1);
subnetwork.append(branch_2);
# Add merging element
subnetwork.append(gtf.add());
# Add the subnetwork
network.append(subnetwork)
return network; | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Debug the merged network | final = final_block(output_channels=64, stride=1)
network = [];
network.append(final);
gtf.debug_custom_model_design(network); | _____no_output_____ | Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Compile the network | gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); | Model Details
Loading pretrained model
Model Loaded on device
Model name: Custom Model
Num of potentially trainable layers: 6
Num of actual trainable layers: 6
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Run data through the network | import mxnet as mx
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = gtf.system_dict["local"]["model"].forward(x);
print(x.shape, y.shape) | (1, 64, 224, 224) (1, 64, 224, 224)
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Visualize network using netron | gtf.Visualize_With_Netron(data_shape=(64, 224, 224)) | Using Netron To Visualize
Not compatible on kaggle
Compatible only for Jupyter Notebooks
Stopping http://localhost:8080
Serving 'model-symbol.json' at http://localhost:8080
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Creating Using MONK LOW code API Mxnet backend | from monk.gluon_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
| Mxnet Version: 1.5.1
Experiment Details
Project: sample-project-1
Experiment: sample-experiment-1
Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
Model Details
Loading pretrained model
Model Loaded on device
Model name: Custom Model
Num of potentially trainable layers: 6
Num of actual trainable layers: 6
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
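For comparison with the Monk one-liner above, here is a hedged sketch of the same pre-activation (ResNet V2) bottleneck structure written directly in MXNet Gluon; a minimal illustration under the assumptions noted in the comments, not the Monk library's internal implementation (shortcut placement and stride handling vary between real implementations):

```python
import mxnet as mx
from mxnet.gluon import nn

class PreActBottleneck(nn.HybridBlock):
    # Common BN -> ReLU, then: identity branch + (1x1 reduce -> 3x3 -> 1x1 expand) branch
    def __init__(self, channels, **kwargs):
        super(PreActBottleneck, self).__init__(**kwargs)
        self.bn = nn.BatchNorm()
        self.body = nn.HybridSequential()
        self.body.add(nn.Conv2D(channels // 4, kernel_size=1, use_bias=False),
                      nn.BatchNorm(), nn.Activation('relu'),
                      nn.Conv2D(channels // 4, kernel_size=3, padding=1, use_bias=False),
                      nn.BatchNorm(), nn.Activation('relu'),
                      nn.Conv2D(channels, kernel_size=1, use_bias=False))

    def hybrid_forward(self, F, x):
        pre = F.relu(self.bn(x))      # common batchnorm -> relu
        return pre + self.body(pre)   # elementwise add of the identity and conv branches

block = PreActBottleneck(64)
block.initialize()
out = block(mx.nd.zeros((1, 64, 224, 224)))
print(out.shape)                      # (1, 64, 224, 224)
```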