As with regular tensors, you can use Python arithmetic and comparison operators to perform elementwise operations. For more information, see the **Overloaded operators** section below.

```python
print(digits + 3)
print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))
```
If you need to perform an elementwise transformation of a `RaggedTensor`'s values, you can use `tf.ragged.map_flat_values`, which takes a function plus one or more arguments, and applies the function to transform the `RaggedTensor`'s values.

```python
times_two_plus_one = lambda x: x * 2 + 1
print(tf.ragged.map_flat_values(times_two_plus_one, digits))
```
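A minimal supplementary sketch (not part of the original guide cell), assuming `digits` is the ragged tensor defined earlier in the guide: `map_flat_values` is equivalent to transforming the `flat_values` property directly and reattaching the row partitions with `with_flat_values`:

```python
# Same result as tf.ragged.map_flat_values(times_two_plus_one, digits):
# transform the flat values and rebuild the tensor with the original partitions.
transformed = digits.with_flat_values(times_two_plus_one(digits.flat_values))
print(transformed)
```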
Ragged tensors can be converted to nested Python `list`s and NumPy `array`s:

```python
digits.to_list()
digits.numpy()
```
**Constructing a ragged tensor**

The simplest way to construct a ragged tensor is to use `tf.ragged.constant`, which builds the `RaggedTensor` corresponding to a given nested Python `list` or NumPy `array`:

```python
sentences = tf.ragged.constant([
    ["Let's", "build", "some", "ragged", "tensors", "!"],
    ["We", "can", "use", "tf.ragged.constant", "."]])
print(sentences)

paragraphs = tf.ragged.constant([
    [['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']],
    [['Do', 'you', 'want', 'to', 'come', 'visit'], ["I'm", 'free', 'tomorrow']],
])
print(paragraphs)
```
Ragged tensors can also be constructed by pairing a flat *values* tensor with a *row-partitioning* tensor, which indicates how those values should be divided into rows, using factory classmethods such as `tf.RaggedTensor.from_value_rowids`, `tf.RaggedTensor.from_row_lengths`, and `tf.RaggedTensor.from_row_splits`.

**`tf.RaggedTensor.from_value_rowids`**

If you know which row each value belongs to, you can build a `RaggedTensor` using a `value_rowids` row-partitioning tensor:

![value_rowids](https://tensorflow.google.cn/images/ragged_tensors/value_rowids.png)

```python
print(tf.RaggedTensor.from_value_rowids(
    values=[3, 1, 4, 1, 5, 9, 2],
    value_rowids=[0, 0, 0, 0, 2, 2, 3]))
```
**`tf.RaggedTensor.from_row_lengths`**

If you know how long each row is, you can use a `row_lengths` row-partitioning tensor:

![row_lengths](https://tensorflow.google.cn/images/ragged_tensors/row_lengths.png)

```python
print(tf.RaggedTensor.from_row_lengths(
    values=[3, 1, 4, 1, 5, 9, 2],
    row_lengths=[4, 0, 2, 1]))
```
**`tf.RaggedTensor.from_row_splits`**

If you know the indices where each row starts and ends, you can use a `row_splits` row-partitioning tensor:

![row_splits](https://tensorflow.google.cn/images/ragged_tensors/row_splits.png)

```python
print(tf.RaggedTensor.from_row_splits(
    values=[3, 1, 4, 1, 5, 9, 2],
    row_splits=[0, 4, 4, 6, 7]))
```
See the `tf.RaggedTensor` class documentation for a full list of factory methods.

Note: by default, these factory methods add assertions that the row-partitioning tensor is well formed and consistent with the number of values. If you can guarantee that the inputs are well formed and consistent, the `validate=False` argument can be used to skip these checks (a short sketch follows the next code cell).

**What you can store in a ragged tensor**

As with normal `Tensor`s, the values in a `RaggedTensor` must all have the same type, and they must all be at the same nesting depth (the *rank* of the tensor):

```python
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]]))  # ok: type=string, rank=2
print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]]))        # ok: type=int32, rank=3
try:
  tf.ragged.constant([["one", "two"], [3, 4]])              # bad: multiple types
except ValueError as exception:
  print(exception)
try:
  tf.ragged.constant(["A", ["B", "C"]])                     # bad: multiple nesting depths
except ValueError as exception:
  print(exception)
```
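As a minimal sketch of the `validate=False` note above (not part of the original guide cell; the partition values here are assumed to be correct, which is exactly what the flag asserts you have already verified yourself):

```python
# Skip the well-formedness assertions because this row partition is known to be valid.
rt = tf.RaggedTensor.from_row_splits(
    values=[3, 1, 4, 1, 5, 9, 2],
    row_splits=[0, 4, 4, 6, 7],
    validate=False)
print(rt)
```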
**Example use case**

The following example demonstrates how `RaggedTensor`s can be used to construct and combine unigram and bigram embeddings for a batch of variable-length queries, using special markers for the beginning and end of each sentence. For more details on the ops used in this example, see the `tf.ragged` package documentation.

```python
queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'],
                              ['Pause'],
                              ['Will', 'it', 'rain', 'later', 'today']])

# Create an embedding table.
num_buckets = 1024
embedding_size = 4
embedding_table = tf.Variable(
    tf.random.truncated_normal([num_buckets, embedding_size],
                               stddev=1.0 / math.sqrt(embedding_size)))

# Look up the embedding for each word.
word_buckets = tf.strings.to_hash_bucket_fast(queries, num_buckets)
word_embeddings = tf.nn.embedding_lookup(embedding_table, word_buckets)     # ①

# Add markers to the beginning and end of each sentence.
marker = tf.fill([queries.nrows(), 1], '#')
padded = tf.concat([marker, queries, marker], axis=1)                       # ②

# Build word bigrams & look up embeddings.
bigrams = tf.strings.join([padded[:, :-1], padded[:, 1:]], separator='+')   # ③

bigram_buckets = tf.strings.to_hash_bucket_fast(bigrams, num_buckets)
bigram_embeddings = tf.nn.embedding_lookup(embedding_table, bigram_buckets) # ④

# Find the average embedding for each sentence
all_embeddings = tf.concat([word_embeddings, bigram_embeddings], axis=1)    # ⑤
avg_embedding = tf.reduce_mean(all_embeddings, axis=1)                      # ⑥
print(avg_embedding)
```
![ragged_example](https://tensorflow.google.cn/images/ragged_tensors/ragged_example.png)

**Ragged and uniform dimensions**

A ***ragged dimension*** is a dimension whose slices may have different lengths. For example, the inner (column) dimension of `rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []]` is ragged, since the column slices (`rt[0, :]`, ..., `rt[4, :]`) have different lengths. Dimensions whose slices all have the same length are called *uniform dimensions*.

The outermost dimension of a ragged tensor is always uniform, since it consists of a single slice (so there is no possibility for differing slice lengths). The remaining dimensions may be either ragged or uniform. For example, you could store the word embeddings for each word in a batch of sentences using a ragged tensor with shape `[num_sentences, (num_words), embedding_size]`, where the parentheses around `(num_words)` indicate that the dimension is ragged.

![sent_word_embed](https://tensorflow.google.cn/images/ragged_tensors/sent_word_embed.png)

Ragged tensors may have more than one ragged dimension. For example, you could store a batch of structured text documents using a tensor with shape `[num_documents, (num_paragraphs), (num_sentences), (num_words)]` (where parentheses again indicate ragged dimensions).

As with `tf.Tensor`, the ***rank*** of a ragged tensor is its total number of dimensions (including both ragged and uniform dimensions). A ***potentially ragged tensor*** is a value that might be either a `tf.Tensor` or a `tf.RaggedTensor`.

When describing the shape of a RaggedTensor, ragged dimensions are conventionally indicated by enclosing them in parentheses. For example, as seen above, the shape of a 3-D RaggedTensor that stores word embeddings for each word in a batch of sentences can be written as `[num_sentences, (num_words), embedding_size]`.

The `RaggedTensor.shape` attribute returns a `tf.TensorShape` for a ragged tensor, where ragged dimensions have size `None`:

```python
tf.ragged.constant([["Hi"], ["How", "are", "you"]]).shape
```
The method `tf.RaggedTensor.bounding_shape` can be used to find a tight bounding shape for a given `RaggedTensor`:

```python
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]]).bounding_shape())
```
**Ragged vs sparse tensors**

A ragged tensor should *not* be thought of as a type of sparse tensor. In particular, sparse tensors are *efficient encodings of `tf.Tensor`* that model the same data in a compact format; a ragged tensor is an *extension of `tf.Tensor`* that models an expanded class of data. This difference is crucial when defining operations:

- Applying an op to a sparse or dense tensor should always give the same result.
- Applying an op to a ragged or sparse tensor may give different results.

As an illustrative example, consider how array operations such as `concat`, `stack`, and `tile` are defined for ragged vs sparse tensors. Concatenating ragged tensors joins each row to form a single row with the combined length:

![ragged_concat](https://tensorflow.google.cn/images/ragged_tensors/ragged_concat.png)

```python
ragged_x = tf.ragged.constant([["John"], ["a", "big", "dog"], ["my", "cat"]])
ragged_y = tf.ragged.constant([["fell", "asleep"], ["barked"], ["is", "fuzzy"]])
print(tf.concat([ragged_x, ragged_y], axis=1))
```
But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors, as illustrated by the following example (where Ø indicates missing values):

![sparse_concat](https://tensorflow.google.cn/images/ragged_tensors/sparse_concat.png)

```python
sparse_x = ragged_x.to_sparse()
sparse_y = ragged_y.to_sparse()
sparse_result = tf.sparse.concat(sp_inputs=[sparse_x, sparse_y], axis=1)
print(tf.sparse.to_dense(sparse_result, ''))
```
For another example of why this distinction is important, consider the definition of "the mean value of each row" for an op such as `tf.reduce_mean`. For a ragged tensor, the mean value of a row is the sum of the row's values divided by the row's width. But for a sparse tensor, the mean value of a row is the sum of the row's values divided by the sparse tensor's overall width (which is greater than or equal to the width of the longest row).

**TensorFlow APIs**

**Keras**

[tf.keras](https://tensorflow.google.cn/guide/keras) is TensorFlow's high-level API for building and training deep learning models. Ragged tensors may be passed as inputs to a Keras model by setting `ragged=True` on `tf.keras.Input` or `tf.keras.layers.InputLayer`. Ragged tensors may also be passed between Keras layers, and returned by Keras models. The following example shows a small LSTM model that is trained using ragged tensors.

```python
# Task: predict whether each sentence is a question or not.
sentences = tf.constant(
    ['What makes you think she is a witch?',
     'She turned me into a newt.',
     'A newt?',
     'Well, I got better.'])
is_question = tf.constant([True, False, True, False])

# Preprocess the input strings.
hash_buckets = 1000
words = tf.strings.split(sentences, ' ')
hashed_words = tf.strings.to_hash_bucket_fast(words, hash_buckets)

# Build the Keras model.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=[None], dtype=tf.int64, ragged=True),
    tf.keras.layers.Embedding(hash_buckets, 16),
    tf.keras.layers.LSTM(32, use_bias=False),
    tf.keras.layers.Dense(32),
    tf.keras.layers.Activation(tf.nn.relu),
    tf.keras.layers.Dense(1)
])

keras_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
keras_model.fit(hashed_words, is_question, epochs=5)
print(keras_model.predict(hashed_words))
```
**tf.Example**

[tf.Example](https://tensorflow.google.cn/tutorials/load_data/tfrecord) is a standard [protobuf](https://developers.google.com/protocol-buffers/) encoding for TensorFlow data. Data encoded with `tf.Example`s often includes variable-length features. For example, the following code defines a batch of four `tf.Example` messages with different feature lengths:

```python
import google.protobuf.text_format as pbtext

def build_tf_example(s):
  return pbtext.Merge(s, tf.train.Example()).SerializeToString()

example_batch = [
  build_tf_example(r'''
    features {
      feature {key: "colors" value {bytes_list {value: ["red", "blue"]} } }
      feature {key: "lengths" value {int64_list {value: [7]} } } }'''),
  build_tf_example(r'''
    features {
      feature {key: "colors" value {bytes_list {value: ["orange"]} } }
      feature {key: "lengths" value {int64_list {value: []} } } }'''),
  build_tf_example(r'''
    features {
      feature {key: "colors" value {bytes_list {value: ["black", "yellow"]} } }
      feature {key: "lengths" value {int64_list {value: [1, 3]} } } }'''),
  build_tf_example(r'''
    features {
      feature {key: "colors" value {bytes_list {value: ["green"]} } }
      feature {key: "lengths" value {int64_list {value: [3, 5, 2]} } } }''')]
```
We can parse this encoded data using `tf.io.parse_example`, which takes a tensor of serialized strings and a feature specification dictionary, and returns a dictionary mapping feature names to tensors. To read the variable-length features into ragged tensors, we simply use `tf.io.RaggedFeature` in the feature specification dictionary:

```python
feature_specification = {
    'colors': tf.io.RaggedFeature(tf.string),
    'lengths': tf.io.RaggedFeature(tf.int64),
}
feature_tensors = tf.io.parse_example(example_batch, feature_specification)
for name, value in feature_tensors.items():
  print("{}={}".format(name, value))
```
`tf.io.RaggedFeature` can also be used to read features with multiple ragged dimensions. For details, see the [API documentation](https://tensorflow.google.cn/api_docs/python/tf/io/RaggedFeature).

**Datasets**

[tf.data](https://tensorflow.google.cn/guide/data) is an API that enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is `tf.data.Dataset`, which represents a sequence of elements, where each element consists of one or more components.

```python
# Helper function used to print datasets in the examples below.
def print_dictionary_dataset(dataset):
  for i, element in enumerate(dataset):
    print("Element {}:".format(i))
    for (feature_name, feature_value) in element.items():
      print('{:>14} = {}'.format(feature_name, feature_value))
```
**Building Datasets with ragged tensors**

Datasets can be built from ragged tensors using the same methods that are used to build them from `tf.Tensor`s or NumPy `array`s, such as `Dataset.from_tensor_slices`:

```python
dataset = tf.data.Dataset.from_tensor_slices(feature_tensors)
print_dictionary_dataset(dataset)
```
Note: `Dataset.from_generator` does not support ragged tensors yet, but support will be added soon.

**Batching and unbatching Datasets with ragged tensors**

Datasets with ragged tensors can be batched (which combines *n* consecutive elements into a single element) using the `Dataset.batch` method.

```python
batched_dataset = dataset.batch(2)
print_dictionary_dataset(batched_dataset)
```
Conversely, a batched dataset can be transformed into a flat dataset using `Dataset.unbatch`.

```python
unbatched_dataset = batched_dataset.unbatch()
print_dictionary_dataset(unbatched_dataset)
```
**Batching Datasets with variable-length non-ragged tensors**

If you have a Dataset that contains non-ragged tensors whose lengths vary across elements, you can batch those non-ragged tensors into ragged tensors by applying the `dense_to_ragged_batch` transformation:

```python
non_ragged_dataset = tf.data.Dataset.from_tensor_slices([1, 5, 3, 2, 8])
non_ragged_dataset = non_ragged_dataset.map(tf.range)
batched_non_ragged_dataset = non_ragged_dataset.apply(
    tf.data.experimental.dense_to_ragged_batch(2))
for element in batched_non_ragged_dataset:
  print(element)
```
**Transforming Datasets with ragged tensors**

You can also create or transform ragged tensors in Datasets using `Dataset.map`.

```python
def transform_lengths(features):
  return {
      'mean_length': tf.math.reduce_mean(features['lengths']),
      'length_ranges': tf.ragged.range(features['lengths'])}

transformed_dataset = dataset.map(transform_lengths)
print_dictionary_dataset(transformed_dataset)
```
**tf.function**

[tf.function](https://tensorflow.google.cn/guide/function) is a decorator that precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Ragged tensors can be used transparently with `@tf.function`-decorated functions. For example, the following function works with both ragged and non-ragged tensors:

```python
@tf.function
def make_palindrome(x, axis):
  return tf.concat([x, tf.reverse(x, [axis])], axis)

make_palindrome(tf.constant([[1, 2], [3, 4], [5, 6]]), axis=1)
make_palindrome(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]), axis=1)
```
If you wish to explicitly specify the `input_signature` for the `tf.function`, you can do so using `tf.RaggedTensorSpec`.

```python
@tf.function(
    input_signature=[tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)])
def max_and_min(rt):
  return (tf.math.reduce_max(rt, axis=-1), tf.math.reduce_min(rt, axis=-1))

max_and_min(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]))
```
**Concrete functions**

[Concrete functions](https://tensorflow.google.cn/guide/function#obtaining_concrete_functions) encapsulate the individual traced graphs that are built by `tf.function`. Ragged tensors can be used transparently with concrete functions.

```python
# Preferred way to use ragged tensors with concrete functions (TF 2.3+):
try:
  @tf.function
  def increment(x):
    return x + 1

  rt = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
  cf = increment.get_concrete_function(rt)
  print(cf(rt))
except Exception as e:
  print(f"Not supported before TF 2.3: {type(e)}: {e}")
```
**SavedModel**

A [SavedModel](https://tensorflow.google.cn/guide/saved_model) is a serialized TensorFlow program, including both weights and computation. It can be built from a Keras model or from a custom model. In either case, ragged tensors can be used transparently with the functions and methods defined by a SavedModel.

**Example: saving a Keras model**

```python
import tempfile

keras_module_path = tempfile.mkdtemp()
tf.saved_model.save(keras_model, keras_module_path)
imported_model = tf.saved_model.load(keras_module_path)
imported_model(hashed_words)
```
**Example: saving a custom model**

```python
class CustomModule(tf.Module):
  def __init__(self, variable_value):
    super(CustomModule, self).__init__()
    self.v = tf.Variable(variable_value)

  @tf.function
  def grow(self, x):
    return x * self.v

module = CustomModule(100.0)

# Before saving a custom model, we must ensure that concrete functions are
# built for each input signature that we will need.
module.grow.get_concrete_function(tf.RaggedTensorSpec(shape=[None, None],
                                                      dtype=tf.float32))

custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
imported_model.grow(tf.ragged.constant([[1.0, 4.0, 3.0], [2.0]]))
```
Note: SavedModel [signatures](https://tensorflow.google.cn/guide/saved_model#specifying_signatures_during_export) are concrete functions. As discussed in the Concrete functions section above, ragged tensors are only handled correctly by concrete functions starting with TensorFlow 2.3. If you need to use SavedModel signatures in an earlier version of TensorFlow, it is recommended that you decompose the ragged tensor into its component tensors.

**Overloaded operators**

The `RaggedTensor` class overloads the standard Python arithmetic and comparison operators, making it easy to perform basic elementwise math:

```python
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]])
print(x + y)
```
Since the overloaded operators perform elementwise computations, the inputs to all binary operations must have the same shape, or be broadcastable to the same shape. In the simplest broadcasting case, a single scalar is combined elementwise with each value in a ragged tensor:

```python
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
print(x + 3)
```
For more advanced use cases, see the **Broadcasting** section. Ragged tensors overload the same set of operators as normal `Tensor`s: the unary operators `-`, `~`, and `abs()`; and the binary operators `+`, `-`, `*`, `/`, `//`, `%`, `**`, `&`, `|`, `^`, `==`, `<`, `<=`, `>`, and `>=`.

**Indexing**

Ragged tensors support Python-style indexing, including multidimensional indexing and slicing. The following examples demonstrate ragged tensor indexing with a 2-D and a 3-D ragged tensor.

**Indexing examples: 2-D ragged tensor**

```python
queries = tf.ragged.constant(
    [['Who', 'is', 'George', 'Washington'],
     ['What', 'is', 'the', 'weather', 'tomorrow'],
     ['Goodnight']])

print(queries[1])        # A single query
print(queries[1, 2])     # A single word
print(queries[1:])       # Everything but the first row
print(queries[:, :3])    # The first 3 words of each query
print(queries[:, -2:])   # The last 2 words of each query
```
**Indexing examples: 3-D ragged tensor**

```python
rt = tf.ragged.constant([[[1, 2, 3], [4]],
                         [[5], [], [6]],
                         [[7]],
                         [[8, 9], [10]]])

print(rt[1])        # Second row (2-D RaggedTensor)
print(rt[3, 0])     # First element of fourth row (1-D Tensor)
print(rt[:, 1:3])   # Items 1-3 of each row (3-D RaggedTensor)
print(rt[:, -1:])   # Last item of each row (3-D RaggedTensor)
```
`RaggedTensor`s support multidimensional indexing and slicing, with one restriction: indexing into a ragged dimension is not allowed. This case is problematic because the indicated value may exist in some rows but not in others, and it is not obvious whether we should (1) raise an `IndexError`; (2) use a default value; or (3) skip that value and return a tensor with fewer rows than we started with. Following the [guiding principles of Python](https://www.python.org/dev/peps/pep-0020/) ("In the face of ambiguity, refuse the temptation to guess"), we currently disallow this operation.

**Tensor type conversion**

The `RaggedTensor` class defines methods that can be used to convert between `RaggedTensor`s and `tf.Tensor`s or `tf.SparseTensors`:

```python
ragged_sentences = tf.ragged.constant([
    ['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']])

# RaggedTensor -> Tensor
print(ragged_sentences.to_tensor(default_value='', shape=[None, 10]))

# Tensor -> RaggedTensor
x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]]
print(tf.RaggedTensor.from_tensor(x, padding=-1))

# RaggedTensor -> SparseTensor
print(ragged_sentences.to_sparse())

# SparseTensor -> RaggedTensor
st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]],
                     values=['a', 'b', 'c'],
                     dense_shape=[3, 3])
print(tf.RaggedTensor.from_sparse(st))
```
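A small supplementary sketch (not in the original guide cell) showing that the sparse round trip preserves the ragged structure, whereas the dense round trip relies on knowing the padding value used:

```python
# Sparse round trip is lossless.
print(tf.RaggedTensor.from_sparse(ragged_sentences.to_sparse()))

# Dense round trip needs the padding value to recover the ragged rows.
dense = ragged_sentences.to_tensor(default_value='')
print(tf.RaggedTensor.from_tensor(dense, padding=''))
```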
**Evaluating ragged tensors**

To access the values in a ragged tensor, you can:

1. Use `tf.RaggedTensor.to_list()` to convert the ragged tensor to a nested Python list.
2. Use `tf.RaggedTensor.numpy()` to convert the ragged tensor to a NumPy array whose values are nested NumPy arrays.
3. Decompose the ragged tensor into its components, using the `tf.RaggedTensor.values` and `tf.RaggedTensor.row_splits` properties, or row-partitioning methods such as `tf.RaggedTensor.row_lengths()` and `tf.RaggedTensor.value_rowids()` (a short sketch of these follows the next cell).
4. Use Python indexing to select values from the ragged tensor.

```python
rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])
print("python list:", rt.to_list())
print("numpy array:", rt.numpy())
print("values:", rt.values.numpy())
print("splits:", rt.row_splits.numpy())
print("indexed value:", rt[1].numpy())
```
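The cell above covers options 1, 2, and 4 plus the `values`/`row_splits` properties; as a brief supplementary sketch (reusing the same `rt`), the row-partitioning methods named in point 3 look like this:

```python
# Alternative row-partition views of the same ragged tensor.
print("row lengths:", rt.row_lengths().numpy())    # [2, 3, 1, 0, 1]
print("value rowids:", rt.value_rowids().numpy())  # [0, 0, 1, 1, 1, 2, 4]
```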
**Broadcasting**

Broadcasting is the process of making tensors with different shapes have compatible shapes for elementwise operations. For more background on broadcasting, see:

- [NumPy: Broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
- `tf.broadcast_dynamic_shape`
- `tf.broadcast_to`

The basic steps for broadcasting two inputs `x` and `y` to have compatible shapes are:

1. If `x` and `y` do not have the same number of dimensions, add outer dimensions (of size 1) until they do.
2. For each dimension where `x` and `y` have different sizes:
   - If `x` or `y` has size `1` in dimension `d`, repeat its values across dimension `d` to match the other input's size.
   - Otherwise, raise an exception (`x` and `y` are not broadcast compatible).

Here, the size of a tensor in a uniform dimension is a single number (the size of the slices across that dimension), and the size of a tensor in a ragged dimension is a list of slice lengths (for all slices across that dimension).

**Broadcasting examples**

```python
# x       (2D ragged):  2 x (num_rows)
# y       (scalar)
# result  (2D ragged):  2 x (num_rows)
x = tf.ragged.constant([[1, 2], [3]])
y = 3
print(x + y)

# x       (2d ragged):  3 x (num_rows)
# y       (2d tensor):  3 x          1
# Result  (2d ragged):  3 x (num_rows)
x = tf.ragged.constant(
    [[10, 87, 12],
     [19, 53],
     [12, 32]])
y = [[1000], [2000], [3000]]
print(x + y)

# x       (3d ragged):  2 x (r1) x 2
# y       (2d ragged):         1 x 1
# Result  (3d ragged):  2 x (r1) x 2
x = tf.ragged.constant(
    [[[1, 2], [3, 4], [5, 6]],
     [[7, 8]]],
    ragged_rank=1)
y = tf.constant([[10]])
print(x + y)

# x       (3d ragged):  2 x (r1) x (r2) x 1
# y       (1d tensor):                    3
# Result  (3d ragged):  2 x (r1) x (r2) x 3
x = tf.ragged.constant(
    [
        [
            [[1], [2]],
            [],
            [[3]],
            [[4]],
        ],
        [
            [[5], [6]],
            [[7]]
        ]
    ],
    ragged_rank=2)
y = tf.constant([10, 20, 30])
print(x + y)
```
Here are some examples of shapes that do not broadcast:

```python
# x      (2d ragged): 3 x (r1)
# y      (2d tensor): 3 x    4  # trailing dimensions do not match
x = tf.ragged.constant([[1, 2], [3, 4, 5, 6], [7]])
y = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
try:
  x + y
except tf.errors.InvalidArgumentError as exception:
  print(exception)

# x      (2d ragged): 3 x (r1)
# y      (2d ragged): 3 x (r2)  # ragged dimensions do not match.
x = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
y = tf.ragged.constant([[10, 20], [30, 40], [50]])
try:
  x + y
except tf.errors.InvalidArgumentError as exception:
  print(exception)

# x      (3d ragged): 3 x (r1) x 2
# y      (3d ragged): 3 x (r1) x 3  # trailing dimensions do not match
x = tf.ragged.constant([[[1, 2], [3, 4], [5, 6]],
                        [[7, 8], [9, 10]]])
y = tf.ragged.constant([[[1, 2, 0], [3, 4, 0], [5, 6, 0]],
                        [[7, 8, 0], [9, 10, 0]]])
try:
  x + y
except tf.errors.InvalidArgumentError as exception:
  print(exception)
```
**RaggedTensor encoding**

Ragged tensors are encoded using the `RaggedTensor` class. Internally, each `RaggedTensor` consists of:

- A `values` tensor, which concatenates the variable-length rows into a flattened list.
- A `row_partition`, which indicates how those flattened values are divided into rows.

![ragged_encoding_2](https://tensorflow.google.cn/images/ragged_tensors/ragged_encoding_2.png)

The `row_partition` can be stored using four different encodings:

- `row_splits` is an integer vector specifying the split points between rows.
- `value_rowids` is an integer vector specifying the row index of each value.
- `row_lengths` is an integer vector specifying the length of each row.
- `uniform_row_length` is an integer scalar specifying a single length for all rows.

![partition_encodings](https://tensorflow.google.cn/images/ragged_tensors/partition_encodings.png)

An integer scalar `nrows` can also be included in the `row_partition` encoding, to account for empty trailing rows with `value_rowids` or empty rows with `uniform_row_length`.

```python
rt = tf.RaggedTensor.from_row_splits(
    values=[3, 1, 4, 1, 5, 9, 2],
    row_splits=[0, 4, 4, 6, 7])
print(rt)
```
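A brief illustrative sketch (not part of the original guide cell) showing that the same logical tensor can be built with the other partition encodings listed above; both constructions below are equal to the `rt` printed in the previous cell:

```python
# row_lengths encoding of the same tensor.
print(tf.RaggedTensor.from_row_lengths(
    values=[3, 1, 4, 1, 5, 9, 2], row_lengths=[4, 0, 2, 1]))

# value_rowids encoding; nrows is optional here, but is required to
# represent empty trailing rows with this encoding.
print(tf.RaggedTensor.from_value_rowids(
    values=[3, 1, 4, 1, 5, 9, 2], value_rowids=[0, 0, 0, 0, 2, 2, 3], nrows=4))
```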
The choice of which encoding to use for row partitions is managed internally by ragged tensors to improve efficiency in some contexts. In particular, some of the advantages and disadvantages of the different row-partitioning schemes are:

- **Efficient indexing**: The `row_splits` encoding enables constant-time indexing and slicing into ragged tensors.
- **Efficient concatenation**: The `row_lengths` encoding is more efficient when concatenating ragged tensors, since row lengths do not change when two tensors are concatenated together.
- **Small encoding size**: The `value_rowids` encoding is more efficient when storing ragged tensors that have a large number of empty rows, since the size of the tensor depends only on the total number of values. On the other hand, the `row_splits` and `row_lengths` encodings are more efficient when storing ragged tensors with longer rows, since they require only one scalar value per row.
- **Compatibility**: The `value_rowids` scheme matches the [segmentation](https://tensorflow.google.cn/api_docs/python/tf/math#about_segmentation) format used by ops such as `tf.math.segment_sum`. The `row_limits` scheme matches the format used by ops such as `tf.sequence_mask`.
- **Uniform dimensions**: As discussed below, the `uniform_row_length` encoding is used to encode ragged tensors with uniform dimensions.

**Multiple ragged dimensions**

A ragged tensor with multiple ragged dimensions is encoded by using a nested `RaggedTensor` for the `values` tensor. Each nested `RaggedTensor` adds a single ragged dimension.

![ragged_rank_2](https://tensorflow.google.cn/images/ragged_tensors/ragged_rank_2.png)

```python
rt = tf.RaggedTensor.from_row_splits(
    values=tf.RaggedTensor.from_row_splits(
        values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
        row_splits=[0, 3, 3, 5, 9, 10]),
    row_splits=[0, 1, 1, 5])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
```
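To make the nesting described above concrete, here is a small supplementary sketch (reusing the `rt` from the cell above) that peels off one partition at a time via the `values` property:

```python
# rt.values is itself a RaggedTensor (one fewer ragged dimension);
# rt.values.values is the flat 1-D values tensor.
print(type(rt.values).__name__)   # RaggedTensor
print(rt.values)
print(rt.values.values)           # the innermost (flat) values
```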
The factory function `tf.RaggedTensor.from_nested_row_splits` can be used to construct a RaggedTensor with multiple ragged dimensions directly, by providing a list of `row_splits` tensors:

```python
rt = tf.RaggedTensor.from_nested_row_splits(
    flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
    nested_row_splits=([0, 1, 1, 5], [0, 3, 3, 5, 9, 10]))
print(rt)
```
**Ragged rank and flat values**

A ragged tensor's ***ragged rank*** is the number of times that the underlying `values` tensor has been partitioned (i.e. the nesting depth of `RaggedTensor` objects). The innermost `values` tensor is called its ***flat_values***. In the following example, `conversations` has ragged_rank=3, and its `flat_values` is a 1-D `Tensor` with 24 strings:

```python
# shape = [batch, (paragraph), (sentence), (word)]
conversations = tf.ragged.constant(
    [[[["I", "like", "ragged", "tensors."]],
      [["Oh", "yeah?"], ["What", "can", "you", "use", "them", "for?"]],
      [["Processing", "variable", "length", "data!"]]],
     [[["I", "like", "cheese."], ["Do", "you?"]],
      [["Yes."], ["I", "do."]]]])
conversations.shape

assert conversations.ragged_rank == len(conversations.nested_row_splits)
conversations.ragged_rank  # Number of partitioned dimensions.

conversations.flat_values.numpy()
```
**Uniform inner dimensions**

Ragged tensors with uniform inner dimensions are encoded by using a multidimensional `tf.Tensor` for the flat_values (i.e. the innermost `values`).

![uniform_inner](https://tensorflow.google.cn/images/ragged_tensors/uniform_inner.png)

```python
rt = tf.RaggedTensor.from_row_splits(
    values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]],
    row_splits=[0, 3, 4, 6])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
print("Flat values shape: {}".format(rt.flat_values.shape))
print("Flat values:\n{}".format(rt.flat_values))
```
**Uniform non-inner dimensions**

Ragged tensors with uniform non-inner dimensions are encoded by partitioning the rows with `uniform_row_length`.

![uniform_outer](https://tensorflow.google.cn/images/ragged_tensors/uniform_outer.png)

```python
rt = tf.RaggedTensor.from_uniform_row_length(
    values=tf.RaggedTensor.from_row_splits(
        values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
        row_splits=[0, 3, 5, 9, 10]),
    uniform_row_length=2)
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of partitioned dimensions: {}".format(rt.ragged_rank))
```
**Data Space Report: Pittsburgh Bridges Data Set**

Andy Warhol Bridge - Pittsburgh. Report created by student Francesco Maria Chiarlo, s253666, for A.A. 2019/2020.

**Abstract**: The aim of this report is to evaluate the effectiveness of distinct statistical learning approaches, focusing in particular on their characteristics as well as on their advantages and drawbacks when applied to a relatively small dataset such as the one employed within this report, the Pittsburgh Bridges dataset.

**Key words**: Statistical Learning, Machine Learning, Bridge Design.

TOC:
* [Imports Section](imports-section)
* [Dataset's Attributes Description](attributes-description)
* [Data Preparation and Investigation](data-preparation)
* [Learning Models](learning-models)
* [Improvements and Conclusions](improvements-and-conclusions)
* [References](references)

**Imports Section**

```python
# =========================================================================== #
# STANDARD IMPORTS
# =========================================================================== #
print(__doc__)

from pprint import pprint

import warnings
warnings.filterwarnings('ignore')

import copy
import os
import sys
import time

import pandas as pd
import numpy as np

%matplotlib inline

# Matplotlib pyplot provides plotting API
import matplotlib as mpl
from matplotlib import pyplot as plt
import chart_studio.plotly.plotly as py
import seaborn as sns; sns.set()

# =========================================================================== #
# UTILS IMPORTS (Done by myself)
# =========================================================================== #
from utils.display_utils import *
from utils.preprocessing_utils import *
from utils.training_utils import *
from utils.training_utils_v2 import fit_by_n_components, fit_all_by_n_components
from itertools import islice

# =========================================================================== #
# sklearn IMPORT
# =========================================================================== #
from sklearn.decomposition import PCA, KernelPCA

# Import scikit-learn classes: models (Estimators).
from sklearn.naive_bayes import GaussianNB            # Non-parametric Generative Model
from sklearn.naive_bayes import MultinomialNB         # Non-parametric Generative Model
from sklearn.linear_model import LinearRegression     # Parametric Linear Discriminative Model
from sklearn.linear_model import LogisticRegression   # Parametric Linear Discriminative Model
from sklearn.linear_model import Ridge, Lasso
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC                           # Parametric Linear Discriminative "Support Vector Classifier"
from sklearn.tree import DecisionTreeClassifier       # Non-parametric Model
from sklearn.ensemble import BaggingClassifier        # Non-parametric Model (Meta-Estimator, i.e. an Ensemble Method)
from sklearn.ensemble import RandomForestClassifier   # Non-parametric Model (Meta-Estimator, i.e. an Ensemble Method)
```
**Dataset's Attributes Description**

The analyses that I aim to accomplish, using the methods and approaches provided by both the Statistical Learning and Machine Learning fields, concern the Pittsburgh Bridges dataset; what follows is an overview and brief description of its main characteristics, as well as basic information about the dataset.

The Pittsburgh Bridges dataset is available from the *UCI Machine Learning Repository*, a well-known web site that makes a large number of datasets from different domains and fields available for machine-learning research, and which has been cited in peer-reviewed academic journals. In particular, the dataset I am going to treat and analyze has been made freely available by the Western Pennsylvania Regional Data Center (WPRDC), a project led by the University Center for Social and Urban Research (UCSUR) at the University of Pittsburgh ("University") in collaboration with the City of Pittsburgh and the County of Allegheny in Pennsylvania. The WPRDC and the WPRDC Project are supported by a grant from the Richard King Mellon Foundation.

To be more precise, according to the official and dedicated web page within the UCI Machine Learning site, the Pittsburgh Bridges dataset was created from the work of the following co-authors:
- Yoram Reich & Steven J. Fenves, Department of Civil Engineering and Engineering Design Research Center, Carnegie Mellon University, Pittsburgh, PA 15213

The Pittsburgh Bridges dataset is made up of 108 distinct observations, and each data sample is described by 12 attributes or features, some of which are considered continuous properties and others categorical or nominal properties. Those variables are the following:

- **RIVER**: a nominal variable that can assume the discrete values A, M, O, where A stands for the Allegheny river, M for the Monongahela river, and O for the Ohio river.
- **LOCATION**: also a nominal variable, taking a positive integer value from 1 up to 52 used as a categorical attribute.
- **ERECTED**: which might be treated as either a numerical or a categorical variable, depending on whether we want to aggregate ranges of values into categories. Basically, this attribute is a year ranging from 1818 up to 1986, but we may aggregate these dates into one of the suggested categories: CRAFTS, EMERGING, MATURE, MODERN.
- **PURPOSE**: a categorical attribute representing the reason why a particular bridge was built, i.e. what kind of vehicle can cross the bridge, or whether the bridge was made only for pedestrians. The allowed values for this attribute are WALK, AQUEDUCT, RR, HIGHWAY. Three out of four are self-explanatory, while RR, which might be tricky at first glance, simply stands for railroad.
- **LENGTH**: the bridge's length. It is a numerical attribute if we just look at the real values, which range from 804 up to 4558, but we can again decide to group such values into ranges mapped to SHORT, MEDIUM, LONG, so that we can refer to a bridge's length by means of these new categorical values.
- **LANES**: a categorical variable represented by the numerical values 1, 2, 4, 6, which indicate the number of distinct lanes that a bridge in the city of Pittsburgh may have. The larger the value, the wider the bridge.
- **CLEAR-G**: specifies whether a vertical navigation clearance requirement was enforced in the design or not.
- **T-OR-D**: a nominal (categorical) attribute that can assume the values THROUGH or DECK. More precisely, this attribute deals with the structural elements of a bridge. A deck is the surface of a bridge, and this element of a bridge's superstructure may be constructed of concrete, steel, open grating, or wood. On the other hand, a through arch bridge, also known as a half-through arch bridge or a through-type arch bridge, is a bridge made from materials such as steel or reinforced concrete, in which the base of the arch structure is below the deck but the top rises above it.
- **MATERIAL**: a categorical or nominal variable describing the main or core material used to build the bridge. It can assume one of the following values: WOOD, IRON, STEEL. Furthermore, we expect to see some correlation between the values assumed by the T-OR-D and MATERIAL columns when looking at them together.
- **SPAN**: a categorical or nominal attribute recorded with three possible values: SHORT, MEDIUM, LONG. In structural engineering, the span is the distance between two intermediate supports of a structure, e.g. a beam or a bridge. A span can be closed by a solid beam or by a rope; the first kind is used for bridges, the second for power lines, overhead telecommunication lines, some types of antennas, or aerial tramways.
- **REL-L**: a categorical or nominal variable standing for the relative length of the main span of the bridge with respect to the total crossing length; it can assume the three possible values S, S-F, F.
- Lastly, **TYPE**: a categorical or nominal attribute indicating what type of bridge each record represents, among 6 possible classes: WOOD, SUSPEN, SIMPLE-T, ARCH, CANTILEV, CONT-T.

**Data Preparation and Investigation**

The aim of this chapter is to dig into the data available within the Pittsburgh Bridges dataset, in order to investigate in more detail the main high-level statistics, such as the mean, median, and standard deviation of each attribute, and to display the data distribution of each attribute by means of histogram plots. This phase allows us to decide which feature should be selected as the target variable, in other words the attribute that will represent the dependent variable, while the remaining attributes will play the role of predictors, i.e. independent variables.

In order to investigate and explore our data we make use of the *pandas library*. Recall that, in computer programming, pandas is a software library written for the Python programming language for *data manipulation and analysis*. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software, and an interesting fact about this tool is that its name is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals. We also note that, as the analysis proceeds, we will introduce other programming tools and libraries that enable us to fulfill our goals.

Initially, once I have downloaded the dataset with the data samples about Pittsburgh bridges from the provided web page, I load the data using pandas functions. The overall set of data points amounts to 108 records or rows, sorted by the ERECTED attribute, from the oldest bridge, built in 1818, up to the most modern bridge, erected in 1986. We then display the first 5 rows to get an overview and a first idea of what is inside the overall dataset; the result we obtain by applying the head() function to the fetched dataset is the following:

```python
# =========================================================================== #
# READ INPUT DATASET
# =========================================================================== #
dataset_path = 'C:\\Users\\Francesco\\Documents\\datasets\\pittsburgh_dataset'
dataset_name = 'bridges.data.csv'

# column_names = ['IDENTIF', 'RIVER', 'LOCATION', 'ERECTED', 'PURPOSE', 'LENGTH', 'LANES', 'CLEAR-G', 'T-OR-D', 'MATERIAL', 'SPAN', 'REL-L', 'TYPE']
column_names = ['RIVER', 'LOCATION', 'ERECTED', 'PURPOSE', 'LENGTH', 'LANES', 'CLEAR-G', 'T-OR-D', 'MATERIAL', 'SPAN', 'REL-L', 'TYPE']
dataset = pd.read_csv(os.path.join(dataset_path, dataset_name), names=column_names, index_col=0)

# SHOW SOME STANDARD DATASET INFOS
# --------------------------------------------------------------------------- #
print('Dataset shape: {}'.format(dataset.shape))
print(dataset.info())

# SHOWING FIRSTS N-ROWS AS THEY ARE STORED WITHIN DATASET
# --------------------------------------------------------------------------- #
dataset.head(5)
```
What we can notice from the table above is that some attributes contain the special character '?', which stands for a missing value: there was no way to obtain the value of that attribute, as happens for the LENGTH and SPAN attributes. Analyzing the dataset in more detail, we discover that there are up to 6 different attributes, mostly of a categorical or nominal nature, namely CLEAR-G, T-OR-D, MATERIAL, SPAN, REL-L, and TYPE, that contain at least one row in which the attribute is set to the '?' value, which, as we already know, stands for a missing value.

Here, we can follow different strategies, depending on the level of complexity as well as the accuracy we want to achieve for the models we are going to fit to the data after having correctly preprocessed them, with regard to what we could do with missing values. One can follow the simplest approach and decide to simply discard the rows that contain at least one attribute with a missing value represented by the '?' symbol. Otherwise, one may decide to follow a different strategy that aims at keeping the rows that have some missing values, by means of some technique that provides a suitable substitute for the missing value (an illustrative sketch of this alternative follows the next code cell).

So, in the setting of our analyses, we start by simply leaving out the rows that contain at least one attribute with a missing value. This choice reduces the size of our dataset from 108 records to 70 remaining samples, a drop of 38 data examples, which may affect the final results, since we leave out roughly 35% of the data because of missing values.

```python
# INVESTIGATING DATASET IN ORDER TO DETECT NULL VALUES
# --------------------------------------------------------------------------- #
print('Before preprocessing dataset and handling null values')
result = dataset.isnull().values.any()
print('There are any null values ? Response: {}'.format(result))

result = dataset.isnull().sum()
print('Number of null values for each predictor:\n{}'.format(result))

# DISCOVERING VALUES WITHIN EACH PREDICTOR DOMAIN
# --------------------------------------------------------------------------- #
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION', 'LANES']
# columns_2_avoid = None
list_columns_2_fix = show_categorical_predictor_values(dataset, columns_2_avoid)

# FIXING, UPDATING NULL VALUES CODED AS '?' SYMBOL
# WITHIN EACH CATEGORICAL VARIABLE, IF DETECTED ANY
# --------------------------------------------------------------------------- #
print('"Before" removing \'?\' rows, Dataset dim:', dataset.shape)
for _, predictor in enumerate(list_columns_2_fix):
    dataset = dataset[dataset[predictor] != '?']
print('"After" removing \'?\' rows, Dataset dim: ', dataset.shape)
print('-' * 50)

_ = show_categorical_predictor_values(dataset, columns_2_avoid)

# INTERMEDIATE RESULT FOUND
# --------------------------------------------------------------------------- #
preprocess_categorical_variables(dataset, columns_2_avoid)
print(dataset.info())
dataset.head(5)
```
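As a hedged sketch of the alternative strategy mentioned above (keeping rows with missing values by imputing them), one could use scikit-learn's `SimpleImputer` with a most-frequent strategy on a fresh copy of the raw data. This is an illustration only, reusing `dataset_path`, `dataset_name`, and `column_names` from the loading cell; it is not the path followed in the rest of the report:

```python
from sklearn.impute import SimpleImputer

# Illustrative only: impute '?' placeholders with the most frequent value
# per column instead of dropping the affected rows.
raw = pd.read_csv(os.path.join(dataset_path, dataset_name), names=column_names, index_col=0)
raw = raw.replace('?', np.nan)
imputer = SimpleImputer(strategy='most_frequent')
imputed = pd.DataFrame(imputer.fit_transform(raw), columns=raw.columns, index=raw.index)
print(imputed.shape)                  # all 108 rows retained
print(imputed.isnull().sum().sum())   # 0 missing values remain
```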
The next step is the effort of mapping categorical variables into numerical variables, so that they become comparable with the already existing numerical or continuous variables. Mapping the categorical variables into numerical variables also enables us to perform some kind of normalization or transformation of the entire dataset, in order to let some machine learning algorithms work better or take advantage of normalized data within our preprocessed dataset. Furthermore, by first transforming the categorical attributes into a continuous version, we are also able to calculate the *heatmap*, which is a very useful way of representing the correlation matrix calculated on the whole dataset. Moreover, we display the data distribution of each attribute by means of histograms, to obtain useful information about the number of occurrences of each possible value, in particular for those attributes that have a categorical nature.

```python
# MAP NUMERICAL VALUES TO INTEGER VALUES
# --------------------------------------------------------------------------- #
print('Before', dataset.shape)
columns_2_map = ['ERECTED', 'LANES']
for _, predictor in enumerate(columns_2_map):
    dataset = dataset[dataset[predictor] != '?']
    dataset[predictor] = np.array(list(map(lambda x: int(x), dataset[predictor].values)))
print('After', dataset.shape)
print(dataset.info())
# print(dataset.head(5))

# MAP NUMERICAL VALUES TO FLOAT VALUES
# --------------------------------------------------------------------------- #
# print('Before', dataset.shape)
columns_2_map = ['LOCATION', 'LANES', 'LENGTH']
for _, predictor in enumerate(columns_2_map):
    dataset = dataset[dataset[predictor] != '?']
    dataset[predictor] = np.array(list(map(lambda x: float(x), dataset[predictor].values)))
# print('After', dataset.shape)
# print(dataset.info())
# print(dataset.head(5))

# columns_2_avoid = None
# list_columns_2_fix = show_categorical_predictor_values(dataset, None)

result = dataset.isnull().values.any()
# print('After handling null values\nThere are any null values ? Response: {}'.format(result))
result = dataset.isnull().sum()
# print('Number of null values for each predictor:\n{}'.format(result))

dataset.head(5)
dataset.describe(include='all')

# sns.pairplot(dataset, hue='T-OR-D', size=1.5)

columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION']
target_col = 'T-OR-D'

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='RIVER', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='RIVER', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='T-OR-D', columns_2_avoid=columns_2_avoid)

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='CLEAR-G', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='CLEAR-G', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='SPAN', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='SPAN', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='MATERIAL', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='MATERIAL', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='REL-L', columns_2_avoid=columns_2_avoid)

# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='TYPE', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='TYPE', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')

corr_result = dataset.corr()
# corr_result.head(corr_result.shape[0])
display_heatmap(corr_result)
# show_histograms_from_heatmap_corr_matrix(corr_result, row_names=dataset.columns)

# Make distinction between Target Variable and Predictors
# --------------------------------------------------------------------------- #
columns = dataset.columns   # List of all attribute names
target_col = 'T-OR-D'       # Target variable name

# Get Target values and map to 0s and 1s
y = np.array(list(map(lambda x: 0 if x == 1 else 1, dataset[target_col].values)))
print(f'Summary about Target Variable {target_col}')
print('-' * 50)
print(dataset['T-OR-D'].value_counts())

# Get Predictors
X = dataset.loc[:, dataset.columns != target_col].values

# Standardizing the features
# --------------------------------------------------------------------------- #
scaler_methods = ['minmax', 'standard', 'norm']
scaler_method = 'standard'
rescaledX = preprocessing_data_rescaling(scaler_method, X)
```

Output: `shape features matrix X, after normalizing: (70, 11)`
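A hedged sketch of what the `'standard'` option of the custom `preprocessing_data_rescaling` helper presumably does (the helper's implementation is not shown in this section, so this is an assumption): standardize each feature to zero mean and unit variance, which can be reproduced with scikit-learn's `StandardScaler`:

```python
from sklearn.preprocessing import StandardScaler

# Presumed equivalent of preprocessing_data_rescaling('standard', X):
# per-feature zero mean and unit variance.
rescaledX_check = StandardScaler().fit_transform(X)
print(rescaledX_check.shape)  # (70, 11), matching the output above
```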
**Principal Component Analysis**

After having investigated the data points inside the dataset, I move on to another section of the report, where I explore the examples that make up the entire dataset using a particular technique from the field of statistical analysis, namely Principal Component Analysis. The major objective of this section is to understand whether it is possible to transform, by means of a linear transformation given by a mathematical calculation, the original data examples into a reprojected representation that allows me to retrieve the most useful information to be later exploited at training time. So, let us dive into what Principal Component Analysis is and what its main concepts, pros and cons are.

Firstly, we know that **Principal Component Analysis**, or PCA for short, is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called *principal components*. This transformation is defined in such a way that:
- the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible),
- and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components.

The resulting vectors, each being a linear combination of the variables and containing n observations, form an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. PCA is mostly used as a tool in *exploratory data analysis* and for making predictive models, which is why I use this technique here, before going through the different learning techniques for producing my models.

**Several different implementations**

From theory and from research in statistics, we know that there are several different implementations and ways of computing principal component analysis, and each adopted technique has different performance as well as numerical stability. The three major derivations are:
- PCA by means of an iterative procedure that extracts the principal components one after the other, selecting each time the one that accounts for most of the variance along its own axis, within the remaining subspace to be derived.
- The second possible way of performing PCA is via the calculation of the *covariance matrix* of the attributes, i.e. our independent predictive variables, used to represent the data points.
- Lastly, there is the technique known as *Singular Value Decomposition*, applied to the overall data points within our dataset.

Reading the scikit-learn documentation, I discovered that its PCA implementation uses the *LAPACK implementation* of the *full SVD* or a *randomized truncated SVD* by the method of *Halko et al. 2009*, depending on the shape of the input data and the number of components to extract. Therefore I will mainly describe that way of deriving the method, while the others will be described more briefly and roughly.

**PCA's iterative method**

Going in order, as outlined above, I start by describing PCA obtained by means of an iterative procedure that extracts one new principal component at a time, exploiting the data points at hand. We begin by recalling that PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance of some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.

We suppose we deal with a data matrix X, with column-wise zero empirical mean, where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature.

From a mathematical point of view, the transformation is defined by a set of p-dimensional vectors of weights or coefficients $\mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}$ that map each row vector $\mathbf{x}_{(i)}$ of X to a new vector of principal component scores ${\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}$, given by ${\displaystyle {t_{k}}_{(i)}=\mathbf {x} _{(i)}\cdot \mathbf {w} _{(k)}\qquad \mathrm {for} \qquad i=1,\dots ,n\qquad k=1,\dots ,l}$.

In this way all the individual variables ${\displaystyle t_{1},\dots ,t_{l}}$ of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector. More precisely, in order to maximize variance, the first component has to satisfy

${\displaystyle \mathbf {w} _{(1)}={\underset {\Vert \mathbf {w} \Vert =1}{\operatorname {\arg \,max} }}\,\left\{\sum _{i}\left(t_{1}\right)_{(i)}^{2}\right\}={\underset {\Vert \mathbf {w} \Vert =1}{\operatorname {\arg \,max} }}\,\left\{\sum _{i}\left(\mathbf {x} _{(i)}\cdot \mathbf {w} \right)^{2}\right\}}$

So, with $w_{(1)}$ found, the first principal component of a data vector $x_{(i)}$ can then be given as a score $t_{1(i)} = x_{(i)} \cdot w_{(1)}$ in the transformed coordinates, or as the corresponding vector in the original variables, $(x_{(i)} \cdot w_{(1)})w_{(1)}$.

The remaining components are computed as follows. The kth component can be found by subtracting the first k − 1 principal components from X:

- ${\displaystyle \mathbf {\hat {X}} _{k}=\mathbf {X} -\sum _{s=1}^{k-1}\mathbf {X} \mathbf {w} _{(s)}\mathbf {w} _{(s)}^{\rm {T}}}$
- and then finding the weight vector that extracts the maximum variance from this new data matrix: ${\mathbf {w}}_{{(k)}}={\underset {\Vert {\mathbf {w}}\Vert =1}{\operatorname {arg\,max}}}\left\{\Vert {\mathbf {{\hat {X}}}}_{{k}}{\mathbf {w}}\Vert ^{2}\right\}={\operatorname {\arg \,max}}\,\left\{{\tfrac {{\mathbf {w}}^{T}{\mathbf {{\hat {X}}}}_{{k}}^{T}{\mathbf {{\hat {X}}}}_{{k}}{\mathbf {w}}}{{\mathbf {w}}^{T}{\mathbf {w}}}}\right\}$

It turns out that:
- from the formulas above we get the remaining eigenvectors of $X^{T}X$, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of $X^{T}X$.
- The kth principal component of a data vector $x_{(i)}$ can therefore be given as a score $t_{k(i)} = x_{(i)} \cdot w_{(k)}$ in the transformed coordinates, or as the corresponding vector in the space of the original variables, $(x_{(i)} \cdot w_{(k)}) w_{(k)}$, where $w_{(k)}$ is the kth eigenvector of $X^{T}X$.
- The full principal components decomposition of X can therefore be given as ${\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W}}$, where W is a p-by-p matrix of weights whose columns are the eigenvectors of $X^{T}X$.

**Covariance matrix for PCA analysis**

PCA performed via the covariance matrix requires the calculation of the sample covariance matrix of the dataset, $\mathbf{Q} \propto \mathbf{X}^T \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^T$. The empirical covariance matrix between the principal components then becomes ${\displaystyle \mathbf {W} ^{T}\mathbf {Q} \mathbf {W} \propto \mathbf {W} ^{T}\mathbf {W} \,\mathbf {\Lambda } \,\mathbf {W} ^{T}\mathbf {W} =\mathbf {\Lambda } }$.

**Singular Value Decomposition for PCA analysis**

Finally, the principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, ${\displaystyle \mathbf {X} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{T}}$, where more precisely:
- Σ is an n-by-p rectangular diagonal matrix of positive numbers $\sigma_{(k)}$, called the singular values of X;
- U is an n-by-n matrix whose columns are orthogonal unit vectors of length n, called the left singular vectors of X;
- W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X.

Factorizing the matrix ${X^{T}X}$, it can be written as

${\begin{aligned}\mathbf {X} ^{T}\mathbf {X} &=\mathbf {W} \mathbf {\Sigma } ^{T}\mathbf {U} ^{T}\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{T}\\&=\mathbf {W} \mathbf {\Sigma } ^{T}\mathbf {\Sigma } \mathbf {W} ^{T}\\&=\mathbf {W} \mathbf {\hat {\Sigma }} ^{2}\mathbf {W} ^{T}\end{aligned}}$

where we recall that ${\displaystyle \mathbf {\hat {\Sigma }} }$ is the square diagonal matrix with the singular values of X and the excess zeros chopped off, satisfying ${\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{T}\mathbf {\Sigma } }$. Comparison with the eigenvector factorization of $X^{T}X$ establishes that the right singular vectors W of X are equivalent to the eigenvectors of $X^{T}X$, while the singular values $\sigma_{(k)}$ of X are equal to the square roots of the eigenvalues $\lambda_{(k)}$ of $X^{T}X$.

At this point we understand that, using the singular value decomposition, the score matrix T can be written as

$\begin{align} \mathbf{T} & = \mathbf{X} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma} \end{align}$

so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T. Efficient algorithms exist to calculate the SVD, as in the scikit-learn package, without having to form the matrix $X^{T}X$, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix.
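As a small numerical check of the SVD relationship described above (this sketch is mine, not part of the report's pipeline; it assumes only `numpy` and the `PCA` class already imported in the notebook, and uses a random matrix rather than the bridges data):

```python
# Verify numerically that the PCA scores T equal both X W and U Sigma.
rng = np.random.RandomState(0)
Xc = rng.randn(20, 5)
Xc = Xc - Xc.mean(axis=0)              # column-wise zero empirical mean

U, sigma, Wt = np.linalg.svd(Xc, full_matrices=False)
T_svd = U * sigma                      # scores from the SVD (U Sigma)
T_proj = Xc @ Wt.T                     # scores as projections X W
print(np.allclose(T_svd, T_proj))      # True: T = X W = U Sigma

# scikit-learn's PCA applies a deterministic sign convention, so compare magnitudes.
scores_sklearn = PCA(n_components=5).fit_transform(Xc)
print(np.allclose(np.abs(scores_sklearn), np.abs(T_svd)))  # True (columns may differ in sign)
```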
```python
n_components = rescaledX.shape[1]
pca = PCA(n_components=n_components)
# pca = PCA(n_components=2)

# X_pca = pca.fit_transform(X)
pca = pca.fit(rescaledX)
X_pca = pca.transform(rescaledX)

print(f"Cumulative variation explained (percentage) up to given number of pcs:")

tmp_data = []
principal_components = [pc for pc in '2,5,6,7,8,9,10'.split(',')]
for _, pc in enumerate(principal_components):
    n_components = int(pc)
    cum_var_exp_up_to_n_pcs = np.cumsum(pca.explained_variance_ratio_)[n_components-1]
    # print(f"Cumulative variation explained up to {n_components} pcs = {cum_var_exp_up_to_n_pcs}")
    # print(f"# pcs {n_components}: {cum_var_exp_up_to_n_pcs*100:.2f}%")
    tmp_data.append([n_components, cum_var_exp_up_to_n_pcs * 100])

tmp_df = pd.DataFrame(data=tmp_data, columns=['# PCS', 'Cumulative Variation Explained (percentage)'])
tmp_df.head(len(tmp_data))

n_components = rescaledX.shape[1]
pca = PCA(n_components=n_components)
# pca = PCA(n_components=2)

# X_pca = pca.fit_transform(X)
pca = pca.fit(rescaledX)
X_pca = pca.transform(rescaledX)

fig = show_cum_variance_vs_components(pca, n_components)

# py.sign_in('franec94', 'QbLNKpC0EZB0kol0aL2Z')
# py.iplot(fig, filename='selecting-principal-components {}'.format(scaler_method))
```
Major Pros & Cons of PCA Learning Models
# Parameters to be tested for Cross-Validation Approach estimators_list = [GaussianNB(), LogisticRegression(), KNeighborsClassifier(), SVC(), DecisionTreeClassifier(), RandomForestClassifier()] estimators_names = ['GaussianNB', 'LogisticRegression', 'KNeighborsClassifier', 'SVC', 'DecisionTreeClassifier', 'RandomForestClassifier'] plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names)) pca_kernels_list = ['linear', 'poly', 'rbf', 'cosine',] cv_list = [10, 9, 8, 7, 6, 5, 4, 3, 2] parameters_sgd_classifier = { 'clf__loss': ('hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'), 'clf__penalty': ('l2', 'l1', 'elasticnet'), 'clf__alpha': (1e-1, 1e-2, 1e-3, 1e-4), 'clf__max_iter': (50, 100, 150, 200, 500, 1000, 1500, 2000, 2500), 'clf__learning_rate': ('optimal',), 'clf__tol': (None, 1e-2, 1e-4, 1e-5, 1e-6) } kernel_type = 'svm-rbf-kernel' parameters_svm = { 'clf__gamma': (0.003, 0.03, 0.05, 0.5, 0.7, 1.0, 1.5), 'clf__max_iter':(1e+2, 1e+3, 2 * 1e+3, 5 * 1e+3, 1e+4, 1.5 * 1e+3), 'clf__C': (1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3), } parmas_decision_tree = { 'clf__splitter': ('random', 'best'), 'clf__criterion':('gini', 'entropy'), 'clf__max_features': (None, 'auto', 'sqrt', 'log2') } parmas_random_forest = { 'clf__n_estimators': (3, 5, 7, 10, 30, 50, 70, 100, 150, 200), 'clf__criterion':('gini', 'entropy'), 'clf__bootstrap': (True, False) } model = PCA(n_components=2) model.fit(X) X_2D = model.transform(X) df = pd.DataFrame() df['PCA1'] = X_2D[:, 0] df['PCA2'] = X_2D[:, 1] df[target_col] = dataset[target_col].values sns.lmplot("PCA1", "PCA2", hue=target_col, data=df, fit_reg=False) # show_pca_1_vs_pca_2_pcaKernel(X, pca_kernels_list, target_col, dataset) # show_scatter_plots_pcaKernel(X, pca_kernels_list, target_col, dataset, n_components=12)
_____no_output_____
MIT
pittsburgh-bridges-data-set-analysis/backup/Data Space Report (Official) - Two-Dimensional Analyses-v1.0.1.ipynb
franec94/Pittsburgh-Bridge-Dataset
PCA = 2
plot_dest = os.path.join("figures", "n_comp_2_analysis") N_CV, N_KERNEL = 9, 4 assert len(cv_list) >= N_CV, f"Error: N_CV={N_CV} > len(cv_list)={len(cv_list)}" assert len(pca_kernels_list) >= N_KERNEL, f"Error: N_KERNEL={N_KERNEL} > len(pca_kernels_list)={len(pca_kernels_list)}" X = rescaledX n = len(estimators_list) # len(estimators_list) dfs_list, df_strfd = fit_all_by_n_components( estimators_list=estimators_list[:n], \ estimators_names=estimators_names[:n], \ X=X, \ y=y, \ n_components=2, \ show_plots=False, \ cv_list=cv_list[:N_CV], \ # pca_kernels_list=['linear'], pca_kernels_list=pca_kernels_list[:N_KERNEL], verbose=0 # 0=silent, 1=show informations ) df_strfd.head(df_strfd.shape[0]) # GaussianNB # ----------------------------------- dfs_list[0].head(dfs_list[0].shape[0]) pos = 0 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # LogisticRegression # ----------------------------------- dfs_list[1].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # SVC # ----------------------------------- dfs_list[2].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # DecisionTreeClassifier # ----------------------------------- dfs_list[3].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # RandomForestClassifier # ----------------------------------- dfs_list[4].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
_____no_output_____
MIT
pittsburgh-bridges-data-set-analysis/backup/Data Space Report (Official) - Two-Dimensional Analyses-v1.0.1.ipynb
franec94/Pittsburgh-Bridge-Dataset
PCA = 9
plot_dest = os.path.join("figures", "n_comp_9_analysis") n = len(estimators_list) # len(estimators_list) pos = 0 dfs_list, df_strfd = fit_all_by_n_components( estimators_list=estimators_list[:n], \ estimators_names=estimators_names[:n], \ X=X, \ y=y, \ n_components=9, \ show_plots=False, \ cv_list=cv_list[:N_CV], \ # pca_kernels_list=['linear'], pca_kernels_list=pca_kernels_list[:N_KERNEL], verbose=0 # 0=silent, 1=show informations ) df_strfd.head(df_strfd.shape[0]) # GaussianNB # ----------------------------------- dfs_list[0].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # LogisticRegression # ----------------------------------- dfs_list[1].head(dfs_list[0].shape[0]) ppos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # SVC # ----------------------------------- dfs_list[2].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # DecisionTreeClassifier # ----------------------------------- dfs_list[3].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # RandomForestClassifier # ----------------------------------- dfs_list[4].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
_____no_output_____
MIT
pittsburgh-bridges-data-set-analysis/backup/Data Space Report (Official) - Two-Dimensional Analyses-v1.0.1.ipynb
franec94/Pittsburgh-Bridge-Dataset
PCA = 12
plot_dest = os.path.join("figures", "n_comp_12_analysis") n = len(estimators_list) # len(estimators_list) pos = 0 dfs_list, df_strfd = fit_all_by_n_components( estimators_list=estimators_list[:n], \ estimators_names=estimators_names[:n], \ X=X, \ y=y, \ n_components=12, \ show_plots=False, \ cv_list=cv_list[:N_CV], \ # pca_kernels_list=['linear'], pca_kernels_list=pca_kernels_list[:N_KERNEL], verbose=0 # 0=silent, 1=show informations ) df_strfd.head(df_strfd.shape[0]) # GaussianNB # ----------------------------------- dfs_list[0].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # LogisticRegression # ----------------------------------- dfs_list[1].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # SVC # ----------------------------------- dfs_list[2].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # DecisionTreeClassifier # ----------------------------------- dfs_list[3].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) # RandomForestClassifier # ----------------------------------- dfs_list[4].head(dfs_list[0].shape[0]) pos = pos + 1 plot_name = plots_names[pos] show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name) from sklearn.metrics import f1_score y_true = [0, 1, 2, 0, 1, 2] y_pred = [0, 2, 1, 0, 0, 1] f1_score(y_true, y_pred, average='macro')
_____no_output_____
MIT
pittsburgh-bridges-data-set-analysis/backup/Data Space Report (Official) - Two-Dimensional Analyses-v1.0.1.ipynb
franec94/Pittsburgh-Bridge-Dataset
Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
# Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) N_JOBS= 3 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "svm" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution)
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Linear SVM Classification The next few code cells generate the first figures in chapter 5. The first actual code sample comes after.**Code to generate Figure 5–1. Large margin classification**
from sklearn.svm import SVC from sklearn import datasets iris = datasets.load_iris() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] setosa_or_versicolor = (y == 0) | (y == 1) X = X[setosa_or_versicolor] y = y[setosa_or_versicolor] # SVM Classifier model svm_clf = SVC(kernel="linear", C=float("inf")) svm_clf.fit(X, y) # Bad models x0 = np.linspace(0, 5.5, 200) pred_1 = 5*x0 - 20 pred_2 = x0 - 1.8 pred_3 = 0.1 * x0 + 0.5 def plot_svc_decision_boundary(svm_clf, xmin, xmax): w = svm_clf.coef_[0] b = svm_clf.intercept_[0] # At the decision boundary, w0*x0 + w1*x1 + b = 0 # => x1 = -w0/w1 * x0 - b/w1 x0 = np.linspace(xmin, xmax, 200) decision_boundary = -w[0]/w[1] * x0 - b/w[1] margin = 1/w[1] gutter_up = decision_boundary + margin gutter_down = decision_boundary - margin svs = svm_clf.support_vectors_ plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA') plt.plot(x0, decision_boundary, "k-", linewidth=2) plt.plot(x0, gutter_up, "k--", linewidth=2) plt.plot(x0, gutter_down, "k--", linewidth=2) fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True) plt.sca(axes[0]) plt.plot(x0, pred_1, "g--", linewidth=2) plt.plot(x0, pred_2, "m-", linewidth=2) plt.plot(x0, pred_3, "r-", linewidth=2) plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris versicolor") plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris setosa") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 5.5, 0, 2]) plt.sca(axes[1]) plot_svc_decision_boundary(svm_clf, 0, 5.5) plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") plt.xlabel("Petal length", fontsize=14) plt.axis([0, 5.5, 0, 2]) save_fig("large_margin_classification_plot") plt.show()
Saving figure large_margin_classification_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–2. Sensitivity to feature scales**
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64) ys = np.array([0, 0, 1, 1]) svm_clf = SVC(kernel="linear", C=100) svm_clf.fit(Xs, ys) plt.figure(figsize=(9,2.7)) plt.subplot(121) plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo") plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms") plot_svc_decision_boundary(svm_clf, 0, 6) plt.xlabel("$x_0$", fontsize=20) plt.ylabel("$x_1$    ", fontsize=20, rotation=0) plt.title("Unscaled", fontsize=16) plt.axis([0, 6, 0, 90]) from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_scaled = scaler.fit_transform(Xs) svm_clf.fit(X_scaled, ys) plt.subplot(122) plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo") plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms") plot_svc_decision_boundary(svm_clf, -2, 2) plt.xlabel("$x'_0$", fontsize=20) plt.ylabel("$x'_1$ ", fontsize=20, rotation=0) plt.title("Scaled", fontsize=16) plt.axis([-2, 2, -2, 2]) save_fig("sensitivity_to_feature_scales_plot")
Saving figure sensitivity_to_feature_scales_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Soft Margin Classification**Code to generate Figure 5–3. Hard margin sensitivity to outliers**
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]]) y_outliers = np.array([0, 0]) Xo1 = np.concatenate([X, X_outliers[:1]], axis=0) yo1 = np.concatenate([y, y_outliers[:1]], axis=0) Xo2 = np.concatenate([X, X_outliers[1:]], axis=0) yo2 = np.concatenate([y, y_outliers[1:]], axis=0) svm_clf2 = SVC(kernel="linear", C=10**9) svm_clf2.fit(Xo2, yo2) fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True) plt.sca(axes[0]) plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs") plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo") plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.annotate("Outlier", xy=(X_outliers[0][0], X_outliers[0][1]), xytext=(2.5, 1.7), ha="center", arrowprops=dict(facecolor='black', shrink=0.1), fontsize=16, ) plt.axis([0, 5.5, 0, 2]) plt.sca(axes[1]) plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs") plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo") plot_svc_decision_boundary(svm_clf2, 0, 5.5) plt.xlabel("Petal length", fontsize=14) plt.annotate("Outlier", xy=(X_outliers[1][0], X_outliers[1][1]), xytext=(3.2, 0.08), ha="center", arrowprops=dict(facecolor='black', shrink=0.1), fontsize=16, ) plt.axis([0, 5.5, 0, 2]) save_fig("sensitivity_to_outliers_plot") plt.show()
Saving figure sensitivity_to_outliers_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**This is the first code example in chapter 5:**
import numpy as np from sklearn import datasets from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import LinearSVC iris = datasets.load_iris() X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.float64) # Iris virginica svm_clf = Pipeline([ ("scaler", StandardScaler()), ("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)), ]) svm_clf.fit(X, y) svm_clf.predict([[5.5, 1.7]])
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–4. Large margin versus fewer margin violations**
scaler = StandardScaler() svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42) svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42) scaled_svm_clf1 = Pipeline([ ("scaler", scaler), ("linear_svc", svm_clf1), ]) scaled_svm_clf2 = Pipeline([ ("scaler", scaler), ("linear_svc", svm_clf2), ]) scaled_svm_clf1.fit(X, y) scaled_svm_clf2.fit(X, y) # Convert to unscaled parameters b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_]) b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_]) w1 = svm_clf1.coef_[0] / scaler.scale_ w2 = svm_clf2.coef_[0] / scaler.scale_ svm_clf1.intercept_ = np.array([b1]) svm_clf2.intercept_ = np.array([b2]) svm_clf1.coef_ = np.array([w1]) svm_clf2.coef_ = np.array([w2]) # Find support vectors (LinearSVC does not do this automatically) t = y * 2 - 1 support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel() support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel() svm_clf1.support_vectors_ = X[support_vectors_idx1] svm_clf2.support_vectors_ = X[support_vectors_idx2] fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True) plt.sca(axes[0]) plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris virginica") plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris versicolor") plot_svc_decision_boundary(svm_clf1, 4, 5.9) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.title("$C = {}$".format(svm_clf1.C), fontsize=16) plt.axis([4, 5.9, 0.8, 2.8]) plt.sca(axes[1]) plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^") plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs") plot_svc_decision_boundary(svm_clf2, 4, 5.99) plt.xlabel("Petal length", fontsize=14) plt.title("$C = {}$".format(svm_clf2.C), fontsize=16) plt.axis([4, 5.9, 0.8, 2.8]) save_fig("regularization_plot")
Saving figure regularization_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Nonlinear SVM Classification **Code to generate Figure 5–5. Adding features to make a dataset linearly separable**
X1D = np.linspace(-4, 4, 9).reshape(-1, 1) X2D = np.c_[X1D, X1D**2] y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0]) plt.figure(figsize=(10, 3)) plt.subplot(121) plt.grid(True, which='both') plt.axhline(y=0, color='k') plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs") plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^") plt.gca().get_yaxis().set_ticks([]) plt.xlabel(r"$x_1$", fontsize=20) plt.axis([-4.5, 4.5, -0.2, 0.2]) plt.subplot(122) plt.grid(True, which='both') plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs") plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^") plt.xlabel(r"$x_1$", fontsize=20) plt.ylabel(r"$x_2$  ", fontsize=20, rotation=0) plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16]) plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3) plt.axis([-4.5, 4.5, -1, 17]) plt.subplots_adjust(right=1) save_fig("higher_dimensions_plot", tight_layout=False) plt.show() from sklearn.datasets import make_moons X, y = make_moons(n_samples=100, noise=0.15, random_state=42) def plot_dataset(X, y, axes): plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs") plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^") plt.axis(axes) plt.grid(True, which='both') plt.xlabel(r"$x_1$", fontsize=20) plt.ylabel(r"$x_2$", fontsize=20, rotation=0) plot_dataset(X, y, [-1.5, 2.5, -1, 1.5]) plt.show()
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Here is the second code example in the chapter:**
from sklearn.datasets import make_moons from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures polynomial_svm_clf = Pipeline([ ("poly_features", PolynomialFeatures(degree=3)), ("scaler", StandardScaler()), ("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42)) ]) polynomial_svm_clf.fit(X, y)
C:\Users\kleme\anaconda3\envs\ML_Fundamentals\lib\site-packages\sklearn\svm\_base.py:1206: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations. warnings.warn(
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–6. Linear SVM classifier using polynomial features**
def plot_predictions(clf, axes): x0s = np.linspace(axes[0], axes[1], 100) x1s = np.linspace(axes[2], axes[3], 100) x0, x1 = np.meshgrid(x0s, x1s) X = np.c_[x0.ravel(), x1.ravel()] y_pred = clf.predict(X).reshape(x0.shape) y_decision = clf.decision_function(X).reshape(x0.shape) plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2) plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1) plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5]) plot_dataset(X, y, [-1.5, 2.5, -1, 1.5]) save_fig("moons_polynomial_svc_plot") plt.show()
Saving figure moons_polynomial_svc_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Polynomial Kernel **Next code example:**
from sklearn.svm import SVC poly_kernel_svm_clf = Pipeline([ ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5)) ]) poly_kernel_svm_clf.fit(X, y)
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–7. SVM classifiers with a polynomial kernel**
poly100_kernel_svm_clf = Pipeline([ ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5)) ]) poly100_kernel_svm_clf.fit(X, y) fig, axes = plt.subplots(ncols=2, figsize=(10.5, 4), sharey=True) plt.sca(axes[0]) plot_predictions(poly_kernel_svm_clf, [-1.5, 2.45, -1, 1.5]) plot_dataset(X, y, [-1.5, 2.4, -1, 1.5]) plt.title(r"$d=3, r=1, C=5$", fontsize=18) plt.sca(axes[1]) plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.45, -1, 1.5]) plot_dataset(X, y, [-1.5, 2.4, -1, 1.5]) plt.title(r"$d=10, r=100, C=5$", fontsize=18) plt.ylabel("") save_fig("moons_kernelized_polynomial_svc_plot") plt.show()
Saving figure moons_kernelized_polynomial_svc_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Similarity Features **Code to generate Figure 5–8. Similarity features using the Gaussian RBF**
def gaussian_rbf(x, landmark, gamma): return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2) gamma = 0.3 x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1) x2s = gaussian_rbf(x1s, -2, gamma) x3s = gaussian_rbf(x1s, 1, gamma) XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)] yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0]) plt.figure(figsize=(10.5, 4)) plt.subplot(121) plt.grid(True, which='both') plt.axhline(y=0, color='k') plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red") plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs") plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^") plt.plot(x1s, x2s, "g--") plt.plot(x1s, x3s, "b:") plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1]) plt.xlabel(r"$x_1$", fontsize=20) plt.ylabel(r"Similarity", fontsize=14) plt.annotate(r'$\mathbf{x}$', xy=(X1D[3, 0], 0), xytext=(-0.5, 0.20), ha="center", arrowprops=dict(facecolor='black', shrink=0.1), fontsize=18, ) plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20) plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20) plt.axis([-4.5, 4.5, -0.1, 1.1]) plt.subplot(122) plt.grid(True, which='both') plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs") plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^") plt.xlabel(r"$x_2$", fontsize=20) plt.ylabel(r"$x_3$  ", fontsize=20, rotation=0) plt.annotate(r'$\phi\left(\mathbf{x}\right)$', xy=(XK[3, 0], XK[3, 1]), xytext=(0.65, 0.50), ha="center", arrowprops=dict(facecolor='black', shrink=0.1), fontsize=18, ) plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3) plt.axis([-0.1, 1.1, -0.1, 1.1]) plt.subplots_adjust(right=1) save_fig("kernel_method_plot") plt.show() x1_example = X1D[3, 0] for landmark in (-2, 1): k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma) print("Phi({}, {}) = {}".format(x1_example, landmark, k))
Phi(-1.0, -2) = [0.74081822] Phi(-1.0, 1) = [0.30119421]
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
Gaussian RBF Kernel **Next code example:**
rbf_kernel_svm_clf = Pipeline([ ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001)) ]) rbf_kernel_svm_clf.fit(X, y)
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–9. SVM classifiers using an RBF kernel**
from sklearn.svm import SVC gamma1, gamma2 = 0.1, 5 C1, C2 = 0.001, 1000 hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2) svm_clfs = [] for gamma, C in hyperparams: rbf_kernel_svm_clf = Pipeline([ ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C)) ]) rbf_kernel_svm_clf.fit(X, y) svm_clfs.append(rbf_kernel_svm_clf) fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10.5, 7), sharex=True, sharey=True) for i, svm_clf in enumerate(svm_clfs): plt.sca(axes[i // 2, i % 2]) plot_predictions(svm_clf, [-1.5, 2.45, -1, 1.5]) plot_dataset(X, y, [-1.5, 2.45, -1, 1.5]) gamma, C = hyperparams[i] plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16) if i in (0, 1): plt.xlabel("") if i in (1, 3): plt.ylabel("") save_fig("moons_rbf_svc_plot") plt.show()
Saving figure moons_rbf_svc_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
SVM Regression
np.random.seed(42) m = 50 X = 2 * np.random.rand(m, 1) y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Next code example:**
from sklearn.svm import LinearSVR svm_reg = LinearSVR(epsilon=1.5, random_state=42) svm_reg.fit(X, y)
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–10. SVM Regression**
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42) svm_reg2 = LinearSVR(epsilon=0.5, random_state=42) svm_reg1.fit(X, y) svm_reg2.fit(X, y) def find_support_vectors(svm_reg, X, y): y_pred = svm_reg.predict(X) off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon) return np.argwhere(off_margin) svm_reg1.support_ = find_support_vectors(svm_reg1, X, y) svm_reg2.support_ = find_support_vectors(svm_reg2, X, y) eps_x1 = 1 eps_y_pred = svm_reg1.predict([[eps_x1]]) def plot_svm_regression(svm_reg, X, y, axes): x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1) y_pred = svm_reg.predict(x1s) plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$") plt.plot(x1s, y_pred + svm_reg.epsilon, "k--") plt.plot(x1s, y_pred - svm_reg.epsilon, "k--") plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA') plt.plot(X, y, "bo") plt.xlabel(r"$x_1$", fontsize=18) plt.legend(loc="upper left", fontsize=18) plt.axis(axes) fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True) plt.sca(axes[0]) plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11]) plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18) plt.ylabel(r"$y$", fontsize=18, rotation=0) #plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2) plt.annotate( '', xy=(eps_x1, eps_y_pred), xycoords='data', xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon), textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5} ) plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20) plt.sca(axes[1]) plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11]) plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18) save_fig("svm_regression_plot") plt.show() np.random.seed(42) m = 100 X = 2 * np.random.rand(m, 1) - 1 y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Note**: to be future-proof, we set `gamma="scale"`, as this will be the default value in Scikit-Learn 0.22. **Next code example:**
from sklearn.svm import SVR svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="scale") svm_poly_reg.fit(X, y)
_____no_output_____
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
**Code to generate Figure 5–11. SVM Regression using a second-degree polynomial kernel**
from sklearn.svm import SVR svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="scale") svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="scale") svm_poly_reg1.fit(X, y) svm_poly_reg2.fit(X, y) fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True) plt.sca(axes[0]) plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1]) plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18) plt.ylabel(r"$y$", fontsize=18, rotation=0) plt.sca(axes[1]) plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1]) plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18) save_fig("svm_with_polynomial_kernel_plot") plt.show()
Saving figure svm_with_polynomial_kernel_plot
Apache-2.0
05_support_vector_machines.ipynb
Schnatz65/ML_Fundamentals
%matplotlib inline import numpy as np import tensorflow as tf print(tf.__version__) !pip install git+https://github.com/tensorflow/docs import tensorflow_docs as tfdocs import tensorflow_docs.plots import tensorflow_docs.modeling from google.colab import drive drive.mount('/content/gdrive') import pandas as pd df=pd.read_csv('gdrive/My Drive/SS_AITrader/INTC/df_INTC_20drtn_features.csv') df.head() df['timestamp'] = pd.to_datetime(df['timestamp']) from_date='2010-01-01' to_date='2020-01-01' df = df[pd.to_datetime(from_date) < df['timestamp'] ] df = df[pd.to_datetime(to_date) > df['timestamp'] ] df.head() df.tail() df.drop(['timestamp'], inplace=True, axis=1) train_dataset = df.sample(frac=0.8,random_state=0) test_dataset = df.drop(train_dataset.index) train_dataset.head() train_labels = train_dataset.pop('labels') test_labels = test_dataset.pop('labels') train_labels.head() from sklearn.utils import compute_class_weight def get_sample_weights(y): y = y.astype(int) # compute_class_weight needs int labels class_weights = compute_class_weight('balanced', np.unique(y), y) print("real class weights are {}".format(class_weights), np.unique(y)) print("value_counts", np.unique(y, return_counts=True)) sample_weights = y.copy().astype(float) for i in np.unique(y): sample_weights[sample_weights == i] = class_weights[i] # if i == 2 else 0.8 * class_weights[i] # sample_weights = np.where(sample_weights == i, class_weights[int(i)], y_) return sample_weights get_sample_weights(train_labels) SAMPLE_WEIGHT=get_sample_weights(train_labels) train_stats = train_dataset.describe() train_stats = train_stats.transpose() def norm(x): return (x - train_stats['mean']) / train_stats['std'] normed_train_data = norm(train_dataset) normed_test_data = norm(test_dataset) from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif from operator import itemgetter k=20 list_features = list(normed_train_data.columns) select_k_best = SelectKBest(f_classif, k=k) select_k_best.fit(normed_train_data, train_labels) selected_features_anova = itemgetter(*select_k_best.get_support(indices=True))(list_features) selected_features_anova select_k_best = SelectKBest(mutual_info_classif, k=k) select_k_best.fit(normed_train_data, train_labels) selected_features_mic = itemgetter(*select_k_best.get_support(indices=True))(list_features) selected_features_mic list_features = list(normed_train_data.columns) feat_idx = [] for c in selected_features_mic: feat_idx.append(list_features.index(c)) feat_idx = sorted(feat_idx) X_train_new=normed_train_data.iloc[:, feat_idx] X_test_new=normed_test_data.iloc[:, feat_idx] #kbest=SelectKBest(f_classif, k=10) #X_train_new = kbest.fit_transform(normed_train_data, train_labels) #X_test_new = kbest.transform(normed_test_data) X_test_new.shape X_test_new.head() def build_model(hidden_dim,dropout=0.5): ## input layer inputs=tf.keras.Input(shape=(X_train_new.shape[1],)) h1= tf.keras.layers.Dense(units=hidden_dim,activation='relu')(inputs) h2= tf.keras.layers.Dropout(dropout)(h1) h3= tf.keras.layers.Dense(units=hidden_dim*2,activation='relu')(h2) h4= tf.keras.layers.Dropout(dropout)(h3) h5= tf.keras.layers.Dense(units=hidden_dim*2,activation='relu')(h4) h6= tf.keras.layers.Dropout(dropout)(h5) h7= tf.keras.layers.Dense(units=hidden_dim,activation='relu')(h6) ##output outputs=tf.keras.layers.Dense(units=2,activation='softmax')(h7) return tf.keras.Model(inputs=inputs, outputs=outputs) tf.random.set_seed(1) criterion = tf.keras.losses.sparse_categorical_crossentropy optimizer = 
tf.keras.optimizers.Adam(learning_rate=0.001) model = build_model(hidden_dim=64) model.compile(optimizer=optimizer,loss=criterion,metrics=['accuracy']) example_batch = X_train_new[:10] example_result = model.predict(example_batch) example_result EPOCHS=200 BATCH_SIZE=20 history = model.fit( X_train_new, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE ,sample_weight=SAMPLE_WEIGHT,shuffle=True,validation_split = 0.2, verbose=1, callbacks=[tfdocs.modeling.EpochDots()]) hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch hist.tail() import matplotlib.pyplot as plt hist=history.history fig=plt.figure(figsize=(12,5)) ax=fig.add_subplot(1,2,1) ax.plot(hist['loss'],lw=3) ax.plot(hist['val_loss'],lw=3) ax.set_title('Training & Validation Loss',size=15) ax.set_xlabel('Epoch',size=15) ax.tick_params(axis='both',which='major',labelsize=15) ax=fig.add_subplot(1,2,2) ax.plot(hist['accuracy'],lw=3) ax.plot(hist['val_accuracy'],lw=3) ax.set_title('Training & Validation accuracy',size=15) ax.set_xlabel('Epoch',size=15) ax.tick_params(axis='both',which='major',labelsize=15) plt.show() !pip install shap import shap explainer = shap.DeepExplainer(model, np.array(X_train_new)) shap_values = explainer.shap_values(np.array(X_test_new)) shap.summary_plot(shap_values[1], X_test_new) pred=model.predict(X_test_new) pred.argmax(axis=1) from sklearn.metrics import classification_report, confusion_matrix cm=confusion_matrix(test_labels, pred.argmax(axis=1)) print('Confusion Matrix') fig,ax = plt.subplots(figsize=(2.5,2.5)) ax.matshow(cm,cmap=plt.cm.Blues,alpha=0.3) for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(x=j,y=i, s=cm[i,j], va='center',ha='center') plt.xlabel('Predicted Label') plt.ylabel('True Label') plt.show() from sklearn.metrics import precision_score from sklearn.metrics import recall_score, f1_score print('Precision: %.3f' % precision_score(y_true=test_labels,y_pred=pred.argmax(axis=1))) print('Recall: %.3f' % recall_score(y_true=test_labels,y_pred=pred.argmax(axis=1))) print('F1: %.3f' % f1_score(y_true=test_labels,y_pred=pred.argmax(axis=1))) from sklearn.pipeline import Pipeline from sklearn.feature_selection import SelectKBest, chi2 import xgboost as xgb from sklearn.model_selection import KFold, GridSearchCV from sklearn.metrics import accuracy_score, make_scorer pipe = Pipeline([ ('fs', SelectKBest()), ('clf', xgb.XGBClassifier(objective='binary:logistic')) ]) search_space = [ { 'clf__n_estimators': [200], 'clf__learning_rate': [0.05, 0.1], 'clf__max_depth': range(3, 10), 'clf__colsample_bytree': [i/10.0 for i in range(1, 3)], 'clf__gamma': [i/10.0 for i in range(3)], 'fs__score_func': [mutual_info_classif,f_classif], 'fs__k': [20,30,40], } ] kfold = KFold(n_splits=5, shuffle=True, random_state=42) scoring = {'AUC':'roc_auc', 'Accuracy':make_scorer(accuracy_score)} grid = GridSearchCV( pipe, param_grid=search_space, cv=kfold, scoring=scoring, refit='AUC', verbose=1, n_jobs=-1 ) model = grid.fit(normed_train_data, train_labels) import pickle # Dictionary of best parameters best_pars = grid.best_params_ # Best XGB model that was found based on the metric score you specify best_model = grid.best_estimator_ # Save model pickle.dump(grid.best_estimator_, open('gdrive/My Drive/SS_AITrader/INTC/xgb_INTC_log_reg.pickle', "wb")) predict = model.predict(normed_test_data) print('Best AUC Score: {}'.format(model.best_score_)) print('Accuracy: {}'.format(accuracy_score(test_labels, predict))) cm=confusion_matrix(test_labels,predict) print('Confusion Matrix') fig,ax = 
plt.subplots(figsize=(2.5,2.5)) ax.matshow(cm,cmap=plt.cm.Blues,alpha=0.3) for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(x=j,y=i, s=cm[i,j], va='center',ha='center') plt.xlabel('Predicted Label') plt.ylabel('True Label') plt.show() print(model.best_params_) model_opt = xgb.XGBClassifier(max_depth=9, objective='binary:logistic', n_estimators=200, learning_rate = 0.1, colsample_bytree= 0.2, gamma= 0.1) eval_set = [(X_train_new, train_labels), (X_test_new, test_labels)] model_opt.fit(X_train_new, train_labels, early_stopping_rounds=15, eval_metric=["error", "logloss"], eval_set=eval_set, verbose=True) # make predictions for test data y_pred = model_opt.predict(X_test_new) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(test_labels, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) from matplotlib import pyplot results = model_opt.evals_result() epochs = len(results['validation_0']['error']) x_axis = range(0, epochs) # plot log loss fig, ax = pyplot.subplots() ax.plot(x_axis, results['validation_0']['logloss'], label='Train') ax.plot(x_axis, results['validation_1']['logloss'], label='Test') ax.legend() pyplot.ylabel('Log Loss') pyplot.title('XGBoost Log Loss') pyplot.show() # plot classification error fig, ax = pyplot.subplots() ax.plot(x_axis, results['validation_0']['error'], label='Train') ax.plot(x_axis, results['validation_1']['error'], label='Test') ax.legend() pyplot.ylabel('Classification Error') pyplot.title('XGBoost Classification Error') pyplot.show() shap_values = shap.TreeExplainer(model_opt).shap_values(X_test_new) shap.summary_plot(shap_values, X_test_new) predict = model_opt.predict(X_test_new) cm=confusion_matrix(test_labels,predict) print('Confusion Matrix') fig,ax = plt.subplots(figsize=(2.5,2.5)) ax.matshow(cm,cmap=plt.cm.Blues,alpha=0.3) for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(x=j,y=i, s=cm[i,j], va='center',ha='center') plt.xlabel('Predicted Label') plt.ylabel('True Label') plt.show()
Confusion Matrix
Apache-2.0
SS_AITrader_INTC.ipynb
JamesHorrex/AI_stock_trading
# Make inline plots vector graphics instead of raster graphics from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') import pandas as pd import plotly.express as px cereals = pd.read_csv("https://github.com/briandk/2020-virtual-program-in-data-science/raw/master/data/cereals.csv") cereals cereals.count() cereals.groupby('mfr').size().sort_values() fig = px.scatter(cereals, 'rating', 'calories') fig.show() cereals.groupby('mfr').mean()['calories'].sort_values()
_____no_output_____
MIT
cereals.ipynb
briandk/2020-virtual-program-in-data-science
Deriving a Point-Spread Function in a Crowded Field following Appendix III of Peter Stetson's *User's Manual for DAOPHOT II* Using `pydaophot` from the `astwro` python package All *italic* text here has been taken from Stetson's manual. The only input file for this procedure is a FITS file containing the reference frame image. Here we use a sample FITS from the astwro package (NGC6871 I filter 20s frame). Below we get the file path for this image, as well as create instances of the `Daophot` and `Allstar` classes - wrappers around `daophot` and `allstar` respectively.One should also provide `daophot.opt`, `photo.opt` and `allstar.opt` to the appropriate constructors. Here the default, built-in sample `opt` files are used.
from astwro.sampledata import fits_image frame = fits_image()
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
The `Daophot` object creates a temporary working directory (*runner directory*), which is passed to the `Allstar` constructor to share.
from astwro.pydaophot import Daophot, Allstar dp = Daophot(image=frame) al = Allstar(dir=dp.dir)
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Daophot got the FITS file at construction, and it will be automatically **ATTACH**ed. *(1) Run FIND on your frame* The Daophot `FIND` parameters `Number of frames averaged, summed` default to `1,1`; they are provided below for clarity.
res = dp.FInd(frames_av=1, frames_sum=1)
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Check some results returned by `FIND`; every `daophot` command method returns a results object.
print ("{} pixels analysed, sky estimate {}, {} stars found.".format(res.pixels, res.sky, res.stars))
9640 pixels analysed, sky estimate 12.665, 4166 stars found.
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Also, take a look into the *runner directory*:
!ls -lt $dp.dir
total 536 lrwxr-xr-x 1 michal staff 60 Jun 26 18:25 63d38b_NGC6871.fits -> /Users/michal/projects/astwro/astwro/sampledata/NGC6871.fits lrwxr-xr-x 1 michal staff 65 Jun 26 18:25 allstar.opt -> /Users/michal/projects/astwro/astwro/pydaophot/config/allstar.opt lrwxr-xr-x 1 michal staff 65 Jun 26 18:25 daophot.opt -> /Users/michal/projects/astwro/astwro/pydaophot/config/daophot.opt -rw-r--r-- 1 michal staff 258438 Jun 26 18:25 i.coo
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
We see symlinks to the input image and `opt` files, and `i.coo` - the result of `FIND`. *(2) Run PHOTOMETRY on your frame* Below we run photometry, explicitly providing the aperture radius `A1` and the `IS`, `OS` sky radii.
res = dp.PHotometry(apertures=[8], IS=35, OS=50)
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Lists of stars generated by daophot commands can easily be obtained as `astwro.starlist.Starlist` objects, which are essentially `pandas.DataFrame`s:
stars = res.photometry_starlist
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Let's check the 10 stars with the smallest A1 error (``mag_err`` column). ([pandas](https://pandas.pydata.org) style)
stars.sort_values('mag_err').iloc[:10]
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*(3) SORT the output from PHOTOMETRY* *in order of increasing apparent magnitude (decreasing stellar brightness), with the renumbering feature. This step is optional but it can be more convenient than not.* The `SORT` command of `daophot` is not implemented (yet) in `pydaophot`, but we can do the sorting ourselves.
sorted_stars = stars.sort_values('mag') sorted_stars.renumber()
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Here we write the sorted list back into the photometry file under the default name (overwriting the existing one), because it's convenient to use default files in the next commands.
dp.write_starlist(sorted_stars, 'i.ap') !head -n20 $dp.PHotometry_result.photometry_file dp.PHotometry_result.photometry_file
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*(4) PICK to generate a set of likely PSF stars* *How many stars you want to use is a function of the degree of variation you expect and the frequency with which stars are contaminated by cosmic rays or neighbor stars. [...]*
pick_res = dp.PIck(faintest_mag=20, number_of_stars_to_pick=40)
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
If no errors were reported, the symlink to the image file (renamed to `i.fits`) and all daophot output files (`i.*`) are in the runner's working directory:
ls $dp.dir
63d38b_NGC6871.fits@ daophot.opt@ i.coo allstar.opt@ i.ap i.lst
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
One may examine and improve the `i.lst` list of PSF stars, or use `astwro.tools.gapick.py` to obtain a list of PSF stars optimised by a genetic algorithm. *(5) Run PSF* *tell it the name of your complete (sorted renumbered) aperture photometry file, the name of the file with the list of PSF stars, and the name of the disk file you want the point spread function stored in (the default should be fine) [...]* *If the frame is crowded it is probably worth your while to generate the first PSF with the "VARIABLE PSF" option set to -1 --- pure analytic PSF. That way, the companions will not generate ghosts in the model PSF that will come back to haunt you later. You should also have specified a reasonably generous fitting radius --- these stars have been preselected to be as isolated as possible and you want the best fits you can get. But remember to avoid letting neighbor stars intrude within one fitting radius of the center of any PSF star.* For illustration we will set the `VARIABLE PSF` option before `PSf()`:
dp.set_options('VARIABLE PSF', 2) psf_res = dp.PSf()
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*(6) Run GROUP and NSTAR or ALLSTAR on your NEI file* *If your PSF stars have many neighbors this may take some minutes of real time. Please be patient or submit it as a batch job and perform steps on your next frame while you wait.* We use `allstar` (the `GROUP` and `NSTAR` commands are not implemented in the current version of `pydaophot`). We use the `Allstar` object prepared above, `al`, operating on the same runner dir as `dp`. As parameters we set the input image (we didn't do that in the constructor) and the `nei` file produced by `PSf()`. We don't remember the name `i.nei`, so we use the `psf_res.nei_file` property. Finally we order `allstar` to produce a subtracted FITS image.
alls_res = al.ALlstar(image_file=frame, stars=psf_res.nei_file, subtracted_image_file='is.fits')
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
All `result` objects have a `get_buffer()` method, useful for looking up the unparsed `daophot` or `allstar` output:
print (alls_res.get_buffer())
63d38b_NGC6871... Picture size: 1250 1150 File with the PSF (default 63d38b_NGC6871.psf): Input file (default 63d38b_NGC6871.ap): File for results (default i.als): Name for subtracted image (default is): 915 stars. << I = iteration number R = number of stars that remain D = number of stars that disappeared C = number of stars that converged I R D C 1 915 0 0 << 2 915 0 0 << 3 915 0 0 << 4 724 0 191 << 5 385 0 530 << 6 211 0 704 << 7 110 0 805 << 8 67 0 848 << 9 40 0 875 << 10 0 0 915 Finished i  Good bye.
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*(8) EXIT from DAOPHOT and send this new picture to the image display* *Examine each of the PSF stars and its environs. Have all of the PSF stars subtracted out more or less cleanly, or should some of them be rejected from further use as PSF stars? (If so use a text editor to delete these stars from the LST file.) Have the neighbors mostly disappeared, or have they left behind big zits? Have you uncovered any faint companions that FIND missed? [...]* The absolute path to the subtracted file (as for most output files) is available as a property of the result:
sub_img = alls_res.subtracted_image_file
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
We can also generate a region file for the PSF stars:
from astwro.starlist.ds9 import write_ds9_regions reg_file_path = dp.file_from_runner_dir('lst.reg') write_ds9_regions(pick_res.picked_starlist, reg_file_path) # One can run ds9 directly from notebook: !ds9 $sub_img -regions $reg_file_path
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*(9) Back in DAOPHOT II ATTACH the original picture and run SUBSTAR* *specifying the file created in step (6) or in step (8f) as the stars to subtract, and the stars in the LST file as the stars to keep.* A look into the runner dir:
ls $al.dir sub_res = dp.SUbstar(subtract=alls_res.profile_photometry_file, leave_in=pick_res.picked_stars_file)
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*You have now created a new picture which has the PSF stars still in it but from which the known neighbors of these PSF stars have been mostly removed* (10) ATTACH the new star subtracted frame and repeat step (5) to derive a new point spread function (11+...) Run GROUP NSTAR or ALLSTAR
for i in range(3): print ("Iteration {}: Allstar chi: {}".format(i, alls_res.als_stars.chi.mean())) dp.image = 'is.fits' respsf = dp.PSf() print ("Iteration {}: PSF chi: {}".format(i, respsf.chi)) alls_res = al.ALlstar(image_file=frame, stars='i.nei') dp.image = frame dp.SUbstar(subtract='i.als', leave_in='i.lst') print ("Final: Allstar chi: {}".format(alls_res.als_stars.chi.mean())) alls_res.als_stars
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Check the last image, in which the PSF stars' neighbours have been subtracted.
!ds9 $dp.SUbstar_result.subtracted_image_file -regions $reg_file_path
_____no_output_____
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
*Once you have produced a frame in which the PSF stars and their neighbors all subtract out cleanly, one more time through PSF should produce a point-spread function you can be proud of.*
dp.image = 'is.fits' psf_res = dp.PSf() print ("PSF file: {}".format(psf_res.psf_file))
PSF file: /var/folders/kt/1jqvm3s51jd4qbxns7dc43rw0000gq/T/pydaophot_tmpDu5p8c/i.psf
MIT
examples/deriving_psf_stenson.ipynb
majkelx/astwro
Fitting Models Exercise 1 Imports
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt
_____no_output_____
MIT
assignments/assignment12/FittingModelsEx01.ipynb
rsterbentz/phys202-2015-work
Fitting a quadratic curve For this problem we are going to work with the following model:$$ y_{model}(x) = a x^2 + b x + c $$The true values of the model parameters are as follows:
a_true = 0.5 b_true = 2.0 c_true = -4.0
_____no_output_____
MIT
assignments/assignment12/FittingModelsEx01.ipynb
rsterbentz/phys202-2015-work
First, generate a dataset using this model with these parameters and the following characteristics:* For your $x$ data use 30 uniformly spaced points between $[-5,5]$.* Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the `size` argument of `np.random.normal`).After you generate the data, make a plot of the raw data (use points).
def quad(x,a,b,c): return a*x**2 + b*x + c N = 30 xdata = np.linspace(-5,5,N) dy = 2.0 np.random.seed(0) ydata = quad(xdata,a_true,b_true,c_true) + np.random.normal(0.0, dy, N) plt.errorbar(xdata,ydata,dy,fmt='.k',ecolor='lightgrey') plt.xlabel('x') plt.ylabel('y') plt.xlim(-5,5); assert True # leave this cell for grading the raw data generation and plot
_____no_output_____
MIT
assignments/assignment12/FittingModelsEx01.ipynb
rsterbentz/phys202-2015-work
Now fit the model to the dataset to recover estimates for the model's parameters:* Print out the estimates and uncertainties of each parameter.* Plot the raw data and best fit of the model.
theta_best, theta_cov = opt.curve_fit(quad, xdata, ydata, sigma=dy) a_fit = theta_best[0] b_fit = theta_best[1] c_fit = theta_best[2] print('a = {0:.3f} +/- {1:.3f}'.format(a_fit, np.sqrt(theta_cov[0,0]))) print('b = {0:.3f} +/- {1:.3f}'.format(b_fit, np.sqrt(theta_cov[1,1]))) print('c = {0:.3f} +/- {1:.3f}'.format(c_fit, np.sqrt(theta_cov[2,2]))) x_fit = np.linspace(-5,5,30) y_fit = quad(x_fit,a_fit,b_fit,c_fit) plt.errorbar(xdata,ydata,dy,fmt='.k',ecolor='lightgrey') plt.plot(x_fit,y_fit) plt.xlabel('x') plt.ylabel('y') plt.xlim(-5,5); assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
_____no_output_____
MIT
assignments/assignment12/FittingModelsEx01.ipynb
rsterbentz/phys202-2015-work
Contents- First version of the lightGBM model- Target encoding: Holdout TS (see the sketch below)- Join three external datasets (stage area 1, stage area 2, weapons) - Stage area 1: https://probspace-stg.s3-ap-northeast-1.amazonaws.com/uploads/user/c10947bba5cde4ad3dd4a0d42a0ec35b/files/2020-09-06-0320/stagedata.csv - Stage area 2: https://stat.ink/api-info/stage2 - Weapons: https://stat.ink/api-info/weapon2
# Import libraries import pandas as pd import numpy as np import re import matplotlib.pyplot as plt import seaborn as sns import lightgbm as lgb from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.metrics import accuracy_score import warnings warnings.filterwarnings('ignore') # Load the data train = pd.read_csv("../data/train_data.csv") test = pd.read_csv('../data/test_data.csv')
_____no_output_____
MIT
program/lightGBM_base_v0.1.ipynb
tomokoochi/splatoon_competition
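The contents list above mentions hold-out (out-of-fold) target encoding, which is not shown in the cells excerpted below. As a rough sketch only, under the assumption that the training frame has a categorical feature column and a target column (the names `col`, `target`, `'A1-weapon'` and `'y'` in the usage comment are placeholders, not necessarily the ones used in this notebook), a leakage-free hold-out encoding with `KFold` could look like this:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def holdout_target_encode(train_df, test_df, col, target, n_splits=5, seed=0):
    """Hold-out (out-of-fold) target encoding for one categorical column (sketch)."""
    # Test rows: mean target per category, computed on the full training data
    global_means = train_df.groupby(col)[target].mean()
    test_encoded = test_df[col].map(global_means)

    # Training rows: mean target computed on the other folds only, to avoid leakage
    train_encoded = pd.Series(np.nan, index=train_df.index)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr_idx, val_idx in kf.split(train_df):
        fold_means = train_df.iloc[tr_idx].groupby(col)[target].mean()
        # Categories unseen in the other folds stay NaN; they can be filled with the global mean
        train_encoded.iloc[val_idx] = train_df.iloc[val_idx][col].map(fold_means).values
    return train_encoded, test_encoded

# Usage (placeholder column names):
# train['weapon_te'], test['weapon_te'] = holdout_target_encode(train, test, 'A1-weapon', 'y')
```

Computing each training row's encoding only from the other folds keeps it independent of that row's own target value, which is the point of the hold-out scheme.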
Checking the data
def inspection_datas(df): print('######################################') print('① Check the size (rows, columns)') print(df.shape) print('######################################') print('② Display the first 5 rows') display(df.head()) print('######################################') print('③ Check each column data type (any object dtypes?)') display(df.info()) display(df.select_dtypes(include=object).columns) print('######################################') print('④ Check summary statistics (object columns from ③ are not summarised)') display(df.describe()) print('######################################') print('⑤ Check columns with missing values') null_df =df.isnull().sum()[df.columns[df.isnull().sum()!=0]] display(null_df) display(null_df.shape) print('######################################') print('⑥ Heatmap of correlation coefficients') sns.heatmap(df.corr()) inspection_datas(train)
###################################### ① Check the size (rows, columns) (66125, 32) ###################################### ② Display the first 5 rows
MIT
program/lightGBM_base_v0.1.ipynb
tomokoochi/splatoon_competition
Joining the external data
# Load the external data # stage and stage2 differ slightly in area, due to version differences and differences in the calculation method stage = pd.read_csv('../gaibu_data/stagedata.csv') stage2 = pd.read_json('../gaibu_data/stage.json') weapon = pd.read_csv('../gaibu_data/statink-weapon2.csv') stage.head(3) stage2.head(3) weapon.head(3)
_____no_output_____
MIT
program/lightGBM_base_v0.1.ipynb
tomokoochi/splatoon_competition
Joining stage
# Check for inconsistent stage names print(np.sort(train['stage'].unique())) print(np.sort(test['stage'].unique())) print(np.sort(stage['stage'].unique())) # Rename the column for the join stage_r = stage.rename(columns = {'size':'stage_size1'}) # Join train_s = pd.merge(train, stage_r, on = 'stage', how = 'left') test_s = pd.merge(test, stage_r, on = 'stage', how = 'left') # Check for nulls print(train_s[['stage_size1']].isnull().sum()) print(test_s[['stage_size1']].isnull().sum())
stage_size1 0 dtype: int64 stage_size1 0 dtype: int64
MIT
program/lightGBM_base_v0.1.ipynb
tomokoochi/splatoon_competition