| markdown (string, length 0-1.02M) | code (string, length 0-832k) | output (string, length 0-1.02M) | license (string, length 3-36) | path (string, length 6-265) | repo_name (string, length 6-127) |
---|---|---|---|---|---|
Dataset structure

A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`, `tf.TensorArray`, or `tf.data.Dataset`. The `Dataset.element_spec` property allows you to inspect the type of each element component. The property returns a *nested structure* of `tf.TypeSpec` objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example:
|
dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))
dataset1.element_spec
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2.element_spec
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3.element_spec
# Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))
dataset4.element_spec
# Use value_type to see the type of value represented by the element spec
dataset4.element_spec.value_type
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `Dataset` transformations support datasets of any structure. When using the `Dataset.map()` and `Dataset.filter()` transformations, which apply a function to each element, the element structure determines the arguments of the function:
|
dataset1 = tf.data.Dataset.from_tensor_slices(
tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))
dataset1
for z in dataset1:
print(z.numpy())
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3
for a, (b,c) in dataset3:
print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
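# (Sketch) The element structure determines the mapped function's arguments:
# dataset2's elements are (scalar, vector) pairs, so the function below takes
# two arguments, while dataset1's elements are single tensors, so the filter
# predicate takes one. These lambdas are illustrative only.
doubled = dataset2.map(lambda scalar, vector: (scalar * 2.0, vector))
positives = dataset1.filter(lambda x: tf.reduce_all(x > 0))
print(doubled.element_spec)
print(positives.element_spec)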
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Reading input data

Consuming NumPy arrays

See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples. If all of your input data fits in memory, the simplest way to create a `Dataset` from them is to convert them to `tf.Tensor` objects and use `Dataset.from_tensor_slices()`.
|
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Note: The above code snippet will embed the `features` and `labels` arrays in your TensorFlow graph as `tf.constant()` operations. This works well for a small dataset, but wastes memory---because the contents of the array will be copied multiple times---and can run into the 2GB limit for the `tf.GraphDef` protocol buffer.

Consuming Python generators

Another common data source that can easily be ingested as a `tf.data.Dataset` is the Python generator.

Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
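Tying the note and the generator approach together: if the Fashion-MNIST arrays above were too large to embed in the graph, one option is to stream them through the `Dataset.from_generator` constructor introduced just below. This is only a sketch; the dtypes assume the division above produced `float64` images and the original `uint8` labels.

```python
def image_label_gen():
  for img, lbl in zip(images, labels):
    yield img, lbl

streamed = tf.data.Dataset.from_generator(
    image_label_gen,
    output_types=(tf.float64, tf.uint8),
    output_shapes=((28, 28), ()))
```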
|
def count(stop):
i = 0
while i<stop:
yield i
i += 1
for n in count(5):
print(n)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `Dataset.from_generator` constructor converts the Python generator to a fully functional `tf.data.Dataset`. The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments. The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.
|
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )
for count_batch in ds_counter.repeat().batch(10).take(10):
print(count_batch.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`. It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods. Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length.
|
def gen_series():
i = 0
while True:
size = np.random.randint(0, 10)
yield i, np.random.normal(size=(size,))
i += 1
for i, series in gen_series():
print(i, ":", str(series))
if i > 5:
break
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The first output is an `int32` and the second is a `float32`. The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`.
|
ds_series = tf.data.Dataset.from_generator(
gen_series,
output_types=(tf.int32, tf.float32),
output_shapes=((), (None,)))
ds_series
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
|
ds_series_batch = ds_series.shuffle(20).padded_batch(10)
ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`. First download the data:
|
flowers = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Create the `image.ImageDataGenerator`
|
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)
images, labels = next(img_gen.flow_from_directory(flowers))
print(images.dtype, images.shape)
print(labels.dtype, labels.shape)
ds = tf.data.Dataset.from_generator(
lambda: img_gen.flow_from_directory(flowers),
output_types=(tf.float32, tf.float32),
output_shapes=([32,256,256,3], [32,5])
)
ds.element_spec
for images, labels in ds.take(1):
  print('images.shape: ', images.shape)
  print('labels.shape: ', labels.shape)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Consuming TFRecord data

See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example. The `tf.data` API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The `tf.data.TFRecordDataset` class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline. Here is an example using the test file from the French Street Name Signs (FSNS).
|
# Creates a dataset that reads all of the examples from two files.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `filenames` argument to the `TFRecordDataset` initializer can either be a string, a list of strings, or a `tf.Tensor` of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking filenames as an input argument:
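For instance, a minimal sketch of such a factory (the function name is hypothetical, and the FSNS test file stands in for real training and validation file lists):

```python
def make_tfrecord_dataset(filenames):
  # `filenames` may be a string, a list of strings, or a tf.Tensor of strings.
  return tf.data.TFRecordDataset(filenames=filenames)

train_dataset = make_tfrecord_dataset([fsns_test_file])
validation_dataset = make_tfrecord_dataset([fsns_test_file])
```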
|
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected:
|
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
parsed.features.feature['image/text']
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Consuming text data

See [Loading Text](../tutorials/load_data/text.ipynb) for an end-to-end example. Many datasets are distributed as one or more text files. The `tf.data.TextLineDataset` provides an easy way to extract lines from one or more text files. Given one or more filenames, a `TextLineDataset` will produce one string-valued element per line of those files.
|
directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
file_names = ['cowper.txt', 'derby.txt', 'butler.txt']
file_paths = [
tf.keras.utils.get_file(file_name, directory_url + file_name)
for file_name in file_names
]
dataset = tf.data.TextLineDataset(file_paths)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Here are the first few lines of the first file:
|
for line in dataset.take(5):
print(line.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
To alternate lines between files, use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second, and third lines from each translation:
|
files_ds = tf.data.Dataset.from_tensor_slices(file_paths)
lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)
for i, line in enumerate(lines_ds.take(9)):
if i % 3 == 0:
print()
print(line.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
By default, a `TextLineDataset` yields *every* line of each file, which may not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or `Dataset.filter()` transformations. Here, you skip the first line, then filter to find only survivors.
|
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
for line in titanic_lines.take(10):
print(line.numpy())
def survived(line):
return tf.not_equal(tf.strings.substr(line, 0, 1), "0")
survivors = titanic_lines.skip(1).filter(survived)
for line in survivors.take(10):
print(line.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Consuming CSV data

See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. The CSV file format is a popular format for storing tabular data in plain text. For example:
|
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
df = pd.read_csv(titanic_file)
df.head()
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:
|
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))
for feature_batch in titanic_slices.take(1):
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
A more scalable approach is to load from disk as necessary. The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180). The `experimental.make_csv_dataset` function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
|
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived")
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
print("features:")
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
You can use the `select_columns` argument if you only need a subset of columns.
|
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived", select_columns=['class', 'fare', 'survived'])
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
There is also a lower-level `experimental.CsvDataset` class which provides finer-grained control. It does not support column type inference. Instead, you must specify the type of each column.
|
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)
for line in dataset.take(10):
print([item.numpy() for item in line])
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
If some columns are empty, this low-level interface allows you to provide default values instead of column types.
|
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,
# Creates a dataset that reads all of the records from the CSV file written
# above, providing a default value of 999 for any missing field.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
By default, a `CsvDataset` yields *every* column of *every* line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not required in the input. These lines and fields can be removed with the `header` and `select_cols` arguments respectively.
|
# Creates a dataset that reads records from the CSV file written above,
# extracting data from columns 2 and 4 (zero-based indices 1 and 3).
record_defaults = [999, 999] # Only provide defaults for the selected columns
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3])
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Consuming sets of files

There are many datasets distributed as a set of files, where each file is an example.
|
flowers_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
flowers_root = pathlib.Path(flowers_root)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Note: these images are licensed CC-BY; see LICENSE.txt for details.

The root directory contains a directory for each class:
|
for item in flowers_root.glob("*"):
print(item.name)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The files in each class directory are examples:
|
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
for f in list_ds.take(5):
print(f.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs:
|
def process_path(file_path):
label = tf.strings.split(file_path, os.sep)[-2]
return tf.io.read_file(file_path), label
labeled_ds = list_ds.map(process_path)
for image_raw, label_text in labeled_ds.take(1):
print(repr(image_raw.numpy()[:100]))
print()
print(label_text.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
<!--TODO(mrry): Add this section. Handling text data with unusual sizes-->

Batching dataset elements

Simple batching

The simplest form of batching stacks `n` consecutive elements of a dataset into a single element. The `Dataset.batch()` transformation does exactly this, with the same constraints as the `tf.stack()` operator, applied to each component of the elements: i.e. for each component *i*, all elements must have a tensor of the exact same shape.
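To make that constraint concrete, here is a minimal sketch of what happens when elements with different shapes are batched directly (the exact error type and message may vary by TensorFlow version):

```python
# Elements have shapes [0], [1], [2], [3], so they cannot be stacked into a batch.
ragged = tf.data.Dataset.range(4).map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
try:
  for batch in ragged.batch(2):
    print(batch.numpy())
except tf.errors.InvalidArgumentError as e:
  print('Batching failed as expected:', type(e).__name__)
```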
|
inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
batched_dataset = dataset.batch(4)
for batch in batched_dataset.take(4):
print([arr.numpy() for arr in batch])
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape:
|
batched_dataset
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation:
|
batched_dataset = dataset.batch(7, drop_remainder=True)
batched_dataset
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Batching tensors with padding

The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying size (e.g. sequences of different lengths). To handle this case, the `Dataset.padded_batch` transformation enables you to batch tensors of different shape by specifying one or more dimensions in which they may be padded.
|
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
dataset = dataset.padded_batch(4, padded_shapes=(None,))
for batch in dataset.take(2):
print(batch.numpy())
print()
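# (Sketch) padded_batch can also pad every element to a fixed length and use a
# custom fill value instead of the default 0; the shapes and the -1 fill value
# below are illustrative choices, not part of the original guide.
fixed = tf.data.Dataset.range(100)
fixed = fixed.map(lambda x: tf.fill([tf.cast(x % 8, tf.int32)], x))
fixed = fixed.padded_batch(
    4, padded_shapes=(10,), padding_values=tf.constant(-1, dtype=tf.int64))
for batch in fixed.take(1):
  print(batch.numpy())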
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `Dataset.padded_batch` transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by `None` in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0, as sketched at the end of the code cell above.

<!--TODO(mrry): Add this section. Dense ragged -> tf.SparseTensor-->

Training workflows

Processing multiple epochs

The `tf.data` API offers two main ways to process multiple epochs of the same data. The simplest way to iterate over a dataset in multiple epochs is to use the `Dataset.repeat()` transformation. First, create a dataset of Titanic data:
|
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
def plot_batch_sizes(ds):
batch_sizes = [batch.shape[0] for batch in ds]
plt.bar(range(len(batch_sizes)), batch_sizes)
plt.xlabel('Batch number')
plt.ylabel('Batch size')
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Applying the `Dataset.repeat()` transformation with no arguments will repeat the input indefinitely. The `Dataset.repeat` transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next epoch. Because of this, a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries:
|
titanic_batches = titanic_lines.repeat(3).batch(128)
plot_batch_sizes(titanic_batches)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
If you need clear epoch separation, put `Dataset.batch` before the repeat:
|
titanic_batches = titanic_lines.batch(128).repeat(3)
plot_batch_sizes(titanic_batches)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch:
|
epochs = 3
dataset = titanic_lines.batch(128)
for epoch in range(epochs):
for batch in dataset:
print(batch.shape)
print("End of epoch: ", epoch)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Randomly shuffling input data

The `Dataset.shuffle()` transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer.

Note: While large `buffer_size`s shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem (a sketch follows the next code cell).

Add an index to the dataset so you can see the effect:
|
lines = tf.data.TextLineDataset(titanic_file)
counter = tf.data.experimental.Counter()
dataset = tf.data.Dataset.zip((counter, lines))
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(20)
dataset
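# (Sketch) The note above suggests interleaving across files instead of relying
# on a very large shuffle buffer; reusing `file_paths` from the text-data
# section, shuffle the file order and interleave their lines:
shuffled_files = tf.data.Dataset.from_tensor_slices(file_paths).shuffle(len(file_paths))
interleaved_lines = shuffled_files.interleave(tf.data.TextLineDataset, cycle_length=3)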
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
|
n,line_batch = next(iter(dataset))
print(n.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
As with `Dataset.batch`, the order relative to `Dataset.repeat` matters. `Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next:
|
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(60).take(5):
print(n.numpy())
shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.ylabel("Mean item ID")
plt.legend()
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
But a repeat before a shuffle mixes the epoch boundaries together:
|
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(55).take(15):
print(n.numpy())
repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.plot(repeat_shuffle, label="repeat().shuffle()")
plt.ylabel("Mean item ID")
plt.legend()
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Preprocessing data

The `Dataset.map(f)` transformation produces a new dataset by applying a given function `f` to each element of the input dataset. It is based on the [`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function that is commonly applied to lists (and other structures) in functional programming languages. The function `f` takes the `tf.Tensor` objects that represent a single element in the input, and returns the `tf.Tensor` objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another. This section covers common examples of how to use `Dataset.map()`.

Decoding image data and resizing it

When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size. Rebuild the flower filenames dataset:
|
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Write a function that manipulates the dataset elements.
|
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
parts = tf.strings.split(filename, os.sep)
label = parts[-2]
image = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [128, 128])
return image, label
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Test that it works.
|
file_path = next(iter(list_ds))
image, label = parse_image(file_path)
def show(image, label):
plt.figure()
plt.imshow(image)
plt.title(label.numpy().decode('utf-8'))
plt.axis('off')
show(image, label)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Map it over the dataset.
|
images_ds = list_ds.map(parse_image)
for image, label in images_ds.take(2):
show(image, label)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Applying arbitrary Python logic

For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation. For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.

Note: `tensorflow_addons` has a TensorFlow-compatible `rotate` in `tensorflow_addons.image.rotate` (a sketch appears at the end of the next code cell).

To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead:
|
import scipy.ndimage as ndimage
def random_rotate_image(image):
image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)
return image
image, label = next(iter(images_ds))
image = random_rotate_image(image)
show(image, label)
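# (Sketch) The note above mentions a TensorFlow-compatible rotate in
# tensorflow_addons; assuming that package is installed, it can run inside
# Dataset.map directly, without tf.py_function. The angle range (in radians)
# is an illustrative choice.
import tensorflow_addons as tfa

tfa_rot_ds = images_ds.map(
    lambda image, label: (tfa.image.rotate(image, tf.random.uniform([], -0.5, 0.5)), label))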
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
To use this function with `Dataset.map`, the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function:
|
def tf_random_rotate_image(image, label):
im_shape = image.shape
[image,] = tf.py_function(random_rotate_image, [image], [tf.float32])
image.set_shape(im_shape)
return image, label
rot_ds = images_ds.map(tf_random_rotate_image)
for image, label in rot_ds.take(2):
show(image, label)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Parsing `tf.Example` protocol buffer messages

Many input pipelines extract `tf.train.Example` protocol buffer messages from a TFRecord format. Each `tf.train.Example` record contains one or more "features", and the input pipeline typically converts these features into tensors.
|
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data:
|
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
feature = parsed.features.feature
raw_img = feature['image/encoded'].bytes_list.value[0]
img = tf.image.decode_png(raw_img)
plt.imshow(img)
plt.axis('off')
_ = plt.title(feature["image/text"].bytes_list.value[0])
raw_example = next(iter(dataset))
def tf_parse(eg):
example = tf.io.parse_example(
eg[tf.newaxis], {
'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string),
'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)
})
return example['image/encoded'][0], example['image/text'][0]
img, txt = tf_parse(raw_example)
print(txt.numpy())
print(repr(img.numpy()[:20]), "...")
decoded = dataset.map(tf_parse)
decoded
image_batch, text_batch = next(iter(decoded.batch(10)))
image_batch.shape
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Time series windowing

For an end-to-end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb). Time series data is often organized with the time axis intact. Use a simple `Dataset.range` to demonstrate:
|
range_ds = tf.data.Dataset.range(100000)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Typically, models based on this sort of data will want a contiguous time slice. The simplest approach would be to batch the data:

Using `batch`
|
batches = range_ds.batch(10, drop_remainder=True)
for batch in batches.take(5):
print(batch.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other:
|
def dense_1_step(batch):
# Shift features and labels one step relative to each other.
return batch[:-1], batch[1:]
predict_dense_1_step = batches.map(dense_1_step)
for features, label in predict_dense_1_step.take(3):
print(features.numpy(), " => ", label.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
To predict a whole window instead of a fixed offset you can split the batches into two parts:
|
batches = range_ds.batch(15, drop_remainder=True)
def label_next_5_steps(batch):
return (batch[:-5], # Take the first 5 steps
batch[-5:]) # take the remainder
predict_5_steps = batches.map(label_next_5_steps)
for features, label in predict_5_steps.take(3):
print(features.numpy(), " => ", label.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`:
|
feature_length = 10
label_length = 3
features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])
predicted_steps = tf.data.Dataset.zip((features, labels))
for features, label in predicted_steps.take(5):
print(features.numpy(), " => ", label.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Using `window`

While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](dataset_structure) for details.
|
window_size = 5
windows = range_ds.window(window_size, shift=1)
for sub_ds in windows.take(5):
print(sub_ds)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset:
|
for x in windows.flat_map(lambda x: x).take(30):
print(x.numpy(), end=' ')
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
In nearly all cases, you will want to `.batch` the dataset first:
|
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
for example in windows.flat_map(sub_to_batch).take(5):
print(example.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Now, you can see that the `shift` argument controls how much each window moves over. Putting this together, you might write this function:
|
def make_window_dataset(ds, window_size=5, shift=1, stride=1):
windows = ds.window(window_size, shift=shift, stride=stride)
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
windows = windows.flat_map(sub_to_batch)
return windows
ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)
for example in ds.take(10):
print(example.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Then it's easy to extract labels, as before:
|
dense_labels_ds = ds.map(dense_1_step)
for inputs,labels in dense_labels_ds.take(3):
print(inputs.numpy(), "=>", labels.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Resampling

When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.

Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
|
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',
fname='creditcard.zip',
extract=True)
csv_path = zip_path.replace('.zip', '.csv')
creditcard_ds = tf.data.experimental.make_csv_dataset(
csv_path, batch_size=1024, label_name="Class",
# Set the column types: 30 floats and an int.
column_defaults=[float()]*30+[int()])
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Now, check the distribution of classes; it is highly skewed:
|
def count(counts, batch):
features, labels = batch
class_1 = labels == 1
class_1 = tf.cast(class_1, tf.int32)
class_0 = labels == 0
class_0 = tf.cast(class_0, tf.int32)
counts['class_0'] += tf.reduce_sum(class_0)
counts['class_1'] += tf.reduce_sum(class_1)
return counts
counts = creditcard_ds.take(10).reduce(
initial_state={'class_0': 0, 'class_1': 0},
reduce_func = count)
counts = np.array([counts['class_0'].numpy(),
counts['class_1'].numpy()]).astype(np.float32)
fractions = counts/counts.sum()
print(fractions)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow:

Datasets sampling

One approach to resampling a dataset is to use `sample_from_datasets`. This is more applicable when you have a separate `data.Dataset` for each class. Here, just use filter to generate them from the credit card fraud data:
|
negative_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==0)
.repeat())
positive_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==1)
.repeat())
for features, label in positive_ds.batch(10).take(1):
print(label.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each:
|
balanced_ds = tf.data.experimental.sample_from_datasets(
[negative_ds, positive_ds], [0.5, 0.5]).batch(10)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Now the dataset produces examples of each class with 50/50 probability:
|
for features, labels in balanced_ds.take(10):
print(labels.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Rejection resampling

One problem with the above `experimental.sample_from_datasets` approach is that it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter` works, but results in all the data being loaded twice. The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance. `data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing. The elements of `creditcard_ds` are already `(features, label)` pairs. So the `class_func` just needs to return those labels:
|
def class_func(features, label):
return label
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The resampler also needs a target distribution, and optionally an initial distribution estimate:
|
resampler = tf.data.experimental.rejection_resample(
class_func, target_dist=[0.5, 0.5], initial_dist=fractions)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
|
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
|
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Now the dataset produces examples of each class with 50/50 probability:
|
for features, labels in balanced_ds.take(10):
print(labels.numpy())
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Iterator Checkpointing

TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note however that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator. To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
|
range_ds = tf.data.Dataset.range(20)
iterator = iter(range_ds)
ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3)
print([next(iterator).numpy() for _ in range(5)])
save_path = manager.save()
print([next(iterator).numpy() for _ in range(5)])
ckpt.restore(manager.latest_checkpoint)
print([next(iterator).numpy() for _ in range(5)])
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Note: It is not possible to checkpoint an iterator which relies on external state, such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state.

Using tf.data with tf.keras

The `tf.keras` API simplifies many aspects of creating and executing machine learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs support datasets as inputs. Here is a quick dataset and model setup:
|
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
|
model.fit(fmnist_train_ds, epochs=2)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
|
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
For evaluation you can pass the number of evaluation steps:
|
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
For long datasets, set the number of steps to evaluate:
|
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
The labels are not required when calling `Model.predict`.
|
predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)
result = model.predict(predict_ds, steps = 10)
print(result.shape)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
But the labels are ignored if you do pass a dataset containing them:
|
result = model.predict(fmnist_train_ds, steps = 10)
print(result.shape)
|
_____no_output_____
|
Apache-2.0
|
site/en/guide/data.ipynb
|
zyberg2091/docs
|
Special (Magic) Methods: since built-ins such as `print` and `len` do not work on a user-defined class by default, you can implement special methods to support them:
|
class Book():
def __init__ (self,title,author,pages):
self.title = title
self.author = author
self.pages = pages
def __str__ (self):
return f"{self.title} by {self.author} of {self.pages} pages."
def __len__ (self):
return self.pages
def __del__ (self):
print ("Book is deleted.")
mybook = Book("Python", "Jose", 200)
print (mybook)
str (mybook)
len (mybook)
del mybook
mybook
|
_____no_output_____
|
MIT
|
Section 08 - Object Oriented Prog/Lec 72 - Special (Magic) Methods.ipynb
|
sansjha4900/Udemy-Python-Notes
|
Let's import TensorFlow and check its version...
|
import tensorflow as tf
tf.__version__
|
_____no_output_____
|
Apache-2.0
|
images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb
|
goern/toolbox
|
Try to obtain information about the AICoE TensorFlow build.
|
import json
import os
try:
path = os.path.dirname(os.path.dirname(tf.__file__))
build_info_path = os.path.join(path, 'tensorflow-' + tf.__version__ + '.dist-info', 'build_info.json')
with open(build_info_path, 'r') as build_info_file:
build_info = json.load(build_info_file)
print(build_info)
except Exception as e:
print(e)
|
[Errno 2] No such file or directory: '/home/goern/.local/share/virtualenvs/aicoe-tensorflow-jupyter-toolbox-3i4NnwxE/lib/python3.6/site-packages/tensorflow-2.1.0.dist-info/build_info.json'
|
Apache-2.0
|
images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb
|
goern/toolbox
|
... and see if a GPU is available
|
tf.config.list_physical_devices('GPU')
|
_____no_output_____
|
Apache-2.0
|
images/fedora/aicoe-tensorflow-jupyter-toolbox/AI CoE's TensorFlow Jupyter Notebook.ipynb
|
goern/toolbox
|
Simple Neural Network

Implementing a simple ANN

The diagram below shows a simple network. The linear combination of the weights, inputs, and bias forms the input h, which is then passed through the activation function f(h), producing the final output of the perceptron, labeled y.

Diagram of a simple neural network: circles are units, boxes are operations.

What makes neural networks possible is that the activation function f(h) can be any function, not just the step function. For example, if f(h)=h, the output is the same as the input. Now the output of the network is $$h = \sum_{i=1}^{n} w_i x_i + b$$ This equation should look familiar, since it is the same one used in the linear regression model! Other common activation functions are the logistic function (also called the sigmoid), tanh, and the softmax function. We will work mainly with the sigmoid function for the rest of this lesson: $$f(h) = \mathrm{sigmoid}(h)=\frac{1}{1+e^{-h}}$$ Let's implement an ANN with just one neuron!

Importing the library
|
import numpy as np
|
_____no_output_____
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Function that computes the sigmoid
|
def sigmoid(x):
return 1/(1+np.exp(-x))
|
_____no_output_____
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Vector of input values
|
x = np.array([1.66, -0.22])
b = 0.1
|
_____no_output_____
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Weights of the synaptic connections
|
w = np.array([0.5, -0.3])
|
_____no_output_____
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Compute the linear combination of the inputs and synaptic weights
|
h = np.dot(x, w) + b
|
_____no_output_____
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Apply the neuron's activation function
|
y = sigmoid(h)
print('A Saida da rede eh: ', y)
|
A Saida da rede eh: 0.7302714044131816
|
MIT
|
T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
|
EdTonatto/UFFS-2020.2-Inteligencia_Artificial
|
Communication in Crisis

Acquire

Data: [Los Angeles Parking Citations](https://www.kaggle.com/cityofLA/los-angeles-parking-citations)

Load the dataset and filter for:
- Citations issued from 2017-01-01 to 2021-04-12.
- Street Sweeping violations - `Violation Description` == __"NO PARK/STREET CLEAN"__

Let's acquire the parking citations data from our file.
1. Import libraries.
1. Load the dataset.
1. Display the shape and first/last 2 rows.
1. Display general information about the dataset, with the number of unique values in each column.
1. Display the number of missing values in each column.
1. Descriptive statistics for all numeric features.
|
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import sys
import time
import folium.plugins as plugins
from IPython.display import HTML
import json
import datetime
import calplot
import folium
import math
sns.set()
from tqdm.notebook import tqdm
import src
# Filter warnings
from warnings import filterwarnings
filterwarnings('ignore')
# Load the data
df = src.get_sweep_data(prepared=False)
# Display the shape and dtypes of each column
print(df.shape)
df.info()
# Display the first two citations
df.head(2)
# Display the last two citations
df.tail(2)
# Display descriptive statistics of numeric columns
df.describe()
df.hist(figsize=(16, 8), bins=15)
plt.tight_layout();
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
__Initial findings__
- `Issue time` and `Marked Time` are quasi-normally distributed. Note: Poisson Distribution
- It's interesting to see the distribution of our activity on earth follows a normal distribution.
- Agencies 50+ write the most parking citations.
- Most fine amounts are less than $100.00
- There are a few null or invalid license plates.

Prepare
- Remove spaces + capitalization from each column name.
- Cast `Plate Expiry Date` to datetime data type.
- Cast `Issue Date` and `Issue Time` to datetime data types.
- Drop columns missing >=74.42\% of their values.
- Drop missing values.
- Transform Latitude and Longitude columns from NAD1983StatePlaneCaliforniaVFIPS0405 feet projection to EPSG:4326 World Geodetic System 1984: used in GPS [Standard] (see the sketch after this list).
- Filter data for street sweeping citations only.
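As an illustration of the projection step above, here is a minimal sketch using `pyproj`. The library choice and the EPSG:2229 code for NAD83 / California zone 5 (US feet) are assumptions; the actual transformation used in this project lives in `src`.

```python
from pyproj import Transformer

# Assumed mapping: NAD83 / California zone 5 (US survey feet) -> WGS 84 lat/lon
transformer = Transformer.from_crs("EPSG:2229", "EPSG:4326", always_xy=True)

def to_lat_lon(x_feet, y_feet):
    lon, lat = transformer.transform(x_feet, y_feet)
    return lat, lon
```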
|
# Prepare the data using a function stored in prepare.py
df_citations = src.get_sweep_data(prepared=True)
# Display the first two rows
df_citations.head(2)
# Check the column data types and non-null counts.
df_citations.info()
|
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2279063 entries, 0 to 2279062
Data columns (total 15 columns):
# Column Dtype
--- ------ -----
0 issue_date datetime64[ns]
1 issue_time object
2 location object
3 route object
4 agency float64
5 violation_description object
6 fine_amount float64
7 latitude float64
8 longitude float64
9 citation_year int64
10 citation_month int64
11 citation_day int64
12 day_of_week object
13 citation_hour int64
14 citation_minute int64
dtypes: datetime64[ns](1), float64(4), int64(5), object(5)
memory usage: 260.8+ MB
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Exploration

How much daily revenue is generated from street sweeper citations?

Daily Revenue from Street Sweeper Citations

Daily street sweeper citations increased in 2020.
|
# Daily street sweeping citation revenue
daily_revenue = df_citations.groupby('issue_date').fine_amount.sum()
daily_revenue.index = pd.to_datetime(daily_revenue.index)
df_sweep = src.street_sweep(data=df_citations)
df_d = src.resample_period(data=df_sweep)
df_m = src.resample_period(data=df_sweep, period='M')
df_d.head()
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
> __Anomaly__: Between March 2020 and October 2020, a Local Emergency was declared by the Mayor of Los Angeles in response to COVID-19. Street Sweeping was halted to help Angelenos Shelter in Place. _Street Sweeping resumed on 10/15/2020_.

Anomaly: Declaration of Local Emergency
|
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axvspan('2020-03-16', '2020-10-14', color='grey', alpha=.25)
plt.text('2020-03-29', 890_000, 'Declaration of\nLocal Emergency', fontsize=11)
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.axvline(datetime.datetime(2020, 10, 15), color='red', linestyle="--", label='October 15, 2020')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200K', '$400K', '$600K', '$800K',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Hypothesis Test

General Inquiry

Is the daily citation revenue after 10/15/2020 significantly greater than average?

Z-Score

$H_0$: The daily citation revenue after 10/15/2020 is less than or equal to the average daily revenue.

$H_a$: The daily citation revenue after 10/15/2020 is significantly greater than average.
|
confidence_interval = .997
# Directional Test
alpha = (1 - confidence_interval)/2
# Data to calculate z-scores using precovid values to calculate the mean and std
daily_revenue_precovid = df_d.loc[df_d.index < '2020-03-16']['revenue']
mean_precovid, std_precovid = daily_revenue_precovid.agg(['mean', 'std']).values
mean, std = df_d.agg(['mean', 'std']).values
# Calculating Z-Scores using precovid mean and std
z_scores_precovid = (df_d.revenue - mean_precovid)/std_precovid
z_scores_precovid.index = pd.to_datetime(z_scores_precovid.index)
sig_zscores_pre_covid = z_scores_precovid[z_scores_precovid>3]
# Calculating Z-Scores using entire data
z_scores = (df_d.revenue - mean)/std
z_scores.index = pd.to_datetime(z_scores.index)
sig_zscores = z_scores[z_scores>3]
sns.set_context('talk')
plt.figure(figsize=(12, 6))
sns.histplot(data=z_scores_precovid,
bins=50,
label='preCOVID z-scores')
sns.histplot(data=z_scores,
bins=50,
color='orange',
label='z-scores')
plt.title('Daily citation revenue after 10/15/2020 is significantly greater than average', fontsize=16)
plt.xlabel('Standard Deviations')
plt.ylabel('# of Days')
plt.axvline(3, color='Black', linestyle="--", label='3 Standard Deviations')
plt.xticks(np.linspace(-1, 9, 11))
plt.legend(fontsize=13);
a = stats.zscore(daily_revenue)
fig, ax = plt.subplots(figsize=(8, 8))
stats.probplot(a, plot=ax)
plt.xlabel("Quantile of Normal Distribution")
plt.ylabel("z-score");
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
p-values
|
p_values_precovid = z_scores_precovid.apply(stats.norm.cdf)
p_values = z_scores.apply(stats.norm.cdf)
significant_dates_precovid = p_values_precovid[(1-p_values_precovid) < alpha]
significant_dates = p_values[(1-p_values) < alpha]
# The chance of an outcome occuring by random chance
print(f'{alpha:0.3%}')
|
0.150%
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Cohen's D
|
fractions = [.1, .2, .5, .7, .9]
cohen_d = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean) / (std/math.sqrt(int(len(daily_revenue)*percentage)))
cohen_d_trial.append(d)
cohen_d.append(np.mean(cohen_d_trial))
cohen_d
fractions = [.1, .2, .5, .7, .9]
cohen_d_precovid = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue_precovid.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean_precovid) / (std_precovid/math.sqrt(int(len(daily_revenue_precovid)*percentage)))
cohen_d_trial.append(d)
cohen_d_precovid.append(np.mean(cohen_d_trial))
cohen_d_precovid
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Significant Dates with less than a 0.15% chance of occurring
- All dates that are considered significant occur after 10/15/2020.
- In the two weeks following 10/15/2020, significant events occurred on __Tuesdays and Wednesdays__.
|
dates_precovid = set(list(sig_zscores_pre_covid.index))
dates = set(list(sig_zscores.index))
common_dates = list(dates.intersection(dates_precovid))
common_dates = pd.to_datetime(common_dates).sort_values()
sig_zscores
pd.Series(common_dates.day_name(),
common_dates)
np.random.seed(sum(map(ord, 'calplot')))
all_days = pd.date_range('1/1/2020', '12/22/2020', freq='D')
significant_events = pd.Series(np.ones(len(common_dates)), index=common_dates)
calplot.calplot(significant_events, figsize=(18, 12), cmap='coolwarm_r');
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Which parts of the city were impacted the most?
|
df_outliers = df_citations.loc[df_citations.issue_date.isin(list(common_dates.astype('str')))]
df_outliers.reset_index(drop=True, inplace=True)
print(df_outliers.shape)
df_outliers.head()
m = folium.Map(location=[34.0522, -118.2437],
min_zoom=8,
max_bounds=True)
mc = plugins.MarkerCluster()
for index, row in df_outliers.iterrows():
mc.add_child(
folium.Marker(location=[str(row['latitude']), str(row['longitude'])],
popup='Cited {} {} at {}'.format(row['day_of_week'],
row['issue_date'],
row['issue_time'][:-3]),
control_scale=True,
clustered_marker=True
)
)
m.add_child(mc)
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Transferring map to Tableau

Conclusions

Appendix

What time(s) are Street Sweeping citations issued?

Most citations are issued during the hours of 8am, 10am, and 12pm.

Citation Times
|
# Filter street sweeping data for citations issued between
# 8 am and 2 pm, 8 and 14 respectively.
df_citation_times = df_citations.loc[(df_citations.issue_hour >= 8)&(df_citations.issue_hour < 14)]
sns.set_context('talk')
# Issue Hour Plot
df_citation_times.issue_hour.value_counts().sort_index().plot.bar(figsize=(8, 6))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued at 8am')
plt.xlabel('Issue Hour (24HR)')
plt.ylabel('# of Citations (in thousands)')
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(100_000, 400_001,100_000), ['100', '200', '300', '400'])
plt.show()
sns.set_context('talk')
# Issue Minute Plot
df_citation_times.issue_minute.value_counts().sort_index().plot.bar(figsize=(20, 9))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued in the First 30 Minutes')
plt.xlabel('Issue Minute')
plt.ylabel('# of Citations (in thousands)')
# plt.axvspan(0, 30, facecolor='grey', alpha=0.1)
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(5_000, 40_001, 5_000), ['5', '10', '15', '20', '25', '30', '35', '40'])
plt.tight_layout()
plt.show()
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Which state has the most Street Sweeping violators?

License Plate

Over 90% of all street sweeping citations are issued to California residents.
|
sns.set_context('talk')
fig = df_citations.rp_state_plate.value_counts(normalize=True).nlargest(3).plot.bar(figsize=(12, 6))
# Chart labels
plt.title('California residents receive the most street sweeping citations', fontsize=16)
plt.xlabel('State')
plt.ylabel('% of all Citations')
# Tick Formatting
plt.xticks(rotation=0)
plt.yticks(np.linspace(0, 1, 11), labels=[f'{i:0.0%}' for i in np.linspace(0, 1, 11)])
plt.grid(axis='x', alpha=.5)
plt.tight_layout();
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Which street has the most Street Sweeping citations?

The characteristics of the top 3 streets:
1. Vehicles are parked bumper to bumper, leaving few parking spaces available
2. Parking spaces have a set time limit
|
df_citations['street_name'] = df_citations.location.str.replace('^[\d+]{2,}', '').str.strip()
sns.set_context('talk')
# Removing the street number and white space from the address
df_citations.street_name.value_counts().nlargest(3).plot.barh(figsize=(16, 6))
# Chart formatting
plt.title('Streets with the Most Street Sweeping Citations', fontsize=24)
plt.xlabel('# of Citations');
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
__Abbot Kinney Blvd: "Small Boutiques, No Parking"__
> [Abbot Kinney Blvd on Google Maps](https://www.google.com/maps/@33.9923689,-118.4731719,3a,75y,112.99h,91.67t/data=!3m6!1e1!3m4!1sKD3cG40eGmdWxhwqLD1BvA!2e0!7i16384!8i8192)
- Near Venice Beach
- Small businesses and name brand stores line both sides of the street
- Little to no parking in this area
- Residential area inland
  - Multiplex style dwellings with available parking spaces
  - Weekly Street Sweeping on Monday from 7:30 am - 9:30 am

__Clinton Street: "Packed Street"__
> [Clinton Street on Google Maps](https://www.google.com/maps/@34.0816611,-118.3306842,3a,75y,70.72h,57.92t/data=!3m9!1e1!3m7!1sdozFgC7Ms3EvaOF4-CeNAg!2e0!7i16384!8i8192!9m2!1b1!2i37)
- All parking spaces on the street are filled
- Residential Area
  - Weekly Street Sweeping on Friday from 8:00 am - 11:00 am

__Kelton Ave: "2 Hour Time Limit"__
> [Kelton Ave on Google Maps](https://www.google.com/maps/place/Kelton+Ave,+Los+Angeles,+CA/@34.0475262,-118.437594,3a,49.9y,183.92h,85.26t/data=!3m9!1e1!3m7!1s5VICHNYMVEk9utaV5egFYg!2e0!7i16384!8i8192!9m2!1b1!2i25!4m5!3m4!1s0x80c2bb7efb3a05eb:0xe155071f3fe49df3!8m2!3d34.0542999!4d-118.4434919)
- Most parking spaces on this street are available. This is due to the strict 2 hour time limit for parked vehicles without the proper exception permit.
- Multiplex, Residential Area
  - Weekly Street Sweeping on Thursday from 10:00 am - 1:00 pm
  - Weekly Street Sweeping on Friday from 8:00 am - 10:00 am

Which street has the most Street Sweeping citations, given the day of the week?
- __Abbot Kinney Blvd__ is the most cited street on __Monday and Tuesday__
- __4th Street East__ is the most cited street on __Saturday and Sunday__
|
# Group by the day of the week and street name
df_day_street = df_citations.groupby(by=['day_of_week', 'street_name'])\
.size()\
.sort_values()\
.groupby(level=0)\
.tail(1)\
.reset_index()\
.rename(columns={0:'count'})
# Create a new column to sort the values by the day of the
# week starting with Monday
df_day_street['order'] = [5, 6, 4, 3, 0, 2, 1]
# Display the street with the most street sweeping citations
# given the day of the week.
df_day_street.sort_values('order').set_index('order')
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|
Which agencies issue the most street sweeping citations?

The Department of Transportation's __Western, Hollywood, and Valley__ subdivisions issue the most street sweeping citations.
|
sns.set_context('talk')
df_citations.agency.value_counts().nlargest(5).plot.barh(figsize=(12, 6));
# plt.axhspan(2.5, 5, facecolor='0.5', alpha=.8)
plt.title('Agencies With the Most Street Sweeper Citations')
plt.xlabel('# of Citations (in thousands)')
plt.xticks(np.arange(0, 400_001, 100_000), list(np.arange(0, 401, 100)))
plt.yticks([0, 1, 2, 3, 4], labels=['DOT-WESTERN',
'DOT-HOLLYWOOD',
'DOT-VALLEY',
'DOT-SOUTHERN',
'DOT-CENTRAL']);
|
_____no_output_____
|
MIT
|
MVP.ipynb
|
Promeos/LADOT-Street-Sweeping-Transition-Pan
|