And show it in the notebook
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()), height=500, width=500)
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel values of each of our 100 images; they go through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how, at each iteration, the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top-left image in the montage) to the last image (the bottom-right image).
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()), height=200, width=200)
Part Four - Open Exploration (Extra Credit)

Now I want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse: tries to guess where a given color should be painted? What if it were only taught a certain palette and had to reason about other colors: how would it interpret them? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, a different number of layers, or more or fewer neurons? I leave any of these as an open exploration for you.

Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit, or it will require a much larger machine and much more time to train. Then, whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to run for a few hours/overnight to produce something truly stunning!

Make sure to name the resulting gif "explore.gif", and be sure to include it in your zip file.

TODO! COMPLETE THIS SECTION!
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()), height=500, width=500)
Assignment Submission

After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files, named exactly as:

session-2/
    session-2.ipynb
    single.gif
    multiple.gif
    final.gif
    explore.gif*
    libs/
        utils.py

* = optional/extra-credit

You'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.

To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info

Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc., be sure to use the CADL hashtag so that other students can find your work!
utils.build_submission('session-2.zip',
                       ('reference.png',
                        'single.gif',
                        'multiple.gif',
                        'final.gif',
                        'session-2.ipynb'),
                       ('explore.gif',))  # trailing comma makes this a one-element tuple rather than a bare string
Cols to drop
# CUST_ID, ONEOFF_PURCHASES
Cust.info()
Cust.drop(["CUST_ID", "ONEOFF_PURCHASES"], axis=1, inplace=True)
Cust.info()
Cust.TENURE.unique()

# Handling outliers - Method 2: cap values at the 1st and 99th percentiles
def outlier_capping(x):
    x = x.clip(upper=x.quantile(0.99), lower=x.quantile(0.01))
    return x

Cust = Cust.apply(lambda x: outlier_capping(x))

# Handling missings - Method 2: impute with the column median
def Missing_imputation(x):
    x = x.fillna(x.median())
    return x

Cust = Cust.apply(lambda x: Missing_imputation(x))

Cust.corr()

# visualize correlation matrix in Seaborn using a heatmap
sns.heatmap(Cust.corr())
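To see what `outlier_capping` does, here's a toy illustration on made-up values (not part of the original notebook; assumes pandas is imported as pd, as elsewhere in this notebook):

demo = pd.Series(list(range(99)) + [10_000])
capped = demo.clip(upper=demo.quantile(0.99), lower=demo.quantile(0.01))
print(capped.max())  # the extreme value 10000 is pulled down to roughly the 99th-percentile value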
Standardizing data - to put the data on the same scale
from sklearn.preprocessing import StandardScaler  # import assumed; not shown in this excerpt

sc = StandardScaler()
Cust_scaled = sc.fit_transform(Cust)
pd.DataFrame(Cust_scaled).shape
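As a quick check that standardization worked, each column of `Cust_scaled` should now have mean of roughly 0 and standard deviation of roughly 1 (a small sketch, assuming numpy is imported as np, as elsewhere in this notebook):

print(np.allclose(Cust_scaled.mean(axis=0), 0))  # True: column means are ~0
print(np.allclose(Cust_scaled.std(axis=0), 1))   # True for non-constant columns: stds are ~1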
Applying PCA
from sklearn.decomposition import PCA  # import assumed; not shown in this excerpt

pc = PCA(n_components=16)
pc.fit(Cust_scaled)

pc.explained_variance_        # eigenvalues
sum(pc.explained_variance_)

# The amount of variance that each PC explains
var = pc.explained_variance_ratio_
var

# Cumulative variance explained
var1 = np.cumsum(np.round(pc.explained_variance_ratio_, decimals=4) * 100)
var1
The number of components was chosen as 6, based on the cumulative variance explained being >75% and each retained component individually explaining >0.8 units of variance.
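The same threshold rule can be applied programmatically. A small sketch, reusing `var1` (cumulative variance in percent) from the cell above:

# smallest number of components whose cumulative explained variance passes 75%
n_keep = int(np.argmax(var1 > 75)) + 1
print(n_keep)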
pc_final = PCA(n_components=6).fit(Cust_scaled)
pc_final.explained_variance_

reduced_cr = pc_final.transform(Cust_scaled)
dimensions = pd.DataFrame(reduced_cr)
dimensions
dimensions.columns = ["C1", "C2", "C3", "C4", "C5", "C6"]
dimensions.head()
Factor Loading Matrix

Loadings = Eigenvectors * sqrt(Eigenvalues)

Loadings are the covariances/correlations between the original variables and the unit-scaled components.
Loadings = pd.DataFrame((pc_final.components_.T * np.sqrt(pc_final.explained_variance_)).T,
                        columns=Cust.columns).T
Loadings.to_csv("Loadings.csv")
Clustering
# selected the list of variables from PCA based on the factor loading matrix
list_var = ['PURCHASES_TRX', 'INSTALLMENTS_PURCHASES', 'PURCHASES_INSTALLMENTS_FREQUENCY',
            'MINIMUM_PAYMENTS', 'BALANCE', 'CREDIT_LIMIT', 'CASH_ADVANCE',
            'PRC_FULL_PAYMENT', 'ONEOFF_PURCHASES_FREQUENCY']

Cust_scaled1 = pd.DataFrame(Cust_scaled, columns=Cust.columns)
Cust_scaled1.head(5)

Cust_scaled2 = Cust_scaled1[list_var]
Cust_scaled2.head(5)
Segmentation
from sklearn.cluster import KMeans  # imports assumed; not shown in this excerpt
from sklearn import metrics

km_3 = KMeans(n_clusters=3, random_state=123)
km_3.fit(Cust_scaled2)
print(km_3.labels_)
km_3.cluster_centers_

km_4 = KMeans(n_clusters=4, random_state=123).fit(Cust_scaled2)
km_5 = KMeans(n_clusters=5, random_state=123).fit(Cust_scaled2)
km_6 = KMeans(n_clusters=6, random_state=123).fit(Cust_scaled2)
km_7 = KMeans(n_clusters=7, random_state=123).fit(Cust_scaled2)
km_8 = KMeans(n_clusters=8, random_state=123).fit(Cust_scaled2)

# silhouette score for k=3 (the cluster-count comparison comes in the next section)
metrics.silhouette_score(Cust_scaled2, km_3.labels_)

# Concatenating labels found through KMeans with the data:
# save the cluster labels and sort by cluster
Cust['cluster_3'] = km_3.labels_
Cust['cluster_4'] = km_4.labels_
Cust['cluster_5'] = km_5.labels_
Cust['cluster_6'] = km_6.labels_
Cust['cluster_7'] = km_7.labels_
Cust['cluster_8'] = km_8.labels_
Cust.head()
Choosing the number of clusters using the Silhouette Coefficient
# calculate the silhouette coefficient for k=3
from sklearn import metrics
metrics.silhouette_score(Cust_scaled2, km_3.labels_)

# calculate the silhouette coefficient for k=3 through k=12
k_range = range(3, 13)
scores = []
for k in k_range:
    km = KMeans(n_clusters=k, random_state=123)
    km.fit(Cust_scaled2)
    scores.append(metrics.silhouette_score(Cust_scaled2, km.labels_))
scores

# plot the results
import matplotlib.pyplot as plt  # import assumed; not shown in this excerpt
plt.plot(k_range, scores)
plt.xlabel('Number of clusters')
plt.ylabel('Silhouette Coefficient')
plt.grid(True)
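To make the choice explicit, the best k can be read straight off the scores list. A small sketch reusing `k_range` and `scores` from the cell above (assumes numpy is imported as np):

best_k = k_range[int(np.argmax(scores))]
print("k with the highest silhouette score:", best_k)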
Segment Distribution
Cust.cluster_3.value_counts() * 100 / sum(Cust.cluster_3.value_counts())
pd.Series.sort_index(Cust.cluster_3.value_counts())
Profiling
size = pd.concat([pd.Series(Cust.cluster_3.size),
                  pd.Series.sort_index(Cust.cluster_3.value_counts()),
                  pd.Series.sort_index(Cust.cluster_4.value_counts()),
                  pd.Series.sort_index(Cust.cluster_5.value_counts()),
                  pd.Series.sort_index(Cust.cluster_6.value_counts()),
                  pd.Series.sort_index(Cust.cluster_7.value_counts()),
                  pd.Series.sort_index(Cust.cluster_8.value_counts())])
size

Seg_size = pd.DataFrame(size, columns=['Seg_size'])
Seg_Pct = pd.DataFrame(size / Cust.cluster_3.size, columns=['Seg_Pct'])
Seg_size.T
Seg_Pct.T

pd.concat([Seg_size.T, Seg_Pct.T], axis=0)
Cust.head()

# The mean gives a good indication of the distribution of the data,
# so we compute the mean of each variable for each cluster.
Profling_output = pd.concat([Cust.apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_3').apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_4').apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_5').apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_6').apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_7').apply(lambda x: x.mean()).T,
                             Cust.groupby('cluster_8').apply(lambda x: x.mean()).T], axis=1)
Profling_output

Profling_output_final = pd.concat([Seg_size.T, Seg_Pct.T, Profling_output], axis=0)
Profling_output_final

# Profling_output_final.columns = ['Seg_' + str(i) for i in Profling_output_final.columns]
Profling_output_final.columns = ['Overall', 'KM3_1', 'KM3_2', 'KM3_3',
                                 'KM4_1', 'KM4_2', 'KM4_3', 'KM4_4',
                                 'KM5_1', 'KM5_2', 'KM5_3', 'KM5_4', 'KM5_5',
                                 'KM6_1', 'KM6_2', 'KM6_3', 'KM6_4', 'KM6_5', 'KM6_6',
                                 'KM7_1', 'KM7_2', 'KM7_3', 'KM7_4', 'KM7_5', 'KM7_6', 'KM7_7',
                                 'KM8_1', 'KM8_2', 'KM8_3', 'KM8_4', 'KM8_5', 'KM8_6', 'KM8_7', 'KM8_8']
Profling_output_final

Profling_output_final.to_csv('Profiling_output.csv')
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Writing a training loop from scratch

Setup
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
Introduction

Keras provides default training and evaluation loops, `fit()` and `evaluate()`. Their usage is covered in the guide [Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).

If you want to customize the learning algorithm of your model while still leveraging the convenience of `fit()` (for instance, to train a GAN using `fit()`), you can subclass the `Model` class and implement your own `train_step()` method, which is called repeatedly during `fit()`. This is covered in the guide [Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).

Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about.

Using the `GradientTape`: a first end-to-end example

Calling a model inside a `GradientTape` scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using `model.trainable_weights`).

Let's consider a simple MNIST model:
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
Let's train it using mini-batch gradient descent with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset:
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
Here's our training loop:

- We open a `for` loop that iterates over epochs
- For each epoch, we open a `for` loop that iterates over the dataset, in batches
- For each batch, we open a `GradientTape()` scope
- Inside this scope, we call the model (forward pass) and compute the loss
- Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss
- Finally, we use the optimizer to update the weights of the model based on the gradients
epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):

        # Open a GradientTape to record the operations run
        # during the forward pass, which enables auto-differentiation.
        with tf.GradientTape() as tape:

            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            logits = model(x_batch_train, training=True)  # Logits for this minibatch

            # Compute the loss value for this minibatch.
            loss_value = loss_fn(y_batch_train, logits)

        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, model.trainable_weights)

        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %s samples" % ((step + 1) * 64))
Low-level handling of metrics

Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow:

- Instantiate the metric at the start of the loop
- Call `metric.update_state()` after each batch
- Call `metric.result()` when you need to display the current value of the metric
- Call `metric.reset_states()` when you need to clear the state of the metric (typically at the end of an epoch)

Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data at the end of each epoch:
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()

# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Prepare the validation dataset.
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
Here's our training & evaluation loop:
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Update training metric.
        train_acc_metric.update_state(y_batch_train, logits)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * 64))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val, training=False)
        # Update val metrics
        val_acc_metric.update_state(y_batch_val, val_logits)
    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
Speeding-up your training step with `tf.function`

The default runtime in TensorFlow 2.0 is [eager execution](https://www.tensorflow.org/guide/eager). As such, our training loop above executes eagerly.

This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next.

You can compile into a static graph any function that takes tensors as input. Just add a `@tf.function` decorator on it, like this:
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
Let's do the same with the evaluation step:
@tf.function
def test_step(x, y):
    val_logits = model(x, training=False)
    val_acc_metric.update_state(y, val_logits)
Now, let's re-run our training loop with this compiled training step:
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        loss_value = train_step(x_batch_train, y_batch_train)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * 64))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        test_step(x_batch_val, y_batch_val)

    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
Much faster, isn't it?

Low-level handling of losses tracked by the model

Layers & models recursively track any losses created during the forward pass by layers that call `self.add_loss(value)`. The resulting list of scalar loss values is available via the property `model.losses` at the end of the forward pass.

If you want to use these loss components, you should sum them and add them to the main loss in your training step.

Consider this layer, which creates an activity regularization loss:
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs
Let's build a really simple model that uses it:
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
Here's what our training step should look like now:
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
        # Add any extra losses created during the forward pass.
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
Summary

Now you know everything there is to know about using built-in training loops and writing your own from scratch.

To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits.

End-to-end example: a GAN training loop from scratch

You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images).

A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network).

A GAN training loop looks like this:

1) Train the discriminator.
- Sample a batch of random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Get a batch of real images and combine them with the generated images.
- Train the "discriminator" model to classify generated vs. real images.

2) Train the generator.
- Sample random points in the latent space.
- Turn the points into fake images via the "generator" network.
- Get a batch of real images and combine them with the generated images.
- Train the "generator" model to "fool" the discriminator and classify the fake images as real.

For a much more detailed overview of how GANs work, see [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).

Let's implement this training loop. First, create the discriminator meant to classify fake vs real digits:
discriminator = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.GlobalMaxPooling2D(),
        layers.Dense(1),
    ],
    name="discriminator",
)
discriminator.summary()
Then let's create a generator network that turns latent vectors into outputs of shape `(28, 28, 1)` (representing MNIST digits):
latent_dim = 128

generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        # We want to generate 128 coefficients to reshape into a 7x7x128 map
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
    ],
    name="generator",
)
Here's the key bit: the training loop. As you can see, it is quite straightforward. The training step function only takes 17 lines.
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)

# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)


@tf.function
def train_step(real_images):
    # Sample random points in the latent space
    random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
    # Decode them to fake images
    generated_images = generator(random_latent_vectors)
    # Combine them with real images
    combined_images = tf.concat([generated_images, real_images], axis=0)

    # Assemble labels discriminating real from fake images
    labels = tf.concat(
        [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
    )
    # Add random noise to the labels - important trick!
    labels += 0.05 * tf.random.uniform(labels.shape)

    # Train the discriminator
    with tf.GradientTape() as tape:
        predictions = discriminator(combined_images)
        d_loss = loss_fn(labels, predictions)
    grads = tape.gradient(d_loss, discriminator.trainable_weights)
    d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))

    # Sample random points in the latent space
    random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
    # Assemble labels that say "all real images"
    misleading_labels = tf.zeros((batch_size, 1))

    # Train the generator (note that we should *not* update the weights
    # of the discriminator)!
    with tf.GradientTape() as tape:
        predictions = discriminator(generator(random_latent_vectors))
        g_loss = loss_fn(misleading_labels, predictions)
    grads = tape.gradient(g_loss, generator.trainable_weights)
    g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
    return d_loss, g_loss, generated_images
Let's train our GAN by repeatedly calling `train_step` on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU.
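Before launching training, it can help to confirm that TensorFlow actually sees a GPU. This small check is not part of the original guide:

print(tf.config.list_physical_devices("GPU"))  # an empty list means training will run on CPU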
import os

# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)

epochs = 1  # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"

for epoch in range(epochs):
    print("\nStart epoch", epoch)

    for step, real_images in enumerate(dataset):
        # Train the discriminator & generator on one batch of real images.
        d_loss, g_loss, generated_images = train_step(real_images)

        # Logging.
        if step % 200 == 0:
            # Print metrics
            print("discriminator loss at step %d: %.2f" % (step, d_loss))
            print("adversarial loss at step %d: %.2f" % (step, g_loss))

            # Save one generated image
            img = tf.keras.preprocessing.image.array_to_img(
                generated_images[0] * 255.0, scale=False
            )
            img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))

        # To limit execution time we stop after 10 steps.
        # Remove the lines below to actually train the model!
        if step > 10:
            break
Title
# SERVIÇO FLORESTAL BRASILEIRO (Brazilian Forest Service)
# Sistema Nacional de Informações Florestais (National Forest Information System)
# Incêndios Florestais (Forest Fires)
Import libs
import pandas as pd
Import data
# source (fonte): https://snif.florestal.gov.br/pt-br/incendios-florestais
df = pd.read_excel('focos_calor_1998_2019.xlsx')
df.shape
df.info()
df
df[df['Número'] == df['Número'].max()]
Using kNN from scikit-learn
from sklearn.neighbors import KNeighborsClassifier

# X_train, y_train and the query point x are assumed to be defined earlier in the notebook.
kNN_classifier = KNeighborsClassifier(n_neighbors=6)
kNN_classifier.fit(X_train, y_train)
kNN_classifier.predict(x)
y_predict = kNN_classifier.predict(x)
y_predict[0]
Reorganizing our own kNN code
%run ../kNN/kNN.py

knn_clf = KNNClassifier(k=6)
knn_clf.fit(X_train, y_train)
y_predict = knn_clf.predict(x)
y_predict
y_predict[0]
Week 3: Improve MNIST with Convolutions

In the videos you looked at how you would improve Fashion MNIST using Convolutions. For this exercise, see if you can improve MNIST to 99.5% accuracy or more by adding only a single convolutional layer and a single MaxPooling2D layer to the model from the assignment of the previous week. You should stop training once the accuracy goes above this amount. It should happen in less than 10 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your callback.

When 99.5% accuracy has been hit, you should print out the string "Reached 99.5% accuracy so cancelling training!"
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
Begin by loading the data. A couple of things to notice:

- The file `mnist.npz` is already included in the current workspace under the `data` directory. By default, `load_data` from Keras accepts a path relative to `~/.keras/datasets`, but in this case it is stored somewhere else; as a result, you need to specify the full path.
- `load_data` returns the train and test sets in the form of the tuples `(x_train, y_train), (x_test, y_test)`, but in this exercise you will only need the train set, so you can ignore the second tuple.
# Load the data

# Get current working directory
current_dir = os.getcwd()

# Append data/mnist.npz to the previous path to get the full path
data_path = os.path.join(current_dir, "data/mnist.npz")

# Get only training set
(training_images, training_labels), _ = tf.keras.datasets.mnist.load_data(path=data_path)
One important step when dealing with image data is to preprocess the data. During the preprocessing step you can apply transformations to the dataset that will be fed into your convolutional neural network.

Here you will apply two transformations to the data:

- Reshape the data so that it has an extra dimension. The reason for this is that commonly you will use 3-dimensional arrays (without counting the batch dimension) to represent image data. The third dimension represents the color using RGB values. This data might be in black and white format, so the third dimension doesn't really add any additional information for the classification process, but it is a good practice regardless.
- Normalize the pixel values so that these are values between 0 and 1. You can achieve this by dividing every value in the array by the maximum.

Remember that these tensors are of type `numpy.ndarray`, so you can use functions like [reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) or [divide](https://numpy.org/doc/stable/reference/generated/numpy.divide.html) to complete the `reshape_and_normalize` function below:
# GRADED FUNCTION: reshape_and_normalize

def reshape_and_normalize(images):

    ### START CODE HERE

    # Reshape the images to add an extra dimension
    images = np.reshape(images, images.shape + (1,))

    # Normalize pixel values
    images = np.divide(images, 255)

    ### END CODE HERE

    return images
Test your function with the next cell:
# Reload the images in case you run this cell multiple times
(training_images, _), _ = tf.keras.datasets.mnist.load_data(path=data_path)

# Apply your function
training_images = reshape_and_normalize(training_images)

print(f"Maximum pixel value after normalization: {np.max(training_images)}\n")
print(f"Shape of training set after reshaping: {training_images.shape}\n")
print(f"Shape of one image after reshaping: {training_images[0].shape}")
Maximum pixel value after normalization: 1.0

Shape of training set after reshaping: (60000, 28, 28, 1)

Shape of one image after reshaping: (28, 28, 1)
**Expected Output:**
```
Maximum pixel value after normalization: 1.0

Shape of training set after reshaping: (60000, 28, 28, 1)

Shape of one image after reshaping: (28, 28, 1)
```

Now complete the callback that will ensure that training will stop after an accuracy of 99.5% is reached:
# GRADED CLASS: myCallback
### START CODE HERE

# Remember to inherit from the correct class
class myCallback(tf.keras.callbacks.Callback):
    # Define the method that checks the accuracy at the end of each epoch
    def on_epoch_end(self, epoch, logs={}):
        # check accuracy
        if logs.get('accuracy') >= 0.995:
            print('\nReached 99.5% accuracy so cancelling training!')
            self.model.stop_training = True

### END CODE HERE
Finally, complete the `convolutional_model` function below. This function should return your convolutional neural network:
# GRADED FUNCTION: convolutional_model
def convolutional_model():
    ### START CODE HERE

    # Define the model, it should have 5 layers:
    # - A Conv2D layer with 32 filters, a kernel_size of 3x3, ReLU activation function
    #   and an input shape that matches that of every image in the training set
    # - A MaxPooling2D layer with a pool_size of 2x2
    # - A Flatten layer with no arguments
    # - A Dense layer with 128 units and ReLU activation function
    # - A Dense layer with 10 units and softmax activation function
    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

    ### END CODE HERE

    # Compile the model
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model

# Save your untrained model
model = convolutional_model()

# Instantiate the callback class
callbacks = myCallback()

# Train your model (this can take up to 5 minutes)
history = model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])
Epoch 1/10
1875/1875 [==============================] - 36s 19ms/step - loss: 0.1522 - accuracy: 0.9548
Epoch 2/10
1875/1875 [==============================] - 35s 19ms/step - loss: 0.0529 - accuracy: 0.9840
Epoch 3/10
1875/1875 [==============================] - 35s 19ms/step - loss: 0.0327 - accuracy: 0.9897
Epoch 4/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0212 - accuracy: 0.9935
Epoch 5/10
1872/1875 [============================>.] - ETA: 0s - loss: 0.0150 - accuracy: 0.9952
Reached 99.5% accuracy so cancelling training!
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0150 - accuracy: 0.9952
If you see the message that you defined in your callback printed out after less than 10 epochs, it means your callback worked as expected. You can also double-check by running the following cell:
print(f"Your model was trained for {len(history.epoch)} epochs")
Your model was trained for 5 epochs
Data and Training

The **augmented** cough audio dataset of [Project Coswara](https://coswara.iisc.ac.in/about) was used to train the deep CNN model. The preprocessing steps and CNN architecture are as shown below. The training code is concealed on Github to protect the exact hyperparameters and maintain performance integrity of the model.

Model Deployment on IBM Watson Machine Learning

Below are the contents of an IBM Watson Studio Notebook for deploying our trained ML model on IBM Watson Machine Learning. Outputs, keys, endpoints and URLs are removed (replaced with <>) to maintain privacy.

Import model
import ibm_boto3
from ibm_botocore.client import Config

# @hidden_cell
# The following code contains the credentials for a file in your IBM Cloud Object Storage.
# You might want to remove those credentials before you share your notebook.
credentials_2 = {
    'IAM_SERVICE_ID': <>,
    'IBM_API_KEY_ID': <>,
    'ENDPOINT': <>,
    'IBM_AUTH_ENDPOINT': <>,
    'BUCKET': <>,
    'FILE': 'cough-it-model.tgz'
}

cos = ibm_boto3.client(service_name='s3',
                       ibm_api_key_id=credentials_2['IBM_API_KEY_ID'],
                       ibm_auth_endpoint=credentials_2['IBM_AUTH_ENDPOINT'],
                       ibm_service_instance_id=credentials_2['IAM_SERVICE_ID'],
                       config=Config(signature_version='oauth'),
                       endpoint_url=credentials_2['ENDPOINT'])

cos.download_file(Bucket=credentials_2['BUCKET'],
                  Key='cough-it-model.h5.tgz',
                  Filename='cough-it-model.h5.tgz')

model_path = 'cough-it-model.h5.tgz'
Set up Watson Machine Learning Client and Deployment space
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "apikey": <>,
    "url": <>
}
client = APIClient(wml_credentials)

space_guid = <>
client.set.default_space(space_guid)
Store the model
software_spec_uid = client.software_specifications.get_id_by_name("default_py3.8")

metadata = {
    client.repository.ModelMetaNames.NAME: "cough-it model",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
    client.repository.ModelMetaNames.TYPE: "tensorflow_2.4"
}

published_model = client.repository.store_model(model=model_path, meta_props=metadata)

import json

published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)
print(json.dumps(model_details, indent=2))
Create a deployment
dep_metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of external Keras model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(published_model_uid, meta_props=dep_metadata)

deployment_uid = client.deployments.get_uid(created_deployment)
client.deployments.get_details(deployment_uid)
Joy Ride - Part 3: Parallel Parking

In this section you will write a function that implements the correct sequence of steps required to parallel park a vehicle.

NOTE: for this segment the vehicle's maximum speed has been set to just over 4 mph. This should make parking a little easier.

![](https://upload.wikimedia.org/wikipedia/commons/2/26/ParallelParkingAnimation.gif)

If you have never heard of WASD keys, please check out this [link](https://en.wikipedia.org/wiki/Arrow_keys#WASD_keys).

Instructions to get started

1. Run the `SETUP CELL` below this one by pressing `Ctrl + Enter`.
2. Click the button that says "Load Car Simulator". The simulator will appear below the button.
3. Run the cell below the simulator, marked `CODE CELL` (hit `Ctrl + Enter`).
4. Try to drive the car using WASD keys. You might notice a problem...
5. Press the **Reset** button in the simulator and then modify the code in the `CODE CELL` as per the instructions in TODO comments.
6. When you think you've fixed the problem, run the code cell again.

**NOTE** - Depending on your computer, it may take a few minutes for the simulator to load! Please be patient.

Instructions to Reload the Simulator

Once the simulator is loaded, the `SETUP CELL` cannot be rerun, or it will prevent the simulator from appearing. If something happens to the simulator, you can do the following:

- Go to Jupyter's menu: Kernel --> Restart and Clear Output
- Reload the page (Cmd-R)
- Run the first cell again
- Click the green `Load Car Simulator` button again
%%HTML
<!-- SETUP CELL -->
<link rel="stylesheet" type="text/css" href="buttonStyle.css">
<button id="launcher">Load Car Simulator </button>
<button id="restart">Restart Connection</button>
<script src="setupLauncher.js"></script><div id="simulator_frame"></div>
<script src="kernelRestart.js"></script>

# CODE CELL
# Before/After running any code changes make sure to click the button "Restart Connection" above first.
# Also make sure to click Reset in the simulator to refresh the connection.
# You need to wait for the Kernel Ready message.

car_parameters = {"throttle": 0, "steer": 0, "brake": 0}

def control(pos_x, pos_y, time, velocity):
    """Controls the simulated car."""
    global car_parameters

    # The car will back up with a steering of 25 for 3 seconds,
    # then back up with a steering of -25 until its y position is less than 32.5,
    # then steer straight and brake.
    if time < 3:
        car_parameters['throttle'] = -1
        car_parameters['steer'] = 25
    elif pos_y > 32.5:
        car_parameters['throttle'] = -1
        car_parameters['steer'] = -25
    else:
        car_parameters['steer'] = 0
        car_parameters['brake'] = 1

    return car_parameters

import src.simulate as sim
sim.run(control)
running CONNECTED ('172.18.0.1', 50088) connected
Submitting this Project!

Your parallel park function is "correct" when:

1. Your car doesn't hit any other cars.
2. Your car stops completely inside of the right lane.

Once you've got it working, it's time to submit. Submit by pressing the `SUBMIT` button at the lower right corner of this page.
# CODE CELL
# Before/After running any code changes make sure to click the button "Restart Connection" above first.
# Also make sure to click Reset in the simulator to refresh the connection.
# You need to wait for the Kernel Ready message.

car_parameters = {"throttle": 0, "steer": 0, "brake": 0}

def control(pos_x, pos_y, time, velocity):
    """Controls the simulated car."""
    global car_parameters

    # The car will back up with a steering of 25 for 3 seconds,
    # then back up with a steering of -25 until its y position is less than 32.5,
    # then steer straight and brake.
    if time < 3:
        car_parameters['throttle'] = -1
        car_parameters['steer'] = 25
    elif pos_y > 32.5:
        car_parameters['throttle'] = -1
        car_parameters['steer'] = -25
    else:
        car_parameters['steer'] = 0
        car_parameters['brake'] = 1

    return car_parameters

import src.simulate as sim
sim.run(control)
The Basics of NumPy Arrays

**Python - NumPy Practice Session S4: Save a copy in your local drive and work**

Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array. This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays. While the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book. Get to know them well!

We'll cover a few categories of basic array manipulations here:

- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays
- *Indexing of arrays*: Getting and setting the value of individual array elements
- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array
- *Reshaping of arrays*: Changing the shape of a given array
- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many

NumPy Array Attributes

First let's discuss some useful array attributes. We'll start by defining three random arrays: a one-dimensional, two-dimensional, and three-dimensional array. We'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run:
import numpy as np
np.random.seed(0)  # seed for reproducibility

x1 = np.random.randint(10, size=6)          # One-dimensional array
x2 = np.random.randint(10, size=(3, 4))     # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5))  # Three-dimensional array
Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array):
print("x3 ndim: ", x3.ndim) print("x3 shape:", x3.shape) print("x3 size: ", x3.size)
Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)):
print("dtype:", x3.dtype)
Other attributes include ``itemsize``, which lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array:
print("itemsize:", x3.itemsize, "bytes") print("nbytes:", x3.nbytes, "bytes")
In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``.

Array Indexing: Accessing Single Elements

If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar. In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
x1
x1[0]
x1[4]
To index from the end of the array, you can use negative indices:
x1[-1]
x1[-2]
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
x2
x2[0, 0]
x2[2, 0]
x2[2, -1]
Values can also be modified using any of the above index notation:
x2[0, 0] = 12
x2
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.This means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!
x1[0] = 3.14159  # this will be truncated!
x1
Array Slicing: Accessing Subarrays

Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character. The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:

``` python
x[start:stop:step]
```

If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``. We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions.

One-dimensional subarrays
x = np.arange(10)
x
x[:5]    # first five elements
x[5:]    # elements after index 5
x[4:7]   # middle sub-array
x[::2]   # every other element
x[1::2]  # every other element, starting at index 1
A potentially confusing case is when the ``step`` value is negative.In this case, the defaults for ``start`` and ``stop`` are swapped.This becomes a convenient way to reverse an array:
x[::-1]   # all elements, reversed
x[5::-2]  # reversed every other from index 5
Multi-dimensional subarrays

Multi-dimensional slices work in the same way, with multiple slices separated by commas. For example:
x2
x2[:2, :3]   # two rows, three columns
x2[:3, ::2]  # all rows, every other column
Finally, subarray dimensions can even be reversed together:
x2[::-1, ::-1]
Accessing array rows and columns

One commonly needed routine is accessing of single rows or columns of an array. This can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``):
print(x2[:, 0])  # first column of x2
print(x2[0, :])  # first row of x2
In the case of row access, the empty slice can be omitted for a more compact syntax:
print(x2[0]) # equivalent to x2[0, :]
Subarrays as no-copy views

One important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data. This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies. Consider our two-dimensional array from before:
print(x2)
Let's extract a $2 \times 2$ subarray from this:
x2_sub = x2[:2, :2]
print(x2_sub)
Now if we modify this subarray, we'll see that the original array is changed! Observe:
x2_sub[0, 0] = 99
print(x2_sub)
print(x2)
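You can confirm that the slice really is a view into the same buffer with ``np.shares_memory`` (a small check, not part of the original text):

print(np.shares_memory(x2, x2_sub))  # True: both names refer to the same underlying data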
This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer.

Creating copies of arrays

Despite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method:
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)
[[3 5]
 [7 6]]
If we now modify this subarray, the original array is not touched:
x2_sub_copy[0, 0] = 42
print(x2_sub_copy)
print(x2)
Reshaping of Arrays

Another useful type of operation is reshaping of arrays. The most flexible way of doing this is with the ``reshape`` method. For example, if you want to put the numbers 1 through 9 in a $3 \times 3$ grid, you can do the following:
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
Note that for this to work, the size of the initial array must match the size of the reshaped array. Where possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.This can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation:
x = np.array([1, 2, 3])

# row vector via reshape
x.reshape((1, 3))

# row vector via newaxis
x[np.newaxis, :]

# column vector via reshape
x.reshape((3, 1))

# column vector via newaxis
x[:, np.newaxis]
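If you prefer a function call over index tricks, ``np.expand_dims`` is an equivalent spelling of the ``newaxis`` idiom (a small aside, not part of the original text):

print(np.expand_dims(x, axis=0).shape)  # (1, 3), same as x[np.newaxis, :]
print(np.expand_dims(x, axis=1).shape)  # (3, 1), same as x[:, np.newaxis]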
We will see this type of transformation often throughout the remainder of the book.

Array Concatenation and Splitting

All of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here.

Concatenation of arrays

Concatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``. ``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here:
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
You can also concatenate more than two arrays at once:
z = [99, 99, 99]
print(np.concatenate([x, y, z]))
It can also be used for two-dimensional arrays:
grid = np.array([[1, 2, 3],
                 [4, 5, 6]])

# concatenate along the first axis
np.concatenate([grid, grid])

# concatenate along the second axis (zero-indexed)
np.concatenate([grid, grid], axis=1)
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
                 [6, 5, 4]])

# vertically stack the arrays
np.vstack([x, grid])

# horizontally stack the arrays
y = np.array([[99],
              [99]])
np.hstack([grid, y])
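For completeness, a third-axis analogue exists as well; here is a minimal ``np.dstack`` example on small made-up arrays (not part of the original text):

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(np.dstack([a, b]).shape)  # (2, 2, 2): stacked along a new third axis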
As noted above, ``np.dstack`` will stack arrays along the third axis.

Splitting of arrays

The opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points:
x = [1, 2, 3, 99, 99, 3, 2, 1] x1, x2, x3 = np.split(x, [3, 5]) print(x1, x2, x3)
_____no_output_____
MIT
OOP/Practice Sessions/Python_S4_Basics_Of_NumPy_Arrays.ipynb
siddhantdixit/OOP-ClassWork
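Since ``np.dstack`` is only mentioned in passing above, here is a small sketch of what stacking along the third axis looks like for two $2 \times 2$ arrays:

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

stacked = np.dstack([a, b])   # stack along a new third axis
print(stacked.shape)          # (2, 2, 2)
print(stacked[:, :, 0])       # slicing along that axis recovers 'a'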
Notice that *N* split points lead to *N + 1* subarrays. The related functions ``np.hsplit`` and ``np.vsplit`` are similar:
grid = np.arange(16).reshape((4, 4)) grid upper, lower = np.vsplit(grid, [2]) print(upper) print(lower) left, right = np.hsplit(grid, [2]) print(left) print(right)
_____no_output_____
MIT
OOP/Practice Sessions/Python_S4_Basics_Of_NumPy_Arrays.ipynb
siddhantdixit/OOP-ClassWork
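These splitting functions also accept a single integer instead of a list of indices, in which case the array is divided into that many equal-sized sections (``np.split`` raises an error if the length is not evenly divisible; ``np.array_split`` relaxes that requirement). A quick sketch:

import numpy as np

x = np.arange(8)
a, b, c, d = np.split(x, 4)   # four equal sections of length 2
print(a, b, c, d)             # [0 1] [2 3] [4 5] [6 7]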
Scraping and Parsing: EAD XML Finding Aids from the Library of Congress
import os from urllib.request import urlopen from bs4 import BeautifulSoup import subprocess ## Creating a directory called 'LOC_Metadata' and setting it as our current working directory !mkdir /sharedfolder/LOC_Metadata os.chdir('/sharedfolder/LOC_Metadata') ## To make this notebook self-contained, we'll download a list of XML finding aid files the 'right' way. ## (In practice I normally use the 'find-and-replace + grep + wget' approach we covered in class, ## because it takes some extra effort to remind myself how to parse the HTML page via BeautifulSoup.) ## We first load a page with links to finding aids in the 'recorded sound' collection. finding_aid_list_url = 'http://findingaids.loc.gov/source/RS' finding_aid_list_page = urlopen(finding_aid_list_url).read().decode('utf8') # Loading the page print(finding_aid_list_page[:700]) # Printing the first 700 characters in the page we just loaded ## Now we'll parse the page's HTML using BeautifulSoup ... soup = BeautifulSoup(finding_aid_list_page, 'lxml') ## ... and examine soup.find_all('a'), which returns a list of 'a' elements (i.e., HTML links). print(len(soup.find_all('a'))) # Checking the number of links on the page print() # Printing a blank line for readability print(soup.find_all('a')[70]) # Printing element #70 in the list ## We can access the 'href' attribute of an element (i.e., the link URL) using 'href' in ## brackets, just like a dictionary. soup.find_all('a')[70]['href'] ## Now let's make a list of every link on the page. all_links = [] for element in soup.find_all('a'): # Looping through all 'a' elements. try: # Because some 'a' elements do not contain 'href' attributes, all_links.append(element['href']) ## we can use a try/except statement to skip elements that except: ## would otherwise raise an error. pass all_links[:15] # Outputting the first 15 links in the list ## We know that the URL for every XML file we're looking for ends in '.2', so we can ## use that fact to filter out irrelevant links. xml_urls = [] for link in all_links: if link[-2:] == '.2': # Checking whether the last two characters of a link are '.2' xml_urls.append(link) xml_urls # Outputting the full list of relevant XML URLs ## Downloading each XML file in our list of URLs ## We can use the subprocess module (which we imported above) to issue commands in the bash shell. ## In an interactive bash shell session we'd use spaces to separate arguments; instead, subprocess ## takes arguments in the form of a Python list. ## For each item in our list, the following issues a command with two arguments: 'wget' followed by the URL. ## It thus downloads each XML file to the current directory. for url in xml_urls: subprocess.call(['wget', url]) ## Outputting a list of filenames in the current directory ## In Unix-like operating systems, './' always refers to the current directory. os.listdir('./') ## Just in case there are other files in the current directory, we can use a ## list comprehension to create a list of filenames that end in '.2' and assign ## it to the variable 'xml_filenames'. xml_filenames = [item for item in os.listdir('./') if item[-2:]=='.2'] xml_filenames ## Now let's choose an arbitrary XML file in our collection so we can figure out how to parse it. 
xml_filename = xml_filenames[4] ## Selecting filename #4 in our list xml_text = open(xml_filename).read() ## Reading the file and assigning its content to the variable 'xml_text' print(xml_text[:700]) ## Printing the first 700 characters in the XML text we just loaded ## Parse the XML text from the previous cell using Beautiful Soup soup = BeautifulSoup(xml_text, 'lxml') ## By looking at the XML text above, we can see that the 'ead' element is the root of our XML tree. ## Let's use a for loop to look at the names of elements one level down in the tree. for element in soup.ead: print(element.name) ## In practice you'd usually just look through the XML file by eye, identify the elements ## you're looking for, and use soup.find_all('...') to extract them. For now, let's continue ## working down the XML tree with BeautifulSoup. # You can find a glossary of EAD element names here: # https://loc.gov/ead/EAD3taglib/index.html ## Since the 'eadheader' element is administrative metadata we don't care about, let's ## repeat the process for 'soup.ead.archdesc' ('archdesc' is 'archival description' in EAD parlance). for element in soup.ead.archdesc: if element.name is not None: ## Filtering out 'None' elements, which in this case are irrelevant comments print(element.name) ## By looking at the XML file in a text editor, I notice the 'did' element ('descriptive identification') ## contains the item-level information we're looking for. Let's run another for loop to look at the ## names of elements contained within each 'did' element. for element in soup.ead.archdesc.did: if element.name is not None: print(element.name) ## Note that 'soup.ead.archdesc.did' only refers to the first 'did' element in the XML document. ## OK, that's enough exploring. Let's use soup.find_all() to create a list of 'did' elements. did_elements = soup.find_all('did') print(len(did_elements)) ## Printing the number of 'did' elements in our list print() print(did_elements[4]) ## Printing item #4 in the list ## Not every 'did' element contains the same fields; different objects are described differently. ## Try running this cell several times, plugging in other index numbers to compare the way ## different items' records are formatted. print(did_elements[7]) ## If you run the cell above several times with different index numbers, you'll notice that the ## first item in the list (index 0) refers to the entire box of records, while the others are ## individual folders or series of folders. ## To make things more complicated, some items are physically described using 'container' elements ## while others use 'extent' instead. Most appear to include 'unittitle' and 'unitdate'. ## Our goal is to create a CSV that contains a basic description of each 'unit', or 'did' element, ## in each XML finding aid. For the purposes of this exercise, let's include the following pieces ## of information for each unit, where available: #### title of the source collection #### unittitle #### unitdate #### container type #### container number #### extent ## Since each XML finding aid represents a single collection, we'll want to include a column that ## identifies which collection it comes from. By reading through the XML files, we see that each ## has a single element called 'titleproper' that describes the whole collection. ## Let's create a recipe to extract that text. Here's a first try: collection_title = soup.find('titleproper').get_text() collection_title ## That format is OK, but we should remove the tab and newline characters. 
Let's try again, using ## the replace() function to replace them with spaces. collection_title = soup.find('titleproper').get_text().replace('\t', ' ').replace('\n', ' ') collection_title ## We can add the strip() function to remove the space at the end of the string. collection_title = soup.find('titleproper').get_text().replace('\t', ' ').replace('\n', ' ').strip() collection_title ## We still have a series of spaces in a row in the middle of the string. We can use a 'while loop' ## to repeatedly replace any occurrence of '  ' (two spaces) with ' ' (one space). collection_title = soup.find('titleproper').get_text().replace('\t', ' ').replace('\n', ' ').strip() while '  ' in collection_title: collection_title = collection_title.replace('  ', ' ') collection_title ## Perfect. We'll extract the collection name whenever we open an XML finding aid and include it ## in each CSV row associated with that collection. ## Now on to 'unittitle'. Recall that we created a list of 'did' elements above, called 'did_elements'. element = did_elements[4] unittitle = element.find('unittitle').get_text() unittitle ## Since those tabs and newlines are a recurring problem, we should define a function that ## removes them from any given text string. def clean_text(text): temp_text = text.replace('\t', ' ').replace('\n', ' ').strip() while '  ' in temp_text: temp_text = temp_text.replace('  ', ' ') return temp_text # Let's test our clean_text() function. element = did_elements[4] unittitle = element.find('unittitle').get_text() unittitle = clean_text(unittitle) unittitle ## Now let's try extracting the 'unittitle' field for each 'did' element in our list. for element in did_elements: unittitle = element.get_text().replace('\t', ' ').replace('\n', ' ').strip() print(clean_text(unittitle)) print('-----------------') # Printing a divider between elements ## The first element in the list above contains more information than we need, but we can ## let that slide for this exercise. ## Next is 'unitdate'. We'll use our clean_text() function once again. element = did_elements[4] unitdate = element.find('unitdate').get_text() unitdate = clean_text(unitdate) unitdate ## Let's loop through the list of 'did' elements and see if our 'unitdate' recipe holds up. for element in did_elements: unitdate = element.find('unitdate').get_text() print(clean_text(unitdate)) print('-----------------') # Printing a divider between elements ## Now on to container type and number. Let's examine a 'container' XML element. element = did_elements[4] element.find('container') ## Since the container type ('folder', in this case) is an attribute in the 'container' tag, ## we can extract it using bracket notation. element = did_elements[4] container_type = element.find('container')['type'] container_type ## The container number is specified between the opening and closing 'container' tags, ## so we can get it using get_text(). element = did_elements[4] container_number = element.find('container').get_text() container_number ## Next we'll try to get the container type and number for each 'did' element in our list ... for element in did_elements: container_type = element.find('container')['type'] print(container_type) container_number = element.find('container').get_text() print(container_number) print('-----------------') # Printing a divider between elements ## ... and we get an error. The reason is that some 'did' elements don't include a 'container' field. 
## Using try/except notation, whenever we get an error because a container element isn't found, ## we can revert to '' (an empty string) instead. for element in did_elements: try: container_type = element.find('container')['type'] except: container_type = '' print(container_type) try: container_number = element.find('container').get_text() except: container_number = '' print(container_number) print('-----------------') # Printing a divider between elements ## The last field we'll extract is 'extent', which is only included in a handful of 'did' elements. element = did_elements[3] extent = element.find('extent').get_text() extent ## Let's extract 'extent' from each element in our list of 'did' elements (for those that happen to include it). for element in did_elements: try: extent = element.find('extent').get_text() except: extent = '' print(extent) print('-----------------') # Printing a divider between elements ## Let's put it all together and view our chosen fields for a single 'did' element. ## We will combine our fields in a list to create a 'row' for our future CSV file. element = did_elements[6] # unittitle try: # Added try/except statements for 'unittitle' and 'unitdate' just to be safe unittitle = clean_text(element.find('unittitle').get_text()) except: unittitle = '' # unitdate try: unitdate = clean_text(element.find('unitdate').get_text()) except: unitdate = '' # container type and number try: container_type = element.find('container')['type'] except: container_type = '' try: container_number = element.find('container').get_text() except: container_number = '' # extent try: extent = element.find('extent').get_text() except: extent = '' row = [unittitle, unitdate, container_type, container_number, extent] print(row) ## Let's take a step back and generalize, so that we can extract metadata for each ## 'did' element in a single XML file. ## We will also include the 'collection title' field ('titleproper' in EAD's vocabulary) as ## the first item in each row. xml_filename = xml_filenames[3] # <-- Change the index number there to run the script on another XML file in the list. xml_text = open(xml_filename).read() soup = BeautifulSoup(xml_text, 'lxml') list_of_lists = [] # Creating an empty list, which we will use to hold our rows (each row represented as a list) try: collection_title = clean_text(soup.find('titleproper').get_text()) except: collection_title = xml_filename # If the 'titleproper' field is missing for some reason, ## we'll use the XML filename instead. for element in soup.find_all('did'): # unittitle try: unittitle = clean_text(element.find('unittitle').get_text()) except: unittitle = '' # unitdate try: unitdate = clean_text(element.find('unitdate').get_text()) except: unitdate = '' # container type and number try: container_type = element.find('container')['type'] except: container_type = '' try: container_number = element.find('container').get_text() except: container_number = '' # extent try: extent = element.find('extent').get_text() except: extent = '' row = [collection_title, unittitle, unitdate, container_type, container_number, extent] list_of_lists.append(row) ## Adding the row list we defined in the previous line to 'list_of_lists' list_of_lists[:15] ## Outputting the first 15 rows in our list of lists ## Almost there! Next we'll run the script above on each XML file in our list, creating a ## master list of lists that we'll write to disk as a CSV in the next cell. 
## Let's begin by re-loading our list of XML filenames: os.chdir('/sharedfolder/LOC_Metadata') xml_filenames = [item for item in os.listdir('./') if item[-2:]=='.2'] # Creating a list of XML filenames list_of_lists = [] # Creating an empty list ## Now we'll extract metadata from the full batch of XML files. This may take a few moments to complete. for xml_filename in xml_filenames: xml_text = open(xml_filename).read() soup = BeautifulSoup(xml_text, 'lxml') try: collection_title = clean_text(soup.find('titleproper').get_text()) except: collection_title = xml_filename # If the 'titleproper' field is missing for some reason, ## we'll use the XML filename instead. for element in soup.find_all('did'): # unittitle try: unittitle = clean_text(element.find('unittitle').get_text()) except: unittitle = '' # unitdate try: unitdate = clean_text(element.find('unitdate').get_text()) except: unitdate = '' # container type and number try: container_type = element.find('container')['type'] except: container_type = '' try: container_number = element.find('container').get_text() except: container_number = '' # extent try: extent = element.find('extent').get_text() except: extent = '' row = [collection_title, unittitle, unitdate, container_type, container_number, extent] list_of_lists.append(row) print(len(list_of_lists)) ## Printing the number of rows in our table ## Finally, we write the extracted metadata to disk as a CSV called 'LOC_RS_Reduced_Metadata.csv' out_path = "./LOC_RS_Reduced_Metadata.csv" # The './' part is optional; it just means we're writing to # the current working directory. # Defining a list of column headers, which we will write as the first row in our CSV column_headers = ['Collection Title', 'Unit Title', 'Unit Date', 'Container Type', 'Container Number', 'Extent'] import csv # Importing Python's built-in CSV input/output package with open(out_path, 'w') as fo: # Creating a temporary file stream object called 'fo' (my abbreviation for 'file out') csv_writer = csv.writer(fo) # Initializing our CSV writer csv_writer.writerow(column_headers) # Writing one row (our column headers) csv_writer.writerows(list_of_lists) # Writing a list of lists as a sequence of rows ## Go to 'sharedfolder' on your desktop and use LibreOffice or Excel to open your new CSV. ## As you scroll through the CSV file, you will probably see more formatting oddities you can fix ## by tweaking the code above.
_____no_output_____
CC0-1.0
Week-06_Scraping-and-Parsing-XML.ipynb
pcda17/pcda
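The repeated try/except blocks above can be collapsed into a single helper. Here is a sketch of that refactoring -- the name ``safe_extract`` is my own, not part of the original notebook -- which returns an empty string whenever a tag or attribute is missing:

def safe_extract(element, tag, attribute=None):
    ## Return the cleaned text of a child tag (or one of its attributes), or '' if absent.
    try:
        found = element.find(tag)
        if attribute is not None:
            return found[attribute]           # e.g., the 'type' attribute of 'container'
        return clean_text(found.get_text())   # reuses the clean_text() function defined above
    except (AttributeError, KeyError, TypeError):
        return ''

## Usage, mirroring the row-building code above:
## row = [collection_title,
##        safe_extract(element, 'unittitle'),
##        safe_extract(element, 'unitdate'),
##        safe_extract(element, 'container', attribute='type'),
##        safe_extract(element, 'container'),
##        safe_extract(element, 'extent')]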
Dependencies
import warnings, glob from tensorflow.keras import Sequential, Model from cassava_scripts import * seed = 0 seed_everything(seed) warnings.filterwarnings('ignore')
_____no_output_____
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Hardware configuration
# TPU or GPU detection # Detect hardware, return appropriate distribution strategy strategy, tpu = set_up_strategy() AUTO = tf.data.experimental.AUTOTUNE REPLICAS = strategy.num_replicas_in_sync print(f'REPLICAS: {REPLICAS}')
REPLICAS: 1
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Model parameters
BATCH_SIZE = 8 * REPLICAS HEIGHT = 380 WIDTH = 380 CHANNELS = 3 N_CLASSES = 5 TTA_STEPS = 0 # Do TTA if > 0
_____no_output_____
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Augmentation
def data_augment(image, label): p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32) # Flips image = tf.image.random_flip_left_right(image) image = tf.image.random_flip_up_down(image) if p_spatial > .75: image = tf.image.transpose(image) return image, label
_____no_output_____
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Auxiliary functions
# Datasets utility functions def resize_image(image, label): image = tf.image.resize(image, [HEIGHT, WIDTH]) image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS]) return image, label def process_path(file_path): name = get_name(file_path) img = tf.io.read_file(file_path) img = decode_image(img) # img, _ = scale_image(img, None) # img = center_crop(img, HEIGHT, WIDTH) return img, name def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'): dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled) dataset = dataset.map(process_path, num_parallel_calls=AUTO) if tta: dataset = dataset.map(data_augment, num_parallel_calls=AUTO) dataset = dataset.map(resize_image, num_parallel_calls=AUTO) dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch(AUTO) return dataset
_____no_output_____
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
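Putting ``get_dataset`` together with the ``TTA_STEPS`` parameter defined earlier, test-time augmentation amounts to predicting over several augmented passes and averaging the softmax outputs. A minimal sketch -- ``model`` and ``files_path`` are placeholders here, and this loop is an illustration rather than the notebook's exact inference code:

import numpy as np

def predict_with_tta(model, files_path, tta_steps):
    # With tta=True, get_dataset() applies data_augment() before resizing, so each
    # pass sees differently flipped/transposed images; file order stays deterministic
    # because list_files() is called with shuffle=False by default.
    preds = []
    for _ in range(max(tta_steps, 1)):
        dataset = get_dataset(files_path, tta=(tta_steps > 0))
        images_only = dataset.map(lambda image, name: image)  # predict() needs images only
        preds.append(model.predict(images_only))
    return np.mean(preds, axis=0)  # average class probabilities across passes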
Load data
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/' submission = pd.read_csv(f'{database_base_path}sample_submission.csv') display(submission.head()) TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec') NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES) print(f'GCS: test: {NUM_TEST_IMAGES}') !ls /kaggle/input/ model_path_list = glob.glob('/kaggle/input/162-cassava-leaf-effnetb4-dcr-04-380x380/*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep='\n')
Models to predict: /kaggle/input/162-cassava-leaf-effnetb4-dcr-04-380x380/model_0.h5
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
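With the checkpoint list above, the usual inference pattern is to build the model once, load each set of weights in turn, and average the predicted probabilities. A hedged sketch, assuming each ``.h5`` file holds weights compatible with the ``model_fn`` architecture defined in the next cell:

import numpy as np

def ensemble_predict(model, model_path_list, dataset):
    # Average the class probabilities predicted under each saved checkpoint,
    # then take the argmax per image to get the final label.
    images_only = dataset.map(lambda image, name: image)
    all_preds = []
    for model_path in model_path_list:
        model.load_weights(model_path)          # swap in this checkpoint's weights
        all_preds.append(model.predict(images_only))
    mean_probs = np.mean(all_preds, axis=0)
    return np.argmax(mean_probs, axis=-1)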
Model
def model_fn(input_shape, N_CLASSES): inputs = L.Input(shape=input_shape, name='input_image') base_model = tf.keras.applications.EfficientNetB4(input_tensor=inputs, include_top=False, drop_connect_rate=.4, weights=None) x = L.GlobalAveragePooling2D()(base_model.output) x = L.Dropout(.5)(x) output = L.Dense(N_CLASSES, activation='softmax', name='output')(x) model = Model(inputs=inputs, outputs=output) return model with strategy.scope(): model = model_fn((None, None, CHANNELS), N_CLASSES) model.summary()
Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_image (InputLayer) [(None, None, None, 0 __________________________________________________________________________________________________ rescaling (Rescaling) (None, None, None, 3 0 input_image[0][0] __________________________________________________________________________________________________ normalization (Normalization) (None, None, None, 3 7 rescaling[0][0] __________________________________________________________________________________________________ stem_conv_pad (ZeroPadding2D) (None, None, None, 3 0 normalization[0][0] __________________________________________________________________________________________________ stem_conv (Conv2D) (None, None, None, 4 1296 stem_conv_pad[0][0] __________________________________________________________________________________________________ stem_bn (BatchNormalization) (None, None, None, 4 192 stem_conv[0][0] __________________________________________________________________________________________________ stem_activation (Activation) (None, None, None, 4 0 stem_bn[0][0] __________________________________________________________________________________________________ block1a_dwconv (DepthwiseConv2D (None, None, None, 4 432 stem_activation[0][0] __________________________________________________________________________________________________ block1a_bn (BatchNormalization) (None, None, None, 4 192 block1a_dwconv[0][0] __________________________________________________________________________________________________ block1a_activation (Activation) (None, None, None, 4 0 block1a_bn[0][0] __________________________________________________________________________________________________ block1a_se_squeeze (GlobalAvera (None, 48) 0 block1a_activation[0][0] __________________________________________________________________________________________________ block1a_se_reshape (Reshape) (None, 1, 1, 48) 0 block1a_se_squeeze[0][0] __________________________________________________________________________________________________ block1a_se_reduce (Conv2D) (None, 1, 1, 12) 588 block1a_se_reshape[0][0] __________________________________________________________________________________________________ block1a_se_expand (Conv2D) (None, 1, 1, 48) 624 block1a_se_reduce[0][0] __________________________________________________________________________________________________ block1a_se_excite (Multiply) (None, None, None, 4 0 block1a_activation[0][0] block1a_se_expand[0][0] __________________________________________________________________________________________________ block1a_project_conv (Conv2D) (None, None, None, 2 1152 block1a_se_excite[0][0] __________________________________________________________________________________________________ block1a_project_bn (BatchNormal (None, None, None, 2 96 block1a_project_conv[0][0] __________________________________________________________________________________________________ block1b_dwconv (DepthwiseConv2D (None, None, None, 2 216 block1a_project_bn[0][0] __________________________________________________________________________________________________ block1b_bn (BatchNormalization) (None, None, None, 2 96 block1b_dwconv[0][0] __________________________________________________________________________________________________ block1b_activation 
(Activation) (None, None, None, 2 0 block1b_bn[0][0] __________________________________________________________________________________________________ block1b_se_squeeze (GlobalAvera (None, 24) 0 block1b_activation[0][0] __________________________________________________________________________________________________ block1b_se_reshape (Reshape) (None, 1, 1, 24) 0 block1b_se_squeeze[0][0] __________________________________________________________________________________________________ block1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 block1b_se_reshape[0][0] __________________________________________________________________________________________________ block1b_se_expand (Conv2D) (None, 1, 1, 24) 168 block1b_se_reduce[0][0] __________________________________________________________________________________________________ block1b_se_excite (Multiply) (None, None, None, 2 0 block1b_activation[0][0] block1b_se_expand[0][0] __________________________________________________________________________________________________ block1b_project_conv (Conv2D) (None, None, None, 2 576 block1b_se_excite[0][0] __________________________________________________________________________________________________ block1b_project_bn (BatchNormal (None, None, None, 2 96 block1b_project_conv[0][0] __________________________________________________________________________________________________ block1b_drop (Dropout) (None, None, None, 2 0 block1b_project_bn[0][0] __________________________________________________________________________________________________ block1b_add (Add) (None, None, None, 2 0 block1b_drop[0][0] block1a_project_bn[0][0] __________________________________________________________________________________________________ block2a_expand_conv (Conv2D) (None, None, None, 1 3456 block1b_add[0][0] __________________________________________________________________________________________________ block2a_expand_bn (BatchNormali (None, None, None, 1 576 block2a_expand_conv[0][0] __________________________________________________________________________________________________ block2a_expand_activation (Acti (None, None, None, 1 0 block2a_expand_bn[0][0] __________________________________________________________________________________________________ block2a_dwconv_pad (ZeroPadding (None, None, None, 1 0 block2a_expand_activation[0][0] __________________________________________________________________________________________________ block2a_dwconv (DepthwiseConv2D (None, None, None, 1 1296 block2a_dwconv_pad[0][0] __________________________________________________________________________________________________ block2a_bn (BatchNormalization) (None, None, None, 1 576 block2a_dwconv[0][0] __________________________________________________________________________________________________ block2a_activation (Activation) (None, None, None, 1 0 block2a_bn[0][0] __________________________________________________________________________________________________ block2a_se_squeeze (GlobalAvera (None, 144) 0 block2a_activation[0][0] __________________________________________________________________________________________________ block2a_se_reshape (Reshape) (None, 1, 1, 144) 0 block2a_se_squeeze[0][0] __________________________________________________________________________________________________ block2a_se_reduce (Conv2D) (None, 1, 1, 6) 870 block2a_se_reshape[0][0] __________________________________________________________________________________________________ block2a_se_expand (Conv2D) (None, 1, 1, 
144) 1008 block2a_se_reduce[0][0] __________________________________________________________________________________________________ block2a_se_excite (Multiply) (None, None, None, 1 0 block2a_activation[0][0] block2a_se_expand[0][0] __________________________________________________________________________________________________ block2a_project_conv (Conv2D) (None, None, None, 3 4608 block2a_se_excite[0][0] __________________________________________________________________________________________________ block2a_project_bn (BatchNormal (None, None, None, 3 128 block2a_project_conv[0][0] __________________________________________________________________________________________________ block2b_expand_conv (Conv2D) (None, None, None, 1 6144 block2a_project_bn[0][0] __________________________________________________________________________________________________ block2b_expand_bn (BatchNormali (None, None, None, 1 768 block2b_expand_conv[0][0] __________________________________________________________________________________________________ block2b_expand_activation (Acti (None, None, None, 1 0 block2b_expand_bn[0][0] __________________________________________________________________________________________________ block2b_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2b_expand_activation[0][0] __________________________________________________________________________________________________ block2b_bn (BatchNormalization) (None, None, None, 1 768 block2b_dwconv[0][0] __________________________________________________________________________________________________ block2b_activation (Activation) (None, None, None, 1 0 block2b_bn[0][0] __________________________________________________________________________________________________ block2b_se_squeeze (GlobalAvera (None, 192) 0 block2b_activation[0][0] __________________________________________________________________________________________________ block2b_se_reshape (Reshape) (None, 1, 1, 192) 0 block2b_se_squeeze[0][0] __________________________________________________________________________________________________ block2b_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2b_se_reshape[0][0] __________________________________________________________________________________________________ block2b_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2b_se_reduce[0][0] __________________________________________________________________________________________________ block2b_se_excite (Multiply) (None, None, None, 1 0 block2b_activation[0][0] block2b_se_expand[0][0] __________________________________________________________________________________________________ block2b_project_conv (Conv2D) (None, None, None, 3 6144 block2b_se_excite[0][0] __________________________________________________________________________________________________ block2b_project_bn (BatchNormal (None, None, None, 3 128 block2b_project_conv[0][0] __________________________________________________________________________________________________ block2b_drop (Dropout) (None, None, None, 3 0 block2b_project_bn[0][0] __________________________________________________________________________________________________ block2b_add (Add) (None, None, None, 3 0 block2b_drop[0][0] block2a_project_bn[0][0] __________________________________________________________________________________________________ block2c_expand_conv (Conv2D) (None, None, None, 1 6144 block2b_add[0][0] __________________________________________________________________________________________________ 
block2c_expand_bn (BatchNormali (None, None, None, 1 768 block2c_expand_conv[0][0] __________________________________________________________________________________________________ block2c_expand_activation (Acti (None, None, None, 1 0 block2c_expand_bn[0][0] __________________________________________________________________________________________________ block2c_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2c_expand_activation[0][0] __________________________________________________________________________________________________ block2c_bn (BatchNormalization) (None, None, None, 1 768 block2c_dwconv[0][0] __________________________________________________________________________________________________ block2c_activation (Activation) (None, None, None, 1 0 block2c_bn[0][0] __________________________________________________________________________________________________ block2c_se_squeeze (GlobalAvera (None, 192) 0 block2c_activation[0][0] __________________________________________________________________________________________________ block2c_se_reshape (Reshape) (None, 1, 1, 192) 0 block2c_se_squeeze[0][0] __________________________________________________________________________________________________ block2c_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2c_se_reshape[0][0] __________________________________________________________________________________________________ block2c_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2c_se_reduce[0][0] __________________________________________________________________________________________________ block2c_se_excite (Multiply) (None, None, None, 1 0 block2c_activation[0][0] block2c_se_expand[0][0] __________________________________________________________________________________________________ block2c_project_conv (Conv2D) (None, None, None, 3 6144 block2c_se_excite[0][0] __________________________________________________________________________________________________ block2c_project_bn (BatchNormal (None, None, None, 3 128 block2c_project_conv[0][0] __________________________________________________________________________________________________ block2c_drop (Dropout) (None, None, None, 3 0 block2c_project_bn[0][0] __________________________________________________________________________________________________ block2c_add (Add) (None, None, None, 3 0 block2c_drop[0][0] block2b_add[0][0] __________________________________________________________________________________________________ block2d_expand_conv (Conv2D) (None, None, None, 1 6144 block2c_add[0][0] __________________________________________________________________________________________________ block2d_expand_bn (BatchNormali (None, None, None, 1 768 block2d_expand_conv[0][0] __________________________________________________________________________________________________ block2d_expand_activation (Acti (None, None, None, 1 0 block2d_expand_bn[0][0] __________________________________________________________________________________________________ block2d_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2d_expand_activation[0][0] __________________________________________________________________________________________________ block2d_bn (BatchNormalization) (None, None, None, 1 768 block2d_dwconv[0][0] __________________________________________________________________________________________________ block2d_activation (Activation) (None, None, None, 1 0 block2d_bn[0][0] 
__________________________________________________________________________________________________ block2d_se_squeeze (GlobalAvera (None, 192) 0 block2d_activation[0][0] __________________________________________________________________________________________________ block2d_se_reshape (Reshape) (None, 1, 1, 192) 0 block2d_se_squeeze[0][0] __________________________________________________________________________________________________ block2d_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2d_se_reshape[0][0] __________________________________________________________________________________________________ block2d_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2d_se_reduce[0][0] __________________________________________________________________________________________________ block2d_se_excite (Multiply) (None, None, None, 1 0 block2d_activation[0][0] block2d_se_expand[0][0] __________________________________________________________________________________________________ block2d_project_conv (Conv2D) (None, None, None, 3 6144 block2d_se_excite[0][0] __________________________________________________________________________________________________ block2d_project_bn (BatchNormal (None, None, None, 3 128 block2d_project_conv[0][0] __________________________________________________________________________________________________ block2d_drop (Dropout) (None, None, None, 3 0 block2d_project_bn[0][0] __________________________________________________________________________________________________ block2d_add (Add) (None, None, None, 3 0 block2d_drop[0][0] block2c_add[0][0] __________________________________________________________________________________________________ block3a_expand_conv (Conv2D) (None, None, None, 1 6144 block2d_add[0][0] __________________________________________________________________________________________________ block3a_expand_bn (BatchNormali (None, None, None, 1 768 block3a_expand_conv[0][0] __________________________________________________________________________________________________ block3a_expand_activation (Acti (None, None, None, 1 0 block3a_expand_bn[0][0] __________________________________________________________________________________________________ block3a_dwconv_pad (ZeroPadding (None, None, None, 1 0 block3a_expand_activation[0][0] __________________________________________________________________________________________________ block3a_dwconv (DepthwiseConv2D (None, None, None, 1 4800 block3a_dwconv_pad[0][0] __________________________________________________________________________________________________ block3a_bn (BatchNormalization) (None, None, None, 1 768 block3a_dwconv[0][0] __________________________________________________________________________________________________ block3a_activation (Activation) (None, None, None, 1 0 block3a_bn[0][0] __________________________________________________________________________________________________ block3a_se_squeeze (GlobalAvera (None, 192) 0 block3a_activation[0][0] __________________________________________________________________________________________________ block3a_se_reshape (Reshape) (None, 1, 1, 192) 0 block3a_se_squeeze[0][0] __________________________________________________________________________________________________ block3a_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block3a_se_reshape[0][0] __________________________________________________________________________________________________ block3a_se_expand (Conv2D) (None, 1, 1, 192) 1728 block3a_se_reduce[0][0] 
__________________________________________________________________________________________________ block3a_se_excite (Multiply) (None, None, None, 1 0 block3a_activation[0][0] block3a_se_expand[0][0] __________________________________________________________________________________________________ block3a_project_conv (Conv2D) (None, None, None, 5 10752 block3a_se_excite[0][0] __________________________________________________________________________________________________ block3a_project_bn (BatchNormal (None, None, None, 5 224 block3a_project_conv[0][0] __________________________________________________________________________________________________ block3b_expand_conv (Conv2D) (None, None, None, 3 18816 block3a_project_bn[0][0] __________________________________________________________________________________________________ block3b_expand_bn (BatchNormali (None, None, None, 3 1344 block3b_expand_conv[0][0] __________________________________________________________________________________________________ block3b_expand_activation (Acti (None, None, None, 3 0 block3b_expand_bn[0][0] __________________________________________________________________________________________________ block3b_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3b_expand_activation[0][0] __________________________________________________________________________________________________ block3b_bn (BatchNormalization) (None, None, None, 3 1344 block3b_dwconv[0][0] __________________________________________________________________________________________________ block3b_activation (Activation) (None, None, None, 3 0 block3b_bn[0][0] __________________________________________________________________________________________________ block3b_se_squeeze (GlobalAvera (None, 336) 0 block3b_activation[0][0] __________________________________________________________________________________________________ block3b_se_reshape (Reshape) (None, 1, 1, 336) 0 block3b_se_squeeze[0][0] __________________________________________________________________________________________________ block3b_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3b_se_reshape[0][0] __________________________________________________________________________________________________ block3b_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3b_se_reduce[0][0] __________________________________________________________________________________________________ block3b_se_excite (Multiply) (None, None, None, 3 0 block3b_activation[0][0] block3b_se_expand[0][0] __________________________________________________________________________________________________ block3b_project_conv (Conv2D) (None, None, None, 5 18816 block3b_se_excite[0][0] __________________________________________________________________________________________________ block3b_project_bn (BatchNormal (None, None, None, 5 224 block3b_project_conv[0][0] __________________________________________________________________________________________________ block3b_drop (Dropout) (None, None, None, 5 0 block3b_project_bn[0][0] __________________________________________________________________________________________________ block3b_add (Add) (None, None, None, 5 0 block3b_drop[0][0] block3a_project_bn[0][0] __________________________________________________________________________________________________ block3c_expand_conv (Conv2D) (None, None, None, 3 18816 block3b_add[0][0] __________________________________________________________________________________________________ block3c_expand_bn (BatchNormali 
(None, None, None, 3 1344 block3c_expand_conv[0][0] __________________________________________________________________________________________________ block3c_expand_activation (Acti (None, None, None, 3 0 block3c_expand_bn[0][0] __________________________________________________________________________________________________ block3c_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3c_expand_activation[0][0] __________________________________________________________________________________________________ block3c_bn (BatchNormalization) (None, None, None, 3 1344 block3c_dwconv[0][0] __________________________________________________________________________________________________ block3c_activation (Activation) (None, None, None, 3 0 block3c_bn[0][0] __________________________________________________________________________________________________ block3c_se_squeeze (GlobalAvera (None, 336) 0 block3c_activation[0][0] __________________________________________________________________________________________________ block3c_se_reshape (Reshape) (None, 1, 1, 336) 0 block3c_se_squeeze[0][0] __________________________________________________________________________________________________ block3c_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3c_se_reshape[0][0] __________________________________________________________________________________________________ block3c_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3c_se_reduce[0][0] __________________________________________________________________________________________________ block3c_se_excite (Multiply) (None, None, None, 3 0 block3c_activation[0][0] block3c_se_expand[0][0] __________________________________________________________________________________________________ block3c_project_conv (Conv2D) (None, None, None, 5 18816 block3c_se_excite[0][0] __________________________________________________________________________________________________ block3c_project_bn (BatchNormal (None, None, None, 5 224 block3c_project_conv[0][0] __________________________________________________________________________________________________ block3c_drop (Dropout) (None, None, None, 5 0 block3c_project_bn[0][0] __________________________________________________________________________________________________ block3c_add (Add) (None, None, None, 5 0 block3c_drop[0][0] block3b_add[0][0] __________________________________________________________________________________________________ block3d_expand_conv (Conv2D) (None, None, None, 3 18816 block3c_add[0][0] __________________________________________________________________________________________________ block3d_expand_bn (BatchNormali (None, None, None, 3 1344 block3d_expand_conv[0][0] __________________________________________________________________________________________________ block3d_expand_activation (Acti (None, None, None, 3 0 block3d_expand_bn[0][0] __________________________________________________________________________________________________ block3d_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3d_expand_activation[0][0] __________________________________________________________________________________________________ block3d_bn (BatchNormalization) (None, None, None, 3 1344 block3d_dwconv[0][0] __________________________________________________________________________________________________ block3d_activation (Activation) (None, None, None, 3 0 block3d_bn[0][0] __________________________________________________________________________________________________ 
block3d_se_squeeze (GlobalAvera (None, 336) 0 block3d_activation[0][0] __________________________________________________________________________________________________ block3d_se_reshape (Reshape) (None, 1, 1, 336) 0 block3d_se_squeeze[0][0] __________________________________________________________________________________________________ block3d_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3d_se_reshape[0][0] __________________________________________________________________________________________________ block3d_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3d_se_reduce[0][0] __________________________________________________________________________________________________ block3d_se_excite (Multiply) (None, None, None, 3 0 block3d_activation[0][0] block3d_se_expand[0][0] __________________________________________________________________________________________________ block3d_project_conv (Conv2D) (None, None, None, 5 18816 block3d_se_excite[0][0] __________________________________________________________________________________________________ block3d_project_bn (BatchNormal (None, None, None, 5 224 block3d_project_conv[0][0] __________________________________________________________________________________________________ block3d_drop (Dropout) (None, None, None, 5 0 block3d_project_bn[0][0] __________________________________________________________________________________________________ block3d_add (Add) (None, None, None, 5 0 block3d_drop[0][0] block3c_add[0][0] __________________________________________________________________________________________________ block4a_expand_conv (Conv2D) (None, None, None, 3 18816 block3d_add[0][0] __________________________________________________________________________________________________ block4a_expand_bn (BatchNormali (None, None, None, 3 1344 block4a_expand_conv[0][0] __________________________________________________________________________________________________ block4a_expand_activation (Acti (None, None, None, 3 0 block4a_expand_bn[0][0] __________________________________________________________________________________________________ block4a_dwconv_pad (ZeroPadding (None, None, None, 3 0 block4a_expand_activation[0][0] __________________________________________________________________________________________________ block4a_dwconv (DepthwiseConv2D (None, None, None, 3 3024 block4a_dwconv_pad[0][0] __________________________________________________________________________________________________ block4a_bn (BatchNormalization) (None, None, None, 3 1344 block4a_dwconv[0][0] __________________________________________________________________________________________________ block4a_activation (Activation) (None, None, None, 3 0 block4a_bn[0][0] __________________________________________________________________________________________________ block4a_se_squeeze (GlobalAvera (None, 336) 0 block4a_activation[0][0] __________________________________________________________________________________________________ block4a_se_reshape (Reshape) (None, 1, 1, 336) 0 block4a_se_squeeze[0][0] __________________________________________________________________________________________________ block4a_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block4a_se_reshape[0][0] __________________________________________________________________________________________________ block4a_se_expand (Conv2D) (None, 1, 1, 336) 5040 block4a_se_reduce[0][0] __________________________________________________________________________________________________ 
block4a_se_excite (Multiply) (None, None, None, 3 0 block4a_activation[0][0] block4a_se_expand[0][0] __________________________________________________________________________________________________ block4a_project_conv (Conv2D) (None, None, None, 1 37632 block4a_se_excite[0][0] __________________________________________________________________________________________________ block4a_project_bn (BatchNormal (None, None, None, 1 448 block4a_project_conv[0][0] __________________________________________________________________________________________________ block4b_expand_conv (Conv2D) (None, None, None, 6 75264 block4a_project_bn[0][0] __________________________________________________________________________________________________ block4b_expand_bn (BatchNormali (None, None, None, 6 2688 block4b_expand_conv[0][0] __________________________________________________________________________________________________ block4b_expand_activation (Acti (None, None, None, 6 0 block4b_expand_bn[0][0] __________________________________________________________________________________________________ block4b_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4b_expand_activation[0][0] __________________________________________________________________________________________________ block4b_bn (BatchNormalization) (None, None, None, 6 2688 block4b_dwconv[0][0] __________________________________________________________________________________________________ block4b_activation (Activation) (None, None, None, 6 0 block4b_bn[0][0] __________________________________________________________________________________________________ block4b_se_squeeze (GlobalAvera (None, 672) 0 block4b_activation[0][0] __________________________________________________________________________________________________ block4b_se_reshape (Reshape) (None, 1, 1, 672) 0 block4b_se_squeeze[0][0] __________________________________________________________________________________________________ block4b_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4b_se_reshape[0][0] __________________________________________________________________________________________________ block4b_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4b_se_reduce[0][0] __________________________________________________________________________________________________ block4b_se_excite (Multiply) (None, None, None, 6 0 block4b_activation[0][0] block4b_se_expand[0][0] __________________________________________________________________________________________________ block4b_project_conv (Conv2D) (None, None, None, 1 75264 block4b_se_excite[0][0] __________________________________________________________________________________________________ block4b_project_bn (BatchNormal (None, None, None, 1 448 block4b_project_conv[0][0] __________________________________________________________________________________________________ block4b_drop (Dropout) (None, None, None, 1 0 block4b_project_bn[0][0] __________________________________________________________________________________________________ block4b_add (Add) (None, None, None, 1 0 block4b_drop[0][0] block4a_project_bn[0][0] __________________________________________________________________________________________________ block4c_expand_conv (Conv2D) (None, None, None, 6 75264 block4b_add[0][0] __________________________________________________________________________________________________ block4c_expand_bn (BatchNormali (None, None, None, 6 2688 block4c_expand_conv[0][0] 
__________________________________________________________________________________________________ block4c_expand_activation (Acti (None, None, None, 6 0 block4c_expand_bn[0][0] __________________________________________________________________________________________________ block4c_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4c_expand_activation[0][0] __________________________________________________________________________________________________ block4c_bn (BatchNormalization) (None, None, None, 6 2688 block4c_dwconv[0][0] __________________________________________________________________________________________________ block4c_activation (Activation) (None, None, None, 6 0 block4c_bn[0][0] __________________________________________________________________________________________________ block4c_se_squeeze (GlobalAvera (None, 672) 0 block4c_activation[0][0] __________________________________________________________________________________________________ block4c_se_reshape (Reshape) (None, 1, 1, 672) 0 block4c_se_squeeze[0][0] __________________________________________________________________________________________________ block4c_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4c_se_reshape[0][0] __________________________________________________________________________________________________ block4c_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4c_se_reduce[0][0] __________________________________________________________________________________________________ block4c_se_excite (Multiply) (None, None, None, 6 0 block4c_activation[0][0] block4c_se_expand[0][0] __________________________________________________________________________________________________ block4c_project_conv (Conv2D) (None, None, None, 1 75264 block4c_se_excite[0][0] __________________________________________________________________________________________________ block4c_project_bn (BatchNormal (None, None, None, 1 448 block4c_project_conv[0][0] __________________________________________________________________________________________________ block4c_drop (Dropout) (None, None, None, 1 0 block4c_project_bn[0][0] __________________________________________________________________________________________________ block4c_add (Add) (None, None, None, 1 0 block4c_drop[0][0] block4b_add[0][0] __________________________________________________________________________________________________ block4d_expand_conv (Conv2D) (None, None, None, 6 75264 block4c_add[0][0] __________________________________________________________________________________________________ block4d_expand_bn (BatchNormali (None, None, None, 6 2688 block4d_expand_conv[0][0] __________________________________________________________________________________________________ block4d_expand_activation (Acti (None, None, None, 6 0 block4d_expand_bn[0][0] __________________________________________________________________________________________________ block4d_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4d_expand_activation[0][0] __________________________________________________________________________________________________ block4d_bn (BatchNormalization) (None, None, None, 6 2688 block4d_dwconv[0][0] __________________________________________________________________________________________________ block4d_activation (Activation) (None, None, None, 6 0 block4d_bn[0][0] __________________________________________________________________________________________________ block4d_se_squeeze (GlobalAvera (None, 672) 0 
[Keras model summary for the EfficientNetB4-based classifier, blocks 4d through 7b elided: each block repeats the same expand Conv2D / DepthwiseConv2D / squeeze-and-excite / project Conv2D pattern with batch normalization, dropout, and residual Add layers]
top_conv (Conv2D)                   (None, None, None, 1792)  802816   block7b_add[0][0]
top_bn (BatchNormalization)         (None, None, None, 1792)  7168     top_conv[0][0]
top_activation (Activation)         (None, None, None, 1792)  0        top_bn[0][0]
global_average_pooling2d (GlobalAveragePooling2D) (None, 1792)  0      top_activation[0][0]
dropout (Dropout)                   (None, 1792)               0       global_average_pooling2d[0][0]
output (Dense)                      (None, 5)                  8965    dropout[0][0]
==================================================================================================
Total params: 17,682,788
Trainable params: 17,557,581
Non-trainable params: 125,207
__________________________________________________________________________________________________
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Test set predictions
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))

# Average predictions over every model checkpoint (and, optionally, TTA steps)
for model_path in model_path_list:
    print(model_path)
    K.clear_session()
    model.load_weights(model_path)

    if TTA_STEPS > 0:
        # Repeat the dataset so each image is seen once per TTA step
        test_ds = get_dataset(files_path, tta=True).repeat()
        ct_steps = TTA_STEPS * ((test_size / BATCH_SIZE) + 1)
        preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
        # Collapse the TTA axis by averaging the predictions per image
        preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
        test_preds += preds / len(model_path_list)
    else:
        test_ds = get_dataset(files_path, tta=False)
        x_test = test_ds.map(lambda image, image_name: image)
        test_preds += model.predict(x_test) / len(model_path_list)

# Final label is the argmax of the ensemble-averaged probabilities
test_preds = np.argmax(test_preds, axis=-1)

test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]

submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
_____no_output_____
MIT
Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb
dimitreOliveira/Cassava-Leaf-Disease-Classification
Strings
Creating a String
To create a string in Python, you can use single or double quotes.
# A single word
'Hello'

# A sentence
'this is a string in Python'

# Using double quotes
"testing double quotes"

# Combining quote styles: one kind of quote nested inside the other
"we can use either kind of quotes, e.g. 'python' inside double quotes"
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
Printing a String
print('Printing a string')
print('testing \nstrings \nin \nPython')
print('\n')
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
Indexing Strings
# Assigning a string
s = 'Data Science Academy'
print(s)

# First element of the string
s[0]
s[1]
s[2]
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
We can use : to perform slicing, which reads everything up to a designated point.
# Return the elements of the string, starting at a given position
s[1:]

# The string itself remains unchanged
s

# Return everything up to (but not including) the given position
s[:3]

# Return a specific slice of characters
s[2:6]

s[:]

# Negative indexing reads from the end of the string;
# fetch only the given position
s[-2]

# Return everything except the last character
s[:-1]
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
We can use index notation to slice the string into specific pieces.
s[::1]   # Step of 1: every character
s[::2]   # Step of 2: every other character
s[::-1]  # Negative step: the string reversed
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
String properties
s

# Changing a character is not allowed: strings are immutable (this raises a TypeError)
s[0] = 'x'

# Concatenating strings
s + ' is the best'
print(s)

s = s + ' is the best'
print(s)

# We can use the multiplication operator to create repetition
letra = 'W'
letra * 3
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
Built-in string functions
s

# Upper case
s.upper()

# Lower case
s.lower()

# Split a string on whitespace (the default)
s.split()

# Split on a specific character
s.split('y')
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
String functions
s = 'hello! Welcome to the Python universe'

s.capitalize()     # Capitalize the first character
s.count('a')       # Count occurrences of 'a'
s.find('p')        # Index of the first 'p' (-1 if absent)
s.center(20, 'z')  # Center the string in a field of width 20, padded with 'z'
s.isalnum()        # True if all characters are alphanumeric
s.islower()        # True if all cased characters are lower case
s.isspace()        # True if the string contains only whitespace
s.endswith('o')    # True if the string ends with 'o'
s.partition('!')   # Split into (head, separator, tail) at the first '!'
_____no_output_____
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
Comparing Strings
print ("Python" == "R") print ("Python" == "Python")
True
MIT
02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb
alineAssuncao/Python_Fundamentos_Analise_Dados
This notebook deals with banks of cylinders in a cross flow. Cylinder banks are common heat exchangers where the cylinders may be heated by electricity, or a fluid may flow within the cylinders to cool or heat the flow around them. The advantage of cylinder banks is the increased mixing in the fluid, so the temperature downstream of the bank is likely to be quite homogeneous. The arrangement of cylinders may be aligned or staggered as shown in the following figures. The flow and geometrical parameters will be used in the derivation of the temperature equations and the Nusselt number correlation. This notebook should cover a wide variety of problems, provided that the assumption of isothermal boundary conditions on the tubes is (approximately) valid. The tube surface temperature is $T_s$.

The flow and geometrical parameters of importance to this problem are:
* Arithmetic mean of the temperatures at the inlet $T_i$ and outlet $T_o$ of the bank:
$$T_m = \frac{T_i+T_o}{2}$$
* Reynolds number based on the maximum velocity within the bank $V_\text{max}$, with density and viscosity based on $T_m$:
$$Re=\frac{\rho V_\text{max}D}{\mu}$$
**Question: At what temperature should you estimate $\rho$ and $\mu$?** The energy of the flow comes from the inlet and the velocity $V_\mathrm{max}$ is calculated from the inlet velocity, so the density should be estimated at $T_i$. The viscous forces, however, occur throughout the domain, so $\mu$ should be estimated at $T_m$. In some cases $T_o$ is the quantity to be found. It is then acceptable to use $\mu(T_i)$, but you must verify that the temperature difference $\Delta T=\vert T_i-T_o\vert$ is not too large. If it is, you must repeat the calculation iteratively with an updated $\mu$ until $T_o$ converges.
* Prandtl number $Pr$ based on $T_m$
* Surface Prandtl number $Pr_s$ based on $T_s$
* Number of tubes in the transversal direction $N_T$, in the longitudinal direction $N_L$, and in total, $N=N_T\times N_L$
* The transversal $S_T$ and longitudinal $S_L$ separations between tubes in a row and between rows
* The type of tube arrangement:
  * Aligned: $$V_\text{max}=\frac{S_T}{S_T-D}V_i$$
  * Staggered: $$V_\text{max}=\frac{S_T}{2(S_D-D)}V_i\text{ with }S_D=\sqrt{S_L^2+\left(\frac{S_T}{2}\right)^2}$$

The Nusselt number correlation for a bank of tubes is a variation of the Zukauskas correlation:
$$Nu = C_2C_1Re^mPr^{0.36}\left(\frac{Pr}{Pr_s}\right)^{1/4}$$
where $C_2$ depends on $N_L$. In the library, the function for this correlation is Nu_tube_banks(Re,Pr,Pr_s,S_L,S_T,N_L,arrangement).

The heat rate per unit length across the tube bank is
$$q'=N\overline{h}\pi D \Delta T_\text{lm}$$
where the temperature drop is the log-mean temperature difference
$$\Delta T_\text{lm}=\cfrac{(T_s-T_i)-(T_s-T_o)}{\ln\left(\cfrac{T_s-T_i}{T_s-T_o}\right)}$$
which accounts for the exponential variation of temperature across the bank,
$$\cfrac{T_s-T_o}{T_s-T_i}=\exp\left(-\cfrac{\pi D N \overline{h}}{\rho V_i N_T S_T C_p}\right)$$
where $\rho$, $C_p$ and $V_i$ are inlet quantities if $T_o$ is unknown, or are based on the arithmetic mean temperature if it is available. Note that $N=N_L\times N_T$, thus
$$\cfrac{T_s-T_o}{T_s-T_i}=\exp\left(-\cfrac{\pi D N_L \overline{h}}{\rho V_i S_T C_p}\right)$$
One may want to determine the number of tubes necessary to achieve a given $T_o$.
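To make these relations concrete, here is a short numerical sketch of the aligned-arrangement formulas. Every value below, including the average convection coefficient h_bar and the air properties, is an illustrative assumption rather than the solution to a specific problem:

import numpy as np

# Illustrative (assumed) geometry, inlet conditions, and fluid properties
D, S_T, S_L = 10e-3, 15e-3, 15e-3   # tube diameter and pitches (m)
V_i, T_i, T_s = 5.0, 25.0, 100.0    # inlet velocity (m/s), inlet and surface temperatures (C)
N_T, N_L = 14, 15                   # tubes per row, number of rows
rho, C_p = 1.17, 1007.0             # assumed air density (kg/m^3) and specific heat (J/kg/K)
h_bar = 100.0                       # assumed average convection coefficient (W/m^2/K)

V_max = S_T / (S_T - D) * V_i       # maximum velocity, aligned arrangement
# Outlet temperature from the exponential relation above
T_o = T_s - (T_s - T_i) * np.exp(-np.pi * D * N_L * h_bar / (rho * V_i * S_T * C_p))
# Log-mean temperature difference and heat rate per unit tube length
dT_lm = ((T_s - T_i) - (T_s - T_o)) / np.log((T_s - T_i) / (T_s - T_o))
q_per_length = N_T * N_L * h_bar * np.pi * D * dT_lm   # W/m
print(V_max, T_o, dT_lm, q_per_length)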
The number of tubes in the transverse direction is typically dictated by the geometry of the system, so we are looking for $N_L$:
$$N_L = \cfrac{\rho V_i S_T C_p}{\pi D \overline{h}} \ln\left(\cfrac{T_s-T_i}{T_s-T_o}\right)$$
The pressure loss through the tube bank is a critical component of the heat exchanger design. The presence of obstacles in the flow requires an increase in the mechanical energy necessary to drive the flow at a given flow rate. The pressure loss, given all the parameters above, is
$$\Delta p = N_L\,\chi\, f\,\frac{\rho V_\text{max}^2}{2}$$
where the friction factor $f$ and the parameter $\chi$ are given by the graphs below for the aligned (top) and staggered (bottom) arrangements. These graphs use two new quantities, the longitudinal and transverse pitches:
$$P_L=\frac{S_L}{D}\text{ and } P_T=\frac{S_T}{D}$$

Problem 1
A preheater involves the use of condensing steam on the inside of a bank of tubes to heat air that enters at $P_i=1 \text{ atm}$ and $T_i=25^\circ\text{C}$. The air moves at $V_i=5\text{ m/s}$ in cross flow over the tubes. Each tube is $L=1\text{ m}$ long and has an outside diameter of $D=10 \text{ mm}$. The bank consists of columns of 14 tubes in the transversal direction, $N_T=14$, and $N_L$ rows in the direction of flow. The arrangement of tubes is an aligned array for which $S_T=S_L=15\text{ mm}$. What is the minimum value of $N_L$ needed to achieve an outlet temperature of $T_o=75^\circ\text{C}$? What is the corresponding pressure drop across the tube bank?
import numpy as np
from Libraries import thermodynamics as thermo
from Libraries import HT_external_convection as extconv

T_i = 25    # C
T_o = 75    # C
T_s = 100   # C
V_i = 5     # m/s
L = 1       # m
D = 10e-3   # m (10 mm)
N_L = 14    # initial guess for the number of rows
S_T = S_L = 15e-3  # m

# ?extconv.BankofTubes
bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,"C",V_i,D,S_L,S_T,N_L)
print("The number of rows required to reach T_o=%.0f C is %.2f" % (bank.T_o, bank.N_L_for_given_To))
The number of rows required to reach T_o=75 C is 15.26
CC-BY-3.0
HT-banks_of_tubes.ipynb
CarlGriffinsteed/UVM-ME144-Heat-Transfer
If the outlet temperature can be slightly below $75^\circ\mathrm{C}$, then the number of rows is 15. If the outlet temperature has to be at least $75^\circ\mathrm{C}$, then the number of rows is 16.
N_L = 15
N_T = 14
bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,"C",V_i,D,S_L,S_T,N_L)
bank.temperature_outlet_tube_banks(N_T,N_L)
print("With N_L=%.0f, T_o=%.2f" % (bank.N_L, bank.T_o))
print("Re=%.0f, P_L = %.2f" % (bank.Re, bank.S_T/bank.D))

# The last two arguments are the friction factor and the chi correction
# read from the aligned-arrangement charts for this Re and pitch
bank.pressure_drop(N_L, 3.2, 1)
print("Pressure drop is %.2f Pa" % (bank.Delta_p))
With N_L=15, T_o=74.54
Re=9052, P_L = 1.50
Pressure drop is 6401.70 Pa
CC-BY-3.0
HT-banks_of_tubes.ipynb
CarlGriffinsteed/UVM-ME144-Heat-Transfer
Problem 2
A preheater involves the use of condensing steam at $100^\circ\text{C}$ on the inside of a bank of tubes to heat air that enters at $1 \text{ atm}$ and $25^\circ\text{C}$. The air moves at $5\text{ m/s}$ in cross flow over the tubes. Each tube is $1\text{ m}$ long and has an outside diameter of $10 \text{ mm}$. The bank consists of 196 tubes in a square, aligned array for which $S_T=S_L=15\text{ mm}$. What is the total rate of heat transfer to the air? What is the pressure drop associated with the airflow?
N_L = N_T = 14  # 196 tubes in a square array

# First pass (commented out): guess an outlet temperature to evaluate
# the properties, then update the guess with the computed T_o and repeat
# until the two values agree.
# T_o = 50.
# bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,"C",V_i,D,S_L,S_T,N_L)
# bank.temperature_outlet_tube_banks(N_T,N_L)
# print(bank.T_o)
# print(bank.Re)
# print(bank.Nu)

# Converged value after iterating on T_o
T_o = 72.6
bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,"C",V_i,D,S_L,S_T,N_L)
bank.temperature_outlet_tube_banks(N_T,N_L)
print(bank.T_o)  # outlet temperature (C)
print(bank.Re)   # Reynolds number
print(bank.Nu)   # Nusselt number
bank.heat_rate(N_T,N_L,L)
print(bank.q)    # total heat rate (W)
72.60620496012206 9080.451003545966 73.95776478607291 59665.2457253688
CC-BY-3.0
HT-banks_of_tubes.ipynb
CarlGriffinsteed/UVM-ME144-Heat-Transfer
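The manual guess-and-check above can be automated with a simple fixed-point iteration. This is a sketch that assumes the `BankofTubes` constructor and `temperature_outlet_tube_banks` behave exactly as in the cells above, and it reuses the parameters already defined (`T_i`, `T_s`, `V_i`, `D`, `S_L`, `S_T`, `N_T`, `N_L`, `L`).

```python
from Libraries import HT_external_convection as extconv

# Fixed-point iteration on T_o: stop when the guess stops changing
T_o_guess = 50.0                           # arbitrary starting guess (C)
for _ in range(20):
    bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o_guess,"C",
                               V_i,D,S_L,S_T,N_L)
    bank.temperature_outlet_tube_banks(N_T,N_L)
    if abs(bank.T_o - T_o_guess) < 0.01:   # converged within 0.01 C
        break
    T_o_guess = bank.T_o

print("Converged T_o = %.2f C" % bank.T_o)
bank.heat_rate(N_T,N_L,L)
print("Total heat rate q = %.1f W" % bank.q)
```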
Problem 3

An air duct heater consists of an aligned array of electrical heating elements in which the longitudinal and transverse pitches are $S_L=S_T= 24\text{ mm}$. There are 3 rows of elements in the flow direction ($N_L=3$) and 4 elements per row ($N_T=4$). Atmospheric air with an upstream velocity of $12\text{ m/s}$ and a temperature of $25^\circ\text{C}$ moves in cross flow over the elements, which have a diameter of $12\text{ mm}$, a length of $250\text{ mm}$, and are maintained at a surface temperature of $350^\circ\text{C}$.

1. Determine the total heat transfer to the air and the temperature of the air leaving the duct heater.
2. Determine the pressure drop across the element bank and the fan power requirement.
3. Compare the average convection coefficient obtained in your analysis with the value for an isolated (single) element. Explain the difference between the results.
4. What effect would increasing the longitudinal and transverse pitches to 30 mm have on the exit temperature of the air, the total heat rate, and the pressure drop?
_____no_output_____
CC-BY-3.0
HT-banks_of_tubes.ipynb
CarlGriffinsteed/UVM-ME144-Heat-Transfer
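No solution is provided in the source for Problem 3. A possible starting point, reusing the `BankofTubes` interface from Problems 1 and 2, might look like the following sketch; the initial outlet-temperature guess should be iterated as in Problem 2, and the chart values for $f$ and $\chi$ still need to be read at $P_L = 2.0$ and the computed $Re$.

```python
from Libraries import HT_external_convection as extconv

# Duct-heater parameters from the problem statement
T_i, T_s = 25., 350.   # inlet air and element surface temperatures (C)
T_o_guess = 100.       # initial outlet guess (C); iterate as in Problem 2
V_i = 12.              # upstream velocity (m/s)
D = 12e-3              # element diameter (m)
L = 250e-3             # element length (m)
S_T = S_L = 24e-3      # transverse and longitudinal pitches (m)
N_T, N_L = 4, 3        # 4 elements per row, 3 rows

bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o_guess,"C",
                           V_i,D,S_L,S_T,N_L)
bank.temperature_outlet_tube_banks(N_T,N_L)
print("T_o = %.1f C" % bank.T_o)
bank.heat_rate(N_T,N_L,L)
print("q = %.0f W" % bank.q)
# The pressure drop requires f and chi from the aligned-bank chart
# at P_L = S_L/D = 2.0 and the computed Re; with those in hand:
# bank.pressure_drop(N_L, f, chi)
```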
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

Training Pipeline - Custom Script

_**Training many models using a custom script**_

----

This notebook demonstrates how to create a pipeline that trains and registers many models using a custom script. We utilize the [ParallelRunStep](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step) to parallelize the process of training the models and make it more efficient. For this solution accelerator we use the [OJ Sales Dataset](https://azure.microsoft.com/en-us/services/open-datasets/catalog/sample-oj-sales-simulated/) to train individual models that predict sales for each store and brand of orange juice.

The model used here is a simple, regression-based forecaster built on scikit-learn and pandas utilities. See the [training script](scripts/train.py) to see how the forecaster is constructed. This forecaster is intended for demonstration purposes, so it does not handle the large variety of special cases encountered in time-series modeling. For instance, the model assumes that all time series consist of regularly sampled observations on a contiguous interval with no missing values, and it does not include any handling of categorical variables. For a more general-purpose forecaster that handles missing data, advanced featurization, and automatic model selection, see the [AutoML Forecasting task](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast). Also, see the notebooks demonstrating [AutoML forecasting in a many models scenario](../Automated_ML).

Prerequisites

At this point, you should have already:

1. Created your AML Workspace using the [00_Setup_AML_Workspace notebook](../00_Setup_AML_Workspace.ipynb)
2. Run [01_Data_Preparation.ipynb](../01_Data_Preparation.ipynb) to set up your compute and create the dataset

Please ensure you have the latest version of the Azure ML SDK, and also install the Pipeline Steps package.
#!pip install --upgrade azureml-sdk
#!pip install azureml-pipeline-steps
_____no_output_____
MIT
Custom_Script/02_CustomScript_Training_Pipeline.ipynb
ben-chin-unify/solution-accelerator-many-models
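For a concrete sense of what a "simple, regression-based forecaster built on scikit-learn and pandas" can look like, here is a minimal illustrative sketch. It is not the accelerator's actual `train.py`; the lag count, helper names, and synthetic data are assumptions for demonstration only.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def make_lag_features(series: pd.Series, n_lags: int = 4) -> pd.DataFrame:
    """Build lagged copies of a regularly sampled series as regression features."""
    df = pd.DataFrame({f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)})
    df["y"] = series
    return df.dropna()  # drop the first n_lags rows, which lack full history

def fit_forecaster(series: pd.Series, n_lags: int = 4) -> LinearRegression:
    """Fit a linear autoregression y_t ~ y_{t-1}, ..., y_{t-n_lags}."""
    data = make_lag_features(series, n_lags)
    model = LinearRegression()
    model.fit(data.drop(columns="y"), data["y"])
    return model

# Synthetic weekly sales for a single store/brand series (illustrative only)
sales = pd.Series([112., 118., 121., 115., 130., 128., 135., 140., 138., 145.])
model = fit_forecaster(sales, n_lags=4)

# One-step-ahead forecast: the most recent observation is lag_1, and so on
X_next = pd.DataFrame([sales.iloc[-1:-5:-1].to_numpy()],
                      columns=[f"lag_{k}" for k in range(1, 5)])
print("Next-period forecast:", model.predict(X_next)[0])
```

In the many-models setting, a function like `fit_forecaster` would be called once per store/brand series inside the ParallelRunStep worker, with each fitted model registered separately.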