Column summary:
  Unnamed: 0    int64          0 – 16k
  text_prompt   stringlengths  110 – 62.1k
  code_prompt   stringlengths  37 – 152k
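The summary above describes a two-column prompt dataset (text_prompt and code_prompt strings, plus an unnamed integer index); the numbered rows that follow are individual records. Below is a minimal sketch of how such a dump could be loaded and checked against that summary. It assumes the data sits in a CSV file named prompts.csv and that pandas is available; the file name, format, and the row label 3600 are illustrative assumptions, not details taken from the records themselves.

```python
import pandas as pd

# Hypothetical file name/format; adjust to wherever this dump actually lives.
df = pd.read_csv("prompts.csv", index_col=0)

# Sanity checks mirroring the column summary above.
print(df.dtypes)                               # text_prompt / code_prompt stored as strings (object dtype)
print(df["text_prompt"].str.len().describe())  # compare with the 110 - 62.1k length range
print(df["code_prompt"].str.len().describe())  # compare with the 37 - 152k length range

# Peek at a single record, e.g. the row labelled 3600 shown below (label assumed to exist).
row = df.loc[3600]
print(row["text_prompt"][:200])
print(row["code_prompt"][:200])
```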
3,600
Given the following text description, write Python code to implement the functionality described below step by step Description: Ground Penetrating Radar Depth of Investigation and Resolution Overview This notebook contains two apps, which are used to complete part 2 and part 3 in team TBL assignment 4 Step1: GPR Zero Offset App (Wave Regime) This app is used to complete part 2 of the team TBL. As previously mentioned, the app simulates radargram data from two reflectors buried in a homogeneous Earth. The range of parameter values for this app is set such that we may assume we are operating in the wave regime. In the wave regime, the following formulas can be used to approximate propagation velocity and skin depth Step2: Attenuation App This app is used to complete part 3 of the team TBL. As mentioned previously, the app computes the propagation velocity and skin depth for GPR signals as a function of operating frequency. Because we are working in the general case, the propagation velocity and skin depth are given by
Python Code: %matplotlib inline import numpy as np from geoscilabs.gpr.GPR_zero_offset import WidgetWaveRegime from geoscilabs.gpr.Attenuation import AttenuationWidgetTBL Explanation: Ground Penetrating Radar Depth of Investigation and Resolution Overview This notebook contains two apps, which are used to complete part 2 and part 3 in team TBL assignment 4: + GPR Zero Offset App: This app simulates radargram data from two reflectors buried in a homogeneous Earth. The range of parameter values for this app are set such that we may assume we are operating in the wave regime. + Attenuation App: This app computes the propagation velocity and skin depth for GPR signals as a function of operating frequency. Importing Packages End of explanation fc = 250*1e6 d = 6 v = 3*1e8 / np.sqrt(4) np.sqrt(v*d / (2*fc)) WidgetWaveRegime() Explanation: GPR Zero Offset App (Wave Regime) This app is used to complete part 2 of the team TBL. As previously mentionned, the app simulates radargram data from two reflectors buried in a homogeneous Earth. The range of parameter values for this app are set such that we may assume we are operating in the wave regime. In the wave regime, the following formulas can be used to approximate propagation velocity and skin depth: + Propagation Velocity: $\;\;\; v = \dfrac{c}{\sqrt{\varepsilon_r}}$ + Skin Depth: $\;\;\; \delta = 0.0053 \, \dfrac{\sqrt{\varepsilon_r}}{\sigma}$ Note however, that expressions for the horizontal resolution, vertical layer resolution and wavelength found in the GPG are still valid. Parameters for the App: $\sigma$: Conductivity for the Earth in mS/m $\varepsilon_r$: Relative permittivity for the Earth (unitless) $f_c$: Central operating frequency for the instrument in MHz $x_1, d_1$ and $R_1$: The x-location, depth and radius of reflector 1 in metres $x_2, d_2$ and $R_2$: The x-location, depth and radius of reflector 2 in metres End of explanation AttenuationWidgetTBL() Explanation: Attenuation App This app is used to complete part 3 of the team TBL. As mentionned previously, the app computes the propagation velocity and skin depth for GPR signals as a function of operating frequency. Because we are working in the general case, the propagation velocity and skin depth are given by: Propagation Velocity: $\;\;\; v = \sqrt{\dfrac{2}{\mu \varepsilon}} \Bigg [ \Bigg ( 1 + \bigg ( \dfrac{\sigma}{\omega \varepsilon} \bigg )^2 \Bigg )^{1/2} + 1 \; \Bigg ]^{-1/2}$ Skin Depth: $\;\;\; \delta = \sqrt{\dfrac{2}{\omega^2 \mu \varepsilon}} \Bigg [ \Bigg ( 1 + \bigg ( \dfrac{\sigma}{\omega \varepsilon} \bigg )^2 \Bigg )^{1/2} - 1 \; \Bigg ]^{-1/2}$ where $\omega = 2\pi f_c$ and $f_c$ is the operating frequency. Here, we assume that the Earth is non-magnetic (e.g. $\mu = \mu_0$). The app provides the values for the propagation velocity and skin depth at frequencies $f_c$ = 25,100 and 1000 MHz. Parameters for the App: $epsr$: Relative permittivity of the medium (unitless) $sigma$: Log (base 10) conductivity of the medium. Note that sigma = -1.5 corresponds to a true conductivity of $\sigma$ = 0.0316 S/m. End of explanation
3,601
Given the following text description, write Python code to implement the functionality described below step by step Description: A nice way to animate functions in jupyter notebooks (ref) Thanks to the heavy recent development dedicated to Matplotlib and the Jupyter Notebook, Matplotlib 1.5.1 supports inline display of animations with the to_html5_video method, which converts the animation to an h264 encoded video and embeds it directly in the notebook. Step1: To show the animation, anim uses its conversion of the video to html5 using its method to_html5_video(), and the result is shown through the HTML() function. Step2: Note that Animation instances now have a _repr_html_ method. However, it returns None by default. Step3: This means that we won't get any animation from the inline display. Step4: The method used to display is controlled by the animation.html rc parameter, which currently supports values of None and html5. None is the default, performing no display. We simply need to set it to html5
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib import animation, rc from IPython.display import HTML # first set up the figure, the axes and the plot element we want to animate fig, ax = plt.subplots() ax.set_xlim( 0, 2) ax.set_ylim(-1, 2) line, = ax.plot([],[], lw=2) # initialization function: plot the background of each frame def init(): line.set_data([], []) return (line,) # animation function. This is called sequentially def animate(i): x = np.linspace(0, 2, 1000) y = np.sin(2 * np.pi * (x - 0.01 * i)) line.set_data(x, y) return (line,) # call the animator. blit=True means: only re-daw the parts that have changed. anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=20, blit=True) Explanation: A nice way to animate fucntions in jupyter notebooks ref Thanks to the heavy recent development dedicated to Matplotlib and the Jupyter Notebook, , Matplotlib 1.5.1 supports inline display of animations with the to_html5_video method, which converts the animation to an h264 encoded video and embeddeds it directly in the notebook. End of explanation # Show the animation. HTML(anim.to_html5_video()) Explanation: To show the animation, anim uses its conversion of the video to html5 using its method to_html5_video(), and the result is shown through the HTML() function. End of explanation anim._repr_html_() is None Explanation: Note that Animation instances nowhave a repr_html method. However, it returns None by default. End of explanation anim Explanation: This means that we won't get any animation from the inline display. End of explanation # equivalent to rcParams['animation.html'] = 'html5' rc('animation', html='html5') anim Explanation: The method used to display is controlled by the animation.html rc parameter, which currently supports values of None and html5. None is the default, performing no display. We simply need to set it to html5: End of explanation
3,602
Given the following text description, write Python code to implement the functionality described below step by step Description: BigQuery ML models with feature engineering In this notebook, we will use BigQuery ML to build more sophisticated models for taxifare prediction. This is a continuation of our first models we created earlier with BigQuery ML but now with more feature engineering. Learning Objectives Apply transformations using SQL to prune the taxi cab dataset Create and train a new Linear Regression model with BigQuery ML Evaluate and predict with the linear model Create a feature cross for day-hour combination using SQL Examine ways to reduce model overfitting with regularization Create and train a DNN model with BigQuery ML Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Step1: Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. Step2: Model 4 Step3: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data Step4: Yippee! We're now below our target of 6 dollars in RMSE. We are now beating our goals, and with just a linear model. Making predictions with BigQuery ML This is how the prediction query would look that we saw earlier heading 1.3 miles uptown in New York City. Step5: Improving the model with feature crosses Let's do a feature cross of the day-hour combination instead of using them raw Step6: Sometimes (not the case above), the training RMSE is quite reasonable, but the evaluation RMSE is terrible. This is an indication of overfitting. When we do feature crosses, we run into the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxirides). Reducing overfitting Let's add L2 regularization to help reduce overfitting. Let's set it to 0.1 Step7: These sorts of experiment would have taken days to do otherwise. We did it in minutes, thanks to BigQuery ML! The advantage of doing all this in the TRANSFORM is the client code doing the PREDICT doesn't change. Our model improvement is transparent to client code. Step8: Let's try feature crossing the locations too Because the lat and lon by themselves don't have meaning, but only in conjunction, it may be useful to treat the fields as a pair instead of just using them as numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what ML.BUCKETIZE does. Here are some of the preprocessing functions in BigQuery ML Step9: Yippee! We're now below our target of 6 dollars in RMSE. DNN You could, of course, train a more sophisticated model. Change "linear_reg" above to "dnn_regressor" and see if it improves things. Note
Python Code: %%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT import os PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # Do not change these os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID if PROJECT == "your-gcp-project-here": print("Don't forget to update your PROJECT name! Currently:", PROJECT) Explanation: BigQuery ML models with feature engineering In this notebook, we will use BigQuery ML to build more sophisticated models for taxifare prediction. This is a continuation of our first models we created earlier with BigQuery ML but now with more feature engineering. Learning Objectives Apply transformations using SQL to prune the taxi cab dataset Create and train a new Linear Regression model with BigQuery ML Evaluate and predict with the linear model Create a feature cross for day-hour combination using SQL Examine ways to reduce model overfitting with regularization Create and train a DNN model with BigQuery ML Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. End of explanation %%bash ## Create a BigQuery dataset for serverlessml if it doesn't exist datasetexists=$(bq ls -d | grep -w serverlessml) if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: serverlessml" bq --location=US mk --dataset \ --description 'Taxi Fare' \ $PROJECT:serverlessml echo "\nHere are your current datasets:" bq ls fi ## Create GCS bucket if it doesn't exist already... exists=$(gsutil ls -d | grep -w gs://${PROJECT}/) if [ -n "$exists" ]; then echo -e "Bucket exists, let's not recreate it." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${PROJECT} echo "\nHere are your current buckets:" gsutil ls fi Explanation: Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. 
End of explanation %%bigquery CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers FROM `nyc-tlc.yellow.trips` # The full dataset has 1+ Billion rows, let's take only 1 out of 1,000 (or 1 Million total) WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 %%bigquery # Tip: You can CREATE MODEL IF NOT EXISTS as well CREATE OR REPLACE MODEL serverlessml.model4_feateng TRANSFORM( * EXCEPT(pickup_datetime) , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean , CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek , CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT * FROM serverlessml.feateng_training_data Explanation: Model 4: With some transformations BigQuery ML automatically scales the inputs. so we don't need to do scaling, but human insight can help. Since we we'll repeat this quite a bit, let's make a dataset with 1 million rows. End of explanation %%bigquery SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model4_feateng) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model4_feateng) Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data: End of explanation %%bigquery SELECT * FROM ML.PREDICT(MODEL serverlessml.model4_feateng, ( SELECT -73.982683 AS pickuplon, 40.742104 AS pickuplat, -73.983766 AS dropofflon, 40.755174 AS dropofflat, 3.0 AS passengers, TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime )) Explanation: Yippee! We're now below our target of 6 dollars in RMSE. We are now beating our goals, and with just a linear model. Making predictions with BigQuery ML This is how the prediction query would look that we saw earlier heading 1.3 miles uptown in New York City. 
End of explanation %%bigquery CREATE OR REPLACE MODEL serverlessml.model5_featcross TRANSFORM( * EXCEPT(pickup_datetime) , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean , ML.FEATURE_CROSS( STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday) ) AS day_hr ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT * FROM serverlessml.feateng_training_data %%bigquery SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model5_featcross) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model5_featcross) Explanation: Improving the model with feature crosses Let's do a feature cross of the day-hour combination instead of using them raw End of explanation %%bigquery CREATE OR REPLACE MODEL serverlessml.model6_featcross_l2 TRANSFORM( * EXCEPT(pickup_datetime) , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean , ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1) AS SELECT * FROM serverlessml.feateng_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model6_featcross_l2) Explanation: Sometimes (not the case above), the training RMSE is quite reasonable, but the evaluation RMSE is terrible. This is an indication of overfitting. When we do feature crosses, we run into the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxirides). Reducing overfitting Let's add L2 regularization to help reduce overfitting. Let's set it to 0.1 End of explanation %%bigquery SELECT * FROM ML.PREDICT(MODEL serverlessml.model6_featcross_l2, ( SELECT -73.982683 AS pickuplon, 40.742104 AS pickuplat, -73.983766 AS dropofflon, 40.755174 AS dropofflat, 3.0 AS passengers, TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime )) Explanation: These sorts of experiment would have taken days to do otherwise. We did it in minutes, thanks to BigQuery ML! The advantage of doing all this in the TRANSFORM is the client code doing the PREDICT doesn't change. Our model improvement is transparent to client code. End of explanation %%bigquery -- BQML chooses the wrong gradient descent strategy here. 
It will get fixed in (b/141429990) -- But for now, as a workaround, explicitly specify optimize_strategy='BATCH_GRADIENT_DESCENT' CREATE OR REPLACE MODEL serverlessml.model7_geo TRANSFORM( fare_amount , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean , ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr , CONCAT( ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1, optimize_strategy='BATCH_GRADIENT_DESCENT') AS SELECT * FROM serverlessml.feateng_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model7_geo) Explanation: Let's try feature crossing the locations too Because the lat and lon by themselves don't have meaning, but only in conjunction, it may be useful to treat the fields as a pair instead of just using them as numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what ML.BUCKETIZE does. Here are some of the preprocessing functions in BigQuery ML: * ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations * ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x^2, x^3, etc. * ML.BUCKETIZE(f, split_points) where split_points is an array End of explanation %%bigquery -- This is alpha and may not work for you. CREATE OR REPLACE MODEL serverlessml.model8_dnn TRANSFORM( fare_amount , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean , CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING), CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS day_hr , CONCAT( ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon, pickuplat), 0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon, dropofflat), 0.01)) ) AS pickup_and_dropoff ) -- at the time of writing, l2_reg wasn't supported yet. OPTIONS(input_label_cols=['fare_amount'], model_type='dnn_regressor', hidden_units=[32, 8]) AS SELECT * FROM serverlessml.feateng_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model8_dnn) Explanation: Yippee! We're now below our target of 6 dollars in RMSE. DNN You could, of course, train a more sophisticated model. Change "linear_reg" above to "dnn_regressor" and see if it improves things. Note: This takes 20 - 25 minutes to run. End of explanation
3,603
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 9 - Clustering Isac do Nascimento Lira, 371890 1 - Apply the K-means [1] and AgglomerativeClustering [2] algorithms to any dataset you wish (recommendation Step1: 2 - What value of K (number of clusters) did you choose for the previous question? Develop the Elbow Method (do not use a library!) and find the most appropriate K. After finding it, apply K-means again with the appropriate K. Step2: From the plot, the ideal value of k is 3, since it balances the completeness and homogeneity metrics. Beyond this value there are also no significant gains in the homogeneity metric. Moreover, the value is consistent with the actual number of clusters in the dataset.
Python Code: from sklearn import datasets import pandas as pd from sklearn.cluster import KMeans from sklearn.cluster import AgglomerativeClustering as AC from sklearn.decomposition import PCA import matplotlib.pyplot as plt import numpy as np from sklearn import metrics %matplotlib inline # Carrega os dados irisDF = datasets.load_iris() X = irisDF.data y = irisDF.target # Instancia o classificador KMeans e realiza a predição n_clusters = 2 kmeans = KMeans(n_clusters = n_clusters,random_state = 0).fit(X) predict = kmeans.predict(X) # Reduz os o número de features e visualiza os clusters formados newSet = PCA(n_components=2).fit_transform(X) fig,axs = plt.subplots(1,2,figsize=[15,5]) axs[0].scatter(newSet[:,0], newSet[:,1], c=y) axs[0].set_xlabel('Component 1') axs[0].set_ylabel('Component 2') axs[1].scatter(newSet[:,0], newSet[:,1], c=predict) axs[1].set_xlabel('Component 1') axs[1].set_ylabel('Component 2') # Calcula as métricas - Homogeneidade e Completude hg_kmeans = metrics.homogeneity_score(y,predict) cs_kmeans = metrics.completeness_score(y,predict) print('Métrica de homogeneida(KMeans): ',hg_kmeans) print('Métrica de completude(KMeans): ',cs_kmeans) # Realiza a clusterização baseado em AgglomerativeClustering predict_ac = AC(n_clusters = 2).fit_predict(X) fig,axs = plt.subplots(1,2,figsize=[15,5]) axs[0].scatter(newSet[:,0], newSet[:,1], c=y) axs[0].set_xlabel('Component 1') axs[0].set_ylabel('Component 2') axs[1].scatter(newSet[:,0], newSet[:,1], c=predict_ac) axs[1].set_xlabel('Component 1') axs[1].set_ylabel('Component 2') hg_ac = metrics.homogeneity_score(y,predict_ac) cs_ac = metrics.completeness_score(y,predict_ac) print('AglomerativeClustering Metrics') print('Métrica de homogeneida: ',hg_ac) print('Métrica de completude: ',cs_ac) Explanation: Homework 9 - Clustering Isac do Nascimento Lira, 371890 1 - Aplique os algoritmos K-means [1] e AgglomerativeClustering [2] em qualquer dataset que você desejar (recomendação: iris). Compare os resultados utilizando métricas de avaliação de clusteres (completeness e homogeneity, por exemplo) [3]. End of explanation ks = np.arange(1,10) homog_metrics = [] comp_metrics = [] for k in ks: kmeans = KMeans(n_clusters=k,random_state=0).fit(X) ypred = kmeans.predict(X) homog_metrics.append(metrics.homogeneity_score(y,ypred)) comp_metrics.append(metrics.completeness_score(y,ypred)) plt.plot(ks,homog_metrics) plt.plot(ks,comp_metrics) plt.xlabel('K') plt.ylabel('Homogeneity Metric') Explanation: 2 - Qual o valor de K (número de clusteres) você escolheu para a questão anterior? Desenvolva o Método do Cotovelo (não utilizar lib!) e descubra o K mais adequado. Após descobrir, aplique novamente o K-means com o K adequado. End of explanation best_k = 3 predict = KMeans(n_clusters=best_k).fit_predict(X) hg_kmeans = metrics.homogeneity_score(y,predict) cs_kmeans = metrics.completeness_score(y,predict) print('Métrica de homogeneida(KMeans): ',hg_kmeans) print('Métrica de completude(KMeans): ',cs_kmeans) Explanation: Pelo gráfico, o valor de k ideal é 3, pois balanceia as métricas de completude e homogeneidade. Apartir deste valor, também não há ganhos significativos na métrica de homogeneidade. No mais, o valor é coerente com o número de clusters real do dataset. End of explanation
3,604
Given the following text description, write Python code to implement the functionality described below step by step Description: <div style='background-image Step1: 1. Initialization of setup Step2: 2. Elemental Mass and Stiffness matrices The mass and the stiffness matrix are calculated prior time extrapolation, so they are pre-calculated and stored at the beginning of the code. The integrals defined in the mass and stiffness matrices are computed using a numerical quadrature, in this cases the GLL quadrature that uses the GLL points and their corresponding weights to approximate the integrals. Hence, \begin{equation} M_{ij}^k=\int_{-1}^1 \ell_i^k(\xi) \ell_j^k(\xi) \ J \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k (x_m) \ell_j^k(x_m)\ J =\sum_{m=1}^{N_p} w_m \delta_{im}\ \delta_{jm} \ J= \begin{cases} w_i \ J \ \ \text{ if } i=j \ 0 \ \ \ \ \ \ \ \text{ if } i \neq j\end{cases} \end{equation} that is a diagonal mass matrix!. Subsequently, the stiffness matrices is given as \begin{equation} K_{i,j}= \int_{-1}^1 \ell_i^k(\xi) \cdot \partial x \ell_j^k(\xi) \ d\xi= \sum{m=1}^{N_p} w_m \ \ell_i^k(x_m)\cdot \partial_x \ell_j^k(x_m)= \sum_{m=1}^{N_p} w_m \delta_{im}\cdot \partial_x\ell_j^k(x_m)= w_i \cdot \partial_x \ell_j^k(x_i) \end{equation} The Lagrange polynomials and their properties have been already used, they determine the integration weights $w_i$ that are returned by the python method "gll". Additionally, the fist derivatives of such basis, $\partial_x \ell_j^k(x_i)$, are needed, the python method "Lagrange1st" returns them. Step3: 3. Flux Matrices The main difference in the heterogeneous case with respect the homogeneous one is found in the definition of fluxes. As in the case of finite volumes when we solve the 1D elastic wave equation, we allow the coefficients of matrix A to vary inside the element. \begin{equation} \mathbf{A}= \begin{pmatrix} 0 & -\mu_i \ -1/\rho_i & 0 \end{pmatrix} \end{equation} Now we need to diagonalize $\mathbf{A}$. Introducing the seismic impedance $Z_i = \rho_i c_i$, we have \begin{equation} \mathbf{A} = \mathbf{R}^{-1}\mathbf{\Lambda}\mathbf{R} \qquad\text{,}\qquad \mathbf{\Lambda}= \begin{pmatrix} -c_i & 0 \ 0 & c_i \end{pmatrix} \qquad\text{,}\qquad \mathbf{R} = \begin{pmatrix} Z_i & -Z_i \ 1 & 1 \end{pmatrix} \qquad\text{and}\qquad \mathbf{R}^{-1} = \frac{1}{2Z_i} \begin{pmatrix} 1 & Z_i \ -1 & Z_i \end{pmatrix} \end{equation} We decompose the solution into right propagating $\mathbf{\Lambda}^{+}$ and left propagating eigenvalues $\mathbf{\Lambda}^{-}$ where \begin{equation} \mathbf{\Lambda}^{+}= \begin{pmatrix} -c_i & 0 \ 0 & 0 \end{pmatrix} \qquad\text{,}\qquad \mathbf{\Lambda}^{-}= \begin{pmatrix} 0 & 0 \ 0 & c_i \end{pmatrix} \qquad\text{and}\qquad \mathbf{A}^{\pm} = \mathbf{R}^{-1}\mathbf{\Lambda}^{\pm}\mathbf{R} \end{equation} This strategy allows us to formulate the Flux term in the discontinuous Galerkin method. The following cell initializes all flux related matrices Step4: 4. 
Discontinuous Galerkin Solution The principal characteristic of the discontinuous Galerkin Method is the communication between the element neighbors using a flux term, in general it is given \begin{equation} \mathbf{Flux} = \int_{\partial D_k} \mathbf{A}\mathbf{Q}\ell_j(\xi)\mathbf{n}d\xi \end{equation} this term leads to four flux contributions for left and right sides of the elements \begin{equation} \mathbf{Flux} = -\mathbf{A}{k}^{-}\mathbf{Q}{l}^{k}\mathbf{F}^{l} + \mathbf{A}{k}^{+}\mathbf{Q}{r}^{k}\mathbf{F}^{r} - \mathbf{A}{k}^{+}\mathbf{Q}{r}^{k-1}\mathbf{F}^{l} + \mathbf{A}{k}^{-}\mathbf{Q}{l}^{k+1}\mathbf{F}^{r} \end{equation} Last but not least, we have to solve our semi-discrete scheme that we derived above using an appropriate time extrapolation, in the code below we implemented two different time extrapolation schemes
Python Code: # Import all necessary libraries, this is a configuration step for the exercise. # Please run it before the simulation code! import numpy as np import matplotlib.pyplot as plt from gll import gll from lagrange1st import lagrange1st from flux_hetero import flux # Show the plots in the Notebook. plt.switch_backend("nbagg") Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'> <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px"> <div style="position: relative ; top: 50% ; transform: translatey(-50%)"> <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div> <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Discontinuous Galerkin Method - 1D Elastic Wave Equation, Heterogeneous case</div> </div> </div> </div> Seismo-Live: http://seismo-live.org Authors: David Vargas (@dvargas) Heiner Igel (@heinerigel) Basic Equations The source-free elastic wave equation in 1D reads \begin{align} \partial_t \sigma - \mu \partial_x v & = 0 \ \partial_t v - \frac{1}{\rho} \partial_x \sigma & = 0 \end{align} with $\rho$ the density and $\mu$ the shear modulus. This equation in matrix-vector notation follows \begin{equation} \partial_t \mathbf{Q} + \mathbf{A} \partial_x \mathbf{Q} = 0 \end{equation} where $\mathbf{Q} = (\sigma, v)$ is the vector of unknowns and the matrix $\mathbf{A}$ contains the parameters $\rho$ and $\mu$. We seek to solve the linear advection equation as a hyperbolic equation $ \partial_t u + \mu \ \partial_x u=0$. A series of steps need to be done: 1) The weak form of the equation is derived by multiplying both sides by an arbitrary test function. 2) Apply the stress Free Boundary Condition after integration by parts 3) We approximate the unknown field $\mathbf{Q}(x,t)$ by a sum over space-dependent basis functions $\ell_i$ weighted by time-dependent coefficients $\mathbf{Q}(x_i,t)$, as we did in the spectral elements method. As interpolating functions we choose the Lagrange polynomials and use $\xi$ as the space variable representing the elemental domain: \begin{equation} \mathbf{Q}(\xi,t) \ = \ \sum_{i=1}^{N_p} \mathbf{Q}(\xi_i,t) \ell_i(\xi) \qquad with \qquad \ell_i^{(N)} (\xi) \ := \ \prod_{j = 1, \ j \neq i}^{N+1} \frac{\xi - \xi_j}{\xi_i-\xi_j}, \quad i,j = 1, 2, \dotsc , N + 1 \end{equation} 4) The continuous weak form is written as a system of linear equations by considering the approximated displacement field. Finally, the semi-discrete scheme can be written in matrix-vector form as \begin{equation} \mathbf{M}\partial_t \mathbf{Q} = \mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux} \end{equation} 5) Time extrapolation is done after applying a standard 1st order finite-difference approximation to the time derivative, we call it the Euler scheme. \begin{equation} \mathbf{Q}^{t+1} \approx \mathbf{Q}^{t} + dt\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux}) \end{equation} This notebook implements both Euler and Runge-Kutta schemes for solving the free source version of the elastic wave equation in a homogeneous media. 
To keep the problem simple, we use as spatial initial condition a Gauss function with half-width $\sigma$ \begin{equation} Q(x,t=0) = e^{-1/\sigma^2 (x - x_{o})^2} \end{equation} End of explanation # Initialization of setup # -------------------------------------------------------------------------- tmax = 2.5 # Length of seismogram [s] xmax = 10000 # Length of domain [m] vs0 = 2500 # Advection velocity rho0 = 2500 # Density [kg/m^3] mu0 = rho0*vs0**2 # shear modulus N = 2 # Order of Lagrange polynomials ne = 200 # Number of elements sig = 100 # width of Gaussian initial condition x0 = 4000 # x locartion of Gauss eps = 0.2 # Courant criterion iplot = 20 # Plotting frequency imethod = 'RK' # 'Euler', 'RK' nx = ne*N + 1 dx = xmax/(nx-1) # space increment #-------------------------------------------------------------------- # Initialization of GLL points integration weights [xi,w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points # w Integration weights at GLL locations # Space domain le = xmax/ne # Length of elements, here equidistent ng = ne*N + 1 # Vector with GLL points k = 0 xg = np.zeros((N+1)*ne) for i in range(0, ne): for j in range(0, N+1): k += 1 xg[k-1] = i*le + .5*(xi[j] + 1)*le x = np.reshape(xg, (N+1, ne), order='F').T # Calculation of time step acoording to Courant criterion dxmin = np.min(np.diff(xg[1:N+1])) dt = eps*dxmin/vs0 # Global time step nt = int(np.floor(tmax/dt)) # Mapping - Jacobian J = le/2 # Jacobian Ji = 1/J # Inverse Jacobian # 1st derivative of Lagrange polynomials l1d = lagrange1st(N) Explanation: 1. Initialization of setup End of explanation # Initialization of system matrices # ----------------------------------------------------------------- # Elemental Mass matrix M = np.zeros((N+1, N+1)) for i in range(0, N+1): M[i, i] = w[i] * J # Inverse matrix of M (M is diagonal!) Minv = np.identity(N+1) for i in range(0, N+1): Minv[i,i] = 1. / M[i,i] # Elemental Stiffness Matrix K = np.zeros((N+1, N+1)) for i in range(0, N+1): for j in range(0, N+1): K[i,j] = w[j] * l1d[i,j] # NxN matrix for every element Explanation: 2. Elemental Mass and Stiffness matrices The mass and the stiffness matrix are calculated prior time extrapolation, so they are pre-calculated and stored at the beginning of the code. The integrals defined in the mass and stiffness matrices are computed using a numerical quadrature, in this cases the GLL quadrature that uses the GLL points and their corresponding weights to approximate the integrals. Hence, \begin{equation} M_{ij}^k=\int_{-1}^1 \ell_i^k(\xi) \ell_j^k(\xi) \ J \ d\xi = \sum_{m=1}^{N_p} w_m \ \ell_i^k (x_m) \ell_j^k(x_m)\ J =\sum_{m=1}^{N_p} w_m \delta_{im}\ \delta_{jm} \ J= \begin{cases} w_i \ J \ \ \text{ if } i=j \ 0 \ \ \ \ \ \ \ \text{ if } i \neq j\end{cases} \end{equation} that is a diagonal mass matrix!. Subsequently, the stiffness matrices is given as \begin{equation} K_{i,j}= \int_{-1}^1 \ell_i^k(\xi) \cdot \partial x \ell_j^k(\xi) \ d\xi= \sum{m=1}^{N_p} w_m \ \ell_i^k(x_m)\cdot \partial_x \ell_j^k(x_m)= \sum_{m=1}^{N_p} w_m \delta_{im}\cdot \partial_x\ell_j^k(x_m)= w_i \cdot \partial_x \ell_j^k(x_i) \end{equation} The Lagrange polynomials and their properties have been already used, they determine the integration weights $w_i$ that are returned by the python method "gll". Additionally, the fist derivatives of such basis, $\partial_x \ell_j^k(x_i)$, are needed, the python method "Lagrange1st" returns them. 
End of explanation # Inialize Flux relates matrices # --------------------------------------------------------------- # initialize heterogeneous A Ap = np.zeros((ne,2,2)) Am = np.zeros((ne,2,2)) Z = np.zeros(ne) rho = np.zeros(ne) mu = np.zeros(ne) # initialize c, rho, mu, and Z rho = rho + rho0 rho[int(ne/2):ne] = .25 * rho[int(ne/2):ne] # Introduce discontinuity mu = mu + mu0 c = np.sqrt(mu/rho) Z = rho * c # Initialize flux matrices for i in range(1,ne-1): # Left side positive direction R = np.array([[Z[i], -Z[i]], [1, 1]]) Lp = np.array([[0, 0], [0, c[i]]]) Ap[i,:,:] = R @ Lp @ np.linalg.inv(R) # Right side negative direction R = np.array([[Z[i], -Z[i]], [1, 1]]) Lm = np.array([[-c[i], 0 ], [0, 0]]) Am[i,:,:] = R @ Lm @ np.linalg.inv(R) Explanation: 3. Flux Matrices The main difference in the heterogeneous case with respect the homogeneous one is found in the definition of fluxes. As in the case of finite volumes when we solve the 1D elastic wave equation, we allow the coefficients of matrix A to vary inside the element. \begin{equation} \mathbf{A}= \begin{pmatrix} 0 & -\mu_i \ -1/\rho_i & 0 \end{pmatrix} \end{equation} Now we need to diagonalize $\mathbf{A}$. Introducing the seismic impedance $Z_i = \rho_i c_i$, we have \begin{equation} \mathbf{A} = \mathbf{R}^{-1}\mathbf{\Lambda}\mathbf{R} \qquad\text{,}\qquad \mathbf{\Lambda}= \begin{pmatrix} -c_i & 0 \ 0 & c_i \end{pmatrix} \qquad\text{,}\qquad \mathbf{R} = \begin{pmatrix} Z_i & -Z_i \ 1 & 1 \end{pmatrix} \qquad\text{and}\qquad \mathbf{R}^{-1} = \frac{1}{2Z_i} \begin{pmatrix} 1 & Z_i \ -1 & Z_i \end{pmatrix} \end{equation} We decompose the solution into right propagating $\mathbf{\Lambda}^{+}$ and left propagating eigenvalues $\mathbf{\Lambda}^{-}$ where \begin{equation} \mathbf{\Lambda}^{+}= \begin{pmatrix} -c_i & 0 \ 0 & 0 \end{pmatrix} \qquad\text{,}\qquad \mathbf{\Lambda}^{-}= \begin{pmatrix} 0 & 0 \ 0 & c_i \end{pmatrix} \qquad\text{and}\qquad \mathbf{A}^{\pm} = \mathbf{R}^{-1}\mathbf{\Lambda}^{\pm}\mathbf{R} \end{equation} This strategy allows us to formulate the Flux term in the discontinuous Galerkin method. The following cell initializes all flux related matrices End of explanation # DG Solution, Time extrapolation # --------------------------------------------------------------- # Initalize solution vectors Q = np.zeros((ne, N+1, 2)) Qnew = np.zeros((ne, N+1, 2)) k1 = np.zeros((ne, N+1, 2)) k2 = np.zeros((ne, N+1, 2)) Q[:,:,0] = np.exp(-1/sig**2*((x-x0))**2) Qs = np.zeros(xg.size) # for plotting Qv = np.zeros(xg.size) # for plotting # Initialize animated plot # --------------------------------------------------------------- fig = plt.figure(figsize=(10,6)) ax1 = fig.add_subplot(2,1,1) ax2 = fig.add_subplot(2,1,2) line1 = ax1.plot(x, Q[:,:,0], 'k', lw=1.5) line2 = ax2.plot(x, Q[:,:,1], 'r', lw=1.5) ax1.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b') ax2.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b') ax1.set_xlim([0, xmax]) ax2.set_xlim([0, xmax]) ax1.set_ylabel('Stress') ax2.set_ylabel('Velocity') ax2.set_xlabel(' x ') plt.suptitle('Heterogeneous Disc. 
Galerkin - %s method'%imethod, size=16) plt.ion() # set interective mode plt.show() # --------------------------------------------------------------- # Time extrapolation # --------------------------------------------------------------- for it in range(nt): if imethod == 'Euler': # Calculate Fluxes Flux = flux(Q, N, ne, Ap, Am) for i in range(1,ne-1): Qnew[i,:,0] = dt * Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T Qnew[i,:,1] = dt * Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T elif imethod == 'RK': # Calculate Fluxes Flux = flux(Q, N, ne, Ap, Am) for i in range(1,ne-1): k1[i,:,0] = Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T) k1[i,:,1] = Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T) for i in range(1,ne-1): Qnew[i,:,0] = dt * Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T Qnew[i,:,1] = dt * Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T Flux = flux(Qnew,N,ne,Ap,Am) for i in range(1,ne-1): k2[i,:,0] = Minv @ (-mu[i] * K @ Qnew[i,:,1].T - Flux[i,:,0].T) k2[i,:,1] = Minv @ (-1/rho[i] * K @ Qnew[i,:,0].T - Flux[i,:,1].T) # Extrapolate Qnew = Q + 0.5 * dt * (k1 + k2) else: raise NotImplementedError Q, Qnew = Qnew, Q # -------------------------------------- # Animation plot. Display solution if not it % iplot: for l in line1: l.remove() del l for l in line2: l.remove() del l # stretch for plotting k = 0 for i in range(ne): for j in range(N+1): Qs[k] = Q[i,j,0] Qv[k] = Q[i,j,1] k = k + 1 # -------------------------------------- # Display lines line1 = ax1.plot(xg, Qs, 'k', lw=1.5) line2 = ax2.plot(xg, Qv, 'r', lw=1.5) plt.gcf().canvas.draw() Explanation: 4. Discontinuous Galerkin Solution The principal characteristic of the discontinuous Galerkin Method is the communication between the element neighbors using a flux term, in general it is given \begin{equation} \mathbf{Flux} = \int_{\partial D_k} \mathbf{A}\mathbf{Q}\ell_j(\xi)\mathbf{n}d\xi \end{equation} this term leads to four flux contributions for left and right sides of the elements \begin{equation} \mathbf{Flux} = -\mathbf{A}{k}^{-}\mathbf{Q}{l}^{k}\mathbf{F}^{l} + \mathbf{A}{k}^{+}\mathbf{Q}{r}^{k}\mathbf{F}^{r} - \mathbf{A}{k}^{+}\mathbf{Q}{r}^{k-1}\mathbf{F}^{l} + \mathbf{A}{k}^{-}\mathbf{Q}{l}^{k+1}\mathbf{F}^{r} \end{equation} Last but not least, we have to solve our semi-discrete scheme that we derived above using an appropriate time extrapolation, in the code below we implemented two different time extrapolation schemes: 1) Euler scheme \begin{equation} \mathbf{Q}^{t+1} \approx \mathbf{Q}^{t} + dt\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux}) \end{equation} 2) Second-order Runge-Kutta method (also called predictor-corrector scheme) \begin{eqnarray} k_1 &=& f(t_i, y_i) \ k_2 &=& f(t_i + dt, y_i + dt k_1) \ & & \ y_{i+1} &=& y_i + \frac{dt}{2} (k_1 + k_2) \end{eqnarray} with $f$ that corresponds with $\mathbf{M}^{-1}(\mathbf{A}\mathbf{K}\mathbf{Q} - \mathbf{Flux})$ End of explanation
3,605
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 Google Step1: Optimization Analysis <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Load Data Go through each record, load in supporting objects, flatten everything into records, and put into a dataframe. Step3: Plot Step4: Hardware Grid Step5: SK Model Step6: 3 Regular MaxCut
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 Google End of explanation try: import recirq except ImportError: !pip install -q git+https://github.com/quantumlib/ReCirq sympy~=1.6 Explanation: Optimization Analysis <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/optimization_analysis"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/optimization_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/optimization_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/optimization_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> Setup Install the ReCirq package: End of explanation from datetime import datetime import recirq import cirq import numpy as np import pandas as pd from recirq.qaoa.experiments.optimization_tasks import ( DEFAULT_BASE_DIR, DEFAULT_PROBLEM_GENERATION_BASE_DIR) records = [] for record in recirq.iterload_records(dataset_id="2020-03-tutorial", base_dir=DEFAULT_BASE_DIR): task = record['task'] result = recirq.load(task, DEFAULT_BASE_DIR) pgen_task = task.generation_task problem = recirq.load(pgen_task, base_dir=DEFAULT_PROBLEM_GENERATION_BASE_DIR)['problem'] record['problem'] = problem.graph record['problem_type'] = problem.__class__.__name__ recirq.flatten_dataclass_into_record(record, 'task') records.append(record) df = pd.DataFrame(records) df['timestamp'] = pd.to_datetime(df['timestamp']) df.head() Explanation: Load Data Go through each record, load in supporting objects, flatten everything into records, and put into a dataframe. 
End of explanation %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns sns.set_style('ticks') plt.rc('axes', labelsize=16, titlesize=16) plt.rc('xtick', labelsize=14) plt.rc('ytick', labelsize=14) plt.rc('legend', fontsize=14, title_fontsize=16) # Load landscape data from recirq.qaoa.experiments.p1_landscape_tasks import \ DEFAULT_BASE_DIR, DEFAULT_PROBLEM_GENERATION_BASE_DIR, DEFAULT_PRECOMPUTATION_BASE_DIR, \ ReadoutCalibrationTask records = [] ro_records = [] for record in recirq.iterload_records(dataset_id="2020-03-tutorial", base_dir=DEFAULT_BASE_DIR): record['timestamp'] = datetime.fromisoformat(record['timestamp']) dc_task = record['task'] if isinstance(dc_task, ReadoutCalibrationTask): ro_records.append(record) continue pgen_task = dc_task.generation_task problem = recirq.load(pgen_task, base_dir=DEFAULT_PROBLEM_GENERATION_BASE_DIR)['problem'] record['problem'] = problem.graph record['problem_type'] = problem.__class__.__name__ record['bitstrings'] = record['bitstrings'].bits recirq.flatten_dataclass_into_record(record, 'task') recirq.flatten_dataclass_into_record(record, 'generation_task') records.append(record) # Associate each data collection task with its nearest readout calibration for record in sorted(records, key=lambda x: x['timestamp']): record['ro'] = min(ro_records, key=lambda x: abs((x['timestamp']-record['timestamp']).total_seconds())) df_raw = pd.DataFrame(records) df_raw.head() from recirq.qaoa.simulation import hamiltonian_objectives def compute_energies(row): permutation = [] qubit_map = {} final_qubit_index = {q: i for i, q in enumerate(row['final_qubits'])} for i, q in enumerate(row['qubits']): fi = final_qubit_index[q] permutation.append(fi) qubit_map[i] = q return hamiltonian_objectives(row['bitstrings'], row['problem'], permutation, row['ro']['calibration'], qubit_map) # Start cleaning up the raw data landscape_df = df_raw.copy() landscape_df = landscape_df.drop(['line_placement_strategy', 'generation_task.dataset_id', 'generation_task.device_name'], axis=1) # Compute energies landscape_df['energies'] = landscape_df.apply(compute_energies, axis=1) landscape_df = landscape_df.drop(['bitstrings', 'problem', 'ro', 'qubits', 'final_qubits'], axis=1) landscape_df['energy'] = landscape_df.apply(lambda row: np.mean(row['energies']), axis=1) # We won't do anything with raw energies right now landscape_df = landscape_df.drop('energies', axis=1) # Do timing somewhere else landscape_df = landscape_df.drop([col for col in landscape_df.columns if col.endswith('_time')], axis=1) import scipy.interpolate from recirq.qaoa.simulation import lowest_and_highest_energy def get_problem_graph(problem_type, n=None, instance_i=0): if n is None: if problem_type == 'HardwareGridProblem': n = 4 elif problem_type == 'SKProblem': n = 3 elif problem_type == 'ThreeRegularProblem': n = 4 else: raise ValueError(repr(problem_type)) r = df_raw[ (df_raw['problem_type']==problem_type)& (df_raw['n_qubits']==n)& (df_raw['instance_i']==instance_i) ]['problem'] return r.iloc[0] def plot_optimization_path_in_landscape(problem_type, res=200, method='nearest', cmap='PuOr'): optimization_data = df[df['problem_type'] == problem_type] landscape_data = landscape_df[landscape_df['problem_type'] == problem_type] xx, yy = np.meshgrid(np.linspace(0, np.pi/2, res), np.linspace(-np.pi/4, np.pi/4, res)) x_iters = optimization_data['x_iters'].values[0] min_c, max_c = lowest_and_highest_energy(get_problem_graph(problem_type)) zz = scipy.interpolate.griddata( points=landscape_data[['gamma', 
'beta']].values, values=landscape_data['energy'].values / min_c, xi=(xx, yy), method=method, ) fig, ax = plt.subplots(1, 1, figsize=(5, 5)) norm = plt.Normalize(max_c/min_c, min_c/min_c) cmap = 'RdBu' extent=(0, 4, -2, 2) g = ax.imshow(zz, extent=extent, origin='lower', cmap=cmap, norm=norm, interpolation='none') xs, ys = zip(*x_iters) xs = np.array(xs) / (np.pi / 8) ys = np.array(ys) / (np.pi / 8) ax.plot(xs, ys, 'r-') ax.plot(xs[0], ys[0], 'rs')### Hardware Grid ax.plot(xs[1:-1], ys[1:-1], 'r.') ax.plot(xs[-1], ys[-1], 'ro') x, y = optimization_data['optimal_angles'].values[0] x /= (np.pi / 8) y /= (np.pi / 8) ax.plot(x, y, 'r*') ax.set_xlabel(r'$\gamma\ /\ (\pi/8)$') ax.set_ylabel(r'$\beta\ /\ (\pi/8)$') ax.set_title('Optimization path in landscape') fig.colorbar(g, ax=ax, shrink=0.8) def plot_function_values(problem_type): data = df[df['problem_type'] == problem_type] function_values = data['func_vals'].values[0] min_c, _ = lowest_and_highest_energy(get_problem_graph(problem_type)) function_values = np.array(function_values) / min_c x = range(len(function_values)) fig, ax = plt.subplots(1, 1, figsize=(5, 5)) ax.plot(x, function_values, 'o--') ax.set_xlabel('Optimization iteration') ax.set_ylabel(r'$E / E_{min}$') ax.set_title('Optimization function values') Explanation: Plot End of explanation plot_optimization_path_in_landscape('HardwareGridProblem') plot_function_values('HardwareGridProblem') Explanation: Hardware Grid End of explanation plot_optimization_path_in_landscape('SKProblem') plot_function_values('SKProblem') Explanation: SK Model End of explanation plot_optimization_path_in_landscape('ThreeRegularProblem') plot_function_values('ThreeRegularProblem') Explanation: 3 Regular MaxCut End of explanation
3,606
Given the following text description, write Python code to implement the functionality described below step by step Description: Reading and writing an evoked file This script shows how to read and write evoked datasets. Step1: Show result as a butterfly plot
Python Code: # Author: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) from mne import read_evokeds from mne.datasets import sample print(__doc__) data_path = sample.data_path() fname = data_path + '/MEG/sample/sample_audvis-ave.fif' # Reading condition = 'Left Auditory' evoked = read_evokeds(fname, condition=condition, baseline=(None, 0), proj=True) Explanation: Reading and writing an evoked file This script shows how to read and write evoked datasets. End of explanation evoked.plot(exclude=[], time_unit='s') # Show result as a 2D image (x: time, y: channels, color: amplitude) evoked.plot_image(exclude=[], time_unit='s') Explanation: Show result as a butterfly plot: By using exclude=[] bad channels are not excluded and are shown in red End of explanation
3,607
Given the following text description, write Python code to implement the functionality described below step by step Description: Solving the Schrödinger equation on a computer The Schrödinger equation governs the behaviour of physical system on scales where quantum mechanical effects become important. This is a differential equation which belongs to the category of partial differential equations called wave equations. $$\Large{\dot \imath\hslash = \hat H\psi}$$ Step1: Next we generate the lattice for the simulation. For now we consider 200 lattice points. Step2: Now that we've computed the hamiltonian matrix, let's diagonalize it Step3: Let's define the initial state of our system and normalize it now. Step4: Now that we know the wavefunction as a linear combination of basis vectors, the time evolution of the complete wavefuction is just the linear combination of the time evolutions of the basis states. Lets do this for 10 time steps.
Python Code: %pylab inline from IPython.display import HTML Explanation: Solving the Schrödinger equation on a computer The Schrödinger equation governs the behaviour of physical system on scales where quantum mechanical effects become important. This is a differential equation which belongs to the category of partial differential equations called wave equations. $$\Large{\dot \imath\hslash = \hat H\psi}$$ End of explanation # defining our own kronecker delta and hamiltonian as vectorized functions psi = vectorize(lambda x,p: (1/pow(np.pi*(length**2),0.25))*np.exp(-((x-peak)**2)/(2.0*length**2) - 1j*p*x)) kronecker = vectorize(lambda i,j: 1 if i == j else 0) h= vectorize(lambda i,j: (-kronecker(i+1,j) + 2*kronecker(i,j) - kronecker(i-1,j))/delta**2) delta = .1 # The spacing between neighboring lattice points L = 20. # The ends of the lattice length = 1. # spread of the Gaussian wavefunction peak = 0. # Centre of gaussian N = int(L/delta) # half the points of the lattice momentum = 2. # initial momentum of the wavepacket rows,cols = reshape(arange(-N,N+1),(2*N+1,1)),reshape(arange(-N,N+1),(1,2*N+1)) lattice = linspace(-L,L,2*N+1) #The lattice points hamiltonian = h(rows,cols) Explanation: Next we generate the lattice for the simulation. For now we consider 200 lattice points. End of explanation eigenvalues, eigenvectors = linalg.eigh(hamiltonian) index = eigenvalues.argsort() eigenvalues = eigenvalues[index] eigenvectors = eigenvectors[index] plot(lattice,absolute(eigenvectors[0])) Explanation: Now that we've computed the hamiltonian matrix, let's diagonalize it End of explanation wavefunction = psi(lattice,momentum) wavefunction /= sum(absolute(wavefunction))*delta print(sum(absolute(wavefunction))*delta) plot(lattice,absolute(wavefunction)) Explanation: Let's define the initial state of our system and normalize it now. End of explanation def Psi(t): sum = zeros(2*N+1, 'complex') for n in range(2*N+1): c = vdot(wavefunction,eigenvectors[:,n]) # nth expansion coefficient E = eigenvalues[n] sum += c * np.exp(-E*t*1.0j) * eigenvectors[:,n] return sum def Prob(t): return array( [absolute(Psi(t)[i])**2 for i in range(2*N+1)] ) fig = plt.figure() ax = plt.axes(xlim=(-L, L), ylim=(0, 0.15)) line, = ax.plot([], [], lw=2) def init(): line.set_data([], []) return line, def animate(i): line.set_data(lattice,absolute(Psi(0.05*i)**2)) return line, from matplotlib import animation anim = animation.FuncAnimation(fig,animate,init_func=init,frames=40,interval=100,blit=True) HTML(anim.to_html5_video()) Explanation: Now that we know the wavefunction as a linear combination of basis vectors, the time evolution of the complete wavefuction is just the linear combination of the time evolutions of the basis states. Lets do this for 10 time steps. End of explanation
3,608
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow IO Authors. Step1: Load metrics from Prometheus server <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Install and setup CoreDNS and Prometheus For demo purposes, a CoreDNS server locally with port 9053 open to receive DNS queries and port 9153 (defult) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS and is available to download Step3: The next step is to setup Prometheus server and use Prometheus to scrape CoreDNS metrics that are exposed on port 9153 from above. The prometheus.yml file for configuration is also available for download Step4: In order to show some activity, dig command could be used to generate a few DNS queries against the CoreDNS server that has been setup Step5: Now a CoreDNS server whose metrics are scraped by a Prometheus server and ready to be consumed by TensorFlow. Create Dataset for CoreDNS metrics and use it in TensorFlow Create a Dataset for CoreDNS metrics that is available from PostgreSQL server, could be done with tfio.experimental.IODataset.from_prometheus. At the minimium two arguments are needed. query is passed to Prometheus server to select the metrics and length is the period you want to load into Dataset. You can start with "coredns_dns_request_count_total" and "5" (secs) to create the Dataset below. Since earlier in the tutorial two DNS queries were sent, it is expected that the metrics for "coredns_dns_request_count_total" will be "2.0" at the end of the time series Step6: Further looking into the spec of the Dataset Step7: The created Dataset is ready to be passed to tf.keras directly for either training or inference purposes now. Use Dataset for model training With metrics Dataset created, it is possible to directly pass the Dataset to tf.keras for model training or inference. For demo purposes, this tutorial will just use a very simple LSTM model with 1 feature and 2 steps as input Step8: The dataset to be used is the value of 'go_memstats_sys_bytes' for CoreDNS with 10 samples. However, since a sliding window of window=n_steps and shift=1 are formed, additional samples are needed (for any two consecute elements, the first is taken as x and the second is taken as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds. The data value is also scaled to [0, 1].
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow IO Authors. End of explanation import os try: %tensorflow_version 2.x except Exception: pass !pip install tensorflow-io from datetime import datetime import tensorflow as tf import tensorflow_io as tfio Explanation: Load metrics from Prometheus server <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/io/tutorials/prometheus"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Caution: In addition to python packages this notebook uses sudo apt-get install to install third party packages. Overview This tutorial loads CoreDNS metrics from a Prometheus server into a tf.data.Dataset, then uses tf.keras for training and inference. CoreDNS is a DNS server with a focus on service discovery, and is widely deployed as a part of the Kubernetes cluster. For that reason it is often closely monitoring by devops operations. This tutorial is an example that could be used by devops looking for automation in their operations through machine learning. Setup and usage Install required tensorflow-io package, and restart runtime End of explanation !curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz !tar -xzf coredns_1.6.7_linux_amd64.tgz !curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile !cat Corefile # Run `./coredns` as a background process. # IPython doesn't recognize `&` in inline bash cells. get_ipython().system_raw('./coredns &') Explanation: Install and setup CoreDNS and Prometheus For demo purposes, a CoreDNS server locally with port 9053 open to receive DNS queries and port 9153 (defult) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS and is available to download: .:9053 { prometheus whoami } More details about installation could be found on CoreDNS's documentation. 
End of explanation !curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz !tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1 !curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml !cat prometheus.yml # Run `./prometheus` as a background process. # IPython doesn't recognize `&` in inline bash cells. get_ipython().system_raw('./prometheus &') Explanation: The next step is to setup Prometheus server and use Prometheus to scrape CoreDNS metrics that are exposed on port 9153 from above. The prometheus.yml file for configuration is also available for download: End of explanation !sudo apt-get install -y -qq dnsutils !dig @127.0.0.1 -p 9053 demo1.example.org !dig @127.0.0.1 -p 9053 demo2.example.org Explanation: In order to show some activity, dig command could be used to generate a few DNS queries against the CoreDNS server that has been setup: End of explanation dataset = tfio.experimental.IODataset.from_prometheus( "coredns_dns_request_count_total", 5, endpoint="http://localhost:9090") print("Dataset Spec:\n{}\n".format(dataset.element_spec)) print("CoreDNS Time Series:") for (time, value) in dataset: # time is milli second, convert to data time: time = datetime.fromtimestamp(time // 1000) print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total'])) Explanation: Now a CoreDNS server whose metrics are scraped by a Prometheus server and ready to be consumed by TensorFlow. Create Dataset for CoreDNS metrics and use it in TensorFlow Create a Dataset for CoreDNS metrics that is available from PostgreSQL server, could be done with tfio.experimental.IODataset.from_prometheus. At the minimium two arguments are needed. query is passed to Prometheus server to select the metrics and length is the period you want to load into Dataset. You can start with "coredns_dns_request_count_total" and "5" (secs) to create the Dataset below. Since earlier in the tutorial two DNS queries were sent, it is expected that the metrics for "coredns_dns_request_count_total" will be "2.0" at the end of the time series: End of explanation dataset = tfio.experimental.IODataset.from_prometheus( "go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090") print("Time Series CoreDNS/Prometheus Comparision:") for (time, value) in dataset: # time is milli second, convert to data time: time = datetime.fromtimestamp(time // 1000) print("{}: {}/{}".format( time, value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'], value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes'])) Explanation: Further looking into the spec of the Dataset: ``` ( TensorSpec(shape=(), dtype=tf.int64, name=None), { 'coredns': { 'localhost:9153': { 'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None) } } } ) ``` It is obvious that the dataset consists of a (time, values) tuple where the values field is a python dict expanded into: "job_name": { "instance_name": { "metric_name": value, }, } In the above example, 'coredns' is the job name, 'localhost:9153' is the instance name, and 'coredns_dns_request_count_total' is the metric name. Note that depending on the Prometheus query used, it is possible that multiple jobs/instances/metrics could be returned. This is also the reason why python dict has been used in the structure of the Dataset. Take another query "go_memstats_gc_sys_bytes" as an example. 
Since both CoreDNS and Prometheus are written in Golang, "go_memstats_gc_sys_bytes" metric is available for both "coredns" job and "prometheus" job: Note: This cell may error out the first time you run it. Run it again and it will pass . End of explanation n_steps, n_features = 2, 1 simple_lstm_model = tf.keras.models.Sequential([ tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)), tf.keras.layers.Dense(1) ]) simple_lstm_model.compile(optimizer='adam', loss='mae') Explanation: The created Dataset is ready to be passed to tf.keras directly for either training or inference purposes now. Use Dataset for model training With metrics Dataset created, it is possible to directly pass the Dataset to tf.keras for model training or inference. For demo purposes, this tutorial will just use a very simple LSTM model with 1 feature and 2 steps as input: End of explanation n_samples = 10 dataset = tfio.experimental.IODataset.from_prometheus( "go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090") # take go_memstats_gc_sys_bytes from coredns job dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes']) # find the max value and scale the value to [0, 1] v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum) dataset = dataset.map(lambda v: (v / v_max)) # expand the dimension by 1 to fit n_features=1 dataset = dataset.map(lambda v: tf.expand_dims(v, -1)) # take a sliding window dataset = dataset.window(n_steps, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda d: d.batch(n_steps)) # the first value is x and the next value is y, only take 10 samples x = dataset.take(n_samples) y = dataset.skip(1).take(n_samples) dataset = tf.data.Dataset.zip((x, y)) # pass the final dataset to model.fit for training simple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10) Explanation: The dataset to be used is the value of 'go_memstats_sys_bytes' for CoreDNS with 10 samples. However, since a sliding window of window=n_steps and shift=1 are formed, additional samples are needed (for any two consecute elements, the first is taken as x and the second is taken as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds. The data value is also scaled to [0, 1]. End of explanation
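A toy sketch of the window/shift/zip pattern described above, run on a plain integer range so the x/y pairing (each element's target is the next window) is easier to see. Purely illustrative; the real pipeline above operates on the Prometheus metric values.

```python
import tensorflow as tf

seq = tf.data.Dataset.range(12)                         # stand-in for the metric values
windows = seq.window(2, shift=1, drop_remainder=True)   # n_steps = 2
windows = windows.flat_map(lambda w: w.batch(2))
x = windows.take(10)                                    # [0 1], [1 2], ...
y = windows.skip(1).take(10)                            # [1 2], [2 3], ...
for a, b in tf.data.Dataset.zip((x, y)).take(3):
    print(a.numpy(), "->", b.numpy())
```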
3,609
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Image classification with Model Garden <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Import TensorFlow, TensorFlow Datasets, and a few helper libraries. Step3: The tensorflow_models package contains the ResNet vision model, and the official.vision.serving model contains the function to save and export the tuned model. Step4: Configure the ResNet-18 model for the Cifar-10 dataset The CIFAR10 dataset contains 60,000 color images in mutually exclusive 10 classes, with 6,000 images in each class. In Model Garden, the collections of parameters that define a model are called configs. Model Garden can create a config based on a known set of parameters via a factory. Use the resnet_imagenet factory configuration, as defined by tfm.vision.configs.image_classification.image_classification_imagenet. The configuration is set up to train ResNet to converge on ImageNet. Step5: Adjust the model and dataset configurations so that it works with Cifar-10 (cifar10). Step6: Adjust the trainer configuration. Step7: Print the modified configuration. Step8: Set up the distribution strategy. Step9: Create the Task object (tfm.core.base_task.Task) from the config_definitions.TaskConfig. The Task object has all the methods necessary for building the dataset, building the model, and running training & evaluation. These methods are driven by tfm.core.train_lib.run_experiment. Step10: Visualize the training data The dataloader applies a z-score normalization using preprocess_ops.normalize_image(image, offset=MEAN_RGB, scale=STDDEV_RGB), so the images returned by the dataset can't be directly displayed by standard tools. The visualization code needs to rescale the data into the [0,1] range. Step11: Use ds_info (which is an instance of tfds.core.DatasetInfo) to lookup the text descriptions of each class ID. Step12: Visualize a batch of the data. Step13: Visualize the testing data Visualize a batch of images from the validation dataset. Step14: Train and evaluate Step15: Print the accuracy, top_5_accuracy, and validation_loss evaluation metrics. Step16: Run a batch of the processed training data through the model, and view the results Step17: Export a SavedModel The keras.Model object returned by train_lib.run_experiment expects the data to be normalized by the dataset loader using the same mean and variance statiscics in preprocess_ops.normalize_image(image, offset=MEAN_RGB, scale=STDDEV_RGB). This export function handles those details, so you can pass tf.uint8 images and get the correct results. Step18: Test the exported model. Step19: Visualize the predictions.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip uninstall -y opencv-python !pip install -U -q "tensorflow>=2.9.0" "tf-models-official" Explanation: Image classification with Model Garden <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/classification_with_model_garden"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification_with_model_garden.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/classification_with_model_garden.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/classification_with_model_garden.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial fine-tunes a Residual Network (ResNet) from the TensorFlow Model Garden package (tensorflow-models) to classify images in the CIFAR dataset. Model Garden contains a collection of state-of-the-art vision models, implemented with TensorFlow's high-level APIs. The implementations demonstrate the best practices for modeling, letting users to take full advantage of TensorFlow for their research and product development. This tutorial uses a ResNet model, a state-of-the-art image classifier. This tutorial uses the ResNet-18 model, a convolutional neural network with 18 layers. This tutorial demonstrates how to: 1. Use models from the TensorFlow Models package. 2. Fine-tune a pre-built ResNet for image classification. 3. Export the tuned ResNet model. Setup Install and import the necessary modules. This tutorial uses the tf-models-nightly version of Model Garden. Note: Upgrading TensorFlow to 2.9 in Colab breaks GPU support, so this colab is set to run on CPU until the Colab runtimes are updated. End of explanation import pprint import tempfile from IPython import display import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds Explanation: Import TensorFlow, TensorFlow Datasets, and a few helper libraries. End of explanation import tensorflow_models as tfm # These are not in the tfm public API for v2.9. They will be available in v2.10 from official.vision.serving import export_saved_model_lib import official.core.train_lib Explanation: The tensorflow_models package contains the ResNet vision model, and the official.vision.serving model contains the function to save and export the tuned model. 
End of explanation exp_config = tfm.core.exp_factory.get_exp_config('resnet_imagenet') tfds_name = 'cifar10' ds_info = tfds.builder(tfds_name ).info ds_info Explanation: Configure the ResNet-18 model for the Cifar-10 dataset The CIFAR10 dataset contains 60,000 color images in mutually exclusive 10 classes, with 6,000 images in each class. In Model Garden, the collections of parameters that define a model are called configs. Model Garden can create a config based on a known set of parameters via a factory. Use the resnet_imagenet factory configuration, as defined by tfm.vision.configs.image_classification.image_classification_imagenet. The configuration is set up to train ResNet to converge on ImageNet. End of explanation # Configure model exp_config.task.model.num_classes = 10 exp_config.task.model.input_size = list(ds_info.features["image"].shape) exp_config.task.model.backbone.resnet.model_id = 18 # Configure training and testing data batch_size = 128 exp_config.task.train_data.input_path = '' exp_config.task.train_data.tfds_name = tfds_name exp_config.task.train_data.tfds_split = 'train' exp_config.task.train_data.global_batch_size = batch_size exp_config.task.validation_data.input_path = '' exp_config.task.validation_data.tfds_name = tfds_name exp_config.task.validation_data.tfds_split = 'test' exp_config.task.validation_data.global_batch_size = batch_size Explanation: Adjust the model and dataset configurations so that it works with Cifar-10 (cifar10). End of explanation logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()] if 'GPU' in ''.join(logical_device_names): print('This may be broken in Colab.') device = 'GPU' elif 'TPU' in ''.join(logical_device_names): print('This may be broken in Colab.') device = 'TPU' else: print('Running on CPU is slow, so only train for a few steps.') device = 'CPU' if device=='CPU': train_steps = 20 exp_config.trainer.steps_per_loop = 5 else: train_steps=5000 exp_config.trainer.steps_per_loop = 100 exp_config.trainer.summary_interval = 100 exp_config.trainer.checkpoint_interval = train_steps exp_config.trainer.validation_interval = 1000 exp_config.trainer.validation_steps = ds_info.splits['test'].num_examples // batch_size exp_config.trainer.train_steps = train_steps exp_config.trainer.optimizer_config.learning_rate.type = 'cosine' exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1 exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100 Explanation: Adjust the trainer configuration. End of explanation pprint.pprint(exp_config.as_dict()) display.Javascript("google.colab.output.setIframeHeight('300px');") Explanation: Print the modified configuration. 
End of explanation logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()] if exp_config.runtime.mixed_precision_dtype == tf.float16: tf.keras.mixed_precision.set_global_policy('mixed_float16') if 'GPU' in ''.join(logical_device_names): distribution_strategy = tf.distribute.MirroredStrategy() elif 'TPU' in ''.join(logical_device_names): tf.tpu.experimental.initialize_tpu_system() tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0') distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu) else: print('Warning: this will be really slow.') distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0]) Explanation: Set up the distribution strategy. End of explanation with distribution_strategy.scope(): model_dir = tempfile.mkdtemp() task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir) tf.keras.utils.plot_model(task.build_model(), show_shapes=True) for images, labels in task.build_inputs(exp_config.task.train_data).take(1): print() print(f'images.shape: {str(images.shape):16} images.dtype: {images.dtype!r}') print(f'labels.shape: {str(labels.shape):16} labels.dtype: {labels.dtype!r}') Explanation: Create the Task object (tfm.core.base_task.Task) from the config_definitions.TaskConfig. The Task object has all the methods necessary for building the dataset, building the model, and running training & evaluation. These methods are driven by tfm.core.train_lib.run_experiment. End of explanation plt.hist(images.numpy().flatten()); Explanation: Visualize the training data The dataloader applies a z-score normalization using preprocess_ops.normalize_image(image, offset=MEAN_RGB, scale=STDDEV_RGB), so the images returned by the dataset can't be directly displayed by standard tools. The visualization code needs to rescale the data into the [0,1] range. End of explanation label_info = ds_info.features['label'] label_info.int2str(1) Explanation: Use ds_info (which is an instance of tfds.core.DatasetInfo) to lookup the text descriptions of each class ID. End of explanation def show_batch(images, labels, predictions=None): plt.figure(figsize=(10, 10)) min = images.numpy().min() max = images.numpy().max() delta = max - min for i in range(12): plt.subplot(6, 6, i + 1) plt.imshow((images[i]-min) / delta) if predictions is None: plt.title(label_info.int2str(labels[i])) else: if labels[i] == predictions[i]: color = 'g' else: color = 'r' plt.title(label_info.int2str(predictions[i]), color=color) plt.axis("off") plt.figure(figsize=(10, 10)) for images, labels in task.build_inputs(exp_config.task.train_data).take(1): show_batch(images, labels) Explanation: Visualize a batch of the data. End of explanation plt.figure(figsize=(10, 10)); for images, labels in task.build_inputs(exp_config.task.validation_data).take(1): show_batch(images, labels) Explanation: Visualize the testing data Visualize a batch of images from the validation dataset. End of explanation model, eval_logs = tfm.core.train_lib.run_experiment( distribution_strategy=distribution_strategy, task=task, mode='train_and_eval', params=exp_config, model_dir=model_dir, run_post_eval=True) tf.keras.utils.plot_model(model, show_shapes=True) Explanation: Train and evaluate End of explanation for key, value in eval_logs.items(): print(f'{key:20}: {value.numpy():.3f}') Explanation: Print the accuracy, top_5_accuracy, and validation_loss evaluation metrics. 
End of explanation for images, labels in task.build_inputs(exp_config.task.train_data).take(1): predictions = model.predict(images) predictions = tf.argmax(predictions, axis=-1) show_batch(images, labels, tf.cast(predictions, tf.int32)) if device=='CPU': plt.suptitle('The model was only trained for a few steps, it is not expected to do well.') Explanation: Run a batch of the processed training data through the model, and view the results End of explanation # Saving and exporting the trained model export_saved_model_lib.export_inference_graph( input_type='image_tensor', batch_size=1, input_image_size=[32, 32], params=exp_config, checkpoint_path=tf.train.latest_checkpoint(model_dir), export_dir='./export/') Explanation: Export a SavedModel The keras.Model object returned by train_lib.run_experiment expects the data to be normalized by the dataset loader using the same mean and variance statiscics in preprocess_ops.normalize_image(image, offset=MEAN_RGB, scale=STDDEV_RGB). This export function handles those details, so you can pass tf.uint8 images and get the correct results. End of explanation # Importing SavedModel imported = tf.saved_model.load('./export/') model_fn = imported.signatures['serving_default'] Explanation: Test the exported model. End of explanation plt.figure(figsize=(10, 10)) for data in tfds.load('cifar10', split='test').batch(12).take(1): predictions = [] for image in data['image']: index = tf.argmax(model_fn(image[tf.newaxis, ...])['logits'], axis=1)[0] predictions.append(index) show_batch(data['image'], data['label'], predictions) if device=='CPU': plt.suptitle('The model was only trained for a few steps, it is not expected to do better than random.') Explanation: Visualize the predictions. End of explanation
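An optional sanity check on the exported signature before running batch inference. The attribute names below are standard for TF2 concrete functions, but the exact output keys ('logits', 'probs', ...) depend on the export configuration, so treat the expected values in the comments as assumptions.

```python
import tensorflow as tf

imported = tf.saved_model.load('./export/')
infer = imported.signatures['serving_default']
# Inspect what the serving signature expects and returns.
print(infer.structured_input_signature)   # expected: a uint8 image tensor of shape [1, 32, 32, 3]
print(infer.structured_outputs)           # expected: a dict containing 'logits'
```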
3,610
Given the following text description, write Python code to implement the functionality described below step by step Description: Estimating School Tour Scheduling This notebook illustrates how to re-estimate the mandatory tour scheduling component for ActivitySim. This process includes running ActivitySim in estimation mode to read household travel survey files and write out the estimation data bundles used in this notebook. To review how to do so, please visit the other notebooks in this directory. Load libraries Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles. Step2: Load data and prep model for estimation Step3: Review data loaded from the EDB The next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data. Coefficients Step4: Utility specification Step5: Chooser data Step6: Alternatives data Step7: Estimate With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters. Step8: Estimated coefficients Step9: Output Estimation Results Step10: Write the model estimation report, including coefficient t-statistic and log likelihood Step11: Next Steps The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
Python Code: import os import larch # !conda install larch -c conda-forge # for estimation import pandas as pd Explanation: Estimating School Tour Scheduling This notebook illustrates how to re-estimate the mandatory tour scheduling component for ActivitySim. This process includes running ActivitySim in estimation mode to read household travel survey files and write out the estimation data bundles used in this notebook. To review how to do so, please visit the other notebooks in this directory. Load libraries End of explanation os.chdir('test') Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles. End of explanation modelname = "mandatory_tour_scheduling_school" from activitysim.estimation.larch import component_model model, data = component_model(modelname, return_data=True) Explanation: Load data and prep model for estimation End of explanation data.coefficients Explanation: Review data loaded from the EDB The next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data. Coefficients End of explanation data.spec Explanation: Utility specification End of explanation data.chooser_data Explanation: Chooser data End of explanation data.alt_values Explanation: Alternatives data End of explanation model.estimate() Explanation: Estimate With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters. End of explanation model.parameter_summary() Explanation: Estimated coefficients End of explanation from activitysim.estimation.larch import update_coefficients result_dir = data.edb_directory/"estimated" update_coefficients( model, data, result_dir, output_file=f"{modelname}_coefficients_revised.csv", ); Explanation: Output Estimation Results End of explanation model.to_xlsx( result_dir/f"{modelname}_model_estimation.xlsx", data_statistics=False, ) Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood End of explanation pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv") Explanation: Next Steps The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. End of explanation
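One way to script the "copy and rename" step described in Next Steps, reusing result_dir and modelname from above. The configs/ destination path is hypothetical and should be adjusted to the actual ActivitySim setup.

```python
import shutil

src = result_dir / f"{modelname}_coefficients_revised.csv"   # written by update_coefficients above
dst = f"configs/{modelname}_coefficients.csv"                # hypothetical target path
shutil.copyfile(src, dst)
```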
3,611
Given the following text description, write Python code to implement the functionality described below step by step Description: 4T_Pandas Basic (4) - 파일 입출력 ( csv, excel, sql ) Step1: CSV(Comma Seperated Value) => 각각의 데이터가 ","를 기준으로 나뉜 데이터 예를 들어 김기표 | 29 | 분석가 // sep="|" 이거였어 // 이렇게 하면 ,가 |이걸로 바뀌게 된다. Step2: 데이터 분석을 사용할 때 2가지 양식이 있다 csv(엑셀), XML, JSON == 데이터베이스 Pickle(데이터 분석에서 상당히 중요하다) => 파이썬의 객체 그대로 저장할 수 있다. 파이썬 코드를 그대로 저장 즉, 클래스나 함수를 바이너리 형태로 저장해서 언제든 쓸 수 있도록
Python Code: -실제 엑셀 파일 데이터를 바탕으로 위의 것들을 다시 한 번 실습 -국가별 파일 입출력했음 번외로 수학계산을 해 볼 것이다. max, mean, min, sum df = pd.DataFrame([{"Name": "KiPyo Kim", "Age": 29}, {"Name": "KiDong Kim", "Age": 33}]) df # 옵션에 대해서만 알아가자 df.to_csv("fastcampus.csv") df.to_csv("fastcampus.csv", index=False) df.to_csv("fastcampus.csv", index=False, header=False) Explanation: 4T_Pandas Basic (4) - 파일 입출력 ( csv, excel, sql ) End of explanation df.to_csv("fastcampus.csv", index=False, header=False, sep="|") Explanation: CSV(Comma Seperated Value) => 각각의 데이터가 ","를 기준으로 나뉜 데이터 예를 들어 김기표 | 29 | 분석가 // sep="|" 이거였어 // 이렇게 하면 ,가 |이걸로 바뀌게 된다. End of explanation df = pd.read_csv # 이렇게 간단하다 # 엑셀 파일을 일괄적으로 csv 형태로 바꿔주는 프로그래밍 => Pandas로 하면 금방 한다 # read_excel().to_csv 이런 식으로 하면. 해보자 pd.read_csv("fastcampus.csv") pd.read_csv("fastcampus.csv", header=None, sep="|") df = pd.read_csv("fastcampus.csv", header=None, sep="|") df.rename(columns={0: "Age", 1: "Name"}, inplace=True) df Explanation: 데이터 분석을 사용할 때 2가지 양식이 있다 csv(엑셀), XML, JSON == 데이터베이스 Pickle(데이터 분석에서 상당히 중요하다) => 파이썬의 객체 그대로 저장할 수 있다. 파이썬 코드를 그대로 저장 즉, 클래스나 함수를 바이너리 형태로 저장해서 언제든 쓸 수 있도록 End of explanation
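The comments above mention batch-converting Excel files to CSV with read_excel().to_csv; a minimal sketch of that idea is below. It assumes the .xlsx files sit in the working directory and that an Excel engine such as openpyxl is installed.

```python
import glob
import pandas as pd

# Convert every Excel file in the current directory to a CSV with the same base name.
for path in glob.glob("*.xlsx"):
    pd.read_excel(path).to_csv(path.replace(".xlsx", ".csv"), index=False)
```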
3,612
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Experimenting with different models </h1> In this notebook, we try out different ideas. The first thing we have to do is to create a validation set, so that we are not doing experimentation with our independent test dataset. Step1: <h2> Read dataset </h2> Step2: <h2> Create separate training and validation data </h2> Step4: <h2> Logistic regression </h2> Step5: <h2> Evaluate model on the heldout data </h2>
Python Code: BUCKET='cs358-bucket' import os os.environ['BUCKET'] = BUCKET from __future__ import print_function from pyspark.mllib.classification import LogisticRegressionWithLBFGS from pyspark.mllib.regression import LabeledPoint from pyspark.sql.types import StringType, FloatType, StructType, StructField # Create spark session from __future__ import print_function from pyspark.sql import SparkSession from pyspark import SparkContext sc = SparkContext('local', 'experimentation') spark = SparkSession \ .builder \ .appName("experimentation w/ Spark ML") \ .getOrCreate() print(spark) print(sc) Explanation: <h1> Experimenting with different models </h1> In this notebook, we try out different ideas. The first thing we have to do is to create a validation set, so that we are not doing experimentation with our independent test dataset. End of explanation traindays = spark.read \ .option("header", "true") \ .csv('gs://{}/flights/trainday.csv'.format(BUCKET)) traindays.createOrReplaceTempView('traindays') header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME' def get_structfield(colname): if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']: return StructField(colname, FloatType(), True) else: return StructField(colname, StringType(), True) schema = StructType([get_structfield(colname) for colname in header.split(',')]) inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*' # 1/30th #inputs = 'gs://{}/flights/tzcorr/all_flights-*' # FULL flights = spark.read\ .schema(schema)\ .csv(inputs.format(BUCKET)) # this view can now be queried ... 
flights.createOrReplaceTempView('flights') Explanation: <h2> Read dataset </h2> End of explanation from pyspark.sql.functions import rand SEED = 13 traindays = traindays.withColumn("holdout", rand(SEED) > 0.8) # 80% of data is for training traindays.createOrReplaceTempView('traindays') traindays.head(10) Explanation: <h2> Create separate training and validation data </h2> End of explanation trainquery = SELECT * FROM flights f JOIN traindays t ON f.FL_DATE == t.FL_DATE WHERE t.is_train_day == 'True' AND t.holdout == False AND f.CANCELLED == '0.00' AND f.DIVERTED == '0.00' traindata = spark.sql(trainquery) traindata.head() def to_example(fields): return LabeledPoint(\ float(fields['ARR_DELAY'] < 15), #ontime \ [ \ fields['DEP_DELAY'], # DEP_DELAY \ fields['TAXI_OUT'], # TAXI_OUT \ fields['DISTANCE'], # DISTANCE \ ]) examples = traindata.rdd.map(to_example) lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True) print(lrmodel.weights,lrmodel.intercept) lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7 Explanation: <h2> Logistic regression </h2> End of explanation evalquery = trainquery.replace("t.holdout == False","t.holdout == True") print(evalquery) evaldata = spark.sql(evalquery) examples = evaldata.rdd.map(to_example) def eval(labelpred): ''' data = (label, pred) data[0] = label data[1] = pred ''' cancel = labelpred.filter(lambda data: data[1] < 0.7) nocancel = labelpred.filter(lambda data: data[1] >= 0.7) corr_cancel = cancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count() corr_nocancel = nocancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count() cancel_denom = cancel.count() nocancel_denom = nocancel.count() if cancel_denom == 0: cancel_denom = 1 if nocancel_denom == 0: nocancel_denom = 1 return {'total_cancel': cancel.count(), \ 'correct_cancel': float(corr_cancel)/cancel_denom, \ 'total_noncancel': nocancel.count(), \ 'correct_noncancel': float(corr_nocancel)/nocancel_denom \ } labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features))) print(eval(labelpred)) Explanation: <h2> Evaluate model on the heldout data </h2> End of explanation
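A small optional sketch that sweeps a few candidate thresholds on the heldout examples instead of fixing 0.7 up front. It reuses lrmodel and examples from above; clearThreshold() makes predict() return probabilities rather than 0/1 labels, and the threshold is restored afterwards.

```python
lrmodel.clearThreshold()   # predict() now returns P(ontime) instead of a hard label
scored = examples.map(lambda p: (p.label, lrmodel.predict(p.features))).cache()
total = scored.count()
for threshold in [0.5, 0.6, 0.7, 0.8]:
    correct = scored.filter(lambda lp: lp[0] == float(lp[1] >= threshold)).count()
    print(threshold, correct / total)
lrmodel.setThreshold(0.7)  # restore the decision rule used above
```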
3,613
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below Step9: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token Step11: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step13: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step15: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below Step18: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders Step21: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) Step24: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. Step27: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) Step30: Build the Neural Network Apply the functions you implemented above to Step33: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements Step35: Neural Network Training Hyperparameters Tune the following parameters Step37: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step41: Save Parameters Save seq_length and save_dir for generating a new TV script. 
Step43: Checkpoint Step46: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names Step49: Choose Word Implement the pick_word() function to select the next word using probabilities. Step51: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper data_dir = './data/divina_commedia.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data #text = text[81:] Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation import numpy as np import problem_unittests as tests def create_lookup_tables(text): Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) words_ordered = sorted(set(text)) # TODO: Implement Function vocab_to_int = {word: index for index, word in enumerate(words_ordered)} int_to_vocab = {index: word for index, word in enumerate(words_ordered)} return vocab_to_int, int_to_vocab DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_create_lookup_tables(create_lookup_tables) Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation def token_lookup(): Generate a dict to turn punctuation into a token. 
:return: Tokenize dictionary where the key is the punctuation and the value is the token # TODO: Implement Function token_dict = dict() token_dict['.'] = "||Period||" token_dict[','] = "||Comma||" token_dict['"'] = "||Quotation_Mark||" token_dict[';'] = "||Semicolon||" token_dict['!'] = "||Exclamation_Mark||" token_dict['?'] = "||Question_Mark||" token_dict['('] = "||Left_Parentheses||" token_dict[')'] = "||Right_Parentheses||" token_dict['--'] = "||Dash||" token_dict['\n'] = "||Return||" return token_dict DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_tokenize(token_lookup) Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation def get_inputs(): Create TF Placeholders for input, targets, and learning rate. 
:return: Tuple (input, targets, learning rate) # TODO: Implement Function inputs = tf.placeholder(tf.int32, shape=(None, None), name="input") targets = tf.placeholder(tf.int32, shape=(None,None), name="targets") learning_rate = tf.placeholder(tf.float32, name="learning_rate") return inputs, targets, learning_rate DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_inputs(get_inputs) Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation def get_init_cell(batch_size, rnn_size): Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) # TODO: Implement Function lstm_layers = 2 #Need to pass test?! (otherwise final_state shape will be wrong) lstm = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers) initial_state = cell.zero_state(batch_size, tf.float32) # print(initial_state) initial_state = tf.identity(initial_state, name="initial_state") return cell, initial_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_init_cell(get_init_cell) Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation def get_embed(input_data, vocab_size, embed_dim): Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. # TODO: Implement Function embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1)) embed = tf.nn.embedding_lookup(embeddings, ids=input_data) return embed DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_embed(get_embed) Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation def build_rnn(cell, inputs): Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) # TODO: Implement Function #print(cell) #print(inputs) outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name="final_state") # Shape is lstm_layers x 2 (inputs and targets) x None (batch_size) x lstm_units #print(final_state) return outputs, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_rnn(build_rnn) Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. 
- Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) # TODO: Implement Function embed = get_embed(input_data, vocab_size, embed_dim=embed_dim) # outputs shape is batch_size x seq_len x lstm_units outputs, final_state = build_rnn(cell, inputs=embed) #print(outputs.shape) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None) # logits shape is batch_size x seq_len x vocab_size #print(logits.shape) return logits, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_nn(build_nn) Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState) End of explanation def get_batches(int_text, batch_size, seq_length): Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array #print("Batch_size: " + str(batch_size)) #print("Seq length: " + str(seq_length)) # Consider that targets is shifted by 1 num_batches = len(int_text)//(batch_size * seq_length + 1) #print("Num batches: " + str(num_batches)) #print("Text length: " + str(len(int_text))) batches = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int32) #print(batches.shape) # TODO: Add a smarter check for batch_index in range(0, num_batches): for in_batch_index in range(0, batch_size): start_x = (batch_index * seq_length) + (seq_length * num_batches * in_batch_index) start_y = start_x + 1 x = int_text[start_x : start_x + seq_length] y = int_text[start_y : start_y + seq_length] #print("batch_index: " + str(batch_index)) #print("in_batch_index: " + str(in_batch_index)) #print("start_x: " + str(start_x)) #print(x) batches[batch_index][0][in_batch_index] = np.asarray(x) batches[batch_index][1][in_batch_index] = np.asarray(y) #print(batches) return batches DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_batches(get_batches) Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. 
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ``` End of explanation # FINAL LOSS: 0.213 - Seq length 20, LR 0.001, Epochs 250 # Number of Epochs num_epochs = 250 # Batch Size batch_size = 64 # RNN Size rnn_size = 256 # Embedding Dimension Size embed_dim = 300 # Sequence Length seq_length = 20 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 99 DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE save_dir = './save' Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words # probs shape is batch_size x seq_len x vocab_size probs = tf.nn.softmax(logits, name='probs') #print(probs.shape) # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): # x and y shapes are batch_size x seq_len feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() Explanation: Checkpoint End of explanation def get_tensors(loaded_graph): Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) # TODO: Implement Function return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), \ loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0") DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_tensors(get_tensors) Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation def pick_word(probabilities, int_to_vocab): Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word # TODO: Implement Function return int_to_vocab[np.argmax(probabilities)] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_pick_word(pick_word) Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. End of explanation #print(vocab_to_int) gen_length = 200 #prime_word = 'Inferno: Canto I' prime_word = 'vuolsi' prime_word = str.lower(prime_word) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = prime_word.split() prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation
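A hedged alternative to the argmax-based pick_word above: sampling the next word from the softmax distribution usually produces less repetitive scripts. This is an optional variant, not part of the original project code; it reuses the probabilities and int_to_vocab arguments as defined above.

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Draw the next word id according to its predicted probability.
    p = np.asarray(probabilities, dtype=np.float64)
    p = p / p.sum()                     # guard against float32 rounding in the softmax
    idx = np.random.choice(len(p), p=p)
    return int_to_vocab[idx]
```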
3,614
Given the following text description, write Python code to implement the functionality described below step by step Description: Going to train on 50,000,000 molecules from GDB-17 May later try scraping for all molecules w/ positive charge Step1: only N+ contain positive charges in this dataset Step2: We may want to remove cations with more than 25 heavy atoms Step3: so some keras version stuff. 1.0 uses keras.losses to store its loss functions. 2.0 uses objectives. we'll just have to be consistent Step4: Here I've adapted the exact architecture used in the paper Step5: encoded_input looks like a dummy layer here Step6: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model) Step7: we compile and fit Step8: Alright. So now I'm going to loop through our 276 cations, sample 100x from the decoder based on these representations, and see how many sanitize with the RDKit Also will for now remove cations with new elements Step9: so we had to remove 25 cations
Python Code: import matplotlib.pylab as plt import numpy as np import seaborn as sns; sns.set() %matplotlib inline import keras from keras.models import Sequential, Model from keras.layers import Dense from keras.optimizers import Adam import salty from numpy import array from numpy import argmax from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder import numpy as np from sklearn.model_selection import train_test_split from random import shuffle import pandas as pd df = pd.read_csv('../../../../../../../GDB17.50000000', names=['smiles']) Explanation: Going to train on 50,000,000 molecules from GDB-17 May later try scraping for all molecules w/ positive charge End of explanation df = df[df['smiles'].str.contains("N+", regex=False)] values = df['smiles'] print(values.shape) smile_max_length = values.map(len).max() print(smile_max_length) Explanation: only N+ contain positive charges in this dataset End of explanation plt.hist(values.map(len)) def pad_smiles(smiles_string, smile_max_length): if len(smiles_string) < smile_max_length: return smiles_string + " " * (smile_max_length - len(smiles_string)) padded_smiles = [pad_smiles(i, smile_max_length) for i in values if pad_smiles(i, smile_max_length)] shuffle(padded_smiles) def create_char_list(char_set, smile_series): for smile in smile_series: char_set.update(set(smile)) return char_set char_set = set() char_set = create_char_list(char_set, padded_smiles) print(len(char_set)) char_set char_list = list(char_set) chars_in_dict = len(char_list) char_to_index = dict((c, i) for i, c in enumerate(char_list)) index_to_char = dict((i, c) for i, c in enumerate(char_list)) char_to_index X_train = np.zeros((len(padded_smiles), smile_max_length, chars_in_dict), dtype=np.float32) X_train.shape for i, smile in enumerate(padded_smiles): for j, char in enumerate(smile): X_train[i, j, char_to_index[char]] = 1 X_train, X_test = train_test_split(X_train, test_size=0.33, random_state=42) X_train.shape # need to build RNN to encode. some issues include what the 'embedded dimension' is (vector length of embedded sequence) Explanation: We may want to remove cations with more than 25 heavy atoms End of explanation from keras import backend as K from keras.objectives import binary_crossentropy #objs or losses from keras.models import Model from keras.layers import Input, Dense, Lambda from keras.layers.core import Dense, Activation, Flatten, RepeatVector from keras.layers.wrappers import TimeDistributed from keras.layers.recurrent import GRU from keras.layers.convolutional import Convolution1D Explanation: so some keras version stuff. 1.0 uses keras.losses to store its loss functions. 2.0 uses objectives. 
we'll just have to be consistent End of explanation def Encoder(x, latent_rep_size, smile_max_length, epsilon_std = 0.01): h = Convolution1D(9, 9, activation = 'relu', name='conv_1')(x) h = Convolution1D(9, 9, activation = 'relu', name='conv_2')(h) h = Convolution1D(10, 11, activation = 'relu', name='conv_3')(h) h = Flatten(name = 'flatten_1')(h) h = Dense(435, activation = 'relu', name = 'dense_1')(h) def sampling(args): z_mean_, z_log_var_ = args batch_size = K.shape(z_mean_)[0] epsilon = K.random_normal(shape=(batch_size, latent_rep_size), mean=0., stddev = epsilon_std) return z_mean_ + K.exp(z_log_var_ / 2) * epsilon z_mean = Dense(latent_rep_size, name='z_mean', activation = 'linear')(h) z_log_var = Dense(latent_rep_size, name='z_log_var', activation = 'linear')(h) def vae_loss(x, x_decoded_mean): x = K.flatten(x) x_decoded_mean = K.flatten(x_decoded_mean) xent_loss = smile_max_length * binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - \ K.exp(z_log_var), axis = -1) return xent_loss + kl_loss return (vae_loss, Lambda(sampling, output_shape=(latent_rep_size,), name='lambda')([z_mean, z_log_var])) def Decoder(z, latent_rep_size, smile_max_length, charset_length): h = Dense(latent_rep_size, name='latent_input', activation = 'relu')(z) h = RepeatVector(smile_max_length, name='repeat_vector')(h) h = GRU(501, return_sequences = True, name='gru_1')(h) h = GRU(501, return_sequences = True, name='gru_2')(h) h = GRU(501, return_sequences = True, name='gru_3')(h) return TimeDistributed(Dense(charset_length, activation='softmax'), name='decoded_mean')(h) x = Input(shape=(smile_max_length, len(char_set))) _, z = Encoder(x, latent_rep_size=292, smile_max_length=smile_max_length) encoder = Model(x, z) Explanation: Here I've adapted the exact architecture used in the paper End of explanation encoded_input = Input(shape=(292,)) decoder = Model(encoded_input, Decoder(encoded_input, latent_rep_size=292, smile_max_length=smile_max_length, charset_length=len(char_set))) Explanation: encoded_input looks like a dummy layer here: End of explanation x1 = Input(shape=(smile_max_length, len(char_set)), name='input_1') vae_loss, z1 = Encoder(x1, latent_rep_size=292, smile_max_length=smile_max_length) autoencoder = Model(x1, Decoder(z1, latent_rep_size=292, smile_max_length=smile_max_length, charset_length=len(char_set))) Explanation: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model) End of explanation autoencoder.compile(optimizer='Adam', loss=vae_loss, metrics =['accuracy']) autoencoder.fit(X_train, X_train, shuffle = True, validation_data=(X_test, X_test)) def sample(a, temperature=1.0): # helper function to sample an index from a probability array # a = np.log(a) / temperature # a = np.exp(a) / np.sum(np.exp(a)) # return np.argmax(np.random.multinomial(1, a, 1)) # work around from https://github.com/llSourcell/How-to-Generate-Music-Demo/issues/4 a = np.log(a) / temperature dist = np.exp(a)/np.sum(np.exp(a)) choices = range(len(a)) return np.random.choice(choices, p=dist) values[393977] test_smi = values[393977] test_smi = pad_smiles(test_smi, smile_max_length) Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool) for t, char in enumerate(test_smi): Z[0, t, char_to_index[char]] = 1 string = "" for i in autoencoder.predict(Z): for j in i: index = sample(j) string += index_to_char[index] print("\n callback guess: " + string) properties = ['density', 
'cpt', 'viscosity', 'thermal_conductivity', 'melting_point'] props = properties devmodel = salty.aggregate_data(props, merge='Union') devmodel.Data['smiles_string'] = devmodel.Data['smiles-cation'] cations = devmodel.Data['smiles_string'].drop_duplicates() print(cations.shape) cations = cations.reset_index(drop=True) test_smi = cations[100] test_smi = pad_smiles(test_smi, smile_max_length) Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool) for t, char in enumerate(test_smi): Z[0, t, char_to_index[char]] = 1 test_smi Z.shape string = "" for i in autoencoder.predict(Z): for j in i: index = sample(j) string += index_to_char[index] print("\n callback guess: " + string) Explanation: we compile and fit End of explanation cations_with_proper_chars = [] for smi in cations: if set(smi).issubset(char_list): cations_with_proper_chars.append(smi) len(cations_with_proper_chars) Explanation: Alright. So now I'm going to loop through our 276 cations, sample 100x from the decoder based on these representations, and see how many sanitize with the RDKit Also will for now remove cations with new elements: End of explanation cation_samples = [] for smi_index, smi in enumerate(cations_with_proper_chars): smi = pad_smiles(smi, smile_max_length) Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool) for t, char in enumerate(smi): Z[0, t, char_to_index[char]] = 1 string = "" for i in autoencoder.predict(Z): for j in i: index = sample(j, temperature=0.5) string += index_to_char[index] cation_samples.append(string) print('sampled cations: {}'.format(len(cation_samples))) print('unique samples: {}'.format(pd.DataFrame(cation_samples).drop_duplicates().shape[0])) from rdkit import Chem from rdkit.Chem import Draw % matplotlib inline for smi in cation_samples: try: Draw.MolToMPL(Chem.MolFromSmiles(smi)) print(smi) except: pass cation_samples Explanation: so we had to remove 25 cations End of explanation
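A small follow-up to the sampling cells above: the try/except drawing loop shows which decoded strings RDKit can handle, but it never reports an overall rate. One way to count the fraction of decoded SMILES that sanitize, reusing the cation_samples list from the previous cells, is sketched below (Chem.MolFromSmiles returns None for strings it cannot parse):
```
from rdkit import Chem

def fraction_valid(smiles_list):
    # Count decoded strings that RDKit can parse/sanitize; empty or
    # whitespace-only samples are treated as invalid.
    valid = sum(1 for smi in smiles_list
                if smi.strip() and Chem.MolFromSmiles(smi.strip()) is not None)
    return valid / max(len(smiles_list), 1)

print('fraction of decoded samples that sanitize: {:.2f}'.format(fraction_valid(cation_samples)))
```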
3,615
Given the following text description, write Python code to implement the functionality described below step by step Description: SEQUENCES Preliminary imports, you need to run this cell before any other cell in this notebook. Step1: Discrete time Step2: Linear Difference Equations A linear difference equation(LDE) is written as Step3: Money exercises Imagine you put 10€ in a bank that gives you 10% interest each year. That interest can be compounded in different time basis. Banks typically use a year compound, that is, each year, it applies the interest to the balance in the account, no matter what happened during the year. Think that you forget about it, and check the account balance after 30 years, how much money will you have using a year, month, day and hour compound? What do you notice? 10% year, year compound Step4: 10% per year, month compound Step5: 10% per year, day compound Step6: 10% per year, hour compound Step7: The Fibonacci sequence is a second order difference equation, so we need to provide two initial conditions. Check the if statements in the code below.
Python Code: %matplotlib notebook import matplotlib.pyplot as plt import numpy as np from __future__ import print_function from ipywidgets import interact, interactive, fixed import ipywidgets as widgets def plotSequence(y): n = np.linspace(0, y.size, y.size) plt.scatter(n, y) plt.plot([n, n], [np.zeros(n.size), y], color='gray', linestyle="--") return Explanation: SEQUENCES Preliminary imports, you need to run this cell before any other cell in this notebook. End of explanation w = 1.0 # frequency in hz = 1/seg n=np.linspace(0,10,num=50) y = np.sin(w*n) plt.figure() plotSequence(y) Explanation: Discrete time: Sampling a sinusoidal signal Try different frequencies of the signal and write down what are the min and max values you can recognize the sine wave. End of explanation def lde(n, lmbda, f0): return f0*pow(lmbda,n); # natural frequency and stability def f(a): N = 30 f = np.linspace(0,N,N) f0 = 1 for i in range(N): f[i] = lde(i, a, f0) plt.figure() plotSequence(f) interact(f, a=(-10.0,10.0,0.1)) Explanation: Linear Difference Equations A linear difference equation(LDE) is written as: $y[n] = \lambda y[n-1]$ where $\lambda$ is called the Natural Frequency of the system. Depending on its value, the system modelled with the LDE can be stable or unstable. In order to solve a LDE, we must provide an initial condition, for instance, typically it is chosen the intial state, say $y[0] = 1$. Thus, we can obtain any value in the sequence by applying the formula recursively $y[n] = \lambda y[n-1]$ $y[n-1] = \lambda y[n-2]$ $y[n-2] = \lambda y[n-3]$ $\vdots$ $y[1] = \lambda y[0]$ that is, $y[n] = \lambda ( \lambda ( \lambda ( \dots ( \lambda y[0] ) \dots ))) $ or simply, $y[n] = \lambda^n y[0]$. That is, a pair of $\lambda$ and $y[0]$ defines uniquely a solution to ann LDE. Below there is the definition of an LDE, and interactive plot to play with the influence of $\lambda$ End of explanation lde(30, 1+(10.0/100), 10) Explanation: Money exercises Imagine you put 10€ in a bank that gives you 10% interest each year. That interest can be compounded in different time basis. Banks typically use a year compound, that is, each year, it applies the interest to the balance in the account, no matter what happened during the year. Think that you forget about it, and check the account balance after 30 years, how much money will you have using a year, month, day and hour compound? What do you notice? 10% year, year compound End of explanation lde(30*12, 1+((10.0/100)/12), 10) Explanation: 10% per year, month compound End of explanation lde(30*365, 1+((10.0/100)/365), 10) Explanation: 10% per year, day compound End of explanation lde(30*365*24, 1+((10.0/100)/(365*24)), 10) Explanation: 10% per year, hour compound End of explanation def fibonacci(n): if n == 0: # first initial condition return 0 elif n == 1: # second initial condition return 1 else: return fibonacci(n-1) + fibonacci(n-2) N = 10 z = np.linspace(0,N,N) for i in range(N): z[i] = fibonacci(i) plt.figure() plotSequence(z) Explanation: The Fibonacci sequence is a second order difference equation, so we need to provide two initial conditions. Check the if statements in the code below. End of explanation
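One practical note on the recursive Fibonacci cell above: every call spawns two more calls, so the running time grows exponentially with n and larger N values quickly become slow. An iterative form of the same second-order recurrence is a drop-in replacement if a longer sequence is wanted for the plot; a minimal sketch:
```
def fibonacci_iter(n):
    # Same second-order difference equation, computed iteratively so the
    # cost grows linearly with n instead of exponentially.
    a, b = 0, 1  # the two initial conditions y[0] and y[1]
    for _ in range(n):
        a, b = b, a + b
    return a

print([fibonacci_iter(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```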
3,616
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. 
You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. Step9: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. Step10: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note Step11: Our model's RMSE value is very large compare to test data set's std values. Especially, 'casual' rider count prediction is horrible RMSE result. So, our neural network can not predict rider count well. Where does it fail? From Dec 22, prediction is far from real data. Step12: Dec 11 ~ Dec 22 prediction is relatively well. Step13: Dec 22 ~ Dec 27, Dec 29 ~ Dec 31 prediction is horrible. I think it because hollyday season(X-mas, end year). Step14: Why does it fail where it does? casual rider count prediction is always not good. Step15: registered rider count prediction is relatively good in Dec 11 ~ Dec 21. But Dec 21 ~ Dec 26 and Dec 28 ~ Dec 31 registered rider count prediction is bad. Step16: Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation import numpy as np def sigmoid(x): return 1/(1+np.exp(-x)) class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = sigmoid def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error hidden_output_deriv = hidden_outputs * (1 - hidden_outputs) hidden_errors = np.dot(output_errors.T, self.weights_hidden_to_output).T * hidden_output_deriv # TODO: Update the weights delta_w_h_o = self.lr * np.dot(output_errors, hidden_outputs.T) delta_w_i_h = self.lr * np.dot(hidden_errors, inputs.T) self.weights_hidden_to_output += delta_w_h_o # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += delta_w_i_h # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_layer_input = np.dot(self.weights_input_to_hidden, inputs) hidden_layer_output = self.activation_function(hidden_layer_input) # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_layer_output) final_outputs = final_inputs return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. 
The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation import sys ### Set the hyperparameters here ### epochs = 100 learning_rate = 0.1 hidden_nodes = 2 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[], 'validation_RMSE':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) val_loss_rmse = val_loss ** 0.5 sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5] \ + " ... Validation loss(RMSE): " + str(val_loss_rmse)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) losses['validation_RMSE'].append(val_loss_rmse) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.plot(losses['validation_RMSE'], label='Validation loss(RMSE)') plt.legend() plt.ylim(ymax=1.0) Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. 
This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation print("test['cnt']_loss(RMSE)", MSE(network.run(test_features), test_targets['cnt'].values) ** 0.5) print("test['casual']_loss(RMSE)", MSE(network.run(test_features), test_targets['casual'].values) ** 0.5) print("test['registered']_loss(RMSE)", MSE(network.run(test_features), test_targets['registered'].values) ** 0.5) test_targets.describe() Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below How well does the model predict the data? 
End of explanation print("test set length:", len(test_features)) def draw(column_name='cnt',from_index=0,to_index=505): fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features[column_name] predictions = network.run(test_features[from_index:to_index])*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets[from_index:to_index][column_name]*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data[from_index:to_index].index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Our model's RMSE value is very large compare to test data set's std values. Especially, 'casual' rider count prediction is horrible RMSE result. So, our neural network can not predict rider count well. Where does it fail? From Dec 22, prediction is far from real data. End of explanation draw(column_name='cnt', from_index=0, to_index=300) Explanation: Dec 11 ~ Dec 22 prediction is relatively well. End of explanation draw(column_name='cnt', from_index=300) Explanation: Dec 22 ~ Dec 27, Dec 29 ~ Dec 31 prediction is horrible. I think it because hollyday season(X-mas, end year). End of explanation draw(column_name='casual') Explanation: Why does it fail where it does? casual rider count prediction is always not good. End of explanation draw(column_name='registered') Explanation: registered rider count prediction is relatively good in Dec 11 ~ Dec 21. But Dec 21 ~ Dec 26 and Dec 28 ~ Dec 31 registered rider count prediction is bad. End of explanation import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) Explanation: Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation
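A small addition to the RMSE discussion in the answers above: because both the predictions and the targets were standardized, the printed RMSE values are in standard-deviation units rather than riders. A sketch of converting the 'cnt' figure back to riders per hour, reusing scaled_features, MSE and the trained network from the cells above:
```
# Convert the scaled test RMSE back into the original units of the 'cnt' column.
_, std = scaled_features['cnt']
scaled_rmse = MSE(network.run(test_features), test_targets['cnt'].values) ** 0.5
print("test 'cnt' RMSE in riders per hour: {:.1f}".format(scaled_rmse * std))
```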
3,617
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 1 Data Wrangling Step1: Quizzes Your task is as follows Step2: Playing around with web services and requests Step3: Problem sets Handle CSV files Your task is to process the supplied file and use the csv module to extract data from it. The data comes from NREL (National Renewable Energy Laboratory) website. Each file contains information from one meteorological station, in particular - about amount of solar and wind energy for each hour of day. Note that the first line of the datafile is neither data entry, nor header. It is a line describing the data source. You should extract the name of the station from it. The data should be returned as a list of lists (not dictionaries). You can use the csv modules "reader" method to get data in such format. Another useful method is next() - to get the next line from the iterator. You should only change the parse_file function. Step5: Handle Excel files Find the time and value of max load for each of the regions COAST, EAST, FAR_WEST, NORTH, NORTH_C, SOUTHERN, SOUTH_C, WEST and write the result out in a csv file, using pipe character | as the delimiter. An example output can be seen in the "example.csv" file. Step6: Handle JSON This exercise shows some important concepts that you should be aware about
Python Code: # set up environment import numpy as np import pandas as pd Explanation: Lesson 1 Data Wrangling End of explanation # read data from local file system data = pd.read_excel("2013_ERCOT_Hourly_Load_Data.xls") data.head() data.dtypes data["COAST"].describe() print(data["COAST"].max(), data["COAST"].min(), np.mean(data["COAST"])) coast_max = data[["Hour_End", "COAST"]].ix[data["COAST"] == np.max(data["COAST"])] coast_max coast_min = data[["Hour_End", "COAST"]].ix[data["COAST"] == np.min(data["COAST"])] coast_min coast_max.values coast_min.values Explanation: Quizzes Your task is as follows: - read the provided Excel file - find and return the min, max and average values for the COAST region - find and return the time value for the min and max entries - the time values should be returned as Python tuples Please see the test function for the expected return format End of explanation # To experiment with this code freely you will have to run this code locally. # Take a look at the main() function for an example of how to use the code. # We have provided example json output in the other code editor tabs for you to # look at, but you will not be able to run any queries through our UI. import json import requests BASE_URL = "http://musicbrainz.org/ws/2/" ARTIST_URL = BASE_URL + "artist/" # query parameters are given to the requests.get function as a dictionary; this # variable contains some starter parameters. query_type = { "simple": {}, "atr": {"inc": "aliases+tags+ratings"}, "aliases": {"inc": "aliases"}, "releases": {"inc": "releases"}} def query_site(url, params, uid="", fmt="json"): # This is the main function for making queries to the musicbrainz API. # A json document should be returned by the query. params["fmt"] = fmt r = requests.get(url + uid, params=params) print("requesting", r.url) if r.status_code == requests.codes.ok: return r.json() else: r.raise_for_status() def query_by_name(url, params, name): # This adds an artist name to the query parameters before making # an API call to the function above. params["query"] = "artist:" + name return query_site(url, params) def pretty_print(data, indent=4): # After we get our output, we can format it to be more readable # by using this function. if type(data) == dict: print(json.dumps(data, indent=indent, sort_keys=True)) else: print(data) def get_info(band): ''' Modify the function calls and indexing below to answer the questions on the next quiz. HINT: Note how the output we get from the site is a multi-level JSON document, so try making print statements to step through the structure one level at a time or copy the output to a separate output file. ''' results = query_by_name(ARTIST_URL, query_type["simple"], band) pretty_print(results) artist_id = results["artists"][1]["id"] print("\nARTIST:") pretty_print(results["artists"][1]) artist_data = query_site(ARTIST_URL, query_type["releases"], artist_id) releases = artist_data["releases"] print("\nONE RELEASE:") pretty_print(releases[0], indent=2) release_titles = [r["title"] for r in releases] print("\nALL TITLES:") for t in release_titles: print(t) Explanation: Playing around with web services and requests End of explanation data2 = pd.read_csv("745090.csv", header=1, parse_dates=[[0,1]]) data2.head() data2.dtypes data2.rename(index=str, columns={"Date (MM/DD/YYYY)_Time (HH:MM)": "Date"}, inplace=True) Explanation: Problem sets Handle CSV files Your task is to process the supplied file and use the csv module to extract data from it. 
The data comes from NREL (National Renewable Energy Laboratory) website. Each file contains information from one meteorological station, in particular - about amount of solar and wind energy for each hour of day. Note that the first line of the datafile is neither data entry, nor header. It is a line describing the data source. You should extract the name of the station from it. The data should be returned as a list of lists (not dictionaries). You can use the csv modules "reader" method to get data in such format. Another useful method is next() - to get the next line from the iterator. You should only change the parse_file function. End of explanation data3 = pd.read_excel("2013_ERCOT_Hourly_Load_Data.xls") data3.head() data3.apply(np.max) data3[data3.columns[1:]].idxmax data3["COAST"].max() data3["COAST"].idxmax() data3["Hour_End"].iloc[5391] def get_maxload_per_station(data): Retrieve the maximum load per station and the corresponding timestamp # create empty list result = [] # loop over columns for column in data.columns[1:]: # get max value and timestamp max_value = data[column].max() max_pos = data[column].idxmax() timestamp = data["Hour_End"].iloc[max_pos] # add values to list result.append([column, timestamp.year, timestamp.month, timestamp.day, timestamp.hour, max_value]) # return result return pd.DataFrame(result, columns=["station", "year", "month", "day", "hour", "load"]) output = get_maxload_per_station(data3) output.head() # write file to local file system #output.to_csv("output.csv", sep="|") Explanation: Handle Excel files Find the time and value of max load for each of the regions COAST, EAST, FAR_WEST, NORTH, NORTH_C, SOUTHERN, SOUTH_C, WEST and write the result out in a csv file, using pipe character | as the delimiter. An example output can be seen in the "example.csv" file. End of explanation import json import codecs import requests URL_MAIN = "http://api.nytimes.com/svc/" URL_POPULAR = URL_MAIN + "mostpopular/v2/" API_KEY = { "popular": "", "article": ""} def get_from_file(kind, period): filename = "popular-{0}-{1}.json".format(kind, period) with open(filename, "r") as f: return json.loads(f.read()) def article_overview(kind, period): data = get_from_file(kind, period) titles = [] urls =[] for article in data: section = article["section"] title = article["title"] titles.append({section: title}) if "media" in article: for m in article["media"]: for mm in m["media-metadata"]: if mm["format"] == "Standard Thumbnail": urls.append(mm["url"]) return (titles, urls) def query_site(url, target, offset): # This will set up the query with the API key and offset # Web services often use offset paramter to return data in small chunks # NYTimes returns 20 articles per request, if you want the next 20 # You have to provide the offset parameter if API_KEY["popular"] == "" or API_KEY["article"] == "": print "You need to register for NYTimes Developer account to run this program." 
print "See Intructor notes for information" return False params = {"api-key": API_KEY[target], "offset": offset} r = requests.get(url, params = params) if r.status_code == requests.codes.ok: return r.json() else: r.raise_for_status() def get_popular(url, kind, days, section="all-sections", offset=0): # This function will construct the query according to the requirements of the site # and return the data, or print an error message if called incorrectly if days not in [1,7,30]: print "Time period can be 1,7, 30 days only" return False if kind not in ["viewed", "shared", "emailed"]: print "kind can be only one of viewed/shared/emailed" return False url += "most{0}/{1}/{2}.json".format(kind, section, days) data = query_site(url, "popular", offset) return data def save_file(kind, period): # This will process all results, by calling the API repeatedly with supplied offset value, # combine the data and then write all results in a file. data = get_popular(URL_POPULAR, "viewed", 1) num_results = data["num_results"] full_data = [] with codecs.open("popular-{0}-{1}.json".format(kind, period), encoding='utf-8', mode='w') as v: for offset in range(0, num_results, 20): data = get_popular(URL_POPULAR, kind, period, offset=offset) full_data += data["results"] v.write(json.dumps(full_data, indent=2)) def test(): titles, urls = article_overview("viewed", 1) assert len(titles) == 20 assert len(urls) == 30 assert titles[2] == {'Opinion': 'Professors, We Need You!'} assert urls[20] == 'http://graphics8.nytimes.com/images/2014/02/17/sports/ICEDANCE/ICEDANCE-thumbStandard.jpg' Explanation: Handle JSON This exercise shows some important concepts that you should be aware about: - using codecs module to write unicode files - using authentication with web APIs - using offset when accessing web APIs To run this code locally you have to register at the NYTimes developer site and get your own API key. You will be able to complete this exercise in our UI without doing so, as we have provided a sample result. Your task is to process the saved file that represents the most popular articles (by view count) from the last day, and return the following data: - list of dictionaries, where the dictionary key is "section" and value is "title" - list of URLs for all media entries with "format": "Standard Thumbnail" All your changes should be in the article_overview function. The rest of functions are provided for your convenience, if you want to access the API by yourself. End of explanation
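Looking back at the CSV exercise at the top of this problem set: the task text asks for the csv module's reader and next(), while the notebook ultimately used pandas. A sketch of what that version of parse_file might look like is below; the index used for the station name is an assumption about the 745090.csv layout and would need checking against the real file:
```
import csv

def parse_file(datafile):
    # Read the first line separately (it describes the data source, not a
    # data row or header), skip the real header, and collect the rest.
    data = []
    with open(datafile, 'r') as f:
        reader = csv.reader(f)
        first_line = next(reader)   # data-source description line
        name = first_line[1]        # assumed position of the station name
        next(reader)                # skip the column header row
        for row in reader:
            data.append(row)
    return name, data
```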
3,618
Given the following text description, write Python code to implement the functionality described below step by step Description: Import modules Step1: Load MNIST Fashion data Step2: Create separate class list Step3: Convert lists to numpy arrays Step4: Plot sample images from each class Class Description 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot Step5: Save each image to corresponding class directory
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import csv import os from PIL import Image Explanation: Import modules End of explanation path = '/some/dir/' with open(path + 'fashion-mnist_train.csv') as csvfile: clothing_reader = csv.reader(csvfile) next(clothing_reader) clothing_list=list(clothing_reader) clothing_list = [[int(j) for j in i] for i in clothing_list] len(clothing_list) #60,000 images for train #10,000 images for validation # Sample to work off of #clothing_list_sample = clothing_list[:10] Explanation: Load MNIST Fashion data End of explanation classes = [[] for i in range(10)] for i in clothing_list: for j in range(10): if int(i[0]) == j: classes[j].append(i[1:]) break else: continue Explanation: Create seperate class list End of explanation for cl in classes: for idx,image in enumerate(cl): cl[idx] = np.reshape((np.array(image)),(28,28)) Explanation: Convert lists to numpy arrays End of explanation for i,cl in enumerate(classes): plt.title('Image class is {}'.format(i)) plt.imshow(cl[0], cmap='gray') plt.show() Explanation: Plot sample images from each class Class Description 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot End of explanation for idx,cl in enumerate(classes): os.makedirs(path + 'mnist_fashion_train_png/class{}'.format(idx)) for num,image in enumerate(cl): im = Image.fromarray(image.astype('uint8')) im = im.convert('L') im.save(path + 'mnist_fashion_train_png/class{idx}/img{num}.png'.format(idx=idx,num=num), 'PNG') Explanation: Save each image to corresponding class directory End of explanation
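A possible shortcut for the conversion steps above, shown only as a sketch: pandas can load the same CSV once and numpy can reshape every pixel row to 28x28 in a single call, treating the first column as the class label exactly as the i[0] indexing above does (path is reused from the earlier cell):
```
import numpy as np
import pandas as pd

# Vectorized alternative to the row-by-row loops: one read, one reshape.
df = pd.read_csv(path + 'fashion-mnist_train.csv')
labels = df.iloc[:, 0].values
images = df.iloc[:, 1:].values.reshape(-1, 28, 28).astype('uint8')
print(images.shape)  # expected (60000, 28, 28)
```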
3,619
Given the following text description, write Python code to implement the functionality described below step by step Description: Harassment and Newcomer Retention (Paper) Regression analysis notebook for study of harassment on newcomer retention in Wikipedia. See research project page for an overview. Step1: Load Data and take sample Pick harassment threshold in [0.01, 0.425, 0.75, 0.85] WARNING Step5: Regression Analysis Step6: RQ1 Step7: The first regression shows that newcomers who are harassed in m1 tend to be more active in m2, indicating that harassment does not have a chilling effect on continued newcomer activity. However, this result is an artifact of the group of harassed newcomers being more active in general. After controlling for the level of activity in m1, we see that when comparing users of comparable activity levels in m1, those who get harassed are significantly less active in m2. RQ2 Step8: For our gender analysis, we reduce our sample to the set of users who reported a gender. First off, we observe that newcomers who end up reporting a female gender are more likely to receive harassment in m1. To investigate whether the impact of receiving harassment differs across genders, we ran the same regression as in RQ1, but restricted our analysis to users who supplied a gender and added a interaction term between gender and our measure of harassment in m1. We find that when restricting to users who supplied a gender, we again see that users who received harassment have reduced activity in m2. Inspecting the regression results for the interaction term between harassment and gender indicates that the impact is not significantly different for males and females. RQ3 Step9: A serious potential confound in our analyses could be that the users who receive harassment are just bad faith newcomers or sock-puppets. They get attacked for their misbehavior and reduce their activity in m2 because they get blocked or because they never intended to stick around past their own attacks. To reduce this confound, we control for whether the user harassed anyone in m1 and for whether they received an user warning of any type. The results show that even users who receive harassment but did not harass anyone or receive a user warning show reduced activity in m2. RQ4
Python Code: % matplotlib inline import pandas as pd from dateutil.relativedelta import relativedelta import statsmodels.formula.api as sm import requests from io import StringIO import math import pandas as pd Explanation: Harassment and Newcomer Retention (Paper) Regression analysis notebook for study of harassment on newcomer retention in Wikipedia. See research project page for an overview. End of explanation threshold = 0.425 #Features computes in ./Harassment and Newcomer Retention Data Munging.ipynb df_random = pd.read_csv("../../data/retention/random_user_sample_features.csv") df_attacked = pd.read_csv("../../data/retention/attacked_user_sample_features.csv") # include all harassed newcomer in the sample df_reg = pd.concat([df_random, df_attacked[df_attacked['m1_num_attack_received_%.3f' % threshold] > 0]]) df_reg = df_reg.drop_duplicates(subset = ['user_id']) df_reg.shape df_reg['m1_harassment_received'] = (df_reg['m1_num_attack_received_%.3f' % threshold] > 0).apply(int) df_reg['m1_harassment_made'] = (df_reg['m1_num_attack_made_%.3f' % threshold] > 0).apply(int) df_reg['m1_harassment_received'].value_counts() df_reg.shape column_map = { 'm1_num_days_active': 'm1_days_active', 'm2_num_days_active' : 'm2_days_active', 'm1_harassment_received': 'm1_received_harassment', 'm1_harassment_made': 'm1_made_harassment', 'm1_fraction_ns0_deleted': 'm1_fraction_ns0_deleted', 'm1_fraction_ns0_reverted': 'm1_fraction_ns0_reverted', 'm1_num_warnings_recieved': 'm1_warnings', } df_reg = df_reg.rename(columns=column_map) Explanation: Load Data and take sample Pick harassment threshold in [0.01, 0.425, 0.75, 0.85] WARNING: seeing some very threshold sensitive results! High thresholds result in harassment having positive impact on t2 activiy. Construct sample that is concatenation of a random sample and and all users who received harassment in t1. End of explanation def regress(df, f, family = 'linear'): if family == 'linear': results = sm.ols(formula=f, data=df).fit() return results.summary().tables[1] elif family == 'logistic': results = sm.logit(formula=f, data=df).fit(disp=0) return results.summary().tables[1] else: return def get_latex_table(results, famiily = 'linear'): Mess of a function for turning a statsmodels SimpleTable into a nice latex table strinf results = pd.read_csv(StringIO(results.as_csv())) if family == 'linear': column_map = { results.columns[0]: "", ' coef ' : 'coef', 'P>|t| ': "p-val", ' t ': "z-stat", ' [95.0% Conf. Int.]': "95% CI" } elif family == 'logistic': column_map = { results.columns[0]: "", ' coef ' : 'coef', 'P>|z| ': "p-val", ' z ': "z-stat", ' [95.0% Conf. 
Int.]': "95% CI" } else: return results = results.rename(columns=column_map) results.index = results[""] del results[""] results = results[['coef', "z-stat", "p-val", "95% CI"]] results['coef'] = results['coef'].apply(lambda x: round(float(x), 2)) results['z-stat'] = results['z-stat'].apply(lambda x: round(float(x), 1)) results['p-val'] = results['p-val'].apply(lambda x: round(float(x), 3)) results['95% CI'] = results['95% CI'].apply(reformat_ci) header = \\begin{table}[h] \\begin{center} footer = \\end{center} \\caption{%s} \\label{tab:} \\end{table} f = f.replace("_", "\_").replace("~", "\\texttildelow\\") latex = header + results.to_latex() + footer % f print(latex) return results def reformat_ci(s): ci = s.strip().split() ci = (round(float(ci[0]), 1), round(float(ci[1]), 1)) return "[%.1f, %.1f]" % ci Explanation: Regression Analysis End of explanation f ="m2_days_active ~ m1_received_harassment" regress(df_reg, f) f= "m2_days_active ~ m1_days_active + m1_received_harassment" regress(df_reg, f) Explanation: RQ1: Do newcomers in general show reduced activity after experiencing harassment? End of explanation f="m1_received_harassment ~ is_female" regress(df_reg.query("has_gender == 1"), f, family = 'logistic') f="m2_days_active ~ m1_days_active + m1_received_harassment + m1_received_harassment : is_female" regress(df_reg.query("has_gender == 1"), f) Explanation: The first regression shows that newcomers who are harassed in m1 tend to be more active in m2, indicating that harassment does not have a chilling effect on continued newcomer activity. However, this result is an artifact of the group of harassed newcomers being more active in general. After controlling for the level of activity in m1, we see that when comparing users of comparable activity levels in m1, those who get harassed are significantly less active in m2. RQ2: Does a newcomer's gender affect how they behave after experiencing harassment? End of explanation f="m2_days_active ~ m1_days_active + m1_received_harassment + m1_received_harassment : m1_made_harassment + m1_received_harassment : m1_warnings" regress(df_reg, f) Explanation: For our gender analysis, we reduce our sample to the set of users who reported a gender. First off, we observe that newcomers who end up reporting a female gender are more likely to receive harassment in m1. To investigate whether the impact of receiving harassment differs across genders, we ran the same regression as in RQ1, but restricted our analysis to users who supplied a gender and added a interaction term between gender and our measure of harassment in m1. We find that when restricting to users who supplied a gender, we again see that users who received harassment have reduced activity in m2. Inspecting the regression results for the interaction term between harassment and gender indicates that the impact is not significantly different for males and females. RQ3: How do good faith newcomers behave after experiencing harassment? End of explanation f = "m2_days_active ~ m1_days_active + m1_fraction_ns0_deleted + m1_fraction_ns0_reverted " regress(df_reg.query("m1_num_ns0_edits > 0"), f) f = "m2_days_active ~ m1_days_active + m1_received_harassment + m1_warnings + m1_fraction_ns0_deleted + m1_fraction_ns0_reverted " regress(df_reg.query("m1_num_ns0_edits > 0"), f) Explanation: A serious potential confound in our analyses could be that the users who receive harassment are just bad faith newcomers or sock-puppets. 
They get attacked for their misbehavior and reduce their activity in m2 because they get blocked, or because they never intended to stick around past their own attacks. To reduce this confound, we control for whether the user harassed anyone in m1 and for whether they received a user warning of any type. The results show that even users who received harassment but did not harass anyone or receive a user warning show reduced activity in m2. RQ4: How does experiencing harassment compare to previously studied barriers to newcomer socialization? Halfaker et al. examine how user warnings, deletions, and reverts correlate with newcomer retention. Here we add those features and see how they compare to our measure of harassment. End of explanation
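To make the modelling pattern above easier to follow, here is a minimal, self-contained sketch of the same statsmodels formula-API call on synthetic data; the column names mirror the ones used in this notebook, but the numbers and coefficients are invented purely to illustrate the mechanics, not to reproduce any result of the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as sm

# Synthetic stand-in for df_reg, reusing the same column names as above.
rng = np.random.RandomState(0)
n = 500
demo = pd.DataFrame({
    "m1_days_active": rng.poisson(5, size=n),
    "m1_received_harassment": rng.binomial(1, 0.2, size=n),
})
demo["m2_days_active"] = (
    0.6 * demo["m1_days_active"]
    - 1.0 * demo["m1_received_harassment"]
    + rng.normal(0, 1, size=n)
)

# Same pattern as the regress() helper: fit an OLS model from a formula
# string and inspect the coefficient table.
fit = sm.ols("m2_days_active ~ m1_days_active + m1_received_harassment", data=demo).fit()
print(fit.summary().tables[1])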
3,620
Given the following text description, write Python code to implement the functionality described below step by step Description: Graded = 7/7 HOMEWORK 06 You'll be using the Dark Sky Forecast API from Forecast.io, available at https Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is? Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible? Step3: 4) What's the difference between the high and low temperatures for today? Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold. Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Python Code: import config import requests weather_key = config.weather_key # api key - latitude, longitude, time (epoch) #without time parameter response = requests.get('https://api.forecast.io/forecast/' + weather_key + '/39.0068,76.7791') #on my birthdate - time parameter #response = requests.get('https://api.forecast.io/forecast/' + weather_key + '/39.0068,76.7791,765407717') data = response.json() # print(data) Explanation: Graded = 7/7 HOMEWORK 06 You'll be using the Dark Sky Forecast API from Forecast.io, available at https://developer.forecast.io. It's a pretty simple API, but be sure to read the documentation! 1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!). Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world! Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West? End of explanation response = requests.get('https://api.forecast.io/forecast/' + weather_key + '/39.0068,76.7791') data = response.json() # the current date's weath print(data['currently']) print("The current wind speed is", data['currently']['windSpeed'], "and it feels", data['currently']['apparentTemperature']-data['currently']['temperature'], "degrees warmer than it is.") Explanation: 2) What's the current wind speed? How much warmer does it feel than it actually is? End of explanation # the moon is not currently printing print("Currently", data['daily']['data'][0]['moonPhase'], "of the moon is showing.") Explanation: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible? End of explanation print("There is a ", data['daily']['data'][0]['temperatureMax'] - data['daily']['data'][0]['temperatureMin'], "degree difference between the high and low temperatures for today.") Explanation: 4) What's the difference between the high and low temperatures for today? End of explanation dailies = data['daily']['data'] import time for day in dailies: date = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(day['temperatureMaxTime'])) if(day['temperatureMax']) > 95: print(date, "will be a hot day of",day['temperatureMax'], "degrees" ) else: print(date, "will be a warm day of", day['temperatureMax'], "degrees") #print(dailies) Explanation: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold. End of explanation response = requests.get('https://api.forecast.io/forecast/' + weather_key + '/25.7617,80.1918') data = response.json() #print(data.keys()) #print(data['hourly']) current_hour = 1 #print(data['hourly']['data']) for hour in data['hourly']['data']: # need to make it so that this only looks at the rest of today if current_hour < 12: print("In the next", current_hour, "hour it will be", hour['apparentTemperature'], "in Miami, Florida") current_hour = current_hour + 1 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000? Tip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date! 
Tip: You'll want to use Forecast.io's "time machine" API at https://developer.forecast.io/docs/v2 central_park_lat = str(40.7829) central_park_long = str(73.9654) # 12/25/1980 @ 12:00am (UTC) #unix_xmas = str(346550400) #12/25/1990 #662083200 # 12/25/2000 #977702400 #start year = 1980 year = 1980 # unix conversion of chrimas for 1980, 1990 and 2000. #http://www.unixtimestamp.com/index.php website used to convert dates for xmas in [346550400, 662083200, 977702400]: unix_xmas = str(xmas) response = requests.get('https://api.forecast.io/forecast/' + weather_key + '/'+ central_park_lat + ',' + central_park_long + ',' + unix_xmas) response = response.json() daily_data = response['daily']['data'] # print(daily_data) for day in daily_data: print("It was an average of", day['temperatureMax'] - day['temperatureMin'], "degrees on Christmas in", year) year = year + 10 #print(response['apparentTemperatureMax']) Explanation: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature. End of explanation
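As a side note on the UNIX-time tip above, the hard-coded Christmas timestamps (346550400, 662083200, 977702400) can also be derived directly in Python instead of being copied from an online converter. This is an added sketch, not part of the graded homework, and it assumes midnight UTC is the moment of interest.

from datetime import datetime, timezone

for year in (1980, 1990, 2000):
    xmas = datetime(year, 12, 25, tzinfo=timezone.utc)
    # timestamp() returns seconds since 1970-01-01 UTC (the UNIX epoch).
    print(year, int(xmas.timestamp()))
# Prints 346550400, 662083200 and 977702400, matching the constants used above.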
3,621
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting started with TensorFlow Learning Objectives 1. Practice defining and performing basic operations on constant Tensors 1. Use Tensorflow's automatic differentiation capability 1. Learn how to train a linear regression from scratch with TensorFLow In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays. Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe. At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model. As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance. Step1: Operations on Tensors Variables and Constants Tensors in TensorFlow are either contant (tf.constant) or variables (tf.Variable). Constant values can not be changed, while variables values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values while tensors constructed with tf.constant don't have these methods, and therefore their values can not be changed. When you want to change the value of a tf.Variable x use one of the following method Step2: Point-wise operations Tensorflow offers similar point-wise tensor operations as numpy does Step3: NumPy Interoperability In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands. Step4: You can convert a native TF tensor to a NumPy array using .numpy() Step5: Linear Regression Now let's use low level tensorflow operations to implement linear regression. Later in the course you'll see abstracted ways to do this using high level TensorFlow. Toy Dataset We'll model the following function Step6: Let's also create a test dataset to evaluate our models Step7: Loss Function The simplest model we can build is a model that for each value of x returns the sample mean of the training set Step8: Using mean squared error, our loss is Step9: This values for the MSE loss above will give us a baseline to compare how a more complex model is doing. Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model \begin{equation} \hat{Y} = w_0X + w_1 \end{equation} we can write a loss function taking as arguments the coefficients of the model Step10: Gradient Function To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to! During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. 
For that we need to wrap our loss computation within the context of a tf.GradientTape instance, which will record gradient information Step11: Training Loop Here we have a very simple training loop that converges. Note that we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity. Step12: Now let's compare the test loss for this linear regression to the test loss from the baseline model that always outputs the mean of the training set Step13: This is indeed much better! Bonus Try modelling a non-linear function such as
Python Code: # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0 import numpy as np from matplotlib import pyplot as plt import tensorflow as tf print(tf.__version__) Explanation: Getting started with TensorFlow Learning Objectives 1. Practice defining and performing basic operations on constant Tensors 1. Use Tensorflow's automatic differentiation capability 1. Learn how to train a linear regression from scratch with TensorFLow In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays. Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe. At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model. As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance. End of explanation x = tf.constant([2, 3, 4]) x x = tf.Variable(2.0, dtype=tf.float32, name='my_variable') x.assign(45.8) # TODO 1 x x.assign_add(4) # TODO 1 x x.assign_sub(3) # TODO 1 x Explanation: Operations on Tensors Variables and Constants Tensors in TensorFlow are either contant (tf.constant) or variables (tf.Variable). Constant values can not be changed, while variables values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values while tensors constructed with tf.constant don't have these methods, and therefore their values can not be changed. When you want to change the value of a tf.Variable x use one of the following method: x.assign(new_value) x.assign_add(value_to_be_added) x.assign_sub(value_to_be_subtracted End of explanation a = tf.constant([5, 3, 8]) # TODO 1 b = tf.constant([3, -1, 2]) c = tf.add(a, b) d = a + b print("c:", c) print("d:", d) a = tf.constant([5, 3, 8]) # TODO 1 b = tf.constant([3, -1, 2]) c = tf.multiply(a, b) d = a * b print("c:", c) print("d:", d) # tf.math.exp expects floats so we need to explicitly give the type a = tf.constant([5, 3, 8], dtype=tf.float32) b = tf.math.exp(a) print("b:", b) Explanation: Point-wise operations Tensorflow offers similar point-wise tensor operations as numpy does: tf.add allows to add the components of a tensor tf.multiply allows us to multiply the components of a tensor tf.subtract allow us to substract the components of a tensor tf.math.* contains the usual math operations to be applied on the components of a tensor and many more... Most of the standard aritmetic operations (tf.add, tf.substrac, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.) 
End of explanation # native python list a_py = [1, 2] b_py = [3, 4] tf.add(a_py, b_py) # TODO 1 # numpy arrays a_np = np.array([1, 2]) b_np = np.array([3, 4]) tf.add(a_np, b_np) # TODO 1 # native TF tensor a_tf = tf.constant([1, 2]) b_tf = tf.constant([3, 4]) tf.add(a_tf, b_tf) # TODO 1 Explanation: NumPy Interoperability In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands. End of explanation a_tf.numpy() Explanation: You can convert a native TF tensor to a NumPy array using .numpy() End of explanation X = tf.constant(range(10), dtype=tf.float32) Y = 2 * X + 10 print("X:{}".format(X)) print("Y:{}".format(Y)) Explanation: Linear Regression Now let's use low level tensorflow operations to implement linear regression. Later in the course you'll see abstracted ways to do this using high level TensorFlow. Toy Dataset We'll model the following function: \begin{equation} y= 2x + 10 \end{equation} End of explanation X_test = tf.constant(range(10, 20), dtype=tf.float32) Y_test = 2 * X_test + 10 print("X_test:{}".format(X_test)) print("Y_test:{}".format(Y_test)) Explanation: Let's also create a test dataset to evaluate our models: End of explanation y_mean = Y.numpy().mean() def predict_mean(X): y_hat = [y_mean] * len(X) return y_hat Y_hat = predict_mean(X_test) Explanation: Loss Function The simplest model we can build is a model that for each value of x returns the sample mean of the training set: End of explanation errors = (Y_hat - Y)**2 loss = tf.reduce_mean(errors) loss.numpy() Explanation: Using mean squared error, our loss is: \begin{equation} MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2 \end{equation} For this simple model the loss is then: End of explanation def loss_mse(X, Y, w0, w1): Y_hat = w0 * X + w1 errors = (Y_hat - Y)**2 return tf.reduce_mean(errors) Explanation: This values for the MSE loss above will give us a baseline to compare how a more complex model is doing. Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model \begin{equation} \hat{Y} = w_0X + w_1 \end{equation} we can write a loss function taking as arguments the coefficients of the model: End of explanation # TODO 2 def compute_gradients(X, Y, w0, w1): with tf.GradientTape() as tape: loss = loss_mse(X, Y, w0, w1) return tape.gradient(loss, [w0, w1]) w0 = tf.Variable(0.0) w1 = tf.Variable(0.0) dw0, dw1 = compute_gradients(X, Y, w0, w1) print("dw0:", dw0.numpy()) print("dw1", dw1.numpy()) Explanation: Gradient Function To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to! During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. 
For that we need to wrap our loss computation within the context of tf.GradientTape instance which will reccord gradient information: python with tf.GradientTape() as tape: loss = # computation This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable: python gradients = tape.gradient(loss, [w0, w1]) We illustrate this procedure with by computing the loss gradients with respect to the model weights: End of explanation STEPS = 1000 LEARNING_RATE = .02 MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n" w0 = tf.Variable(0.0) w1 = tf.Variable(0.0) for step in range(0, STEPS + 1): dw0, dw1 = compute_gradients(X, Y, w0, w1) w0.assign_sub(dw0 * LEARNING_RATE) w1.assign_sub(dw1 * LEARNING_RATE) if step % 100 == 0: loss = loss_mse(X, Y, w0, w1) print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy())) Explanation: Training Loop Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity. End of explanation loss = loss_mse(X_test, Y_test, w0, w1) loss.numpy() Explanation: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set: End of explanation X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32) Y = X * tf.exp(-X**2) %matplotlib inline plt.plot(X, Y) def make_features(X): f1 = tf.ones_like(X) # Bias. f2 = X f3 = tf.square(X) f4 = tf.sqrt(X) f5 = tf.exp(X) return tf.stack([f1, f2, f3, f4, f5], axis=1) def predict(X, W): return tf.squeeze(X @ W, -1) def loss_mse(X, Y, W): Y_hat = predict(X, W) errors = (Y_hat - Y)**2 return tf.reduce_mean(errors) def compute_gradients(X, Y, W): with tf.GradientTape() as tape: loss = loss_mse(Xf, Y, W) return tape.gradient(loss, W) # TODO 3 STEPS = 2000 LEARNING_RATE = .02 Xf = make_features(X) n_weights = Xf.shape[1] W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32) # For plotting steps, losses = [], [] plt.figure() for step in range(1, STEPS + 1): dW = compute_gradients(X, Y, W) W.assign_sub(dW * LEARNING_RATE) if step % 100 == 0: loss = loss_mse(Xf, Y, W) steps.append(step) losses.append(loss) plt.clf() plt.plot(steps, losses) print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W))) plt.figure() plt.plot(X, Y, label='actual') plt.plot(X, predict(Xf, W), label='predicted') plt.legend() Explanation: This is indeed much better! Bonus Try modelling a non-linear function such as: $y=xe^{-x^2}$ End of explanation
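As an extra sanity check (not in the original lab), the gradients returned by tf.GradientTape for this MSE loss can be compared against the hand-derived partial derivatives $\partial L/\partial w_0 = 2\,\text{mean}(r \cdot x)$ and $\partial L/\partial w_1 = 2\,\text{mean}(r)$, where $r = w_0 x + w_1 - y$. A small self-contained sketch:

import tensorflow as tf

X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
w0, w1 = tf.Variable(3.0), tf.Variable(-1.0)

with tf.GradientTape() as tape:
    residual = w0 * X + w1 - Y
    loss = tf.reduce_mean(residual ** 2)
dw0, dw1 = tape.gradient(loss, [w0, w1])

# Analytic gradients of the mean squared error for y_hat = w0 * x + w1.
manual_dw0 = 2 * tf.reduce_mean(residual * X)
manual_dw1 = 2 * tf.reduce_mean(residual)
print(dw0.numpy(), manual_dw0.numpy())  # the two values should agree
print(dw1.numpy(), manual_dw1.numpy())  # and so should these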
3,622
Given the following text description, write Python code to implement the functionality described below step by step Description: Conditional statements The most common conditional statement in Python is the if-elif-else statement Step1: There is no switch-case type of statement in Python. Note Step2: While statement Python supports while statement familiar from many languages. It is not nearly as much used because of iterators (covered later). value = 5 while value &gt; 0 Step3: Iterating Python has a for-loop statement that is similar to the foreach statement in a lot of other languages. It is possible to loop over any iterables, i.e. lists, sets, tuples, even dicts. Step4: It is possible to unpack things in this stage if that is required. Step5: In dictionaries the keys are iterated over by default. Step6: It is still possible to loop through numbers using the built-in range function that returns an iterable with numbers in sequence. Step7: The function enumerate returns the values it's given with their number in the collection. Step8: Breaking and continuing Sometimes it is necessary to stop the execution of a loop before it's time. For that there is the break keyword. At other times it is desired to end that particular step in the loop and immediately move to the next one. Both of the keywords could be substituted with complex if-else statements but a well-considered break or continue statement is more readable to the next programmer. Step9: List comprehension The act of modifying all the values in a list into a new list is so common in programming that there is a special syntax for it in python, the list comprehension. Step10: It is not necessary to use list comprehensions but they are mentioned so they can be understood if discovered in other programs. Part of the Zen of Python says There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. List comprehensions are the one and obvious way to do these kinds of operations so they are presented even though they may be considered "advanced" syntax. There is also possibility to add a simple test to the statement.
Python Code: value = 4 value = value + 1 if value < 5: print("value is less than 5") elif value > 5: print("value is more than 5") else: print("value is precisely 5") # go ahead and experiment by changing the value Explanation: Conditional statements The most common conditional statement in Python is the if-elif-else statement: if variable &gt; 5: do_something() elif variable &gt; 0: do_something_else() else: give_up() Compared to languages like C, Java or Lisp, do you feel something is missing? Python is whitespace-aware and it uses the so-called off-side rule to annotate code blocks. This has several benefits * It's easy to read at a glance * levels of indentation are processed pre-attentively to conserve brain power for everything * It's easy to write without having to worry too much One corollary of the indentation is that you need to be very aware of when you're using the tabulator character and when you're using a space. Most Python programmers only use whitespace and configure their editor to output several spaces when tab is pressed. Note: the line before a deeper level of indentation ends in a colon ":". This syntax is part of beginning a new code block and surprisingly easy to forget. End of explanation list_ = [1] list_.pop() if not list_: print("list is None or empty") Explanation: There is no switch-case type of statement in Python. Note: When evaluating conditional statements the values 0, an empty string and an empty list all evaluate to False. This can be confusing as it is one of the few places where Python doesn't enforce strong typing. End of explanation list_ = [1, 2, 3, 4] while list_: # remember, an empty list evaluates as False for conditional purposes print(list_.pop()) # pop() removes the last entry from the list Explanation: While statement Python supports while statement familiar from many languages. It is not nearly as much used because of iterators (covered later). value = 5 while value &gt; 0: value = do_something(value) The following example shows how a list is used as the conditional. End of explanation synonyms = ["is dead", "has kicked the bucket", "is no more", "ceased to be"] for phrase in synonyms: print("This parrot " + phrase + ".") Explanation: Iterating Python has a for-loop statement that is similar to the foreach statement in a lot of other languages. It is possible to loop over any iterables, i.e. lists, sets, tuples, even dicts. End of explanation pairs = ( (1, 2), [3, 4], (5, 6), ) for x, y in pairs: print("A is " + str(x)) print("B is " + str(y)) Explanation: It is possible to unpack things in this stage if that is required. End of explanation airspeed_swallows = {"African": 20, "European": 30} for swallow in airspeed_swallows: print("The air speed of " + swallow + " swallows is "+ str(airspeed_swallows[swallow])) Explanation: In dictionaries the keys are iterated over by default. End of explanation for i in range(5): print(str(i)) # The function supports arbitary step lengths and going backwards for i in range(99, 90, -2): # parameters are from, to and step length in that order print(str(i) +" boxes of bottles of beer on the wall") Explanation: It is still possible to loop through numbers using the built-in range function that returns an iterable with numbers in sequence. End of explanation my_list = ["a", "b", "c", "d", "e"] for index, string in enumerate(my_list): print(string +" is the alphabet number "+ str(index)) Explanation: The function enumerate returns the values it's given with their number in the collection. 
End of explanation for i in range(20): if i % 7 == 6: # modulo operator break # print(i) for i in range(-5, 5, 1): if i == 0: print ("not dividing by 0") continue print("5/" + str(i) + " equals " + str(5/i)) Explanation: Breaking and continuing Sometimes it is necessary to stop the execution of a loop before it's time. For that there is the break keyword. At other times it is desired to end that particular step in the loop and immediately move to the next one. Both of the keywords could be substituted with complex if-else statements but a well-considered break or continue statement is more readable to the next programmer. End of explanation list_ = [value*3-1 for value in range(5)] list_ Explanation: List comprehension The act of modifying all the values in a list into a new list is so common in programming that there is a special syntax for it in python, the list comprehension. End of explanation list_2 = [value*3-1 for value in range(10) if value % 2 == 0] #only take even numbers list_2 Explanation: It is not necessary to use list comprehensions but they are mentioned so they can be understood if discovered in other programs. Part of the Zen of Python says There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. List comprehensions are the one and obvious way to do these kinds of operations so they are presented even though they may be considered "advanced" syntax. There is also possibility to add a simple test to the statement. End of explanation
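For readers who want to see exactly what the filtered comprehension above expands to, the following added sketch spells out the equivalent explicit loop and shows that the same syntax carries over to dict comprehensions; it is an illustration, not part of the original lesson.

list_2 = [value * 3 - 1 for value in range(10) if value % 2 == 0]

# The same computation written as an explicit loop with a test:
list_3 = []
for value in range(10):
    if value % 2 == 0:  # keep only the even numbers
        list_3.append(value * 3 - 1)
assert list_2 == list_3

# The comprehension syntax also works for dicts (and sets):
squares_of_even = {value: value ** 2 for value in range(10) if value % 2 == 0}
print(squares_of_even)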
3,623
Given the following text description, write Python code to implement the functionality described below step by step Description: 1T_Pandas 의 자료형 - Series, DataFrame 오늘의 과정 Pandas => 데이터 입력 ( 우리가 직접 수작업, 엑셀, CSV, DB ) => 데이터 출력 ( print, csv, excel, ... db ) => 데이터 전처리 ( preprocess => 우리가 수집한/크롤링한 정보를 분석 가능한 형태로 만들어주는 작업 ) => 조금 더 복잡한, => 크롤링 ( AJAX POST , select option ) Step1: Numpy => 수학적인 연산; Number, Matrix ( C ) => Matlab ( low-level ) Pandas => 데이터 분석; ( Pandas를 설치하기 위해서 Numpy 가 필요합니다. ) ( high-level )(내부적으로 Numpy가 쓰여 계산이 된다.) scipy => 수학계산; 과학적인 계산... scikit-learn => 미리 머신러닝 알고리즘이 구현되어 있는 라이브러리 tensorflow => 미리 머신러닝/딥러닝 알고리즘이 구현되어 있는 라이브러리 scikit-learn, tensorflow 이 데이터를 => pandas ( 머신러닝 X; 데이터 분석 ) 파이썬 배울 때 순서 Zen of Python ( PEP0020 ) PEP0008 ( Pythonic한 방법 ) 자료형 Pandas 배울 때 순서 => Pandas스러운 방법으로 개발 Pandas 자료형 ( DF; DataFrame ) Series, 2. DataFrame Step2: Series & DataFrame 의 관계 Step3: 위와 같은 방법은 좋은 게 아니다. 가급적 한글은 빼서 쓰자. 헤더에서는 영어를 쓰는 것이 좋다. 한글은 깨지기도 하고 문제가 생길 수 있다. Python2에서 깨질 수 있다. - -시리즈의 기능으로 각각 채워 넣어진다. Step4: Pandas 에서 데이터 추가하기 ( 2 ) ( Series 로 하는거 말고, 우리가 직접 입력하는 경우 )
Python Code: import pandas as pd # 관례적으로, pandas 를 pd라는 이름으로 import 한다. Explanation: 1T_Pandas 의 자료형 - Series, DataFrame 오늘의 과정 Pandas => 데이터 입력 ( 우리가 직접 수작업, 엑셀, CSV, DB ) => 데이터 출력 ( print, csv, excel, ... db ) => 데이터 전처리 ( preprocess => 우리가 수집한/크롤링한 정보를 분석 가능한 형태로 만들어주는 작업 ) => 조금 더 복잡한, => 크롤링 ( AJAX POST , select option ) End of explanation # Matrix ( 2차원 행렬; Column x Row ) pd.DataFrame() # List, = Row나 Column 그 자체 pd.Series() animals = ["dog", "cat", "iguana"] animals len(animals) animals_series = pd.Series(["dog", "cat", "iguana"]) animals_series len(animals_series) Explanation: Numpy => 수학적인 연산; Number, Matrix ( C ) => Matlab ( low-level ) Pandas => 데이터 분석; ( Pandas를 설치하기 위해서 Numpy 가 필요합니다. ) ( high-level )(내부적으로 Numpy가 쓰여 계산이 된다.) scipy => 수학계산; 과학적인 계산... scikit-learn => 미리 머신러닝 알고리즘이 구현되어 있는 라이브러리 tensorflow => 미리 머신러닝/딥러닝 알고리즘이 구현되어 있는 라이브러리 scikit-learn, tensorflow 이 데이터를 => pandas ( 머신러닝 X; 데이터 분석 ) 파이썬 배울 때 순서 Zen of Python ( PEP0020 ) PEP0008 ( Pythonic한 방법 ) 자료형 Pandas 배울 때 순서 => Pandas스러운 방법으로 개발 Pandas 자료형 ( DF; DataFrame ) Series, 2. DataFrame End of explanation name_series = pd.Series(["김기표", "고기표", "이기표"]) # List <=> Series, 하지만 여러분은 가능하면 Series 만 사용하자. name_series[0:2] list(name_series) email_series = pd.Series(["[email protected]", "[email protected]", "[email protected]"]) name_series email_series type(email_series) # Column => Name, Email df = pd.DataFrame({"Name": name_series, "Email": email_series}) df # Column 을 기준으로 ( 지금은 하나의 Column 이 하나의 Series ) => Series 라고 보고 있다. df.loc[0] # Row 를 Series 로 보고 있다. type(df.loc[0]) # DataFrame 에서 Column 을 가져오는 방법 df["Name"] df.Name #이렇게 써도 되긴 한데 함수랑 헷갈려. 기호에 따라 쓰자 df2 = pd.DataFrame(columns=["나는 최고다"]) df2 df2["나는 최고다"] df2.나는 최고다 df2 = pd.DataFrame(columns=["우리는_최고다"]) df2.우리는_최고다 Explanation: Series & DataFrame 의 관계 End of explanation # DataFrame 에서 Row 를 가져오는 방법 df.loc[0] #loc = location의 약자 df.Name[0] #이것과 loc가 다른 것은? 리스트인데 딕셔너리처럼 동작도 가능한 리스트다. df.loc[2]["Email"] df.Name #index로 나오게 되고 df.loc[2] #column 이름으로 나오게 된다. Explanation: 위와 같은 방법은 좋은 게 아니다. 가급적 한글은 빼서 쓰자. 헤더에서는 영어를 쓰는 것이 좋다. 한글은 깨지기도 하고 문제가 생길 수 있다. Python2에서 깨질 수 있다. - -시리즈의 기능으로 각각 채워 넣어진다. End of explanation df = pd.DataFrame(columns=["Name", "Email"]) df.loc[0] = ["김기표", "[email protected]"] # 데이터 추가하기 (1) df.loc[0] df # 이것을 정확히 쓰기 위해서는 Column의 정확한 "순서"를 알고 있어야 합니다. df.loc[0] = ["[email protected]", '김기표'] df #좋은 입력 방식. 딕셔너리 방식으로 넣으면 된다. df.loc[1] = {"Name": "강기표", "Email": "[email protected]"} df #데이터의 Row를 추가하는 방법에 대해서 다룸 #데이터 Column을 추가하려면? df["Address"] = "" df # Address => "대기빌딩 지하2층 교무실 ____ 앞" # 최대한 다양한 방법으로 해보시면 됩니다. ( 3가지+ ) # 1. for 문으로 시작하자. df.loc[____] for i in range(len(df)): name = df.loc[i]["Name"] #이름을 뽑았어요. df.loc[i]["Address"] = "대기빌딩 지하2층 교무실 " + name + " 앞" # + => 더하는 방법 # %s => 문자열 치환 # {name} => 문자열 치환 ( formatting ) df # 2. for 문을 돌린다. ( 쉽게 돌리는 방식 ) # 제가 잘 몰랐어요. 없는 거고 이거 까먹으시면 됩니다. for index, row in df.iteritems(): # df.loc[index]["Address"] = "대기빌딩 지하2층 교무실 " + df.loc[index]["Name"] + " 앞" print(index) #for문을 안 도는 이유는 index가 아니고 column이 나온다. # df df["Address"] = "" df df["Address"] = "대기빌딩 지하 2층 교무실 " + df["Name"] + " 앞" df #왜 되는지 모르겠다. 그냥 되는 거다. df["Name"] #Series가 뽑힌다. Series의 기능? 연산이 가능하다. df["Name"] + " 앞" #이렇게 Series로 자동 연산이 쭉 된다. # 함수형 프로그래밍을 이용하자. 
df["Address"] = "" df def get_address(name): return "대기빌딩 지하 2층 교무실 " + name + " 앞" get_address("김기표") df["Name"].apply(get_address) df df["Address"] = df["Name"].apply(lambda name: "대기빌딩 지하 2층 " + name + " 앞") df Explanation: Pandas 에서 데이터 추가하기 ( 2 ) ( Series 로 하는거 말고, 우리가 직접 입력하는 경우 ) End of explanation
3,624
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#Query-Data" data-toc-modified-id="Query-Data-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Query Data</a></div><div class="lev1 toc-item"><a href="#visualize-some-stuff" data-toc-modified-id="visualize-some-stuff-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>visualize some stuff</a></div> Step1: Query Data Grab schedule page Step2: Let's query every talk description Step3: Okay, make a dataframe and add some helpful columns Step4: visualize some stuff
Python Code: import requests as rq import pandas as pd import matplotlib.pyplot as mpl import bs4 import os from tqdm import tqdm_notebook from datetime import time %matplotlib inline Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Query-Data" data-toc-modified-id="Query-Data-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Query Data</a></div><div class="lev1 toc-item"><a href="#visualize-some-stuff" data-toc-modified-id="visualize-some-stuff-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>visualize some stuff</a></div> End of explanation base_url = "https://pydata.org" r = rq.get(base_url + "/berlin2018/schedule/") bs = bs4.BeautifulSoup(r.text, "html.parser") Explanation: Query Data Grab schedule page: End of explanation data = {} for ahref in tqdm_notebook(bs.find_all("a")): if 'schedule/presentation' in ahref.get("href"): url = ahref.get("href") else: continue data[url] = {} resp = bs4.BeautifulSoup(rq.get(base_url + url).text, "html.parser") title = resp.find("h2").text resp = resp.find_all(attrs={'class':"container"})[1] when, who = resp.find_all("h4") date_info = when.string.split("\n")[1:] day_info = date_info[0].strip() time_inf = date_info[1].strip() room_inf = date_info[3].strip()[3:] speaker = who.find("a").text level = resp.find("dd").text abstract = resp.find(attrs={'class':'abstract'}).text description = resp.find(attrs={'class':'description'}).text data[url] = { 'day_info': day_info, 'title': title, 'time_inf': time_inf, 'room_inf': room_inf, 'speaker': speaker, 'level': level, 'abstract': abstract, 'description': description } Explanation: Let's query every talk description: End of explanation df = pd.DataFrame.from_dict(data, orient='index') df.reset_index(drop=True, inplace=True) # Tutorials on Friday df.loc[df.day_info=='Friday', 'tutorial'] = True df['tutorial'].fillna(False, inplace=True) # time handling df['time_from'], df['time_to'] = zip(*df.time_inf.str.split(u'\u2013')) df.time_from = pd.to_datetime(df.time_from).dt.time df.time_to = pd.to_datetime(df.time_to).dt.time del df['time_inf'] df.to_json('./data.json') df.head(3) # Example: Let's query all non-novice talks on sunday, starting at 4 pm tmp = df.query("(level!='Novice') & (day_info=='Sunday')") tmp[tmp.time_from >= time(16)] Explanation: Okay, make a dataframe and add some helpful columns: End of explanation plt.style.use('seaborn-darkgrid')#'seaborn-darkgrid') plt.rcParams['savefig.dpi'] = 200 plt.rcParams['figure.dpi'] = 120 plt.rcParams['figure.autolayout'] = False plt.rcParams['figure.figsize'] = 10, 5 plt.rcParams['axes.labelsize'] = 17 plt.rcParams['axes.titlesize'] = 20 plt.rcParams['font.size'] = 16 plt.rcParams['lines.linewidth'] = 2.0 plt.rcParams['lines.markersize'] = 8 plt.rcParams['legend.fontsize'] = 11 plt.rcParams['font.family'] = "serif" plt.rcParams['font.serif'] = "cm" plt.rcParams['text.latex.preamble'] = "\\usepackage{subdepth}, \\usepackage{type1cm}" plt.rcParams['text.usetex'] = True ax = df.level.value_counts().plot.bar(rot=0) ax.set_ylabel("number of talks") ax.set_title("levels of the talks where:") plt.show() ax = df.rename(columns={'day_info': 'dayinfo'}).groupby("dayinfo")['level'].value_counts(normalize=True).round(2).unstack(level=0).plot.bar(rot=0) ax.set_xlabel('') ax.set_title('So the last day is more kind of "fade-out"?') plt.show() ax = df.groupby("tutorial")['level'].value_counts(normalize=True).round(2).unstack(level=0).T.plot.bar(rot=0) ax.set_title('the percentage of experienced slots is higher for tutorials!\n\\small{So come on fridays for 
experienced level ;-)}') plt.show() Explanation: visualize some stuff End of explanation
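To make the grouped bar charts above easier to interpret, here is a small added sketch of the same groupby / value_counts(normalize=True) / unstack pattern on a hand-made frame, so the intermediate table that gets plotted is visible; the rows are invented and do not come from the real schedule.

import pandas as pd

demo = pd.DataFrame({
    "day_info": ["Friday", "Friday", "Saturday", "Saturday", "Sunday", "Sunday"],
    "level": ["Experienced", "Novice", "Novice", "Intermediate", "Novice", "Novice"],
})

# Share of each level per day -- this is the table behind the bar plots above.
shares = (
    demo.groupby("day_info")["level"]
    .value_counts(normalize=True)
    .round(2)
    .unstack(level=0)
)
print(shares)
# shares.plot.bar(rot=0) would then draw the same kind of grouped bar chart.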
3,625
Given the following text description, write Python code to implement the functionality described below step by step Description: E2E ML on GCP Step1: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. Step2: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs Step3: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas Step4: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. Step5: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step6: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step7: Only if your bucket doesn't already exist Step8: Finally, validate access to your Cloud Storage bucket by examining its contents Step9: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Step10: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. Step11: Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify Step12: Set pre-built containers Set the pre-built Docker container image for prediction. For the latest list, see Pre-built containers for prediction. Step13: Set machine type Next, set the machine type to use for prediction. Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for prediction. machine type n1-standard Step14: Get pretrained model from TensorFlow Hub For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource. 
Download the pretrained model First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model. Step15: Save the model artifacts At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location. Step16: Upload the model for serving Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. How does the serving function work When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string. The serving function consists of two parts Step17: Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array. When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. Step18: Upload the TensorFlow Hub model to a Vertex AI Model resource Finally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource. Note Step19: Creating an Endpoint resource You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited by the values you set when you initialized the Vertex AI SDK with the init() method. In this example, the following parameters are specified Step20: Deploying Model resources to an Endpoint resource. You can deploy one of more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary. Note Step21: Prepare test data for prediction Next, you will load a compressed JPEG image into memory and then base64 encode it. For demonstration purposes, you use an image from the Flowers dataset. Step22: Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). 
To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. The format of each instance is Step23: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q ! pip3 install tensorflow-hub $USER_FLAG -q Explanation: E2E ML on GCP: MLOps stage 6 : Get started with TensorFlow serving functions with Vertex AI Prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to add a serving function to a model deployed to a Vertex AI Endpoint. Objective In this tutorial, you learn how to use Vertex AI Prediction on a Vertex AI Endpoint resource with a serving function. This tutorial uses the following Google Cloud ML services and resources: Vertex AI Prediction Vertex AI Models Vertex AI Endpoints The steps performed include: Download a pretrained image classification model from TensorFlow Hub. Create a serving function to receive compressed image data, and output decomopressed preprocessed data for the model input. Upload the TensorFlow Hub model and serving function as a Vertex AI Model resource. Creating an Endpoint resource. Deploying the Model resource to an Endpoint resource. Make an online prediction to the Model resource instance deployed to the Endpoint resource. Dataset This tutorial uses a pre-trained image classification model from TensorFlow Hub, which is trained on ImageNet dataset. Learn more about ResNet V2 pretained model. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the following packages to execute this notebook. End of explanation # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. 
End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation REGION = "[your-region]" # @param {type: "string"} if REGION == "[your-region]": REGION = "us-central1" Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions. End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Vertex AI Workbench, then don't execute this code IS_COLAB = False if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv( "DL_ANACONDA_HOME" ): if "google.colab" in sys.modules: IS_COLAB = True from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. 
Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"} BUCKET_URI = f"gs://{BUCKET_NAME}" if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]": BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_URI Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_URI Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation import google.cloud.aiplatform as aip import tensorflow as tf import tensorflow_hub as hub Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI) Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation if os.getenv("IS_TESTING_DEPLOY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPLOY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) Explanation: Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) Otherwise specify (None, None) to use a container image to run on a CPU. Learn more about hardware accelerator support for your region. Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support. 
End of explanation if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2.5".replace(".", "-") if TF[0] == "2": if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format( REGION.split("-")[0], DEPLOY_VERSION ) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) Explanation: Set pre-built containers Set the pre-built Docker container image for prediction. For the latest list, see Pre-built containers for prediction. End of explanation if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", DEPLOY_COMPUTE) Explanation: Set machine type Next, set the machine type to use for prediction. Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for prediction. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation tfhub_model = tf.keras.Sequential( [hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")] ) tfhub_model.build([None, 224, 224, 3]) tfhub_model.summary() Explanation: Get pretrained model from TensorFlow Hub For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource. Download the pretrained model First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model. End of explanation MODEL_DIR = BUCKET_URI + "/model" tfhub_model.save(MODEL_DIR) Explanation: Save the model artifacts At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location. 
End of explanation CONCRETE_INPUT = "numpy_inputs" def _preprocess(bytes_input): decoded = tf.io.decode_jpeg(bytes_input, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) resized = tf.image.resize(decoded, size=(224, 224)) return resized @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(bytes_inputs): decoded_images = tf.map_fn( _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False ) return { CONCRETE_INPUT: decoded_images } # User needs to make sure the key matches model's input @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(bytes_inputs): images = preprocess_fn(bytes_inputs) prob = m_call(**images) return prob m_call = tf.function(tfhub_model.call).get_concrete_function( [tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)] ) tf.saved_model.save(tfhub_model, MODEL_DIR, signatures={"serving_default": serving_fn}) Explanation: Upload the model for serving Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. How does the serving function work When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string. The serving function consists of two parts: preprocessing function: Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph). Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc. post-processing function: Converts the model output to format expected by the receiving application -- e.q., compresses the output. Packages the output for the the receiving application -- e.g., add headings, make JSON object, etc. Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content. One consideration you need to consider when building serving functions for TF.Keras models is that they run as static graphs. That means, you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported. Serving function for image data Preprocessing To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. 
Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes, and then preprocessed to match the model input requirements, before it is passed as input to the deployed model. To resolve this, you define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU). When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model: io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB). image.convert_image_dtype - Changes integer pixel values to float 32, and rescales pixel data between 0 and 1. image.resize - Resizes the image to match the input shape for the model. At this point, the data can be passed to the model (m_call), via a concrete function. The serving function is a static graph, while the model is a dynamic graph. The concrete function performs the tasks of marshalling the input data from the serving function to the model, and marshalling the prediction result from the model back to the serving function. End of explanation loaded = tf.saved_model.load(MODEL_DIR) serving_input = list( loaded.signatures["serving_default"].structured_input_signature[1].keys() )[0] print("Serving function input:", serving_input) Explanation: Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array. When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. End of explanation model = aip.Model.upload( display_name="example_" + TIMESTAMP, artifact_uri=MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, ) print(model) Explanation: Upload the TensorFlow Hub model to a Vertex AI Model resource Finally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource. Note: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image. End of explanation endpoint = aip.Endpoint.create( display_name="example_" + TIMESTAMP, project=PROJECT_ID, location=REGION, labels={"your_key": "your_value"}, ) print(endpoint) Explanation: Creating an Endpoint resource You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited by the values you set when you initialized the Vertex AI SDK with the init() method. 
In this example, the following parameters are specified: display_name: A human readable name for the Endpoint resource. project: Your project ID. location: Your region. labels: (optional) User defined metadata for the Endpoint in the form of key/value pairs. This method returns an Endpoint object. Learn more about Vertex AI Endpoints. End of explanation response = endpoint.deploy( model=model, deployed_model_display_name="example_" + TIMESTAMP, machine_type=DEPLOY_COMPUTE, ) print(endpoint) Explanation: Deploying Model resources to an Endpoint resource. You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary. Note: For this example, you specified the deployment container for the TFHub model in the previous step of uploading the model artifacts to a Vertex AI Model resource. In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has defined for it the deployment container image. To deploy, you specify the following additional configuration settings: The machine type. The (if any) type and number of GPUs. Static, manual or auto-scaling of VM instances. In this example, you deploy the model with the minimal amount of specified parameters, as follows: model: The Model resource. deployed_model_display_name: The human readable name for the deployed model instance. machine_type: The machine type for each VM instance. Due to the requirements to provision the resource, this may take up to a few minutes. End of explanation ! gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg import base64 with open("test.jpg", "rb") as f: data = f.read() b64str = base64.b64encode(data).decode("utf-8") Explanation: Prepare test data for prediction Next, you will load a compressed JPEG image into memory and then base64 encode it. For demonstration purposes, you use an image from the Flowers dataset. End of explanation # The format of each instance should conform to the deployed model's prediction input schema. instances = [{serving_input: {"b64": b64str}}] prediction = endpoint.predict(instances=instances) print(prediction) Explanation: Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. The format of each instance is: { serving_input: { 'b64': base64_encoded_bytes } } Since the predict() method can take multiple items (instances), send your single test item as a list of one test item. Response The response from the predict() call is a Python dictionary with the following entries: ids: The internal assigned unique identifiers for each prediction request. predictions: The predicted confidence, between 0 and 1, per class label. deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
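As a hedged follow-up sketch (not in the original notebook), the confidence vector can be reduced to a single class index; this assumes the SDK's response object exposes .predictions as printed above, with one vector per submitted instance:

import numpy as np

scores = np.array(prediction.predictions[0])           # confidences for the single instance sent
print("predicted class index:", int(np.argmax(scores)),
      "with confidence:", float(np.max(scores)))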
End of explanation delete_bucket = False delete_model = True delete_endpoint = True if delete_endpoint: try: endpoint.undeploy_all() endpoint.delete() except Exception as e: print(e) if delete_model: try: model.delete() except Exception as e: print(e) if delete_bucket or os.getenv("IS_TESTING"): ! gsutil rm -rf {BUCKET_URI} Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: End of explanation
3,626
Given the following text description, write Python code to implement the functionality described below step by step Description: Advent of Code 2017 December 3rd You come across an experimental new kind of memory stored on an infinite two-dimensional grid. Each square on the grid is allocated in a spiral pattern starting at a location marked 1 and then counting up while spiraling outward. For example, the first few squares are allocated like this Step1: Subtract $k$ from the index and take the absolute value Step2: Not quite. Subtract $k - 1$ from the index and take the absolute value Step3: Great, now add $k$... Step4: So to write a function that can give us the value of a row at a given index Step5: (I'm leaving out details of how I figured this all out and just giving the relevent bits. It took a little while to zero in of the aspects of the pattern that were important for the task.) Finding the rank and offset of a number. Now that we can compute the desired output value for a given rank and the offset (index) into that rank, we need to determine how to find the rank and offset of a number. The rank is easy to find by iteratively stripping off the amount already covered by previous ranks until you find the one that brackets the target number. Because each row is $2k$ places and there are $4$ per rank each rank contains $8k$ places. Counting the initial square we have Step6: Putting it all together Step7: Sympy to the Rescue Find the rank for large numbers Using e.g. Sympy we can find the rank directly by solving for the roots of an equation. For large numbers this will (eventually) be faster than iterating as rank_and_offset() does. Step8: Since $1 + 2 + 3 + ... + N = \frac{N(N + 1)}{2}$ and $\sum_{n=1}^k 8n = 8(\sum_{n=1}^k n) = 8\frac{k(k + 1)}{2}$ We want Step9: We can write a function to solve for $k$ given some $n$... Step10: First solve() for $E - n = 0$ which has two solutions (because the equation is quadratic so it has two roots) and since we only care about the larger one we use max() to select it. It will generally not be a nice integer (unless $n$ is the number of an end-corner of a rank) so we take the floor() and add 1 to get the integer rank of $n$. (Taking the ceiling() gives off-by-one errors on the rank boundaries. I don't know why. I'm basically like a monkey doing math here.) =-D It gives correct answers Step11: And it runs much faster (at least for large numbers) Step12: After finding the rank you would still have to find the actual value of the rank's first corner and subtract it (plus 2) from the number and compute the offset as above and then the final output, but this overhead is partially shared by the other method, and overshadowed by the time it (the other iterative method) would take for really big inputs. The fun thing to do here would be to graph the actual runtime of both methods against each other to find the trade-off point. It took me a second to realize I could do this... Sympy is a symbolic math library, and it supports symbolic manipulation of equations. I can put in $y$ (instead of a value) and ask it to solve for $k$. Step13: The equation is quadratic so there are two roots, we are interested in the greater one... Step14: Now we can take the floor(), add 1, and lambdify() the equation to get a Python function that calculates the rank directly. Step15: It's pretty fast. Step16: Knowing the equation we could write our own function manually, but the speed is no better. Step17: Given $n$ and a rank, compute the offset. 
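For reference, the quadratic above can also be solved by hand, which gives exactly the closed form used later in the manually written rank function:

$$ 2 + 8\frac{k(k+1)}{2} = n \quad\Rightarrow\quad 4k^2 + 4k + (2 - n) = 0 \quad\Rightarrow\quad k = \frac{-1 + \sqrt{n - 1}}{2} = \frac{\sqrt{n - 1}}{2} - 0.5 $$

keeping the positive root; flooring and adding 1 turns this into the integer rank, i.e. int(floor(sqrt(n - 1) / 2 - 0.5) + 1).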
Now that we have a fast way to get the rank, we still need to use it to compute the offset into a pyramid row. Step18: (Note the sneaky way the sign changes from $k(k + 1)$ to $k(k - 1)$. This is because we want to subract the $(k - 1)$th rank's total places (its own and those of lesser rank) from our $n$ of rank $k$. Substituting $k - 1$ for $k$ in $k(k + 1)$ gives $(k - 1)(k - 1 + 1)$, which of course simplifies to $k(k - 1)$.) Step19: So, we can compute the rank, then the offset, then the row value. Step20: A Joy Version At this point I feel confident that I can implement a concise version of this code in Joy. ;-) Step21: rank_of n rank_of --------------- k The translation is straightforward. int(floor(sqrt(n - 1) / 2 - 0.5) + 1) rank_of == -- sqrt 2 / 0.5 - floor ++ Step22: offset_of n k offset_of ------------------- i (n - 2 + 4 * k * (k - 1)) % (2 * k) A little tricky... n k dup 2 * n k k 2 * n k k*2 [Q] dip % n k Q k*2 % n k dup -- n k k -- n k k-1 4 * * 2 + - n k*k-1*4 2 + - n k*k-1*4+2 - n-k*k-1*4+2 n-k*k-1*4+2 k*2 % n-k*k-1*4+2%k*2 Ergo Step23: row_value k i row_value ------------------- n abs(i - (k - 1)) + k k i over -- - abs + k i k -- - abs + k i k-1 - abs + k i-k-1 abs + k |i-k-1| + k+|i-k-1| Step24: aoc2017.3 n aoc2017.3 ----------------- m n dup rank_of n k [offset_of] dupdip n k offset_of k i k swap row_value k i row_value m
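Before the full listing below, a stand-alone numeric check (added here, not part of the original notebook) that the rank/offset/row-value formulas derived above reproduce the worked examples from the puzzle statement:

from math import floor, sqrt

def steps(n):
    k = int(floor(sqrt(n - 1) / 2.0 - 0.5) + 1)   # rank
    i = (n - 2 + 4 * k * (k - 1)) % (2 * k)       # offset into the rank
    return abs(i - (k - 1)) + k                   # Manhattan distance to square 1

assert steps(23) == 2      # "Data from square 23 is carried only 2 steps"
assert steps(1024) == 31   # "Data from square 1024 must be carried 31 steps"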
Python Code: k = 4 Explanation: Advent of Code 2017 December 3rd You come across an experimental new kind of memory stored on an infinite two-dimensional grid. Each square on the grid is allocated in a spiral pattern starting at a location marked 1 and then counting up while spiraling outward. For example, the first few squares are allocated like this: 17 16 15 14 13 18 5 4 3 12 19 6 1 2 11 20 7 8 9 10 21 22 23---&gt; ... While this is very space-efficient (no squares are skipped), requested data must be carried back to square 1 (the location of the only access port for this memory system) by programs that can only move up, down, left, or right. They always take the shortest path: the Manhattan Distance between the location of the data and square 1. For example: Data from square 1 is carried 0 steps, since it's at the access port. Data from square 12 is carried 3 steps, such as: down, left, left. Data from square 23 is carried only 2 steps: up twice. Data from square 1024 must be carried 31 steps. How many steps are required to carry the data from the square identified in your puzzle input all the way to the access port? Analysis I freely admit that I worked out the program I wanted to write using graph paper and some Python doodles. There's no point in trying to write a Joy program until I'm sure I understand the problem well enough. The first thing I did was to write a column of numbers from 1 to n (32 as it happens) and next to them the desired output number, to look for patterns directly: 1 0 2 1 3 2 4 1 5 2 6 1 7 2 8 1 9 2 10 3 11 2 12 3 13 4 14 3 15 2 16 3 17 4 18 3 19 2 20 3 21 4 22 3 23 2 24 3 25 4 26 5 27 4 28 3 29 4 30 5 31 6 32 5 There are four groups repeating for a given "rank", then the pattern enlarges and four groups repeat again, etc. 1 2 3 2 3 4 5 4 3 4 5 6 7 6 5 4 5 6 7 8 9 8 7 6 5 6 7 8 9 10 Four of this pyramid interlock to tile the plane extending from the initial "1" square. 2 3 | 4 5 | 6 7 | 8 9 10 11 12 13|14 15 16 17|18 19 20 21|22 23 24 25 And so on. We can figure out the pattern for a row of the pyramid at a given "rank" $k$: $2k - 1, 2k - 2, ..., k, k + 1, k + 2, ..., 2k$ or $k + (k - 1), k + (k - 2), ..., k, k + 1, k + 2, ..., k + k$ This shows that the series consists at each place of $k$ plus some number that begins at $k - 1$, decreases to zero, then increases to $k$. Each row has $2k$ members. Let's figure out how, given an index into a row, we can calculate the value there. The index will be from 0 to $k - 1$. Let's look at an example, with $k = 4$: 0 1 2 3 4 5 6 7 7 6 5 4 5 6 7 8 End of explanation for n in range(2 * k): print abs(n - k), Explanation: Subtract $k$ from the index and take the absolute value: End of explanation for n in range(2 * k): print abs(n - (k - 1)), Explanation: Not quite. Subtract $k - 1$ from the index and take the absolute value: End of explanation for n in range(2 * k): print abs(n - (k - 1)) + k, Explanation: Great, now add $k$... End of explanation def row_value(k, i): i %= (2 * k) # wrap the index at the row boundary. return abs(i - (k - 1)) + k k = 5 for i in range(2 * k): print row_value(k, i), Explanation: So to write a function that can give us the value of a row at a given index: End of explanation def rank_and_offset(n): assert n >= 2 # Guard the domain. n -= 2 # Subtract two, # one for the initial square, # and one because we are counting from 1 instead of 0. k = 1 while True: m = 8 * k # The number of places total in this rank, 4(2k). if n < m: return k, n % (2 * k) n -= m # Remove this rank's worth. 
k += 1 for n in range(2, 51): print n, rank_and_offset(n) for n in range(2, 51): k, i = rank_and_offset(n) print n, row_value(k, i) Explanation: (I'm leaving out details of how I figured this all out and just giving the relevent bits. It took a little while to zero in of the aspects of the pattern that were important for the task.) Finding the rank and offset of a number. Now that we can compute the desired output value for a given rank and the offset (index) into that rank, we need to determine how to find the rank and offset of a number. The rank is easy to find by iteratively stripping off the amount already covered by previous ranks until you find the one that brackets the target number. Because each row is $2k$ places and there are $4$ per rank each rank contains $8k$ places. Counting the initial square we have: $corner_k = 1 + \sum_{n=1}^k 8n$ I'm not mathematically sophisticated enough to turn this directly into a formula (but Sympy is, see below.) I'm going to write a simple Python function to iterate and search: End of explanation def row_value(k, i): return abs(i - (k - 1)) + k def rank_and_offset(n): n -= 2 # Subtract two, # one for the initial square, # and one because we are counting from 1 instead of 0. k = 1 while True: m = 8 * k # The number of places total in this rank, 4(2k). if n < m: return k, n % (2 * k) n -= m # Remove this rank's worth. k += 1 def aoc20173(n): if n <= 1: return 0 k, i = rank_and_offset(n) return row_value(k, i) aoc20173(23) aoc20173(23000) aoc20173(23000000000000) Explanation: Putting it all together End of explanation from sympy import floor, lambdify, solve, symbols from sympy import init_printing init_printing() k = symbols('k') Explanation: Sympy to the Rescue Find the rank for large numbers Using e.g. Sympy we can find the rank directly by solving for the roots of an equation. For large numbers this will (eventually) be faster than iterating as rank_and_offset() does. End of explanation E = 2 + 8 * k * (k + 1) / 2 # For the reason for adding 2 see above. E Explanation: Since $1 + 2 + 3 + ... + N = \frac{N(N + 1)}{2}$ and $\sum_{n=1}^k 8n = 8(\sum_{n=1}^k n) = 8\frac{k(k + 1)}{2}$ We want: End of explanation def rank_of(n): return floor(max(solve(E - n, k))) + 1 Explanation: We can write a function to solve for $k$ given some $n$... End of explanation for n in (9, 10, 25, 26, 49, 50): print n, rank_of(n) Explanation: First solve() for $E - n = 0$ which has two solutions (because the equation is quadratic so it has two roots) and since we only care about the larger one we use max() to select it. It will generally not be a nice integer (unless $n$ is the number of an end-corner of a rank) so we take the floor() and add 1 to get the integer rank of $n$. (Taking the ceiling() gives off-by-one errors on the rank boundaries. I don't know why. I'm basically like a monkey doing math here.) =-D It gives correct answers: End of explanation %time rank_of(23000000000000) # Compare runtime with rank_and_offset()! %time rank_and_offset(23000000000000) Explanation: And it runs much faster (at least for large numbers): End of explanation y = symbols('y') g, f = solve(E - y, k) Explanation: After finding the rank you would still have to find the actual value of the rank's first corner and subtract it (plus 2) from the number and compute the offset as above and then the final output, but this overhead is partially shared by the other method, and overshadowed by the time it (the other iterative method) would take for really big inputs. 
The fun thing to do here would be to graph the actual runtime of both methods against each other to find the trade-off point. It took me a second to realize I could do this... Sympy is a symbolic math library, and it supports symbolic manipulation of equations. I can put in $y$ (instead of a value) and ask it to solve for $k$. End of explanation g f Explanation: The equation is quadratic so there are two roots, we are interested in the greater one... End of explanation floor(f) + 1 F = lambdify(y, floor(f) + 1) for n in (9, 10, 25, 26, 49, 50): print n, int(F(n)) Explanation: Now we can take the floor(), add 1, and lambdify() the equation to get a Python function that calculates the rank directly. End of explanation %time int(F(23000000000000)) # The clear winner. Explanation: It's pretty fast. End of explanation from math import floor as mfloor, sqrt def mrank_of(n): return int(mfloor(sqrt(23000000000000 - 1) / 2 - 0.5) + 1) %time mrank_of(23000000000000) Explanation: Knowing the equation we could write our own function manually, but the speed is no better. End of explanation def offset_of(n, k): return (n - 2 + 4 * k * (k - 1)) % (2 * k) Explanation: Given $n$ and a rank, compute the offset. Now that we have a fast way to get the rank, we still need to use it to compute the offset into a pyramid row. End of explanation offset_of(23000000000000, 2397916) Explanation: (Note the sneaky way the sign changes from $k(k + 1)$ to $k(k - 1)$. This is because we want to subract the $(k - 1)$th rank's total places (its own and those of lesser rank) from our $n$ of rank $k$. Substituting $k - 1$ for $k$ in $k(k + 1)$ gives $(k - 1)(k - 1 + 1)$, which of course simplifies to $k(k - 1)$.) End of explanation def rank_of(n): return int(mfloor(sqrt(n - 1) / 2 - 0.5) + 1) def offset_of(n, k): return (n - 2 + 4 * k * (k - 1)) % (2 * k) def row_value(k, i): return abs(i - (k - 1)) + k def aoc20173(n): k = rank_of(n) i = offset_of(n, k) return row_value(k, i) aoc20173(23) aoc20173(23000) aoc20173(23000000000000) %time aoc20173(23000000000000000000000000) # Fast for large values. Explanation: So, we can compute the rank, then the offset, then the row value. End of explanation from notebook_preamble import J, V, define Explanation: A Joy Version At this point I feel confident that I can implement a concise version of this code in Joy. ;-) End of explanation define('rank_of == -- sqrt 2 / 0.5 - floor ++') Explanation: rank_of n rank_of --------------- k The translation is straightforward. int(floor(sqrt(n - 1) / 2 - 0.5) + 1) rank_of == -- sqrt 2 / 0.5 - floor ++ End of explanation define('offset_of == dup 2 * [dup -- 4 * * 2 + -] dip %') Explanation: offset_of n k offset_of ------------------- i (n - 2 + 4 * k * (k - 1)) % (2 * k) A little tricky... n k dup 2 * n k k 2 * n k k*2 [Q] dip % n k Q k*2 % n k dup -- n k k -- n k k-1 4 * * 2 + - n k*k-1*4 2 + - n k*k-1*4+2 - n-k*k-1*4+2 n-k*k-1*4+2 k*2 % n-k*k-1*4+2%k*2 Ergo: offset_of == dup 2 * [dup -- 4 * * 2 + -] dip % End of explanation define('row_value == over -- - abs +') Explanation: row_value k i row_value ------------------- n abs(i - (k - 1)) + k k i over -- - abs + k i k -- - abs + k i k-1 - abs + k i-k-1 abs + k |i-k-1| + k+|i-k-1| End of explanation define('aoc2017.3 == dup rank_of [offset_of] dupdip swap row_value') J('23 aoc2017.3') J('23000 aoc2017.3') V('23000000000000 aoc2017.3') Explanation: aoc2017.3 n aoc2017.3 ----------------- m n dup rank_of n k [offset_of] dupdip n k offset_of k i k swap row_value k i row_value m End of explanation
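Following up on the trade-off idea above, a rough, hypothetical timing sketch that pits the iterative rank_and_offset against the closed form; note that mrank_of as written above ignores its argument and hard-codes 23000000000000, so the corrected closed form is inlined here (prints are Python-3 style):

import timeit
from math import floor as mfloor, sqrt

def closed_form_rank(n):
    return int(mfloor(sqrt(n - 1) / 2.0 - 0.5) + 1)

for power in (4, 6, 8, 10):               # keep sizes modest so the iterative version finishes quickly
    n = 23 * 10 ** power
    t_iter = timeit.timeit(lambda: rank_and_offset(n), number=5)    # assumes the function defined above
    t_closed = timeit.timeit(lambda: closed_form_rank(n), number=5)
    print(n, t_iter, t_closed)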
3,627
Given the following text description, write Python code to implement the functionality described below step by step Description: Apache Spot's Ipython Advanced Mode DNS This guide provides examples about how to request data, show data with some cool libraries like pandas and more. Import Libraries The next cell will import the necessary libraries to execute the functions. Do not remove Step1: Request Data In order to request data we are using Graphql (a query language for APIs, more info at Step3: Now that we have a function, we can run a query like this Step4: Pandas Dataframes The following cell loads the results into a pandas dataframe For more information on how to use pandas, you can learn more here Step5: Additional operations Additional operations can be performed on the dataframe like sorting the data, filtering it and grouping it Filtering the data Step6: Ordering the data Step7: Grouping the data Step9: Reset Scored Connections Uncomment and execute the following cell to reset all scored connections for this day Step10: Sandbox At this point you can perform your own analysis using the previously provided functions as a guide. Happy threat hunting!
Python Code: import datetime import pandas as pd import numpy as np import linecache, bisect import os spath = os.getcwd() path = spath.split("/") date = path[len(path)-1] Explanation: Apache Spot's Ipython Advanced Mode DNS This guide provides examples about how to request data, show data with some cool libraries like pandas and more. Import Libraries The next cell will import the necessary libraries to execute the functions. Do not remove End of explanation def makeGraphqlRequest(query, variables): return GraphQLClient.request(query, variables) Explanation: Request Data In order to request data we are using Graphql (a query language for APIs, more info at: http://graphql.org/). We provide the function to make a data request, all you need is a query and variables End of explanation suspicious_query = query($date:SpotDateType) { dns { suspicious(date:$date) { clientIp clientIpSev dnsQuery dnsQueryClass dnsQueryClassLabel dnsQueryRcode dnsQueryRcodeLabel dnsQueryRep dnsQuerySev dnsQueryType dnsQueryTypeLabel frameLength frameTime networkContext score tld unixTimestamp } } } ##If you want to use a different date for your query, switch the ##commented/uncommented following lines variables={ 'date': datetime.datetime.strptime(date, '%Y%m%d').strftime('%Y-%m-%d') # 'date': "2016-10-08" } suspicious_request = makeGraphqlRequest(suspicious_query,variables) ##The variable suspicious_request will contain the resulting data from the query. results = suspicious_request['data']['dns']['suspicious'] Explanation: Now that we have a function, we can run a query like this: *Note: There's no need to manually set the date for the query, by default the code will read the date from the current path End of explanation df = pd.read_json(json.dumps(results)) ##Printing only the selected column list from the dataframe ##Unless specified otherwise, print df[['clientIp', 'unixTimestamp','tld', 'dnsQuery','dnsQueryRcode','dnsQueryRcodeLabel']] Explanation: Pandas Dataframes The following cell loads the results into a pandas dataframe For more information on how to use pandas, you can learn more here: https://pandas.pydata.org/pandas-docs/stable/10min.html End of explanation ##Filter results where the destination port = 3389 ##The resulting data will be stored in df2 df2 = df[df['tld'].isin(['sjc04-login.dotomi.com'])] print df2[['clientIp', 'unixTimestamp','tld', 'dnsQuery','dnsQueryRcode','dnsQueryRcodeLabel']] Explanation: Additional operations Additional operations can be performed on the dataframe like sorting the data, filtering it and grouping it Filtering the data End of explanation srtd = df.sort_values(by="tld") print srtd[['clientIp', 'unixTimestamp','tld', 'dnsQuery','dnsQueryRcode','dnsQueryRcodeLabel']] Explanation: Ordering the data End of explanation ## This command will group the results by pairs of source-destination IP ## summarizing all other columns grpd = df.groupby(['clientIp','tld']).count() ## This will print the resulting dataframe displaying the input and output bytes columnns print grpd[["dnsQuery"]] Explanation: Grouping the data End of explanation # reset_scores = mutation($date:SpotDateType!) 
{ # dns{ # resetScoredConnections(date:$date){ # success # } # } # } # variables={ # 'date': datetime.datetime.strptime(date, '%Y%m%d').strftime('%Y-%m-%d') # } # request = makeGraphqlRequest(reset_scores,variables) # print request['data']['dns']['resetScoredConnections']['success'] Explanation: Reset Scored Connections Uncomment and execute the following cell to reset all scored connections for this day End of explanation #Your code here Explanation: Sandbox At this point you can perform your own analysis using the previously provided functions as a guide. Happy threat hunting! End of explanation
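As one concrete starting point for the sandbox above (illustrative only; it reuses the df dataframe built earlier, and the print statements follow this notebook's Python 2 style): rank client IPs and top-level domains by how many suspicious DNS records they account for.

## Example sandbox analysis: top talkers among the suspicious DNS results
top_clients = df.groupby('clientIp').size().sort_values(ascending=False).head(10)
print top_clients
## Most frequently seen suspicious domains
top_domains = df['tld'].value_counts().head(10)
print top_domains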
3,628
Given the following text description, write Python code to implement the functionality described below step by step Description: Process U-Wind Step1: 2. Read u-wind data and pick variables 2.1 Use print to check variable information. Actually, you can also use numdump infile.nc -h to check the same inforamtion Step2: 2.2 Read data Have to set_auto_mask(False) to automatically scaling and offseting, or may cause problem. Step3: 2.3 Have a quick shot on first grid Step4: 3. Calculate Mean and STD in time 3.1 Mean Step5: 3.2 STD Step6: 3.3 Visualize Mean and STD at 1000hPa (the first level)
Python Code: % matplotlib inline from pylab import * import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap # plot on map projections from netCDF4 import Dataset as netcdf # netcdf4-python module Explanation: Process U-Wind: Mean and Std In this notebook, we will do a little complicated operations * read 4D u-Wind data * calculate mean and stadard deviation along the axis of time * visualize based on the library of basemap Data wind data can be downlaed from https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.html This u-wind is a 4D data, includding [months|levels|lat|lon]. The presure levels in hPa. Moreover, the wind data with scaling and offset. when using them, have to restore them to oringal values. 1. Load basic libs End of explanation ncset = netcdf(r'data/uwnd3.mon.mean.nc') print(ncset) Explanation: 2. Read u-wind data and pick variables 2.1 Use print to check variable information. Actually, you can also use numdump infile.nc -h to check the same inforamtion End of explanation ncset.set_auto_mask(False) lon = ncset['lon'][:] lat = ncset['lat'][:] lev = ncset['level'][:] u = ncset['uwnd'][504:624,:] # for the period 1990-1999. print(u.shape) print(lev) Explanation: 2.2 Read data Have to set_auto_mask(False) to automatically scaling and offseting, or may cause problem. End of explanation plot(u[:,1,0, 0]) Explanation: 2.3 Have a quick shot on first grid End of explanation u_10y = np.mean(u, axis=0) # calculate mean for all years and months u_10y.shape Explanation: 3. Calculate Mean and STD in time 3.1 Mean End of explanation u_10y_std=np.std(u, axis=0) u_10y.shape Explanation: 3.2 STD End of explanation [lons, lats] = meshgrid(lon,lat) m = Basemap(projection='robin', lon_0=0) m.drawcoastlines() # draw parallels and meridians. m.drawparallels(np.arange(-90.,120.,30.)) m.drawmeridians(np.arange(0.,360.,60.)) m.drawmapboundary(fill_color='aqua') minu = floor(np.min(u_10y[0])) maxu = ceil(np.max(u_10y[0])) h = m.pcolormesh(lons, lats, u_10y[0], shading='flat',latlon=True, cmap='jet', vmin=minu, vmax=maxu) m.colorbar(h, location='bottom', pad="15%", label='[$^oC$]') plt.title('U1000 Mean between 1990-1999 [m/s]') [lons, lats] = meshgrid(lon,lat) m = Basemap(projection='robin', lon_0=0) m.drawcoastlines() # draw parallels and meridians. m.drawparallels(np.arange(-90.,120.,30.)) m.drawmeridians(np.arange(0.,360.,60.)) m.drawmapboundary(fill_color='aqua') minu = floor(np.min(u_10y_std[0])) maxu = ceil(np.max(u_10y_std[0])) h = m.pcolormesh(lons, lats, u_10y_std[0], shading='flat',latlon=True, cmap='jet', vmin=minu, vmax=maxu) m.colorbar(h, location='bottom', pad="15%", label='[$^oC$]') plt.title('U1000 STD between 1990-1999 [m/s]') Explanation: 3.3 Visualize Mean and STD at 1000hPa (the first level) End of explanation
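The overview above stresses that the packed wind values carry a scale and offset. netCDF4-python normally applies the CF scale_factor/add_offset attributes for you, but if automatic conversion were switched off, the manual unpacking would look roughly like this (a sketch only; check the attribute names against the actual file):

var = ncset['uwnd']
var.set_auto_maskandscale(False)                  # hand back the raw packed integers
raw = var[504:624, :]
scale = getattr(var, 'scale_factor', 1.0)
offset = getattr(var, 'add_offset', 0.0)
u_physical = raw * scale + offset                 # restore values in m/s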
3,629
Given the following text description, write Python code to implement the functionality described below step by step Description: From transfer function to difference equation In approximately the middle of Peter Corke's lecture Introduction to digital control, he explaines how to go from a transfer function description of a controller (or compensator) to a difference equation that can be implemented on a microcontroller. The idea is to recognize that the term $$ sX(s) $$ in a transfer function is the laplace transform of the derivative of $x(t)$, \begin{equation} sX(s) + x(0) \quad \overset{\mathcal{L}}{\longleftrightarrow} \quad \frac{d}{dt} x(t), \end{equation} where the inital value $x(0)$ is often taken to be zero. We then make use of a discrete approximation of the derivative $$ \frac{d}{dt}x(t) \approx \frac{x(t-h) - x(t)}{h}, $$ where $h$ is the time between the samples in the sampled version of signal $x(t)$. The steps to convert the system on transfer function form $$ Y(s) = F(s)U(s) = \frac{s+b}{s+a}U(s) $$ are to write $$ (s+a)Y(s) = (s+b)U(s) $$ $$ sY(s) + aY(s) = sU(s) + bU(s), $$ take the inverse Laplace transform $$ \frac{d}{dt} y + ay = \frac{d}{dt} u + bu$$ and use the discrete approximation of the derivative $$ \frac{y_k - y_{k-1}}{h} + ay_k = \frac{u_k - u_{k-1}}{h} + bu_k $$ which can be written $$ (1+ah) y_k = y_{k-1} + u_k - u_{k-1} + bh u_k,$$ or $$ y_k = \frac{1}{1+ah} y_{k-1} + \frac{1+bh}{1+ah}u_k - \frac{1}{1+ah}u_{k-1}. $$ Example With the system $$ F(s) = \frac{s+1}{s+2} $$ and the sampling time $$ h=0.1 $$ we get the difference equation $$ y_k = \frac{1}{1.2}y_{k-1} + \frac{1.1}{1.2}u_k - \frac{1}{1.2} u_{k-1}. $$ Let's implement the system and see how the discrete approximation compares to the continuous-time system for the case of a step-response. Step1: Exercise Why is the error in the discrete approximation larger in the beginning than at the end of the step-response? Make a discrete approximation of the transfer function $$ F(s) = \frac{3}{s+3} $$ using the sampling time $$ h=0.2 $$ Then simulate and plot a step-response for the continuous- and discrete system, following the example above. Hint Step2: Recursively computing values of a polynomial using difference equations In the lecture by Peter Corke, he talks about the historical importance of difference equations for computing values of a polynomial. Let's look at this in some more detail. A first order polynomial Consider the polynomial $$ p(x) = 4x + 2. $$ The first difference is $$ \Delta p(x) = p(x) - p(x-h) = 4x + 2 - \big( 4(x-h) + 2 \big) = 4h, $$ and the second order difference is zero (as are all higher order differences) Step3: Second order polynomial For a second order polynomial $$ p(x) = a_0x^2 + a_1x + a_2 $$ we have $$ p''(x) = 2a_0, $$ and the differences $$ \Delta p(x) = p(x) - p(x-h) = a_0x^2 + a_1x + a_2 - \big( a_0(x-h)^2 + a_1(x-h) + a_2 \big) = h(2a_0x + a_1) -a_0h^2, $$ $$ \Delta^2 p(x) = \Delta p(x) - \Delta p(x-h) = h(2a_0x+a_1) - a_0h^2 - \big( h(2a_0(x-h) + a_1) - a_0 h^2 \big) = h^22a_0 $$ Recall the difference equation using the second order difference $$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x)$$ We now get $$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x) = p(x-h) + \Delta p(x-h) + h^2 2 a_0,$$ or, using the definition of the first-order difference $\Delta p(x-h)$ $$ p(x) = 2p(x-h) - p(x-2h) + h^2 2 a_0,$$ Consider the second order polynomial $$ p(x) = 2x^2 - 3x + 2, $$ and compute values using the difference equation.
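Returning to the exercise stated earlier ($F(s) = \frac{3}{s+3}$ with $h = 0.2$): the same backward-difference recipe gives $$ y_k = \frac{1}{1+3h}y_{k-1} + \frac{3h}{1+3h}u_k = 0.625\,y_{k-1} + 0.375\,u_k. $$ One possible solution sketch, mirroring the example above (an illustration, not the only valid answer):

import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt

# Continuous-time system F(s) = 3 / (s + 3)
F = signal.lti([3], [1, 3])
t, y = signal.step(F)

# Discrete approximation with h = 0.2
h = 0.2
c = 1.0 / (1 + 3 * h)      # 0.625
d0 = 3 * h / (1 + 3 * h)   # 0.375

N = 15
td = h * np.arange(N)
ud = np.ones(N)            # unit step input
yd = np.zeros(N)
yd[0] = d0 * ud[0]
for k in range(1, N):
    yd[k] = c * yd[k - 1] + d0 * ud[k]

plt.plot(t, y, linewidth=2)
plt.plot(td, yd, 'o')
plt.legend(('Continuous-time system', 'Discrete approximation'))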
Python Code: import numpy as np import scipy.signal as signal import matplotlib.pyplot as plt %matplotlib inline # Define the continuous-time linear time invariant system F a = 2 b = 1 num = [1, b] den = [1, a] F = signal.lti(num, den) # Plot a step response (t, y) = signal.step(F) plt.figure(figsize=(14,6)) plt.plot(t, y, linewidth=2) # Solve the difference equation y_k = c y_{k-1} + d_0 u_k + d_1 u_{k-1} h = 0.1 # The sampling time c = 1.0/(1 + a*h) d0 = (1 + b*h) / (1 + a*h) d1 = -c td = np.arange(35)* h #The sampling time instants ud = np.ones(35) # The input signal is a step, limited in time to 3.5 seconds yd = np.zeros(35) # A vector to hold the discrete output signal yd[0] = c*0 + d0*ud[0] - d1*0 # The first sample of the output signal for k in range(1,35): # And then the rest yd[k] = c*yd[k-1] + d0*ud[k] + d1*ud[k-1] plt.plot(td, yd, 'o', markersize=8) plt.xlim([0, 3.5]) plt.ylim([0, 1]) plt.legend(('Continuous-time system', 'Discrete approximation')) Explanation: From transfer function to difference equation In approximately the middle of Peter Corke's lecture Introduction to digital control, he explaines how to go from a transfer function description of a controller (or compensator) to a difference equation that can be implemented on a microcontroller. The idea is to recognize that the term $$ sX(s) $$ in a transfer function is the laplace transform of the derivative of $x(t)$, \begin{equation} sX(s) + x(0) \quad \overset{\mathcal{L}}{\longleftrightarrow} \quad \frac{d}{dt} x(t), \end{equation} where the inital value $x(0)$ is often taken to be zero. We then make use of a discrete approximation of the derivative $$ \frac{d}{dt}x(t) \approx \frac{x(t-h) - x(t)}{h}, $$ where $h$ is the time between the samples in the sampled version of signal $x(t)$. The steps to convert the system on transfer function form $$ Y(s) = F(s)U(s) = \frac{s+b}{s+a}U(s) $$ are to write $$ (s+a)Y(s) = (s+b)U(s) $$ $$ sY(s) + aY(s) = sU(s) + bU(s), $$ take the inverse Laplace transform $$ \frac{d}{dt} y + ay = \frac{d}{dt} u + bu$$ and use the discrete approximation of the derivative $$ \frac{y_k - y_{k-1}}{h} + ay_k = \frac{u_k - u_{k-1}}{h} + bu_k $$ which can be written $$ (1+ah) y_k = y_{k-1} + u_k - u_{k-1} + bh u_k,$$ or $$ y_k = \frac{1}{1+ah} y_{k-1} + \frac{1+bh}{1+ah}u_k - \frac{1}{1+ah}u_{k-1}. $$ Example With the system $$ F(s) = \frac{s+1}{s+2} $$ and the sampling time $$ h=0.1 $$ we get the difference equation $$ y_k = \frac{1}{1.2}y_{k-1} + \frac{1.1}{1.2}u_k - \frac{1}{1.2} u_{k-1}. $$ Let's implement the system and see how the discrete approximation compares to the continuous-time system for the case of a step-response. End of explanation ## Your python code goes here Explanation: Exercise Why is the error in the discrete approximation larger in the beginning than at the end of the step-response? Make a discrete approximation of the transfer function $$ F(s) = \frac{3}{s+3} $$ using the sampling time $$ h=0.2 $$ Then simulate and plot a step-response for the continuous- and discrete system, following the example above. Hint: Copy the python code for the example above into the code cell below and modify for the exercise. End of explanation def p1(x): return 4*x + 2 # Our first-order polynomial # Compute values for x=[0,0.2, 0.4, ... 2] recursively using the difference equation h = 0.2 x = h*np.arange(11) # Gives the array [0,0.2, 0.4, ... 
2] pd = np.zeros(11) d1 = 4*h # Need to compute the first value as the initial value for the difference equation, pd[0] = p1(x[0]) for k in range(1,10): # Solve difference equation pd[k] = pd[k-1] + d1 plt.figure(figsize=(14,6)) plt.plot(x, p1(x), linewidth=2) plt.plot(x, pd, 'ro') Explanation: Recursively computing values of a polynomial using difference equations In the lecture by Peter Corke, he talks about the historical importance of difference equations for computing values of a polynomial. Let's look at this in some more detail. A first order polynomial Consider the polynomial $$ p(x) = 4x + 2. $$ The first difference is $$ \Delta p(x) = p(x) - p(x-h) = 4x + 2 - \big( 4(x-h) + 2 \big) = 4h, $$ and the second order difference is zero (as are all higher order differences): $$ \Delta^2 p(x) = \Delta p(x) - \Delta p(x-h) = 4h - 4h = 0. $$ Using the firs order difference, we can also write the second order difference $ \Delta p(x) - \Delta p(x-h) = \Delta^2 p(x) $ as $$ p(x) - p(x-h) - \Delta p(x-h) = \Delta^2p(x) $$ or $$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x)$$ which for the first order polynomial above becomes $$ p(x) = p(x-h) + \Delta p(x-h) = p(x-h) + 4h. $$ End of explanation a0 = 2 a1 = -3 a2 = 2 def p2(x): return a0*x**2 + a1*x + a2 # Our second-order polynomial # Compute values for x=[0,0.2, 0.4, ... 8] recursively using the difference equation h = 0.2 x = h*np.arange(41) # Gives the array [0,0.2, 0.4, ... 2] d1 = np.zeros(41) # The first differences pd = np.zeros(41) d2 = h**2*2*a0 # The constant, second difference # Need to compute the first two values to get the initial values for the difference equation, pd[0] = p2(x[0]) pd[1] = p2(x[1]) for k in range(2,41): # Solve difference equation pd[k] = 2*pd[k-1] - pd[k-2] + d2 plt.figure(figsize=(14,6)) plt.plot(x, p2(x), linewidth=2) # Evaluating the polynomial plt.plot(x, pd, 'ro') # The solution using the difference equation Explanation: Second order polynomial For a second order polynomial $$ p(x) = a_0x^2 + a_1x + a_2 $$ we have $$ p''(x) = 2a_0, $$ and the differences $$ \Delta p(x) = p(x) - p(x-h) = a_0x^2 + a_1x + a_2 - \big( a_0(x-h)^2 + a_1(x-h) + a_2 \big) = h(2a_0x + a_1) -a_0h^2, $$ $$ \Delta^2 p(x) = \Delta p(x) - \Delta p(x-h) = h(2a_0x+a_1) - a_0h^2 - \big( h(2a_0(x-h) + a_1) - a_0 h^2 \big) = h^22a_0 $$ Recall the difference equation using the second order difference $$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x)$$ We now get $$ p(x) = p(x-h) + \Delta p(x-h) + \Delta^2 p(x) = p(x-h) + \Delta p(x-h) + h^2 2 a_0,$$ or, using the definition of the first-order difference $\Delta p(x-h)$ $$ p(x) = 2p(x-h) - p(x-2h) + h^2 2 a_0,$$ Consider the second order polynomial $$ p(x) = 2x^2 - 3x + 2, $$ and compute values using the difference equation. End of explanation
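A quick stand-alone spot-check of the second-order recursion above: with $p(x) = 2x^2 - 3x + 2$ and $h = 0.2$, the recursion gives $2\,p(0.2) - p(0) + 2a_0h^2 = 2\cdot 1.48 - 2 + 0.16 = 1.12$, which matches $p(0.4)$ evaluated directly:

# Spot-check: the recursion reproduces the directly evaluated polynomial value.
h, a0, a1, a2 = 0.2, 2, -3, 2
p = lambda x: a0 * x**2 + a1 * x + a2
recursed = 2 * p(0.2) - p(0.0) + 2 * a0 * h**2
assert abs(recursed - p(0.4)) < 1e-12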
3,630
Given the following text description, write Python code to implement the functionality described below step by step Description: Make a well Step1: Make a striplog and add it to the well Step2: Striplogs are added as a dictionary, but you can access them via attributes too Step3: Make another striplog and add it We can also make striplogs from cuttings data, for example. So let's read cuttings data from an Excel spreadsheet and add another striplog to the well. Step4: Export to LAS 3.0 Step5: Reading from LAS files Step6: Plotting with logs Step7: Now we have all the well data in one place, we should be able to make some nice plots. For example, use the striplog in a log plot, or use the striplog to define categories for colouring a cross-plot. First, let's try filling an ordinary curve – the GR log — with lithlogy colours. A classic workaround for this is to plot the striplog, then plot the GR log, then fill the GR log with white to mask the striplog. Step8: Crossplots Now let's try a cross-plot. To facilitate this, generate a new log with the striplog.to_log() method Step9: Next step Step10: We can get a bit fancier still
Python Code: from striplog import Well print(Well.__doc__) fname = 'P-129_out.LAS' well = Well(fname) well.data['GR'] well.well.DATE.data Explanation: Make a well End of explanation from striplog import Striplog, Legend legend = Legend.default() f = 'P-129_280_1935.png' name, start, stop = f.strip('.png').split('_') striplog = Striplog.from_img(f, float(start), float(stop), legend=legend, tolerance=35) %matplotlib inline striplog.plot(legend, ladder=True, interval=(5,50), aspect=5) well.add_striplog(striplog, "striplog") Explanation: Make a striplog and add it to the well End of explanation well.striplog well.striplog.striplog.source well.striplog.striplog.start Explanation: Striplogs are added as a dictionary, but you can access them via attributes too: End of explanation import xlrd xls = "_Cuttings.xlsx" book = xlrd.open_workbook(xls) sh = book.sheet_by_name("P-129") tops = [c.value for c in sh.col_slice(0, 4)] bases = [c.value for c in sh.col_slice(1, 4)] descr = [c.value for c in sh.col_slice(3, 4)] rows = [i for i in zip(tops, bases, descr)] rows[:5] from striplog import Lexicon lexicon = Lexicon.default() cuttings = Striplog.from_array(rows, lexicon) cuttings cuttings.plot(interval=(5,50), aspect=5) print(cuttings[:5]) well.add_striplog(cuttings, "cuttings") well.striplog.cuttings[3:5] Explanation: Make another striplog and add it We can also make striplogs from cuttings data, for example. So let's read cuttings data from an Excel spreadsheet and add another striplog to the well. End of explanation print(well.striplogs_to_las3(use_descriptions=True)) Explanation: Export to LAS 3.0 End of explanation fname = 'P-129_striplog_from_image.las' p129 = Well(fname, lexicon=lexicon, unknown_as_other=True) p129.striplog p129.striplog.lithology.plot(legend, interval=(10,50)) Explanation: Reading from LAS files End of explanation import matplotlib.pyplot as plt Explanation: Plotting with logs End of explanation z = well.data['DEPT'] log = well.data['GR'] lineweight = 0.5 plot_min = 0 plot_max = 200 # Set up the figure. fig = plt.figure(figsize=(4,16)) # Plot into the figure. # First, the lith log, the full width of the log. ax = fig.add_subplot(111) well.striplog.striplog.plot_axis(ax, legend, default_width=plot_max) # Plot the DT with a white fill to fake the curve fill. ax.plot(log, z, color='k', lw=lineweight) ax.fill_betweenx(z, log, plot_max, color='w', zorder = 2) # Limit axes. ax.set_xlim(plot_min, plot_max) ax.set_ylim(z[-1], 0) # Show the figure. #plt.savefig('/home/matt/filled_log.png') plt.show() Explanation: Now we have all the well data in one place, we should be able to make some nice plots. For example, use the striplog in a log plot, or use the striplog to define categories for colouring a cross-plot. First, let's try filling an ordinary curve – the GR log — with lithlogy colours. A classic workaround for this is to plot the striplog, then plot the GR log, then fill the GR log with white to mask the striplog. End of explanation z, lith = well.striplog.striplog.to_log(start=well.start, stop=well.stop, step=well.step, legend=legend) import matplotlib.pyplot as plt fig = plt.figure(figsize=(4, 10)) ax = fig.add_subplot(121) ax.plot(lith, z) ax.set_ylim(z[-1], 0) ax.get_yaxis().set_tick_params(direction='out') ax2 = fig.add_subplot(122) striplog.plot_axis(ax2, legend=legend) ax2.set_ylim(z[-1], 0) ax2.get_xaxis().set_ticks([]) ax2.get_yaxis().set_ticks([]) #plt.savefig('/home/matt/discretized.png') plt.show() Explanation: Crossplots Now let's try a cross-plot. 
To facilitate this, generate a new log with the striplog.to_log() method: End of explanation import matplotlib.colors as clr cmap = clr.ListedColormap([i.colour for i in legend]) plt.figure(figsize=(10,7)) plt.scatter(well.data['GR'], well.data['DT'], c=lith, edgecolors='none', alpha=0.8, cmap=cmap, vmin=1) plt.xlim(0, 200); plt.ylim(0,150) plt.xlabel('GR'); plt.ylabel('DT') plt.grid() ticks = [int(i) for i in list(set(lith))] ix = [int(i)-1 for i in list(set(lith)) if i] labels = [i.component.summary() for i in legend[ix]] cbar = plt.colorbar() cbar.set_ticks(ticks) cbar.set_ticklabels(labels) plt.show() Explanation: Next step: make a colourmap from the legend. This could be a method of the legend, but it's so easy it hardly seems worth the trouble. End of explanation from matplotlib.patches import Rectangle # Start the plot. fig = plt.figure(figsize=(12,10)) # Crossplot. ax = fig.add_axes([0.15,0.15,0.6,0.6]) ax.scatter(well.data['GR'], well.data['DT'], c=lith, edgecolors='none', alpha=0.67, cmap=cmap, vmin=1) ax.set_xlim(0, 200); plt.ylim(0,150) ax.set_xlabel('GR $API$'); ax.set_ylabel(r'DT $\mu s/m$') ax.grid() # Draw the legend. axc = fig.add_axes([0.775,0.2,0.15,0.5]) i = 0 for d in legend: if i+1 in lith: tcolour = 'k' talpha = 1.0 tstyle = 'medium' else: tcolour = 'k' talpha = 0.25 tstyle = 'normal' rect = Rectangle((0, i), 1, 1, color=d.colour, alpha=0.67 ) axc.add_patch(rect) text = axc.text(0.5, 0.5+i, d.component.summary(default="Unassigned"), color=tcolour, ha='center', va='center', fontsize=10, weight=tstyle, alpha=talpha) i += 1 axc.set_xlim(0,1); axc.set_ylim(len(legend), 0) axc.set_xticks([]); axc.set_yticks([]) axc.set_title('Lithology', fontsize=12) # Finish. #plt.savefig('/home/matt/crossplot.png') plt.show() Explanation: We can get a bit fancier still: End of explanation
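A small, hedged refactor sketch (not from the original notebook): the legend-to-colormap and legend-to-label logic is used twice above and could live in one helper; it only assumes that iterating over the legend yields items exposing .colour and .component.summary(), as the cells above already do:

import matplotlib.colors as clr

def legend_to_cmap_and_labels(legend):
    # One place to build the categorical colormap and the matching tick labels.
    cmap = clr.ListedColormap([d.colour for d in legend])
    labels = [d.component.summary(default="Unassigned") for d in legend]
    return cmap, labels

cmap, labels = legend_to_cmap_and_labels(legend)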
3,631
Given the following text description, write Python code to implement the functionality described below step by step Description: Object oriented programming (OOP) Python is not ony a powerful scripting language, but it also supports object-oriented programming. In fact, everything in Python is an object. Working with functions is instead called procedure-oriented programming. Both styles (or philosophies) are acceptable and appropriate. Objected-oriented programming is well suited for creating modules and APIs. Objects are defined and handled trhough the Class type More reading on objected-oriented programming Step1: A note about scopes and namespaces Like in other programming languages, variables are only visible in certain parts of the code, formally termed ["scopes"](https Step2: List-comprehensions (and all other comprehensions) have their own scope Step3: Note Step5: The attribute pi is present in both namespaces, so that there are no conflicts between variables or functions with the same name. Classes are also examples of namespaces. Classes Think of a class as a container of data and functionality at the same time. A class is essentially a new object type, which can create new instances, much like the int type is used to create different numbers (the instances). Each class instance can have attributes of any type attached to it and methods that can act on those attributes or other variables. Example Step6: MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class. Class instantiation is the creation of a new instance of type MyClass, and uses the function notation. Step7: New class instances can be created with specific initial variables, either with default values or user-defined ones. The __init__ method is used for this task, usually as the first method in the class definition. If __init__ has any positional arguments, an instance cannot be created without providing them. Step8: What about the self variable? self refers to the specific instance of the class any method acts upon. The two following cells are perfectly equivalent, even though the second notation is very rare. Step9: On top of the attributes (variables) and methods (functions) created when a class instance is initiated, we can attach attributes to an already existing class instance Step10: Class and instance variables Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class Step11: Warning When mutable objects (lists, and so on, see previous chapter) are used as class variables, any change to that variable will be shared by all of that class instances. Step12: Inheritance A powerful design principle in OOP is class inheritance Step13: An underappreciated advantage of inheritance is that it is allows to expand classes that belong to different namespaces. This means that even classes belonging to different modules (or even the base namespace) can be expanded. Step14: Public and private attributes/methods Another paradigm of OOP is the distinction between public/private/protected attributes and methods. Specifically Step15: In the above example, the _private attribute is not meant to be called by the class user, but it can still be easily accessed. 
In languages like C++ accessing or changing the value of a private attribute would trigger an error. In python it is possible but might interfere with the intended purpose of that attribute/method. A way to obfuscate a private attribute/method a bit more is to use Name mangling, that is using a double underscore before the attribute name Step16: We have created a new attribute called __private, but the original class attribute has not been changed. That is because name mangling has transformed the __private attribute to _Reverser__private internally. Step17: Operators Step18: The sum operator is in fact a method of the int class. The following expression is exactly equivalent to calling x + y. Step19: A comprehensive list of operators that can be implemented for any given class can be found here. It's worth noting that many of those operators are already implemented for any class. Re-implementing an existing operator (or more generally a method) is termed overloading. Step20: For instance, the __eq__ method implements the == boolean operation. The basic implementation checks whether two instances are exactly the same, a behaviour that is not always intuitive. Step21: Other interesting operators
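To make the overloading discussion above concrete, a hedged sketch that adds __str__ and __lt__ (both from the "other interesting operators" family) next to the __eq__ shown in the code; it assumes the Sequence/Protein classes defined further down:

class Protein(Sequence):
    def get_exon_length(self):
        return len(self.sequence) * 3
    def __eq__(self, other):
        return self.sequence == other.sequence
    def __str__(self):                    # used by print() and format()
        return '%s: %d residues' % (self.name, len(self.sequence))
    def __lt__(self, other):              # used by < and, indirectly, by sorted()
        return len(self.sequence) < len(other.sequence)

p1 = Protein('short', 'MKV')
p2 = Protein('long', 'MPNFFIDRPIFAWVIAIIIML')
print(p2)                        # long: 21 residues
print(p1 < p2)                   # True
print(sorted([p2, p1])[0].name)  # short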
Python Code: print(dir(bool)) Explanation: Object oriented programming (OOP) Python is not ony a powerful scripting language, but it also supports object-oriented programming. In fact, everything in Python is an object. Working with functions is instead called procedure-oriented programming. Both styles (or philosophies) are acceptable and appropriate. Objected-oriented programming is well suited for creating modules and APIs. Objects are defined and handled trhough the Class type More reading on objected-oriented programming: From Wikipedia OOP in python Classes in python's documentation A note about objects (reprise) Everything in Python is an object, which can be seen as an advanced version of a variable objects have methods the dir keyword allows the user to discover them This is different from other languages like c or C++, where int and bool are primitive types End of explanation def f(): x = 1 print(x) x = 2 f() print(x) Explanation: A note about scopes and namespaces Like in other programming languages, variables are only visible in certain parts of the code, formally termed ["scopes"](https://en.wikipedia.org/wiki/Scope_(computer_science). In practical terms this means that certain variables will only be visible inside a limited part of the code, and that variables in different scopes can have the same name, without generating any conflict. Consider the following example: End of explanation x = 2 a = [x**2 for x in range(10)] print(a) print(x) Explanation: List-comprehensions (and all other comprehensions) have their own scope End of explanation import math import numpy print(math.pi, numpy.pi) # don't do this at home math.pi = 2 print(math.pi, numpy.pi) Explanation: Note: despite these notes about scopes, it is still a good idea to use descriptive variable names, and to avoid name conflicts as much as possible. Note: despite scopes, it is still a good idea to avoid globally-defined variables as much as possible. Namespaces define the "areas" of the code between which the same variable names can appear. Modules are a great example of a namespace: End of explanation class MyClass: A simple example class i = 12345 def __init__(self): self.data = [] def f(self): return 'hello world' Explanation: The attribute pi is present in both namespaces, so that there are no conflicts between variables or functions with the same name. Classes are also examples of namespaces. Classes Think of a class as a container of data and functionality at the same time. A class is essentially a new object type, which can create new instances, much like the int type is used to create different numbers (the instances). Each class instance can have attributes of any type attached to it and methods that can act on those attributes or other variables. Example: a cake recipe (class or type) and a baked cake (instance) The syntax to create a class is as follows; notice how class names are by convention written with the first letter uppercase. Python class ClassName: statement_1 . . . statement_N When a class definition is entered, a new namespace is created, and used as the local scope. End of explanation x = MyClass() print(x.i) print(x.f()) print(x.__doc__) print(x.i) x.i = 2 print(x.i) Explanation: MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class. 
Class instantiation is the creation of a new instance of type MyClass, and uses the function notation. End of explanation class Complex: def __init__(self, realpart, imagpart): self.r = realpart self.i = imagpart def generic_method(self, value): print(value) x = Complex() x = Complex(1.1, -2.3) x.r, x.i Explanation: New class instances can be created with specific initial variables, either with default values or user-defined ones. The __init__ method is used for this task, usually as the first method in the class definition. If __init__ has any positional arguments, an instance cannot be created without providing them. End of explanation x.generic_method(100) Complex.generic_method(x, 100) Explanation: What about the self variable? self refers to the specific instance of the class any method acts upon. The two following cells are perfectly equivalent, even though the second notation is very rare. End of explanation x.counter = 1 while x.counter < 10: x.counter = x.counter * 2 print(x.counter) del x.counter x.counter Explanation: On top of the attributes (variables) and methods (functions) created when a class instance is initiated, we can attach attributes to an already existing class instance End of explanation class Dog: # class variable shared by all instances kind = 'canine' def __init__(self, name): # instance variable unique to each instance self.name = name d = Dog('Fido') e = Dog('Buddy') # shared by all dogs print(d.kind) print(e.kind) # unique to each instance print(d.name) print(e.name) Explanation: Class and instance variables Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class: End of explanation class Dog: # this is ok kind = 'canine' # mutable class variable tricks = [] def __init__(self, name): self.name = name def add_trick(self, trick): self.tricks.append(trick) d = Dog('Fido') e = Dog('Buddy') # operating on the `tricks` class variable in two separate instances d.add_trick('roll over') e.add_trick('play dead') # changing the `kind` class variable e.kind = 'super-dog' print(d.kind) print(d.tricks) Explanation: Warning When mutable objects (lists, and so on, see previous chapter) are used as class variables, any change to that variable will be shared by all of that class instances. End of explanation # base class class Sequence: def __init__(self, name, sequence): self.name = name self.sequence = sequence # inherits Sequence, # has specific attributes and methods class Dna(Sequence): def reverse_complement(self): translation_table = str.maketrans('ACGTacgt', 'TGCAtgca') revcomp_sequence = self.sequence.translate(translation_table)[::-1] return revcomp_sequence # inherits Sequence, # has specific attributes and methods class Protein(Sequence): def get_exon_length(self): return len(self.sequence) * 3 dna = Dna('gene1', 'ACTGCGACCAAGACATAG') dna.reverse_complement() prot = Protein('protein1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') prot.reverse_complement() prot = Protein('protein1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') prot.get_exon_length() Explanation: Inheritance A powerful design principle in OOP is class inheritance: in a nutshell, it allows to reuse and expand code written for a class (the parent) and create a new one that has all the characteristics of the parent class and additional attributes and methods. From a type we can then create an infinite number of subtypes. 
Usually the parent class is a generic object and the subsequent subtypes (children) are more specialized concepts. End of explanation class BetterInt(int): def is_odd(self): return bool(self % 2) x = BetterInt(2) x.is_odd() Explanation: An underappreciated advantage of inheritance is that it is allows to expand classes that belong to different namespaces. This means that even classes belonging to different modules (or even the base namespace) can be expanded. End of explanation class Reverser(): def __init__(self, name): self.public = name self._private = name[::-1] def get_reverse(self): return self._private x = Reverser('hello world') print(x.public) print(x.get_reverse()) x._private = 'luddism' print(x.get_reverse()) Explanation: Public and private attributes/methods Another paradigm of OOP is the distinction between public/private/protected attributes and methods. Specifically: public: completely visible and accessible private: only visible from inside the class protected: only visible from inside the class they belong to, and any subclass derived from it In python, all attributes and methods are public, but there are a few conventions to have them treated as private. They would still be publically accessible, but the author of the class has "warned" the user not to tamper with them to avoid possible conflicts. End of explanation class Reverser(): def __init__(self, name): self.public = name self.__private = name[::-1] def get_reverse(self): return self.__private x = Reverser('hello world') print(x.public) print(x.get_reverse()) x.__private = 'luddism' print(x.get_reverse()) Explanation: In the above example, the _private attribute is not meant to be called by the class user, but it can still be easily accessed. In languages like C++ accessing or changing the value of a private attribute would trigger an error. In python it is possible but might interfere with the intended purpose of that attribute/method. A way to obfuscate a private attribute/method a bit more is to use Name mangling, that is using a double underscore before the attribute name: End of explanation print(x.__private) print(x._Reverser__private) Explanation: We have created a new attribute called __private, but the original class attribute has not been changed. That is because name mangling has transformed the __private attribute to _Reverser__private internally. End of explanation x = 1 y = 2 x + 2 Explanation: Operators: handy methods As stated at the beginning of this chapter, everything in python is an object. As we have seen with objects of type int, we can apply some operators to them: End of explanation x = 1 y = 2 x.__add__(y) Explanation: The sum operator is in fact a method of the int class. The following expression is exactly equivalent to calling x + y. End of explanation x = Protein('prot1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') dir(x) Explanation: A comprehensive list of operators that can be implemented for any given class can be found here. It's worth noting that many of those operators are already implemented for any class. Re-implementing an existing operator (or more generally a method) is termed overloading. 
End of explanation p1 = Protein('prot1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') p2 = Protein('prot1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') p1 == p2 # let's fix it class Protein(Sequence): def get_exon_length(self): return len(self.sequence) * 3 def __eq__(self, other_instance): return self.sequence == other_instance.sequence p1 = Protein('prot1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') p2 = Protein('prot1', 'MPNFFIDRPIFAWVIAIIIMLAGGLAILKLPVAQYPTIAP') p1 == p2 Explanation: For instance, the __eq__ method implements the == boolean operation. The basic implementation checks whether two instances are exactly the same, a behaviour that is not always intuitive. End of explanation def sum_two_things(a, b): return a + b sum_two_things(1, 2) sum_two_things('a', 'b') Explanation: Other interesting operators: __lt__ (x<y), __le__ (x<=y) __gt__ (x>y), __ge__ (x>=y) __eq__ (x==y), __ne__ (x!=y) __str__: how the instance will be represented when calling the print or format functions on it __bool__ cast the instance to bool, for instance based on one of its attributes Many more are available, and allow to create new interesting data types. Duck typing Unlike languages like c, where the type of arguments to functions have to be previously defined, python uses the "Duck typing" paradigm. "If it walks like a duck and it quacks like a duck, then it must be a duck." In other words it means that we are not interested in checking and enforcing the type of an object to be used by a method, only that it needs to contain certain attributes and methods. More importantly, the check is performed at runtime, and not at compilation time (which python doesn't have anyway!). This allows greater flexibility in passing objects to functions. End of explanation
3,632
Given the following text description, write Python code to implement the functionality described below step by step Description: Annealed importance sampling [This largely follows the review in section 3 of Step1: 2. Define annealing distributions Here we'll be annealing between two unnormalized Gaussian distributions with geometric mean intermediates. They have the same variances, so they should have the same normalizing constants. Step2: 2.1. Plot annealing distributions Step3: 3. Define transition kernel Here we'll just do a metropolized random walk with spherical gaussian proposals. Step4: 4. Run AIS on this toy example Step5: 4.1. Plot results It should converge to one, since the initial and target distributions have the same normalizing constant. Step6: 5. A more interesting example Let's sample a biomolecule's configuration space in this way, and maybe estimate its partition function. For now, let's do alanine dipeptide. 5.1-5.2. Define annealing distributions and transition kernels Step7: 5.3. Run AIS From $\mathcal{N}(\mathbf{0},\mathbf{I})$ to $\exp[-U(\x)/k_BT]$ in only a gazillion annealing distributions!
Python Code: import numpy as np import numpy.random as npr npr.seed(0) import matplotlib.pyplot as plt plt.rc('font', family='serif') %matplotlib inline def annealed_importance_sampling(draw_exact_initial_sample, transition_kernels, annealing_distributions, n_samples=1000): ''' draw_exact_initial_sample: Signature: Arguments: none Returns: R^d transition_kernels: length-T list of functions, each function signature: Arguments: R^d Returns: R^d can be any transition operator that preserves its corresponding annealing distribution annealing_distributions: length-T list of functions, each function signature: Arguments: R^d Returns: R^+ annealing_distributions[0] is the initial density annealing_distributions[-1] is the target density n_samples: positive integer ''' dim=len(draw_exact_initial_sample()) T = len(annealing_distributions) weights = np.ones(n_samples,dtype=np.double) ratios = [] xs = [] for k in range(n_samples): x = np.zeros((T,dim)) ratios_ = np.zeros(T-1,dtype=np.double) x[0] = draw_exact_initial_sample() for t in range(1,T): f_tminus1 = annealing_distributions[t-1](x[t-1]) f_t = annealing_distributions[t](x[t-1]) ratios_[t-1] = f_t/f_tminus1 weights[k] *= ratios_[t-1] x[t] = transition_kernels[t](x[t-1],target_f=annealing_distributions[t]) xs.append(x) ratios.append(ratios_) return np.array(xs), weights, np.array(ratios) Explanation: Annealed importance sampling [This largely follows the review in section 3 of: Sandwiching the marginal likelihood using bidirectional Monte Carlo (Grosse, Ghahramani, and Adams, 2015)] $\newcommand{\x}{\mathbf{x}} \newcommand{\Z}{\mathcal{Z}}$ Goal: We want to estimate the normalizing constant $\Z = \int p_T(\x) d\x$ of a complicated distribution $p_T$ we know only up to a normalizing constant. A basic strategy: Importance sampling, i.e. draw each sample from an easy distribution $\x^{(k)} \sim p_1$, then reweight by $w^{(k)}\equiv p_T(\x)/p_1(\x)$. After drawing $K$ such samples, we can estimate the normalizing constant as $$\hat{\Z} = \frac{1}{K} \sum_{k=1}^K w^{(k)} \equiv \frac{1}{K} \sum_{k=1}^K \frac{p_T(\x^{(k)} )}{p_1(\x^{(k)})}$$. Problem: Although importance sampling will eventually work as $K \to \infty$ as long as the support of $p_1$ contains the support of $p_T$, this will be extremely inefficient if $p_1$ and $p_T$ are very different. Actual strategy: Instead of doing the importance reweighting computation in one step, gradually convert a sample from the simpler distribution $p_1$ to the target distribution $p_T$ by introducing a series of intermediate distributions $p_1,p_2,\dots,p_{T}$, chosen so that no $p_t$ and $p_{t+1}$ are dramatically different. We can then estimate the overall importance weight as a product of more reasonable ratios. Inputs: - Desired number of samples $K$ - An initial distribution $p_1(\x)$ for which we can: - Draw samples: $\x_s \sim p_1(\x)$ - Evaluate the normalizing constant: $\Z_1$ - A target (unnormalized) distribution function: $f_T(\x)$ - A sequence of annealing distribution functions $f_1,\dots,f_T$. These can be almost arbitrary, but here are some options: - We can construct these generically by taking geometric averages of the initial and target distributions: $f_t(\x_) = f_1(\x)^{1-\beta_t}f_T(\x)^{\beta_t}$ - In the case of a target distribution $f_T(\x) \propto \exp(-U(\x) \beta)$ (where $\beta$ is the inverse temperature), we could also construct the annealing distributions as Boltzmann distributions at decreasing temperatures. 
- In the case of a target distribution defined in terms of a force field, we could also construct the annealing distributions by starting from an alchemically softened form of the potential and gradually turning on various parts of the potential. - Could use "boost potentials" from accelerated MD (http://www.ks.uiuc.edu/Research/namd/2.9/ug/node63.html) - If we have some way to make dimension-matching proposals, we might use coarse-grained potentials as intermediates. - A sequence of Markov transition kernels $\mathcal{T}_1,\dots,\mathcal{T}_T$, where each $\mathcal{T}_t$ leaves its corresponding distribution $p_t$ invariant. These can be almost arbitrary, but here are some options: - Random-walk Metropolis - Symplectic integrators of Hamiltonian dynamics - NCMC Outputs: - A collection of weights $w^{(k)}$, from which we can compute an unbiased estimate of the normalizing constant of $f_t$ by $\hat{\Z}=\sum_{k=1}^K w^{(k)} / K$ Algorithm: for $k=1$ to $K\$: 1. $\x_1 \leftarrow$ sample from $p_1(\x)$ 2. $w^{(k)} \leftarrow \Z_1$ 3. for $t=2$ to $T$: - $w^{(k)} \leftarrow w^{(k)} \frac{f_t(\x_{t-1})}{f_{t-1}(\x_{t-1})}$ - $\x_t \leftarrow $ sample from $\mathcal{T}t(\x | \x{t-1})$ 1. Implement AIS End of explanation num_intermediates = 25 betas = np.linspace(0,1,num_intermediates+2) dim=1 def initial_density(x): return np.exp(-((x)**2).sum()/2) def draw_from_initial(): return npr.randn(dim) def target_density(x): return np.exp(-((x-4)**2).sum()/2) class GeometricMean(): def __init__(self,initial,target,beta): self.initial = initial self.target = target self.beta = beta def __call__(self,x): f1_x = self.initial(x) fT_x = self.target(x) return f1_x**(1-self.beta) * fT_x**self.beta annealing_distributions = [GeometricMean(initial_density,target_density,beta) for beta in betas] Explanation: 2. Define annealing distributions Here we'll be annealing between two unnormalized Gaussian distributions with geometric mean intermediates. They have the same variances, so they should have the same normalizing constants. End of explanation x = np.linspace(-5,10,100) for i,f in enumerate(annealing_distributions): if i == 0 or i == len(annealing_distributions)-1: if i == 0: label='Initial' else: label='Target' else: label=None y = np.array([f(x_) for x_ in x]) plt.plot(x,y/y.max(),label=label) plt.title('Annealing distributions') plt.xlabel(r'$x$') plt.ylabel(r'$f_t(x)$') plt.legend(loc='best') Explanation: 2.1. Plot annealing distributions End of explanation def gaussian_random_walk(x, target_f, n_steps=10, scale=0.5): x_old = x f_old = target_f(x_old) dim=len(x) for i in range(n_steps): proposal = x_old + npr.randn(dim)*scale f_prop = target_f(proposal) if (f_prop / f_old) > npr.rand(): x_old = proposal f_old = f_prop return x_old transition_kernels = [gaussian_random_walk]*len(annealing_distributions) Explanation: 3. Define transition kernel Here we'll just do a metropolized random walk with spherical gaussian proposals. End of explanation xs, weights, ratios = annealed_importance_sampling(draw_from_initial, transition_kernels, annealing_distributions, n_samples=10000) Explanation: 4. 
Run AIS on this toy example End of explanation plt.plot((np.cumsum(weights)/np.arange(1,len(weights)+1))) plt.hlines(1.0,0,len(weights)) plt.xlabel('# samples') plt.ylabel(r'Estimated $\mathcal{Z}_T / \mathcal{Z}_1$') plt.title(r'Estimated $\mathcal{Z}_T / \mathcal{Z}_1$') ratios_ = ratios mean=ratios_.mean(0)[1:] err = ratios_.std(0)[1:] plt.plot(mean); plt.fill_between(range(len(mean)),mean-err,mean+err,alpha=0.4); plt.xlabel(r'Annealing distribution index ($t$)') plt.ylabel(r'$f_{t+1}(\mathbf{x}_{t})/f_{t}(\mathbf{x}_{t})$') plt.title(r'Weight updates $f_{t+1}(\mathbf{x}_{t})/f_{t}(\mathbf{x}_{t})$') end_samples = np.array([x_[-1] for x_ in xs]) plt.hist(end_samples,bins=50,normed=True); plt.plot(x,[initial_density(x_)/2 for x_ in x]) plt.plot(x,[target_density(x_)/2 for x_ in x]) plt.title(r"$x_T$ samples") plt.xlabel(r'$x$') plt.ylabel(r'$p_T(x)$') Explanation: 4.1. Plot results It should converge to one, since the initial and target distributions have the same normalizing constant. End of explanation from simtk.openmm.app import * from simtk.openmm import * from simtk.unit import * from openmmtools.integrators import MetropolisMonteCarloIntegrator,HMCIntegrator # all I want is the alanine dipeptide topology from msmbuilder.example_datasets import AlanineDipeptide ala = AlanineDipeptide().get().trajectories top_md = ala[0][0].topology topology = top_md.to_openmm() n_atoms = top_md.n_atoms dim = n_atoms*3 # create an openmm system forcefield = ForceField('amber99sb.xml','amber10_obc.xml') system = forcefield.createSystem(topology, nonbondedCutoff=1*nanometer, constraints=HBonds) integrator = HMCIntegrator(300*kelvin) simulation = Simulation(topology, system, integrator) # create a thin wrapper class class PeptideSystem(): def __init__(self): self.simulation = simulation self.positions = self.simulation.context.getState(getPositions=True).getPositions().value_in_unit(nanometer) self.n_atoms = len(self.positions) def evaluate_potential_flat(self,position_vec): positions = position_vec.reshape(self.n_atoms,3) self.simulation.context.setPositions(positions) return self.simulation.context.getState(getEnergy=True).getPotentialEnergy() def propagate(self,position_vec,n_steps=1000,temp=300): integrator = HMCIntegrator(temp*kelvin) self.simulation = Simulation(topology,system,integrator) positions = position_vec.reshape(self.n_atoms,3) self.simulation.context.setPositions(positions) simulation.step(n_steps) return np.array(self.simulation.context.getState(getPositions=True).getPositions().value_in_unit(nanometer)).flatten() def probability_at_temp(self,position_vec,temp=300.0): return np.exp(-self.evaluate_potential_flat(position_vec).value_in_unit(kilojoule/mole)/temp) peptide = PeptideSystem() temperatures = np.logspace(3,0,1000)*300 # for some reason I can't create a bunch of parametrized anonymous functions in a list comprehension? # i.e. [lambda x:peptide.probability_at_temp(x,t) for t in temperatures] gives me a list of functions # that all evaluate probability at t = temperatures[-1]... 
# just creating a seperate object for each temperature to avoid any surprises here class TempDist(): def __init__(self,temperature): self.temp = temperature def __call__(self,x): return peptide.probability_at_temp(x,self.temp) annealing_distributions = [initial_density] + [TempDist(t) for t in temperatures] #num_intermediates = 1000 #betas = np.linspace(0,1,num_intermediates+2) #annealing_distributions = [GeometricMean(initial_density,TempDist(300),beta) for beta in betas] # same deal for transition kernels at different temperatures class TempProp(): def __init__(self,temperature): self.temp = temperature def __call__(self,x,target_f=None): return peptide.propagate(x,n_steps=100,temp=self.temp) #transition_kernels = [None] + [TempProp(t) for t in temperatures] # transition_kernels[0] is never referenced... scales = np.logspace(1,0,len(annealing_distributions)+1)*0.005 class RwProp(): def __init__(self,n_steps=100,scale=0.05): self.n_steps=n_steps self.scale=scale def __call__(self,x,target_f): return gaussian_random_walk(x,target_f,n_steps=self.n_steps,scale=self.scale) transition_kernels = [RwProp(n_steps=30,scale=s) for s in scales] %%timeit peptide.evaluate_potential_flat(npr.randn(dim)) %%timeit transition_kernels[1](npr.randn(dim),annealing_distributions[1]) len(annealing_distributions),len(transition_kernels) # annealing schedule plt.plot(temperatures) plt.xlabel('Annealing distribution #') plt.ylabel('Temperature') plt.title('Annealing schedule') Explanation: 5. A more interesting example Let's sample a biomolecule's configuration space in this way, and maybe estimate its partition function. For now, let's do alanine dipeptide. 5.1-5.2. Define annealing distributions and transition kernels End of explanation %%time xs, weights, ratios = annealed_importance_sampling(draw_from_initial, transition_kernels, annealing_distributions, n_samples=1) weights,np.log(weights) # expected number of hours to collect 1000 samples: (1000*36/60)/60 %%time xs, weights, ratios = annealed_importance_sampling(draw_from_initial, transition_kernels, annealing_distributions, n_samples=1000) best_traj = weights.argmax() coords = [xs[best_traj][i].reshape(n_atoms,3) for i in range(len(xs[0]))] import mdtraj as md annealing_traj = md.Trajectory(coords,top_md) annealing_traj.save_pdb('annealing_traj.pdb') plt.plot(np.log((np.cumsum(weights)/np.arange(1,len(weights)+1)))) plt.xlabel('# samples') plt.ylabel(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1 )$') plt.title(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1 )$') plt.plot(np.log(weights)) ratios.mean(0)[:10] ratios_ = ratios mean=ratios_.mean(0)[1:] err = ratios_.std(0)[1:] plt.plot(mean); plt.fill_between(range(len(mean)),mean-err,mean+err,alpha=0.4); plt.xlabel(r'Annealing distribution index ($t$)') plt.ylabel(r'$f_t/f_{t-1}$') plt.title('Weight updates') plt.savefig('weight_updates.jpg',dpi=300) plt.close() from IPython.display import Image Image('weight_updates.jpg',retina=True) # numerical underflow isn't as big a concern as I thought np.exp(sum(np.log(ratios[0]))),weights[0] np.savez('AIS_results_alanine_dipeptide.npz',ratios) # what if, instead of running 30 steps of rw metropolis between each of a 1000 annealing distributions, we instead # run 1 step of rw metropolis between each of 30,000 annealing distributions? Explanation: 5.3. Run AIS From $\mathcal{N}(\mathbf{0},\mathbf{I})$ to $\exp[-U(\x)/k_BT]$ in only a gazillion annealing distributions! End of explanation
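As a hedged side note (not in the original notebook), the numerical-underflow question raised in the comments above can be sidestepped by accumulating the per-step ratios in log space; the helper names below are invented for illustration and only assume NumPy.

import numpy as np

def log_weight_from_ratios(step_ratios):
    # Sum of log ratios f_t(x_{t-1}) / f_{t-1}(x_{t-1}); the same quantity as
    # np.exp(sum(np.log(ratios[0]))) used above, but kept in log space.
    return np.sum(np.log(np.asarray(step_ratios, dtype=np.float64)))

def log_mean_weight(log_weights):
    # log of the average weight via the log-sum-exp trick, so very large or
    # very small weights do not overflow/underflow.
    log_weights = np.asarray(log_weights, dtype=np.float64)
    m = log_weights.max()
    return m + np.log(np.mean(np.exp(log_weights - m)))

# tiny demo with fake ratios: 3 samples, 4 annealing steps
fake_ratios = np.random.uniform(0.5, 2.0, size=(3, 4))
log_w = np.array([log_weight_from_ratios(r) for r in fake_ratios])
print(np.exp(log_mean_weight(log_w)))   # estimate of Z_T / Z_1
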
3,633
Given the following text description, write Python code to implement the functionality described below step by step Description: Preprocessing and classes. Before that, a quiz reviewing what we have covered so far. Problem: a List that stores the numbers from 1 to 100 that are divisible by 3 or 5 Step1: If a multiple of 3 is entered, output beer; a multiple of 5, chicken; a multiple of 15, beerchicken Step2: Palindrome - a word that reads the same backwards, e.g. 기러기 => 기러기, 소주만병만주소 => 소주만병만주소; take a string and return True if it is the same when reversed, False otherwise Step3: word_split => takes a sentence and returns a list of words; word_join => takes a list of words and joins them into a sentence; word_replace => takes text and replaces a specific word with another word Step4: Preprocessing mobile phone numbers Step5: Procedural programming Step6: Class ( Rectangle ) => like a fish-bread mold or a Platonic ideal: the definition of an object, used to create objects; Object ( rec1, rec2 ) => the actual thing, the fish-bread itself: something that actually exists, a thing or a concept
Python Code: print([i for i in range(1, 100+1) if i%3==0 or i%5==0]) Explanation: 전처리, 클래스 그 전에 지금까지 복습한 것 관련 퀴즈 문제 1~100까지의 숫자 중에서 3과 5로 나누어 떨어지는 수를 저장하는 List End of explanation num = 15 result = "" if num % 3 == 0: result += "Beer" if num % 5 == 0: result += "Chichen" print(result) result = [] for i in range(1, 100+1): word = '' if i % 3 == 0: word += "Beer" if i % 5 == 0: word += "Chicken" result.append(word) print(result) beer_list = [ "Beer" if x % 3 == 0 else "" for x in range(1, 100+1) ] chicken_list = [ "Chicken" if x % 5 == 0 else "" for x in range(1, 100+1) ] print([ beer_list[i] + chicken_list[i] for i in range(100) ]) def word_add_num(count, first_num, first_word, second_num, second_word): first_list = [ first_word if i % first_num == 0 else "" for i in range(1, count+1) ] second_list = [ second_word if i % second_num == 0 else "" for i in range(1, count+1) ] return [ first_list[i] + second_list[i] for i in range(count) ] #함수이기 때문에 return을 해주어야 한다. print(word_add_num(100, 3, "ki", 7, "poy")) Explanation: 3의 배수가 입력되면 beer, 5의 배수가 입력되면 chicken, 15의 배수는 beerchicken End of explanation def reverse(word): reversed_word = "" for i in range(len(word)): reversed_word += word[len(word)-1-i] return reversed_word def is_palindrome(word): return word == reverse(word) is_palindrome("기러기") is_palindrome("소주만병만주소") is_palindrome("김기표") "자일리톨껌"[::-1] def reverse(word): return word[::-1] def is_palindrome(word): return word == reverse(word) is_palindrome("기러기") is_palindrome("PCA") is_palindrome("ABCBA") def is_palindrome(word): return word == word[::-1] is_palindrome("수박이박수") (lambda x: x == x[::-1])("손가방") (lambda x: x == x[::-1])("고기고") def word_split(sentence, sperat=" "): word_list = [] word = "" for char in sentence: if char == " ": word_list.append(word) word = "" else: word += char return word_list word_split("오늘은 날씨가 참 맑구나.") def word_split(sentence, sperat=" "): word_list = [] word = "" for char in sentence + " ": # " " 이거를 추가해야 끝까지 출력이 됩니다. 
if char == " ": word_list.append(word) word = "" else: word += char return word_list word_split("오늘은 날씨가 참 맑구나.") def word_split(sentence, seperate=" "): word_list =[] word = "" for char in sentence: if char == " ": word_list.append(word) word="" else: word += char if word != "": word_list.append(word) return word_list word_split("오늘은 날씨가 참 맑구나.") def word_split(sentence, seperate=" "): word_list = [] word = "" for char in sentence: if char == seperate: word_list.append(word) word = "" else: word += char if word != "": word_list.append(word) return word_list word_split("오늘은 왠지 날씨가 참 맑은가.", "가") Explanation: Palindrome 거꾸로 해도 같은 단어 기러기 => 기러기, 소주만병만주소 => 소주만병만주소 문자열을 받아서, 뒤집었을 때 같으면 True, 다르면 False End of explanation def word_join(word_list, seperate=" "): result = "" for word in word_list: result += word result += seperate return result word_join(["오늘은", "정말", "공부가", "하기", "싫다고", "말하면", "안", "되고", "그냥", "그래요"]) def word_join_2(word_list, seperate=" "): result = "" for index, word in enumerate(word_list): result += word if not index == len(word_list)-1: result += seperate return result word_join_2(["당신은", "존재", "자체로", "소중합니다."]) word_list = "카페나 도서관이나 공부하기는 참 좋다.".split(" ") word_list [word for word in word_list if not word == ""] " ".join(["카페나", "도서관이나", "공부하기는", "참", "좋다."]) "지진무서워".replace("지진", "홍수") user_list = [ ["김기표", "주소1"], ["김깊효", "주소2"], ] user_dict_list = [] for user in user_list: name = user[0] address = user[1] user_dict = { "name": name, "address": address, } user_dict_list.append(user_dict) user_dict_list def get_user_dict(user): return { "name": user[0], "address": user[1] } get_user_dict(user_list) [ { "name": user[0], "address": user[1] } for user in user_list ] with open("./users.csv", "r") as f: #기존에 users_csv에 사용자 정보가 있다면 user_list = [] for line in f.readlines(): user_list.append({ "name": line.split(",")[0], "address": line.split(",")[1].replace("\n", "") }) user_list with open("./users.csv", "r") as f: user_list = [ { "name": line.split(",")[0], "address": line.split(",")[1].replace("\n", "") } for line in f.readlines() ] user_list Explanation: word_split => 문장을 받아 단어 리스트 word_join => 단어 리스트를 받아서 문장으로 만드는 word_replace => 단어를 받아서, 특정 단어만 다른 단어로 바꾸는 거 End of explanation def preprocess(phonenumber): phonenumber_process_dict = { "공": 0, "영": 0, "일": 1, "이": 2, "삼": 3, "사": 4, "오": 5, "육": 6, "칠": 7, "팔": 8, "구": 9, "-": "", " ": "", } for key, value in phonenumber_process_dict.items(): #items는 key, value를 한 번에 뽑을 때 사용된다. phonenumber = phonenumber.replace(key, str(value)) return phonenumber preprocess("공일공육이35-삼삼1구") with open("./phonenumber.txt", "r") as input_file: #전처리 대상 텍스트 파일 있을 때 result = [ preprocess(line.replace("\n", "")) for line in input_file.readlines() ] result with open("./phonenumber.txt", "r") as input_file: with open("./phonenumber_preprocessed.txt", "w") as output_file: [ output_file.write( preprocess(line.replace("\n", "")) + "\n" ) for line in input_file.readlines() ] Explanation: 핸드폰 번호 전처리 End of explanation class Student(): # Student() => __init__ 함수가 실행되는 것 __campus = "패스트캠퍼스" #변수를 밖에서 부를 수는 있지만 안 부르는 것이 약속 def __init__(self, name, age): # init => initialize ( 초기화하다 ) self.name = name self.age = age print("학생 {name}({age}) 가 태어났습니다.".format( name=self.name, age=self.age )) # 자기소개를 할 수 있다. 
def introduce(self): print("안녕하세요, 저는 {campus}에 다녔던 {age}살 {name} 입니다.".format( campus=self.__campus, age=self.age, name=self.name, )) kimkipoy = Student("김기표", 29) kimkipoy.introduce() kimkipoy.campus = "경쟁사" kimkipoy.introduce() kimkipoy #_Student__campus 이와 같은 형태로 변수가 바뀌었다. dir(kimkipoy) kimkipoy._Student__campus = "경쟁사2" kimkipoy.introduce() class Rectangle(): def __init__(self, width, height): self.width = width self.height = height def area(self): return self.width * self.height def girth(self): return 2 * (self.width + self.height) def is_bigger(self, another): if self.area() - another.area() >= 0: print("내가 더 큼") else: print("내가 더 작음") rec1 = Rectangle(10, 20) rec2 = Rectangle(30, 10) rec1.is_bigger(rec2) Explanation: 절차 지향 프로그래밍: 데이터, 데이터 처리하는 함수 객체 지향 프로그래밍 ( Object Oriented Programming ) 절차 <<<< 객체 둥둥 떠다니는 객체 객체(데이터, 각각의 데이터를 처리하는 방법) <=> 객체 : 메시지를 전달 실험 => 완벽하게 적합한 알고리즘 함수형 프로그래밍 => Lambda, Lambda Operator, List Comprehension 객체는 자료형(클래스)를 가진다 모듈은 .py파일 형식. 함수와 변수 선언을 담고 있다. 다른 .py파일에서 추가해서 사용할 수 있다. import로 모듈을 불러올 수 있다. End of explanation class Rectangle(): def __init__(self, width, height): self.width = width self.height = height def area(self): return "면적은 {area} 입니다.".format( area=self.width * self.height ) def girth(self): return "둘레는 {girth} 입니다".format( girth = self.width * 2 + self.height * 2, ) def is_bigger(self, another): my_area = self.area() another_area = another.area() return my_area > another_area my_rectangle = Rectangle(100, 200) another_rect = Rectangle(10, 20) my_rectangle.is_bigger(another_rect) class Person(): def __init__(self, name, money): self.name = name self.money = money def send_money(self, to, amount): print("{to_name}한테 {amount}원 만큼 돈을 보냅니다.".format( to_name=to.name, amount=amount, )) self.money -= amount to.money += amount person1 = Person("돈 빌려준 사람", 1000) person2 = Person("돈 빌린 사람", 500) person2.send_money(person1, 500) person1.money, person2.money class Student(): def __init__(self, name, address): self.name = name self.address = address def introduce(self): print("저는 {address}에 살고 있는 {name}입니다.".format( address=self.address, name=self.name, )) student = Student("김기표", "경기도 안양시") student.introduce() with open("../users.csv", "r") as f: #users.csv라는 사용자 정보 있을 때 student_list = [ Student( line.split(",")[0], line.split(",")[1].replace("\n", "") ) for line in f.readlines() ] for student in student_list: student.introduce() Explanation: Class ( Rectangle ) => 붕어빵틀, 이데아 객체를 정의해 놓은 것 객체를 생성하기 위해 사용 Object=객체 ( rec1, rec2 ) => 실제 있는 애, 붕어빵, .. 실제로 존재하는 것. 사물 또는 개념 End of explanation
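One small gap worth noting: the outline above lists a word_replace helper next to word_split and word_join, but the notebook only demonstrates str.replace directly. A minimal sketch of such a helper (the name, signature and the seperate parameter spelling are assumed to match the notebook's other helpers) could be:

def word_replace(sentence, old_word, new_word, seperate=" "):
    # Split on the separator, swap exact word matches, then join back,
    # mirroring the word_split / word_join helpers defined above.
    words = sentence.split(seperate)
    replaced = [new_word if word == old_word else word for word in words]
    return seperate.join(replaced)

print(word_replace("the cat sat on the cat mat", "cat", "dog"))
# -> the dog sat on the dog mat
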
3,634
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing. Variables that you define in one cell can later be used in other cells Step2: Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see Overview of Colab. To create a new Colab notebook you can use the File menu above, or use the following link
Python Code: seconds_in_a_day = 24 * 60 * 60 seconds_in_a_day Explanation: <a href="https://colab.research.google.com/github/cipang/hello-world/blob/master/Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p> <h1>What is Colaboratory?</h1> Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with - Zero configuration required - Free access to GPUs - Easy sharing Whether you're a student, a data scientist or an AI researcher, Colab can make your work easier. Watch Introduction to Colab to learn more, or just get started below! Getting started The document you are reading is not a static web page, but an interactive environment called a Colab notebook that lets you write and execute code. For example, here is a code cell with a short Python script that computes a value, stores it in a variable, and prints the result: End of explanation seconds_in_a_week = 7 * seconds_in_a_day seconds_in_a_week Explanation: To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing. Variables that you define in one cell can later be used in other cells: End of explanation import numpy as np from matplotlib import pyplot as plt ys = 200 + np.random.randn(100) x = [x for x in range(len(ys))] plt.plot(x, ys, '-') plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6) plt.title("Sample Visualization") plt.show() Explanation: Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see Overview of Colab. To create a new Colab notebook you can use the File menu above, or use the following link: create a new Colab notebook. Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see jupyter.org. Data science With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses numpy to generate some random data, and uses matplotlib to visualize it. To edit the code, just click the cell and start editing. End of explanation
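A small optional tweak (not in the original notebook): seeding NumPy's random generator makes the demo plot reproducible across runs; everything else is unchanged.

import numpy as np
from matplotlib import pyplot as plt

np.random.seed(0)   # fix the seed so the "random" data is identical on every run

ys = 200 + np.random.randn(100)
x = list(range(len(ys)))

plt.plot(x, ys, '-')
plt.title("Reproducible Sample Visualization")
plt.show()
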
3,635
Given the following text description, write Python code to implement the functionality described below step by step Description: Q1 In this question, we'll review the basics of file I/O (file input/output) and the various function calls and modes required (this will draw on material from L14). A Write a function read_file_contents which takes a string pathname as an argument, and returns a single string that contains all the contents of the file. Don't import any additional packages. If I have a file random_text.txt, I'll give the full path to this file to the function Step1: B This time, write a function read_file that takes two arguments Step2: C In this question, you'll read from one file, perform a simple computation, and write the results to a new file. Write a function count_lines that takes two arguments Step3: D In this question, you'll write a function acount_lines that performs the same operation as before, except in the case that the output file already exists
Python Code: truth = "This is some text.\nMore text, but on a different line!\nInsert your favorite meme here.\n" pred = read_file_contents("q1data/file1.txt") assert truth == pred retval = -1 try: retval = read_file_contents("nonexistent/path.txt") except: assert False else: assert retval is None Explanation: Q1 In this question, we'll review the basics of file I/O (file input/output) and the various function calls and modes required (this will draw on material from L14). A Write a function read_file_contents which takes a string pathname as an argument, and returns a single string that contains all the contents of the file. Don't import any additional packages. If I have a file random_text.txt, I'll give the full path to this file to the function: contents = read_file_contents("random_text.txt"), and I should get back a single string contents that contains all the contents of the file. NOTE: Your function should be able to handle errors gracefully! If an error occurs when trying to read from the file, your function should return None (note the capitalization of the first letter). End of explanation truth = "Yo dawg, I heard yo and yo dawg like yo-yos.\nSo we put yo dawg in a yo-yo.\nSo yo can yo-yo yo dawg while yo dawg yo-yos, dawg.\nMaximum ridiculousness reached.\n" pred = read_file("q1data/file2.txt") assert truth == pred truth = ['Yo dawg, I heard yo and yo dawg like yo-yos.\n', 'So we put yo dawg in a yo-yo.\n', 'So yo can yo-yo yo dawg while yo dawg yo-yos, dawg.\n', 'Maximum ridiculousness reached.\n'] pred = read_file("q1data/file2.txt", as_list = True) for item in truth: assert item in pred for item in pred: assert item in truth retval = -1 try: retval = read_file("another/nonexistent/path.txt") except: assert False else: assert retval is None Explanation: B This time, write a function read_file that takes two arguments: the first is the path to the file (same as before), and the second is an optional boolean argument as_list that defaults to False. When this flag is False (the default), your function should behave identically to read_file_contents. In fact, if as_list is False, you can just call your previous function. If as_list is True, instead of returning a single string of the file's contents, return a list of strings, where each item in the list is a line from the file. NOTE: Your function should be able to handle errors gracefully! If an error occurs when trying to read from the file, your function should return None (note the capitalization of the first letter). End of explanation import os.path assert count_lines("q1data/file1.txt", "q1data/file1_out.txt") assert os.path.exists("q1data/file1_out.txt") assert int(open("q1data/file1_out.txt", "r").read()) == 3 r1 = None try: r1 = count_lines("yet/another/nonexistent/path.txt", "meaningless") except: assert False else: assert not r1 r2 = None try: r2 = count_lines("q1data/file1.txt", "/this/should/throw/an/error.txt") except: assert False else: assert not r2 Explanation: C In this question, you'll read from one file, perform a simple computation, and write the results to a new file. Write a function count_lines that takes two arguments: the first is a path to a file to read, the second is the path to an output file. Your function will count the number of lines in the file at the first argument, and write this number to a file at the second argument. Your function should return True on success, and False if an error occurred. NOTE: Your function should be able to handle errors gracefully! 
If an error occurs when trying to read from the file or write to the output file, your function should return False. End of explanation if os.path.exists("q1data/out_again.txt"): os.remove("q1data/out_again.txt") assert acount_lines("q1data/file1.txt", "q1data/out_again.txt") assert os.path.exists("q1data/out_again.txt") assert int(open("q1data/out_again.txt", "r").read()) == 3 assert acount_lines("q1data/file2.txt", "q1data/out_again.txt") assert os.path.exists("q1data/out_again.txt") assert int("".join(open("q1data/out_again.txt", "r").read().split("\n"))) == 34 r1 = None try: r1 = acount_lines("yet/another/nonexistent/path.txt", "meaningless") except: assert False else: assert not r1 r2 = None try: r2 = acount_lines("q1data/file2.txt", "/this/should/throw/an/error.txt") except: assert False else: assert not r2 Explanation: D In this question, you'll write a function acount_lines that performs the same operation as before, except in the case that the output file already exists: in this case, you'll append the line count to the file instead of overwriting it, thus preserving any existing previous line counts. Each new appended line count should be on its own line in the output file. You may need to manually insert newline characters, which are a backslash followed by the letter n: \n Your function should return True on success, and False if an error occurred. NOTE: Your function should be able to handle errors gracefully! If an error occurs when trying to read from the file or write to the output file, your function should return False. End of explanation
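The cells above are only the test harness; the graded functions themselves are left to the reader. For reference, one possible (not official) implementation of count_lines and acount_lines that satisfies the stated behaviour — return True on success and False on any read or write error — is sketched below.

def count_lines(inpath, outpath):
    # Count the lines of the input file and overwrite the output file with that count.
    try:
        with open(inpath, "r") as fin:
            n = len(fin.readlines())
        with open(outpath, "w") as fout:
            fout.write(str(n))
        return True
    except (IOError, OSError):
        return False

def acount_lines(inpath, outpath):
    # Same idea, but append to the output file, one count per line.
    try:
        with open(inpath, "r") as fin:
            n = len(fin.readlines())
        with open(outpath, "a") as fout:
            fout.write(str(n) + "\n")
        return True
    except (IOError, OSError):
        return False
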
3,636
Given the following text description, write Python code to implement the functionality described below step by step Description: Q1 In this question, we'll compute some basic probabilities of events using loops, lists, and dictionaries. Part A The Polya urn model is a popular model for both statistics and to illustrate certain mental exercises. Typically, these exercises involve randomly selecting colored balls, and these selection exercises can vary the properties of the remaining contents of the urn. A common question to ask is Step1: Part B In this part, you'll write code to compute the probabilities of certain colors using the dictionary object in the previous part. Your code will receive a dictionary of colors with their relative counts (i.e., the output of Part A), and a "query" color, and you will need to return the chances of randomly selecting a ball of that query color. Write a function which Step2: Part C In this part, you'll do the opposite of what you implemented in Part B Step3: Part D Even more interesting is when we start talking about combinations of colors. Let's say I'm reaching into a Polya urn to pull out two balls; it's valuable to know what my chances of at least 1 ball being a certain color would be. Write a function which Step4: Part E One final wrinkle
Python Code: u1 = ["green", "green", "blue", "green"] a1 = set({("green", 3), ("blue", 1)}) assert a1 == set(urn_to_dict(u1).items()) u2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"] a2 = set({('black', 3), ('blue', 7), ('green', 4), ('red', 5), ('yellow', 8)}) assert a2 == set(urn_to_dict(u2).items()) Explanation: Q1 In this question, we'll compute some basic probabilities of events using loops, lists, and dictionaries. Part A The Polya urn model is a popular model for both statistics and to illustrate certain mental exercises. Typically, these exercises involve randomly selecting colored balls, and these selection exercises can vary the properties of the remaining contents of the urn. A common question to ask is: given some number of colors and some number of balls, what are the chances of randomly selecting a ball of a specific color? Write a function which: is named urn_to_dict takes 1 argument: a list of color names (e.g. "blue", "red", "green", etc) returns 1 value: a dictionary, with color names for keys and the frequency counts of those colors as values The contents of the urn will be handed to you in a list form (the input argument), where each element of the list represents a ball in an urn, and the element itself will be a certain color. You then need to count how many times each color occurs in the list, and assemble those counts in the dictionary that your function should return. For example, the list ["blue", "blue", "green", "blue"] should result in the dictionary {"blue": 3, "green": 1}. Use the urn_dict dictionary object to store the results. End of explanation import numpy.testing as t c1 = {"blue": 3, "red": 1} t.assert_allclose(chances_of_color(c1, "blue"), 0.75) import numpy.testing as t c2 = {"red": 934, "blue": 493859, "yellow": 31, "green": 3892, "black": 487} t.assert_allclose(chances_of_color(c2, "green"), 0.007796427505443677) import numpy.testing as t c3 = {"red": 5, "blue": 5, "yellow": 5, "green": 5, "black": 5} t.assert_allclose(chances_of_color(c2, "orange"), 0.0) Explanation: Part B In this part, you'll write code to compute the probabilities of certain colors using the dictionary object in the previous part. Your code will receive a dictionary of colors with their relative counts (i.e., the output of Part A), and a "query" color, and you will need to return the chances of randomly selecting a ball of that query color. Write a function which: is named chances_of_color takes 2 arguments: a dictionary mapping colors to counts (output of Part A), and a string that will contain a query color returns 1 value: a floating-point number, the probability of selecting the "query" color at random Remember, probability is a fraction: the numerator is the number of occurrences of the event you're interested in, and the denominator is the number of all possible events. It's kind of like an average. For example, if the input dictionary is {"red": 3, "blue": 1} and the query color is "blue", then the fraction you would return is 1/4, or 0.25 (probabilities should always be between 0 and 1). 
End of explanation import numpy.testing as t c1 = {"blue": 3, "red": 1} t.assert_allclose(chances_of_not_color(c1, "blue"), 0.25) import numpy.testing as t c2 = {"red": 934, "blue": 493859, "yellow": 31, "green": 3892, "black": 487} t.assert_allclose(chances_of_not_color(c2, "blue"), 0.010705063871811693) import numpy.testing as t c3 = {"red": 5, "blue": 5, "yellow": 5, "green": 5, "black": 5} t.assert_allclose(chances_of_not_color(c2, "orange"), 1.0) Explanation: Part C In this part, you'll do the opposite of what you implemented in Part B: you'll get a dictionary and a query color, but you'll need to return the chances of drawing a ball that is not the same color as the query. Write a function which: is named chances_of_not_color takes 2 arguments: a dictionary mapping colors to counts (output of Part A), and a string that will contain a query color returns 1 value: a floating-point number, the probability of NOT selecting the "query" color at random For example, if the input dictionary is {"red": 3, "blue": 1} and the query color is "blue", then the fraction you would return is 3/4, or 0.75. HINT: You can use the function you wrote in Part B to help! End of explanation import numpy.testing as t q1 = ["blue", "green", "red"] t.assert_allclose(select_chances(q1, 2, "red"), 2/3) q2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"] t.assert_allclose(select_chances(q2, 3, "red"), 0.4735042735042735) Explanation: Part D Even more interesting is when we start talking about combinations of colors. Let's say I'm reaching into a Polya urn to pull out two balls; it's valuable to know what my chances of at least 1 ball being a certain color would be. Write a function which: is named select_chances takes 3 arguments: a list of colors of balls in an urn (same as input to Part A), an integer number (number of balls to draw out of the urn), and a string containing a single color returns 1 value: a floating-point number, the probability that at least one ball from the "number" drawn from the urn is the specified color Remember, you compute probability exactly as before--the number of events of interest (selecting a certain number of balls with at least one of a certain color) divided by the total number of possible events (all possible draws)--only this time you'll need to account for combinations of multiple balls. For example, if I give you an urn list of ["blue", "green", "red"], the number 2, and the query color "blue", then you would return 2/3, or 0.66666 (There are three possible combinations of groupings of 2 balls: blue-green, blue-red, and green-red. Two of these three combinations contain the query color blue). HINT: It will be very, very helpful if make use of the itertools module for generating combinations of colored balls. If you can't remember how the module works, consult its documentation. Seriously though, it will vastly simplify your life in this question. 
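To make the itertools hint concrete, here is one possible (hedged, not necessarily the intended) way to write select_chances with itertools.combinations: enumerate every unordered draw and take the fraction that contains the query colour.

from itertools import combinations

def select_chances(urn, n_draw, query):
    # Every unordered group of n_draw balls (balls are distinct even if colours repeat).
    draws = list(combinations(urn, n_draw))
    hits = sum(1 for draw in draws if query in draw)
    return hits / len(draws)

print(select_chances(["blue", "green", "red"], 2, "blue"))   # 2/3
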
End of explanation import numpy.testing as t q1 = ["blue", "green", "red"] t.assert_allclose(select_chances_first(q1, 2, "red"), 2/6) q2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"] t.assert_allclose(select_chances_first(q2, 3, "red"), 0.18518518518518517) Explanation: Part E One final wrinkle: let's say I'm no longer picking colored balls simultaneously from the urn, but rather in sequence--that is, one right after the other. Now I can ask, for a given urn and a certain number of balls I'm going to pick, what are the chances that I draw a ball of a certain color first? For example, if I give you an urn list of ["blue", "green", "red"], the number 2, and the query color "blue", then you would return 2/6, or 0.33333. (There are six possible ways of drawing two balls in sequence: - BLUE then GREEN - BLUE then RED - GREEN then BLUE - GREEN then RED - RED then GREEN - RED then BLUE and two of those six involve drawing the blue one first) Write a function which: is named select_chances_first takes 3 arguments: a list of colors in the urn (same input as Part A and Part D), an integer number of balls to draw in sequence, and a string containing the query color for the first draw returns 1 value: a floating-point number, the probability of drawing the query color first in a sequence of draws of the specified length You are welcome to again use itertools. End of explanation
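Analogously, a hedged sketch for Part E can enumerate ordered draws with itertools.permutations and count the ones whose first ball is the query colour; again this is one possible approach, not the official solution.

from itertools import permutations

def select_chances_first(urn, n_draw, query):
    # Ordered sequences of n_draw balls; keep those that start with the query colour.
    draws = list(permutations(urn, n_draw))
    hits = sum(1 for draw in draws if draw[0] == query)
    return hits / len(draws)

print(select_chances_first(["blue", "green", "red"], 2, "blue"))   # 2/6
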
3,637
Given the following text description, write Python code to implement the functionality described below step by step Description: Simple Reinforcement Learning Step1: Load the environment Step2: The Deep Q-Network Helper functions Step3: Implementing the network itself Step4: Training the network Step5: Some statistics on network performance
Python Code: from __future__ import division import gym import numpy as np import random import tensorflow as tf import matplotlib.pyplot as plt %matplotlib inline import tensorflow.contrib.slim as slim Explanation: Simple Reinforcement Learning: Exploration Strategies This notebook contains implementations of various action-selections methods that can be used to encourage exploration during the learning process. To learn more about these methods, see the accompanying Medium post. Also see the interactive visualization: here. For more reinforcment learning tutorials see: https://github.com/awjuliani/DeepRL-Agents End of explanation env = gym.make('CartPole-v0') Explanation: Load the environment End of explanation class experience_buffer(): def __init__(self, buffer_size = 10000): self.buffer = [] self.buffer_size = buffer_size def add(self,experience): if len(self.buffer) + len(experience) >= self.buffer_size: self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = [] self.buffer.extend(experience) def sample(self,size): return np.reshape(np.array(random.sample(self.buffer,size)),[size,5]) def updateTargetGraph(tfVars,tau): total_vars = len(tfVars) op_holder = [] for idx,var in enumerate(tfVars[0:total_vars//2]): op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value()))) return op_holder def updateTarget(op_holder,sess): for op in op_holder: sess.run(op) Explanation: The Deep Q-Network Helper functions End of explanation class Q_Network(): def __init__(self): #These lines establish the feed-forward part of the network used to choose actions self.inputs = tf.placeholder(shape=[None,4],dtype=tf.float32) self.Temp = tf.placeholder(shape=None,dtype=tf.float32) self.keep_per = tf.placeholder(shape=None,dtype=tf.float32) hidden = slim.fully_connected(self.inputs,64,activation_fn=tf.nn.tanh,biases_initializer=None) hidden = slim.dropout(hidden,self.keep_per) self.Q_out = slim.fully_connected(hidden,2,activation_fn=None,biases_initializer=None) self.predict = tf.argmax(self.Q_out,1) self.Q_dist = tf.nn.softmax(self.Q_out/self.Temp) #Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values. self.actions = tf.placeholder(shape=[None],dtype=tf.int32) self.actions_onehot = tf.one_hot(self.actions,2,dtype=tf.float32) self.Q = tf.reduce_sum(tf.multiply(self.Q_out, self.actions_onehot), reduction_indices=1) self.nextQ = tf.placeholder(shape=[None],dtype=tf.float32) loss = tf.reduce_sum(tf.square(self.nextQ - self.Q)) trainer = tf.train.GradientDescentOptimizer(learning_rate=0.0005) self.updateModel = trainer.minimize(loss) Explanation: Implementing the network itself End of explanation # Set learning parameters exploration = "e-greedy" #Exploration method. Choose between: greedy, random, e-greedy, boltzmann, bayesian. y = .99 #Discount factor. num_episodes = 20000 #Total number of episodes to train network for. tau = 0.001 #Amount to update target network at each step. batch_size = 32 #Size of training batch startE = 1 #Starting chance of random action endE = 0.1 #Final chance of random action anneling_steps = 200000 #How many steps of training to reduce startE to endE. pre_train_steps = 50000 #Number of steps used before training updates begin. 
tf.reset_default_graph() q_net = Q_Network() target_net = Q_Network() init = tf.initialize_all_variables() trainables = tf.trainable_variables() targetOps = updateTargetGraph(trainables,tau) myBuffer = experience_buffer() #create lists to contain total rewards and steps per episode jList = [] jMeans = [] rList = [] rMeans = [] with tf.Session() as sess: sess.run(init) updateTarget(targetOps,sess) e = startE stepDrop = (startE - endE)/anneling_steps total_steps = 0 for i in range(num_episodes): s = env.reset() rAll = 0 d = False j = 0 while j < 999: j+=1 if exploration == "greedy": #Choose an action with the maximum expected value. a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:1.0}) a = a[0] if exploration == "random": #Choose an action randomly. a = env.action_space.sample() if exploration == "e-greedy": #Choose an action by greedily (with e chance of random action) from the Q-network if np.random.rand(1) < e or total_steps < pre_train_steps: a = env.action_space.sample() else: a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:1.0}) a = a[0] if exploration == "boltzmann": #Choose an action probabilistically, with weights relative to the Q-values. Q_d,allQ = sess.run([q_net.Q_dist,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.Temp:e,q_net.keep_per:1.0}) a = np.random.choice(Q_d[0],p=Q_d[0]) a = np.argmax(Q_d[0] == a) if exploration == "bayesian": #Choose an action using a sample from a dropout approximation of a bayesian q-network. a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:(1-e)+0.1}) a = a[0] #Get new state and reward from environment s1,r,d,_ = env.step(a) myBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) if e > endE and total_steps > pre_train_steps: e -= stepDrop if total_steps > pre_train_steps and total_steps % 5 == 0: #We use Double-DQN training algorithm trainBatch = myBuffer.sample(batch_size) Q1 = sess.run(q_net.predict,feed_dict={q_net.inputs:np.vstack(trainBatch[:,3]),q_net.keep_per:1.0}) Q2 = sess.run(target_net.Q_out,feed_dict={target_net.inputs:np.vstack(trainBatch[:,3]),target_net.keep_per:1.0}) end_multiplier = -(trainBatch[:,4] - 1) doubleQ = Q2[range(batch_size),Q1] targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier) _ = sess.run(q_net.updateModel,feed_dict={q_net.inputs:np.vstack(trainBatch[:,0]),q_net.nextQ:targetQ,q_net.keep_per:1.0,q_net.actions:trainBatch[:,1]}) updateTarget(targetOps,sess) rAll += r s = s1 total_steps += 1 if d == True: break jList.append(j) rList.append(rAll) if i % 100 == 0 and i != 0: r_mean = np.mean(rList[-100:]) j_mean = np.mean(jList[-100:]) if exploration == 'e-greedy': print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " e: " + str(e)) if exploration == 'boltzmann': print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " t: " + str(e)) if exploration == 'bayesian': print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " p: " + str(e)) if exploration == 'random' or exploration == 'greedy': print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps)) rMeans.append(r_mean) jMeans.append(j_mean) print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%") Explanation: Training the network End of explanation plt.plot(rMeans) plt.plot(jMeans) Explanation: Some statistics on network performance End of explanation
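As an add-on (not part of the original notebook), the action-selection rules used above can be sanity-checked outside the TensorFlow graph with plain NumPy; the function names below are invented for illustration.

import numpy as np

def epsilon_greedy(q_values, eps):
    # With probability eps pick a random action, otherwise the greedy one.
    if np.random.rand() < eps:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

def boltzmann(q_values, temperature):
    # Softmax over Q-values; lower temperature pushes the choice towards greedy.
    q = np.asarray(q_values, dtype=np.float64) / temperature
    q -= q.max()                                  # stabilise the exponentials
    probs = np.exp(q) / np.exp(q).sum()
    return int(np.random.choice(len(q_values), p=probs))

q = [0.1, 0.5]
print(epsilon_greedy(q, eps=0.1), boltzmann(q, temperature=0.5))
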
3,638
Given the following text description, write Python code to implement the functionality described below step by step Description: Solution of 4.10.1, Jiang et al. 2013 Write a function that takes as input the desired Taxon, and returns the mean value of r. First, we're going to import the csv module, and read the data. We store the taxon name in the list Taxa, and the corresponding r value in the list r_values. Note that we need to convert the values to float (we need numbers, and they are read as strings). Step1: We check the first five entries to make sure that everything went well Step2: Now we write a function that, given a list of taxa names and corresponding r values, calculates the mean r for a given category of taxa Step3: Test the function using Fish as target taxon Step4: Let's try to run this on all taxa. We can write a separate function that returns the set of unique taxa in the database Step5: Calculate the mean r for each taxon Step6: You should see that fish have a positive value of r, but that this is also true for other taxa. Is the mean value of r especially high for fish? To test this, compute a p-value by repeatedly sampling 37 values of r at random (37 experiments on fish are reported in the database), and calculating the probability of observing a higher mean value of r. To get an accurate estimate of the p-value, use 50,000 randomizations. Are these values of assortative mating high, compared to what is expected by chance? We can try associating a p-value to each r value by repeatedly computing the mean r of randomized taxa and observing how often we obtain a mean r larger than the observed value. There are many other ways of obtaining such an emperical p-value, for example counting how many times a certain taxon is represented, and sampling the values at random. Step7: Let's try the function on Fish Step8: A very small p-value
Python Code: import csv with open('../data/Jiang2013_data.csv') as csvfile: # set up csv reader and specify correct delimiter reader = csv.DictReader(csvfile, delimiter = '\t') taxa = [] r_values = [] for row in reader: taxa.append(row['Taxon']) r_values.append(float(row['r'])) Explanation: Solution of 4.10.1, Jiang et al. 2013 Write a function that takes as input the desired Taxon, and returns the mean value of r. First, we're going to import the csv module, and read the data. We store the taxon name in the list Taxa, and the corresponding r value in the list r_values. Note that we need to convert the values to float (we need numbers, and they are read as strings). End of explanation taxa[:5] r_values[:5] Explanation: We check the first five entries to make sure that everything went well: End of explanation def get_mean_r(names, values, target_taxon = 'Fish'): n = len(names) mean_r = 0.0 sample_size = 0 for i in range(n): if names[i] == target_taxon: mean_r = mean_r + values[i] sample_size = sample_size + 1 return mean_r / sample_size Explanation: Now we write a function that, given a list of taxa names and corresponding r values, calculates the mean r for a given category of taxa: End of explanation get_mean_r(taxa, r_values, target_taxon = 'Fish') Explanation: Test the function using Fish as target taxon: End of explanation def get_taxa_list(names): return(set(names)) get_taxa_list(taxa) Explanation: Let's try to run this on all taxa. We can write a separate function that returns the set of unique taxa in the database: End of explanation for t in get_taxa_list(taxa): print(t, get_mean_r(taxa, r_values, target_taxon = t)) Explanation: Calculate the mean r for each taxon: End of explanation import scipy # scipy for random shuffle def get_p_value_for_mean_r(names, values, target_taxon = 'Fish', num_simulations = 1000): # compute the (observed) mean_r obs_mean_r = get_mean_r(names, values, target_taxon) # create a copy of the names, to be randomized rnd_names = names[:] # create counter for observations that are higher than obs_mean_r count_mean_r = 0.0 for i in range(num_simulations): # shuffle the taxa names scipy.random.shuffle(rnd_names) # calculate mean r value of randomized data rnd_mean_r = get_mean_r(rnd_names, values, target_taxon) # count number of rdn_mean_r that are larger or equal to obs_mean_r if rnd_mean_r >= obs_mean_r: count_mean_r = count_mean_r + 1.0 # calculate p_value: chance of observing rnd_r_mean larger than r_mean p_value = count_mean_r / num_simulations return [target_taxon, round(obs_mean_r, 3), round(p_value, 5)] Explanation: You should see that fish have a positive value of r, but that this is also true for other taxa. Is the mean value of r especially high for fish? To test this, compute a p-value by repeatedly sampling 37 values of r at random (37 experiments on fish are reported in the database), and calculating the probability of observing a higher mean value of r. To get an accurate estimate of the p-value, use 50,000 randomizations. Are these values of assortative mating high, compared to what is expected by chance? We can try associating a p-value to each r value by repeatedly computing the mean r of randomized taxa and observing how often we obtain a mean r larger than the observed value. There are many other ways of obtaining such an emperical p-value, for example counting how many times a certain taxon is represented, and sampling the values at random. 
End of explanation get_p_value_for_mean_r(taxa, r_values, 'Fish', 50000) Explanation: Let's try the function on Fish: End of explanation for t in get_taxa_list(taxa): print(get_p_value_for_mean_r(taxa, r_values, t, 50000)) Explanation: A very small p-value: this means that the observed mean r value (0.397) is larger than what we would expect by chance. Note that your calculated p-value might deviate slightly from ours given the randomness in a simulation. Repeat the procedure for all taxa. End of explanation
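A hedged alternative sketch (not part of the original solution above): the same per-taxon mean r and randomization p-value can be computed with pandas and numpy. The file path, the tab delimiter, and the column names 'Taxon' and 'r' are taken from the code above, and the 50,000 randomizations follow the text; treat this only as an illustration of the "other ways" mentioned in the explanation, not as the reference answer.
import pandas as pd
import numpy as np

jiang = pd.read_csv('../data/Jiang2013_data.csv', sep='\t')
observed_means = jiang.groupby('Taxon')['r'].mean()

def p_value_pandas(taxon, num_simulations=50000):
    # observed mean r for this taxon and its sample size
    observed = observed_means[taxon]
    n = (jiang['Taxon'] == taxon).sum()
    rng = np.random.default_rng(0)
    # sample n values of r at random (without replacement) and record each mean
    random_means = np.array([
        rng.choice(jiang['r'].to_numpy(), size=n, replace=False).mean()
        for _ in range(num_simulations)
    ])
    # fraction of randomized means at least as large as the observed mean
    return (random_means >= observed).mean()

print(p_value_pandas('Fish'))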
3,639
Given the following text description, write Python code to implement the functionality described below step by step Description: odm2api demo with Little Bear SQLite sample DB Largely from https Step1: SamplingFeatures tests Step2: Back to the rest of the demo Step3: Foreign Key Example Drill down and get objects linked by foreign keys Step4: Example of Retrieving Attributes of a Time Series Result using a ResultID Step5: Why are ProcessingLevelObj, VariableObj and UnitsObj objects not shown in the above vars() listing!? They are actually available, as demonstrated in much of the code below. Step6: Example of Retrieving Time Series Result Values, then plotting them
Python Code: %matplotlib inline import matplotlib.pyplot as plt from matplotlib import dates from odm2api.ODMconnection import dbconnection from odm2api.ODM2.services.readService import ReadODM2 # Create a connection to the ODM2 database # ---------------------------------------- odm2db_fpth = '/home/mayorga/Desktop/TylerYeats/ODM2-LittleBear1.sqlite' session_factory = dbconnection.createConnection('sqlite', odm2db_fpth, 2.0) read = ReadODM2(session_factory) # Run some basic sample queries. # ------------------------------ # Get all of the variables from the database and print their names to the console allVars = read.getVariables() for x in allVars: print x.VariableCode + ": " + x.VariableNameCV # Get all of the people from the database allPeople = read.getPeople() for x in allPeople: print x.PersonFirstName + " " + x.PersonLastName try: print "\n-------- Information about an Affiliation ---------" allaff = read.getAffiliations() for x in allaff: print x.PersonObj.PersonFirstName + ": " + str(x.OrganizationID) except Exception as e: print "Unable to demo getAllAffiliations", e allaff = read.getAffiliations() type(allaff) Explanation: odm2api demo with Little Bear SQLite sample DB Largely from https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/Sample.py - 4/25/2016. Started testing with the new odm2 conda channel, based on the new 0.5.0-alpha odm2api release. See my odm2api_odm2channel env. Ran into problems b/c the SQLite database needed to be updated to have a SamplingFeature.FeatureGeometryWKT field; so I added and populated it manually with SQLite Manager. - 2/7/2016. Tested successfully with sfgeometry_em_1 branch, with my overhauls. Using odm2api_dev env. - 2/1 - 1/31. Errors with SamplingFeatures code, with latest odm2api from master (on env odm2api_jan31test). The code also fails the same way with the odm2api env, but it does still run fine with the odm2api_jan21 env! I'm investigating the differences between those two envs. - 1/22-20,9/2016. Emilio Mayorga End of explanation # from odm2api.ODM2.models import SamplingFeatures # read._session.query(SamplingFeatures).filter_by(SamplingFeatureTypeCV='Site').all() # Get all of the SamplingFeatures from the database that are Sites try: siteFeatures = read.getSamplingFeatures(type='Site') numSites = len(siteFeatures) for x in siteFeatures: print x.SamplingFeatureCode + ": " + x.SamplingFeatureName except Exception as e: print "Unable to demo getSamplingFeatures(type='Site')", e read.getSamplingFeatures() read.getSamplingFeatures(codes=['USU-LBR-Mendon']) # Now get the SamplingFeature object for a SamplingFeature code sf_lst = read.getSamplingFeatures(codes=['USU-LBR-Mendon']) vars(sf_lst[0]) sf = sf_lst[0] print sf, "\n" print type(sf) print type(sf.FeatureGeometryWKT), sf.FeatureGeometryWKT print type(sf.FeatureGeometry) vars(sf.FeatureGeometry) sf.FeatureGeometry.__doc__ sf.FeatureGeometry.geom_wkb, sf.FeatureGeometry.geom_wkt # 4/25/2016: Don't know why the shape is listed 4 times ... 
type(sf.shape()), sf.shape().wkt Explanation: SamplingFeatures tests End of explanation read.getResults() firstResult = read.getResults()[0] firstResult.FeatureActionObj.ActionObj Explanation: Back to the rest of the demo End of explanation try: # Call getResults, but return only the first result firstResult = read.getResults()[0] action_firstResult = firstResult.FeatureActionObj.ActionObj print "The FeatureAction object for the Result is: ", firstResult.FeatureActionObj print "The Action object for the Result is: ", action_firstResult print ("\nThe following are some of the attributes for the Action that created the Result: \n" + "ActionTypeCV: " + action_firstResult.ActionTypeCV + "\n" + "ActionDescription: " + action_firstResult.ActionDescription + "\n" + "BeginDateTime: " + str(action_firstResult.BeginDateTime) + "\n" + "EndDateTime: " + str(action_firstResult.EndDateTime) + "\n" + "MethodName: " + action_firstResult.MethodObj.MethodName + "\n" + "MethodDescription: " + action_firstResult.MethodObj.MethodDescription) except Exception as e: print "Unable to demo Foreign Key Example: ", e Explanation: Foreign Key Example Drill down and get objects linked by foreign keys End of explanation tsResult = read.getResults(ids=[1])[0] type(tsResult), vars(tsResult) Explanation: Example of Retrieving Attributes of a Time Series Result using a ResultID End of explanation try: tsResult = read.getResults(ids=[1])[0] # Get the site information by drilling down sf_tsResult = tsResult.FeatureActionObj.SamplingFeatureObj print( "Some of the attributes for the TimeSeriesResult retrieved using getResults(ids=[]): \n" + "ResultTypeCV: " + tsResult.ResultTypeCV + "\n" + # Get the ProcessingLevel from the TimeSeriesResult's ProcessingLevel object "ProcessingLevel: " + tsResult.ProcessingLevelObj.Definition + "\n" + "SampledMedium: " + tsResult.SampledMediumCV + "\n" + # Get the variable information from the TimeSeriesResult's Variable object "Variable: " + tsResult.VariableObj.VariableCode + ": " + tsResult.VariableObj.VariableNameCV + "\n" + "AggregationStatistic: " + tsResult.AggregationStatisticCV + "\n" + # Get the site information by drilling down "Elevation_m: " + str(sf_tsResult.Elevation_m) + "\n" + "SamplingFeature: " + sf_tsResult.SamplingFeatureCode + " - " + sf_tsResult.SamplingFeatureName) except Exception as e: print "Unable to demo Example of retrieving Attributes of a time Series Result: ", e Explanation: Why are ProcessingLevelObj, VariableObj and UnitsObj objects not shown in the above vars() listing!? They are actually available, as demonstrated in much of the code below. 
End of explanation # Get the values for a particular TimeSeriesResult tsValues = read.getResultValues(resultid=1) # Return type is a pandas dataframe # Print a few Time Series Values to the console # tsValues.set_index('ValueDateTime', inplace=True) tsValues.head() # Plot the time series try: fig = plt.figure() ax = fig.add_subplot(111) tsValues.plot(x='ValueDateTime', y='DataValue', kind='line', title=tsResult.VariableObj.VariableNameCV + " at " + tsResult.FeatureActionObj.SamplingFeatureObj.SamplingFeatureName, ax=ax) ax.set_ylabel(tsResult.VariableObj.VariableNameCV + " (" + tsResult.UnitsObj.UnitsAbbreviation + ")") ax.set_xlabel("Date/Time") ax.xaxis.set_minor_locator(dates.MonthLocator()) ax.xaxis.set_minor_formatter(dates.DateFormatter('%b')) ax.xaxis.set_major_locator(dates.YearLocator()) ax.xaxis.set_major_formatter(dates.DateFormatter('\n%Y')) ax.grid(True) except Exception as e: print "Unable to demo plotting of tsValues: ", e Explanation: Example of Retrieving Time Series Result Values, then plotting them End of explanation
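A small hedged follow-up (not in the original demo): because read.getResultValues(resultid=1) returns an ordinary pandas DataFrame with the 'ValueDateTime' and 'DataValue' columns used above, standard pandas operations apply directly. Assuming 'ValueDateTime' can be parsed as datetimes, a daily aggregation before plotting could look like this sketch.
import pandas as pd

tsValues['ValueDateTime'] = pd.to_datetime(tsValues['ValueDateTime'])
daily_means = tsValues.set_index('ValueDateTime')['DataValue'].resample('D').mean()
print(daily_means.head())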
3,640
Given the following text description, write Python code to implement the functionality described below step by step Description: Vertex client library Step1: Install the latest GA version of google-cloud-storage library as well. Step2: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. Step3: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note Step4: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas Step5: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. Step6: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step7: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. Step8: Vertex constants Setup up the following constants for Vertex Step9: AutoML constants Set constants unique to AutoML datasets and training Step10: Tutorial Now you are ready to start creating your own AutoML image object detection model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. Endpoint Service for deployment. Prediction Service for serving. Step11: Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following Step12: Now save the unique dataset identifier for the Dataset resource instance you created. 
Step13: Data preparation The Vertex Dataset resource for images has some requirements for your data Step14: Quick peek at your data You will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. Step15: Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following Step16: Train the model Now train an AutoML image object detection model using your Vertex Dataset resource. To train the model, do the following steps Step17: Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields you need to specify are Step18: Now save the unique identifier of the training pipeline you created. Step19: Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter Step20: Deployment Training the above model may take upwards of 60 minutes time. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name. Step21: Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter Step22: Deploy the Model resource Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps Step23: Now get the unique identifier for the Endpoint resource you created. Step24: Compute instance scaling You have several choices on scaling the compute instances for handling your online prediction requests Step25: Deploy Model resource to the Endpoint resource Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters Step26: Make a online prediction request Now do a online prediction to your deployed model. Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. Step27: Make a prediction Now you have a test item. Use this helper function predict_item, which takes the following parameters Step28: Undeploy the Model resource Now undeploy your Model resource from the serving Endpoint resoure. 
Use this helper function undeploy_model, which takes the following parameters Step29: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install -U google-cloud-aiplatform $USER_FLAG Explanation: Vertex client library: AutoML image object detection model for online prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to create image object detection models and do online prediction using Google Cloud's AutoML. Dataset The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. Objective In this tutorial, you create an AutoML image object detection model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Deploy the Model resource to a serving Endpoint resource. Make a prediction. Undeploy the Model. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library. End of explanation ! pip3 install -U google-cloud-storage $USER_FLAG Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. 
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. End of explanation REGION = "us-central1" # @param {type: "string"} Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value Explanation: Set up variables Next, set up some variables used throughout the tutorial. 
Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. End of explanation # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION Explanation: Vertex constants Setup up the following constants for Vertex: API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. End of explanation # Image Dataset type DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" # Image Labeling type LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml" # Image Training task TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml" Explanation: AutoML constants Set constants unique to AutoML datasets and training: Dataset Schemas: Tells the Dataset resource service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated). Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for. End of explanation # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_dataset_client(): client = aip.DatasetServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_pipeline_client(): client = aip.PipelineServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client def create_prediction_client(): client = aip.PredictionServiceClient(client_options=client_options) return client clients = {} clients["dataset"] = create_dataset_client() clients["model"] = create_model_client() clients["pipeline"] = create_pipeline_client() clients["endpoint"] = create_endpoint_client() clients["prediction"] = create_prediction_client() for client in clients.items(): print(client) Explanation: Tutorial Now you are ready to start creating your own AutoML image object detection model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. Endpoint Service for deployment. Prediction Service for serving. 
End of explanation TIMEOUT = 90 def create_dataset(name, schema, labels=None, timeout=TIMEOUT): start_time = time.time() try: dataset = aip.Dataset( display_name=name, metadata_schema_uri=schema, labels=labels ) operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset) print("Long running operation:", operation.operation.name) result = operation.result(timeout=TIMEOUT) print("time:", time.time() - start_time) print("response") print(" name:", result.name) print(" display_name:", result.display_name) print(" metadata_schema_uri:", result.metadata_schema_uri) print(" metadata:", dict(result.metadata)) print(" create_time:", result.create_time) print(" update_time:", result.update_time) print(" etag:", result.etag) print(" labels:", dict(result.labels)) return result except Exception as e: print("exception:", e) return None result = create_dataset("salads-" + TIMESTAMP, DATA_SCHEMA) Explanation: Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following: Uses the dataset client service. Creates an Vertex Dataset resource (aip.Dataset), with the following parameters: display_name: The human-readable name you choose to give it. metadata_schema_uri: The schema for the dataset type. Calls the client dataset service method create_dataset, with the following parameters: parent: The Vertex location root path for your Database, Model and Endpoint resources. dataset: The Vertex dataset object instance you created. The method returns an operation object. An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning. You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method: | Method | Description | | ----------- | ----------- | | result() | Waits for the operation to complete and returns a result object in JSON format. | | running() | Returns True/False on whether the operation is still running. | | done() | Returns True/False on whether the operation is completed. | | canceled() | Returns True/False on whether the operation was canceled. | | cancel() | Cancels the operation (this may take up to 30 seconds). | End of explanation # The full unique ID for the dataset dataset_id = result.name # The short numeric ID for the dataset dataset_short_id = dataset_id.split("/")[-1] print(dataset_id) Explanation: Now save the unique dataset identifier for the Dataset resource instance you created. End of explanation IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv" Explanation: Data preparation The Vertex Dataset resource for images has some requirements for your data: Images must be stored in a Cloud Storage bucket. Each image file must be in an image format (PNG, JPEG, BMP, ...). There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image. The index file must be either CSV or JSONL. CSV For image object detection, the CSV index file has the requirements: No heading. First column is the Cloud Storage path to the image. Second column is the label. Third/Fourth columns are the upper left corner of bounding box. 
Coordinates are normalized, between 0 and 1. Fifth/Sixth/Seventh columns are not used and should be 0. Eighth/Ninth columns are the lower right corner of the bounding box. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head Explanation: Quick peek at your data You will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. End of explanation def import_data(dataset, gcs_sources, schema): config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}] print("dataset:", dataset_id) start_time = time.time() try: operation = clients["dataset"].import_data( name=dataset_id, import_configs=config ) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print( "after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled(), ) return operation except Exception as e: print("exception:", e) return None import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA) Explanation: Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following: Uses the Dataset client. Calls the client method import_data, with the following parameters: name: The human readable name you give to the Dataset resource (e.g., salads). import_configs: The import configuration. import_configs: A Python list containing a dictionary, with the key/value entries: gcs_sources: A list of URIs to the paths of the one or more index files. import_schema_uri: The schema identifying the labeling type. The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break. End of explanation def create_pipeline(pipeline_name, model_name, dataset, schema, task): dataset_id = dataset.split("/")[-1] input_config = { "dataset_id": dataset_id, "fraction_split": { "training_fraction": 0.8, "validation_fraction": 0.1, "test_fraction": 0.1, }, } training_pipeline = { "display_name": pipeline_name, "training_task_definition": schema, "training_task_inputs": task, "input_data_config": input_config, "model_to_upload": {"display_name": model_name}, } try: pipeline = clients["pipeline"].create_training_pipeline( parent=PARENT, training_pipeline=training_pipeline ) print(pipeline) except Exception as e: print("exception:", e) return None return pipeline Explanation: Train the model Now train an AutoML image object detection model using your Vertex Dataset resource. To train the model, do the following steps: Create an Vertex training pipeline for the Dataset resource. Execute the pipeline to start the training. Create a training pipeline You may ask, what do we use a pipeline for? 
You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of: Being reusable for subsequent training jobs. Can be containerized and ran as a batch job. Can be distributed. All the steps are associated with the same pipeline job for tracking progress. Use this helper function create_pipeline, which takes the following parameters: pipeline_name: A human readable name for the pipeline job. model_name: A human readable name for the model. dataset: The Vertex fully qualified dataset identifier. schema: The dataset labeling (annotation) training schema. task: A dictionary describing the requirements for the training job. The helper function calls the Pipeline client service'smethod create_pipeline, which takes the following parameters: parent: The Vertex location root path for your Dataset, Model and Endpoint resources. training_pipeline: the full specification for the pipeline training job. Let's look now deeper into the minimal requirements for constructing a training_pipeline specification: display_name: A human readable name for the pipeline job. training_task_definition: The dataset labeling (annotation) training schema. training_task_inputs: A dictionary describing the requirements for the training job. model_to_upload: A human readable name for the model. input_data_config: The dataset specification. dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML. End of explanation PIPE_NAME = "salads_pipe-" + TIMESTAMP MODEL_NAME = "salads_model-" + TIMESTAMP task = json_format.ParseDict( { "budget_milli_node_hours": 20000, "model_type": "CLOUD_HIGH_ACCURACY_1", "disable_early_stopping": False, }, Value(), ) response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task) Explanation: Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields you need to specify are: budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image object detection, the budget must be a minimum of 20 hours. model_type: The type of deployed model: CLOUD_HIGH_ACCURACY_1: For deploying to Google Cloud and optimizing for accuracy. CLOUD_LOW_LATENCY_1: For deploying to Google Cloud and optimizing for latency (response time), MOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy. MOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time). MOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade off between latency and accuracy. disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget. Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object. 
End of explanation # The full unique ID for the pipeline pipeline_id = response.name # The short numeric ID for the pipeline pipeline_short_id = pipeline_id.split("/")[-1] print(pipeline_id) Explanation: Now save the unique identifier of the training pipeline you created. End of explanation def get_training_pipeline(name, silent=False): response = clients["pipeline"].get_training_pipeline(name=name) if silent: return response print("pipeline") print(" name:", response.name) print(" display_name:", response.display_name) print(" state:", response.state) print(" training_task_definition:", response.training_task_definition) print(" training_task_inputs:", dict(response.training_task_inputs)) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", dict(response.labels)) return response response = get_training_pipeline(pipeline_id) Explanation: Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter: name: The Vertex fully qualified pipeline identifier. When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED. End of explanation while True: response = get_training_pipeline(pipeline_id, True) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) model_to_deploy_id = None if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: raise Exception("Training Job Failed") else: model_to_deploy = response.model_to_upload model_to_deploy_id = model_to_deploy.name print("Training Time:", response.end_time - response.start_time) break time.sleep(60) print("model to deploy:", model_to_deploy_id) Explanation: Deployment Training the above model may take upwards of 60 minutes time. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name. End of explanation def list_model_evaluations(name): response = clients["model"].list_model_evaluations(parent=name) for evaluation in response: print("model_evaluation") print(" name:", evaluation.name) print(" metrics_schema_uri:", evaluation.metrics_schema_uri) metrics = json_format.MessageToDict(evaluation._pb.metrics) for metric in metrics.keys(): print(metric) print("evaluatedBoundingBoxCount", metrics["evaluatedBoundingBoxCount"]) print( "boundingBoxMeanAveragePrecision", metrics["boundingBoxMeanAveragePrecision"], ) return evaluation.name last_evaluation = list_model_evaluations(model_to_deploy_id) Explanation: Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. 
List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter: name: The Vertex fully qualified model identifier for the Model resource. This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric. For each evaluation -- you probably only have one, we then print all the key names for each metric in the evaluation, and for a small set (evaluatedBoundingBoxCount and boundingBoxMeanAveragePrecision) you will print the result. End of explanation ENDPOINT_NAME = "salads_endpoint-" + TIMESTAMP def create_endpoint(display_name): endpoint = {"display_name": display_name} response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint) print("Long running operation:", response.operation.name) result = response.result(timeout=300) print("result") print(" name:", result.name) print(" display_name:", result.display_name) print(" description:", result.description) print(" labels:", result.labels) print(" create_time:", result.create_time) print(" update_time:", result.update_time) return result result = create_endpoint(ENDPOINT_NAME) Explanation: Deploy the Model resource Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. Create an Endpoint resource Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter: display_name: A human readable name for the Endpoint resource. The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter: display_name: A human readable name for the Endpoint resource. Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name. End of explanation # The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id) Explanation: Now get the unique identifier for the Endpoint resource you created. End of explanation MIN_NODES = 1 MAX_NODES = 1 Explanation: Compute instance scaling You have several choices on scaling the compute instances for handling your online prediction requests: Single Instance: The online prediction requests are processed on a single compute instance. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one. Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them. Auto Scaling: The online prediction requests are split across a scaleable number of compute instances. 
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions. The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request. End of explanation DEPLOYED_NAME = "salads_deployed-" + TIMESTAMP def deploy_model( model, deployed_model_display_name, endpoint, traffic_split={"0": 100} ): deployed_model = { "model": model, "display_name": deployed_model_display_name, "automatic_resources": { "min_replica_count": MIN_NODES, "max_replica_count": MAX_NODES, }, } response = clients["endpoint"].deploy_model( endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split ) print("Long running operation:", response.operation.name) result = response.result() print("result") deployed_model = result.deployed_model print(" deployed_model") print(" id:", deployed_model.id) print(" model:", deployed_model.model) print(" display_name:", deployed_model.display_name) print(" create_time:", deployed_model.create_time) return deployed_model.id deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id) Explanation: Deploy Model resource to the Endpoint resource Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters: model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline. deploy_model_display_name: A human readable name for the deployed model. endpoint: The Vertex fully qualified endpoint identifier to deploy the model to. The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters: endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to. deployed_model: The requirements specification for deploying the model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100. Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields: model: The Vertex fully qualified model identifier of the (upload) model to deploy. display_name: A human readable name for the deployed model. disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production. automatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication). Traffic Split Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. 
Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance. Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only get's say 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision. Response The method returns a long running operation response. We will wait sychronously for the operation to complete by calling the response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources. End of explanation test_items = !gsutil cat $IMPORT_FILE | head -n1 cols = str(test_items[0]).split(",") if len(cols) == 11: test_item = str(cols[1]) test_label = str(cols[2]) else: test_item = str(cols[0]) test_label = str(cols[1]) print(test_item, test_label) Explanation: Make a online prediction request Now do a online prediction to your deployed model. Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. End of explanation import base64 import tensorflow as tf def predict_item(filename, endpoint, parameters_dict): parameters = json_format.ParseDict(parameters_dict, Value()) with tf.io.gfile.GFile(filename, "rb") as f: content = f.read() # The format of each instance should conform to the deployed model's prediction input schema. instances_list = [{"content": base64.b64encode(content).decode("utf-8")}] instances = [json_format.ParseDict(s, Value()) for s in instances_list] response = clients["prediction"].predict( endpoint=endpoint, instances=instances, parameters=parameters ) print("response") print(" deployed_model_id:", response.deployed_model_id) predictions = response.predictions print("predictions") for prediction in predictions: print(" prediction:", dict(prediction)) predict_item(test_item, endpoint_id, {"confidenceThreshold": 0.5, "maxPredictions": 2}) Explanation: Make a prediction Now you have a test item. Use this helper function predict_item, which takes the following parameters: filename: The Cloud Storage path to the test item. endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed. parameters_dict: Additional filtering parameters for serving prediction results. This function calls the prediction client service's predict method with the following parameters: endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed. instances: A list of instances (encoded images) to predict. parameters: Additional parameters for serving. confidence_threshold: The threshold for returning predictions. Must be between 0 and 1. max_predictions: The maximum number of predictions per object to return, sorted by confidence. You might ask, how does confidence_threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision. 
- Precision: The higher the precision the more likely what is predicted is the correct prediction, but return fewer predictions. Increasing the confidence threshold increases precision. - Recall: The higher the recall the more likely a correct prediction is returned in the result, but return more prediction with incorrect prediction. Decreasing the confidence threshold increases recall. In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for an object to two. Since, all the confidence values across the classes must add up to one, there are only two possible outcomes: 1. There is a tie, both 0.5, and returns two predictions. 2. One value is above 0.5 and all the rest are below 0.5, and returns one prediction. Request Since in this example your test item is in a Cloud Storage bucket, you will open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, we will encode the bytes into base 64 -- This makes binary data safe from modification while it is transferred over the Internet. The format of each instance is: { 'content': { 'b64': [base64_encoded_bytes] } } Since the predict() method can take multiple items (instances), you send our single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method. Response The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in our case there is just one: confidences: Confidence level in the prediction. displayNames: The predicted label. bboxes: The bounding box for the label. End of explanation def undeploy_model(deployed_model_id, endpoint): response = clients["endpoint"].undeploy_model( endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={} ) print(response) undeploy_model(deployed_model_id, endpoint_id) Explanation: Undeploy the Model resource Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters: deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to. endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to. This function calls the endpoint client service's method undeploy_model, with the following parameters: deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed. endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed. traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource. Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}. 
End of explanation delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and "dataset_id" in globals(): clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and "pipeline_id" in globals(): clients["pipeline"].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and "model_to_deploy_id" in globals(): clients["model"].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and "endpoint_id" in globals(): clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and "batch_job_id" in globals(): clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and "job_id" in globals(): clients["job"].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and "hpt_job_id" in globals(): clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation
3,641
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Functions</h1> <h2>Calling a function</h2> Step1: <h2>Installing libraries and importing functions</h2> Step2: <h2>Importing functions</h2> Step3: <h3>Returning values from a function</h3> The <b>return</b> statement tells a function what to return to the calling program Step4: <h3>If no return statement, python returns None </h3> Step5: <h3>Returning multiple values</h3> Step6: <h4>Python unpacks the returned value into each of a,b, and c. If there is only one identifier on the LHS, it won't unpack</h4> Step7: <h4>If there is a mismatch between the number of identifiers on the LHS and the number of values returned, you'll get an error</h4> Step8: <h2>Value assignment to arguments</h2> <li>Left to right <li>Unless explicitly assigned to the argument identifiers in the function definition Step9: <h2>A function can have function arguments</h2>
Python Code: x=5 y=7 z=max(x,y) #max is the function. x and y are the arguments print(z) #print is the function. z is the argument Explanation: <h1>Functions</h1> <h2>Calling a function</h2> End of explanation !pip install easygui #pip: python installer program # ! run the program from the shell (not from python) # easygui: a python library for GUI widgets import easygui #Imports easygui into the current namespace. We now have access to functiona and objects in this library easygui.msgbox("To be or not to be","What Hamlet elocuted") #msgbox is a function in easygui. Explanation: <h2>Installing libraries and importing functions</h2> End of explanation import math #imports the math namespace into our program namespace math.sqrt(34.23) #Functions in the math namespace have to be disambiguated import math as m #imports the math namespace into our program namespace but gives it the name 'm' m.sqrt(34.23) #Functions in the math namespace have to be disambiguated using the name 'm' rather than 'math' from math import sqrt #imports the sqrt function into our program namespace. No other math functions are accessible sqrt(34.23) #No disambiguation necessary Explanation: <h2>Importing functions</h2> End of explanation def spam(x,y,k): if x>y: z=x else: z=y p = z/k return p #Only the value of p is returned by the function spam(6,4,2) Explanation: <h3>Returning values from a function</h3> The <b>return</b> statement tells a function what to return to the calling program End of explanation def eggs(x,y): z = x/y print(eggs(4,2)) Explanation: <h3>If no return statement, python returns None </h3> End of explanation def foo(x,y,z): if z=="DESCENDING": return max(x,y),min(x,y),z if z=="ASCENDING": return min(x,y),max(x,y),z else: return x,y,z a,b,c = foo(4,2,"ASCENDING") print(a,b,c) Explanation: <h3>Returning multiple values</h3> End of explanation a = foo(4,2,"ASCENDING") print(a) Explanation: <h4>Python unpacks the returned value into each of a,b, and c. If there is only one identifier on the LHS, it won't unpack</h4> End of explanation a,b = foo(4,2,"DESCENDING") Explanation: <h4>If there is a mismatch between the number of identifiers on the LHS and the number of values returned, you'll get an error</h4> End of explanation def bar(x,y): return x/y bar(4,2) #x takes the value 4 and y takes the value 2 def bar(x,y): return x/y bar(y=4,x=2) #x takes the value 2 and y takes the value 4 (Explicit assignment) Explanation: <h2>Value assignment to arguments</h2> <li>Left to right <li>Unless explicitly assigned to the argument identifiers in the function definition End of explanation def order_by(a,b,order_function): return order_function(a,b) print(order_by(4,2,min)) print(order_by(4,2,max)) def change(x): x = (1,) print(x) x = (1, 2) change(x) print(x) def replace(test_string, replace_string): start_index = test_string.find(replace_string) result = "" x = "bodega" if start_index >= 0: result = test_string[start_index:start_index+len(replace_string)] result = test_string.replace(result,x) return result print(replace("Hi how are you?", "yu")) Explanation: <h2>A function can have function arguments</h2> End of explanation
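A short follow-up sketch (added for illustration, not part of the original notebook): the order_by helper defined above works with any callable, including user-defined functions, and its arguments can also be passed by keyword in any order.
def absolute_larger(a, b):
    # return whichever argument has the larger absolute value
    return a if abs(a) >= abs(b) else b

print(order_by(-10, 3, absolute_larger))          # -10
print(order_by(order_function=min, a=-10, b=3))   # keyword arguments: prints -10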
3,642
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I am trying to vectorize some data using
Problem: import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer corpus = [ 'We are looking for Java developer', 'Frontend developer with knowledge in SQL and Jscript', 'And this is the third one.', 'Is this the first document?', ] vectorizer = CountVectorizer(stop_words="english", binary=True, lowercase=False, vocabulary=['Jscript', '.Net', 'TypeScript', 'SQL', 'NodeJS', 'Angular', 'Mongo', 'CSS', 'Python', 'PHP', 'Photoshop', 'Oracle', 'Linux', 'C++', "Java", 'TeamCity', 'Frontend', 'Backend', 'Full stack', 'UI Design', 'Web', 'Integration', 'Database design', 'UX']) X = vectorizer.fit_transform(corpus).toarray() feature_names = vectorizer.get_feature_names_out()
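A possible follow-up (a hedged usage sketch, not part of the original snippet): wrapping the binary term-document matrix in a pandas DataFrame makes it easy to check which vocabulary terms were detected in each job description. X and feature_names are the arrays produced above.
df = pd.DataFrame(X, columns=feature_names, index=corpus)
print(df[['Java', 'Frontend', 'SQL', 'Jscript']])   # 1 where the term appears in the text, 0 otherwise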
3,643
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); Step1: Optimizers in TensorFlow Probability <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step4: BFGS and L-BFGS Optimizers Quasi Newton methods are a class of popular first order optimization algorithm. These methods use a positive definite approximation to the exact Hessian to find the search direction. The Broyden-Fletcher-Goldfarb-Shanno algorithm (BFGS) is a specific implementation of this general idea. It is applicable and is the method of choice for medium sized problems where the gradient is continuous everywhere (e.g. linear regression with an $L_2$ penalty). L-BFGS is a limited-memory version of BFGS that is useful for solving larger problems whose Hessian matrices cannot be computed at a reasonable cost or are not sparse. Instead of storing fully dense $n \times n$ approximations of Hessian matrices, they only save a few vectors of length $n$ that represent these approximations implicitly. Step5: L-BFGS on a simple quadratic function Step6: Same problem with BFGS Step9: Linear Regression with L1 penalty Step11: Problem definition Step12: Solving with L-BFGS Fit using L-BFGS. Even though the L1 penalty introduces derivative discontinuities, in practice, L-BFGS works quite well still. Step13: Solving with Nelder Mead The Nelder Mead method is one of the most popular derivative free minimization methods. This optimizer doesn't use gradient information and makes no assumptions on the differentiability of the target function; it is therefore appropriate for non-smooth objective functions, for example optimization problems with L1 penalty. For an optimization problem in $n$-dimensions it maintains a set of $n+1$ candidate solutions that span a non-degenerate simplex. It successively modifies the simplex based on a set of moves (reflection, expansion, shrinkage and contraction) using the function values at each of the vertices. Step15: Logistic Regression with L2 penalty For this example, we create a synthetic data set for classification and use the L-BFGS optimizer to fit the parameters. Step16: Batching support Both BFGS and L-BFGS support batched computation, for example to optimize a single function from many different starting points; or multiple parametric functions from a single point. Single function, multiple starting points Himmelblau's function is a standard optimization test case. The function is given by Step17: Multiple functions For demonstration purposes, in this example we simultaneously optimize a large number of high dimensional randomly generated quadratic bowls.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation #@title Import { display-mode: "form" } %matplotlib inline import contextlib import functools import os import time import numpy as np import pandas as pd import scipy as sp from six.moves import urllib from sklearn import preprocessing import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp Explanation: Optimizers in TensorFlow Probability <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Optimizers_in_TensorFlow_Probability"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Abstract In this colab we demonstrate how to use the various optimizers implemented in TensorFlow Probability. Dependencies & Prerequisites End of explanation #@title Helper functions CACHE_DIR = os.path.join(os.sep, 'tmp', 'datasets') def make_val_and_grad_fn(value_fn): @functools.wraps(value_fn) def val_and_grad(x): return tfp.math.value_and_gradient(value_fn, x) return val_and_grad @contextlib.contextmanager def timed_execution(): t0 = time.time() yield dt = time.time() - t0 print('Evaluation took: %f seconds' % dt) def np_value(tensor): Get numpy value out of possibly nested tuple of tensors. if isinstance(tensor, tuple): return type(tensor)(*(np_value(t) for t in tensor)) else: return tensor.numpy() def run(optimizer): Run an optimizer and measure it's evaluation time. optimizer() # Warmup. with timed_execution(): result = optimizer() return np_value(result) Explanation: BFGS and L-BFGS Optimizers Quasi Newton methods are a class of popular first order optimization algorithm. These methods use a positive definite approximation to the exact Hessian to find the search direction. The Broyden-Fletcher-Goldfarb-Shanno algorithm (BFGS) is a specific implementation of this general idea. It is applicable and is the method of choice for medium sized problems where the gradient is continuous everywhere (e.g. 
linear regression with an $L_2$ penalty). L-BFGS is a limited-memory version of BFGS that is useful for solving larger problems whose Hessian matrices cannot be computed at a reasonable cost or are not sparse. Instead of storing fully dense $n \times n$ approximations of Hessian matrices, they only save a few vectors of length $n$ that represent these approximations implicitly. End of explanation # Fix numpy seed for reproducibility np.random.seed(12345) # The objective must be supplied as a function that takes a single # (Tensor) argument and returns a tuple. The first component of the # tuple is the value of the objective at the supplied point and the # second value is the gradient at the supplied point. The value must # be a scalar and the gradient must have the same shape as the # supplied argument. # The `make_val_and_grad_fn` decorator helps transforming a function # returning the objective value into one that returns both the gradient # and the value. It also works for both eager and graph mode. dim = 10 minimum = np.ones([dim]) scales = np.exp(np.random.randn(dim)) @make_val_and_grad_fn def quadratic(x): return tf.reduce_sum(scales * (x - minimum) ** 2, axis=-1) # The minimization routine also requires you to supply an initial # starting point for the search. For this example we choose a random # starting point. start = np.random.randn(dim) # Finally an optional argument called tolerance let's you choose the # stopping point of the search. The tolerance specifies the maximum # (supremum) norm of the gradient vector at which the algorithm terminates. # If you don't have a specific need for higher or lower accuracy, leaving # this parameter unspecified (and hence using the default value of 1e-8) # should be good enough. tolerance = 1e-10 @tf.function def quadratic_with_lbfgs(): return tfp.optimizer.lbfgs_minimize( quadratic, initial_position=tf.constant(start), tolerance=tolerance) results = run(quadratic_with_lbfgs) # The optimization results contain multiple pieces of information. The most # important fields are: 'converged' and 'position'. # Converged is a boolean scalar tensor. As the name implies, it indicates # whether the norm of the gradient at the final point was within tolerance. # Position is the location of the minimum found. It is important to check # that converged is True before using the value of the position. print('L-BFGS Results') print('Converged:', results.converged) print('Location of the minimum:', results.position) print('Number of iterations:', results.num_iterations) Explanation: L-BFGS on a simple quadratic function End of explanation @tf.function def quadratic_with_bfgs(): return tfp.optimizer.bfgs_minimize( quadratic, initial_position=tf.constant(start), tolerance=tolerance) results = run(quadratic_with_bfgs) print('BFGS Results') print('Converged:', results.converged) print('Location of the minimum:', results.position) print('Number of iterations:', results.num_iterations) Explanation: Same problem with BFGS End of explanation def cache_or_download_file(cache_dir, url_base, filename): Read a cached file or download it. filepath = os.path.join(cache_dir, filename) if tf.io.gfile.exists(filepath): return filepath if not tf.io.gfile.exists(cache_dir): tf.io.gfile.makedirs(cache_dir) url = url_base + filename print("Downloading {url} to {filepath}.".format(url=url, filepath=filepath)) urllib.request.urlretrieve(url, filepath) return filepath def get_prostate_dataset(cache_dir=CACHE_DIR): Download the prostate dataset and read as Pandas dataframe. 
url_base = 'http://web.stanford.edu/~hastie/ElemStatLearn/datasets/' return pd.read_csv( cache_or_download_file(cache_dir, url_base, 'prostate.data'), delim_whitespace=True, index_col=0) prostate_df = get_prostate_dataset() Explanation: Linear Regression with L1 penalty: Prostate Cancer data Example from the Book: The Elements of Statistical Learning, Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman. Note this is an optimization problem with L1 penalty. Obtain dataset End of explanation np.random.seed(12345) feature_names = ['lcavol', 'lweight', 'age', 'lbph', 'svi', 'lcp', 'gleason', 'pgg45'] # Normalize features scalar = preprocessing.StandardScaler() prostate_df[feature_names] = pd.DataFrame( scalar.fit_transform( prostate_df[feature_names].astype('float64'))) # select training set prostate_df_train = prostate_df[prostate_df.train == 'T'] # Select features and labels features = prostate_df_train[feature_names] labels = prostate_df_train[['lpsa']] # Create tensors feat = tf.constant(features.values, dtype=tf.float64) lab = tf.constant(labels.values, dtype=tf.float64) dtype = feat.dtype regularization = 0 # regularization parameter dim = 8 # number of features # We pick a random starting point for the search start = np.random.randn(dim + 1) def regression_loss(params): Compute loss for linear regression model with L1 penalty Args: params: A real tensor of shape [dim + 1]. The zeroth component is the intercept term and the rest of the components are the beta coefficients. Returns: The mean square error loss including L1 penalty. params = tf.squeeze(params) intercept, beta = params[0], params[1:] pred = tf.matmul(feat, tf.expand_dims(beta, axis=-1)) + intercept mse_loss = tf.reduce_sum( tf.cast( tf.losses.mean_squared_error(y_true=lab, y_pred=pred), tf.float64)) l1_penalty = regularization * tf.reduce_sum(tf.abs(beta)) total_loss = mse_loss + l1_penalty return total_loss Explanation: Problem definition End of explanation @tf.function def l1_regression_with_lbfgs(): return tfp.optimizer.lbfgs_minimize( make_val_and_grad_fn(regression_loss), initial_position=tf.constant(start), tolerance=1e-8) results = run(l1_regression_with_lbfgs) minimum = results.position fitted_intercept = minimum[0] fitted_beta = minimum[1:] print('L-BFGS Results') print('Converged:', results.converged) print('Intercept: Fitted ({})'.format(fitted_intercept)) print('Beta: Fitted {}'.format(fitted_beta)) Explanation: Solving with L-BFGS Fit using L-BFGS. Even though the L1 penalty introduces derivative discontinuities, in practice, L-BFGS works quite well still. End of explanation # Nelder mead expects an initial_vertex of shape [n + 1, 1]. initial_vertex = tf.expand_dims(tf.constant(start, dtype=dtype), axis=-1) @tf.function def l1_regression_with_nelder_mead(): return tfp.optimizer.nelder_mead_minimize( regression_loss, initial_vertex=initial_vertex, func_tolerance=1e-10, position_tolerance=1e-10) results = run(l1_regression_with_nelder_mead) minimum = results.position.reshape([-1]) fitted_intercept = minimum[0] fitted_beta = minimum[1:] print('Nelder Mead Results') print('Converged:', results.converged) print('Intercept: Fitted ({})'.format(fitted_intercept)) print('Beta: Fitted {}'.format(fitted_beta)) Explanation: Solving with Nelder Mead The Nelder Mead method is one of the most popular derivative free minimization methods. 
This optimizer doesn't use gradient information and makes no assumptions on the differentiability of the target function; it is therefore appropriate for non-smooth objective functions, for example optimization problems with L1 penalty. For an optimization problem in $n$-dimensions it maintains a set of $n+1$ candidate solutions that span a non-degenerate simplex. It successively modifies the simplex based on a set of moves (reflection, expansion, shrinkage and contraction) using the function values at each of the vertices. End of explanation np.random.seed(12345) dim = 5 # The number of features n_obs = 10000 # The number of observations betas = np.random.randn(dim) # The true beta intercept = np.random.randn() # The true intercept features = np.random.randn(n_obs, dim) # The feature matrix probs = sp.special.expit( np.matmul(features, np.expand_dims(betas, -1)) + intercept) labels = sp.stats.bernoulli.rvs(probs) # The true labels regularization = 0.8 feat = tf.constant(features) lab = tf.constant(labels, dtype=feat.dtype) @make_val_and_grad_fn def negative_log_likelihood(params): Negative log likelihood for logistic model with L2 penalty Args: params: A real tensor of shape [dim + 1]. The zeroth component is the intercept term and the rest of the components are the beta coefficients. Returns: The negative log likelihood plus the penalty term. intercept, beta = params[0], params[1:] logit = tf.matmul(feat, tf.expand_dims(beta, -1)) + intercept log_likelihood = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits( labels=lab, logits=logit)) l2_penalty = regularization * tf.reduce_sum(beta ** 2) total_loss = log_likelihood + l2_penalty return total_loss start = np.random.randn(dim + 1) @tf.function def l2_regression_with_lbfgs(): return tfp.optimizer.lbfgs_minimize( negative_log_likelihood, initial_position=tf.constant(start), tolerance=1e-8) results = run(l2_regression_with_lbfgs) minimum = results.position fitted_intercept = minimum[0] fitted_beta = minimum[1:] print('Converged:', results.converged) print('Intercept: Fitted ({}), Actual ({})'.format(fitted_intercept, intercept)) print('Beta:\n\tFitted {},\n\tActual {}'.format(fitted_beta, betas)) Explanation: Logistic Regression with L2 penalty For this example, we create a synthetic data set for classification and use the L-BFGS optimizer to fit the parameters. End of explanation # The function to minimize must take as input a tensor of shape [..., n]. In # this n=2 is the size of the domain of the input and [...] are batching # dimensions. The return value must be of shape [...], i.e. a batch of scalars # with the objective value of the function evaluated at each input point. @make_val_and_grad_fn def himmelblau(coord): x, y = coord[..., 0], coord[..., 1] return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2 starts = tf.constant([[1, 1], [-2, 2], [-1, -1], [1, -2]], dtype='float64') # The stopping_condition allows to further specify when should the search stop. # The default, tfp.optimizer.converged_all, will proceed until all points have # either converged or failed. There is also a tfp.optimizer.converged_any to # stop as soon as the first point converges, or all have failed. 
@tf.function def batch_multiple_starts(): return tfp.optimizer.lbfgs_minimize( himmelblau, initial_position=starts, stopping_condition=tfp.optimizer.converged_all, tolerance=1e-8) results = run(batch_multiple_starts) print('Converged:', results.converged) print('Minima:', results.position) Explanation: Batching support Both BFGS and L-BFGS support batched computation, for example to optimize a single function from many different starting points; or multiple parametric functions from a single point. Single function, multiple starting points Himmelblau's function is a standard optimization test case. The function is given by: $$f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2$$ The function has four minima located at: - (3, 2), - (-2.805118, 3.131312), - (-3.779310, -3.283186), - (3.584428, -1.848126). All these minima may be reached from appropriate starting points. End of explanation np.random.seed(12345) dim = 100 batches = 500 minimum = np.random.randn(batches, dim) scales = np.exp(np.random.randn(batches, dim)) @make_val_and_grad_fn def quadratic(x): return tf.reduce_sum(input_tensor=scales * (x - minimum)**2, axis=-1) # Make all starting points (1, 1, ..., 1). Note not all starting points need # to be the same. start = tf.ones((batches, dim), dtype='float64') @tf.function def batch_multiple_functions(): return tfp.optimizer.lbfgs_minimize( quadratic, initial_position=start, stopping_condition=tfp.optimizer.converged_all, max_iterations=100, tolerance=1e-8) results = run(batch_multiple_functions) print('All converged:', np.all(results.converged)) print('Largest error:', np.max(results.position - minimum)) Explanation: Multiple functions For demonstration purposes, in this example we simultaneously optimize a large number of high dimensional randomly generated quadratic bowls. End of explanation
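As a compact recap of the recipe used throughout these examples, here is a minimal sketch (not from the original notebook; the objective function and starting point are arbitrary) of the bare pattern: supply a function returning (value, gradient), call the optimizer, and check convergence before using the result.
# Minimal L-BFGS sketch; assumes the TF/TFP imports from the setup cell above.
def shifted_quadratic(x):
    # Simple smooth objective with its minimum at x = [2, 2].
    return tf.reduce_sum((x - 2.0) ** 2, axis=-1)

@tf.function
def run_lbfgs():
    return tfp.optimizer.lbfgs_minimize(
        lambda x: tfp.math.value_and_gradient(shifted_quadratic, x),
        initial_position=tf.constant([0.0, 0.0]),
        tolerance=1e-8)

res = run_lbfgs()
# Always check convergence before trusting the position.
print(res.converged.numpy(), res.position.numpy())   # expected: True [2. 2.]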
3,644
Given the following text description, write Python code to implement the functionality described below step by step Description: STFT Analysis/Synthesis - MusicBricks Tutorial Introduction This tutorial will guide you through some tools for performing spectral analysis and synthesis using the Essentia library (http Step1: After importing Essentia library, let's import other numerical and plotting tools Step2: Define the parameters of the STFT workflow Step3: Specify input and output audio filenames Step4: Define algorithm chain for frame-by-frame process Step5: Now we set the algorithm network and store the processed audio samples in the output file Step6: Finally we run the process that will store an output file in a WAV file
Python Code: # import essentia in streaming mode import essentia import essentia.streaming as es Explanation: STFT Analysis/Synthesis - MusicBricks Tutorial Introduction This tutorial will guide you through some tools for performing spectral analysis and synthesis using the Essentia library (http://www.essentia.upf.edu). STFT stands for Short-Time Fourier Transform and it processes an input audio signal as a sequence of spectral frames. Spectral frames are complex-valued arrays contain the frequency representation of the windowed input signal. This algorithm shows how to analyze the input signal, and resynthesize it again, allowing to apply new transformations directly on the spectral domain. You should first install the Essentia library with Python bindings. Installation instructions are detailed here: http://essentia.upf.edu/documentation/installing.html . Processing steps End of explanation # import matplotlib for plotting import matplotlib.pyplot as plt import numpy as np Explanation: After importing Essentia library, let's import other numerical and plotting tools End of explanation # algorithm parameters framesize = 1024 hopsize = 256 Explanation: Define the parameters of the STFT workflow End of explanation inputFilename = 'singing-female.wav' outputFilename = 'singing-female-stft.wav' # create an audio loader and import audio file out = np.array(0) loader = es.MonoLoader(filename = inputFilename, sampleRate = 44100) pool = essentia.Pool() Explanation: Specify input and output audio filenames End of explanation # algorithm instantation fcut = es.FrameCutter(frameSize = framesize, hopSize = hopsize, startFromZero = False); w = es.Windowing(type = "hann"); fft = es.FFT(size = framesize); ifft = es.IFFT(size = framesize); overl = es.OverlapAdd (frameSize = framesize, hopSize = hopsize, gain = 1./framesize ); awrite = es.MonoWriter (filename = outputFilename, sampleRate = 44100); Explanation: Define algorithm chain for frame-by-frame process: FrameCutter -> Windowing -> FFT -> IFFT -> OverlapAdd -> AudioWriter End of explanation loader.audio >> fcut.signal fcut.frame >> w.frame w.frame >> fft.frame fft.fft >> ifft.fft ifft.frame >> overl.frame overl.signal >> awrite.audio overl.signal >> (pool, 'audio') Explanation: Now we set the algorithm network and store the processed audio samples in the output file End of explanation essentia.run(loader) Explanation: Finally we run the process that will store an output file in a WAV file End of explanation
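For intuition about what the FrameCutter -> Windowing -> FFT -> IFFT -> OverlapAdd chain is doing, here is a rough NumPy-only illustration of the same analysis/resynthesis idea (an independent sketch, not the Essentia API; the input signal is a random stand-in for the loaded audio):
import numpy as np

framesize, hopsize = 1024, 256
x = np.random.randn(44100)                      # stand-in for the mono audio samples
window = np.hanning(framesize)

# Analysis: windowed frames -> complex short-time spectra
spectra = [np.fft.rfft(x[i:i+framesize] * window)
           for i in range(0, len(x) - framesize, hopsize)]

# (any spectral-domain transformation would be applied here)

# Synthesis: inverse FFT of each frame + weighted overlap-add
y = np.zeros_like(x)
norm = np.zeros_like(x)
for n, s in enumerate(spectra):
    start = n * hopsize
    y[start:start+framesize] += np.fft.irfft(s) * window
    norm[start:start+framesize] += window ** 2
y = np.where(norm > 1e-8, y / norm, y)          # undo the overlapping window gain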
3,645
Given the following text description, write Python code to implement the functionality described below step by step Description: Continuum subtraction The analysis and interpretation of spectral-line data is greatly simplified if all sources of continuum emission have already been removed from the data. This chapter describes the methods that can be used to subtract radio continuum emission from visibility datasets or datacubes, resulting in data that contain spectral-line emission and/or absorption only. Here we will adopt the following notation Step1: We create a datacube from the visibilities, and use MIRIAD's REGRID to remove the aforementioned pixel size scaling with frequency of the cube. Step2: The regridded cube can be visualised to show that the PSF sidelobes move significantly on the sky when going from one end to the other of the 7% fractional bandwidth. Step3: As explained above, this change in the sidelobes' position on the sky with frequency means that a cube-based continuum subtraction will not work well at the position of the sidelobes. To show this we subtract the continuum from the regridded cube using MIRIAD's CONTSUB and using a range of polynomial orders, and display the same three channels of the continuum-subtracted cube on the same grey scale. Step4: As expected, for this noise-less ideal case, the cube-based continuum subtraction works perfectly at the position of the flat-spectrum source regardless of the order of the polynomial fit. However, the sidelobe emission is not subtracted well. The quality of the sidelobe continuum subtraction increases with increasing order of the polynomial but some level of residuals will always be present. For N=1 the residual level is between 5 and 10 percent of the source flux density, while this level is about halved for N=3. The impact of these residuals on the scientific goals of an observation depends on the flux density of the sources in the fields and the PSF sidelobe level relative to the other sources of noise and artefacts in the cube. 2.3. Channel selection and deconvolved cubes The selection of line-free (and RFI-free) channels is critical for a correct polynomial fit and continuum subtraction. Since for a dirty cube the PSF spreads any line emission/absorption at a given channel to all spatial pixels of that channel, the line-free channel selection does not depend on position and must be applied to all $(l,m)$. In fact, this selection is not always straightforward when working with a dirty cube. For example, the continuum emission (sources and sidelobes) may be much brighter than the line emission/absorption, and a few iterations of deconvolution and continuum subtraction may be required before the line-free channels are correctly identified. A natural question is whether, since deconvolution may be required to identify the line-free channels, the cube-based continuum subtraction method could be applied directly to the deconvolved cube $I(l,m,\nu)$. This would have the advantage that the PSF sidelobes of both continuum and spectral-line sources have been removed and, therefore, PSF-related continuum subtraction errors would be minimised. Furthermore, since the line emission/absorption has been deconvolved too, and is now localised to a few small regions within each channel, the line-free channel selection could be made position dependent. This could be easily achieved by including a simple outlier rejection algorithm in the polynomial fit, and would maximise the number of fitted channels along each sightline.
(In practice, this is not possible in current implementation of the cube-based continuum subtraction in, e.g., CASA and MIRIAD, but has been tried outside these standard packages.) A singificant issue with subtracting the continuum from a deconvolved cube $I(l,m,\nu)$ is that deconvolution is non-linear and, therefore, leaves residuals and artefacts which vary from channel to channel. This would be particularly true for bright continuum emission and in the presence of significant calibration errors. The following step of continuum subtraction would not remove these artefacts. The final result may then be worse than one in which continuum subtraction is performed before deconvolution. In other words, continuum subtraction on the dirty cube $I^\mathrm{D}(l,m,\nu)$ is much more robust against calibration errors. For this reason, it may be better to attempt the cube-based continuum subtraction of a deconvolved cube only after the brightest continuum emission has been subtracted with a different method such as those described in Secs. 3 and 4. This combined approach is discussed in Sec. 5. 3. Visibility-based continuum subtraction 3.1. Basic method This approach consists of subtracting the continuum emission directly from the visibilities by modelling the continuum component $V_{ij,\mathrm{c}}(t,\nu)$ with a low order polynomial $V_{ij,\mathrm{c,model}}(t,\nu) = \sum_{n=0}^{N} a_{ij,n}(t)\ \nu^n$. This is done separately on the real and imaginary parts of each visibility spectrum. As for the cube-based continuum subtraction, the polynomial fit should only be run on line-free (and RFI-free) channels but their selection is not always straightforward. The line emission/absorption may be too faint to detect in individual visibility spectra, and a few iterations of continuum subtraction and spectral-line imaging may be required to identify the line-free channels correctly. Since all spectral line sources in the field contribute to all visibility spectra the line-free channel selection should be identical for all spectra. In fact, spatially-extended spectral line emission may be more significant on short baselines and, therefore, there is scope for a baseline dependent line-free channel selection. This could be easily achieved by including basic outlier rejection in the fit. The same could be useful to reject RFI from the fit. These advanced techniques are however not implemented in standard packages, e.g., CASA and MIRIAD. (CHECK!!!) 3.2. Limitations The visibility-based continuum subtraction works only as long as the polynomial approximation for real and imaginary part of $V_{ij,\mathrm{c}}(t,\nu)$ is valid. This approximation becomes progressively worse for larger distances from the phase centre, longer baselines and larger relative bandwidths, as we explain in what follows. The visibility of a unit point source at distance $\mathbf{s}$ from the phase-tracking centre is Step5: The figure shows that for a source at 10 arcmin from the phase centre fitting the continuum with a low-order polynomial does not work on a 2 km baseline, unless one has observed a significantly narrower bandwidth (e.g., 20 MHz instead of 100 MHz). On the contrary, for a 200 m baseline and/or for a source 1 arcmin away from the phase centre a low-order polynomial is a good approximation to the data. The same result can be obtained with a MIRIAD simulation identical to the one created in Sec. 2 except for the position of the point source, which we now place 10 arcmin north of the phase centre. 
The visibility spectra obtained this way are consistent with the ones shown above (right panels). Step6: 200m baseline <img src="sim02_200m.png" width="400"> 2km baseline <img src="sim02_2km.png" width="400"> One could be tempted to get around this limitation by shifting the phase centre to the position of the source that needs to be subtracted. The issue with this is that no source will ever be completely isolated, and each $V_{ij,\mathrm{c}}(t,\nu)$ "sees" other sources too. These sources will be at different positions and may be more difficult to subtract with the new phase centre. This highlights that this method of continuum subtraction is better suited for interferometers with a small primary beam size (i.e., larger dishes) as most continuum sources are detected close to the phase centre. The newest interferometers MeerKAT and ASKAP are built of smaller dishes and, therefore, their larger beams see sources out to larger distances from the phase centre. This makes visibility-based continuum subtraction less straightforward. We can use the same MIRIAD simulation above to have a look at the residuals left by this continuum-subtraction method in the visibilities as well as in the spectral line cube. Step7: 200m baseline <img src="sim02_vs_200m.png" width="400"> 2km baseline <img src="sim02_vs_2km.png" width="400"> As expected, the continuum is subtracted reasonably well on the short baseline but not on the long baseline. This will leave signatures in the spectral line cube, as we show below by displaying a few channels using the same grey scale adopted in Sec. 2. Step8: Another issue with the visibility-based continuum subtraction is that it modifies the noise characteristics of channels excluded from the polynomial fit relative to those included in it. EXPAND 3.3. Visibility-based continuum subtraction and calibration errors One significant advantage of this method is that it is insensitive to frequency-independent gain calibration errors. That is, once a good bandpass calibration has been achieved, the method works equally well regardless of whether a frequency-independent, time-dependent gain calibration has been performed. The reason is that visbility-based continuum subtraction works on each visibility spectrum (i.e.g, fixed time) independently. For each of these spectra the application of a frequency-independent gain calibration does not change the spectral shape and, therefore, the order of the polynomial required for a good fit of the coninuum. Of course, if the method is used on a dataset with significant gain calibration errors the resulting spectral line cube will show artefacts at the channels with significant emission/absorption. The advantage is that the level of those artefacts will depend on the brightness of the line signal and not of the continuum. This is important since the line signal is typically much fainter than the continuum one. 4. Model-based continuum subtraction 4.1. Basic method Both the cube-based and visibility-based continuum subctraction methods described above suffer from limitations that are related to chromatic effects. In the case of the cube-based continuum subtraction the issue is that the PSF changes with frequency and, therefore, the continuum spectral shape at the position of a source's sidelobes is complex and difficult to subtract. 
In the case of the visibility-based continuum subtraction the issue is that, since $u$ and $v$ change with frequency, the shape of the visibility spectrum of a source depends on the distance of the source from the phase centre -- and when the latter is significant continuum subtraction is challenging. The alternative method of performing a model-based continuum subtraction gets around these chromatic issues. It consists of modelling the radio continuum sky and subtracting the Fourier transform of the model from the visibilities. The Fourier transform is computed for each channel in the visibility dataset, and this properly takes all chromatic effects into account. Compared to cube- and visibility-based continuum subtraction this method is slower, especially if working on a large bandwidth, as this requires modelling and Fourier transforming not only the flux density but also the spectral shape of each source on the sky. For this reason in many cases it is preferable to use the other methods as long as they give results of sufficient quality. To illustrate this method we consider a model which combines the two models used in Secs. 2 and 3. It consists of a point source at the phase centre and another one with half the flux 10 arcmin north of the phase centre. In this case we include some noise in the simulations, corresponding to a noise level of ~1 mJy/beam in the cube. Step9: We can image these data and display a few channels before and after continuum subtraction to show, once again, the chromatic effect of the sidelobe movement as a function of frequency, which complicates cube-based continuum subtraction. Step10: Furthermore, we can also see for this new simulation the rapid variation of the visibility on long baselines, which complicates visibility-based continuum subtraction. Step11: 2km baseline before continuum subtraction <img src="sim03_2km.png" width="400"> 2km baseline after continuum subtraction <img src="sim03_vs_2km.png" width="400"> In what follows we show that the model-based continuum subtraction gets around these issues. We will use INVERT and CLEAN to make a multi-frequency-synthesis model of the continuum sky, Fourier transform it and subtract it from the visibilities. (Note that we use an image-based mask to define clean regions.)
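As a rough sketch of what such a model-based subtraction step can look like with the Run() helper used throughout this notebook (the image, model, mask and output names, and the exact INVERT/CLEAN/UVMODEL parameters below, are assumptions for illustration and may differ from the actual cell):
# Hedged sketch only -- file names and parameter values are assumed, not taken from the notebook.
for f in ['cont03','contbeam03','contmodel03','sim03_ms.uv']:
    if os.path.exists(f): shutil.rmtree(f)
# 1) Multi-frequency-synthesis dirty image and beam of the continuum
run_invert = Run('invert vis=sim03.uv map=cont03 beam=contbeam03 imsize=512 cell=5 robust=0 options=mfs,double')
# 2) Deconvolve the continuum inside an image-based mask to obtain a clean-component model
run_clean = Run('clean map=cont03 beam=contbeam03 out=contmodel03 niters=1000 region=mask(mask03)')
# 3) Fourier transform the model channel by channel and subtract it from the visibilities
run_uvmodel = Run('uvmodel vis=sim03.uv model=contmodel03 options=subtract,mfs out=sim03_ms.uv')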
In naive terms, this method allows us to Fourier transform and subtract and ideal continuum model from the visibilities, but all calibration artefacts present in the continuum image and obviously not included in the model will remain in the data and corrupt the spectral-line cube. An example of this can be easily obtained with another MIRIAD simulation. Step14: The noise in these channel maps is clearly larger compared to the above ideal case with no calibration errors. Of course, this noise would be much reduced if the gains were (self-) calibrated. 5. Combining the above approaches E.g., subtract a model, especially for distant sources, then UVLIN or IMLIN. Add Jing's method. Bibliography <a href="http
Python Code: print '# Executing MIRIAD commands' simuv='sim01.uv' if os.path.exists(simuv): shutil.rmtree(simuv) run_uvgen=Run('uvgen source=pointsource01.txt ant=ew_layout.txt baseunit=-51.0204 radec=19:39:25.0,-83:42:46 freq=1.4,0 corr=256,1,0,100 out=%s harange=-6,6,0.016667 systemp=0 lat=-30.7 jyperk=19.28'%(simuv)) print '# Done' Explanation: Continuum subtraction The analysis and interpretation of spectral-line data is greatly simplified if all sources of continuum emission have already been removed from the data. This chapter describes the methods that can be used to subtract radio continuum emission from visibility datasets or datacubes, resulting in data that contain spectral-line emission and/or absorption only. Here we will adopt the following notation: - $S(l,m,\nu)$ is the sky brightness as a function of position (relative to a reference position $l_0,m_0$, which we assume to be both the pointing and phase-tracking centre) and frequency; - $A(l,m,\nu)$ is the primary beam pattern; - $I(l,m,\nu) = S(l,m,\nu) \cdot A(l,m,\nu)$ is the apparent sky brightness; - $B(l,m,\nu)$ is the point spread function or PSF; - $I^\mathrm{D}(l,m,\nu)$ is the dirty cube obtained by convolving $I(l,m,\nu)$ with $B(l,m,\nu)$; - $\mathbf{b}{ij}$ is the baseline between antennas $i$ and $j$ - $V{ij}(t,\nu)$ is the complex visibility for the baseline $\mathbf{b}{ij}$ at time $t$ and frequency $\nu$; this notation is preferred to the more common $V{\nu}(u,v)$ because for a given visibility spectrum $V_{ij}(t,\nu)$ the coordinates $u$ and $v$ change with frequency (we will see that this is relevant for continuum subtraction); - cubes and visibilities are composed of a continuum and a spectral line term, e.g., $I(l,m,\nu) = I_\mathrm{c}(l,m,\nu) + I_\mathrm{s}(l,m,\nu)$ and $V_{ij}(t,\nu) = V_{ij,\mathrm{c}}(t,\nu) + V_{ij,\mathrm{s}}(t,\nu)$. Outline 1. <a href="http://localhost:8888/notebooks/contsub.ipynb#1.-Overview-of-continuum-subtraction-methods">Overview of continuum subtraction methods</a> 2. <a href="http://localhost:8888/notebooks/contsub.ipynb#2.-Cube-based-continuum-subtraction">Cube-based continuum subtraction</a> 3. <a href="http://localhost:8888/notebooks/contsub.ipynb#3.-Visibility-based-continuum-subtraction">Visibility-based continuum subtraction</a> 4. <a href="http://localhost:8888/notebooks/contsub.ipynb#4.-Model-based-continuum-subtraction">Model-based continuum subtraction</a> 1. Overview of continuum subtraction methods Continuum emission can be removed from interferometric data in a variety of ways, which can be grouped under the following 3 categories: - cube based, where the continuum emission is estimated and subtracted from a dirty cube $I^\mathrm{D}(l,m,\nu)$ or, under some circumstances, a deconvolved cube $I(l,m,\nu)$ -- Sec. 2; - visibility based, where the continuum emission is estimated and subtracted from visibility spectra $V_{ij}(t,\nu)$ -- Sec. 3; - model based, wheren the continuum emission $I_\mathrm{c}(l,m,\nu)$ is modelled from a continuum (possibly multi-frequency) image, and the model is Fourier transformed and subtracted from the visibilities $V_{ij}(t,\nu)$ -- Sec. 4. Each method has advantages and disadgantages, and often it is advisiable to use more than one method to completely remove continuum emission from the data (Sec. 5). As usual, simpler methods are faster but may be less accurate. 
Important factors affecting their performance include: - the fractional bandwidth over which one needs to subtract the continuum - the quality of the calibration - the distance of the continuum sources from the phase centre - the brightness of the spectral line relative to the continuum Below we describe these different methods, their advantages and disadvantages, and how the above factors come into play. 2. Cube-based continuum subtraction 2.1. Basic method If no continuum has been subtracted from the visibilities $V_{ij}(t,\nu)$, each sightline ($l,m$) of the dirty cube $I^\mathrm{D}(l,m,\nu)$ will in principle include some level of continuum emission $I^\mathrm{D}_\mathrm{c}(l,m,\nu)\ne0$. This includes both emission at the position of real continuum sources and emission corresponding to their PSF sidelobes. The sidelobes contribution would disappear if one could deconvolve the cube before subtracting the continuum. However, this is ususally not advisable and, for the moment, we assume that the cube is dirty. We will return to the point of deconvolution in Sec. 2.3. The basic idea of this method is to estimate and remove the continuum component $I^\mathrm{D}\mathrm{c}(l,m,\nu)$ of the dirty cube along each sightline ($l,m$) independently. This can be done by modelling it with a low order polynomial; that is, fit and subtract $I^\mathrm{D}\mathrm{c,model}(l,m,\nu)=\sum_{n=0}^{N} a_n(l,m)\ \nu^n$ to the line-free channels. In general, for smaller fractional bandwidths the variation of $I^\mathrm{D}_\mathrm{c}(l,m,\nu)$ with frequency is more limited and, therefore, the order $N$ of the polynomial can be smaller. The limiting case is one where it is sufficient to take the average of all line-free channels (or a $0^\mathrm{th}$-order polynomial fit) as an estimate of a frequency-independent continuum. 2.2. Limitations For a correct choice of $N$, note that the variation of $I^\mathrm{D}\mathrm{c}(l,m,\nu)$ with frequency is determined not only by the intrinsic continuum spectrum of the sky $S\mathrm{c}(l,m,\nu)$ but also by the variation of the primary beam $A(l,m,\nu)$ with frequency. The latter is a decreasing function of frequency (within the main lobe) and, therefore, it has the effect of decreasing the spectral slope of the observed sources. Because of such primary beam modulation, two identical sources at different positions within the primary beam will in general have different observed spectral shapes. The order of the polynomial will need to be chosen to deal with the "worst" source in the field. An additional effect to consider is that of the PSF. At a position of the cube where most of the continuum flux comes from the sidelobes of a nearby source, the observed flux density variation with frequency depends critically on the structure of the PSF $B(l,m,\nu)$ and its variation with frequency (which, besides the standard scaling with frequency, may be in part due to frequency-dependent flagging). The resulting frequency dependence of $I^\mathrm{D}_\mathrm{c}(l,m,\nu)$ can be more complex than that at the position of a real continuum source. This means that even when a low-$N$ approximation is valid for a source it may be inaccurate for its sidelobes. In other words, this method of continuum subtraction results in an error which depends on the distance from the continuum source being subtracted as well as on the 3D PSF pattern $B(l,m,\nu)$. 
Cornwell, Uson & Addad (1992) give a formal discussion of this error for the case in which the continuum sources have a spectrum which is a linear function of frequency. Clearly, this error is lower for a lower PSF sidelobe level and/or a smaller PSF variation with frequency (e.g., because of a low fractional bandwidth). Note that in cubes made with MIRIAD the pixel size scales with $1/\nu$ and, therefore, the PSF pattern does not change with frequency in the $(x,y,z)$ cube pixel grid. Therefore, for a source close to the image centre the aforementioned error is much less of an issue. Far from the image centre, however, this scaling means that sources move radially in $(x,y)$ as the frequency changes. This complicates their cube-based continuum subtraction, which is done at fixed $(x,y)$. In other words, MIRIAD's "trick" of varying the pixel size with frequency reduces the continuum subtraction error as a function of distance from a continuum source but introduces a new error which depends on the distance from the image centre. We will see that also the visibility-based continuum subtraction is characterized by a similar type of error (Sec. 3). Below we show an ideal example of this method by simulating a visibility dataset in MIRIAD. The visibilities are recorded for an east-west array with baselines between 200 m and 2 km. The sky model is made of a single, 1 Jy point source with a flat spectrum and located at the phase centre. The observing frequency is 1.4 GHz and the bandwidth is 100 MHz (7% fractional bandwidth). The observed band is sampled with 256 channels. No noise is included, and the observation consists of a full 12-h track from HA = -6 h to HA = +6 h. End of explanation print '# Executing MIRIAD commands' if os.path.exists('m01'): shutil.rmtree('m01') if os.path.exists('m01_ns'): shutil.rmtree('m01_ns') run_invert=Run('invert vis=sim01.uv map=m01 imsize=512 cell=5 slop=1 robust=0') run_regrid=Run('regrid in=m01 out=m01_ns options=noscale') run_fits=Run('fits in=m01_ns op=xyout out=m01_ns.fits') print '# Done' Explanation: We create a datacube from the visibilities, and use MIRIAD's REGRID to remove the aforementioned pixel size scaling with frequency of the cube. End of explanation f=fits.open('m01_ns.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.24,0.76,'first channel',ha='center') ppl.figtext(0.51,0.76,'middle channel',ha='center') ppl.figtext(0.78,0.76,'last channel',ha='center') ppl.show() Explanation: The regridded cube can be visualised to show that the PSF sidelobes move significantly on the sky when going from one end to the other of the 7% fractional bandwidth. 
End of explanation print '# Executing MIRIAD commands' for order in [1,2,3]: image_noscale_contsub='m01_ns_cs%i'%order if os.path.exists(image_noscale_contsub): shutil.rmtree(image_noscale_contsub) run_contsub=Run('contsub in=m01_ns out=%s mode=poly,%i contchan=(1,256)'%(image_noscale_contsub,order)) run_fits=Run('fits in=%s op=xyout out=%s.fits'%(image_noscale_contsub,image_noscale_contsub)) print '# Done' ppl.figure(figsize=(10,10)) ppl.subplots_adjust(wspace=0.1,hspace=0.3) for order in [1,2,3]: image_noscale_contsub='m01_ns_cs%i'%order print '# Plotting %s.fits'%(image_noscale_contsub) f=fits.open('%s.fits'%image_noscale_contsub) cube=f[0].data[0] f.close() ppl.subplot(3,3,(order-1)*3+1) ppl.imshow(cube[0,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(3,3,(order-1)*3+2) ppl.imshow(cube[128,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(3,3,(order-1)*3+3) ppl.imshow(cube[-1,212:300,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.25,0.91,'first channel',ha='center') ppl.figtext(0.51,0.91,'middle channel',ha='center') ppl.figtext(0.77,0.91,'last channel',ha='center') ppl.figtext(0.9,0.80,'order 1',ha='center',va='center',rotation=90) ppl.figtext(0.9,0.51,'order 2',ha='center',va='center',rotation=90) ppl.figtext(0.9,0.23,'order 3',ha='center',va='center',rotation=90) ppl.show() Explanation: As explained above, this change in the sidelobes' position on the sky with frequency means that a cube-based continuum subtraction will not work well at the position of the sidelobes. To show this we subtract the continuum from the regridded cube using MIRIAD's CONTSUB and using a range of polynomial orders, and display the same three channels of the continuum-subtracted cube on the same grey scale. End of explanation nu=np.arange(1.3,1.4,0.001)*1e+9 # 1-MHz-wide channels over a 100-MHz bandwidth at a frequency of 1.4 GHz c=2.998e+8 ss=[1. ,10. ] # distance from phase centre in arcmin bb=[200.,2000.] # baseline length in metres nplot=0 ppl.subplots_adjust(wspace=0.4,hspace=0.45) for b in bb: for s in ss: nplot+=1 v=np.cos(2*np.pi*nu/c*(s/60/180*np.pi)*b) ppl.subplot(2,2,nplot) ppl.plot(nu/1e+9,v,'r-') ppl.text(1.35,0.7,"s = %i', b = %i m"%(s,b),ha="center") ppl.ylim(-1.1,1.1) ppl.xlabel('frequency (GHz)') ppl.ylabel('Re(V)') ppl.show() Explanation: As expected, for this noise-less ideal case, the cube-based continuum subtraction works perfectly at the position of the flat-spectrum source regardless of the order of the polynomial fit. However, the sidelobe emission is not subtracted well. The quality of the sidelobe continuum subtraction increases with increasing order of the polynomial but some level of residuals will always be present. For N=1 the residual level is between 5 and 10 percent of the source flux density, while this level is about halved for N=3. The impact of these residuals on the scientific goals of an observation depends on the flux density of the sources in the fields and the PSF sidelobe level relative to the other sources of noise and artefacts in the cube. 2.3. Channel selection and deconvolved cubes The selection of line-free (and RFI-free) channels is critical for a correct polynomial fit and continuum subtraction. Since for a dirty cube the PSF spreads any line emission/absorption at a given channel to all spatial pixels of that channel, the line-free channel selection does not depend on position and must be applied to all $(l,m)$.
In fact, this selection is not always straightforward when working with a dirty cube. For example, the continuum emission (sources and sidelobes) may be much brighter than the line emission/absorption, and a few iterations of deconvolution and continuum subtraction may be required before the line-free channels are correctly identified. A natural question is whether, since deconvolution may be required to identify the line-free channels, the cube-based continuum subtraction method could be applied directly to the deconvolved cube $I(l,m,\nu)$. This would have the advantage that the PSF sidelobes of both continuum and spectral-line sources have been removed and, therefore, PSF-related continuum subtraction errors would be minimised. Furthermore, since the line emission/absorption has been deconvolved too, and is now localised to a few small regions within each channel, the line-free channel selection could be made position dependent. This could be easily achieved by including a simple outlier rejection algorithm in the polynomial fit, and would maximise the number of fitted channels along each sightline. (In practice, this is not possible in current implementation of the cube-based continuum subtraction in, e.g., CASA and MIRIAD, but has been tried outside these standard packages.) A singificant issue with subtracting the continuum from a deconvolved cube $I(l,m,\nu)$ is that deconvolution is non-linear and, therefore, leaves residuals and artefacts which vary from channel to channel. This would be particularly true for bright continuum emission and in the presence of significant calibration errors. The following step of continuum subtraction would not remove these artefacts. The final result may then be worse than one in which continuum subtraction is performed before deconvolution. In other words, continuum subtraction on the dirty cube $I^\mathrm{D}(l,m,\nu)$ is much more robust against calibration errors. For this reason, it may be better to attempt the cube-based continuum subtraction of a deconvolved cube only after the brightest continuum emission has been subtracted with a different method such as those described in Secs. 3 and 4. This combined approach is discussed in Sec. 5. 3. Visibility-based continuum subtraction 3.1. Basic method This approach consists of subtracting the continuum emission directly from the visibilities by modelling the continuum component $V_{ij,\mathrm{c}}(t,\nu)$ with a low order polynomial $V_{ij,\mathrm{c,model}}(t,\nu) = \sum_{n=0}^{N} a_{ij,n}(t)\ \nu^n$. This is done separately on the real and imaginary parts of each visibility spectrum. As for the cube-based continuum subtraction, the polynomial fit should only be run on line-free (and RFI-free) channels but their selection is not always straightforward. The line emission/absorption may be too faint to detect in individual visibility spectra, and a few iterations of continuum subtraction and spectral-line imaging may be required to identify the line-free channels correctly. Since all spectral line sources in the field contribute to all visibility spectra the line-free channel selection should be identical for all spectra. In fact, spatially-extended spectral line emission may be more significant on short baselines and, therefore, there is scope for a baseline dependent line-free channel selection. This could be easily achieved by including basic outlier rejection in the fit. The same could be useful to reject RFI from the fit. 
These advanced techniques are however not implemented in standard packages, e.g., CASA and MIRIAD. (CHECK!!!) 3.2. Limitations The visibility-based continuum subtraction works only as long as the polynomial approximation for real and imaginary part of $V_{ij,\mathrm{c}}(t,\nu)$ is valid. This approximation becomes progressively worse for larger distances from the phase centre, longer baselines and larger relative bandwidths, as we explain in what follows. The visibility of a unit point source at distance $\mathbf{s}$ from the phase-tracking centre is: $V_{ij,\mathrm{c}}(t,\nu) = \cos({2\pi\nu/c \ \mathbf{s}\cdot\mathbf{b}_{ij}}) + i \sin({2\pi\nu/c \ \mathbf{s}\cdot\mathbf{b}_{ij}})$, where $\mathbf{s}\cdot\mathbf{b}_{ij}$ is a function of time $t$. That is, the variation of both real and imaginary part of $V_{ij,\mathrm{c}}(t,\nu)$ with $\nu$ is represented by a sinusoid whose oscillation rate grows with $\mathbf{s}\cdot\mathbf{b}_{ij}$. When the oscillation is slow, for example because the source is at the phase centre or because the projected baseline is very short, the polynomial approximation is sufficiently good even with order 1 or 2. However, when the oscillation is so fast that the observed bandwidth "sees" something of the order of a sinusoid period the polynomial approximation becomes quite poor. For example, at fixed bandwidth, the larger $\mathbf{s}\cdot\mathbf{b}_{ij}$ (either because of a larger distance $\mathbf{s}$ from the phase centre or because of a longer baseline $\mathbf{b}_{ij}$ -- or both), the faster the sinusoidal variation of real and imaginary part of $V_{ij,\mathrm{c}}(t,\nu)$ with frequency, and the poorer the polynomial approximation. Conversely, at fixed $\mathbf{s}\cdot\mathbf{b}_{ij}$, the larger the bandwidth the larger the portion of the sinusoid that we try to approximate with a polynomial and, therefore, the poorer the approximation. The following example shows the situation for a 100 MHz bandwidth, projected baseline length of 200 m and 2 km, and distance from the phase centre of 1 arcmin and 10 arcmin. End of explanation print '# Executing MIRIAD commands' if os.path.exists('sim02.uv'): shutil.rmtree('sim02.uv') run_uvgen=Run('uvgen source=pointsource02.txt ant=ew_layout.txt baseunit=-51.0204 radec=19:39:25.0,-83:42:46 freq=1.4,0 corr=256,1,0,100 out=sim02.uv harange=-6,6,0.016667 systemp=0 lat=-30.7 jyperk=19.28') run_uvspec=Run('uvspec vis=sim02.uv device=sim02_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-1.1,1.1') run_uvspec=Run('uvspec vis=sim02.uv device=sim02_200m.png/png nxy=1,1 select=an(1)(2),vis(1,10) axis=freq,real yrange=-1.1,1.1') print '# Done' Explanation: The figure shows that for a source at 10 arcmin from the phase centre fitting the continuum with a low-order polynomial does not work on a 2 km baseline, unless one has observed a significantly narrower bandwidth (e.g., 20 MHz instead of 100 MHz). On the contrary, for a 200 m baseline and/or for a source 1 arcmin away from the phase centre a low-order polynomial is a good approximation to the data. The same result can be obtained with a MIRIAD simulation identical to the one created in Sec. 2 except for the position of the point source, which we now place 10 arcmin north of the phase centre.
End of explanation print '# Executing MIRIAD commands' if os.path.exists('sim02_vs.uv'): shutil.rmtree('sim02_vs.uv') run_uvlin=Run('uvlin vis=sim02.uv order=2 options=relax out=sim02_vs.uv') run_uvspec=Run('uvspec vis=sim02_vs.uv device=sim02_vs_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-1.1,1.1') run_uvspec=Run('uvspec vis=sim02_vs.uv device=sim02_vs_200m.png/png nxy=1,1 select=an(1)(2),vis(1,10) axis=freq,real yrange=-1.1,1.1') print '# Done' Explanation: 200m baseline <img src="sim02_200m.png" width="400"> 2km baseline <img src="sim02_2km.png" width="400"> One could be tempted to get around this limitation by shifting the phase centre to the position of the source that needs to be subtracted. The issue with this is that no source will ever be completely isolated, and each $V_{ij,\mathrm{c}}(t,\nu)$ "sees" other sources too. These sources will be at different positions and may be more difficult to subtract with the new phase centre. This highlights that this method of continuum subtraction is better suited for interferometers with a small primary beam size (i.e., larger dishes) as most continuum sources are detected close to the phase centre. The newest interferometers MeerKAT and ASKAP are built of smaller dishes and, therefore, their larger beams see sources out to larger distances from the phase centre. This makes visibility-based continuum subtraction less straightforward. We can use the same MIRIAD simulation above to have a look at the residuals left by this continuum-subtraction method in the visibilities as well as in the spectral line cube. End of explanation print '# Executing MIRIAD commands' if os.path.exists('m02_vs'): shutil.rmtree('m02_vs') run_invert=Run('invert vis=sim02_vs.uv map=m02_vs imsize=512 cell=5 slop=1 robust=0') run_fits=Run('fits in=m02_vs op=xyout out=m02_vs.fits') print '# Done' f=fits.open('m02_vs.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,212+100:300+100,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,212+100:300+100,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,212+100:300+100,212:300],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.20,0.8,'first channel',ha='center') ppl.figtext(0.50,0.8,'middle channel',ha='center') ppl.figtext(0.80,0.8,'last channel',ha='center') ppl.show() Explanation: 200m baseline <img src="sim02_vs_200m.png" width="400"> 2km baseline <img src="sim02_vs_2km.png" width="400"> As expected, the continuum is subtracted reasonably well on the short baseline but not on the long baseline. This will leave signatures in the spectral line cube, as we show below by displaying a few channels using the same grey scale adopted in Sec. 2. 
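To put rough numbers on this (an illustrative estimate only; the exact primary-beam size depends on the dish illumination), the primary-beam FWHM scales roughly as 1.2 λ/D, so at 1.4 GHz a 25 m dish sees about half a degree while the smaller MeerKAT and ASKAP dishes see more than a degree:
import numpy as np
lam = 3.0e8 / 1.4e9                                   # observing wavelength in m
dishes = {'25 m dish (e.g. VLA)': 25.0, '13.5 m dish (MeerKAT)': 13.5, '12 m dish (ASKAP)': 12.0}
fwhm_arcmin = {name: np.degrees(1.2 * lam / d) * 60.0 for name, d in dishes.items()}
# roughly 35, 65 and 74 arcmin respectively: the smaller dishes pick up continuum
# sources out to much larger distances from the phase centre.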
End of explanation print '# Executing MIRIAD commands' if os.path.exists('sim03.uv'): shutil.rmtree('sim03.uv') if os.path.exists('m03'): shutil.rmtree('m03') if os.path.exists('b03'): shutil.rmtree('b03') if os.path.exists('m03_ns'): shutil.rmtree('m03_ns') if os.path.exists('m03_ns_cs'): shutil.rmtree('m03_ns_cs') run_uvgen=Run('uvgen source=pointsource03.txt ant=ew_layout.txt baseunit=-51.0204 radec=19:39:25.0,-83:42:46 freq=1.4,0 corr=256,1,0,100 out=sim03.uv harange=-6,6,0.016667 systemp=30 lat=-30.7 jyperk=19.28') run_invert=Run('invert vis=sim03.uv map=m03 beam=b03 imsize=512 cell=5 slop=1 robust=0') run_regrid=Run('regrid in=m03 out=m03_ns options=noscale') run_fits=Run('fits in=m03_ns op=xyout out=m03_ns.fits') run_contsub=Run('contsub in=m03_ns, out=m03_ns_cs mode=poly,3 contchan=(1,256)') run_fits=Run('fits in=m03_ns_cs op=xyout out=m03_ns_cs.fits') print '# Done' Explanation: Another issue with the visibility-based continuum subtraction is that it modifies the noise characteristics of channels excluded from the polynomial fit relative to those included in it. EXPAND 3.3. Visibility-based continuum subtraction and calibration errors One significant advantage of this method is that it is insensitive to frequency-independent gain calibration errors. That is, once a good bandpass calibration has been achieved, the method works equally well regardless of whether a frequency-independent, time-dependent gain calibration has been performed. The reason is that visbility-based continuum subtraction works on each visibility spectrum (i.e.g, fixed time) independently. For each of these spectra the application of a frequency-independent gain calibration does not change the spectral shape and, therefore, the order of the polynomial required for a good fit of the coninuum. Of course, if the method is used on a dataset with significant gain calibration errors the resulting spectral line cube will show artefacts at the channels with significant emission/absorption. The advantage is that the level of those artefacts will depend on the brightness of the line signal and not of the continuum. This is important since the line signal is typically much fainter than the continuum one. 4. Model-based continuum subtraction 4.1. Basic method Both the cube-based and visibility-based continuum subctraction methods described above suffer from limitations that are related to chromatic effects. In the case of the cube-based continuum subtraction the issue is that the PSF changes with frequency and, therefore, the continuum spectral shape at the position of a source's sidelobes is complex and difficult to subtract. In the case of the visbility-based continuum subtraction the issue is that, since $u$ and $v$ change with frequency, the shape of the visibility spectrum of a source depends on the distance of the source from the phase centre -- and when the latter is significant continuum subtraction is challenging. The alternative method of performing a model-based continuum subtraction gets around these chromatic issues. It consists of modelling the radio continuum sky and subtracting the Fourier transform of the model from the visibilities. The operation of Fourier transform is done for each channel in the visibility dataset, and this takes properly into account all chromatic effects. 
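As an illustrative sketch of this idea (point-source components only, small-field approximation, arbitrary array shapes; real packages use gridded FFTs and also handle source spectra and the w-term), subtracting the Fourier transform of a continuum model channel by channel could look like:
import numpy as np
def subtract_model(vis, u, v, freqs, comps):
    # vis: complex visibilities, shape (nvis, nchan)
    # u, v: baseline coordinates in metres, shape (nvis,)
    # freqs: channel frequencies in Hz, shape (nchan,)
    # comps: list of (flux_jy, l, m) point-source components
    c = 3.0e8
    out = vis.copy()
    for flux, l, m in comps:
        # u and v are rescaled to wavelengths separately for every channel, so
        # the chromatic behaviour of each component is subtracted exactly; this
        # per-channel evaluation is also where the extra computing cost comes from
        phase = 2.0 * np.pi * (u[:, None] * l + v[:, None] * m) * freqs[None, :] / c
        out -= flux * np.exp(-1j * phase)
    return out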
Compared to cube- and visibility-based continuum subtraction this method is slower, especially if working on a large bandwidth as this requires modelling and Fourier transforming not only the flux density but also the spectral shape of each source on the sky. For this reason in many cases it is preferable to use the other methods as long as they give results of sufficient quality. To illustrate this method we consider a model which combines the two models used in Secs. 2 and 3. It consists of a point source at the phase centre and another one with half the flux 10 arcmin north of the phase centre. In this case we include some noise in the simulations, corresponding to a noise level of ~1 mJy/beam in the cube. End of explanation f=fits.open('m03_ns.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.24,0.76,'first channel',ha='center') ppl.figtext(0.51,0.76,'middle channel',ha='center') ppl.figtext(0.78,0.76,'last channel',ha='center') ppl.show() f=fits.open('m03_ns_cs.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.24,0.76,'first channel',ha='center') ppl.figtext(0.51,0.76,'middle channel',ha='center') ppl.figtext(0.78,0.76,'last channel',ha='center') ppl.show() Explanation: We can image these data and display a few channels before and after continuum subtraction to show, once again, the chromatic effect of the sidelobe movement as a function of frequency, which complicates cube-based continuum subtraction. End of explanation if os.path.exists('sim03_vs.uv'): shutil.rmtree('sim03_vs.uv') print '# Executing MIRIAD commands' run_uvlin=Run('uvlin vis=sim03.uv order=3 options=relax out=sim03_vs.uv') run_uvspec=Run('uvspec vis=sim03.uv device=sim03_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-2,2') run_uvspec=Run('uvspec vis=sim03_vs.uv device=sim03_vs_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-2,2') print '# Done' Explanation: Furthermore, we can see also for this new simulation the rapid variation of the visibility on long baselines, which complicates visibility-based continuum subtraction. 
End of explanation if os.path.exists('mfs03_m00'): shutil.rmtree('mfs03_m00') if os.path.exists('mfs03_b'): shutil.rmtree('mfs03_b') if os.path.exists('mfs03_msk'): shutil.rmtree('mfs03_msk') if os.path.exists('mfs03_c01'): shutil.rmtree('mfs03_c01') if os.path.exists('mfs03_m01'): shutil.rmtree('mfs03_m01') if os.path.exists('mfs03_c02'): shutil.rmtree('mfs03_c02') if os.path.exists('mfs03_m02'): shutil.rmtree('mfs03_m02') if os.path.exists('sim03_ms.uv'): shutil.rmtree('sim03_ms.uv') print '# Executing MIRIAD commands' run_invert=Run('invert vis=sim03.uv map=mfs03_m00 beam=mfs03_b imsize=1024 cell=3 slop=1 robust=-2 options=mfs,double') run_maths=Run('maths exp=mfs03_m00 mask=mfs03_m00.gt.0.2 out=mfs03_msk') run_clean=Run('clean map=mfs03_m00 beam=mfs03_b region=mask(mfs03_msk) cutoff=1e-4 niters=1e+9 out=mfs03_c01') run_restor=Run('restor map=mfs03_m00 beam=mfs03_b model=mfs03_c01 out=mfs03_m01') run_rm=Run('rm -rf mfs03_msk') run_maths=Run('maths exp=mfs03_m00 mask=mfs03_m01.gt.0.1 out=mfs03_msk') run_clean=Run('clean map=mfs03_m00 beam=mfs03_b region=mask(mfs03_msk) cutoff=1e-4 niters=1e+9 out=mfs03_c02') run_restor=Run('restor map=mfs03_m00 beam=mfs03_b model=mfs03_c02 out=mfs03_m02') run_uvmodel=Run('uvmodel vis=sim03.uv options=subtract,mfs model=mfs03_c02 out=sim03_ms.uv') print '# Done' Explanation: 2km baseline before continuum subtraction <img src="sim03_2km.png" width="400"> 2km baseline after continuum subtraction <img src="sim03_vs_2km.png" width="400"> In what follows we show that the model-based continuum subtraction gets around these issues. We will use INVERT and CLEAN to make a multi-frequency-synthesis model of the continuum sky, Fourier transform it and subtract it from the visibilities. (Note that we use an image-based mask to define clean regions.) End of explanation if os.path.exists('m03_ms'): shutil.rmtree('m03_ms') if os.path.exists('m03_ms_ns'): shutil.rmtree('m03_ms_ns') print '# Executing MIRIAD commands' run_invert=Run('invert vis=sim03_ms.uv map=m03_ms imsize=512 cell=5 slop=1 robust=0') run_regrid=Run('regrid in=m03_ms out=m03_ms_ns options=noscale') run_fits=Run('fits in=m03_ms_ns op=xyout out=m03_ms_ns.fits') run_uvspec=Run('uvspec vis=sim03_ms.uv device=sim03_ms_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-2,2') print '# Done' f=fits.open('m03_ms_ns.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.24,0.76,'first channel',ha='center') ppl.figtext(0.51,0.76,'middle channel',ha='center') ppl.figtext(0.78,0.76,'last channel',ha='center') ppl.show() Explanation: We then display a few channels of a cube made from the continuum subtracted dataset and show a visibility spectrum to illustrate the quality of the continuum subtraction compared to what can be achieved with the cube-based and visibility-based ones. 
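For readers working in CASA rather than MIRIAD, the broad equivalent (an untested sketch; the measurement-set name, image size and cleaning depth below are placeholders to be adapted to the data) is to deconvolve a multi-frequency-synthesis continuum model, store it in the MODEL_DATA column and subtract it:
# sketch only: file name and clean parameters are placeholders
tclean(vis='line.ms', imagename='cont_model', specmode='mfs',
       deconvolver='hogbom', imsize=1024, cell='3arcsec',
       niter=10000, threshold='0.1mJy', savemodel='modelcolumn')
uvsub(vis='line.ms')   # subtract the MODEL_DATA column from the visibilities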
End of explanation if os.path.exists('sim04.uv'): shutil.rmtree('sim04.uv') if os.path.exists('mfs04_m00'): shutil.rmtree('mfs04_m00') if os.path.exists('mfs04_b'): shutil.rmtree('mfs04_b') if os.path.exists('mfs04_msk'): shutil.rmtree('mfs04_msk') if os.path.exists('mfs04_c01'): shutil.rmtree('mfs04_c01') if os.path.exists('mfs04_m01'): shutil.rmtree('mfs04_m01') if os.path.exists('sim04_ms.uv'): shutil.rmtree('sim04_ms.uv') if os.path.exists('m04_ms'): shutil.rmtree('m04_ms') if os.path.exists('m04_ms_ns'): shutil.rmtree('m04_ms_ns') print '# Executing MIRIAD commands' run_uvgen=Run('uvgen source=pointsource03.txt ant=ew_layout.txt baseunit=-51.0204 radec=19:39:25.0,-83:42:46 freq=1.4,0 corr=256,1,0,100 out=sim04.uv harange=-6,6,0.016667 systemp=30 lat=-30.7 jyperk=19.28 pnoise=10') run_invert=Run('invert vis=sim04.uv map=mfs04_m00 beam=mfs04_b imsize=1024 cell=3 slop=1 robust=-2 options=mfs,double') run_maths=Run('maths exp=mfs04_m00 mask=mfs03_m01.gt.0.1 out=mfs04_msk') run_clean=Run('clean map=mfs04_m00 beam=mfs04_b region=mask(mfs04_msk) cutoff=1e-4 niters=1e+9 out=mfs04_c01') run_restor=Run('restor map=mfs04_m00 beam=mfs04_b model=mfs04_c01 out=mfs04_m01') run_uvmodel=Run('uvmodel vis=sim04.uv options=subtract,mfs model=mfs04_c01 out=sim04_ms.uv') run_invert=Run('invert vis=sim04_ms.uv map=m04_ms imsize=512 cell=5 slop=1 robust=0') run_regrid=Run('regrid in=m04_ms out=m04_ms_ns options=noscale') run_fits=Run('fits in=m04_ms_ns op=xyout out=m04_ms_ns.fits') run_uvspec=Run('uvspec vis=sim04_ms.uv device=sim04_ms_2km.png/png nxy=1,1 select=an(2)(5),vis(1,10) axis=freq,real yrange=-2,2') print '# Done' f=fits.open('m04_ms_ns.fits') cube=f[0].data[0] f.close() ppl.figure(figsize=(10,5)) #ppl.subplots_adjust(wspace=0.4,hspace=0.45) ppl.subplot(131) ppl.imshow(cube[0,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(132) ppl.imshow(cube[128,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.subplot(133) ppl.imshow(cube[-1,120:392,120:392],origin='lower',cmap='gray',vmin=-0.03,vmax=0.1) ppl.figtext(0.24,0.76,'first channel',ha='center') ppl.figtext(0.51,0.76,'middle channel',ha='center') ppl.figtext(0.78,0.76,'last channel',ha='center') ppl.show() Explanation: 2km baseline after continuum subtraction <img src="sim03_ms_2km.png" width="400"> As expected, this method does not suffer from the chromatic effects that limit the use of cube- and visibility-based continuum subtaction. Furthermore, multi-frequency synthesis allows the modelling of the spectral shape of each source in the field. Therefore, while here we analyse a case of flat spectra, the method can handle more complex source populations. 4.2. Limitations The main limitation of this method is that, unlike cube- and especially visibility-based continuum subtraction, it does not work well in the presence of calibration errors. The reason is that the method requires a good model of the continuum emission in order to subtract it, and poor calibration is a substantial obstacle to getting such model. In naive terms, this method allows us to Fourier transform and subtract and ideal continuum model from the visibilities, but all calibration artefacts present in the continuum image and obviously not included in the model will remain in the data and corrupt the spectral-line cube. An example of this can be easily obtained with another MIRIAD simulation. 
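A simple way to quantify this comparison (a sketch reusing the FITS cubes produced in this notebook) is to measure a robust rms in every channel of each cube and plot it against frequency:
import numpy as np
from astropy.io import fits
def channel_rms(fname):
    cube = fits.getdata(fname)[0]      # drop the degenerate Stokes axis
    # a crude robust rms: clip pixels far above the typical absolute value
    return np.array([np.std(chan[np.abs(chan) < 5.0 * np.median(np.abs(chan))])
                     for chan in cube])
# e.g. compare the cube-based and model-based results for the same simulation:
# rms_cube_based = channel_rms('m03_ns_cs.fits')
# rms_model_based = channel_rms('m03_ms_ns.fits')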
End of explanation rm -r b03 m01 m01_ns.fits m01_ns m01_ns_cs1.fits m01_ns_cs1 m01_ns_cs2.fits m01_ns_cs2 m01_ns_cs3.fits m01_ns_cs3 m02_vs m02_vs.fits m03 m03_ms m03_ms_ns.fits m03_ms_ns m03_ns.fits m03_ns m03_ns_cs.fits m03_ns_cs m04_ms m04_ms_ns.fits m04_ms_ns mfs03_b mfs03_c01 mfs03_c02 mfs03_m00 mfs03_m01 mfs03_m02 mfs03_msk mfs04_b mfs04_c01 mfs04_m00 mfs04_m01 mfs04_msk sim01.uv sim02.uv sim02_200m.png sim02_2km.png sim02_vs.uv sim02_vs_200m.png sim02_vs_2km.png sim03.uv sim03_2km.png sim03_ms.uv sim03_ms_2km.png sim03_vs.uv sim03_vs_2km.png sim04.uv sim04_ms.uv sim04_ms_2km.png Explanation: The noise in these channel maps is clearly larger compared to the above ideal case with no calibration errors. Of course, this noise would be much reduced if the gains were (self-) calibrated. 5. Combining the above approaches E.g., subtract a model, especially for distant sources, then UVLIN or IMLIN. Add Jing's method. Bibliography <a href="http://adsabs.harvard.edu/abs/1992A%26A...258..583C">Cornwell, Uson & Addad 1992, A&A, 258, 583</a> <a href="http://adsabs.harvard.edu/abs/1999ASPC..180..229R">Rupen 1999, ASPC, 180, 229</a> <a href="http://adsabs.harvard.edu/abs/1994A%26AS..107...55S">Salt 1994, A&AS, 107, 55</a> <a href="http://adsabs.harvard.edu/abs/1983ApJ...267..528V">van Gorkom & Ekers 1983, ApJ, 267, 528</a> <a href="http://adsabs.harvard.edu/abs/1986syim.conf..177V">van Gorkom & Ekers 1986, Synthesis imaging, 177</a> <a href="http://adsabs.harvard.edu/abs/1989ASPC....6..341V">van Gorkom & Ekers 1989, ASPC, 6, 341</a> <a href="http://adsabs.harvard.edu/abs/1990A%26A...239L...5V">van Langevelde & Cotton 1990, A&A, 239L, 5 <a href="http://adsabs.harvard.edu/abs/2015MNRAS.453.2399W">Wang et al. 2015, MNRAS, 453, 2399</a> Cleaning up End of explanation
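As a closing note on the combined approach of Sec. 5, a possible workflow (an untested sketch that only reuses MIRIAD tasks already demonstrated above; data-set and model names are placeholders) would be to subtract a model of the brightest or most distant continuum sources first, and only then remove the residual, smoother continuum with the polynomial-based methods:
# 1. subtract the Fourier transform of a continuum model of the worst offenders
run_uvmodel=Run('uvmodel vis=data.uv options=subtract,mfs model=cont_model out=data_ms.uv')
# 2. remove the remaining smooth continuum from each visibility spectrum
run_uvlin=Run('uvlin vis=data_ms.uv order=1 options=relax out=data_ms_vs.uv')
# 3. image and, if necessary, fit out any residual continuum per sightline
run_invert=Run('invert vis=data_ms_vs.uv map=cube beam=beam imsize=512 cell=5 slop=1 robust=0')
run_contsub=Run('contsub in=cube out=cube_cs mode=poly,1 contchan=(1,256)')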
3,646
Given the following text description, write Python code to implement the functionality described below step by step Description: Think Bayes This notebook presents example code and exercise solutions for Think Bayes. Copyright 2018 Allen B. Downey MIT License Step4: The Weibull distribution The Weibull distribution is often used in survival analysis because it models the distribution of lifetimes for manufactured products, at least over some parts of the range. The following functions evaluate its PDF and CDF. Step5: SciPy also provides functions to evaluate the Weibull distribution, which I'll use to check my implementation. Step6: And here's what the PDF looks like, for these parameters. Step7: We can use np.random.weibull to generate random values from a Weibull distribution with given parameters. To check that it is correct, I generate a large sample and compare its CDF to the analytic CDF. Step8: Exercise Step9: Exercise Step10: Exercise Step11: Now I'll process the DataFrame to generate data in the form we want for the update. Step12: Exercise Step13: Prediction Exercise Step14: Exercise
Python Code: # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import classes from thinkbayes2 from thinkbayes2 import Pmf, Cdf, Suite, Joint import thinkbayes2 import thinkplot import numpy as np Explanation: Think Bayes This notebook presents example code and exercise solutions for Think Bayes. Copyright 2018 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation def EvalWeibullPdf(x, lam, k): Computes the Weibull PDF. x: value lam: parameter lambda in events per unit time k: parameter returns: float probability density arg = (x / lam) return k / lam * arg**(k-1) * np.exp(-arg**k) def EvalWeibullCdf(x, lam, k): Evaluates CDF of the Weibull distribution. arg = (x / lam) return 1 - np.exp(-arg**k) def MakeWeibullPmf(lam, k, high, n=200): Makes a PMF discrete approx to a Weibull distribution. lam: parameter lambda in events per unit time k: parameter high: upper bound n: number of values in the Pmf returns: normalized Pmf xs = np.linspace(0, high, n) ps = EvalWeibullPdf(xs, lam, k) return Pmf(dict(zip(xs, ps))) Explanation: The Weibull distribution The Weibull distribution is often used in survival analysis because it models the distribution of lifetimes for manufactured products, at least over some parts of the range. The following functions evaluate its PDF and CDF. End of explanation from scipy.stats import weibull_min lam = 2 k = 1.5 x = 0.5 weibull_min.pdf(x, k, scale=lam) EvalWeibullPdf(x, lam, k) weibull_min.cdf(x, k, scale=lam) EvalWeibullCdf(x, lam, k) Explanation: SciPy also provides functions to evaluate the Weibull distribution, which I'll use to check my implementation. End of explanation pmf = MakeWeibullPmf(lam, k, high=10) thinkplot.Pdf(pmf) thinkplot.decorate(xlabel='Lifetime', ylabel='PMF') Explanation: And here's what the PDF looks like, for these parameters. End of explanation def SampleWeibull(lam, k, n=1): return np.random.weibull(k, size=n) * lam data = SampleWeibull(lam, k, 10000) cdf = Cdf(data) model = pmf.MakeCdf() thinkplot.Cdfs([cdf, model]) thinkplot.decorate(xlabel='Lifetime', ylabel='CDF') Explanation: We can use np.random.weibull to generate random values from a Weibull distribution with given parameters. To check that it is correct, I generate a large sample and compare its CDF to the analytic CDF. End of explanation # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here Explanation: Exercise: Write a class called LightBulb that inherits from Suite and Joint and provides a Likelihood function that takes an observed lifespan as data and a tuple, (lam, k), as a hypothesis. It should return a likelihood proportional to the probability of the observed lifespan in a Weibull distribution with the given parameters. Test your method by creating a LightBulb object with an appropriate prior and update it with a random sample from a Weibull distribution. Plot the posterior distributions of lam and k. As the sample size increases, does the posterior distribution converge on the values of lam and k used to generate the sample? 
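One possible way to start (a sketch of a solution rather than the book's official one; the grid limits and sample size below are arbitrary choices) is to give the suite a Weibull likelihood and update it with simulated lifespans:
class LightBulb(Suite, Joint):
    def Likelihood(self, data, hypo):
        # data: an observed lifespan; hypo: a (lam, k) pair
        lam, k = hypo
        return EvalWeibullPdf(data, lam, k)
# uniform prior over a grid of (lam, k) pairs
hypos = [(lam, k) for lam in np.linspace(0.1, 10, 101)
                  for k in np.linspace(0.1, 5, 51)]
suite = LightBulb(hypos)
for lifespan in SampleWeibull(2, 1.5, 20):
    suite.Update(lifespan)
thinkplot.Pdf(suite.Marginal(0), label='lam')
thinkplot.Pdf(suite.Marginal(1), label='k')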
End of explanation # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here Explanation: Exercise: Now suppose that instead of observing a lifespan, k, you observe a lightbulb that has operated for 1 year and is still working. Write another version of LightBulb that takes data in this form and performs an update. End of explanation import pandas as pd lam = 2 k = 1.5 n = 15 t_end = 10 starts = np.random.uniform(0, t_end, n) lifespans = SampleWeibull(lam, k, n) df = pd.DataFrame({'start': starts, 'lifespan': lifespans}) df['end'] = df.start + df.lifespan df['age_t'] = t_end - df.start df.head() Explanation: Exercise: Now let's put it all together. Suppose you have 15 lightbulbs installed at different times over a 10 year period. When you observe them, some have died and some are still working. Write a version of LightBulb that takes data in the form of a (flag, x) tuple, where: If flag is eq, it means that x is the actual lifespan of a bulb that has died. If flag is gt, it means that x is the current age of a bulb that is still working, so it is a lower bound on the lifespan. To help you test, I will generate some fake data. First, I'll generate a Pandas DataFrame with random start times and lifespans. The columns are: start: time when the bulb was installed lifespan: lifespan of the bulb in years end: time when bulb died or will die age_t: age of the bulb at t=10 End of explanation data = [] for i, row in df.iterrows(): if row.end < t_end: data.append(('eq', row.lifespan)) else: data.append(('gt', row.age_t)) for pair in data: print(pair) # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here Explanation: Now I'll process the DataFrame to generate data in the form we want for the update. End of explanation # Solution goes here Explanation: Exercise: Suppose you install a light bulb and then you don't check on it for a year, but when you come back, you find that it has burned out. Extend LightBulb to handle this kind of data, too. End of explanation # Solution goes here # Solution goes here Explanation: Prediction Exercise: Suppose we know that, for a particular kind of lightbulb in a particular location, the distribution of lifespans is well modeled by a Weibull distribution with lam=2 and k=1.5. If we install n=100 lightbulbs and come back one year later, what is the distribution of c, the number of lightbulbs that have burned out? End of explanation # Solution goes here # Solution goes here Explanation: Exercise: Now suppose that lam and k are not known precisely, but we have a LightBulb object that represents the joint posterior distribution of the parameters after seeing some data. Compute the posterior predictive distribution for c, the number of bulbs burned out after one year. End of explanation
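For the final prediction exercise above, one hedged sketch (it assumes a fitted LightBulb posterior called posterior, which is not defined in the original notebook) is to mix binomial distributions over the joint posterior of lam and k:
from scipy.stats import binom
n = 100                              # bulbs installed
cs = np.arange(n + 1)                # possible numbers of burned-out bulbs
pred = np.zeros(n + 1)
for (lam, k), prob in posterior.Items():
    p_dead = EvalWeibullCdf(1.0, lam, k)       # P(lifespan < 1 year | lam, k)
    pred += prob * binom.pmf(cs, n, p_dead)
pred_pmf = Pmf(dict(zip(cs, pred)))
thinkplot.Pdf(pred_pmf, label='bulbs burned out after 1 year')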
3,647
Given the following text description, write Python code to implement the functionality described below step by step Description: FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System This example problem is the fourth example problem in the SWI2 documentation (http Step1: Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model. Step2: Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps. Step3: Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package. Step4: Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive. Step5: Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package. Step6: Define the boundary condition data for the model Step7: Create output control (OC) data using words Step8: Create the model with the freshwater well (Simulation 1) Step9: Write the simulation 1 MODFLOW input files and run the model Step10: Create the model with the saltwater well (Simulation 2) Step11: Write the simulation 2 MODFLOW input files and run the model Step12: Load the simulation 1 ZETA data and ZETA observations. Step13: Load the simulation 2 ZETA data and ZETA observations. Step14: Create arrays for the x-coordinates and the output years Step15: Define figure dimensions and colors used for plotting ZETA surfaces Step16: Recreate Figure 9 from the SWI2 documentation (http Step17: Use ModelCrossSection plotting class and plot_fill_between() method to fill between zeta surfaces.
Python Code: %matplotlib inline import os import sys import platform import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('matplotlib version: {}'.format(mpl.__version__)) print('flopy version: {}'.format(flopy.__version__)) Explanation: FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (DELR), 50 m (DELC), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a TOESLOPE and TIPSLOPE of 0.005, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 ISOURCE parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. ISOURCE in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 ISOURCE parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active ZETA surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. 
For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer). Import numpy and matplotlib, set all figures to be inline, import flopy.modflow and flopy.utils. End of explanation #Set name of MODFLOW exe # assumes executable is in users path statement exe_name = 'mf2005' if platform.system() == 'Windows': exe_name = 'mf2005.exe' workspace = os.path.join('data') #make sure workspace directory exists if not os.path.exists(workspace): os.makedirs(workspace) Explanation: Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model. End of explanation ncol = 61 nrow = 61 nlay = 2 nper = 3 perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.] nstp = [1000, 120, 180] save_head = [200, 60, 60] steady = True Explanation: Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps. End of explanation # dis data delr, delc = 50.0, 50.0 botm = np.array([-10., -30., -50.]) Explanation: Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package. End of explanation # bas data # ibound - active except for the corners ibound = np.ones((nlay, nrow, ncol), dtype= np.int) ibound[:, 0, 0] = 0 ibound[:, 0, -1] = 0 ibound[:, -1, 0] = 0 ibound[:, -1, -1] = 0 # initial head data ihead = np.zeros((nlay, nrow, ncol), dtype=np.float) Explanation: Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive. End of explanation # lpf data laytyp = 0 hk = 10. vka = 0.2 Explanation: Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package. End of explanation # boundary condition data # ghb data colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow)) index = np.zeros((nrow, ncol), dtype=np.int) index[:, :10] = 1 index[:, -10:] = 1 index[:10, :] = 1 index[-10:, :] = 1 nghb = np.sum(index) lrchc = np.zeros((nghb, 5)) lrchc[:, 0] = 0 lrchc[:, 1] = rowcell[index == 1] lrchc[:, 2] = colcell[index == 1] lrchc[:, 3] = 0. lrchc[:, 4] = 50.0 * 50.0 / 40.0 # create ghb dictionary ghb_data = {0:lrchc} # recharge data rch = np.zeros((nrow, ncol), dtype=np.float) rch[index == 0] = 0.0004 # create recharge dictionary rch_data = {0: rch} # well data nwells = 2 lrcq = np.zeros((nwells, 4)) lrcq[0, :] = np.array((0, 30, 35, 0)) lrcq[1, :] = np.array([1, 30, 35, 0]) lrcqw = lrcq.copy() lrcqw[0, 3] = -250 lrcqsw = lrcq.copy() lrcqsw[0, 3] = -250. lrcqsw[1, 3] = -25. 
# create well dictionary base_well_data = {0:lrcq, 1:lrcqw} swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw} # swi2 data nadptmx = 10 nadptmn = 1 nu = [0, 0.025] numult = 5.0 toeslope = nu[1] / numult #0.005 tipslope = nu[1] / numult #0.005 z1 = -10.0 * np.ones((nrow, ncol)) z1[index == 0] = -11.0 z = np.array([[z1, z1]]) iso = np.zeros((nlay, nrow, ncol), dtype=np.int) iso[0, :, :][index == 0] = 1 iso[0, :, :][index == 1] = -2 iso[1, 30, 35] = 2 ssz=0.2 # swi2 observations obsnam = ['layer1_', 'layer2_'] obslrc=[[0, 30, 35], [1, 30, 35]] nobs = len(obsnam) iswiobs = 1051 Explanation: Define the boundary condition data for the model End of explanation # oc data spd = {(0,199): ['print budget', 'save head'], (0,200): [], (0,399): ['print budget', 'save head'], (0,400): [], (0,599): ['print budget', 'save head'], (0,600): [], (0,799): ['print budget', 'save head'], (0,800): [], (0,999): ['print budget', 'save head'], (1,0): [], (1,59): ['print budget', 'save head'], (1,60): [], (1,119): ['print budget', 'save head'], (1,120): [], (2,0): [], (2,59): ['print budget', 'save head'], (2,60): [], (2,119): ['print budget', 'save head'], (2,120): [], (2,179): ['print budget', 'save head']} Explanation: Create output control (OC) data using words End of explanation modelname = 'swiex4_s1' ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead) lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka) wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data) ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data) rch = flopy.modflow.ModflowRch(ml, rech=rch_data) swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu, zeta=z, ssz=ssz, isource=iso, nsolver=1, nadptmx=nadptmx, nadptmn=nadptmn, nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55) oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd) pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50) Explanation: Create the model with the freshwater well (Simulation 1) End of explanation ml.write_input() ml.run_model(silent=True) Explanation: Write the simulation 1 MODFLOW input files and run the model End of explanation modelname2 = 'swiex4_s2' ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead) lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka) wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data) ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data) rch = flopy.modflow.ModflowRch(ml2, rech=rch_data) swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu, zeta=z, ssz=ssz, isource=iso, nsolver=1, nadptmx=nadptmx, nadptmn=nadptmn, nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55) oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd) pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50) Explanation: Create the model with the saltwater well (Simulation 
2) End of explanation ml2.write_input() ml2.run_model(silent=True) Explanation: Write the simulation 2 MODFLOW input files and run the model End of explanation # read base model zeta zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta')) kstpkper = zfile.get_kstpkper() zeta = [] for kk in kstpkper: zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta = np.array(zeta) # read swi obs zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True) Explanation: Load the simulation 1 ZETA data and ZETA observations. End of explanation # read saltwater well model zeta zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta')) kstpkper = zfile2.get_kstpkper() zeta2 = [] for kk in kstpkper: zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta2 = np.array(zeta2) # read swi obs zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True) Explanation: Load the simulation 2 ZETA data and ZETA observations. End of explanation x = np.linspace(-1500, 1500, 61) xcell = np.linspace(-1500, 1500, 61) + delr / 2. xedge = np.linspace(-1525, 1525, 62) years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30] Explanation: Create arrays for the x-coordinates and the output years End of explanation # figure dimensions fwid, fhgt = 8.00, 5.50 flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 # line color definition icolor = 5 colormap = plt.cm.jet #winter cc = [] cr = np.linspace(0.9, 0.0, icolor) for idx in cr: cc.append(colormap(idx)) Explanation: Define figure dimensions and colors used for plotting ZETA surfaces End of explanation plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False}) fig = plt.figure(figsize=(fwid, fhgt), facecolor='w') fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop) # first plot ax = fig.add_subplot(2, 2, 1) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5): # layer 1 ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx], label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8') # second plot ax = fig.add_subplot(2, 2, 2) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5, len(years)): # layer 1 ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Freshwater well 
withdrawal', transform=ax.transAxes, va='center', ha='right', size='8') # third plot ax = fig.add_subplot(2, 2, 3) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5, len(years)): # layer 1 ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes, va='center', ha='right', size='8') # fourth plot ax = fig.add_subplot(2, 2, 4) # axes limits ax.set_xlim(0, 30) ax.set_ylim(-50, -10) t = zobs['TOTIM'][999:] / 365 - 200. tz2 = zobs['layer1_001'][999:] tz3 = zobs2['layer1_001'][999:] for i in range(len(t)): if zobs['layer2_001'][i+999] < -30. - 0.1: tz2[i] = zobs['layer2_001'][i+999] if zobs2['layer2_001'][i+999] < 20. - 0.1: tz3[i] = zobs2['layer2_001'][i+999] ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well') ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well') ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None') # legend leg = plt.legend(loc='lower right', numpoints=1) # axes labels and text ax.set_xlabel('Time, in years') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7'); Explanation: Recreate Figure 9 from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/). End of explanation fig = plt.figure(figsize=(fwid, fhgt/2)) fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop) colors = ['#40d3f7', '#F76541'] ax = fig.add_subplot(1, 2, 1) modelxsect = flopy.plot.ModelCrossSection(model=ml, line={'Row': 30}, extent=(0, 3050, -50, -10)) modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax, edgecolors='none') linecollection = modelxsect.plot_grid(ax=ax) ax.set_title('Recharge year {}'.format(years[4])); ax = fig.add_subplot(1, 2, 2) ax.set_xlim(0, 3050) ax.set_ylim(-50, -10) modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax) linecollection = modelxsect.plot_grid(ax=ax) ax.set_title('Scenario year {}'.format(years[-1])); Explanation: Use ModelCrossSection plotting class and plot_fill_between() method to fill between zeta surfaces. End of explanation
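As a rough physical cross-check of the simulated interface elevations (an approximation that strictly holds only for a static, sharp interface), the Ghyben-Herzberg relation places the freshwater-seawater interface a factor 1/ν deeper below sea level than the freshwater head stands above it:
nu = 0.025        # dimensionless density difference used in this example
h = 0.5           # hypothetical freshwater head above sea level, in metres
z_interface = -h / nu
# about -20 m, i.e. forty times the head: this is why a modest drawdown of the
# freshwater head under the pumping well produces tens of metres of upconing.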
3,648
Given the following text description, write Python code to implement the functionality described below step by step Description: Choosing a loss function for regression TL;DR In short, the workflow can be summarized as follows Step1: The problem with the squared error is that it makes small errors (< 1.0) even smaller and large large errors (>1.0) disproportionately larger. This means that if you have a few outliers in your data, for example, due to a large measurement error, the model will put useless effort to fit these data points often degrading the accuracy of the majority of predictions as a result. Step2: Datasets with large outliers show heavy tails in the residual distribution when trained using MSE as shown above. In these cases what you would usually want to do is instead of fitting the mean squared error to fit the mean absolute error (MAE) $$L(a_i,p_i) = \vert a_i - p_i\vert^2$$ Step3: MAE does not over-prioritize outliers and it is easy to interpret Step4: Particularly interesting is that for $\delta=1.0$ Huber loss is asymptotically similar to MAE and behaves as MSE around 0. For other values of $\delta$, Huber loss still increases linearly for large errors and $\delta$ controls simply the slope of this growth. In machine learning data frame analytics, we implemented the so-called Pseudo-Huber loss $$L(a_i,p_i) = \delta^2 \left(\sqrt{1+((a_i-p_i)/\delta)^2}-1\right).$$ This results in faster code while keeping the properties of the regular Huber loss. Step5: Conclusion To use Huber loss with a certain parameter $\delta$ in your data frame analytics regression job, you can specify parameter loss_function (set it to huber) and loss_function_parameter. Use Huber loss if you have outliers in your data coming from measurement errors. This is often the case when you can see a symmetric heavy-tailed distribution of residuals after training your model with MSE loss function. You can experiment with different values of $\delta$, also the default value 1.0 is fine for most cases. Mean squared logarithmic error (MSLE) This loss function is used when you have positive targets distributed with a long tail such as log-normal distribution. There are numerous examples of such data Step6: The distribution of the values has a "bump" for smaller values and then a long tail with a small number of very large values. Naturally, when you want to predict the target values, you want to allow for an error depending on the magnitude of the target. It’s ok to miss a population of a metropolitan area by 10 000, but for a small town it is unforgivable. In this case MSLE can be more suitable for the regression job than MSE. As the name suggests, MSLE minimizes the quadratic difference between the logarithms of the actual value and the prediction $$L(a_i, p_i) = (\log(a_i + t) - \log(p_i + t))^2$$ For example, $\log\left((1 + 1100) / (1 + 1000)\right)^2$ is about the same as $\log((1 + 11000) / (1 + 10000))^2$ so missing a value by 10\% is penalised equivalently whether the actual error is 100 or 1000. At the same time, with MSE the latter case costs 100x more. Qualitatively speaking, MSLE tends to lead to lower error for small targets and for higher error on large targets than MSE. This trade-off can be controlled by adjusting an offset parameter $t$ which is set to 1.0 by default. By increasing this parameter, you influence the “transition point” at which you go from minimizing quadratic error to minimizing quadratic log error. Let's take a look how this offset affects quantitative results. 
Step7: Note that the scale of the y-axis in the bottom has been changed to be able to see those small residuals! First, MSE tends to result in larger errors for small target values despite the fact that most of the targets are rather small. It rather focuses on producing smaller errors for large target values. Second, if you look at the density of the target values (top diagram), you see that qualitatively the "transition point" happens somewhere around 10
Python Code: a = symbols('a') #actual value p = symbols('p') #predicted value mse = lambda a,p: (a-p)**2 mse_plot = plot(mse(0, p),(p, -3, 3), show=False, legend=True, line_color="red") mse_plot[0].label='MSE' mse_plot.show() Explanation: Choosing a loss function for regression TL;DR In short, the workflow can be summarized as follows: 1. Choose MSE 2. Check distribution of residual errors 3. If this is symmetric with heavy tails consider Huber 4. Else if the values are positive (although offset can deal with negative values) and the errors are positively skewed with a heavy right tail consider MSLE With the regression data frame analytics jobs we now support three different loss functions: mse, msle, and huber. While all three of them can be used to train a model to predict real-valued data by minimizing average loss between actual values $a_i$ and predictions $p_i$ $$\frac{1}{N}\sum_{i=1}^{N}L(a_i,p_i),$$ they would work best in different scenarios. Let's look at what loss function works best for which case. Mean squared error (MSE) MSE is the most commonly used loss function. It works well in many scenarios and you should try it if you are unsure which loss function to use or you don't know much about your data. As the name suggests, MSE minimizes the quadratic difference between the prediction and the actual value: $$L(a_i,p_i) = \tfrac{1}{2}(a_i-p_i)^2$$ End of explanation # Let's see how to identify np.random.seed(1000) N = 10000 w = np.ones(5) X = np.random.randn(N, 5) y = X.dot(w) # The last 1000 target values get some serious noise y[-1000:] += np.random.normal(0,100.0, 1000) mse = lambda x: mean_squared_error(y, X.dot(x)) x0 = np.zeros(X.shape[1]) #initial guess result_w = minimize(mse, x0, tol=1e-5)['x'] #MSE result residuals_mse = y - X.dot(result_w) # Plot the distribution of the residuals kde = gaussian_kde(residuals_mse) dist_space = np.linspace(min(residuals_mse), max(residuals_mse), 100) _ = pl.plot(dist_space, kde(dist_space)) Explanation: The problem with the squared error is that it makes small errors (< 1.0) even smaller and large large errors (>1.0) disproportionately larger. This means that if you have a few outliers in your data, for example, due to a large measurement error, the model will put useless effort to fit these data points often degrading the accuracy of the majority of predictions as a result. End of explanation mae = lambda a,p: Abs(a-p) mae_plot = plot(mae(0, p),(p, -3, 3), show=False, line_color="blue") mae_plot[0].label = "MAE" mse_plot.extend(mae_plot) mse_plot.show() Explanation: Datasets with large outliers show heavy tails in the residual distribution when trained using MSE as shown above. In these cases what you would usually want to do is instead of fitting the mean squared error to fit the mean absolute error (MAE) $$L(a_i,p_i) = \vert a_i - p_i\vert^2$$ End of explanation huber = lambda delta,a,p: Piecewise((0.5*(a-p)**2, Abs(a-p) <= delta), (delta*Abs(a-p)-0.5*delta**2, True)) huber1_plot = plot(huber(1.0, 0, p),(p, -3, 3), show=False, line_color="green") huber1_plot[0].label = "Huber $\delta=1.0$" huber2_plot = plot(huber(2.0, 0, p),(p, -3, 3), show=False, line_color="turquoise") huber2_plot[0].label = "Huber $\delta=2.0$" mse_plot.extend(huber1_plot) mse_plot.extend(huber2_plot) mse_plot.show() Explanation: MAE does not over-prioritize outliers and it is easy to interpret: mean absolute error of 20 USD on price prediction means that you are on average 20 USD off your target mark in one way or another. 
Unfortunately, it is not straightforward to train models on MAE using conventional algorithms because of the sharp kink around 0.0. For this reason, you may want to use a loss function that behaves as MAE for errors larger than 1 and as MSE for errors smaller than 1. This loss function is called Huber loss. Huber loss Generally, Huber loss uses a parameter $\delta$ to define the transition point between MSE and MAE: $$L(a, p) = \left{\begin{aligned} \tfrac{1}{2}(a-p)^2 \quad\mathrm{for}\ |a-p| \le \delta \ \delta\vert a - p \vert - \tfrac{1}{2}\delta^2 \quad\mathrm{otherwise} \end{aligned}\right. $$ End of explanation pseudo_huber = lambda delta,a,p: delta**2*(sqrt(1+((a-p)/delta)**2)-1) pseudo_huber1 = plot(pseudo_huber(1.0, 0,p),(p, -3, 3), show=False, legend=True, line_color='lightgreen') pseudo_huber1[0].label='Pseud-Huber $\delta=1.0$' huber1_plot.extend(pseudo_huber1) huber1_plot.legend = True huber1_plot.show() Explanation: Particularly interesting is that for $\delta=1.0$ Huber loss is asymptotically similar to MAE and behaves as MSE around 0. For other values of $\delta$, Huber loss still increases linearly for large errors and $\delta$ controls simply the slope of this growth. In machine learning data frame analytics, we implemented the so-called Pseudo-Huber loss $$L(a_i,p_i) = \delta^2 \left(\sqrt{1+((a_i-p_i)/\delta)^2}-1\right).$$ This results in faster code while keeping the properties of the regular Huber loss. End of explanation from sympy.stats import LogNormal, density z = symbols('z') log_norm_1 = LogNormal("x", 0, 1.0) log_norm_2 = LogNormal("x", 0, 0.5) log_norm_3 = LogNormal("x", 0, 0.25) pdf_plot1= plot(density(log_norm_1)(z), (z,0.01, 5), show=False, line_color="blue") pdf_plot2= plot(density(log_norm_2)(z), (z,0.01, 5), show=False, line_color="green") pdf_plot3= plot(density(log_norm_3)(z), (z,0.01, 5), show=False, line_color="red") pdf_plot1.extend(pdf_plot2) pdf_plot1.extend(pdf_plot3) pdf_plot1.show() Explanation: Conclusion To use Huber loss with a certain parameter $\delta$ in your data frame analytics regression job, you can specify parameter loss_function (set it to huber) and loss_function_parameter. Use Huber loss if you have outliers in your data coming from measurement errors. This is often the case when you can see a symmetric heavy-tailed distribution of residuals after training your model with MSE loss function. You can experiment with different values of $\delta$, also the default value 1.0 is fine for most cases. Mean squared logarithmic error (MSLE) This loss function is used when you have positive targets distributed with a long tail such as log-normal distribution. There are numerous examples of such data: house pricing, income of private households, city population, etc. End of explanation # We generate random data and train a simple linear model to minimize # MSE and MSLE loss functions. 
np.random.seed(1000) N = 10000 w = np.ones(5) X = np.random.randn(N, 5) y = X.dot(w) yexp = np.exp(y) mse = lambda x: mean_squared_error(yexp, X.dot(x)) def msle(x, t=1): pred = X.dot(x)+t pred[pred<0] = 0 return mean_squared_log_error(yexp+t, pred) x0 = np.zeros(X.shape[1]) #initial guess res_mse = minimize(mse, x0, tol=1e-5) #MSE residuals results = {} #MSLE residuals for different offsets T = [1.0, 10.0, 100.0] for t in T: results[t] = minimize(lambda x: msle(x, t), x0, tol=1e-5) sorted_idx = np.argsort(yexp) label_sorted = yexp[sorted_idx] def get_sorted_residuals(results): residuals = yexp - X.dot(results['x']) return residuals[sorted_idx] f, (axTop, axMiddle, axBottom) = pl.subplots(3, 1, sharex=True, figsize=(10, 15)) # plot the density distribution of the target values on top kde = gaussian_kde(yexp) yrange = np.logspace(min(y), max(y), 100, base=np.exp(1)) axTop.plot(yrange, kde(yrange)) axTop.set_ylabel('density') # plot the magnitudes of residuals for MSE and MSLE with different offsets for t in T: # use a median filter to smooth residuals and see the trend. axMiddle.plot(label_sorted, median_filter(get_sorted_residuals(results[t]), size=50), label='t={}'.format(t)) axBottom.plot(label_sorted, median_filter(get_sorted_residuals(results[t]), size=50), label='t={}'.format(t)) axMiddle.plot(label_sorted, median_filter(get_sorted_residuals(res_mse), size=50), label='mse') axBottom.plot(label_sorted, median_filter(get_sorted_residuals(res_mse), size=50), label='mse') axBottom.set_xscale('log') axBottom.set_xlabel('target') axBottom.set_xticks(np.logspace(-3,3,7)) axMiddle.set_ylabel('residual') axBottom.set_ylabel('residual') axMiddle.set_ylim(800,1000) axBottom.set_ylim(-20,75) axMiddle.legend() pl.show() Explanation: The distribution of the values has a "bump" for smaller values and then a long tail with a small number of very large values. Naturally, when you want to predict the target values, you want to allow for an error depending on the magnitude of the target. It’s ok to miss a population of a metropolitan area by 10 000, but for a small town it is unforgivable. In this case MSLE can be more suitable for the regression job than MSE. As the name suggests, MSLE minimizes the quadratic difference between the logarithms of the actual value and the prediction $$L(a_i, p_i) = (\log(a_i + t) - \log(p_i + t))^2$$ For example, $\log\left((1 + 1100) / (1 + 1000)\right)^2$ is about the same as $\log((1 + 11000) / (1 + 10000))^2$ so missing a value by 10\% is penalised equivalently whether the actual error is 100 or 1000. At the same time, with MSE the latter case costs 100x more. Qualitatively speaking, MSLE tends to lead to lower error for small targets and for higher error on large targets than MSE. This trade-off can be controlled by adjusting an offset parameter $t$ which is set to 1.0 by default. By increasing this parameter, you influence the “transition point” at which you go from minimizing quadratic error to minimizing quadratic log error. Let's take a look how this offset affects quantitative results. End of explanation quantiles = np.linspace(0, 1.0, 11) qvalues = np.quantile(yexp, np.linspace(0, 1.0, 11)) for q, val in zip(quantiles, qvalues): print("Percentile {:.0f}\tvalue {:.2f}".format(q*100, val)) Explanation: Note that the scale of the y-axis in the bottom has been changed to be able to see those small residuals! First, MSE tends to result in larger errors for small target values despite the fact that most of the targets are rather small. 
Instead, it focuses on producing smaller errors for large target values. Second, if you look at the density of the target values (top diagram), you see that qualitatively the "transition point" happens somewhere around 10: there is a high concentration of small targets <10 and only a small number of targets >>10. Quantitatively, this "transition point" lies somewhere between the 80th and 90th percentiles. End of explanation
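Editor's note: to make the offset behaviour discussed above concrete, here is a minimal self-contained sketch (plain NumPy, not part of the original notebook) that evaluates the MSLE formula $L(a, p) = (\log(a + t) - \log(p + t))^2$ for a fixed 10% relative error at different target scales; the helper name and the sample values are illustrative assumptions only.

import numpy as np

def msle_single(actual, predicted, offset=1.0):
    # Squared difference of logarithms, with the offset t added to both terms,
    # exactly as in the formula above.
    return (np.log(actual + offset) - np.log(predicted + offset)) ** 2

# A constant 10% relative error at increasing target magnitudes.
for actual in [10.0, 100.0, 1000.0, 10000.0]:
    predicted = 1.1 * actual
    for offset in [1.0, 10.0, 100.0]:
        loss = msle_single(actual, predicted, offset)
        print("target={:>8.0f}  offset={:>6.0f}  msle={:.5f}".format(actual, offset, loss))

With a small offset the penalty for a 10% miss is roughly the same at every scale, while a larger offset damps the loss for small targets, which is the "transition point" effect described above.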
3,649
Given the following text description, write Python code to implement the functionality described below step by step Description: Embedding Matplotlib Animations in IPython Notebooks This notebook first appeared as a blog post on Pythonic Perambulations. License Step2: Now we'll create a function that will save an animation and embed it in an html string. Note that this will require ffmpeg or mencoder to be installed on your system. For reasons entirely beyond my limited understanding of video encoding details, this also requires using the libx264 encoding for the resulting mp4 to be properly embedded into HTML5. Step3: With this HTML function in place, we can use IPython's HTML display tools to create a function which will show the video inline Step4: Example of Embedding an Animation The result looks something like this -- we'll use a basic animation example taken from my earlier Matplotlib Animation Tutorial post Step5: Making the Embedding Automatic We can go a step further and use IPython's display hooks to automatically represent animation objects with the correct HTML. We'll simply set the _repr_html_ member of the animation base class to our HTML converter function Step6: Now simply creating an animation will lead to it being automatically embedded in the notebook, without any further function calls
Python Code: %pylab inline Explanation: Embedding Matplotlib Animations in IPython Notebooks This notebook first appeared as a blog post on Pythonic Perambulations. License: BSD (C) 2013, Jake Vanderplas. Feel free to use, distribute, and modify with the above attribution. <!-- PELICAN_BEGIN_SUMMARY --> I've spent a lot of time on this blog working with matplotlib animations (see the basic tutorial here, as well as my examples of animating a quantum system, an optical illusion, the Lorenz system in 3D, and recreating Super Mario). Up until now, I've not have not combined the animations with IPython notebooks. The problem is that so far the integration of IPython with matplotlib is entirely static, while animations are by their nature dynamic. There are some efforts in the IPython and matplotlib development communities to remedy this, but it's still not an ideal setup. I had an idea the other day about how one might get around this limitation in the case of animations. By creating a function which saves an animation and embeds the binary data into an HTML string, you can fairly easily create automatically-embedded animations within a notebook. <!-- PELICAN_END_SUMMARY --> The Animation Display Function As usual, we'll start by enabling the pylab inline mode to make the notebook play well with matplotlib. End of explanation from tempfile import NamedTemporaryFile VIDEO_TAG = <video controls> <source src="data:video/x-m4v;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video> def anim_to_html(anim): if not hasattr(anim, '_encoded_video'): with NamedTemporaryFile(suffix='.mp4') as f: anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264']) video = open(f.name, "rb").read() anim._encoded_video = video.encode("base64") return VIDEO_TAG.format(anim._encoded_video) Explanation: Now we'll create a function that will save an animation and embed it in an html string. Note that this will require ffmpeg or mencoder to be installed on your system. For reasons entirely beyond my limited understanding of video encoding details, this also requires using the libx264 encoding for the resulting mp4 to be properly embedded into HTML5. End of explanation from IPython.display import HTML def display_animation(anim): plt.close(anim._fig) return HTML(anim_to_html(anim)) Explanation: With this HTML function in place, we can use IPython's HTML display tools to create a function which will show the video inline: End of explanation from matplotlib import animation # First set up the figure, the axis, and the plot element we want to animate fig = plt.figure() ax = plt.axes(xlim=(0, 2), ylim=(-2, 2)) line, = ax.plot([], [], lw=2) # initialization function: plot the background of each frame def init(): line.set_data([], []) return line, # animation function. This is called sequentially def animate(i): x = np.linspace(0, 2, 1000) y = np.sin(2 * np.pi * (x - 0.01 * i)) line.set_data(x, y) return line, # call the animator. blit=True means only re-draw the parts that have changed. 
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=20, blit=True) # call our new function to display the animation display_animation(anim) Explanation: Example of Embedding an Animation The result looks something like this -- we'll use a basic animation example taken from my earlier Matplotlib Animation Tutorial post: End of explanation animation.Animation._repr_html_ = anim_to_html Explanation: Making the Embedding Automatic We can go a step further and use IPython's display hooks to automatically represent animation objects with the correct HTML. We'll simply set the _repr_html_ member of the animation base class to our HTML converter function: End of explanation animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=20, blit=True) Explanation: Now simply creating an animation will lead to it being automatically embedded in the notebook, without any further function calls: End of explanation
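Editor's note: in more recent matplotlib releases (roughly 2.1 and later) this kind of embedding is built in, so the manual base64/ffmpeg plumbing above is no longer strictly necessary. A minimal sketch, reusing the fig, animate and init objects defined above and assuming an ffmpeg binary is available for the HTML5 variant:

from matplotlib import animation, rc
from IPython.display import HTML

# Option 1: let the notebook convert animations automatically.
rc('animation', html='jshtml')  # or 'html5' to embed an mp4 encoded with ffmpeg
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)
anim  # shown inline as an interactive JavaScript player

# Option 2: convert explicitly.
HTML(anim.to_jshtml())        # JavaScript player, no external encoder needed
HTML(anim.to_html5_video())   # <video> tag, requires ffmpeg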
3,650
Given the following text description, write Python code to implement the functionality described below step by step Description: gc exposes the underlying memory management mechanism of Python, the automatic garbage collector. The module includes functions for controlling how the collector operates and to examine the objects known to the system, either pending collection or stuck in reference cycles and unable to be freed. Tracing References Step1: Forcing Garbage Collection Step2: Collection Thresholds and Generations Step3: Debugging
Python Code: import gc import pprint class Graph: def __init__(self, name): self.name = name self.next = None def set_next(self, next): print('Linking nodes {}.next = {}'.format(self, next)) self.next = next def __repr__(self): return '{}({})'.format( self.__class__.__name__, self.name) # Construct a graph cycle one = Graph('one') two = Graph('two') three = Graph('three') one.set_next(two) two.set_next(three) three.set_next(one) print() print('three refers to:') for r in gc.get_referents(three): pprint.pprint(r) import gc import pprint import queue class Graph: def __init__(self, name): self.name = name self.next = None def set_next(self, next): print('Linking nodes {}.next = {}'.format(self, next)) self.next = next def __repr__(self): return '{}({})'.format( self.__class__.__name__, self.name) # Construct a graph cycle one = Graph('one') two = Graph('two') three = Graph('three') one.set_next(two) two.set_next(three) three.set_next(one) print() seen = set() to_process = queue.Queue() # Start with an empty object chain and Graph three. to_process.put(([], three)) # Look for cycles, building the object chain for each object # found in the queue so the full cycle can be printed at the # end. while not to_process.empty(): chain, next = to_process.get() chain = chain[:] chain.append(next) print('Examining:', repr(next)) seen.add(id(next)) for r in gc.get_referents(next): if isinstance(r, str) or isinstance(r, type): # Ignore strings and classes pass elif id(r) in seen: print() print('Found a cycle to {}:'.format(r)) for i, link in enumerate(chain): print(' {}: '.format(i), end=' ') pprint.pprint(link) else: to_process.put((chain, r)) Explanation: gc exposes the underlying memory management mechanism of Python, the automatic garbage collector. The module includes functions for controlling how the collector operates and to examine the objects known to the system, either pending collection or stuck in reference cycles and unable to be freed. 
Tracing References End of explanation import gc import pprint class Graph: def __init__(self, name): self.name = name self.next = None def set_next(self, next): print('Linking nodes {}.next = {}'.format(self, next)) self.next = next def __repr__(self): return '{}({})'.format( self.__class__.__name__, self.name) # Construct a graph cycle one = Graph('one') two = Graph('two') three = Graph('three') one.set_next(two) two.set_next(three) three.set_next(one) # Remove references to the graph nodes in this module's namespace one = two = three = None # Show the effect of garbage collection for i in range(2): print('\nCollecting {} ...'.format(i)) n = gc.collect() print('Unreachable objects:', n) print('Remaining Garbage:', end=' ') pprint.pprint(gc.garbage) Explanation: Forcing Garbage Collection End of explanation import gc print(gc.get_threshold()) Explanation: Collection Thresholds and Generations End of explanation import gc gc.set_debug(gc.DEBUG_STATS) gc.collect() print('Exiting') import gc flags = (gc.DEBUG_COLLECTABLE | gc.DEBUG_UNCOLLECTABLE | gc.DEBUG_SAVEALL ) gc.set_debug(flags) class Graph: def __init__(self, name): self.name = name self.next = None def set_next(self, next): self.next = next def __repr__(self): return '{}({})'.format( self.__class__.__name__, self.name) class CleanupGraph(Graph): def __del__(self): print('{}.__del__()'.format(self)) # Construct a graph cycle one = Graph('one') two = Graph('two') one.set_next(two) two.set_next(one) # Construct another node that stands on its own three = CleanupGraph('three') # Construct a graph cycle with a finalizer four = CleanupGraph('four') five = CleanupGraph('five') four.set_next(five) five.set_next(four) # Remove references to the graph nodes in this module's namespace one = two = three = four = five = None # Force a sweep print('Collecting') gc.collect() print('Done') # Report on what was left for o in gc.garbage: if isinstance(o, Graph): print('Retained: {} 0x{:x}'.format(o, id(o))) # Reset the debug flags before exiting to avoid dumping a lot # of extra information and making the example output more # confusing. gc.set_debug(0) Explanation: Debugging End of explanation
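Editor's note: a small complementary sketch (standard library only, not part of the original example set) showing how the thresholds printed above can be changed and how the per-generation counters and statistics can be inspected; the threshold values used here are arbitrary illustrations, not recommendations.

import gc

# Number of objects tracked in each generation since the last collection.
print('counts:', gc.get_count())

# Raise the generation-0 threshold so young-object collections run less often.
# CPython's usual default is (700, 10, 10); 5000 is only an example value.
gc.set_threshold(5000, 10, 10)
print('thresholds:', gc.get_threshold())

# Cumulative per-generation statistics (Python 3.4+): collections run,
# objects collected, and objects found to be uncollectable.
for generation, stats in enumerate(gc.get_stats()):
    print('generation {}: {}'.format(generation, stats))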
3,651
Given the following text description, write Python code to implement the functionality described below step by step Description: Experiments reported in "Domain Conditional Predictors for Domain Adaptation" Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: Colab tested with Step2: Data preparation Define 4 domains by transforming the data on the fly. Current transformations are rotation, blurring, flipping colors between background and digits, and horizontal flip. Step3: Look at samples from the training domains. Domain labels are such that Step4: Baseline 1 Step6: Training of the baseline Step7: Baseline 2 Step9: Training of the DANN baseline Step10: Definition of our models The models for our proposed setting are defined below. The FiLM layer simply projects z onto 2 tensors (independent dense layers for each projection) matching the shape of the features. Each such tensor is used for element-wise multiplication and addition with the input features. m_domain corresponds to a domain classifier. It outputs the output of the second conv. layer to be used as z, as well as a set of logits over the set of train domains. m_task is the main classifier and it contains FiLM layers that take z as input. Its output corresponds to the set of logits over the labels. Step12: Training of the proposed model Step14: Ablation 1 Step17: Ablation 2 Step18: Results Plots of training losses Step19: Out-of-domain evaluations The original test set of mnist without any transformations is considered Step20: In-domain evaluations and domain prediction The same transformations applied in train data are applied during test Step21: Samples and corresponding predictions
Python Code: #@test {"skip": true} !pip install dm-sonnet==2.0.0 --quiet !pip install tensorflow_addons==0.12 --quiet #@test {"output": "ignore"} import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_addons as tfa try: import sonnet.v2 as snt except ModuleNotFoundError: import sonnet as snt Explanation: Experiments reported in "Domain Conditional Predictors for Domain Adaptation" Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Preamble End of explanation #@test {"skip": true} print(" TensorFlow version: {}".format(tf.__version__)) print(" Sonnet version: {}".format(snt.__version__)) print("TensorFlow Addons version: {}".format(tfa.__version__)) Explanation: Colab tested with: TensorFlow version: 2.4.1 Sonnet version: 2.0.0 TensorFlow Addons version: 0.12.0 End of explanation #@test {"output": "ignore"} batch_size = 100 NUM_DOMAINS = 4 def process_batch_train(images, labels): images = tf.image.grayscale_to_rgb(images) images = tf.cast(images, dtype=tf.float32) images = images / 255. domain_index_candidates = tf.convert_to_tensor(list(range(NUM_DOMAINS))) samples = tf.random.categorical(tf.math.log([[1/NUM_DOMAINS for i in range(NUM_DOMAINS)]]), 1) # note log-prob domain_index=domain_index_candidates[tf.cast(samples[0][0], dtype=tf.int64)] if tf.math.equal(domain_index, tf.constant(0)): images = tfa.image.rotate(images, np.pi/3) elif tf.math.equal(domain_index, tf.constant(1)): images = tfa.image.gaussian_filter2d(images, filter_shape=[8,8]) elif tf.math.equal(domain_index, tf.constant(2)): images = tf.ones_like(images) - images elif tf.math.equal(domain_index, tf.constant(3)): images = tf.image.flip_left_right(images) domain_label = tf.cast(domain_index, tf.int64) return images, labels, domain_label def process_batch_test(images, labels): images = tf.image.grayscale_to_rgb(images) images = tf.cast(images, dtype=tf.float32) images = images / 255. return images, labels def mnist(split, multi_domain_test=False): dataset = tfds.load("mnist", split=split, as_supervised=True) if split == "train": process_batch = process_batch_train else: if multi_domain_test: process_batch = process_batch_train else: process_batch = process_batch_test dataset = dataset.map(process_batch) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE) dataset = dataset.cache() return dataset mnist_train = mnist("train").shuffle(1000) mnist_test = mnist("test") mnist_test_multidomain = mnist("test", multi_domain_test=True) Explanation: Data preparation Define 4 domains by transforming the data on the fly. Current transformations are rotation, blurring, flipping colors between background and digits, and horizontal flip. End of explanation #@test {"skip": true} import matplotlib.pyplot as plt images, label, domain_label = next(iter(mnist_train)) print(label[0], domain_label[0]) plt.imshow(images[0]); Explanation: Look at samples from the training domains. Domain labels are such that: Rotation >> 0, Blurring >> 1, Color flipping >> 2, Horizontal flip >> 3. 
End of explanation class M_unconditional(snt.Module): def __init__(self): super(M_unconditional, self).__init__() self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1") self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2") self.flatten = snt.Flatten() self.logits = snt.Linear(10, name="logits") def __call__(self, images): output = tf.nn.relu(self.hidden1(images)) output = tf.nn.relu(self.hidden2(output)) output = self.flatten(output) output = self.logits(output) return output m_unconditional = M_unconditional() Explanation: Baseline 1: Unconditional model A baseline model is defined below and referred to as unconditional since it does not take domain labels into account in any way. End of explanation #@test {"output": "ignore"} opt_unconditional = snt.optimizers.SGD(learning_rate=0.01) num_epochs = 10 loss_log_unconditional = [] def step(images, labels): Performs one optimizer step on a single mini-batch. with tf.GradientTape() as tape: logits_unconditional = m_unconditional(images) loss_unconditional = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_unconditional, labels=labels) loss_unconditional = tf.reduce_mean(loss_unconditional) params_unconditional = m_unconditional.trainable_variables grads_unconditional = tape.gradient(loss_unconditional, params_unconditional) opt_unconditional.apply(grads_unconditional, params_unconditional) return loss_unconditional for images, labels, domain_labels in mnist_train.repeat(num_epochs): loss_unconditional = step(images, labels) loss_log_unconditional.append(loss_unconditional.numpy()) print("\n\nFinal loss: {}".format(loss_log_unconditional[-1])) REDUCTION_FACTOR = 0.2 ## Factor in [0,1] used to check whether the training loss reduces during training ## Checks whether the training loss reduces assert loss_log_unconditional[-1] < REDUCTION_FACTOR*loss_log_unconditional[0] Explanation: Training of the baseline: End of explanation #@test {"skip": true} class DANN_task(snt.Module): def __init__(self): super(DANN_task, self).__init__() self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1") self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2") self.flatten = snt.Flatten() self.logits = snt.Linear(10, name="logits") def __call__(self, images): output = tf.nn.relu(self.hidden1(images)) output = tf.nn.relu(self.hidden2(output)) z = self.flatten(output) output = self.logits(z) return output, z #@test {"skip": true} class DANN_domain(snt.Module): def __init__(self): super(DANN_domain, self).__init__() self.logits = snt.Linear(NUM_DOMAINS, name="logits") def __call__(self, z): output = self.logits(z) return output #@test {"skip": true} m_DANN_task = DANN_task() m_DANN_domain = DANN_domain() Explanation: Baseline 2: Domain invariant representations DANN-like model where the domain discriminator is replaced by a domain classifier aiming to induce invariance across training domains End of explanation #@test {"skip": true} opt_task = snt.optimizers.SGD(learning_rate=0.01) opt_domain = snt.optimizers.SGD(learning_rate=0.01) domain_loss_weight = 0.2 ## Hyperparameter - factor to be multiplied by the domain entropy term when training the task classifier num_epochs = 20 ## Doubled the number of epochs to train the task classifier for as many iterations as the other methods since we have alternate updates loss_log_dann = {'task_loss':[],'domain_loss':[]} number_of_iterations = 0 def step(images, labels, domain_labels, iteration_count): Performs one optimizer step 
on a single mini-batch. if iteration_count%2==0: ## Alternate between training the class classifier and the domain classifier with tf.GradientTape() as tape: logits_DANN_task, z_DANN = m_DANN_task(images) logist_DANN_domain = m_DANN_domain(z_DANN) loss_DANN_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_DANN_task, labels=labels) loss_DANN_domain = tf.nn.softmax_cross_entropy_with_logits(logits=logist_DANN_domain, labels=1/NUM_DOMAINS*tf.ones_like(logist_DANN_domain)) ## Negative entropy of P(Y|X) measured as the cross-entropy against the uniform dist. loss_DANN = tf.reduce_mean(loss_DANN_task + domain_loss_weight*loss_DANN_domain) params_DANN = m_DANN_task.trainable_variables grads_DANN = tape.gradient(loss_DANN, params_DANN) opt_task.apply(grads_DANN, params_DANN) return 'task_loss', loss_DANN else: with tf.GradientTape() as tape: _, z_DANN = m_DANN_task(images) logist_DANN_domain_classifier = m_DANN_domain(z_DANN) loss_DANN_domain_classifier = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logist_DANN_domain_classifier, labels=domain_labels) loss_DANN_domain_classifier = tf.reduce_mean(loss_DANN_domain_classifier) params_DANN_domain_classifier = m_DANN_domain.trainable_variables grads_DANN_domain_classifier = tape.gradient(loss_DANN_domain_classifier, params_DANN_domain_classifier) opt_domain.apply(grads_DANN_domain_classifier, params_DANN_domain_classifier) return 'domain_loss', loss_DANN_domain_classifier for images, labels, domain_labels in mnist_train.repeat(num_epochs): number_of_iterations += 1 loss_tag, loss_dann = step(images, labels, domain_labels, number_of_iterations) loss_log_dann[loss_tag].append(loss_dann.numpy()) print("\n\nFinal losses: {} - {}, {} - {}".format('task_loss', loss_log_dann['task_loss'][-1], 'domain_loss', loss_log_dann['domain_loss'][-1])) Explanation: Training of the DANN baseline End of explanation #@test {"skip": true} class FiLM(snt.Module): def __init__(self, features_shape): super(FiLM, self).__init__() self.features_shape = features_shape target_dimension = np.prod(features_shape) self.hidden_W = snt.Linear(output_size=target_dimension) self.hidden_B = snt.Linear(output_size=target_dimension) def __call__(self, features, z): W = snt.reshape(self.hidden_W(z), output_shape=self.features_shape) B = snt.reshape(self.hidden_B(z), output_shape=self.features_shape) output = W*features+B return output #@test {"skip": true} class M_task(snt.Module): def __init__(self): super(M_task, self).__init__() self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1") self.film1 = FiLM(features_shape=[28,28,10]) self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2") self.film2 = FiLM(features_shape=[28,28,20]) self.flatten = snt.Flatten() self.logits = snt.Linear(10, name="logits") def __call__(self, images, z): output = tf.nn.relu(self.hidden1(images)) output = self.film1(output,z) output = tf.nn.relu(self.hidden2(output)) output = self.film2(output,z) output = self.flatten(output) output = self.logits(output) return output #@test {"skip": true} class M_domain(snt.Module): def __init__(self): super(M_domain, self).__init__() self.hidden = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden") self.flatten = snt.Flatten() self.logits = snt.Linear(NUM_DOMAINS, name="logits") def __call__(self, images): output = tf.nn.relu(self.hidden(images)) z = self.flatten(output) output = self.logits(z) return output, z #@test {"skip": true} m_task = M_task() m_domain = M_domain() #@test {"skip": true} 
images, labels = next(iter(mnist_test)) domain_logits, z = m_domain(images) logits = m_task(images, z) prediction = tf.argmax(logits[0]).numpy() actual = labels[0].numpy() print("Predicted class: {} actual class: {}".format(prediction, actual)) plt.imshow(images[0]) Explanation: Definition of our models The models for our proposed setting are defined below. The FiLM layer simply projects z onto 2 tensors (independent dense layers for each projection) matching the shape of the features. Each such tensor is used for element-wise multiplication and addition with the input features. m_domain corresponds to a domain classifier. It outputs the output of the second conv. layer to be used as z, as well as a set of logits over the set of train domains. m_task is the main classifier and it contains FiLM layers that take z as input. Its output corresponds to the set of logits over the labels. End of explanation #@test {"skip": true} from tqdm import tqdm # MNIST training set has 60k images. num_images = 60000 def progress_bar(generator): return tqdm( generator, unit='images', unit_scale=batch_size, total=(num_images // batch_size) * num_epochs) #@test {"skip": true} opt = snt.optimizers.SGD(learning_rate=0.01) num_epochs = 10 loss_log = {'total_loss':[], 'task_loss':[], 'domain_loss':[]} def step(images, labels, domain_labels): Performs one optimizer step on a single mini-batch. with tf.GradientTape() as tape: domain_logits, z = m_domain(images) logits = m_task(images, z) loss_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels) loss_domain = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=domain_logits, labels=domain_labels) loss = loss_task + loss_domain loss = tf.reduce_mean(loss) loss_task = tf.reduce_mean(loss_task) loss_domain = tf.reduce_mean(loss_domain) params = m_task.trainable_variables + m_domain.trainable_variables grads = tape.gradient(loss, params) opt.apply(grads, params) return loss, loss_task, loss_domain for images, labels, domain_labels in progress_bar(mnist_train.repeat(num_epochs)): loss, loss_task, loss_domain = step(images, labels, domain_labels) loss_log['total_loss'].append(loss.numpy()) loss_log['task_loss'].append(loss_task.numpy()) loss_log['domain_loss'].append(loss_domain.numpy()) print("\n\nFinal total loss: {}".format(loss.numpy())) print("\n\nFinal task loss: {}".format(loss_task.numpy())) print("\n\nFinal domain loss: {}".format(loss_domain.numpy())) Explanation: Training of the proposed model End of explanation #@test {"skip": true} class M_learned_z(snt.Module): def __init__(self): super(M_learned_z, self).__init__() self.context = snt.Embed(vocab_size=NUM_DOMAINS, embed_dim=128) self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1") self.film1 = FiLM(features_shape=[28,28,10]) self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2") self.film2 = FiLM(features_shape=[28,28,20]) self.flatten = snt.Flatten() self.logits = snt.Linear(10, name="logits") def __call__(self, images, domain_labels): z = self.context(domain_labels) output = tf.nn.relu(self.hidden1(images)) output = self.film1(output,z) output = tf.nn.relu(self.hidden2(output)) output = self.film2(output,z) output = self.flatten(output) output = self.logits(output) return output #@test {"skip": true} m_learned_z = M_learned_z() #@test {"skip": true} opt_learned_z = snt.optimizers.SGD(learning_rate=0.01) num_epochs = 10 loss_log_learned_z = [] def step(images, labels, domain_labels): Performs one optimizer step on a single 
mini-batch. with tf.GradientTape() as tape: logits_learned_z = m_learned_z(images, domain_labels) loss_learned_z = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_learned_z, labels=labels) loss_learned_z = tf.reduce_mean(loss_learned_z) params_learned_z = m_learned_z.trainable_variables grads_learned_z = tape.gradient(loss_learned_z, params_learned_z) opt_learned_z.apply(grads_learned_z, params_learned_z) return loss_learned_z for images, labels, domain_labels in mnist_train.repeat(num_epochs): loss_learned_z = step(images, labels, domain_labels) loss_log_learned_z.append(loss_learned_z.numpy()) print("\n\nFinal loss: {}".format(loss_log_learned_z[-1])) Explanation: Ablation 1: Learned domain-wise context variable z Here we consider a case where the context variables z used for conditioning are learned directly from data, and the domain predictor is discarded. This only allows for in-domain prediction though. End of explanation #@test {"skip": true} m_task_ablation = M_task() m_domain_ablation = M_domain() m_DANN_ablation = DANN_domain() ## Used for evaluating how domain dependent the representations are #@test {"skip": true} opt_ablation = snt.optimizers.SGD(learning_rate=0.01) num_epochs = 10 loss_log_ablation = [] def step(images, labels, domain_labels): Performs one optimizer step on a single mini-batch. with tf.GradientTape() as tape: domain_logits_ablation, z = m_domain_ablation(images) logits_ablation = m_task_ablation(images, z) loss_ablation = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation, labels=labels) loss_ablation = tf.reduce_mean(loss_ablation) params_ablation = m_task_ablation.trainable_variables + m_domain_ablation.trainable_variables grads_ablation = tape.gradient(loss_ablation, params_ablation) opt_ablation.apply(grads_ablation, params_ablation) return loss_ablation for images, labels, domain_labels in mnist_train.repeat(num_epochs): loss_ablation = step(images, labels, domain_labels) loss_log_ablation.append(loss_ablation.numpy()) print("\n\nFinal task loss: {}".format(loss_ablation.numpy())) #@test {"skip": true} opt_ablation_domain_classifier = snt.optimizers.SGD(learning_rate=0.01) num_epochs = 10 log_loss_ablation_domain_classification = [] def step(images, labels, domain_labels): Performs one optimizer step on a single mini-batch. 
with tf.GradientTape() as tape: _, z = m_domain_ablation(images) logits_ablation_domain_classification = m_DANN_ablation(z) loss_ablation_domain_classification = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation_domain_classification, labels=domain_labels) loss_ablation_domain_classification = tf.reduce_mean(loss_ablation_domain_classification) params_ablation_domain_classification = m_DANN_ablation.trainable_variables grads_ablation_domain_classification = tape.gradient(loss_ablation_domain_classification, params_ablation_domain_classification) opt_ablation.apply(grads_ablation_domain_classification, params_ablation_domain_classification) return loss_ablation_domain_classification for images, labels, domain_labels in mnist_train.repeat(num_epochs): loss_ablation_domain_classifier = step(images, labels, domain_labels) log_loss_ablation_domain_classification.append(loss_ablation_domain_classifier.numpy()) print("\n\nFinal task loss: {}".format(loss_ablation_domain_classifier.numpy())) Explanation: Ablation 2: Dropping the domain classification term of the loss We consider an ablation where the same models as in our conditional predictor are used, but training is carried out with the classification loss only. This gives us a model with the same capacity as ours but no explicit mechanism to account for domain variations in train data. The goal of this ablation is to understand whether the improvement might be simply coming from the added capacity rather than the conditional modeling. End of explanation #@test {"skip": true} f = plt.figure(figsize=(32,8)) ax = f.add_subplot(1,3,1) ax.plot(loss_log['total_loss']) ax.set_title('Total Loss') ax = f.add_subplot(1,3,2) ax.plot(loss_log['task_loss']) ax.set_title('Task loss') ax = f.add_subplot(1,3,3) ax.plot(loss_log['domain_loss']) ax.set_title('Domain loss') #@test {"skip": true} f = plt.figure(figsize=(8,8)) ax = f.add_axes([1,1,1,1]) ax.plot(loss_log_unconditional) ax.set_title('Unconditional baseline - Train Loss') #@test {"skip": true} f = plt.figure(figsize=(16,8)) ax = f.add_subplot(1,2,1) ax.plot(loss_log_dann['task_loss']) ax.set_title('Domain invariant baseline - Task loss (Class. 
+ -Entropy)') ax = f.add_subplot(1,2,2) ax.plot(loss_log_dann['domain_loss']) ax.set_title('Domain invariant baseline - Domain classification loss') #@test {"skip": true} f = plt.figure(figsize=(16,8)) ax = f.add_subplot(1,2,1) ax.plot(loss_log_learned_z) ax.set_title('Ablation 1 - Task loss') ax = f.add_subplot(1,2,2) ax.plot(loss_log_ablation) ax.set_title('Ablation 2 - Task loss') #@test {"skip": true} f = plt.figure(figsize=(8,8)) ax = f.add_axes([1,1,1,1]) ax.plot(log_loss_ablation_domain_classification) ax.set_title('Ablation 2: Domain classification - Train Loss') Explanation: Results Plots of training losses End of explanation #@test {"skip": true} total = 0 correct = 0 correct_unconditional = 0 correct_adversarial = 0 correct_ablation2 = 0 ## The model corresponding to ablation 1 can only be used with in-domain data (with domain labels) for images, labels in mnist_test: domain_logits, z = m_domain(images) logits = m_task(images, z) logits_unconditional = m_unconditional(images) logits_adversarial, _ = m_DANN_task(images) domain_logits_ablation, z_ablation = m_domain_ablation(images) logits_ablation2 = m_task_ablation(images, z_ablation) predictions = tf.argmax(logits, axis=1) predictions_unconditional = tf.argmax(logits_unconditional, axis=1) predictions_adversarial = tf.argmax(logits_adversarial, axis=1) predictions_ablation2 = tf.argmax(logits_ablation2, axis=1) correct += tf.math.count_nonzero(tf.equal(predictions, labels)) correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels)) correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels)) correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels)) total += images.shape[0] print("Got %d/%d (%.02f%%) correct" % (correct, total, correct / total * 100.)) print("Unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.)) print("Adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.)) print("Ablation 2 perf.: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.)) Explanation: Out-of-domain evaluations The original test set of mnist without any transformations is considered End of explanation #@test {"skip": true} n_repetitions = 10 ## Going over the test set multiple times to account for multiple transformations total = 0 correct_class = 0 correct_unconditional = 0 correct_adversarial = 0 correct_ablation1 = 0 correct_ablation2 = 0 correct_domain = 0 correct_domain_adversarial = 0 correct_domain_ablation = 0 for images, labels, domain_labels in mnist_test_multidomain.repeat(n_repetitions): domain_logits, z = m_domain(images) class_logits = m_task(images, z) logits_unconditional = m_unconditional(images) logits_adversarial, z_adversarial = m_DANN_task(images) domain_logits_adversarial = m_DANN_domain(z_adversarial) logits_ablation1 = m_learned_z(images, domain_labels) _, z_ablation = m_domain_ablation(images) domain_logits_ablation = m_DANN_ablation(z_ablation) logits_ablation2 = m_task_ablation(images, z_ablation) predictions_class = tf.argmax(class_logits, axis=1) predictions_unconditional = tf.argmax(logits_unconditional, axis=1) predictions_adversarial = tf.argmax(logits_adversarial, axis=1) predictions_ablation1 = tf.argmax(logits_ablation1, axis=1) predictions_ablation2 = tf.argmax(logits_ablation2, axis=1) predictions_domain = tf.argmax(domain_logits, axis=1) 
predictions_domain_adversarial = tf.argmax(domain_logits_adversarial, axis=1) predictions_domain_ablation = tf.argmax(domain_logits_ablation, axis=1) correct_class += tf.math.count_nonzero(tf.equal(predictions_class, labels)) correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels)) correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels)) correct_ablation1 += tf.math.count_nonzero(tf.equal(predictions_ablation1, labels)) correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels)) correct_domain += tf.math.count_nonzero(tf.equal(predictions_domain, domain_labels)) correct_domain_adversarial += tf.math.count_nonzero(tf.equal(predictions_domain_adversarial, domain_labels)) correct_domain_ablation += tf.math.count_nonzero(tf.equal(predictions_domain_ablation, domain_labels)) total += images.shape[0] print("In domain unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.)) print("In domain adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.)) print("In domain ablation 1: %d/%d (%.02f%%) correct" % (correct_ablation1, total, correct_ablation1 / total * 100.)) print("In domain ablation 2: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.)) print("In domain class predictions: Got %d/%d (%.02f%%) correct" % (correct_class, total, correct_class / total * 100.)) print("\n\nDomain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain, total, correct_domain / total * 100.)) print("Adversarial baseline domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_adversarial, total, correct_domain_adversarial / total * 100.)) print("Ablation 2 domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_ablation, total, correct_domain_ablation / total * 100.)) #@test {"skip": true} def sample(correct, rows, cols): n = 0 f, ax = plt.subplots(rows, cols) if rows > 1: ax = tf.nest.flatten([tuple(ax[i]) for i in range(rows)]) f.set_figwidth(14) f.set_figheight(4 * rows) for images, labels in mnist_test: domain_logits, z = m_domain(images) logits = m_task(images, z) predictions = tf.argmax(logits, axis=1) eq = tf.equal(predictions, labels) for i, x in enumerate(eq): if x.numpy() == correct: label = labels[i] prediction = predictions[i] image = images[i] ax[n].imshow(image) ax[n].set_title("Prediction:{}\nActual:{}".format(prediction, label)) n += 1 if n == (rows * cols): break if n == (rows * cols): break Explanation: In-domain evaluations and domain prediction The same transformations applied in train data are applied during test End of explanation #@test {"skip": true} sample(correct=True, rows=1, cols=5) #@test {"skip": true} sample(correct=False, rows=2, cols=5) Explanation: Samples and corresponding predictions End of explanation
3,652
Given the following text description, write Python code to implement the functionality described below step by step Description: Logistic Regression Notebook version Step1: Logistic Regression 1. Introduction 1.1. Binary classification and decision theory. The MAP criterion Goal of a classification problem is to assign a class or category to every instance or observation of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = {0, 1}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$. We will denote as $\hat{y}$ the classifier output or decision. If $y=\hat{y}$, the decision is an hit, otherwise $y\neq \hat{y}$ and the decision is an error. Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model Step2: 2.2. Classifiers based on the logistic model. The MAP classifier under a logistic model will have the form $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$ Therefore $$ 2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad 1 + \exp(-{\bf w}^\intercal{\bf x}) $$ which is equivalent to $${\bf w}^\intercal{\bf x} \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad 0 $$ Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$. Step3: 3.3. Nonlinear classifiers. The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$ where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation $$ {\bf w}^\intercal{\bf z} = 0 $$ Exercise 2 Step4: 3. Inference Remember that the idea of parametric classification is to use the training data set $\mathcal S = {({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {0,1}, k=1,\ldots,K}$ to set the parameter vector ${\bf w}$ according to certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as $$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$ <img src="figs/parametric_decision.png", width=300> In the following, we will make the following assumptions Step5: Now, we select two classes and two attributes. Step6: 3.2.2. Data normalization Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized. We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance. Step7: Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set. Step8: The following figure generates a plot of the normalized training data. 
Step9: In order to apply the gradient descent rule, we need to define two methods Step10: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$. Step11: 3.2.3. Free parameters Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depend on several factors Step12: 3.2.5. Polynomial Logistic Regression The error rates of the logistic regression model can be potentially reduced by using polynomial transformations. To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing. Step13: Visualizing the posterior map we can se that the polynomial transformation produces nonlinear decision boundaries. Step14: 4. Regularization and MAP estimation. An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as $$ \hat{\bf w}{\text{MAP}} = \arg\max{\bf w} p({\bf w}|{\mathcal S}) $$ The posterior density $p({\bf w}|{\mathcal S})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule $$ p({\bf w}|{\mathcal S}) = \frac{P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) p_{\bf W}({\bf w})} {p\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)}\right)} $$ $$ p({\bf w}|{\mathcal S}) = \frac{P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) p_{\bf W}({\bf w})} {p\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)}\right)} $$ The numerator of the above expression is the product of two terms Step15: 6. Logistic regression in Scikit Learn. The <a href="http
Python Code: # To visualize plots in the notebook %matplotlib inline # Imported libraries import csv import random import matplotlib import matplotlib.pyplot as plt import pylab import numpy as np from mpl_toolkits.mplot3d import Axes3D from sklearn.preprocessing import PolynomialFeatures from sklearn import linear_model Explanation: Logistic Regression Notebook version: 1.0 (Oct 12, 2016) Author: Jesús Cid Sueiro ([email protected]) Jerónimo Arenas García ([email protected]) Changes: v.1.0 - First version v.1.1 - Typo correction. Prepared for slide presentation End of explanation # Define the logistic function def logistic(x): p = #<FILL IN> return p # Plot the logistic function t = np.arange(-6, 6, 0.1) z = logistic(t) plt.plot(t, z) plt.xlabel('$t$', fontsize=14) plt.ylabel('$\phi(t)$', fontsize=14) plt.title('The logistic function') plt.grid() Explanation: Logistic Regression 1. Introduction 1.1. Binary classification and decision theory. The MAP criterion Goal of a classification problem is to assign a class or category to every instance or observation of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = {0, 1}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$. We will denote as $\hat{y}$ the classifier output or decision. If $y=\hat{y}$, the decision is an hit, otherwise $y\neq \hat{y}$ and the decision is an error. Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criteria for classification is to select predictor $\hat{Y}=f({\bf x})$ in such a way that the probability or error, $P{\hat{Y} \neq Y}$ is minimum. Noting that $$ P{\hat{Y} \neq Y} = \int P{\hat{Y} \neq Y | {\bf x}} p_{\bf X}({\bf x}) d{\bf x} $$ the optimal decision is got if, for every sample ${\bf x}$, we make decision minimizing the conditional error probability: \begin{align} \hat{y}^* &= \arg\min_{\hat{y}} P{\hat{y} \neq Y |{\bf x}} \ &= \arg\max_{\hat{y}} P{\hat{y} = Y |{\bf x}} \ \end{align} Thus, the optimal decision rule can be expressed as $$ P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}{\hat{y}=0}\quad P{Y|{\bf X}}(0|{\bf x}) $$ or, equivalently $$ P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2} $$ The classifier implementing this decision rule is usually named MAP (Maximum A Posteriori). 1.2. Parametric classification. Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal S = {({\bf x}^{(k)}, y^{(k)}), \,k=1,\ldots,K}$ of instances and their respective class labels. A more realistic formulation of the classification problem is the following: given a dataset $\mathcal S = {({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=1,\ldots,K}$ of independent and identically distributed (i.i.d.) 
samples from an unknown distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error. Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker. Parametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula: $$ P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x}) $$ where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker. In practice, the dataset ${\mathcal S}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes $$ f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2} $$ In this lesson, we explore one of the most popular model-based parametric classification methods: logistic regression. <img src="figs/parametric_decision.png", width=300> 2. Logistic regression. 2.1. The logistic function The logistic regression model assumes that the binary class label $Y \in {0,1}$ of observation $X\in \mathbb{R}^N$ satisfies the expression. $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$ $$P_{Y|{\bf,X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$ where ${\bf w}$ is a parameter vector and $g(·)$ is the logistic function, which is defined by $$g(t) = \frac{1}{1+\exp(-t)}$$ It is straightforward to see that the logistic function has the following properties: P1: Probabilistic output: $\quad 0 \le g(t) \le 1$ P2: Symmetry: $\quad g(-t) = 1-g(t)$ P3: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$ In the following we define a logistic function in python, and use it to plot a graphical representation. Exercise 1: Verify properties P2 and P3. Exercise 2: Implement a function to compute the logistic function, and use it to plot such function in the inverval $[-6,6]$. End of explanation # Weight vector: w = [1, 4, 8] # Try different weights # Create a rectangular grid. x_min = -1 x_max = 1 dx = x_max - x_min h = float(dx) / 200 xgrid = np.arange(x_min, x_max, h) xx0, xx1 = np.meshgrid(xgrid, xgrid) # Compute the logistic map for the given weights Z = logistic(w[0] + w[1]*xx0 + w[2]*xx1) # Plot the logistic map fig = plt.figure() ax = fig.gca(projection='3d') ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper) plt.xlabel('$x_0$') plt.ylabel('$x_1$') ax.set_zlabel('P(1|x,w)') Explanation: 2.2. Classifiers based on the logistic model. The MAP classifier under a logistic model will have the form $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$ Therefore $$ 2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad 1 + \exp(-{\bf w}^\intercal{\bf x}) $$ which is equivalent to $${\bf w}^\intercal{\bf x} \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad 0 $$ Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$. 
End of explanation # SOLUTION TO THE EXERCISE # Weight vector: w = [1, 10, 10, -20, 5, 1] # Try different weights # Create a regtangular grid. x_min = -1 x_max = 1 dx = x_max - x_min h = float(dx) / 200 xgrid = np.arange(x_min, x_max, h) xx0, xx1 = np.meshgrid(xgrid, xgrid) # Compute the logistic map for the given weights Z = #<FILL IN> # Plot the logistic map fig = plt.figure() ax = fig.gca(projection='3d') ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper) plt.xlabel('$x_0$') plt.ylabel('$x_1$') ax.set_zlabel('P(1|x,w)') Explanation: 3.3. Nonlinear classifiers. The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$ where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation $$ {\bf w}^\intercal{\bf z} = 0 $$ Exercise 2: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by $$ P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2) $$ End of explanation # Adapted from a notebook by Jason Brownlee def loadDataset(filename, split): xTrain = [] cTrain = [] xTest = [] cTest = [] with open(filename, 'rb') as csvfile: lines = csv.reader(csvfile) dataset = list(lines) for i in range(len(dataset)-1): for y in range(4): dataset[i][y] = float(dataset[i][y]) item = dataset[i] if random.random() < split: xTrain.append(item[0:4]) cTrain.append(item[4]) else: xTest.append(item[0:4]) cTest.append(item[4]) return xTrain, cTrain, xTest, cTest with open('iris.data', 'rb') as csvfile: lines = csv.reader(csvfile) xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66) nTrain_all = len(xTrain_all) nTest_all = len(xTest_all) print 'Train: ' + str(nTrain_all) print 'Test: ' + str(nTest_all) Explanation: 3. Inference Remember that the idea of parametric classification is to use the training data set $\mathcal S = {({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {0,1}, k=1,\ldots,K}$ to set the parameter vector ${\bf w}$ according to certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as $$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$ <img src="figs/parametric_decision.png", width=300> In the following, we will make the following assumptions: A1. The samples in ${\mathcal S}$ are i.i.d. A2. Target $Y^{(k)}$ only depends on ${\bf x}^{(k)}$, but not on ${\bf x}^{(l)}$ for any $l\neq k$. A3. (Logistic Regression): We assume a logistic model for the a posteriori probability of ${Y=1}$ given ${\bf X}$, i.e., $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})].$$ We need still to choose a criterion to optimize with the selection of the parameter vector. 
In the notebook, we will discuss two different approaches to the estimation of ${\bf w}$: Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})$ Maximum A Posteriori (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal S}}({\bf w}|{\mathcal S})$ For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that $$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})] = g[-{\bf w}^\intercal{\bf z}({\bf x})]$$ we can write $$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$ where $\overline{y} = 2y-1$ is a symmetrized label ($\overline{y}\in\{-1, 1\}$). 3.1. ML estimation. The ML estimate is defined as $$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w}) = \arg\min_{\bf w} L({\bf w}) $$ where $L({\bf w})$ is the negative log-likelihood function, given by $$ L({\bf w}) = - \log P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w}) = - \log\left[P\left(y^{(1)},\ldots,y^{(K)}| {\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right)\right] $$ Using assumption A1, $$ L({\bf w}) = - \log\left[\prod_{k=1}^K P\left(y^{(k)}|{\bf x}^{(1)},\ldots,{\bf x}^{(K)},{\bf w}\right)\right]. $$ Using A2, \begin{align} L({\bf w}) &= - \log\left[\prod_{k=1}^K P_{Y|{\bf X}}\left(y^{(k)}|{\bf x}^{(k)},{\bf w}\right)\right] \\ &= - \sum_{k=1}^K\log\left[P_{Y|{\bf X}}\left(y^{(k)}|{\bf x}^{(k)},{\bf w}\right)\right] \end{align} Using A3 (the logistic model) \begin{align} L({\bf w}) &= - \sum_{k=1}^K\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\ &= \sum_{k=1}^K\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \end{align} where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$. It can be shown that $L({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient. \begin{align} \nabla_{\bf w} L(\hat{\bf w}_{\text{ML}}) &= - \sum_{k=1}^K \frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}} {1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)} \right)} \\ &= - \sum_{k=1}^K \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0 \end{align} Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be taken out from the above equation, and some iterative optimization algorithm must be used to search for the minimum. 3.2. Gradient descent. A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>. \begin{align} {\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} L({\bf w}_n) \end{align} where $\rho_n >0$ is the learning step.
Applying the gradient descent rule to logistic regression, we get the following algorithm: \begin{align} {\bf w}{n+1} &= {\bf w}_n + \rho_n \sum{k=1}^K \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)} \end{align} Defining vectors \begin{align} {\bf y} &= [y^{(1)},\ldots,y^{(K)}]^\intercal \ \hat{\bf p}_n &= [g({\bf w}_n^\intercal {\bf z}^{(1)}), \ldots, g({\bf w}_n^\intercal {\bf z}^{(K)})]^\intercal \end{align} and matrix \begin{align} {\bf Z} = \left[{\bf z}^{(1)},\ldots,{\bf z}^{(K)}\right]^\intercal \end{align} we can write \begin{align} {\bf w}_{n+1} &= {\bf w}_n + \rho_n {\bf Z} \left({\bf y}-\hat{\bf p}_n\right) \end{align} In the following, we will explore the behavior of the gradient descend method using the Iris Dataset. 3.2.1 Example: Iris Dataset. As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (setosa, versicolor or virginica). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters. We will try to fit the logistic regression model to discriminate between two classes using only two attributes. First, we load the dataset and split them in training and test subsets. End of explanation # Select attributes i = 0 # Try 0,1,2,3 j = 1 # Try 0,1,2,3 with j!=i # Select two classes c0 = 'Iris-versicolor' c1 = 'Iris-virginica' # Select two coordinates ind = [i, j] # Take training test X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1]) C_tr = [cTrain_all[n] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1] Y_tr = np.array([int(c==c1) for c in C_tr]) n_tr = len(X_tr) # Take test set X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1]) C_tst = [cTest_all[n] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1] Y_tst = np.array([int(c==c1) for c in C_tst]) n_tst = len(X_tst) Explanation: Now, we select two classes and two attributes. End of explanation def normalize(X, mx=None, sx=None): # Compute means and standard deviations if mx is None: mx = np.mean(X, axis=0) if sx is None: sx = np.std(X, axis=0) # Normalize X0 = (X-mx)/sx return X0, mx, sx Explanation: 3.2.2. Data normalization Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized. We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance. End of explanation # Normalize data Xn_tr, mx, sx = normalize(X_tr) Xn_tst, mx, sx = normalize(X_tst, mx, sx) Explanation: Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set. 
End of explanation # Separate components of x into different arrays (just for the plots) x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0] x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0] x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1] x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1] # Scatterplot. labels = {'Iris-setosa': 'Setosa', 'Iris-versicolor': 'Versicolor', 'Iris-virginica': 'Virginica'} plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.legend(loc='best') plt.axis('equal') Explanation: The following figure generates a plot of the normalized training data. End of explanation def logregFit(Z_tr, Y_tr, rho, n_it): # Data dimension n_dim = Z_tr.shape[1] # Initialize variables nll_tr = np.zeros(n_it) pe_tr = np.zeros(n_it) w = np.random.randn(n_dim,1) # Running the gradient descent algorithm for n in range(n_it): # Compute posterior probabilities for weight w p1_tr = logistic(np.dot(Z_tr, w)) p0_tr = logistic(-np.dot(Z_tr, w)) # Compute negative log-likelihood nll_tr[n] = - np.dot(Y_tr.T, np.log(p1_tr)) - np.dot((1-Y_tr).T, np.log(p0_tr)) # Update weights w += rho*np.dot(Z_tr.T, Y_tr - p1_tr) return w, nll_tr def logregPredict(Z, w): # Compute posterior probability of class 1 for weights w. p = logistic(np.dot(Z, w)) # Class D = [int(round(pn)) for pn in p] return p, D Explanation: In order to apply the gradient descent rule, we need to define two methods: - A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations. - A predict method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions. End of explanation # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 200 # Number of iterations # Compute Z's Z_tr = np.c_[np.ones(n_tr), Xn_tr] Z_tst = np.c_[np.ones(n_tst), Xn_tst] n_dim = Z_tr.shape[1] # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print "The optimal weights are:" print w print "The final error rates are:" print "- Training: " + str(pe_tr) print "- Test: " + str(pe_tst) print "The NLL after training is " + str(nll_tr[len(nll_tr)-1]) Explanation: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$. End of explanation # Create a regtangular grid. x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max() y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max() dx = x_max - x_min dy = y_max - y_min h = dy /400 xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h), np.arange(y_min - 0.1 * dx, y_max + 0.1 * dy, h)) X_grid = np.array([xx.ravel(), yy.ravel()]).T # Compute Z's Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid] # Compute the classifier output for all samples in the grid. 
pp, dd = logregPredict(Z_grid, w)
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
Explanation: 3.2.3. Free parameters Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors: Number of iterations Initialization Learning step Exercise: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values. Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code slide inside the loop, with some proper modifications. To plot a histogram of the values in array p with n bins, you can use plt.hist(p, n) 3.2.3.1. Learning step The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence. Exercise 3: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ marking a boundary between convergence and divergence? Exercise 4: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$. Note that you should explore the values of $\rho$ on a logarithmic scale. For instance, you can take $\rho = 1, 1/10, 1/100, 1/1000, \ldots$ In practice, the selection of $\rho$ may be a matter of trial and error. Also there is some theoretical evidence that the learning step should decrease over time to zero, and the sequence $\rho_n$ should satisfy two conditions: - C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (decrease slowly) - C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too slowly) For instance, we can take $\rho_n= 1/n$. Another common choice is $\rho_n = \alpha/(1+\beta n)$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method. 3.2.4. Visualizing the posterior map. We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.
End of explanation
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:]) Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1) # Compute Z_tst Z_tst = poly.fit_transform(Xn_tst) Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz) Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1) # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print "The optimal weights are:" print w print "The final error rates are:" print "- Training: " + str(pe_tr) print "- Test: " + str(pe_tst) print "The NLL after training is " + str(nll_tr[len(nll_tr)-1]) Explanation: 3.2.5. Polynomial Logistic Regression The error rates of the logistic regression model can be potentially reduced by using polynomial transformations. To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing. End of explanation # Compute Z_grid Z_grid = poly.fit_transform(X_grid) n_grid = Z_grid.shape[0] Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz) Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1) # Compute the classifier output for all samples in the grid. pp, dd = logregPredict(Z_grid, w) pp = pp.reshape(xx.shape) # Paint output maps pylab.rcParams['figure.figsize'] = 8, 4 # Set figure size for i in [1, 2]: ax = plt.subplot(1,2,i) ax.plot(x0c0, x1c0,'r.', label=labels[c0]) ax.plot(x0c1, x1c1,'g+', label=labels[c1]) ax.set_xlabel('$x_' + str(ind[0]) + '$') ax.set_ylabel('$x_' + str(ind[1]) + '$') ax.axis('equal') if i==1: ax.contourf(xx, yy, pp, cmap=plt.cm.copper) else: ax.legend(loc='best') ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper) Explanation: Visualizing the posterior map we can se that the polynomial transformation produces nonlinear decision boundaries. End of explanation def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4): # Compute Z's r = 2.0/C n_dim = Z_tr.shape[1] # Initialize variables nll_tr = np.zeros(n_it) pe_tr = np.zeros(n_it) w = np.random.randn(n_dim,1) # Running the gradient descent algorithm for n in range(n_it): p_tr = logistic(np.dot(Z_tr, w)) sk = np.multiply(p_tr, 1-p_tr) S = np.diag(np.ravel(sk.T)) # Compute negative log-likelihood nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr)) # Update weights invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr))) w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr)) return w, nll_tr # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 500 # Number of iterations C = 1000 g = 4 # Compute Z_tr poly = PolynomialFeatures(degree=g) Z_tr = poly.fit_transform(X_tr) # Normalize columns (this is useful to make algorithms more stable).) 
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
print "The NLL after training is " + str(nll_tr[len(nll_tr)-1])
Explanation: 4. Regularization and MAP estimation. An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as $$ \hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal S}) $$ The posterior density $p({\bf w}|{\mathcal S})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$, through the Bayes rule $$ p({\bf w}|{\mathcal S}) = \frac{P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) p_{\bf W}({\bf w})} {p\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)}\right)} $$ The numerator of the above expression is the product of two terms: the likelihood $P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})$, which takes large values for parameter vectors $\bf w$ that fit the training data well, and the prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our a priori preference for some solutions. Usually, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (associated with soft classification borders). In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP criterion prefers solutions that simultaneously fit well the data and our a priori belief about which solutions should be preferred. $$\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w}) \cdot p_{\bf W}({\bf w})$$ We can compute the MAP estimate as \begin{align} \hat{\bf w}_{\text{MAP}} &= \arg\max_{\bf w} P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) p_{\bf W}({\bf w}) \\ &= \arg\max_{\bf w} \left\{ \log\left[P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) \right] + \log\left[ p_{\bf W}({\bf w})\right] \right\} \\ &= \arg\min_{\bf w} \left\{L({\bf w}) - \log\left[ p_{\bf W}({\bf w})\right] \right\} \end{align} where $L(\cdot)$ is the negative log-likelihood function. We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values. 
4.1 MAP estimation with Gaussian prior If we assume that ${\bf W}$ is a zero-mean Gaussian random variable with variance matrix $v{\bf I}$, $$ p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right) $$ the MAP estimate becomes \begin{align} \hat{\bf w}_{\text{MAP}} &= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2 \right\} \end{align} where $C = 2v$. Noting that $$\nabla_{\bf w}\left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\} = - {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w}, $$ we obtain the following gradient descent rule for MAP estimation \begin{align} {\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n + \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) \end{align} 4.2 MAP estimation with Laplacian prior If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by $$ p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right) $$ (where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate is \begin{align} \hat{\bf w}_{\text{MAP}} &= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|_1 \right\} \end{align} The additional term introduced by the prior in the optimization algorithm is usually named the regularization term. It is usually very effective at avoiding overfitting when the dimension of the weight vectors is high. Parameter $C$ is named the inverse regularization strength. Exercise 5: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior. 5. Other optimization algorithms 5.1. Stochastic Gradient descent. Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is \begin{align} {\bf w}_{n+1} &= {\bf w}_n + \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right) \end{align} Once all samples in the training set have been applied, the algorithm can continue by applying the training set several times. The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs more iterations to converge. Exercise 6: Modify logregFit to implement an algorithm that applies the SGD rule. 5.2. Newton's method Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$ $$ C({\bf w}) \approx C({\bf w}_0) + \nabla_{\bf w}^\intercal C({\bf w}_0)({\bf w}-{\bf w}_0) + \frac{1}{2}({\bf w}-{\bf w}_0)^\intercal{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0) $$ where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> Hessian matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of C around ${\bf w}_0$ can be approximated as $$ {\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0) $$ Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer. <a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. 
At each optimization step, the function to be minimized is approximated by a second order approximation using a Taylor series expansion around the current estimate. As a result, the learning rule becomes $$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_{n})^{-1} \nabla_{\bf w}C(\hat{\bf w}_{n}) $$ For instance, for the MAP estimate with Gaussian prior, the Hessian matrix becomes $$ {\bf H}({\bf w}) = \frac{2}{C}{\bf I} + \sum_{k=1}^K f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right){\bf z}^{(k)} ({\bf z}^{(k)})^\intercal $$ Defining diagonal matrix $$ {\mathbf S}({\bf w}) = \text{diag}\left(f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right)\right) $$ the Hessian matrix can be written in more compact form as $$ {\bf H}({\bf w}) = \frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}({\bf w}) {\bf Z} $$ Therefore, Newton's algorithm for logistic regression becomes \begin{align} \hat{\bf w}_{n+1} = \hat{\bf w}_{n} + \rho_n \left(\frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}(\hat{\bf w}_{n}) {\bf Z} \right)^{-1} {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) \end{align} Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
End of explanation
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 8, 4 # Set figure size
for i in [1, 2]:
    ax = plt.subplot(1,2,i)
    ax.plot(x0c0, x1c0,'r.', label=labels[c0])
    ax.plot(x0c1, x1c1,'g+', label=labels[c1])
    ax.set_xlabel('$x_' + str(ind[0]) + '$')
    ax.set_ylabel('$x_' + str(ind[1]) + '$')
    ax.axis('equal')
    if i==1:
        ax.contourf(xx, yy, pp, cmap=plt.cm.copper)
    else:
        ax.legend(loc='best')
        ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper)
Explanation: 6. Logistic regression in Scikit Learn. The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.
End of explanation
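As a side note on the classifier object created above (an addition, not part of the original notebook): scikit-learn's penalty and C arguments play the role of the regularizers discussed in Section 4, up to constant factors in how the penalty is scaled. A hedged sketch:
# L2 penalty, i.e. the Gaussian-prior regularizer of Section 4.1
LogReg_l2 = linear_model.LogisticRegression(penalty='l2', C=1.0)

# L1 penalty, i.e. the Laplacian-prior regularizer of Section 4.2.
# Depending on the scikit-learn version, an L1-capable solver
# (e.g. 'liblinear') may have to be selected explicitly.
LogReg_l1 = linear_model.LogisticRegression(penalty='l1', C=1.0)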
3,653
Given the following text description, write Python code to implement the functionality described below step by step Description: Background Modeling When fitting a spectrum with a background, it is invalid to simply subtract off the background if the background is part of the data's generative model van Dyk et al. (2001). Therefore, we are often left with the task of modeling the statistical process of the background along with our source. In typical spectral modeling, we find a few common cases when background is involved. If we have total counts ($S_i$) in the $i^{\rm th}$ of $N$ bins observed for an exposure of $t_{\rm s}$ and also a measurement of $B_i$ background counts from looking off source for $t_{\rm b}$ seconds, we can then suppose a model for the source rate ($m_i$) and background rate ($b_i$). Poisson source with Poisson background This is described by a likelihood of the following form Step1: First we will create an observation where we have a simulated broken power law source spectrum along with an observed background spectrum. The background is a power law continuum with a Gaussian line. Step2: Using a profile likelihood We have very few counts in some channels (in fact sometimes zero), but let's assume we do not know the model for the background. In this case, we will use the profile Poisson likelihood. Step3: Our fit recovers the simulated parameters. However, we should have binned the spectrum up such that there is at least one background count per spectral bin for the profile to be valid. Step4: Modeling the background Now let's try to model the background assuming we know that the background is a power law with a Gaussian line. We can extract a background plugin from the data by passing the original plugin to a classmethod of SpectrumLike. Step5: This constructs a new plugin with only the observed background so that we can first model it. Step6: We now construct our background model and fit it to the data. Let's assume we know that the line occurs at 511 keV, but we are unsure of its strength and width. We do not need to bin the data up because we are using a simple Poisson likelihood which is valid even when we have zero counts Cash (1979). Step7: We now have a model and estimate for the background which we can use when fitting with the source spectrum. We now create a new plugin with just the total observation and pass our background plugin as the background argument. Step8: When we look at our count spectrum now, we will see the predicted background, rather than the measured one Step9: Now we simply fit the spectrum as we did in the profiled case. The background plugin's parameters are stored in our new plugin as nuisance parameters Step10: and the fitting engine will use them in the fit. The parameters will still be connected to the background plugin and its model and thus we can free/fix them there as well as set priors on them.
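As a rough, self-contained illustration of the Poisson-source/Poisson-background likelihood named above (this is an added sketch, not part of 3ML; the array names S, B, m, b and exposures t_s, t_b follow the notation of the description):
import numpy as np
from scipy.special import gammaln

def poisson_poisson_loglike(S, B, m, b, t_s, t_b):
    # joint log-likelihood of total counts S (source interval, exposure t_s)
    # and background counts B (off-source interval, exposure t_b),
    # given per-channel source rates m and background rates b
    mu_src = t_s * (m + b)
    mu_bkg = t_b * b
    return np.sum(S * np.log(mu_src) - mu_src - gammaln(S + 1.0)
                  + B * np.log(mu_bkg) - mu_bkg - gammaln(B + 1.0))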
Python Code: from threeML import * %matplotlib inline import warnings warnings.simplefilter('ignore') Explanation: Background Modeling When fitting a spectrum with a background, it is invalid to simply subtract off the background if the background is part of the data's generative model van Dyk et al. (2001). Therefore, we are often left with the task of modeling the statistical process of the background along with our source. In typical spectral modeling, we find a few common cases when background is involved. If we have total counts ($S_i$) in $i^{\rm th}$ on $N$ bins observed for an exposure of $t_{\rm s}$ and also a measurement of $B_i$ background counts from looking off source for $t_{\rm b}$ seconds, we can then suppose a model for the source rate ($m_i$) and background rate ($b_i$). Poisson source with Poisson background This is described by a likelihood of the following form: $$ L = \prod^N_{i=1} \frac{(t_{\rm s}(m_i+b_i))^{S_i} e^{-t_{\rm s}(m_i+b_i)}}{S_i!} \times \frac{(t_{\rm b} b_i)^{B_i} e^{-t_{\rm b}b_i}}{B_i!} $$ which is a Poisson likelihood for the total model ($m_i +b_i$) conditional on the Poisson distributed background observation. This is the typical case for e.g. aperture x-ray instruments that observe a source region and then a background region. Both observations are Poisson distributed. Poisson source with Gaussian background This likelihood is similar, but the conditonal background distribution is described by Gaussian: $$ L = \prod^N_{i=1} \frac{(t_{\rm s}(m_i+b_i))^{S_i} e^{-t_{\rm s}(m_i+b_i)}}{S_i!} \times \frac{1}{\sigma_{b,i}\sqrt{2 \pi}} \exp \left[ \frac{({B_i} - t_{\rm b} b_i)^2} {2 \sigma_{b,i}^2} \right] $$ where the $\sigma_{b,i}$ are the measured errors on $B_i$. This situation occurs e.g. when the background counts are estimated from a fitted model such as time-domain instruments that estimate the background counts from temporal fits to the lightcurve. In 3ML, we can fit a background model along with the the source model which allows for arbitrarily low background counts (in fact zero) in channels. The alternative is to use profile likelihoods where we first differentiate the likelihood with respect to the background model $$ \frac{ \partial L}{{\partial b_i}} = 0$$ and solve for the $b_i$ that maximize the likelihood. Both the Poisson and Gaussian background profile likelihoods are described in the XSPEC statistics guide. This implicitly yields $N$ parameters to the model thus requiring at least one background count per channel. These profile likelihoods are the default Poisson likelihoods in 3ML when a background model is not used with a SpectrumLike (and its children, DispersionSpectrumLike and OGIPLike) plugin. Let's examine how to handle both cases. End of explanation # create the simulated observation energies = np.logspace(1,4,151) low_edge = energies[:-1] high_edge = energies[1:] # get a BPL source function source_function = Broken_powerlaw(K=2,xb=300,piv=300, alpha=0., beta=-3.) # power law background function background_function = Powerlaw(K=.5,index=-1.5, piv=100.) + Gaussian(F=50,mu=511,sigma=20) spectrum_generator = SpectrumLike.from_function('fake', source_function=source_function, background_function=background_function, energy_min=low_edge, energy_max=high_edge) spectrum_generator.view_count_spectrum() Explanation: First we will create an observation where we have a simulated broken power law source spectrum along with an observed background spectrum. The background is a powerl law continuum with a Gaussian line. 
End of explanation # instance our source spectrum bpl = Broken_powerlaw(piv=300,xb=500) # instance a point source ra, dec = 0,0 ps_src = PointSource('source',ra,dec,spectral_shape=bpl) # instance the likelihood model src_model = Model(ps_src) # pass everything to a joint likelihood object jl_profile = JointLikelihood(src_model,DataList(spectrum_generator)) # fit the model _ = jl_profile.fit() # plot the fit in count space _ = spectrum_generator.display_model(step=False) Explanation: Using a profile likelihood We have very few counts counts in some channels (in fact sometimes zero), but let's assume we do not know the model for the background. In this case, we will use the profile Poisson likelihood. End of explanation spectrum_generator.rebin_on_background(1) spectrum_generator.view_count_spectrum() _ = jl_profile.fit() _ = spectrum_generator.display_model(step=False) Explanation: Our fit recovers the simulated parameters. However, we should have binned the spectrum up such that there is at least one background count per spectral bin for the profile to be valid. End of explanation # extract the background from the spectrum plugin. # This works for OGIPLike plugins as well, though we could easily also just read # in a bakcground PHA background_plugin = SpectrumLike.from_background('bkg',spectrum_generator) Explanation: Modeling the background Now let's try to model the background assuming we know that the background is a power law with a Gaussian line. We can extract a background plugin from the data by passing the original plugin to a classmethod of spectrum like. End of explanation background_plugin.view_count_spectrum() Explanation: This constructs a new plugin with only the observed background so that we can first model it. End of explanation # instance the spectrum setting the line's location to 511 bkg_spectrum = Powerlaw(piv=100) + Gaussian(F=50,mu=511) # setup model parameters # fix the line's location bkg_spectrum.mu_2.fix = True # nice parameter bounds bkg_spectrum.K_1.bounds = (1E-4, 10) bkg_spectrum.F_2.bounds = (0., 1000) bkg_spectrum.sigma_2.bounds = (2,30) ps_bkg = PointSource('bkg',0,0,spectral_shape=bkg_spectrum) bkg_model = Model(ps_bkg) jl_bkg = JointLikelihood(bkg_model,DataList(background_plugin)) _ = jl_bkg.fit() _ = background_plugin.display_model(step=False, data_color='#1A68F0', model_color='#FF9700') Explanation: We now construct our background model and fit it to the data. Let's assume we know that the line occurs at 511 keV, but we are unsure of its strength an width. We do not need to bin the data up because we are using a simple Poisson likelihood which is valid even when we have zero counts Cash (1979). End of explanation modeled_background_plugin = SpectrumLike('full', # here we use the original observation observation=spectrum_generator.observed_spectrum, # we pass the background plugin as the background! background=background_plugin) Explanation: We now have a model and estimate for the background which we can use when fitting with the source spectrum. We now create a new plugin with just the total observation and pass our background plugin as the background argument. End of explanation modeled_background_plugin.view_count_spectrum() Explanation: When we look at out count spectrum now, we will see the predicted background, rather than the measured one: End of explanation modeled_background_plugin.nuisance_parameters Explanation: Now we simply fit the spectrum as we did in the profiled case. 
The background plugin's parameters are stored in our new plugin as nuissance parameters: End of explanation # instance the source model... the background plugin has it's model already specified bpl = Broken_powerlaw(piv=300,xb=500) bpl.K.bounds = (1E-5,1E1) bpl.xb.bounds = (1E1,1E4) ps_src = PointSource('source',0,0,bpl) src_model = Model(ps_src) jl_src = JointLikelihood(src_model,DataList(modeled_background_plugin)) _ = jl_src.fit() # over plot the joint background and source fits fig = modeled_background_plugin.display_model(step=False) _ = background_plugin.display_model(data_color='#1A68F0', model_color='#FF9700',model_subplot=fig.axes,step=False) Explanation: and the fitting engine will use them in the fit. The parameters will still be connected to the background plugin and its model and thus we can free/fix them there as well as set priors on them. End of explanation
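One extra illustration of that last point (an addition; it only reuses the bkg_spectrum object and the .fix attribute already introduced above): individual background parameters can be frozen at their fitted values, or released again, around the joint fit.
# freeze the fitted line width so it is not varied in the joint fit
bkg_spectrum.sigma_2.fix = True

# ... re-run the joint fit here if desired ...

# release it again so the joint fit can adjust it
bkg_spectrum.sigma_2.fix = False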
3,654
Given the following text description, write Python code to implement the functionality described below step by step Description: Content and Objective Show different aspects when dealing with FFT Using rectangular function in time and frequency for illustration Importing and Plotting Options Step1: Define Rect and Get Spectrum
Python Code: import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline # plotting options font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=True) matplotlib.rc('figure', figsize=(18, 6) ) Explanation: Content and Objective Show different aspects when dealing with FFT Using rectangular function in time and frequency for illustration Importing and Plotting Options End of explanation # min and max time, sampling time t_min = -5.0 t_max = 5.0 t_s = 0.1 # sample time # vector of times t=np.arange(t_min, t_max + t_s, t_s ) # duration of rect and according instants in time T_rect = 2 # width of the rectangular t_rect = np.arange( - T_rect/2, T_rect / 2 + t_s, t_s ) # sample number of domain and signal M = len( t ) M_rect = len( t_rect ) # frequency axis # NOTE: resolution given by characteristics of DFT f_Nyq = 1 / ( 2*t_s ) delta_f = 1 / ( t_max-t_min ) f = np.arange( -f_Nyq, f_Nyq + delta_f, delta_f ) # rectangular function, # one signal with ones in the middle, one signal with ones at the beginning, one signal being periodical rect_midway = 0 * t rect_midway[ (M-M_rect)//2 : (M-M_rect)//2+M_rect ] = 1 rect_left = 0*t rect_left[ : M_rect] = 1 rect_periodic = 0*t rect_periodic[ : M_rect// 2 +1 ] = 1 rect_periodic[ len(t) - M_rect // 2 : ] = 1 # choose rect = rect_left rect = rect_midway rect = rect_periodic # frequency rect corresponds to time sinc RECT = np.fft.fft( rect ) RECT = RECT / np.max( np.abs(RECT) ) # plotting plt.subplot(121) plt.plot(t, rect) plt.grid(True); plt.xlabel('$t/\mathrm{s}$'); plt.ylabel('$x(t)$') plt.subplot(122) plt.plot(f, np.real( RECT ) ) plt.plot(f, np.imag( RECT ) ) plt.grid(True); plt.xlabel('$f/\mathrm{Hz}$'); plt.ylabel('$X(f)$') Explanation: Define Rect and Get Spectrum End of explanation
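One practical remark on the cell above (an added note): np.fft.fft returns the spectrum in "standard" order, with the DC component first and the negative frequencies in the upper half, while the frequency axis f runs from -f_Nyq to +f_Nyq. If the ordering matters for interpretation, a shifted copy can be plotted instead, reusing the rect and f defined above:
# reorder the FFT output so that it lines up with the centered frequency axis
RECT_shifted = np.fft.fftshift( np.fft.fft( rect ) )
RECT_shifted = RECT_shifted / np.max( np.abs( RECT_shifted ) )

plt.plot( f, np.real( RECT_shifted ) )
plt.plot( f, np.imag( RECT_shifted ) )
plt.grid(True); plt.xlabel('$f/\mathrm{Hz}$'); plt.ylabel('$X(f)$')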
3,655
Given the following text description, write Python code to implement the functionality described below step by step Description: Jupyter Notebooks Advanced Features <div class="alert bg-primary">PYNQ notebook front end allows interactive coding, output visualizations and documentation using text, equations, images, video and other rich media.</div> <div class="alert bg-primary">Code, analysis, debug, documentation and demos are all alive, editable and connected in the Notebooks.</div> ## Contents Live, Interactive Cell for Python Coding Guess that Number Generate Fibonacci numbers Plotting Output Interactive input and output analysis Interactive debug Rich Output Media Display Images Render SVG images Audio Playback Add Video Add webpages as Interactive Frames Render Latex Interactive Plots and Visualization Matplotlib Notebooks are not just for Python Access to linux shell commands Shell commands in python code Python variables in shell commands Magics Timing code using magics Coding other languages Contents Live, Interactive Python Coding Guess that number game Run the cell to play Cell can be run by selecting the cell and pressing Shift+Enter Step1: Contents Generate Fibonacci numbers Step2: Contents Plotting Fibonacci numbers Plotting is done using the matplotlib library Step3: Contents Interactive input and output analysis Input and output interaction can be achieved using Ipython widgets Step4: Contents Interactive debug Uses set_trace from the Ipython debugger library Type 'h' in debug prompt for the debug commands list and 'q' to exit Step5: Contents Rich Output Media Display images Images can be displayed using combination of HTML, Markdown, PNG, JPG, etc. Image below is displayed in a markdown cell which is rendered at startup. Contents Render SVG images SVG image is rendered in a code cell using Ipython display library. Step6: Contents Audio Playback IPython.display.Audio lets you play audio directly in the notebook Step7: Contents Add Video IPython.display.YouTubeVideo lets you play Youtube video directly in the notebook. Library support is available to play Vimeo and local videos as well Step8: Video Link with image display <a href="https Step9: Contents Render Latex Display of mathematical expressions typeset in LaTeX for documentation. Step10: Contents Interactive Plots and Visualization Plotting and Visualization can be achieved using various available python libraries such as Matplotlib, Bokeh, Seaborn, etc. Below is shown a Iframe of the Matplotlib website. Navigate to 'gallery' and choose a plot to run in the notebook Step15: Contents Matplotlib Below we run the code available under examples --> Matplotlib API --> Radar_chart in the above webpage Link to Radar chart Step16: Contents Notebooks are not just for Python Access to linux shell commands <div class="alert alert-info">Starting a code cell with a bang character, e.g. `!`, instructs jupyter to treat the code on that line as an OS shell command</div> System Information Step17: Verify Linux Version Step18: CPU speed calculation made by the Linux kernel Step19: Available DRAM Step20: Network connection Step21: Directory Information Step22: Contents Shell commands in python code Step23: Python variables in shell commands By enclosing a Python expression within {}, i.e. curly braces, we can substitute it into shell commands Step24: Contents Magics IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. 
Line magics are prefixed with the % character and work much like OS command-line calls Step25: Contents Timing code using magics The following examples show how to call the built-in%time magic %time times the execution of a single statement Reference Step26: Time the sorting of a pre-sorted list The list 'L' which was sorted in previous cell is re-sorted to observe execution time, it is much less as expected Step27: Contents Coding other languages If you want to, you can combine code from multiple kernels into one notebook. Just use IPython Magics with the name of your kernel at the start of each cell that you want to use that Kernel for
Python Code: import random the_number = random.randint(0, 10) guess = -1 name = input('Player what is your name? ') while guess != the_number: guess_text = input('Guess a number between 0 and 10: ') guess = int(guess_text) if guess < the_number: print(f'Sorry {name}, your guess of {guess} was too LOW.\n') elif guess > the_number: print(f'Sorry {name}, your guess of {guess} was too HIGH.\n') else: print(f'Excellent work {name}, you won, it was {guess}!\n') print('Done') Explanation: Jupyter Notebooks Advanced Features <div class="alert bg-primary">PYNQ notebook front end allows interactive coding, output visualizations and documentation using text, equations, images, video and other rich media.</div> <div class="alert bg-primary">Code, analysis, debug, documentation and demos are all alive, editable and connected in the Notebooks.</div> ## Contents Live, Interactive Cell for Python Coding Guess that Number Generate Fibonacci numbers Plotting Output Interactive input and output analysis Interactive debug Rich Output Media Display Images Render SVG images Audio Playback Add Video Add webpages as Interactive Frames Render Latex Interactive Plots and Visualization Matplotlib Notebooks are not just for Python Access to linux shell commands Shell commands in python code Python variables in shell commands Magics Timing code using magics Coding other languages Contents Live, Interactive Python Coding Guess that number game Run the cell to play Cell can be run by selecting the cell and pressing Shift+Enter End of explanation def generate_fibonacci_list(limit, output=False): nums = [] current, ne_xt = 0, 1 while current < limit: current, ne_xt = ne_xt, ne_xt + current nums.append(current) if output == True: print(f'{len(nums[:-1])} Fibonacci numbers below the number ' f'{limit} are:\n{nums[:-1]}') return nums[:-1] limit = 1000 fib = generate_fibonacci_list(limit, True) Explanation: Contents Generate Fibonacci numbers End of explanation %matplotlib inline import matplotlib.pyplot as plt from ipywidgets import * limit = 1000000 fib = generate_fibonacci_list(limit) plt.plot(fib) plt.plot(range(len(fib)), fib, 'ro') plt.show() Explanation: Contents Plotting Fibonacci numbers Plotting is done using the matplotlib library End of explanation %matplotlib inline import matplotlib.pyplot as plt from ipywidgets import * def update(limit, print_output): i = generate_fibonacci_list(limit, print_output) plt.plot(range(len(i)), i) plt.plot(range(len(i)), i, 'ro') plt.show() limit=widgets.IntSlider(min=10,max=1000000,step=1,value=10) interact(update, limit=limit, print_output=False); Explanation: Contents Interactive input and output analysis Input and output interaction can be achieved using Ipython widgets End of explanation from IPython.core.debugger import set_trace def debug_fibonacci_list(limit): nums = [] current, ne_xt = 0, 1 while current < limit: if current > 1000: set_trace() current, ne_xt = ne_xt, ne_xt + current nums.append(current) print(f'The fibonacci numbers below the number {limit} are:\n{nums[:-1]}') debug_fibonacci_list(10000) Explanation: Contents Interactive debug Uses set_trace from the Ipython debugger library Type 'h' in debug prompt for the debug commands list and 'q' to exit End of explanation from IPython.display import SVG SVG(filename='images/python.svg') Explanation: Contents Rich Output Media Display images Images can be displayed using combination of HTML, Markdown, PNG, JPG, etc. Image below is displayed in a markdown cell which is rendered at startup. 
Contents Render SVG images SVG image is rendered in a code cell using Ipython display library. End of explanation import numpy as np from IPython.display import Audio framerate = 44100 t = np.linspace(0,5,framerate*5) data = np.sin(2*np.pi*220*t**2) Audio(data,rate=framerate) Explanation: Contents Audio Playback IPython.display.Audio lets you play audio directly in the notebook End of explanation from IPython.display import YouTubeVideo YouTubeVideo('K5okTyjKr5U') Explanation: Contents Add Video IPython.display.YouTubeVideo lets you play Youtube video directly in the notebook. Library support is available to play Vimeo and local videos as well End of explanation from IPython.display import IFrame IFrame('https://pynq.readthedocs.io/en/latest/getting_started.html', width='100%', height=500) Explanation: Video Link with image display <a href="https://www.youtube.com/watch?v=K5okTyjKr5U"> <img src="http://img.youtube.com/vi/K5okTyjKr5U/0.jpg" width="400" height="400" align="left"></a> Contents Add webpages as Interactive Frames Embed an entire page from another site in an iframe; for example this is the PYNQ documentation page on readthedocs End of explanation %%latex \begin{align} P(Y=i|x, W,b) = softmax_i(W x + b)= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}\end{align} Explanation: Contents Render Latex Display of mathematical expressions typeset in LaTeX for documentation. End of explanation from IPython.display import IFrame IFrame('https://matplotlib.org/gallery/index.html', width='100%', height=500) Explanation: Contents Interactive Plots and Visualization Plotting and Visualization can be achieved using various available python libraries such as Matplotlib, Bokeh, Seaborn, etc. Below is shown a Iframe of the Matplotlib website. Navigate to 'gallery' and choose a plot to run in the notebook End of explanation import numpy as np import matplotlib.pyplot as plt from matplotlib.path import Path from matplotlib.spines import Spine from matplotlib.projections.polar import PolarAxes from matplotlib.projections import register_projection def radar_factory(num_vars, frame='circle'): Create a radar chart with `num_vars` axes. This function creates a RadarAxes projection and registers it. Parameters ---------- num_vars : int Number of variables for radar chart. frame : {'circle' | 'polygon'} Shape of frame surrounding axes. 
# calculate evenly-spaced axis angles theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False) def draw_poly_patch(self): # rotate theta such that the first axis is at the top verts = unit_poly_verts(theta + np.pi / 2) return plt.Polygon(verts, closed=True, edgecolor='k') def draw_circle_patch(self): # unit circle centered on (0.5, 0.5) return plt.Circle((0.5, 0.5), 0.5) patch_dict = {'polygon': draw_poly_patch, 'circle': draw_circle_patch} if frame not in patch_dict: raise ValueError('unknown value for `frame`: %s' % frame) class RadarAxes(PolarAxes): name = 'radar' # use 1 line segment to connect specified points RESOLUTION = 1 # define draw_frame method draw_patch = patch_dict[frame] def __init__(self, *args, **kwargs): super(RadarAxes, self).__init__(*args, **kwargs) # rotate plot such that the first axis is at the top self.set_theta_zero_location('N') def fill(self, *args, **kwargs): Override fill so that line is closed by default closed = kwargs.pop('closed', True) return super(RadarAxes, self).fill(closed=closed, *args, **kwargs) def plot(self, *args, **kwargs): Override plot so that line is closed by default lines = super(RadarAxes, self).plot(*args, **kwargs) for line in lines: self._close_line(line) def _close_line(self, line): x, y = line.get_data() # FIXME: markers at x[0], y[0] get doubled-up if x[0] != x[-1]: x = np.concatenate((x, [x[0]])) y = np.concatenate((y, [y[0]])) line.set_data(x, y) def set_varlabels(self, labels): self.set_thetagrids(np.degrees(theta), labels) def _gen_axes_patch(self): return self.draw_patch() def _gen_axes_spines(self): if frame == 'circle': return PolarAxes._gen_axes_spines(self) # The following is a hack to get the spines (i.e. the axes frame) # to draw correctly for a polygon frame. # spine_type must be 'left', 'right', 'top', 'bottom', or `circle`. spine_type = 'circle' verts = unit_poly_verts(theta + np.pi / 2) # close off polygon by repeating first vertex verts.append(verts[0]) path = Path(verts) spine = Spine(self, spine_type, path) spine.set_transform(self.transAxes) return {'polar': spine} register_projection(RadarAxes) return theta def unit_poly_verts(theta): Return vertices of polygon for subplot axes. This polygon is circumscribed by a unit circle centered at (0.5, 0.5) x0, y0, r = [0.5] * 3 verts = [(r*np.cos(t) + x0, r*np.sin(t) + y0) for t in theta] return verts def example_data(): # The following data is from the Denver Aerosol Sources and Health study. # See doi:10.1016/j.atmosenv.2008.12.017 # # The data are pollution source profile estimates for five modeled # pollution sources (e.g., cars, wood-burning, etc) that emit 7-9 chemical # species. The radar charts are experimented with here to see if we can # nicely visualize how the modeled source profiles change across four # scenarios: # 1) No gas-phase species present, just seven particulate counts on # Sulfate # Nitrate # Elemental Carbon (EC) # Organic Carbon fraction 1 (OC) # Organic Carbon fraction 2 (OC2) # Organic Carbon fraction 3 (OC3) # Pyrolized Organic Carbon (OP) # 2)Inclusion of gas-phase specie carbon monoxide (CO) # 3)Inclusion of gas-phase specie ozone (O3). # 4)Inclusion of both gas-phase species is present... 
data = [ ['Sulfate', 'Nitrate', 'EC', 'OC1', 'OC2', 'OC3', 'OP', 'CO', 'O3'], ('Basecase', [ [0.88, 0.01, 0.03, 0.03, 0.00, 0.06, 0.01, 0.00, 0.00], [0.07, 0.95, 0.04, 0.05, 0.00, 0.02, 0.01, 0.00, 0.00], [0.01, 0.02, 0.85, 0.19, 0.05, 0.10, 0.00, 0.00, 0.00], [0.02, 0.01, 0.07, 0.01, 0.21, 0.12, 0.98, 0.00, 0.00], [0.01, 0.01, 0.02, 0.71, 0.74, 0.70, 0.00, 0.00, 0.00]]), ('With CO', [ [0.88, 0.02, 0.02, 0.02, 0.00, 0.05, 0.00, 0.05, 0.00], [0.08, 0.94, 0.04, 0.02, 0.00, 0.01, 0.12, 0.04, 0.00], [0.01, 0.01, 0.79, 0.10, 0.00, 0.05, 0.00, 0.31, 0.00], [0.00, 0.02, 0.03, 0.38, 0.31, 0.31, 0.00, 0.59, 0.00], [0.02, 0.02, 0.11, 0.47, 0.69, 0.58, 0.88, 0.00, 0.00]]), ('With O3', [ [0.89, 0.01, 0.07, 0.00, 0.00, 0.05, 0.00, 0.00, 0.03], [0.07, 0.95, 0.05, 0.04, 0.00, 0.02, 0.12, 0.00, 0.00], [0.01, 0.02, 0.86, 0.27, 0.16, 0.19, 0.00, 0.00, 0.00], [0.01, 0.03, 0.00, 0.32, 0.29, 0.27, 0.00, 0.00, 0.95], [0.02, 0.00, 0.03, 0.37, 0.56, 0.47, 0.87, 0.00, 0.00]]), ('CO & O3', [ [0.87, 0.01, 0.08, 0.00, 0.00, 0.04, 0.00, 0.00, 0.01], [0.09, 0.95, 0.02, 0.03, 0.00, 0.01, 0.13, 0.06, 0.00], [0.01, 0.02, 0.71, 0.24, 0.13, 0.16, 0.00, 0.50, 0.00], [0.01, 0.03, 0.00, 0.28, 0.24, 0.23, 0.00, 0.44, 0.88], [0.02, 0.00, 0.18, 0.45, 0.64, 0.55, 0.86, 0.00, 0.16]]) ] return data if __name__ == '__main__': N = 9 theta = radar_factory(N, frame='polygon') data = example_data() spoke_labels = data.pop(0) fig, axes = plt.subplots(figsize=(9, 9), nrows=2, ncols=2, subplot_kw=dict(projection='radar')) fig.subplots_adjust(wspace=0.25, hspace=0.20, top=0.85, bottom=0.05) colors = ['b', 'r', 'g', 'm', 'y'] # Plot the four cases from the example data on separate axes for ax, (title, case_data) in zip(axes.flatten(), data): ax.set_rgrids([0.2, 0.4, 0.6, 0.8]) ax.set_title(title, weight='bold', size='medium', position=(0.5, 1.1), horizontalalignment='center', verticalalignment='center') for d, color in zip(case_data, colors): ax.plot(theta, d, color=color) ax.fill(theta, d, facecolor=color, alpha=0.25) ax.set_varlabels(spoke_labels) # add legend relative to top-left plot ax = axes[0, 0] labels = ('Factor 1', 'Factor 2', 'Factor 3', 'Factor 4', 'Factor 5') legend = ax.legend(labels, loc=(0.9, .95), labelspacing=0.1, fontsize='small') fig.text(0.5, 0.965, '5-Factor Solution Profiles Across Four Scenarios', horizontalalignment='center', color='black', weight='bold', size='large') plt.show() Explanation: Contents Matplotlib Below we run the code available under examples --> Matplotlib API --> Radar_chart in the above webpage Link to Radar chart End of explanation !cat /proc/cpuinfo Explanation: Contents Notebooks are not just for Python Access to linux shell commands <div class="alert alert-info">Starting a code cell with a bang character, e.g. 
`!`, instructs jupyter to treat the code on that line as an OS shell command</div> System Information End of explanation !cat /etc/os-release | grep VERSION Explanation: Verify Linux Version End of explanation !head -5 /proc/cpuinfo | grep "BogoMIPS" Explanation: CPU speed calculation made by the Linux kernel End of explanation !cat /proc/meminfo | grep 'Mem*' Explanation: Available DRAM End of explanation !ifconfig Explanation: Network connection End of explanation !pwd !echo -------------------------------------------- !ls -C --color Explanation: Directory Information End of explanation files = !ls | head -3 print(files) Explanation: Contents Shell commands in python code End of explanation shell_nbs = '*.ipynb | grep "ipynb"' !ls {shell_nbs} Explanation: Python variables in shell commands By enclosing a Python expression within {}, i.e. curly braces, we can substitute it into shell commands End of explanation %lsmagic Explanation: Contents Magics IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument. To learn more about the IPython magics, simple type %magic in a separate cell Below is a list of available magics End of explanation import random L = [random.random() for _ in range(100000)] %time L.sort() Explanation: Contents Timing code using magics The following examples show how to call the built-in%time magic %time times the execution of a single statement Reference: The next two code cells are excerpted from the Python Data Science Handbook by Jake VanderPlas Link to full handbook Time the sorting on an unsorted list A list of 100000 random numbers is sorted and stored in a variable 'L' End of explanation %time L.sort() Explanation: Time the sorting of a pre-sorted list The list 'L' which was sorted in previous cell is re-sorted to observe execution time, it is much less as expected End of explanation %%bash factorial() { if [ "$1" -gt "1" ] then i=`expr $1 - 1` j=`factorial $i` k=`expr $1 \* $j` echo $k else echo 1 fi } input=5 val=$(factorial $input) echo "Factorial of $input is : "$val Explanation: Contents Coding other languages If you want to, you can combine code from multiple kernels into one notebook. Just use IPython Magics with the name of your kernel at the start of each cell that you want to use that Kernel for: %%bash %%HTML %%python2 %%python3 %%ruby %%perl End of explanation
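A small optional addition on timing (using only built-in IPython magics): %timeit repeats a statement many times and reports an averaged result, which is usually more stable than a single %time measurement; the cell-magic form %%timeit does the same for an entire cell.
# averaged timing of a single statement (list L defined in the cells above)
%timeit sorted(L)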
3,656
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: CMCC Source ID: SANDBOX-2 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:50 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
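# Illustrative sketch only (not part of the generated template): how the
# authorship/status calls above are typically filled in. Names and emails are
# placeholders, not real CMCC document authors.
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Smith", "john.smith@example.org")
# DOC.set_publication_status(0)   # keep 0 (do not publish) until reviewed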
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
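# Minimal sketch of how the property kinds above are filled in (free-text
# STRING, single-valued ENUM, multi-valued ENUM). Values are illustrative picks
# from the listed vocabularies for a generic NEMO-family ocean, not documented
# CMCC SANDBOX-2 settings.
# DOC.set_value("NEMO 3.6")               # 1.2 model_name (STRING)
# DOC.set_value("OGCM")                   # 1.3 model_family (ENUM, cardinality 1.1)
# DOC.set_value("Primitive equations")    # 1.4 basic_approximations (ENUM, 1.N)
# DOC.set_value("Boussinesq")             #     call set_value once per selected item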
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
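# Sketch of the numeric/boolean properties above: FLOAT and BOOLEAN fields take
# raw Python values rather than strings. The numbers are textbook seawater
# constants used purely as placeholders, not verified CMCC values.
# DOC.set_value(3991.87)   # 2.6 ocean_specific_heat in J/(kg K)
# DOC.set_value(1026.0)    # 2.7 ocean_reference_density in kg/m3
# DOC.set_value(True)      # 3.2 bathymetry fixed in time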
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
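# Illustrative free-text entries for the resolution block above, assuming a
# hypothetical ORCA1-like one-degree grid; replace with the model's real grid.
# DOC.set_value("ORCA1")                       # 6.1 name
# DOC.set_value("1 degree")                    # 6.2 canonical_horizontal_resolution
# DOC.set_value("0.3 (Equator) - 1 degree")    # 6.3 range_horizontal_resolution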
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
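# Continuing the hypothetical one-degree example (assumed values, not CMCC
# documentation): INTEGER, BOOLEAN and FLOAT properties take raw numbers.
# DOC.set_value(362 * 292)   # 6.4 number_of_horizontal_gridpoints (362 x 292 grid)
# DOC.set_value(50)          # 6.5 number_of_vertical_levels
# DOC.set_value(False)       # 6.6 is_adaptive_grid
# DOC.set_value(1.0)         # 6.7 thickness_level_1 in metres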
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
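# Example ENUM selections for the lateral physics momentum operator (22.x),
# picked from the listed vocabularies to show a common Laplacian/horizontal
# setup; these are illustrative assumptions, not asserted CMCC choices.
# DOC.set_value("Horizontal")     # 22.1 direction
# DOC.set_value("Harmonic")       # 22.2 order (i.e. a Laplacian operator)
# DOC.set_value("Second order")   # 22.3 discretisation (cell set just below)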
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
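# Illustrative entries for the eddy viscosity block above (23.x), assuming a
# simple constant-coefficient configuration; the numbers are placeholders only.
# DOC.set_value("Constant")                                # 23.1 type
# DOC.set_value(20000)                                     # 23.2 constant_coefficient in m2/s
# DOC.set_value("None; a single constant value is used")   # 23.4 coeff_background
# DOC.set_value(False)                                     # 23.5 coeff_backscatter
# DOC.set_value(True)                                      # 24.1 mesoscale_closure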
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
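# Sketch of a typical Gent-McWilliams style record for the eddy induced
# velocity block above (27.x); values are illustrative assumptions only.
# DOC.set_value("GM")         # 27.1 type
# DOC.set_value(1000)         # 27.2 constant_val in m2/s (only if constant)
# DOC.set_value("skew")       # 27.3 flux_type (advective or skew)
# DOC.set_value("constant")   # 27.4 added_diffusivity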
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
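# Example of how a TKE-style boundary layer mixing choice might be recorded in
# the cells above (29.1, 30.x, 31.1); all values are illustrative placeholders.
# DOC.set_value(False)                       # 29.1 langmuir_cells_mixing
# DOC.set_value("Turbulent closure - TKE")   # 30.1 tracers type
# DOC.set_value("1.2e-5 m2/s constant background diffusivity")   # 30.4 background
# DOC.set_value("Turbulent closure - TKE")   # 31.1 momentum type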
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
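# Illustrative answers for the interior mixing details above (32.x), describing
# a common enhanced-vertical-diffusion setup; not verified CMCC choices.
# DOC.set_value("Enhanced vertical diffusion")   # 32.1 convection_type
# DOC.set_value("None")                          # 32.2 tide_induced_mixing
# DOC.set_value(True)                            # 32.3 double_diffusion
# DOC.set_value(True)                            # 32.4 shear_mixing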
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
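Since every property above follows the same DOC.set_id / DOC.set_value pattern, one possible shortcut is to drive several properties from a single dictionary. This is only a sketch: it assumes the DOC object tolerates repeated set_id/set_value calls inside one cell (an assumption, not something stated above), and the example values are placeholders picked from the listed valid choices, not a real model description.
# Sketch: fill several properties from one dictionary.
# Property paths are copied from the cells above; the values are placeholders.
proposed_values = {
    'cmip6.ocean.boundary_forcing.momentum.bottom_friction.type': "Non-linear",
    'cmip6.ocean.boundary_forcing.momentum.lateral_friction.type': "No-slip",
    'cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme': "2 extinction depth",
}
for property_id, value in proposed_values.items():
    DOC.set_id(property_id)    # assumes set_id can be re-pointed repeatedly
    DOC.set_value(value)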
3,657
Given the following text description, write Python code to implement the functionality described below step by step Description: Step7: 4T_Pandas로 배우는 SQL 시작하기 (4) - HAVING, SUB QUERY SQL => 연산의 결과로 나온 데이터를 다시 Filtering ( HAVING ) SUB QUERY + TEMPORARY TABLE ( 임시 테이블 ) 실습) "5월 달에" / "지금까지" 렌탈 횟수가 30회 이상인 유저 유저이름과 유저 이메일 => 마케팅 customer, rental Pandas Step9: HAVING Step10: 실습) 제일 많이 매출을 발생시킨 "영화 제목" payment, film, inventory, rental 영화별 ( GROUP BY ) 매출 ( SUM )을 뽑아서 정렬하자 ( ORDER BY ) 데이터가 어디에 들어있는가? Step13: film_df => film_id, title inventory_df => inventory_id, film_id rental_df => rental_id, inventory_id payment_df => rental_id, amount Step15: 실습 추가) 결제 누적액이 많은 유저 상위 10명 ( customer, payment ) Step17: 영화를 흥행시킨 ( 매출이 많이 발생한 ) 배우 상위 10명 rental, payment ... actor_df => actor_id, first_name, last_name film_actor_df => actor_id, film_id inventory_df => inventory_id, film_id rental_df => rental_id, inventory_id payment_df => rental_id, amount
Python Code: import pymysql import curl db = pymysql.connect( "db.fastcamp.us", "root", "dkstncks", "sakila", charset = "utf8", ) customer_df = pd.read_sql("SELECT * FROM customer;", db) rental_df = pd.read_sql("SELECT * FROM rental;", db) df = rental_df.merge(customer_df, on="customer_id") df.head(1) rental_per_customer_groups = df.groupby("customer_id") rental_per_customer_groups.get_group(1) rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"] # rental_per_customer_groups.agg({"customer_id": np.size})["customer_id"] > 30 is_many_rentals_user = rental_per_customer_groups.size() > 30 is_many_rentals_user #여기서는 어떤 문제로 되지 않는다. 다음에 알려주겠다. # SQL로 하겠다 # 1. Sub Query - Query 안에 Query 가 들어있다. SQL_QUERY = SELECT r.customer_id, COUNT(*) rentals FROM rental r JOIN customer c ON r.customer_id = c.customer_id GROUP BY r.customer_id WHERE rentals > 30 pd.read_sql(SQL_QUERY, db) #순서가 FROM -> group by -> where(=>rental이 없다.) -> select에서 카운트가 마지막이라서 # 1. Sub Query = Query 안에 Query가 들어있다. SQL_QUERY = SELECT rentals_per_customer.customer_id "Customer ID", rentals_per_customer.rentals FROM ( SELECT r.customer_id, COUNT(*) rentals FROM rental r JOIN customer c ON r.customer_id = c.customer_id GROUP BY r.customer_id ) AS rentals_per_customer WHERE rentals > 30 ; pd.read_sql(SQL_QUERY, db) # sub query스럽지 않아서 나눠 쓰면 보기가 좋다. RENTALS_PER_CUSTOMER_SQL_QUERY = SELECT r.customer_id, COUNT(*) rentals FROM rental r JOIN customer c ON r.customer_id = c.customer_id GROUP BY r.customer_id ; SQL_QUERY = SELECT * FROM ( {rentals_per_customer_sql_query} ) AS rentals_per_customer WHERE rentals > 30 ; .format( rentals_per_customer_sql_query=RENTALS_PER_CUSTOMER_SQL_QUERY.replace(";", "") ) pd.read_sql(SQL_QUERY, db) print(SQL_QUERY) # 30번 이상인 애들의 => 이름, 이메일 RESULT_SQL_QUERY = SELECT customer.last_name, customer.first_name, customer.email FROM ({SQL_QUERY}) many_rental_user JOIN customer ON many_rental_user.customer_id = customer.customer_id ; .format( SQL_QUERY=SQL_QUERY.replace(";", "") ) pd.read_sql(RESULT_SQL_QUERY, db) # Temporary Table ( 임시 테이블 ) SQL_QUERY = DROP TEMPORARY TABLE IF EXISTS rentals_per_customer; CREATE TEMPORARY TABLE rentals_per_customer SELECT r.customer_id, COUNT(*) rentals FROM rental r JOIN customer c ON r.customer_id = c.customer_id GROUP BY r.customer_id ; # pd.read_sql() => 이걸로 실행시키면 오류가 난다. 그래서 cursor로 실행 cursor = db.cursor() cursor.execute(SQL_QUERY) SQL_QUERY = SELECT rpc.customer_id, rpc.rentals FROM rentals_per_customer rpc WHERE rentals > 30 ; pd.read_sql(SQL_QUERY, db) Explanation: 4T_Pandas로 배우는 SQL 시작하기 (4) - HAVING, SUB QUERY SQL => 연산의 결과로 나온 데이터를 다시 Filtering ( HAVING ) SUB QUERY + TEMPORARY TABLE ( 임시 테이블 ) 실습) "5월 달에" / "지금까지" 렌탈 횟수가 30회 이상인 유저 유저이름과 유저 이메일 => 마케팅 customer, rental Pandas End of explanation SQL_QUERY = SELECT r.customer_id, COUNT(*) rentals FROM rental r JOIN customer c ON r.customer_id = c.customer_id GROUP BY r.customer_id # WHERE rentals > 30 #연산에 대한 결과로 Filtering을 할 수 없다. # 연산에 대한 결과로 Filtering을 할 수 있는 기능 HAVING rentals > 30 pd.read_sql(SQL_QUERY, db) Explanation: HAVING End of explanation db = pymysql.connect( "db.fastcamp.us", "root", "dkstncks", "sakila", charset="utf8" ) film_df = pd.read_sql("SELECT * FROM film;", db) rental_df = pd.read_sql("SELECT * FROM rental;", db) payment_df = pd.read_sql("SELECT * FROM payment;", db) inventory_df = pd.read_sql("SELECT * FROM inventory;", db) Explanation: 실습) 제일 많이 매출을 발생시킨 "영화 제목" payment, film, inventory, rental 영화별 ( GROUP BY ) 매출 ( SUM )을 뽑아서 정렬하자 ( ORDER BY ) 데이터가 어디에 들어있는가? 
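As a cross-check of the HAVING query above, the same filter can be written in pandas. This is a sketch under the assumption that the df and customer_df frames built in the earlier cells are still in memory; the column names (customer_id, first_name, last_name, email) are taken from the SQL above.
# Pandas equivalent of GROUP BY ... HAVING rentals > 30 (sketch)
rental_counts = df.groupby("customer_id").size()      # rentals per customer
heavy_renters = rental_counts[rental_counts > 30]     # the HAVING step
customer_df[customer_df["customer_id"].isin(heavy_renters.index)][
    ["first_name", "last_name", "email"]
]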
End of explanation SQL_QUERY = SELECT f.film_id, f.title, SUM(p.amount) "revenue" FROM film f, rental r, payment p, inventory i WHERE f.film_id = i.film_id AND i.inventory_id = r.inventory_id AND r.rental_id = p.rental_id GROUP BY f.film_id ORDER BY revenue DESC ; pd.read_sql(SQL_QUERY, db) SQL_QUERY = SELECT f.film_id, f.title, SUM(p.amount) "revenue" FROM payment p JOIN rental r ON p.rental_id = r.rental_id JOIN inventory i ON i.inventory_id = r.inventory_id JOIN film f ON f.film_id = i.film_id GROUP BY f.film_id ORDER BY revenue DESC ; pd.read_sql(SQL_QUERY, db) Explanation: film_df => film_id, title inventory_df => inventory_id, film_id rental_df => rental_id, inventory_id payment_df => rental_id, amount End of explanation customer_df = pd.read_sql("SELECT * FROM customer;", db) customer_df.head(1) payment_df.head(1) SQL_QUERY = SELECT c.first_name, c.last_name, SUM(p.amount) "revenue" FROM customer c JOIN payment p ON c.customer_id = p.customer_id GROUP BY c.customer_id ORDER BY revenue DESC ; pd.read_sql(SQL_QUERY, db) Explanation: 실습 추가) 결제 누적액이 많은 유저 상위 10명 ( customer, payment ) End of explanation SQL_QUERY = SELECT a.first_name, a.last_name, SUM(p.amount) "revenue" FROM actor a, film_actor fa, inventory i, rental r, payment p WHERE a.actor_id = fa.actor_id AND fa.film_id = i.film_id AND i.inventory_id = r.inventory_id AND r.rental_id = p.rental_id GROUP BY a.actor_id ORDER BY revenue DESC ; pd.read_sql(SQL_QUERY, db) Explanation: 영화를 흥행시킨 ( 매출이 많이 발생한 ) 배우 상위 10명 rental, payment ... actor_df => actor_id, first_name, last_name film_actor_df => actor_id, film_id inventory_df => inventory_id, film_id rental_df => rental_id, inventory_id payment_df => rental_id, amount End of explanation
3,658
Given the following text description, write Python code to implement the functionality described below step by step Description: Тематическая модель Постнауки Peer Review (optional) В этом задании мы применим аппарат тематического моделирования к коллекции текстовых записей видеолекций, скачанных с сайта Постнаука. Мы будем визуализировать модель и создавать прототип тематического навигатора по коллекции. В коллекции 1728 документов, размер словаря - 38467 слов. Слова лемматизированы, то есть приведены к начальной форме, с помощью программы mystem, коллекция сохранена в формате vowpal wabbit. В каждой строке до первой черты записана информация о документе (ссылка на страницу с лекцией), после первой черты следует описание документа. Используются две модальности - текстовая ("text") и модальность авторов ("author"); у каждого документа один автор. Для выполнения задания понадобится библиотека BigARTM. В демонстрации показан пример использования библиотеки версии 0.7.4, на сайте предлагается скачивать версию 0.8.0. В новой версии изменены принципы работы со словарями Step1: Считывание данных Создайте объект класса artm.BatchVectorizer, который будет ссылаться на директорию с пакетами данных (батчами). Чтобы библиотека могла преобразовать текстовый файл в батчи, создайте пустую директорию и укажите ее название в параметре target_folder. Размер батча для небольших коллекций (как наша) не важен, вы можете указать любой. Step2: Инициализация модели Создайте объект класса artm.Model с 30 темами, именами тем, указанными ниже и единичными весами обеих модальностей. Количество тем выбрано не очень большим, чтобы вам было удобнее работать с темами. На этой коллекции можно строить и большее число тем, тогда они будут более узко специализированы. Step3: Мы будем строить 29 предметных тем и одну фоновую. Соберите словарь с помощью метода gather_dictionary и инициализируйте модель, указав random_seed=1. Обязательно укажите свое название словаря, оно понадобится при добавлении регуляризаторов. Step4: Добавление score Создайте два измерителя качества artm.TopTokensScore - по одному для каждой модальности; количество токенов 15. Названия для score придумайте самостоятельно. Step5: Построение модели Мы будем строить модель в два этапа Step6: Выполните 30 итераций по коллекции (num_collection_passes), количество внутренних итераций установите равным 1. Используйте метод fit_offline модели. Step7: Добавьте разреживающий регуляризатор с коэффициентом tau=-1e5, указав название своего словаря, модальность текста в class_ids и все темы "sbjX" в topic_names. Step8: Выполните еще 15 проходов по коллекции. Step9: Интерпретация тем Используя созданные score, выведите топы слов и топы авторов в темах. Удобнее всего выводить топ слов каждой темы с новой строки, указывая название темы в начале строки, и аналогично с авторами. Step10: В последней теме "bcg" должны находиться общеупотребительные слова. Важный шаг в работе с тематической моделью, когда речь идет о визуализации или создании тематического навигатора, это именование тем. Понять, о чем каждая тема, можно по списку ее топовых слов. Например, тему частица взаимодействие физика кварк симметрия элементарный нейтрино стандартный материя протон бозон заряд масса ускоритель слабый можно назвать "Физика элементарных частиц". Дайте названия 29 предметным темам. Если вы не знаете, как назвать тему, назовите ее первым встретившимся в ней существительным, хотя при таком подходе навигатор будет менее информативным. 
Из названий тем составьте список из 29 строк и запишите го в переменную sbj_topic_labels. В переменной topic_labels будут храниться названия всех тем, включая фоновую. Step11: Анализ тем Далее мы будем работать с распределениями тем в документах (матрица $\Theta$) и авторов в темах (одна из двух матриц $\Phi$, соответствующая модальности авторов). Создайте переменные, содержащие две этих матрицы, с помощью методов get_phi и get_theta модели. Назовите переменные theta и phi_a. Выведите формы обеих матриц, чтобы понять, по каким осям стоят темы. Step12: Визуализируем фрагмент матрицы $\Theta$ - первые 100 документов (это наиболее простой способ визуально оценить, как темы распределяются в документах). С помощью метода seaborn.heatmap выведите фрагмент theta как изображение. Рекомендация Step13: Вы должны увидеть, что фоновая тема имеет большую вероятность в почти каждом документе, и это логично. Кроме того, есть еще одна тема, которая чаще других встречается в документах. Судя по всему, это тема содержит много слов по науку в целом, а каждый документ (видео) в нашей коллекции связан с наукой. Можно (необязательно) дать этой теме название "Наука". Помимо этих двух тем, фоновой и общенаучной, каждый документ характеризуется малым числом других тем. Оценим $p(t)$ - долю каждой темы во всей коллекции. По формуле полной вероятности вычислять эти величины нужно как $p(t) = \sum_d p(t|d) p(d)$. Согласно вероятностной модели, $p(d)$ пропорционально длине документа d. Поступим проще Step14: Найдите 5 самых распространенных и 3 наименее освещенных темы в коллекции (наибольшие и наименьшие $p(t)$ соответственно), не считая фоновую и общенаучную. Укажите названия, которые вы дали этим темам. Визуализируйте матрицу $\Phi$ модальности авторов в виде изображения. Рекомендация Step15: Каждой теме соответствует не очень большое число авторов - матрица достаточно разреженная. Кроме того, некоторые темы имеют доминирующего автора $a$, имеющего большую вероятность $p(a|t)$ - этот автор записал больше всего лекций по теме. Будем считать, что автор $a$ значим в теме, если $p(a|t) > 0.01$. Для каждого автора посчитайте, в скольких темах он значим. Найдите авторов-рекордсменов, которые значимы (а значит, читали лекции) в >= 3 темах. Step16: Большинство авторов значимы в 1 теме, что логично. Построение тематической карты авторов По сути, в матрице $\Phi$, соответствующей модальности авторов, записаны тематические кластеры авторов. Для любого автора мы можем составить его тематический круг - авторов, разбирающихся в той же теме, что и данный. Интересующиеся слушатели могут попробовать выполнить эту процедуру для ученых, читающих лекции на Постнауке, которых они знают (например, на Постнауке есть лекции с К. В. Воронцовым - лектором текущего модуля Step17: Визуализируйте найденные двумерные представления с помощью функции scatter. Step18: Должно получиться, что некоторые грппы авторов формируют сгустки, которые можно считать тематическими группами авторов. Раскрасим точки следующим образом Step19: Создание простого тематического навигатора по Постнауке Наш тематический навигатор будет для каждой темы показывать ее список слов, а также список релевантных теме документов. Нам понадобятся распределения $p(d|t)$. По формуле Байеса $p(d|t) = \frac{p(t|d)p(d)}{\sum_{d'}p(t|d')p(d')}$, но поскольку мы считаем документы равновероятными, достаточно разделить каждую строку $\Theta$ на ее сумму, чтобы оценить распределение. Отсортируйте матрицу $p(d|t)$ по убыванию $p(d|t)$ в каждой теме (то есть построчно). 
Нам понадобятся индексы наиболее вероятных документов в каждой теме, поэтому используйте функцию argmax. Step20: Создавать навигатор мы будем прямо в jupiter notebook Step21: Кроме того, подключив модуль ipython.core.display, можно использовать html-разметку в выводе. Например Step22: В цикле для каждой темы выведите ее заголовок, в следующей строке - топ-10 слов темы, затем в виде списка ссылки на 10 наиболее релевантных (по $p(d|t)$) теме документов. Используйте html-разметку. Творчество приветствуется
Python Code: import artm from matplotlib import pyplot as plt import seaborn as sns %matplotlib inline sns.set_style("whitegrid", {'axes.grid' : False}) import numpy as np import pandas as pd from sklearn.externals import joblib from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" Explanation: Тематическая модель Постнауки Peer Review (optional) В этом задании мы применим аппарат тематического моделирования к коллекции текстовых записей видеолекций, скачанных с сайта Постнаука. Мы будем визуализировать модель и создавать прототип тематического навигатора по коллекции. В коллекции 1728 документов, размер словаря - 38467 слов. Слова лемматизированы, то есть приведены к начальной форме, с помощью программы mystem, коллекция сохранена в формате vowpal wabbit. В каждой строке до первой черты записана информация о документе (ссылка на страницу с лекцией), после первой черты следует описание документа. Используются две модальности - текстовая ("text") и модальность авторов ("author"); у каждого документа один автор. Для выполнения задания понадобится библиотека BigARTM. В демонстрации показан пример использования библиотеки версии 0.7.4, на сайте предлагается скачивать версию 0.8.0. В новой версии изменены принципы работы со словарями: они вынесены в отдельный класс (пример в Release Notes). Строить модель и извлекать ее параметры нужно так же, как показано в демонстрации. Вы можете использовать предыдущий релиз или новый релиз на ваше усмотрение. Спецификации всех функций вы можете смотреть на странице Python API. End of explanation # Ваш код batch_vectorizer = artm.BatchVectorizer(data_path='lectures.txt', data_format='vowpal_wabbit', target_folder='lectures_batches', batch_size=250) Explanation: Считывание данных Создайте объект класса artm.BatchVectorizer, который будет ссылаться на директорию с пакетами данных (батчами). Чтобы библиотека могла преобразовать текстовый файл в батчи, создайте пустую директорию и укажите ее название в параметре target_folder. Размер батча для небольших коллекций (как наша) не важен, вы можете указать любой. End of explanation T = 30 # количество тем topic_names=["sbj"+str(i) for i in range(T-1)]+["bcg"] # Ваш код model = artm.ARTM(num_topics=T, topic_names=topic_names, num_processors=2, class_ids={'text':1, 'author':1}, reuse_theta=True, cache_theta=True) Explanation: Инициализация модели Создайте объект класса artm.Model с 30 темами, именами тем, указанными ниже и единичными весами обеих модальностей. Количество тем выбрано не очень большим, чтобы вам было удобнее работать с темами. На этой коллекции можно строить и большее число тем, тогда они будут более узко специализированы. End of explanation # Ваш код np.random.seed(1) dictionary = artm.Dictionary('dict') dictionary.gather(batch_vectorizer.data_path) model.initialize(dictionary=dictionary) Explanation: Мы будем строить 29 предметных тем и одну фоновую. Соберите словарь с помощью метода gather_dictionary и инициализируйте модель, указав random_seed=1. Обязательно укажите свое название словаря, оно понадобится при добавлении регуляризаторов. End of explanation # Ваш код model.scores.add(artm.TopTokensScore(name='top_tokens_score_mod1', class_id='text', num_tokens=15)) model.scores.add(artm.TopTokensScore(name='top_tokens_score_mod2', class_id='author', num_tokens=15)) Explanation: Добавление score Создайте два измерителя качества artm.TopTokensScore - по одному для каждой модальности; количество токенов 15. Названия для score придумайте самостоятельно. 
End of explanation # Ваш код model.regularizers.add(artm.SmoothSparsePhiRegularizer(tau=1e5, class_ids='text', dictionary='dict', topic_names='bcg')) Explanation: Построение модели Мы будем строить модель в два этапа: сначала добавим сглаживающий регуляризатор фоновой темы и настроим параметры модели, затем - добавим разреживающий регуляризатор предметрых тем и выполним еще несколько итераций. Так мы сможем получить наиболее чистые от фоновых слов предметные темы. Сглаживающий и разреживающий регуляризаторы задаются одним и тем же классом artm.SmoothSparsePhiRegularizer: если коэффициент tau положительный, то регуляризатор будет сглаживающий, если отрицательный - разреживающий. Если вы хотите подробнее разобраться, как выполняется регуляризация тематической модели в BigARTM, вы можете прочитать статью, раздел 4. Добавьте сглаживающий регуляризатор с коэффициентом tau = 1e5, указав название своего словаря в dictionary, модальность текста в class_ids и тему "bcg" в topic_names. End of explanation # Ваш код model.num_document_passes = 1 model.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=30) Explanation: Выполните 30 итераций по коллекции (num_collection_passes), количество внутренних итераций установите равным 1. Используйте метод fit_offline модели. End of explanation # Ваш код topic_names_cleared = list(topic_names).remove('bcg') model.regularizers.add(artm.SmoothSparsePhiRegularizer(tau=-1e5, class_ids='text', dictionary='dict', topic_names=topic_names_cleared)) Explanation: Добавьте разреживающий регуляризатор с коэффициентом tau=-1e5, указав название своего словаря, модальность текста в class_ids и все темы "sbjX" в topic_names. End of explanation # Ваш код model.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=15) Explanation: Выполните еще 15 проходов по коллекции. End of explanation # Ваш код tokens = model.score_tracker['top_tokens_score_mod1'].last_tokens for topic_name in model.topic_names: print topic_name + ': ', for word in tokens[topic_name]: print word, print # Ваш код authors = model.score_tracker['top_tokens_score_mod2'].last_tokens for topic_name in model.topic_names: print topic_name + ': ', for author in authors[topic_name]: print author, print Explanation: Интерпретация тем Используя созданные score, выведите топы слов и топы авторов в темах. Удобнее всего выводить топ слов каждой темы с новой строки, указывая название темы в начале строки, и аналогично с авторами. End of explanation sbj_topic_labels = [] # запишите названия тем в список for topic_name in model.topic_names[:29]: sbj_topic_labels.append(tokens[topic_name][0]) topic_labels = sbj_topic_labels + [u"Фоновая тема"] Explanation: В последней теме "bcg" должны находиться общеупотребительные слова. Важный шаг в работе с тематической моделью, когда речь идет о визуализации или создании тематического навигатора, это именование тем. Понять, о чем каждая тема, можно по списку ее топовых слов. Например, тему частица взаимодействие физика кварк симметрия элементарный нейтрино стандартный материя протон бозон заряд масса ускоритель слабый можно назвать "Физика элементарных частиц". Дайте названия 29 предметным темам. Если вы не знаете, как назвать тему, назовите ее первым встретившимся в ней существительным, хотя при таком подходе навигатор будет менее информативным. Из названий тем составьте список из 29 строк и запишите го в переменную sbj_topic_labels. В переменной topic_labels будут храниться названия всех тем, включая фоновую. 
End of explanation model.theta_columns_naming = "title" # включает именование столбцов Theta их названиями-ссылками, а не внутренними id # Ваш код theta = model.get_theta() print('Theta shape: %s' % str(theta.shape)) phi_a = model.get_phi(class_ids='author') print('Phi_a shape: %s' % str(phi_a.shape)) Explanation: Анализ тем Далее мы будем работать с распределениями тем в документах (матрица $\Theta$) и авторов в темах (одна из двух матриц $\Phi$, соответствующая модальности авторов). Создайте переменные, содержащие две этих матрицы, с помощью методов get_phi и get_theta модели. Назовите переменные theta и phi_a. Выведите формы обеих матриц, чтобы понять, по каким осям стоят темы. End of explanation # Ваш код theta.iloc[:,:100] plt.figure(figsize=(20,10)) plt.title('Theta matrix for the first 100 documents') sns.heatmap(theta.iloc[:,:100], cmap='YlGnBu', xticklabels=False) plt.show(); Explanation: Визуализируем фрагмент матрицы $\Theta$ - первые 100 документов (это наиболее простой способ визуально оценить, как темы распределяются в документах). С помощью метода seaborn.heatmap выведите фрагмент theta как изображение. Рекомендация: создайте фигуру pyplot размера (20, 10). End of explanation # Ваш код prob_theme_data = [np.sum(theta.iloc[i]) for i in range(theta.shape[0])] prob_theme_data_normed = prob_theme_data / np.sum(prob_theme_data) prob_theme = pd.DataFrame(data=prob_theme_data_normed, index=topic_labels, columns=['prob']) prob_theme prob_theme_max = prob_theme prob_theme_min = prob_theme print('Max 5 probabilities:') for i in range(5): max_value = prob_theme_max.max()[0] print(prob_theme_max[prob_theme_max.values == max_value].index[0]) prob_theme_max = prob_theme_max[prob_theme_max.values != max_value] print('\nMin 3 probabilities:') for i in range(3): min_value = prob_theme_min.min()[0] print(prob_theme_min[prob_theme_min.values == min_value].index[0]) prob_theme_min = prob_theme_min[prob_theme_min.values != min_value] Explanation: Вы должны увидеть, что фоновая тема имеет большую вероятность в почти каждом документе, и это логично. Кроме того, есть еще одна тема, которая чаще других встречается в документах. Судя по всему, это тема содержит много слов по науку в целом, а каждый документ (видео) в нашей коллекции связан с наукой. Можно (необязательно) дать этой теме название "Наука". Помимо этих двух тем, фоновой и общенаучной, каждый документ характеризуется малым числом других тем. Оценим $p(t)$ - долю каждой темы во всей коллекции. По формуле полной вероятности вычислять эти величины нужно как $p(t) = \sum_d p(t|d) p(d)$. Согласно вероятностной модели, $p(d)$ пропорционально длине документа d. Поступим проще: будем полагать, что все документы равновероятны. Тогда оценить $p(t)$ можно, просуммировав $p(t|d)$ по всем документам, а затем разделив полученный вектор на его сумму. Создайте переменную-датафрейм с T строками, индексированными названиями тем, и 1 столбцом, содержащим оценки $p(t)$. Выведите датафрейм на печать. End of explanation # Ваш код plt.figure(figsize=(20,10)) plt.title('Theta matrix for the first 100 documents') sns.heatmap(phi_a.iloc[:100], cmap='YlGnBu', yticklabels=False) plt.show(); Explanation: Найдите 5 самых распространенных и 3 наименее освещенных темы в коллекции (наибольшие и наименьшие $p(t)$ соответственно), не считая фоновую и общенаучную. Укажите названия, которые вы дали этим темам. Визуализируйте матрицу $\Phi$ модальности авторов в виде изображения. Рекомендация: установите yticklabels=False в heatmap. 
End of explanation phi_a for i in range(phi_a.shape[0]): num_valuble_topics = 0 for val in phi_a.iloc[i]: if val > 0.01: num_valuble_topics += 1 if num_valuble_topics >= 3: print(i), print(phi_a.index[i]) print(phi_a.iloc[184]) Explanation: Каждой теме соответствует не очень большое число авторов - матрица достаточно разреженная. Кроме того, некоторые темы имеют доминирующего автора $a$, имеющего большую вероятность $p(a|t)$ - этот автор записал больше всего лекций по теме. Будем считать, что автор $a$ значим в теме, если $p(a|t) > 0.01$. Для каждого автора посчитайте, в скольких темах он значим. Найдите авторов-рекордсменов, которые значимы (а значит, читали лекции) в >= 3 темах. End of explanation from sklearn.manifold import MDS from sklearn.metrics import pairwise_distances prob_theme_author = np.empty(phi_a.shape) for i in range(prob_theme_author.shape[0]): for j in range(prob_theme_author.shape[1]): prob_theme_author[i,j] = phi_a.iloc[i,j] * prob_theme.iloc[j,:] / np.sum(phi_a.iloc[i,:] * prob_theme.prob.values) # Ваш код similarities = pairwise_distances(prob_theme_author, metric='cosine') mds = MDS(n_components=2, dissimilarity='precomputed', random_state=42) pos = mds.fit_transform(similarities) Explanation: Большинство авторов значимы в 1 теме, что логично. Построение тематической карты авторов По сути, в матрице $\Phi$, соответствующей модальности авторов, записаны тематические кластеры авторов. Для любого автора мы можем составить его тематический круг - авторов, разбирающихся в той же теме, что и данный. Интересующиеся слушатели могут попробовать выполнить эту процедуру для ученых, читающих лекции на Постнауке, которых они знают (например, на Постнауке есть лекции с К. В. Воронцовым - лектором текущего модуля :) Составим карту близости авторов по тематике их исследований. Для этого применим метод понижения размерности MDS к тематическим профилям авторов. Чтобы получить тематический профиль автора, распределение $p(t|a)$, нужно воспользоваться формулой Байеса: $p(t|a) = \frac {p(a|t) p(t)} {\sum_t' p(a|t') p(t')}$. Все необходимые для этого величины у вас есть и записаны в переменных phi и pt. Передайте матрицу тематических профилей авторов, записанных по строкам, в метод MDS с n_components=2. Используйте косинусную метрику (она хорошо подходит для поиска расстояний между векторами, имеющими фиксированную сумму компонент). End of explanation # Ваш код plt.figure(figsize=(10,5)) plt.scatter(pos[:,0], pos[:,1]) plt.show(); Explanation: Визуализируйте найденные двумерные представления с помощью функции scatter. End of explanation import matplotlib.cm as cm colors = cm.rainbow(np.linspace(0, 1, T)) # цвета для тем # Ваш код max_theme_prob_for_colors = [np.argmax(author) for author in prob_theme_author] plt.figure(figsize=(15,10)) plt.axis('off') plt.scatter(pos[:,0], pos[:,1], s=100, c=colors[max_theme_prob_for_colors]) for i, author in enumerate(phi_a.index): plt.annotate(author, pos[i]) plt.savefig('authors_map.pdf', dpi=200, format='pdf') plt.show(); Explanation: Должно получиться, что некоторые грппы авторов формируют сгустки, которые можно считать тематическими группами авторов. Раскрасим точки следующим образом: для каждого автора выберем наиболее вероятную для него тему ($\max_t p(t|a)$), и каждой теме сопоставим цвет. Кроме того, добавим на карту имена и фамилии авторов, это можно сделать в цикле по всем точкам с помощью функции plt.annotate, указывая метку точки первым аргументом и ее координаты в аргументе xy. 
Рекомендуется сделать размер изображения большим, тогда маркеры точек тоже придется увеличить (s=100 в plt.scatter). Изобразите карту авторов и сохраните в pdf-файл с помощью функции plt.savefig. Метки авторов будут пересекаться. Будет очень хорошо, если вы найдете способ, как этого можно избежать. End of explanation # Ваш код prob_doc_theme = theta.values / np.array([np.sum(theme) for theme in theta.values])[:, np.newaxis] prob_doc_theme_sorted_indices = prob_doc_theme.argsort(axis=1)[:,::-1] prob_doc_theme_sorted_indices Explanation: Создание простого тематического навигатора по Постнауке Наш тематический навигатор будет для каждой темы показывать ее список слов, а также список релевантных теме документов. Нам понадобятся распределения $p(d|t)$. По формуле Байеса $p(d|t) = \frac{p(t|d)p(d)}{\sum_{d'}p(t|d')p(d')}$, но поскольку мы считаем документы равновероятными, достаточно разделить каждую строку $\Theta$ на ее сумму, чтобы оценить распределение. Отсортируйте матрицу $p(d|t)$ по убыванию $p(d|t)$ в каждой теме (то есть построчно). Нам понадобятся индексы наиболее вероятных документов в каждой теме, поэтому используйте функцию argmax. End of explanation print "http://yandex.ru" # получится кликабельная ссылка Explanation: Создавать навигатор мы будем прямо в jupiter notebook: это возможно благодаря тому факту, что при печати ссылки она автоматически превращается в гиперссылку. End of explanation from IPython.core.display import display, HTML display(HTML(u"<h1>Заголовок</h1>")) # также <h2>, <h3> display(HTML(u"<ul><li>Пункт 1</li><li>Пункт 2</li></ul>")) display(HTML(u'<font color="green">Зеленый!</font>')) display(HTML(u'<a href="http://yandex.ru">Еще один вариант вывода ссылки</a>')) Explanation: Кроме того, подключив модуль ipython.core.display, можно использовать html-разметку в выводе. Например: End of explanation # Ваш код for i, theme in enumerate(topic_labels): display(HTML("<h3>%s</h3>" % theme)) for j in range(10): print(tokens[model.topic_names[i]][j]), print('') for k in range(10): print(theta.columns[prob_doc_theme_sorted_indices[i,k]]) Explanation: В цикле для каждой темы выведите ее заголовок, в следующей строке - топ-10 слов темы, затем в виде списка ссылки на 10 наиболее релевантных (по $p(d|t)$) теме документов. Используйте html-разметку. Творчество приветствуется :) End of explanation
3,659
Given the following text description, write Python code to implement the functionality described below step by step Description: Lektion 13 Step1: Besselfunktionen Step2: Die ungeraden a werden rückwärts gelöst. Das ist verwirrend. Step3: Wir hatten das beim ersten Mal mit $N=8$ gemacht. Das sind zu wenige Daten. Jetzt noch Mal mit $N=18$. Aus Gründen, die ich nicht verstehe, muss man den Kernel zurücksetzen, bevor man mit dem neuen $N$ startet. Step4: Also $$ \frac{a_{2j+3}}{a_{2j+1}} = \frac1{(j+1)(j+2)} $$ Das bedeutet $$ a_{2n+3} = \prod_{j=0}^n \frac1{(j+1)(j+2)} a_1 = \frac{a_1}{(n+1)!(n+2)!}. $$ Probe Step5: Eine zweite Lösung müsste man erhalten können, indem man einen Ansatz aus einer Potenzreihe und dem Produkt aus dem Logarithmus und einer Potenzreihe macht. Das Reduktionsverfahren von d'Alembert führt auf ein schwieriges Integral. Step6: http Step7: Pattern matching
Python Code: from sympy import * init_printing() from IPython.display import display Explanation: Lektion 13 End of explanation x = Symbol('x') y = Function('y') dgl = Eq(y(x).diff(x, 2), -1/x*y(x).diff(x) + 1/x**2*y(x) +4*y(x)) dgl #dsolve(dgl) # NotImplementedError #N = 8 N=18 a = [Symbol('a'+str(j)) for j in range(N)] n = Symbol('n') ys = sum([a[j]*x**j for j in range(N)]) ys gl = dgl.subs(y(x), ys).doit() gl p1 = (gl.lhs - gl.rhs).expand() p1 p1.coeff(x**(-2)) p1.coeff(x**(-1)) p1.coeff(x, 1) gls = [] for j in range(N+1): glg = Eq(p1.coeff(x, j-2), 0) if glg != True: gls.append(glg) gls #solve(gls) #NotImplementedError Lsg = solve(gls[:-1]) Lsg Explanation: Besselfunktionen End of explanation var = a.copy() # böse Falle del var[1] var Lsg = solve(gls[:-1], var) Lsg Explanation: Die ungeraden a werden rückwärts gelöst. Das ist verwirrend. End of explanation #raise Unterbrechung Lsg[a[1]] = a[1] q = [Lsg[a[2*j+3]]/Lsg[a[2*j+1]] for j in range(int(N/2)-2)] display(q) liste = [] for j in range(int(N/2-2)): m = Lsg[a[2*j+1]]/Lsg[a[2*j+3]] liste.append(m/(j+2)) liste Explanation: Wir hatten das beim ersten Mal mit $N=8$ gemacht. Das sind zu wenige Daten. Jetzt noch Mal mit $N=18$. Aus Gründen, die ich nicht verstehe, muss man den Kernel zurücksetzen, bevor man mit dem neuen $N$ startet. End of explanation for j in range(int(N/2)-2): display((Lsg[a[2*j+1]], a[1]/factorial(j)/factorial(j+1))) S1 = Sum(x**(2*n+1)/factorial(n)/factorial(n+1), (n,0,oo)) S1 u = S1.doit() u srepr(u) besseli? Explanation: Also $$ \frac{a_{2j+3}}{a_{2j+1}} = \frac1{(j+1)(j+2)} $$ Das bedeutet $$ a_{2n+3} = \prod_{j=0}^n \frac1{(j+1)(j+2)} a_1 = \frac{a_1}{(n+1)!(n+2)!}. $$ Probe End of explanation tmp = dgl.subs(y(x), u).doit() tmp Explanation: Eine zweite Lösung müsste man erhalten können, indem man einen Ansatz aus einer Potenzreihe und dem Produkt aus dem Logarithmus und einer Potenzreihe macht. Das Reduktionsverfahren von d'Alembert führt auf ein schwieriges Integral. End of explanation (tmp.lhs - tmp.rhs).series(x, 0, 20) Explanation: http://dlmf.nist.gov $$ I_{\nu-1}(z) - I_{\nu+1}(z) = \frac{2\nu}z I_\nu(z) $$ End of explanation x = Symbol('x') x1 = Wild('x1') pattern = sin(x1) a = sin(2*x+5) m = a.match(pattern) m b = 2*sin(x1/2)*cos(x1/2) b b.subs(m) def expand_sin_x_halbe(term): x1 = Wild('x1') pattern = sin(x1) ersetzung = 2*sin(x1/2)*cos(x1/2) m = term.match(pattern) if m: return ersetzung.subs(m) else: return term expand_sin_x_halbe(sin(x/2)) series(expand_sin_x_halbe(sin(x)) - sin(x), x, 0, 20) a = 2*sin(x) expand_sin_x_halbe(a) a.is_Mul a.args def expand_sin_x_halbe(ausdr): ausdr = S(ausdr) x1 = Wild('x1') pattern = sin(x1) ersetzung = 2*sin(x1/2)*cos(x1/2) m = ausdr.match(pattern) if m: res = ersetzung.subs(m) elif ausdr.is_Mul: res = 1 for term in ausdr.args: res = res * expand_sin_x_halbe(term) elif ausdr.is_Add: res = 0 for term in ausdr.args: res = res + expand_sin_x_halbe(term) else: res = ausdr return res expand_sin_x_halbe(sin(2*x)) expand_sin_x_halbe(sin(2*x)/2) expand_sin_x_halbe(1+10*sin((x+1)**2)) ausdr = (1 + sin(x/2))**3 expand_sin_x_halbe(ausdr) ausdr.is_Pow Explanation: Pattern matching End of explanation
3,660
Given the following text description, write Python code to implement the functionality described below step by step Description: Implementing a Weighted Majority Rule Ensemble Classifier in scikit-learn <br> <br> Here, I want to present a simple and conservative approach of implementing a weighted majority rule ensemble classifier in scikit-learn that yielded remarkably good results when I tried it in a kaggle competition. For me personally, kaggle competitions are just a nice way to try out and compare different approaches and ideas -- basically an opportunity to learn in a controlled environment with nice datasets. Of course, there are other implementations of more sophisticated ensemble methods in scikit-learn, such as bagging classifiers, random forests, or the famous AdaBoost algorithm. However, as far as I am concerned, they all require the usage of a common "base classifier." In contrast, my motivation for the following approach was to combine conceptually different machine learning classifiers and use a majority vote rule. The reason for this was that I had trained a set of equally well performing models, and I wanted to balance out their individual weaknesses. <br> <br> Sections Classifying Iris Flowers Using Different Classification Models Implementing the Majority Voting Rule Ensemble Classifier Additional Note About the EnsembleClassifier Implementation Step1: As we can see from the cross-validation results above, the performance of the three models is almost equal. <br> <br> Implementing the Majority Voting Rule Ensemble Classifier [back to top] Hard Voting Now, we will implement a simple EnsembleClassifier class that allows us to combine the three different classifiers. We define a predict method that let's us simply take the majority rule of the predictions by the classifiers. E.g., if the prediction for a sample is classifier 1 -> class 1 classifier 2 -> class 1 classifier 3 -> class 2 we would classify the sample as "class 1." If weights are provided, the classifier multiplies the occurence of a class by this weight. For example, given the weights [$w_1$, $w_2$, $w_3$] = [3, 1, 1] classifier 1 -> class 1 * $w_1$ -> 1, 1, 1 classifier 2 -> class 2 * $w_2$ -> 2 classifier 3 -> class 2 * $w_3$ -> 2 we would classify the sample as "class 1, " which can also be illustrated by the following code snippet Step10: Soft Voting Furthermore, we add a weights parameter, which let's us assign a specific weight to each classifier. In order to work with the weights, we collect the predicted class probabilities for each classifier, multiply it by the classifier weight, and take the average. Based on these weighted average probabilties, we can then assign the class label. To illustrate this with a simple example, let's assume we have 3 classifiers and a 3-class classification problems where we assign equal weights to all classifiers (the default) Step11: <br> <br> EnsembleClassifier - Tuning Weights [back to top] Let's get back to our weights parameter. Here, we will use a naive brute-force approach to find the optimal weights for each classifier to increase the prediction accuracy. Step13: <br> <br> EnsembleClassifier - Pipelines [back to top] Of course, we can also use the EnsembleClassifier in Pipelines. This is especially useful if a certain classifier does a pretty good job on a certain feature subset or requires different preprocessing steps. For demonstration purposes, let us implement a simple ColumnSelector class. 
Step14: <br> <br> Ensemble EnsembleClassifier [back to top] If one EnsembleClassifier is not yet enough, we can also build an ensemble classifier of ensemble classifiers. Just like the other examples above, the following code is just meant to be a technical demonstration Step15: <br> <br> Some Final Words [back to top] When we applied the EnsembleClassifier to the iris example above, the results surely looked nice. But we have to keep in mind that this is just a toy example. The majority rule voting approach might not always work so well in practice, especially if the ensemble consists of more "weak" than "strong" classification models. Also, although we used a cross-validation approach to overcome the overfitting challenge, please always keep a spare validation dataset to evaluate the results. Anyway, if you are interested in those approaches, I added them to my mlxtend Python module; in mlxtend (short for "machine learning library extensions"), I collect certain things that I personally find useful but are not available in other packages yet. You can install mlxtend via pip install mlxtend and then load the ColumnSelector or EnsembleClassifier via from mlxtend.sklearn import ColumnSelector from mlxtend.sklearn import EnsembleClassifier <br> <br> Appendix I - Plotting Averaged Probabilities [back to top] Step16: <br> <br> Appendix II - Plotting Decision Boundaries [back to top] Step17: <br> <br> Appendix III - GridSearch Support [back to top] Step18: <br> <br> Appendix IV - Verbosity Levels [back to top] Step19: verbose=1 Step20: verbose=2 Step21: verbose=3 Step22: verbose=4
Python Code: from sklearn import datasets iris = datasets.load_iris() X, y = iris.data[:, 1:3], iris.target from sklearn import cross_validation from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier import numpy as np np.random.seed(123) clf1 = LogisticRegression() clf2 = RandomForestClassifier() clf3 = GaussianNB() print('5-fold cross validation:\n') for clf, label in zip([clf1, clf2, clf3], ['Logistic Regression', 'Random Forest', 'naive Bayes']): scores = cross_validation.cross_val_score(clf, X, y, cv=5, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) Explanation: Implementing a Weighted Majority Rule Ensemble Classifier in scikit-learn <br> <br> Here, I want to present a simple and conservative approach of implementing a weighted majority rule ensemble classifier in scikit-learn that yielded remarkably good results when I tried it in a kaggle competition. For me personally, kaggle competitions are just a nice way to try out and compare different approaches and ideas -- basically an opportunity to learn in a controlled environment with nice datasets. Of course, there are other implementations of more sophisticated ensemble methods in scikit-learn, such as bagging classifiers, random forests, or the famous AdaBoost algorithm. However, as far as I am concerned, they all require the usage of a common "base classifier." In contrast, my motivation for the following approach was to combine conceptually different machine learning classifiers and use a majority vote rule. The reason for this was that I had trained a set of equally well performing models, and I wanted to balance out their individual weaknesses. <br> <br> Sections Classifying Iris Flowers Using Different Classification Models Implementing the Majority Voting Rule Ensemble Classifier Additional Note About the EnsembleClassifier Implementation: Class Labels vs. Probabilities EnsembleClassifier - Tuning Weights EnsembleClassifier - Pipelines Some Final Words Appendix I - Plotting-Averaged-Probabilities Appendix II - Plotting Decision Boundaries Appendix III - GridSearch Support <br> <br> Classifying Iris Flowers Using Different Classification Models [back to top] For a simple example, let us use three different classification models to classify the samples in the Iris dataset: Logistic regression, a naive Bayes classifier with a Gaussian kernel, and a random forest classifier -- an ensemble method itself. At this point, let's not worry about preprocessing the data and training and test sets. Also, we will only use 2 feature columns (sepal width and petal height) to make the classification problem harder. End of explanation import numpy as np np.argmax(np.bincount([1, 2, 2], weights=[3, 1, 1])) Explanation: As we can see from the cross-validation results above, the performance of the three models is almost equal. <br> <br> Implementing the Majority Voting Rule Ensemble Classifier [back to top] Hard Voting Now, we will implement a simple EnsembleClassifier class that allows us to combine the three different classifiers. We define a predict method that let's us simply take the majority rule of the predictions by the classifiers. E.g., if the prediction for a sample is classifier 1 -> class 1 classifier 2 -> class 1 classifier 3 -> class 2 we would classify the sample as "class 1." If weights are provided, the classifier multiplies the occurence of a class by this weight. 
For example, given the weights [$w_1$, $w_2$, $w_3$] = [3, 1, 1] classifier 1 -> class 1 * $w_1$ -> 1, 1, 1 classifier 2 -> class 2 * $w_2$ -> 2 classifier 3 -> class 2 * $w_3$ -> 2 we would classify the sample as "class 1, " which can also be illustrated by the following code snippet: End of explanation from sklearn.base import BaseEstimator from sklearn.base import ClassifierMixin from sklearn.base import TransformerMixin from sklearn.preprocessing import LabelEncoder from sklearn.externals import six from sklearn.base import clone from sklearn.pipeline import _name_estimators import numpy as np import operator class EnsembleClassifier(BaseEstimator, ClassifierMixin, TransformerMixin): Soft Voting/Majority Rule classifier for unfitted clfs. Parameters ---------- clfs : array-like, shape = [n_classifiers] A list of classifiers. Invoking the `fit` method on the `VotingClassifier` will fit clones of those original classifiers that will be stored in the class attribute `self.clfs_`. voting : str, {'hard', 'soft'} (default='hard') If 'hard', uses predicted class labels for majority rule voting. Else if 'soft', predicts the class label based on the argmax of the sums of the predicted probalities, which is recommended for an ensemble of well-calibrated classifiers. weights : array-like, shape = [n_classifiers], optional (default=`None`) Sequence of weights (`float` or `int`) to weight the occurances of predicted class labels (`hard` voting) or class probabilities before averaging (`soft` voting). Uses uniform weights if `None`. Attributes ---------- classes_ : array-like, shape = [n_predictions] Examples -------- >>> import numpy as np >>> from sklearn.linear_model import LogisticRegression >>> from sklearn.naive_bayes import GaussianNB >>> from sklearn.ensemble import RandomForestClassifier >>> clf1 = LogisticRegression(random_state=1) >>> clf2 = RandomForestClassifier(random_state=1) >>> clf3 = GaussianNB() >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) >>> y = np.array([1, 1, 1, 2, 2, 2]) >>> eclf1 = VotingClassifier(clfs=[clf1, clf2, clf3], voting='hard') >>> eclf1 = eclf1.fit(X, y) >>> print(eclf1.predict(X)) [1 1 1 2 2 2] >>> eclf2 = VotingClassifier(clfs=[clf1, clf2, clf3], voting='soft') >>> eclf2 = eclf2.fit(X, y) >>> print(eclf2.predict(X)) [1 1 1 2 2 2] >>> eclf3 = VotingClassifier(clfs=[clf1, clf2, clf3], ... voting='soft', weights=[2,1,1]) >>> eclf3 = eclf3.fit(X, y) >>> print(eclf3.predict(X)) [1 1 1 2 2 2] >>> def __init__(self, clfs, voting='hard', weights=None): self.clfs = clfs self.named_clfs = {key:value for key,value in _name_estimators(clfs)} self.voting = voting self.weights = weights def fit(self, X, y): Fit the clfs. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. 
Returns ------- self : object if isinstance(y, np.ndarray) and len(y.shape) > 1 and y.shape[1] > 1: raise NotImplementedError('Multilabel and multi-output'\ ' classification is not supported.') if self.voting not in ('soft', 'hard'): raise ValueError("Voting must be 'soft' or 'hard'; got (voting=%r)" % voting) if self.weights and len(self.weights) != len(self.clfs): raise ValueError('Number of classifiers and weights must be equal' '; got %d weights, %d clfs' % (len(self.weights), len(self.clfs))) self.le_ = LabelEncoder() self.le_.fit(y) self.classes_ = self.le_.classes_ self.clfs_ = [] for clf in self.clfs: fitted_clf = clone(clf).fit(X, self.le_.transform(y)) self.clfs_.append(fitted_clf) return self def predict(self, X): Predict class labels for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- maj : array-like, shape = [n_samples] Predicted class labels. if self.voting == 'soft': maj = np.argmax(self.predict_proba(X), axis=1) else: # 'hard' voting predictions = self._predict(X) maj = np.apply_along_axis( lambda x: np.argmax(np.bincount(x, weights=self.weights)), axis=1, arr=predictions) maj = self.le_.inverse_transform(maj) return maj def predict_proba(self, X): Predict class probabilities for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- avg : array-like, shape = [n_samples, n_classes] Weighted average probability for each class per sample. avg = np.average(self._predict_probas(X), axis=0, weights=self.weights) return avg def transform(self, X): Return class labels or probabilities for X for each estimator. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- If `voting='soft'`: array-like = [n_classifiers, n_samples, n_classes] Class probabilties calculated by each classifier. If `voting='hard'`: array-like = [n_classifiers, n_samples] Class labels predicted by each classifier. if self.voting == 'soft': return self._predict_probas(X) else: return self._predict(X) def get_params(self, deep=True): Return estimator parameter names for GridSearch support if not deep: return super(EnsembleClassifier, self).get_params(deep=False) else: out = self.named_clfs.copy() for name, step in six.iteritems(self.named_clfs): for key, value in six.iteritems(step.get_params(deep=True)): out['%s__%s' % (name, key)] = value return out def _predict(self, X): Collect results from clf.predict calls. return np.asarray([clf.predict(X) for clf in self.clfs_]).T def _predict_probas(self, X): Collect results from clf.predict calls. 
return np.asarray([clf.predict_proba(X) for clf in self.clfs_]) # Majority Rule (hard) Voting np.random.seed(123) eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='hard') for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']): scores = cross_validation.cross_val_score(clf, X, y, cv=5, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) # Average Probabilities (soft) Voting np.random.seed(123) eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[2,1,5]) for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']): scores = cross_validation.cross_val_score(clf, X, y, cv=5, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) Explanation: Soft Voting Furthermore, we add a weights parameter, which let's us assign a specific weight to each classifier. In order to work with the weights, we collect the predicted class probabilities for each classifier, multiply it by the classifier weight, and take the average. Based on these weighted average probabilties, we can then assign the class label. To illustrate this with a simple example, let's assume we have 3 classifiers and a 3-class classification problems where we assign equal weights to all classifiers (the default): w1=1, w2=1, w3=1. The weighted average probabilities for a sample would then be calculated as follows: | classifier | class 1 | class 2 | class 3 | |-----------------|----------|----------|----------| | classifier 1 | w1 * 0.2 | w1 * 0.5 | w1 * 0.3 | | classifier 2 | w2 * 0.6 | w2 * 0.3 | w2 * 0.1 | | classifier 3 | w3 * 0.3 | w3 * 0.4 | w3 * 0.3 | | weighted average| 0.37 | 0.4 | 0.3 | We can see in the table above that class 2 has the highest weighted average probability, thus we classify the sample as class 2. Now, let's put it into code and apply it to our Iris classification. End of explanation import pandas as pd np.random.seed(123) df = pd.DataFrame(columns=('w1', 'w2', 'w3', 'mean', 'std')) i = 0 for w1 in range(1,4): for w2 in range(1,4): for w3 in range(1,4): if len(set((w1,w2,w3))) == 1: # skip if all weights are equal continue eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[w1,w2,w3]) scores = cross_validation.cross_val_score( estimator=eclf, X=X, y=y, cv=5, scoring='accuracy', n_jobs=1) df.loc[i] = [w1, w2, w3, scores.mean(), scores.std()] i += 1 df.sort(columns=['mean', 'std'], ascending=False) Explanation: <br> <br> EnsembleClassifier - Tuning Weights [back to top] Let's get back to our weights parameter. Here, we will use a naive brute-force approach to find the optimal weights for each classifier to increase the prediction accuracy. End of explanation class ColumnSelector(object): A feature selector for scikit-learn's Pipeline class that returns specified columns from a numpy array. 
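As a quick check on the soft-voting arithmetic above, the weighted-average table can be reproduced directly with NumPy; a minimal sketch where the probability rows and weights are the illustrative numbers from the table, not real classifier output:

import numpy as np

# Per-classifier class probabilities for one sample (rows: classifiers, columns: classes),
# copied from the illustrative soft-voting table above.
probas = np.array([[0.2, 0.5, 0.3],
                   [0.6, 0.3, 0.1],
                   [0.3, 0.4, 0.3]])
weights = np.array([1, 1, 1])  # equal classifier weights w1 = w2 = w3 = 1

# Weighted average probability per class, then pick the class with the largest average --
# the same np.average call used inside EnsembleClassifier.predict_proba.
avg = np.average(probas, axis=0, weights=weights)
print(avg)                 # class 2 has the largest weighted average
print(np.argmax(avg) + 1)  # -> 2, matching the table's conclusion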
def __init__(self, cols): self.cols = cols def transform(self, X, y=None): return X[:, self.cols] def fit(self, X, y=None): return self from sklearn.pipeline import Pipeline from sklearn.lda import LDA pipe1 = Pipeline([ ('sel', ColumnSelector([1])), # use only the 1st feature ('clf', GaussianNB())]) pipe2 = Pipeline([ ('sel', ColumnSelector([0, 1])), # use the 1st and 2nd feature ('dim', LDA(n_components=1)), # Dimensionality reduction via LDA ('clf', LogisticRegression())]) eclf = EnsembleClassifier([pipe1, pipe2]) scores = cross_validation.cross_val_score(eclf, X, y, cv=5, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) Explanation: <br> <br> EnsembleClassifier - Pipelines [back to top] Of course, we can also use the EnsembleClassifier in Pipelines. This is especially useful if a certain classifier does a pretty good job on a certain feature subset or requires different preprocessing steps. For demonstration purposes, let us implement a simple ColumnSelector class. End of explanation eclf1 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[5,2,1]) eclf2 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[4,2,1]) eclf3 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[1,2,4]) eclf = EnsembleClassifier(clfs=[eclf1, eclf2, eclf3], voting='soft', weights=[2,1,1]) scores = cross_validation.cross_val_score(eclf, X, y, cv=5, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) Explanation: <br> <br> Ensemble EnsembleClassifier [back to top] If one EnsembleClassifier is not yet enough, we can also build an ensemble classifier of ensemble classifiers. Just like the other examples above, the following code is just meant to be a technical demonstration: End of explanation from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt %matplotlib inline clf1 = LogisticRegression(random_state=123) clf2 = RandomForestClassifier(random_state=123) clf3 = GaussianNB() X = np.array([[-1.0, -1.0], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2]]) y = np.array([1, 1, 2, 2]) eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[1, 1, 5]) # predict class probabilities for all classifiers probas = [c.fit(X, y).predict_proba(X) for c in (clf1, clf2, clf3, eclf)] # get class probabilities for the first sample in the dataset class1_1 = [pr[0,0] for pr in probas] class2_1 = [pr[0,1] for pr in probas] ##################### # plotting N = 4 # number of groups ind = np.arange(N) # group positions width = 0.35 # bar width fig, ax = plt.subplots(figsize=(7,5)) # bars for classifier 1-3 p1 = ax.bar(ind, np.hstack(([class1_1[:-1],[0]])), width, color='green') p2 = ax.bar(ind + width, np.hstack(([class2_1[:-1],[0]])), width, color='lightgreen') # bars for VotingClassifier p3 = ax.bar(ind, [0, 0, 0, class1_1[-1]], width, color='blue') p4 = ax.bar(ind + width, [0, 0, 0, class2_1[-1]], width, color='steelblue') # plot annotations plt.axvline(2.8, color='k', linestyle='dashed') ax.set_xticks(ind + width) ax.set_xticklabels(['LogisticRegression\nweight 1', 'GaussianNB\nweight 1', 'RandomForestClassifier\nweight 5', 'VotingClassifier\n(average probabilities)'], rotation=40, ha='right') plt.ylim([0,1]) plt.ylabel('probability') plt.title('Class probabilities for sample 1 by different classifiers') plt.legend([p1[0], p2[0]], ['class 1', 'class 2'] 
, loc='upper left') plt.tight_layout() #plt.savefig('../../images/sklean_ensemble_probas.png') plt.show() Explanation: <br> <br> Some Final Words [back to top] When we applied the EnsembleClassifier to the iris example above, the results surely looked nice. But we have to keep in mind that this is just a toy example. The majority rule voting approach might not always work so well in practice, especially if the ensemble consists of more "weak" than "strong" classification models. Also, although we used a cross-validation approach to overcome the overfitting challenge, please always keep a spare validation dataset to evaluate the results. Anyway, if you are interested in those approaches, I added them to my mlxtend Python module; in mlxtend (short for "machine learning library extensions"), I collect certain things that I personally find useful but are not available in other packages yet. You can install mlxtend via pip install mlxtend and then load the ColumnSelector or EnsembleClassifier via from mlxtend.sklearn import ColumnSelector from mlxtend.sklearn import EnsembleClassifier <br> <br> Appendix I - Plotting Averaged Probabilities [back to top] End of explanation from mlxtend.matplotlib import plot_decision_regions import matplotlib.pyplot as plt from sklearn import datasets from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from mlxtend.sklearn import EnsembleClassifier %matplotlib inline # Loading some example data iris = datasets.load_iris() X = iris.data[:, [0,2]] y = iris.target # Training classifiers clf1 = DecisionTreeClassifier(max_depth=4) clf2 = KNeighborsClassifier(n_neighbors=7) clf3 = SVC(kernel='rbf', probability=True) eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[2,1,2]) clf1.fit(X,y) clf2.fit(X,y) clf3.fit(X,y) eclf.fit(X,y) from itertools import product # Plotting decision regions x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1)) f, axarr = plt.subplots(2, 2, sharex='col', sharey='row', figsize=(10, 8)) for idx, clf, tt in zip(product([0, 1], [0, 1]), [clf1, clf2, clf3, eclf], ['Decision Tree (depth=4)', 'KNN (k=7)', 'Kernel SVM', 'Soft Voting']): Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.4) axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=0.8) axarr[idx[0], idx[1]].set_title(tt) plt.show() Explanation: <br> <br> Appendix II - Plotting Decision Boundaries [back to top] End of explanation from sklearn.grid_search import GridSearchCV clf1 = LogisticRegression(random_state=1) clf2 = RandomForestClassifier(random_state=1) clf3 = GaussianNB() eclf = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='soft') params = {'logisticregression__C': [1.0, 100.0], 'randomforestclassifier__n_estimators': [20, 200],} grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5) grid.fit(iris.data, iris.target) for params, mean_score, scores in grid.grid_scores_: print("%0.3f (+/-%0.03f) for %r" % (mean_score, scores.std() / 2, params)) [mean_score for params, mean_score, scores in grid.grid_scores_] Explanation: <br> <br> Appendix III - GridSearch Support [back to top] End of explanation import numpy as np from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from 
mlxtend.sklearn import EnsembleClassifier clf1 = LogisticRegression(random_state=1) clf2 = RandomForestClassifier(random_state=1) clf3 = GaussianNB() X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) y = np.array([1, 1, 1, 2, 2, 2]) Explanation: <br> <br> Appendix IV - Verbosity Levels [back to top] End of explanation eclf1 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='hard', verbose=1) eclf1 = eclf1.fit(X, y) print(eclf1.predict(X)) Explanation: verbose=1 End of explanation eclf1 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='hard', verbose=2) eclf1 = eclf1.fit(X, y) print(eclf1.predict(X)) Explanation: verbose=2 End of explanation eclf1 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='hard', verbose=3) eclf1 = eclf1.fit(X, y) print(eclf1.predict(X)) Explanation: verbose=3 End of explanation eclf1 = EnsembleClassifier(clfs=[clf1, clf2, clf3], voting='hard', verbose=4) eclf1 = eclf1.fit(X, y) print(eclf1.predict(X)) Explanation: verbose=4 End of explanation
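Essentially the same functionality is also available in scikit-learn (0.17+) as sklearn.ensemble.VotingClassifier; a rough equivalent of the hard/soft voting examples above (note that VotingClassifier takes (name, estimator) pairs rather than a bare list of clfs):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

clf1 = LogisticRegression(random_state=1)
clf2 = RandomForestClassifier(random_state=1)
clf3 = GaussianNB()
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

# Built-in voting ensemble: (name, estimator) tuples, same voting/weights options.
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
                        voting='soft', weights=[2, 1, 1])
print(eclf.fit(X, y).predict(X))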
3,661
Given the following text description, write Python code to implement the functionality described below step by step Description: Create the bag of words Step1: Calculate cosine I've tried here to compare the first sentence's vector to all other vectors. The first vector's status is not spam (=False). I also count how many true positives (vectors with cosine < 1 that are also not spam) and false positives (cosine < 1, but marked as spam) there are. I'll evaluate each method with the F1 metric
Python Code: from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(analyzer = "word", \ tokenizer = None, \ preprocessor = None, \ stop_words = None, \ max_features = 5000) train_data_features = vectorizer.fit_transform(data['lower']) # Numpy arrays are easy to work with, so convert the result to an # array train_data_features = train_data_features.toarray() # let's see what we have there vectorizer.get_feature_names()[-5:] len(data) Explanation: Create the bag of words End of explanation from scipy.spatial.distance import cosine cosines = {} # print("First sentence: %s\nSpam: %s\n\n" % (data['lower'][0], data['Status'][0])) first_vector = train_data_features[0] for i in range(1, len(data)): cosines[i] = cosine(first_vector, train_data_features[i]) # print(cosines) false_status = 0 true_status = 0 FN = 0 for i in range(1, len(data)): if cosines[i] < 1.0: if data['Status'][i] == True: true_status += 1 else: false_status += 1 else: if data['Status'][i] == False: FN += 1 TP = false_status FP = true_status F1 = 2*TP/(2*TP+FP+FN) print("F1 = %0.4f" % F1) Explanation: Calculate cosine I've tried here to compare the first sentence's vector to all other vectors. The first vector's status is not spam (=False). I also count how many true positives (vectors with cosine < 1 that are also not spam) and false positives (cosine < 1, but marked as spam) there are. I'll evaluate each method with the F1 metric: $$F1=\dfrac{2TP}{(2TP + FP + FN)}$$ End of explanation
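The per-message loop above can be collapsed into two library calls; a sketch assuming the same train_data_features matrix and boolean data['Status'] column as above, with "not spam" kept as the positive class to mirror the manual TP/FP/FN count:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import f1_score

# Cosine similarity of every other message to the first one, in a single call
# (a similarity > 0 corresponds to a cosine distance < 1 in the loop above).
sims = cosine_similarity(train_data_features[0:1], train_data_features[1:])[0]

# Prediction: "looks like the first (non-spam) message" when the similarity is non-zero.
pred_not_spam = sims > 0
true_not_spam = np.asarray(data['Status'][1:]) == False

# F1 with "not spam" as the positive class, matching the manual count above.
print("F1 = %0.4f" % f1_score(true_not_spam, pred_not_spam))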
3,662
Given the following text description, write Python code to implement the functionality described below step by step Description: Example of how to run the TensorFlow library interactively in IPython Step1: This launches an interactive session, useful when we want to try out methods Step2: We try the function that reduces a tensor by taking means Step3: We close the session to free resources
Python Code: import tensorflow as tf Explanation: Example of how to run the TensorFlow library interactively in IPython End of explanation sess = tf.InteractiveSession() x = tf.Variable([[2.0, 3.0],[4.0, 12.0]]) Explanation: This launches an interactive session, useful when we want to try out methods End of explanation x.initializer.run() tf.reduce_mean(x).eval() tf.reduce_mean(x,1).eval() tf.reduce_mean(x,0).eval() Explanation: We try the function that reduces a tensor by taking means End of explanation sess.close() Explanation: We close the session to free resources End of explanation
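For completeness, the same computation can be run without InteractiveSession by using an ordinary Session as a context manager (TensorFlow 1.x style); a small sketch:

import tensorflow as tf

x = tf.Variable([[2.0, 3.0], [4.0, 12.0]])

with tf.Session() as sess:
    # Initialize the variable in this session, then evaluate the reductions explicitly.
    sess.run(x.initializer)
    print(sess.run(tf.reduce_mean(x)))     # mean of all elements   -> 5.25
    print(sess.run(tf.reduce_mean(x, 1)))  # mean along each row    -> [2.5, 8.0]
    print(sess.run(tf.reduce_mean(x, 0)))  # mean along each column -> [3.0, 7.5]
# The session is closed automatically when the with-block exits, so no sess.close() is needed.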
3,663
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Understanding currents, fields, charges and potentials Cylinder app survey Step1: 2. Potential differences and Apparent Resistivities Using the widgets contained in this notebook you will develop a better understand of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted. Computing Apparent Resistivity In practice we cannot measure the potentials everywhere, we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities. In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations Step2: 3. Building Pseudosections 2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines as shown below. For pole-dipole or dipole-pole surveys the $45^{\circ}$ line is simply extended from the location of the pole. By using this method of plotting, the long offset electrodes plot deeper than those with short offsets. This provides a rough idea of the region sampled by each data point, but the vertical axis of a pseudo-section is not a true depth. In the widget below the red dot marks the midpoint of the current dipole or the location of the A electrode location in a pole-dipole array while the green dots mark the midpoints of the potential dipoles or M electrode locations in a dipole-pole array. The blue dots then mark the location in the pseudo-section where the lines from Tx and Rx midpoints intersect and the data is plotted. By stepping through the Tx (current electrode pairs) using the slider you can see how the pseudo section is built up. The figures shown below show how the points in a pseudo-section are plotted for pole-dipole, dipole-pole, and dipole-dipole arrays. The color coding of the dots match those shown in the widget. <br /> <br /> <img style="float Step3: DC pseudo-section app $\rho_1$ Step4: 4. Parametric Inversion In this final widget you are able to forward model the apparent resistivity of a cylinder embedded in a two layered earth. Pseudo-sections of the apparent resistivity can be generated using dipole-dipole, pole-dipole, or dipole-pole arrays to see how survey geometry can distort the size, shape, and location of conductive bodies in a pseudo-section. Due to distortion and artifacts present in pseudo-sections trying to interpret them directly is typically difficult and dangerous due to the risk of misinterpretation. Inverting the data to find a model which fits the observed data and is geologically reasonable should be standard practice. By systematically varying the model parameters and comparing the plots of observed vs. predicted apparent resistivity a parametric inversion can be preformed by hand to find the "best" fitting model. 
Normalized data misfits, which provide a numerical measure of the difference between the observed and predicted data, are useful for quantifying how well an inversion model fits the observed data. The manual inversion process can be difficult and time consuming even with small examples such as the one presented here. Therefore, numerical optimization algorithms are typically used to minimize the data misfit and a model objective function, which provides information about the model structure and complexity, in order to find an optimal solution. Parametric DC inversion app Definition of variables
Python Code: app = cylinder_app(); display(app) Explanation: 1. Understanding currents, fields, charges and potentials Cylinder app survey: Type of survey A: (+) Current electrode location B: (-) Current electrode location M: (+) Potential electrode location N: (-) Potential electrode location r: radius of cylinder xc: x location of cylinder center zc: z location of cylinder center $\rho_1$: Resistivity of the halfspace $\rho_2$: Resistivity of the cylinder Field: Field to visualize Type: which part of the field Scale: Linear or Log Scale visualization End of explanation app = plot_layer_potentials_app() display(app) Explanation: 2. Potential differences and Apparent Resistivities Using the widgets contained in this notebook you will develop a better understand of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted. Computing Apparent Resistivity In practice we cannot measure the potentials everywhere, we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities. In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations: \begin{align} V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \ V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right] \end{align} where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes. The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows, \begin{equation} \Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G} \end{equation} and the resistivity of the halfspace $\rho$ is equal to, $$ \rho = \frac{\Delta V_{MN}}{IG} $$ In this equation $G$ is often referred to as the geometric factor. In the case where we are not in a uniform halfspace the above equation is used to compute the apparent resistivity ($\rho_a$) which is the resistivity of the uniform halfspace which best reproduces the measured potential difference. In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculted apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed. 
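The geometric factor and apparent resistivity expressions above are easy to verify numerically; a small sketch for a hypothetical in-line electrode layout over a uniform halfspace (the positions, resistivity, and current are made-up illustration values):

import numpy as np

# Hypothetical in-line electrode x-positions in metres: current pair A-B, potential pair M-N.
xA, xB, xM, xN = -15.0, 15.0, -5.0, 5.0
rho_true = 100.0  # halfspace resistivity (Ohm m)
I = 1.0           # injected current (A)

AM, MB = abs(xM - xA), abs(xB - xM)
AN, NB = abs(xN - xA), abs(xB - xN)

# Geometric factor G and the halfspace potential difference, from the expressions above.
G = (1.0 / (2.0 * np.pi)) * (1.0 / AM - 1.0 / MB - 1.0 / AN + 1.0 / NB)
dV = rho_true * I * G

# The apparent resistivity recovers the true value exactly for a uniform halfspace.
rho_app = dV / (I * G)
print(G, dV, rho_app)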
Two layer app A: (+) Current electrode location B: (-) Current electrode location M: (+) Potential electrode location N: (-) Potential electrode location $\rho_1$: Resistivity of the top layer $\rho_2$: Resistivity of the bottom layer h: thickness of the first layer Plot: Field to visualize Type: which part of the field End of explanation app = MidpointPseudoSectionWidget(); display(app) Explanation: 3. Building Pseudosections 2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines as shown below. For pole-dipole or dipole-pole surveys the $45^{\circ}$ line is simply extended from the location of the pole. By using this method of plotting, the long offset electrodes plot deeper than those with short offsets. This provides a rough idea of the region sampled by each data point, but the vertical axis of a pseudo-section is not a true depth. In the widget below the red dot marks the midpoint of the current dipole or the location of the A electrode location in a pole-dipole array while the green dots mark the midpoints of the potential dipoles or M electrode locations in a dipole-pole array. The blue dots then mark the location in the pseudo-section where the lines from Tx and Rx midpoints intersect and the data is plotted. By stepping through the Tx (current electrode pairs) using the slider you can see how the pseudo section is built up. The figures shown below show how the points in a pseudo-section are plotted for pole-dipole, dipole-pole, and dipole-dipole arrays. The color coding of the dots match those shown in the widget. <br /> <br /> <img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/master/images/dc/PoleDipole.png?raw=true"> <center>Basic skematic for a uniformly spaced pole-dipole array. <br /> <br /> <br /> <img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/master/images/dc/DipolePole.png?raw=true"> <center>Basic skematic for a uniformly spaced dipole-pole array. <br /> <br /> <br /> <img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/master/images/dc/DipoleDipole.png?raw=true"> <center>Basic skematic for a uniformly spaced dipole-dipole array. <br /> Pseudo-section app End of explanation app = DC2DPseudoWidget() display(app) Explanation: DC pseudo-section app $\rho_1$: Resistivity of the first layer (thickness of the first layer is 5m) $\rho_2$: Resistivity of the cylinder resistivity of the second layer is 1000 $\Omega$m xc: x location of cylinder center zc: z location of cylinder center r: radius of cylinder surveyType: Type of survey End of explanation app = DC2DfwdWidget() display(app) Explanation: 4. Parametric Inversion In this final widget you are able to forward model the apparent resistivity of a cylinder embedded in a two layered earth. Pseudo-sections of the apparent resistivity can be generated using dipole-dipole, pole-dipole, or dipole-pole arrays to see how survey geometry can distort the size, shape, and location of conductive bodies in a pseudo-section. Due to distortion and artifacts present in pseudo-sections trying to interpret them directly is typically difficult and dangerous due to the risk of misinterpretation. Inverting the data to find a model which fits the observed data and is geologically reasonable should be standard practice. 
By systematically varying the model parameters and comparing the plots of observed vs. predicted apparent resistivity a parametric inversion can be preformed by hand to find the "best" fitting model. Normalized data misfits, which provide a numerical measure of the difference between the observed and predicted data, are useful for quantifying how well and inversion model fits the observed data. The manual inversion process can be difficult and time consuming even with small examples sure as the one presented here. Therefore, numerical optimization algorithms are typically utilized to minimized the data misfit and a model objective function, which provides information about the model structure and complexity, in order to find an optimal solution. Parametric DC inversion app Definition of variables: - $\rho_1$: Resistivity of the first layer - $\rho_2$: Resistivity of the cylinder - xc: x location of cylinder center - zc: z location of cylinder center - r: radius of cylinder - predmis: toggle which allows you to switch the bottom pannel from predicted apparent resistivity to normalized data misfit - suveyType: toggle which allows you to switch between survey types. Knonw information - resistivity of the second layer is 1000 $\Omega$m - thickness of the first layer is known: 5m Unknowns are: $\rho_1$, $\rho_2$, xc, zc, and r End of explanation
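A common way to compute the normalized data misfit mentioned above is to scale the observed-minus-predicted data by an assigned uncertainty; a minimal sketch with placeholder arrays (not output of the widget):

import numpy as np

# Placeholder apparent-resistivity data (Ohm m): observed, predicted, and assigned uncertainty.
d_obs = np.array([105.0, 98.0, 120.0, 80.0])
d_pred = np.array([100.0, 101.0, 115.0, 90.0])
uncertainty = 0.05 * np.abs(d_obs) + 1.0  # e.g. 5% of each datum plus a small floor

# Normalized misfit per datum, and the overall least-squares data misfit used by most inversions.
normalized = (d_obs - d_pred) / uncertainty
phi_d = np.sum(normalized ** 2)
print(normalized)
print("data misfit phi_d =", phi_d)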
3,664
Given the following text description, write Python code to implement the functionality described below step by step Description: CNN Implementation Object recognition and categorization using TensorFlow required a basic understanding of convolutions (for CNNs), common layers (non-linearity, pooling, fc), image loading, image manipulation and colorspaces. With these areas covered, it's possible to build a CNN model for image recognition and classification using TensorFlow. In this case, the model is a dataset provided by Stanford which includes pictures of dogs and their corresponding breed. The network needs to train on these pictures then be judged on how well it can guess a dog's breed based on a picture. The network architecture follows a simplified version of Alex Krizhevsky's AlexNet without all of AlexNet's layers. This architecture was described earlier in the chapter as the network which won ILSVRC'12 top challenge. The network uses layers and techniques familiar to this chapter which are similar to the TensorFlow provided tutorial on CNNs. <p style="text-align Step1: An example of how the archive is organized. The glob module allows directory listing which shows the structure of the files which exist in the dataset. The eight digit number is tied to the WordNet ID of each category used in ImageNet. ImageNet has a browser for image details which accepts the WordNet ID, for example the Chihuahua example can be accessed via http Step3: This example code organized the directory and images ('./imagenet-dogs/n02085620-Chihuahua/n02085620_10131.jpg') into two dictionaries related to each breed including all the images for that breed. Now each dictionary would include Chihuahua images in the following format Step4: The example code is opening each image, converting it to grayscale, resizing it and then adding it to a TFRecord file. The logic isn't different from earlier examples except that the operation tf.image.resize_images is used. The resizing operation will scale every image to be the same size even if it distorts the image. For example, if an image in portrait orientation and an image in landscape orientation were both resized with this code then the output of the landscape image would become distorted. These distortions are caused because tf.image.resize_images doesn't take into account aspect ratio (the ratio of height to width) of an image. To properly resize a set of images, cropping or padding is a preferred method because it ignores the aspect ratio stopping distortions. Load Images Once the testing and training dataset have been transformed to TFRecord format, they can be read as TFRecords instead of as JPEG images. The goal is to load the images a few at a time with their corresponding labels. Step5: This example code loads training images by matching all the TFRecord files found in the training directory. Each TFRecord includes multiple images but the tf.parse_single_example will take a single Example out of the file. The batching operation discussed earlier is used to train multiple images simultaneously. Batching multiple images is useful because these operations are designed to work with multiple images the same as with a single image. The primary requirement is that the system have enough memory to work with them all. With the images available in memory, the next step is to create the model used for training and testing. 
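(As an aside to the resizing discussion above, TensorFlow 1.x also provides an aspect-ratio-preserving alternative; a sketch of forcing a decoded image to a fixed size by cropping or padding rather than stretching, using the example image path mentioned earlier.)

import tensorflow as tf

# Decode one JPEG and force it to 250x151 by cropping or padding, which avoids the
# stretching distortion that tf.image.resize_images introduces.
image_file = tf.read_file("./imagenet-dogs/n02085620-Chihuahua/n02085620_10131.jpg")
image = tf.image.decode_jpeg(image_file)
fixed_size = tf.image.resize_image_with_crop_or_pad(image, 250, 151)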
Model The model used is similar to the mnist convolution example which is often used in tutorials describing convolutional neural networks in TensorFlow. The architecture of this model is simple yet it performs well for illustrating different techniques used in image classification and recognition. An advanced model may borrow more from Alex Krizhevsky's AlexNet design which includes more convolution layers. Step6: The first layer in the model is created using the shortcut tf.contrib.layers.convolution2d. It's important to note that the weight_init is set to be a random normal, meaning that the first set of filters are filled with random numbers following a normal distribution (this parameter is renamed in TensorFlow 0.9 to be weights_initializer). The filters are set as trainable so that as the network is fed information, these weights are adjusted to improve the accuracy of the model. After a convolution is applied to the images, the output is downsized using a max_pool operation. After the operation, the output shape of the convolution is reduced in half due to the ksize used in the pooling and the strides. The reduction didn't change the number of filters (output channels) or the size of the image batch. The components which were reduced dealt with the height and width of the image (filter). Step7: The second layer changes little from the first except the depth of the filters. The number of filters is now doubled while again reducing the size of the height and width of the image. The multiple convolution and pool layers are continuing to reduce the height and width of the input while adding further depth. At this point, further convolution and pool steps could be taken. In many architectures there are over 5 different convolution and pooling layers. The most advanced architectures take longer to train and debug but they can match more sophisticated patterns. In this example, the two convolution and pooling layers are enough to illustrate the mechanics at work. The tensor being operated on is still fairly complex tensor, the next step is to fully connect every point in each image with an output neuron. Since this example is using softmax later, the fully connected layer needs to be changed into a rank two tensor. The tensor's first dimension will be used to separate each image while the second dimension is a rank one tensor of each input tensor. Step8: tf.reshape has a special value which can be used to signify, use all the dimensions remaining. In this example code, the -1 is used to reshape the last pooling layer into a giant rank one tensor. With the pooling layer flattened out, it can be combined with two fully connected layers which associate the current network state to the breed of dog predicted. Step9: This example code creates the final fully connected layer of the network where every pixel is associated with every breed of dog. Every step of this network has been reducing the size of the input images by converting them into filters which are then matched with a breed of dog (label). This technique has reduced the processing power required to train or test a network while generalizing the output. Training Once a model is ready to be trained, the last steps follow the same process discussed in earlier chapters of this book. The model's loss is computed based on how accurately it guessed the correct labels in the training data which feeds into a training optimizer which updates the weights of each layer. 
This process continues one iteration at a time while attempting to increase the accuracy of each step. An important note related to this model, during training most classification functions (tf.nn.softmax) require numerical labels. This was highlighted in the section describing loading the images from TFRecords. At this point, each label is a string similar to n02085620-Chihuahua. Instead of using tf.nn.softmax on this string, the label needs to be converted to be a unique number for each label. Converting these labels into an integer representation should be done in preprocessing. For this dataset, each label will be converted into an integer which represents the index of each name in a list including all the dog breeds. There are many ways to accomplish this task, for this example a new TensorFlow utility operation will be used (tf.map_fn). Step10: This example code uses two different forms of a map operation. The first form of map is used to create a list including only the dog breed name based on a list of directories. The second form of map is tf.map_fn which is a TensorFlow operation which will map a function over a tensor on the graph. The tf.map_fn is used to generate a rank one tensor including only the integer indexes where each label is located in the list of all the class labels. These unique integers can now be used with tf.nn.softmax to classify output predictions. Step11: Debug the Filters with Tensorboard CNNs have multiple moving parts which can cause issues during training resulting in poor accuracy. Debugging problems in a CNN often start with investigating how the filters (kernels) are changing every iteration. Each weight used in a filter is constantly changing as the network attempts to learn the most accurate set of weights to use based on the train method. In a well designed CNN, when the first convolution layer is started, the initialized input weights are set to be random (in this case using weight_init=tf.random_normal). These weights activate over an image and the output of the activation (feature map) is random as well. Visualizing the feature map as if it were an image, the output looks like the original image with static applied. The static is caused by all the weights activating at random. Over many iterations, each filter becomes more uniform as the weights are adjusted to fit the training feedback. As the network converges, the filters resemble distinct small patterns which can be found in the image. <p style="text-align
Python Code: # setup-only-ignore import tensorflow as tf sess = tf.InteractiveSession() import glob image_filenames = glob.glob("./imagenet-dogs/n02*/*.jpg") image_filenames[0:2] Explanation: CNN Implementation Object recognition and categorization using TensorFlow required a basic understanding of convolutions (for CNNs), common layers (non-linearity, pooling, fc), image loading, image manipulation and colorspaces. With these areas covered, it's possible to build a CNN model for image recognition and classification using TensorFlow. In this case, the model is a dataset provided by Stanford which includes pictures of dogs and their corresponding breed. The network needs to train on these pictures then be judged on how well it can guess a dog's breed based on a picture. The network architecture follows a simplified version of Alex Krizhevsky's AlexNet without all of AlexNet's layers. This architecture was described earlier in the chapter as the network which won ILSVRC'12 top challenge. The network uses layers and techniques familiar to this chapter which are similar to the TensorFlow provided tutorial on CNNs. <p style="text-align: center;"><i>The network described in this section including the output TensorShape after each layer. The layers are read from left to right and top to bottom where related layers are grouped together. As the input progresses further into the network, its height and width are reduced while its depth is increased. The increase in depth reduces the computation required to use the network.</i></p> <br /> Stanford Dogs Dataset The dataset used for training this model can be found on Stanford's computer vision site http://vision.stanford.edu/aditya86/ImageNetDogs/. Training the model requires downloading relevant data. After downloading the Zip archive of all the images, extract the archive into a new directory called imagenet-dogs in the same directory as the code building the model. The Zip archive provided by Stanford includes pictures of dogs organized into 120 different breeds. The goal of this model is to train on 80% of the dog breed images and then test using the remaining 20%. If this were a production model, part of the raw data would be reserved for cross-validation of the results. Cross-validation is a useful step to validate the accuracy of a model but this model is designed to illustrate the process and not for competition. The organization of the archive follows ImageNet's practices. Each dog breed is a directory name similar to n02085620-Chihuahua where the second half of the directory name is the dog's breed in English (Chihuahua). Within each directory there is a variable amount of images related to that breed. Each image is in JPEG format (RGB) and of varying sizes. The different sized images is a challenge because TensorFlow is expecting tensors of the same dimensionality. Convert Images to TFRecords The raw images organized in a directory doesn't work well for training because the images are not of the same size and their dog breed isn't included in the file. Converting the images into TFRecord files in advance of training will help keep training fast and simplify matching the label of the image. Another benefit is that the training and testing related images can be separated in advance. Separated training and testing datasets allows continual testing of a model while training is occurring using checkpoint files. 
Converting the images will require changing their colorspace into grayscale, resizing the images to be of uniform size and attaching the label to each image. This conversion should only happen once before training commences and likely will take a long time. End of explanation from itertools import groupby from collections import defaultdict training_dataset = defaultdict(list) testing_dataset = defaultdict(list) # Split up the filename into its breed and corresponding filename. The breed is found by taking the directory name image_filename_with_breed = map(lambda filename: (filename.split("/")[2], filename), image_filenames) # Group each image by the breed which is the 0th element in the tuple returned above for dog_breed, breed_images in groupby(image_filename_with_breed, lambda x: x[0]): # Enumerate each breed's image and send ~20% of the images to a testing set for i, breed_image in enumerate(breed_images): if i % 5 == 0: testing_dataset[dog_breed].append(breed_image[1]) else: training_dataset[dog_breed].append(breed_image[1]) # Check that each breed includes at least 18% of the images for testing breed_training_count = len(training_dataset[dog_breed]) breed_testing_count = len(testing_dataset[dog_breed]) assert round(breed_testing_count / (breed_training_count + breed_testing_count), 2) > 0.18, "Not enough testing images." Explanation: An example of how the archive is organized. The glob module allows directory listing which shows the structure of the files which exist in the dataset. The eight digit number is tied to the WordNet ID of each category used in ImageNet. ImageNet has a browser for image details which accepts the WordNet ID, for example the Chihuahua example can be accessed via http://www.image-net.org/synset?wnid=n02085620. End of explanation def write_records_file(dataset, record_location): Fill a TFRecords file with the images found in `dataset` and include their category. Parameters ---------- dataset : dict(list) Dictionary with each key being a label for the list of image filenames of its value. record_location : str Location to store the TFRecord output. writer = None # Enumerating the dataset because the current index is used to breakup the files if they get over 100 # images to avoid a slowdown in writing. current_index = 0 for breed, images_filenames in dataset.items(): for image_filename in images_filenames: if current_index % 100 == 0: if writer: writer.close() record_filename = "{record_location}-{current_index}.tfrecords".format( record_location=record_location, current_index=current_index) writer = tf.python_io.TFRecordWriter(record_filename) current_index += 1 image_file = tf.read_file(image_filename) # In ImageNet dogs, there are a few images which TensorFlow doesn't recognize as JPEGs. This # try/catch will ignore those images. try: image = tf.image.decode_jpeg(image_file) except: print(image_filename) continue # Converting to grayscale saves processing and memory but isn't required. grayscale_image = tf.image.rgb_to_grayscale(image) resized_image = tf.image.resize_images(grayscale_image, 250, 151) # tf.cast is used here because the resized images are floats but haven't been converted into # image floats where an RGB value is between [0,1). image_bytes = sess.run(tf.cast(resized_image, tf.uint8)).tobytes() # Instead of using the label as a string, it'd be more efficient to turn it into either an # integer index or a one-hot encoded rank one tensor. 
# https://en.wikipedia.org/wiki/One-hot image_label = breed.encode("utf-8") example = tf.train.Example(features=tf.train.Features(feature={ 'label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_label])), 'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])) })) writer.write(example.SerializeToString()) writer.close() write_records_file(testing_dataset, "./output/testing-images/testing-image") write_records_file(training_dataset, "./output/training-images/training-image") Explanation: This example code organized the directory and images ('./imagenet-dogs/n02085620-Chihuahua/n02085620_10131.jpg') into two dictionaries related to each breed including all the images for that breed. Now each dictionary would include Chihuahua images in the following format: training_dataset["n02085620-Chihuahua"] = ["n02085620_10131.jpg", ...] Organizing the breeds into these dictionaries simplifies the process of selecting each type of image and categorizing it. During preprocessing, all the image breeds can be iterated over and their images opened based on the filenames in the list. End of explanation filename_queue = tf.train.string_input_producer( tf.train.match_filenames_once("./output/training-images/*.tfrecords")) reader = tf.TFRecordReader() _, serialized = reader.read(filename_queue) features = tf.parse_single_example( serialized, features={ 'label': tf.FixedLenFeature([], tf.string), 'image': tf.FixedLenFeature([], tf.string), }) record_image = tf.decode_raw(features['image'], tf.uint8) # Changing the image into this shape helps train and visualize the output by converting it to # be organized like an image. image = tf.reshape(record_image, [250, 151, 1]) label = tf.cast(features['label'], tf.string) min_after_dequeue = 10 batch_size = 3 capacity = min_after_dequeue + 3 * batch_size image_batch, label_batch = tf.train.shuffle_batch( [image, label], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue) Explanation: The example code is opening each image, converting it to grayscale, resizing it and then adding it to a TFRecord file. The logic isn't different from earlier examples except that the operation tf.image.resize_images is used. The resizing operation will scale every image to be the same size even if it distorts the image. For example, if an image in portrait orientation and an image in landscape orientation were both resized with this code then the output of the landscape image would become distorted. These distortions are caused because tf.image.resize_images doesn't take into account aspect ratio (the ratio of height to width) of an image. To properly resize a set of images, cropping or padding is a preferred method because it ignores the aspect ratio stopping distortions. Load Images Once the testing and training dataset have been transformed to TFRecord format, they can be read as TFRecords instead of as JPEG images. The goal is to load the images a few at a time with their corresponding labels. End of explanation # Converting the images to a float of [0,1) to match the expected input to convolution2d float_image_batch = tf.image.convert_image_dtype(image_batch, tf.float32) conv2d_layer_one = tf.contrib.layers.convolution2d( float_image_batch, num_output_channels=32, # The number of filters to generate kernel_size=(5,5), # It's only the filter height and width. 
activation_fn=tf.nn.relu, weight_init=tf.random_normal, stride=(2, 2), trainable=True) pool_layer_one = tf.nn.max_pool(conv2d_layer_one, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Note, the first and last dimension of the convolution output hasn't changed but the # middle two dimensions have. conv2d_layer_one.get_shape(), pool_layer_one.get_shape() Explanation: This example code loads training images by matching all the TFRecord files found in the training directory. Each TFRecord includes multiple images but the tf.parse_single_example will take a single Example out of the file. The batching operation discussed earlier is used to train multiple images simultaneously. Batching multiple images is useful because these operations are designed to work with multiple images the same as with a single image. The primary requirement is that the system have enough memory to work with them all. With the images available in memory, the next step is to create the model used for training and testing. Model The model used is similar to the mnist convolution example which is often used in tutorials describing convolutional neural networks in TensorFlow. The architecture of this model is simple yet it performs well for illustrating different techniques used in image classification and recognition. An advanced model may borrow more from Alex Krizhevsky's AlexNet design which includes more convolution layers. End of explanation conv2d_layer_two = tf.contrib.layers.convolution2d( pool_layer_one, num_output_channels=64, # More output channels means an increase in the number of filters kernel_size=(5,5), activation_fn=tf.nn.relu, weight_init=tf.random_normal, stride=(1, 1), trainable=True) pool_layer_two = tf.nn.max_pool(conv2d_layer_two, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') conv2d_layer_two.get_shape(), pool_layer_two.get_shape() Explanation: The first layer in the model is created using the shortcut tf.contrib.layers.convolution2d. It's important to note that the weight_init is set to be a random normal, meaning that the first set of filters are filled with random numbers following a normal distribution (this parameter is renamed in TensorFlow 0.9 to be weights_initializer). The filters are set as trainable so that as the network is fed information, these weights are adjusted to improve the accuracy of the model. After a convolution is applied to the images, the output is downsized using a max_pool operation. After the operation, the output shape of the convolution is reduced in half due to the ksize used in the pooling and the strides. The reduction didn't change the number of filters (output channels) or the size of the image batch. The components which were reduced dealt with the height and width of the image (filter). End of explanation flattened_layer_two = tf.reshape( pool_layer_two, [ batch_size, # Each image in the image_batch -1 # Every other dimension of the input ]) flattened_layer_two.get_shape() Explanation: The second layer changes little from the first except the depth of the filters. The number of filters is now doubled while again reducing the size of the height and width of the image. The multiple convolution and pool layers are continuing to reduce the height and width of the input while adding further depth. At this point, further convolution and pool steps could be taken. In many architectures there are over 5 different convolution and pooling layers. 
The most advanced architectures take longer to train and debug but they can match more sophisticated patterns. In this example, the two convolution and pooling layers are enough to illustrate the mechanics at work. The tensor being operated on is still fairly complex tensor, the next step is to fully connect every point in each image with an output neuron. Since this example is using softmax later, the fully connected layer needs to be changed into a rank two tensor. The tensor's first dimension will be used to separate each image while the second dimension is a rank one tensor of each input tensor. End of explanation # The weight_init parameter can also accept a callable, a lambda is used here returning a truncated normal # with a stddev specified. hidden_layer_three = tf.contrib.layers.fully_connected( flattened_layer_two, 512, weight_init=lambda i, dtype: tf.truncated_normal([38912, 512], stddev=0.1), activation_fn=tf.nn.relu ) # Dropout some of the neurons, reducing their importance in the model hidden_layer_three = tf.nn.dropout(hidden_layer_three, 0.1) # The output of this are all the connections between the previous layers and the 120 different dog breeds # available to train on. final_fully_connected = tf.contrib.layers.fully_connected( hidden_layer_three, 120, # Number of dog breeds in the ImageNet Dogs dataset weight_init=lambda i, dtype: tf.truncated_normal([512, 120], stddev=0.1) ) Explanation: tf.reshape has a special value which can be used to signify, use all the dimensions remaining. In this example code, the -1 is used to reshape the last pooling layer into a giant rank one tensor. With the pooling layer flattened out, it can be combined with two fully connected layers which associate the current network state to the breed of dog predicted. End of explanation import glob # Find every directory name in the imagenet-dogs directory (n02085620-Chihuahua, ...) labels = list(map(lambda c: c.split("/")[-1], glob.glob("./imagenet-dogs/*"))) # Match every label from label_batch and return the index where they exist in the list of classes train_labels = tf.map_fn(lambda l: tf.where(tf.equal(labels, l))[0,0:1][0], label_batch, dtype=tf.int64) Explanation: This example code creates the final fully connected layer of the network where every pixel is associated with every breed of dog. Every step of this network has been reducing the size of the input images by converting them into filters which are then matched with a breed of dog (label). This technique has reduced the processing power required to train or test a network while generalizing the output. Training Once a model is ready to be trained, the last steps follow the same process discussed in earlier chapters of this book. The model's loss is computed based on how accurately it guessed the correct labels in the training data which feeds into a training optimizer which updates the weights of each layer. This process continues one iteration at a time while attempting to increase the accuracy of each step. An important note related to this model, during training most classification functions (tf.nn.softmax) require numerical labels. This was highlighted in the section describing loading the images from TFRecords. At this point, each label is a string similar to n02085620-Chihuahua. Instead of using tf.nn.softmax on this string, the label needs to be converted to be a unique number for each label. Converting these labels into an integer representation should be done in preprocessing. 
For this dataset, each label will be converted into an integer which represents the index of each name in a list including all the dog breeds. There are many ways to accomplish this task, for this example a new TensorFlow utility operation will be used (tf.map_fn). End of explanation # setup-only-ignore loss = tf.reduce_mean( tf.nn.sparse_softmax_cross_entropy_with_logits( final_fully_connected, train_labels)) batch = tf.Variable(0) learning_rate = tf.train.exponential_decay( 0.01, batch * 3, 120, 0.95, staircase=True) optimizer = tf.train.AdamOptimizer( learning_rate, 0.9).minimize( loss, global_step=batch) train_prediction = tf.nn.softmax(final_fully_connected) Explanation: This example code uses two different forms of a map operation. The first form of map is used to create a list including only the dog breed name based on a list of directories. The second form of map is tf.map_fn which is a TensorFlow operation which will map a function over a tensor on the graph. The tf.map_fn is used to generate a rank one tensor including only the integer indexes where each label is located in the list of all the class labels. These unique integers can now be used with tf.nn.softmax to classify output predictions. End of explanation # setup-only-ignore filename_queue.close(cancel_pending_enqueues=True) coord.request_stop() coord.join(threads) Explanation: Debug the Filters with Tensorboard CNNs have multiple moving parts which can cause issues during training resulting in poor accuracy. Debugging problems in a CNN often start with investigating how the filters (kernels) are changing every iteration. Each weight used in a filter is constantly changing as the network attempts to learn the most accurate set of weights to use based on the train method. In a well designed CNN, when the first convolution layer is started, the initialized input weights are set to be random (in this case using weight_init=tf.random_normal). These weights activate over an image and the output of the activation (feature map) is random as well. Visualizing the feature map as if it were an image, the output looks like the original image with static applied. The static is caused by all the weights activating at random. Over many iterations, each filter becomes more uniform as the weights are adjusted to fit the training feedback. As the network converges, the filters resemble distinct small patterns which can be found in the image. <p style="text-align: center;"><i>An original grayscale training image before it is passed through the first convolution layer.</i></p> <br /> <p style="text-align: center;"><i>A single feature map from the first convolution layer highlighting randomness in the output.</i></p> <br /> Debugging a CNN requires a familiarity working with these filters. Currently there isn't any built in support in tensorboard to display filters or feature maps. A simple view of the filters can be done using a tf.image_summary operation on the filters being trained and the feature maps generated. Adding an image summary output to a graph gives a good overview of the filters being used and the feature map generated by applying them to the input images. An in progress jupyter notebook extension worth mentioning is TensorDebugger which is in an early state of development. The extension has a mode capable of viewing changes in filters as an animated gif over iterations. End of explanation
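The cells above define the loss, optimizer and prediction ops but never show the iteration loop itself; a rough TensorFlow 0.x/1.x-style sketch of how training would be driven, assuming the graph and sess built above (the step count and logging interval are arbitrary):

# Initialize variables and start the input-pipeline threads so image_batch/label_batch are filled.
sess.run(tf.initialize_all_variables())  # era-appropriate; later renamed tf.global_variables_initializer
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

# Repeatedly run the optimizer; each call pulls one shuffled batch from the queue.
for step in range(1000):
    _, loss_value = sess.run([optimizer, loss])
    if step % 100 == 0:
        print("step", step, "loss", loss_value)

coord.request_stop()
coord.join(threads)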
3,665
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 2 Perceptrons 2.1 What is a perceptron? x denotes the input signals, y the output signal, and w the weights. The limiting value at which the neuron fires is called the threshold (θ). $$ y = \begin{cases} 0 & (w_{1}x_{1} + w_{2}x_{2} \leq \theta) \\ 1 & (w_{1}x_{1} + w_{2}x_{2} > \theta) \end{cases} $$ 2.2 Simple logic circuits 2.2.1 The AND gate 2.2.2 The NAND and OR gates One possible parameter choice for the OR gate is $$ (w_{1},w_{2},\theta) = (0.5,0.5,0.3) $$ AND, NAND and OR can all be expressed with the same model (equation) just by changing the parameters. Here a human chose the parameters, but in machine learning the computer determines the parameter values automatically; learning is the task of finding appropriate parameters. 2.3 Implementing the perceptron 2.3.1 A simple implementation Step1: 2.3.2 Introducing weights and a bias Rewrite the expression with θ replaced by -b, where b is the bias. $$ y = \begin{cases} 0 & (b + w_{1}x_{1} + w_{2}x_{2} \leq 0) \\ 1 & (b + w_{1}x_{1} + w_{2}x_{2} > 0) \end{cases} $$ Step2: 2.3.3 Implementation with weights and a bias Below, -θ is renamed the bias b. Depending on the context, the weights and the bias together are sometimes simply called the "weights". Step3: 2.4 Limitations of the perceptron 2.4.1 The XOR gate The exclusive OR cannot be implemented with the perceptron used so far. Consider the OR gate: with weight parameters $(b,w_{1},w_{2})=(-0.5,1.0,1.0)$ the perceptron becomes $$ y = \begin{cases} 0 & (-0.5 + x_{1} + x_{2} \leq 0) \\ 1 & (-0.5 + x_{1} + x_{2} > 0) \end{cases} $$ The output y (whether the perceptron fires) can be separated into 0 and 1 regions by a straight line (the red line in the plot). Step4: 2.4.2 Linear and nonlinear The OR gate's output boundary could be drawn with a straight line, i.e. the region is linear. The XOR gate, plotted below, cannot be separated by a straight line; a curve is needed to divide the regions, i.e. the region is nonlinear. Step5: 2.5 Multi-layer perceptrons A single perceptron could not represent the XOR gate, but it becomes possible by stacking perceptrons into multiple layers. 2.5.1 Combining existing gates XOR can be obtained by combining gates: XOR = AND( NAND(x1,x2), OR(x1,x2) )
Python Code: # AND implementation def func_AND(x1, x2): w1, w2, theta = 0.5, 0.5, 0.7 tmp = x1*w1 + x2*w2 if tmp <= theta: return 0 elif tmp > theta: return 1 print(func_AND(0, 0)) print(func_AND(1, 0)) print(func_AND(0, 1)) print(func_AND(1, 1)) Explanation: Chapter 2: Perceptrons 2.1 What is a perceptron? x denotes the input signals, y the output signal, and w the weights. The threshold (θ) is the limiting value at which the neuron fires. $$ y = \begin{cases} 0 & (w_{1}x_{1} + w_{2}x_{2} \leq \theta) \\ 1 & (w_{1}x_{1} + w_{2}x_{2} > \theta) \end{cases} $$ 2.2 Simple logic circuits 2.2.1 AND gate 2.2.2 NAND and OR gates As one example, the OR gate can be expressed as $$ (w_{1},w_{2},\theta) = (0.5,0.5,0.3) $$ AND, NAND, and OR can all be expressed with the same model (equation) just by changing the parameters. Here a human chose the parameters, but in machine learning the job of deciding the parameter values is done automatically by the computer. Learning = the task of determining appropriate parameters. 2.3 Implementing the perceptron 2.3.1 A simple implementation End of explanation import numpy as np x = np.array([0, 1]) w = np.array([0.5, 0.5]) b = -0.7 print(w*x) print(np.sum(w*x)) print(np.sum(w*x)+b) Explanation: 2.3.2 Introducing weights and a bias Rewrite the equation with θ replaced by -b, where b is the bias. $$ y = \begin{cases} 0 & (b + w_{1}x_{1} + w_{2}x_{2} \leq 0) \\ 1 & (b + w_{1}x_{1} + w_{2}x_{2} > 0) \end{cases} $$ End of explanation # Implement the AND gate def AND(x1, x2): x = np.array([x1, x2]) w = np.array([0.5, 0.5]) b = -0.7 tmp = np.sum(w*x) + b if tmp <= 0: return 0 else: return 1 # Implement the NAND gate def NAND(x1, x2): x = np.array([x1, x2]) w = np.array([-0.5, -0.5]) # Only the weights and bias differ from AND b = 0.7 tmp = np.sum(w*x) + b if tmp <= 0: return 0 else: return 1 # Implement the OR gate def OR(x1, x2): x = np.array([x1, x2]) w = np.array([0.5, 0.5]) # Only the weights and bias differ from AND b = -0.2 tmp = np.sum(w*x) + b if tmp <= 0: return 0 else: return 1 Explanation: 2.3.3 Implementation with weights and a bias Below, -θ is named the bias b. Depending on the context, the weights and the bias may all be referred to collectively as the "weights" (parameters). End of explanation import numpy as np import matplotlib.pyplot as plt # Draw the plot's x- and y-axes fig, ax = plt.subplots() #-- Set axis spines at 0 for spine in ['left', 'bottom']: ax.spines[spine].set_position('zero') # Hide the other spines... 
for spine in ['right', 'top']: ax.spines[spine].set_color('none') #-- Decorate the spines arrow_length = 20 # In points # X-axis arrow ax.annotate('X1', xy=(1, 0), xycoords=('axes fraction', 'data'), xytext=(arrow_length, 0), textcoords='offset points', ha='left', va='center', arrowprops=dict(arrowstyle='<|-', fc='black')) # Y-axis arrow ax.annotate('X2', xy=(0, 1), xycoords=('data', 'axes fraction'), xytext=(0, arrow_length), textcoords='offset points', ha='center', va='bottom', arrowprops=dict(arrowstyle='<|-', fc='black')) #-- Plot ax.axis([-1, 2, -0.5, 2]) ax.grid() # Combinations that give y = 0 x1_circle = [0] x2_circle = [0] # Combinations that give y = 1 x1_triangle = [0, 1, 1] x2_triangle = [1, 0, 1] # Boundary between the regions where y is 0 and where y is 1 x1 = np.linspace(-2,3,4) x2 = 0.5 - x1 # Plot the points and the boundary line plt.plot(x1_circle, x2_circle, 'o') plt.plot(x1_triangle, x2_triangle, '^') plt.plot(x1, x2, 'r-') plt.show() Explanation: 2.4 Limitations of the perceptron 2.4.1 The XOR gate The perceptron described so far cannot implement exclusive OR (XOR). Consider the OR gate. With the weight parameters $(b,w_{1},w_{2})=(-0.5,1.0,1.0)$, the perceptron becomes the following equation. $$ y = \begin{cases} 0 & (-0.5 + x_{1} + x_{2} \leq 0) \\ 1 & (-0.5 + x_{1} + x_{2} > 0) \end{cases} $$ The value of y (whether the perceptron fires) can be separated into the 0 and 1 regions by a straight line (the red line in the plot). End of explanation import matplotlib.pyplot as plt from matplotlib.image import imread img = imread('../docs/XOR.png') plt.figure(figsize=(8,5)) plt.imshow(img) plt.show() Explanation: 2.4.2 Linear and nonlinear The OR gate's output boundary could be represented by a straight line; such a region is called linear. The XOR gate, on the other hand, is plotted as shown below. Its regions cannot be separated by a straight line and must instead be divided by a curve; such a region is called nonlinear. End of explanation # 2.5.2 Implementing the XOR gate def XOR(x1, x2): s1 = NAND(x1, x2) s2 = OR(x1, x2) y = AND(s1, s2) return y print(XOR(0, 0)) print(XOR(1, 0)) print(XOR(0, 1)) print(XOR(1, 1)) Explanation: 2.5 Multi-layer perceptrons A single perceptron could not represent the XOR gate. However, XOR becomes representable by stacking perceptrons into multiple layers. 2.5.1 Combining existing gates XOR can be obtained by combining existing gates: XOR = AND( NAND(x1,x2), OR(x1,x2) ) End of explanation from graphviz import Digraph f = Digraph(format="png") f.attr(rankdir='LR', size='8,5') f.attr('node', shape='circle') f.edge('x1', 's1') f.edge('x1', 's2') f.edge('x2', 's1') f.edge('x2', 's2') f.edge('s1', 'y') f.edge('s2', 'y') f.render("../docs/XOR_Perceptron") img = imread('../docs/XOR_Perceptron.png') plt.figure(figsize=(10,8)) plt.imshow(img) plt.show() Explanation: XOR could be represented with a 2-layer perceptron. (Some references count it as a 3-layer perceptron.) A perceptron with stacked layers is called a multi-layered perceptron. The network diagram is shown below. From the left, the layers are layer 0, layer 1, and layer 2. End of explanation
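As a small supplement to the perceptron chapter above, the sketch below (not part of the original notebook) re-expresses the gates in a single weighted-sum form and checks every truth-table row at once, reusing the same weight and bias values chosen in the text.

```python
import numpy as np

def perceptron(x1, x2, w, b):
    """Single perceptron: returns 1 if w.x + b > 0, else 0."""
    return int(np.sum(np.array([x1, x2]) * w) + b > 0)

GATES = {
    "AND":  (np.array([0.5, 0.5]),   -0.7),
    "NAND": (np.array([-0.5, -0.5]),  0.7),
    "OR":   (np.array([0.5, 0.5]),   -0.2),
}

INPUTS = [(0, 0), (1, 0), (0, 1), (1, 1)]

for name, (w, b) in GATES.items():
    print(name, [perceptron(x1, x2, w, b) for x1, x2 in INPUTS])

# XOR as a two-layer combination of the gates above.
def xor(x1, x2):
    s1 = perceptron(x1, x2, *GATES["NAND"])
    s2 = perceptron(x1, x2, *GATES["OR"])
    return perceptron(s1, s2, *GATES["AND"])

print("XOR", [xor(x1, x2) for x1, x2 in INPUTS])
```

Running it prints the full truth tables, confirming that one set of weights per gate suffices for AND, NAND, and OR, while XOR only emerges from the two-layer combination.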
3,666
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: Migrating tf.summary usage to TF 2.x <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: TensorFlow 2.x includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. What's changed It's useful to think of the tf.summary API as two sub-APIs Step3: Example usage with tf.function graph execution Step4: Example usage with legacy TF 1.x graph execution Step5: Converting your code Converting existing tf.summary usage to the TF 2.x API cannot be reliably automated, so the tf_upgrade_v2 script just rewrites it all to tf.compat.v1.summary and will not enable the TF 2.x behaviors automatically. Partial Migration To make migration to TF 2.x easier for users of model code that still depends heavily on the TF 1.x summary API logging ops like tf.compat.v1.summary.scalar(), it is possible to migrate only the writer APIs first, allowing for individual TF 1.x summary ops inside your model code to be fully migrated at a later point. To support this style of migration, <a href="https
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation import tensorflow as tf Explanation: Migrating tf.summary usage to TF 2.x <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tensorboard/migrate"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This doc is for people who are already familiar with TensorFlow 1.x TensorBoard and who want to migrate large TensorFlow code bases from TensorFlow 1.x to 2.x. If you're new to TensorBoard, see the get started doc instead. If you are using tf.keras there may be no action you need to take to upgrade to TensorFlow 2.x. End of explanation writer = tf.summary.create_file_writer("/tmp/mylogs/eager") with writer.as_default(): for step in range(100): # other model code would go here tf.summary.scalar("my_metric", 0.5, step=step) writer.flush() ls /tmp/mylogs/eager Explanation: TensorFlow 2.x includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. What's changed It's useful to think of the tf.summary API as two sub-APIs: A set of ops for recording individual summaries - summary.scalar(), summary.histogram(), summary.image(), summary.audio(), and summary.text() - which are called inline from your model code. Writing logic that collects these individual summaries and writes them to a specially formatted log file (which TensorBoard then reads to generate visualizations). In TF 1.x The two halves had to be manually wired together - by fetching the summary op outputs via Session.run() and calling FileWriter.add_summary(output, step). The v1.summary.merge_all() op made this easier by using a graph collection to aggregate all summary op outputs, but this approach still worked poorly for eager execution and control flow, making it especially ill-suited for TF 2.x. In TF 2.X The two halves are tightly integrated, and now individual tf.summary ops write their data immediately when executed. Using the API from your model code should still look familiar, but it's now friendly to eager execution while remaining graph-mode compatible. 
Integrating both halves of the API means the summary.FileWriter is now part of the TensorFlow execution context and gets accessed directly by tf.summary ops, so configuring writers is the main part that looks different. Example usage with eager execution, the default in TF 2.x: End of explanation writer = tf.summary.create_file_writer("/tmp/mylogs/tf_function") @tf.function def my_func(step): with writer.as_default(): # other model code would go here tf.summary.scalar("my_metric", 0.5, step=step) for step in tf.range(100, dtype=tf.int64): my_func(step) writer.flush() ls /tmp/mylogs/tf_function Explanation: Example usage with tf.function graph execution: End of explanation g = tf.compat.v1.Graph() with g.as_default(): step = tf.Variable(0, dtype=tf.int64) step_update = step.assign_add(1) writer = tf.summary.create_file_writer("/tmp/mylogs/session") with writer.as_default(): tf.summary.scalar("my_metric", 0.5, step=step) all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops() writer_flush = writer.flush() with tf.compat.v1.Session(graph=g) as sess: sess.run([writer.init(), step.initializer]) for i in range(100): sess.run(all_summary_ops) sess.run(step_update) sess.run(writer_flush) ls /tmp/mylogs/session Explanation: Example usage with legacy TF 1.x graph execution: End of explanation # Enable eager execution. tf.compat.v1.enable_v2_behavior() # A default TF 2.x summary writer is available. writer = tf.summary.create_file_writer("/tmp/mylogs/enable_v2_in_v1") # A step is set for the writer. with writer.as_default(step=0): # Below invokes `tf.summary.scalar`, and the return value is an empty bytestring. tf.compat.v1.summary.scalar('float', tf.constant(1.0), family="family") Explanation: Converting your code Converting existing tf.summary usage to the TF 2.x API cannot be reliably automated, so the tf_upgrade_v2 script just rewrites it all to tf.compat.v1.summary and will not enable the TF 2.x behaviors automatically. Partial Migration To make migration to TF 2.x easier for users of model code that still depends heavily on the TF 1.x summary API logging ops like tf.compat.v1.summary.scalar(), it is possible to migrate only the writer APIs first, allowing for individual TF 1.x summary ops inside your model code to be fully migrated at a later point. To support this style of migration, <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary"><code>tf.compat.v1.summary</code></a> will automatically forward to their TF 2.x equivalents under the following conditions: The outermost context is eager mode A default TF 2.x summary writer has been set A non-empty value for step has been set for the writer (using <a href="https://www.tensorflow.org/api_docs/python/tf/summary/SummaryWriter#as_default"><code>tf.summary.SummaryWriter.as_default</code></a>, <a href="https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step"><code>tf.summary.experimental.set_step</code></a>, or alternatively <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/create_global_step"><code>tf.compat.v1.train.create_global_step</code></a>) Note that when TF 2.x summary implementation is invoked, the return value will be an empty bytestring tensor, to avoid duplicate summary writing. Additionally, the input argument forwarding is best-effort and not all arguments will be preserved (for instance family argument will be supported whereas collections will be removed). 
Example to invoke <a href="https://www.tensorflow.org/api_docs/python/tf/summary/scalar"><code>tf.summary.scalar</code></a> behaviors in <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary/scalar"><code>tf.compat.v1.summary.scalar</code></a>: End of explanation
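The partial-migration conditions listed above can also be satisfied with tf.summary.experimental.set_step instead of passing step to as_default. The short sketch below is not taken from the original guide; the log directory name is an arbitrary placeholder, and it assumes eager execution (the TF 2.x default) with a TF 2.x writer.

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/mylogs/partial_migration")

with writer.as_default():
    for step in range(3):
        # Setting the step globally means the legacy call below needs no step argument.
        tf.summary.experimental.set_step(step)
        # This TF 1.x-style op is forwarded to tf.summary.scalar and returns an
        # empty bytestring, as described above.
        tf.compat.v1.summary.scalar("my_metric", tf.constant(0.5))
    writer.flush()
```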
3,667
Given the following text description, write Python code to implement the functionality described below step by step Description: Ground Penetrating Radar Depth of Investigation and Resolution Overview This notebook contains two apps, which are used to complete part 2 and part 3 in team TBL assignment 4 Step1: GPR Zero Offset App (Wave Regime) This app is used to complete part 2 of the team TBL. As previously mentioned, the app simulates radargram data from two reflectors buried in a homogeneous Earth. The range of parameter values for this app is set such that we may assume we are operating in the wave regime. In the wave regime, the following formulas can be used to approximate propagation velocity and skin depth Step2: Attenuation App This app is used to complete part 3 of the team TBL. As mentioned previously, the app computes the propagation velocity and skin depth for GPR signals as a function of operating frequency. Because we are working in the general case, the propagation velocity and skin depth are given by
Python Code: import turicreate as tc import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Using turicreate for easy ML on SEEG dataset For example, one thing that makes it easy is that turicreate automagically creates dummy variables for categorical features. End of explanation sf = tc.SFrame.read_csv('electrodes_clean.csv') sf.explore() # in GUI # optional save to SFrame # sf = tc.SFrame('electrodes_clean.sframe') Explanation: Reading cleaned data End of explanation sf_reg = sf.remove_column('TPLE category') sf_class = sf.remove_column('TPLE') Explanation: Preparing data We cannot have both TPLE and TPLE category in same set or results will be biased. So create two data sets: one for regression (removing TPLE category as feature) and one for classification (removing TPLE as feature). Regarding the final (automatically selected) model: - &lt;model&gt;.summary() summarizes the model parameters - &lt;model&gt;.features shows which features have been included (= all selected for model building) End of explanation sf_reg_train, sf_reg_test = sf_reg.random_split(0.8) reg_model = tc.regression.create(sf_reg_train, target = 'TPLE') reg_model.evaluate(sf_reg_test) reg_model.summary() Explanation: Regression approach End of explanation sf_class_train, sf_class_test = sf_class.random_split(0.8) class_model = tc.classifier.create(sf_class_train, target = 'TPLE category') metrics = class_model.evaluate(sf_class_test) metrics # metrics['confusion_matrix'] class_model.summary() Explanation: Classification approach Multi-class solution Using TPLE category End of explanation # create new dataset - easier when experimenting with different cutoff values # remove column 'TPLE category' - otherwise we severely bias results sf_dev = sf_class.remove_column('TPLE category') def evaluate_classification_for_cutoff(value): '''Creates dataframe with predefined cutoff value. Useful to play with different cutoffs. Value represents the deviation in mm. Returns metrics of model''' sf_dev['Deviated'] = sf['TPLE'].apply(lambda tple: 'yes' if tple > value else 'no') sf_dev_train, sf_dev_test = sf_dev.random_split(0.8) model = tc.classifier.create(sf_dev_train, target = 'Deviated', verbose = False) metrics = model.evaluate(sf_dev_test) return metrics cutoff_values = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0] results = {} for cv in cutoff_values: metr = evaluate_classification_for_cutoff(cv) results.update({cv: metr}) plt.figure() for cutoff, metric in results.items(): acc = metric['accuracy']; auc = metric['auc'] print(f"Cutoff {cutoff} - Accuracy: {acc:.2f} | AUC: {auc:.2f}") plt.plot(cutoff, acc, 'bo', label = 'Accuracy') # Accuracy in BLUE plt.plot(cutoff, auc, 'ro', label = 'AUC') # AUC in RED Explanation: Binary solution Creating new column "Deviated" (yes/no) based on -arbitrary- cut off value. End of explanation
3,668
Given the following text description, write Python code to implement the functionality described below step by step Description: Using turicreate for easy ML on SEEG dataset For example, one thing that makes it easy is that turicreate automagically creates dummy variables for categorical features. Step1: Reading cleaned data Step2: Preparing data We cannot have both TPLE and TPLE category in the same set, or the results will be biased. So create two data sets Step3: Regression approach Step4: Classification approach Multi-class solution Using TPLE category Step5: Binary solution Creating a new column "Deviated" (yes/no) based on an -arbitrary- cut-off value.
Python Code: X = tf.placeholder(tf.float32, name="X") Y = tf.placeholder(tf.float32, name="Y") Explanation: Step 1: read in data from the .xls file Step 2: create placeholders for input X (number of fire) and label Y (number of theft) End of explanation w = tf.Variable(0.0, name='w') b = tf.Variable(0.0, name='b') Explanation: Step 3: create weight and bias, initialized to 0 End of explanation Y_predicted = w * X + b Explanation: Step 4: build model to predict Y End of explanation loss = tf.square(Y - Y_predicted) Explanation: Step 5: use the square error as the loss function End of explanation def huber_loss(labels, predictions, delta=1.0): pass # loss = utils.huber_loss(Y, Y_predicted) Explanation: Step 5a: implement Huber loss function from lecture and try it out End of explanation optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss) sess = tf.Session() # prefer with tf.Session() as sess: in your code Explanation: Step 6: using gradient descent with learning rate of 0.01 to minimize loss End of explanation sess.run(tf.global_variables_initializer()) writer = tf.summary.FileWriter('./graphs/linear_reg', sess.graph) Explanation: Step 7: initialize the necessary variables, in this case, w and b End of explanation for i in range(50): # train the model 50 epochs total_loss = 0 for x, y in data: # Session runs train_op and fetch values of loss _, l = sess.run([optimizer, loss], feed_dict={X:x, Y:y}) total_loss += l print('Epoch {0}: {1}'.format(i, total_loss/float(n_samples))) # close the writer when you're done using it writer.close() Explanation: Step 8: train the model End of explanation w, b = sess.run([w, b]) Explanation: Step 9: output the values of w and b End of explanation X, Y = data[:, 0], data[:, 1] plt.scatter(X, Y, label="Real data") plt.plot(X, w * X + b, label="Predicted data", color='r') plt.show() Explanation: Step 10: plot the results End of explanation
3,669
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute spatial resolution metrics in source space Compute peak localisation error and spatial deviation for the point-spread functions of dSPM and MNE. Plot their distributions and difference of distributions. This example mimics some results from Step1: MNE Compute resolution matrices, peak localisation error (PLE) for point spread functions (PSFs), spatial deviation (SD) for PSFs Step2: dSPM Do the same for dSPM Step3: Visualize results Visualise peak localisation error (PLE) across the whole cortex for MNE PSF Step4: And dSPM Step5: Subtract the two distributions and plot this difference Step6: These plots show that dSPM has generally lower peak localization error (red color) than MNE in deeper brain areas, but higher error (blue color) in more superficial areas. Next we'll visualise spatial deviation (SD) across the whole cortex for MNE PSF Step7: And dSPM Step8: Subtract the two distributions and plot this difference
Python Code: # Author: Olaf Hauk <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.minimum_norm import make_inverse_resolution_matrix from mne.minimum_norm import resolution_metrics print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects/' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif' fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif' # read forward solution forward = mne.read_forward_solution(fname_fwd) # forward operator with fixed source orientations mne.convert_forward_solution(forward, surf_ori=True, force_fixed=True, copy=False) # noise covariance matrix noise_cov = mne.read_cov(fname_cov) # evoked data for info evoked = mne.read_evokeds(fname_evo, 0) # make inverse operator from forward solution # free source orientation inverse_operator = mne.minimum_norm.make_inverse_operator( info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0., depth=None) # regularisation parameter snr = 3.0 lambda2 = 1.0 / snr ** 2 Explanation: Compute spatial resolution metrics in source space Compute peak localisation error and spatial deviation for the point-spread functions of dSPM and MNE. Plot their distributions and difference of distributions. This example mimics some results from :footcite:HaukEtAl2019, namely Figure 3 (peak localisation error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs, L2-MNE vs dSPM). End of explanation rm_mne = make_inverse_resolution_matrix(forward, inverse_operator, method='MNE', lambda2=lambda2) ple_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'], function='psf', metric='peak_err') sd_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'], function='psf', metric='sd_ext') del rm_mne Explanation: MNE Compute resolution matrices, peak localisation error (PLE) for point spread functions (PSFs), spatial deviation (SD) for PSFs: End of explanation rm_dspm = make_inverse_resolution_matrix(forward, inverse_operator, method='dSPM', lambda2=lambda2) ple_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'], function='psf', metric='peak_err') sd_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'], function='psf', metric='sd_ext') del rm_dspm, forward Explanation: dSPM Do the same for dSPM: End of explanation brain_ple_mne = ple_mne_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=1, clim=dict(kind='value', lims=(0, 2, 4))) brain_ple_mne.add_text(0.1, 0.9, 'PLE MNE', 'title', font_size=16) Explanation: Visualize results Visualise peak localisation error (PLE) across the whole cortex for MNE PSF: End of explanation brain_ple_dspm = ple_dspm_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=2, clim=dict(kind='value', lims=(0, 2, 4))) brain_ple_dspm.add_text(0.1, 0.9, 'PLE dSPM', 'title', font_size=16) Explanation: And dSPM: End of explanation diff_ple = ple_mne_psf - ple_dspm_psf brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=3, clim=dict(kind='value', pos_lims=(0., 1., 2.))) brain_ple_diff.add_text(0.1, 0.9, 'PLE MNE-dSPM', 'title', font_size=16) Explanation: Subtract the two distributions and plot this difference End of explanation brain_sd_mne = sd_mne_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=4, clim=dict(kind='value', lims=(0, 2, 4))) brain_sd_mne.add_text(0.1, 0.9, 'SD MNE', 'title', font_size=16) Explanation: These plots 
show that dSPM has generally lower peak localization error (red color) than MNE in deeper brain areas, but higher error (blue color) in more superficial areas. Next we'll visualise spatial deviation (SD) across the whole cortex for MNE PSF: End of explanation brain_sd_dspm = sd_dspm_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=5, clim=dict(kind='value', lims=(0, 2, 4))) brain_sd_dspm.add_text(0.1, 0.9, 'SD dSPM', 'title', font_size=16) Explanation: And dSPM: End of explanation diff_sd = sd_mne_psf - sd_dspm_psf brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir, figure=6, clim=dict(kind='value', pos_lims=(0., 1., 2.))) brain_sd_diff.add_text(0.1, 0.9, 'SD MNE-dSPM', 'title', font_size=16) Explanation: Subtract the two distributions and plot this difference: End of explanation
3,670
Given the following text description, write Python code to implement the functionality described below step by step Description: Adding optional relationships changes the dcnt value SYNOPSIS Step1: 2) Depth-01 term, GO Step2: Notice that dcnt=0 for GO Step3: 3) Depth-01 term, GO Step4: 4) Depth-01 term, GO Step5: 5) Descendants under GO Step6: 6) Plot descendants of virion
Python Code: from goatools.base import get_godag godag = get_godag("go-basic.obo", optional_attrs={'relationship'}) go_leafs = set(o.item_id for o in godag.values() if not o.children) Explanation: Adding optional relationships changes the dcnt value SYNOPSIS: For GO:0019012, virion, the descendants count dcnt, is: * 0 when using is_a relationships (the default) and * 48 when adding the optional relationship, part_of. Table of Contents: 1. Download Ontologies, if necessary 2. Depth-01 term, virion, has dcnt=0 through is_a relationships (default) 3. Depth-01 term, virion, dcnt value is higher using all relationships 4. Depth-01 term, virion, dcnt value is higher using part_of relationships 5. Descendants under virion 6. Plot some descendants of virion 1) Download Ontologies, if necessary End of explanation virion = 'GO:0019012' from goatools.gosubdag.gosubdag import GoSubDag gosubdag_r0 = GoSubDag(go_leafs, godag) Explanation: 2) Depth-01 term, GO:0019012 (virion) has dcnt=0 through is_a relationships (default) GO:0019012, virion, has no GO terms below it through the is_a relationship, so the default value of dcnt will be zero, even though it is very high in the DAG at depth=01. End of explanation nt_virion = gosubdag_r0.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt)) Explanation: Notice that dcnt=0 for GO:0019012, virion, even though it is very high in the DAG hierarchy (depth=1). This is because there are no GO IDs under GO:0019012 (virion) using the is_a relationship. End of explanation gosubdag_r1 = GoSubDag(go_leafs, godag, relationships=True) nt_virion = gosubdag_r1.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt)) Explanation: 3) Depth-01 term, GO:0019012 (virion) dcnt value is higher using all relationships Load all relationships into GoSubDag using relationships=True End of explanation gosubdag_partof = GoSubDag(go_leafs, godag, relationships={'part_of'}) nt_virion = gosubdag_partof.go2nt[virion] print(nt_virion) print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt)) Explanation: 4) Depth-01 term, GO:0019012 (virion) dcnt value is higher using part_of relationships Load all relationships into GoSubDag using relationships={'part_of'} End of explanation virion_descendants = gosubdag_partof.rcntobj.go2descendants[virion] print('{N} descendants of virion were found'.format(N=len(virion_descendants))) Explanation: 5) Descendants under GO:0019012 (virion) End of explanation from goatools.gosubdag.plot.gosubdag_plot import GoSubDagPlot # Limit plot of descendants to get a smaller plot virion_capsid_fiber = {'GO:0098033', 'GO:0098032'} nts = gosubdag_partof.prt_goids(virion_capsid_fiber, '{NS} {GO} dcnt({dcnt}) D-{depth:02} {GO_name}') # Limit plot size by choosing just two virion descendants # Get a subset containing only a couple virion descendants and their ancestors pltdag = GoSubDag(virion_capsid_fiber, godag, relationships={'part_of'}) pltobj = GoSubDagPlot(pltdag) pltobj.plt_dag('virion_capsid_fiber.png') Explanation: 6) Plot descendants of virion End of explanation
3,671
Given the following text description, write Python code to implement the functionality described below step by step Description: Conditional Probability This lab is an introduction to visualizing conditional probabilities. We will cover icon arrays. These do not appear in the textbook and will not appear on any exam, but they will help you gain intuition about conditional probability. Administrative details This lab will not be collected. Conditional probability will appear on the final exam, and this is an opportunity to understand it better. We recommend going through at least part 2. You can complete the rest later as an exercise when you're studying. Step1: 1. What is conditional probability good for? Suppose we have a known population, like all dogs in California. So far, we've seen 3 ways of predicting something about an individual in that population, given incomplete knowledge about the identity of the individual Step2: Here's a table with those marbles Step6: We've included some code to display something called an icon array. The functions in the cell below create icon arrays from various kinds of tables. Refer back to this cell later when you need to make an icon array. Step7: Here's an icon array of all the marbles, grouped by color and size Step8: Note that the icon colors don't correspond to the colors of the marbles they represent. You (the marble) should imagine that you are a random draw from these 13 icons. Question 2.2. Make an icon array of the marbles, grouped only by color. Step9: Knowing nothing else about yourself, you're equally likely to be any of the marbles pictured. Question 2.3. What's the probability that you're a green marble? Calculate this by hand (using Python for arithmetic) by looking at your icon array. Step10: 2.1. Conditional probability Suppose you overhear Samantha saying that you're a large marble. (Little-known fact Step11: In question 2.3, we assumed you were equally likely to be any of the marbles, because we didn't know any better. That's why we looked at all the marbles to compute the probability you were green. But assuming you're a large marble, we can eliminate some of these possibilities. In particular, you can't be a small green marble or a small red marble. You're still equally likely to be any of the remaining marbles, because you don't know anything that says otherwise. So here's an icon array of those remaining possibilities Step12: Question 2.1.1. What's the probability you're a green marble, knowing that you're a large marble? Calculate it by hand, using the icon array. Step13: You should have found that this is different from the probability that you're a green marble, which you computed earlier. The distribution of colors among the large marbles is a little different from the distribution of colors among all the marbles. Question 2.1.2. Suppose instead Samantha had said you're a green marble. What's the probability you're large? Make an icon array to help you compute this probability, then compute it. Hint Step14: Question 2.1.3. How could you answer the last two questions just by looking at the full icon array? (You can run the cell below to see it again.) Step15: Write your answer here, replacing this text. 3. Cancer screening Now let's look at a much more realistic application. Background Medical tests are an important but surprisingly controversial topic. For years, women have been advised to get regular mammograms (tests for breast cancer). Today, there is controversy over whether the tests are useful at all. 
Part of the problem with such tests is that they are not perfectly reliable. Someone without cancer, or with only a benign form of cancer, can see a positive result on a test for cancer. Someone with cancer can receive a negative result. ("Positive" means "pointing toward cancer," so in this context it's bad!) Doctors and patients often deal poorly with the first case, called false positives. For example, a patient may receive dangerous treatment like chemotherapy or radiation despite having no cancer or, as happens more frequently, having a cancer that would not have impacted her health. Conditional probability is a good way to think about such situations. For example, you can compute the chance that you have cancer, given the result of a test, by combining information from different probability distributions. You'll see that the chance you have cancer can be far from 100% even if you have a positive test result from a test that is usually accurate. 3.1. Basic cancer statistics Suppose that, in a representative group of 10,000 people who are tested for cancer ("representative" meaning that the frequencies of different things are the same as the frequencies in the whole population) Step16: One way to visualize this dataset is with a contingency table, which you've seen before. Question 3.1.1. Create a contingency table that looks like this Step17: Question 3.1.2. Display the people data in an icon array. The name of the population members should be "people who've taken a cancer test". Step18: Now let's think about how you can use this kind of information when you're tested for cancer. Before you know any information about yourself, you could imagine yourself as a uniform random sample of one of the 10,000 people in this imaginary population of people who have been tested. What's the chance that you have cancer, knowing nothing else about yourself? It's $\frac{100}{10000}$, or 1%. We can see that more directly with this icon array Step19: Question 3.1.3. What's the chance that you have a positive test result, knowing nothing else about yourself? Hint Step20: 3.2. Interpreting test results Suppose you have a positive test result. This means you can now narrow yourself down to being part of one of two groups Step21: The conditional probability that you have cancer given your positive test result is the chance that you're in the first group, assuming you're in one of these two groups. Question 3.2.1. Eyeballing it, is the conditional probability that you have cancer given your positive test result closest to Step22: Question 3.2.2. Now write code to calculate that probability exactly, using the original contingency table you wrote. Step23: Question 3.2.3. Look at the full icon array again. Using that, how would you compute (roughly) the conditional probability of cancer given a positive test? Step24: Write your answer here, replacing this text. Question 3.2.4. Is your answer to question 3.2.2 bigger than the overall proportion of people in the population who have cancer? Does that make sense? Write your answer here, replacing this text. 4. Tree diagrams A tree diagram is another useful visualization for conditional probability. It is easiest to draw a tree diagram when the probabilities are presented in a slightly different way. For example, people often summarize the information in your cancer table using 3 numbers
Python Code: # Run this cell to set up the notebook, but please don't change it. # These lines import the Numpy and Datascience modules. import numpy as np from datascience import * # These lines do some fancy plotting magic. import matplotlib %matplotlib inline import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') import warnings warnings.simplefilter('ignore', FutureWarning) # This line loads the visualization code for this lab. import visualizations # These lines load the tests. from client.api.assignment import load_assignment tests = load_assignment('lab10.ok') Explanation: Conditional Probability This lab is an introduction to visualizing conditional probabilities. We will cover icon arrays. These do not appear in the textbook and will not appear on any exam, but they will help you gain intuition about conditional probability. Administrative details This lab will not be collected. Conditional probability will appear on the final exam, and this is an opportunity to understand it better. We recommend going through at least part 2. You can complete the rest later as an exercise when you're studying. End of explanation probability_large_green = ... _ = tests.grade("q21") Explanation: 1. What is conditional probability good for? Suppose we have a known population, like all dogs in California. So far, we've seen 3 ways of predicting something about an individual in that population, given incomplete knowledge about the identity of the individual: If we know nothing about the individual dog, we could predict that its speed is the average or median of all the speeds in the population. If we know the dog's height but not its speed, we could use linear regression to predict its speed from its height. The resulting prediction is still imperfect, but it might be more accurate than the population average. If we know the dog's breed, height, and age, we could use nearest-neighbor classification to predict its speed by comparing to a collection of dogs with known speed. Computing conditional probabilities is a different way of making predictions. It differs in at least two important ways from the methods we've seen: 1. Rather than producing a single answer that might be wrong, we just figure out how likely each possible answer is. 2. In the simple (but important) cases we'll look at today, conditional probabilities can be calculated exactly from assumptions rather than estimated from data. By contrast, there are many techniques for classification, and even once we choose k-Nearest Neighbors, we get different results for different values of k. 2. Icon arrays Parts 3 and 4 of this lab are about cancer, but first let's start with a simple, contrived example. Imagine you are a marble. You don't know what you look like (since you obviously have no eyes), but you know that Samantha drew you uniformly at random from a bag that contained the following marbles: * 4 large green marbles, * 1 large red marble, * 6 small green marbles, and * 2 small red marbles. Question 2.1. Knowing only what we've told you so far, what's the probability that you're a large green marble? End of explanation marbles = Table.read_table("marbles.csv") marbles Explanation: Here's a table with those marbles: End of explanation # Run this cell. ####################################################################### # The functions you'll need to actually use are in here. Each is a # way of making an icon array from a differently-formatted table. 
####################################################################### def display_icon_array(table, groups, individuals_name): Given a table and some columns to group it on, displays an icon array of the groups. groups should be an array of labels of columns in table. individuals_name is your name for the individual rows of table. For example, if we're talking about a population of people, individuals_name should be "people". For example: display_icon_array(marbles, make_array("color", "size"), "marbles") display_grouped_icon_array(table.groups(groups), individuals_name) def display_grouped_icon_array(grouped_data, individuals_name): Given a table with counts for data grouped by 1 or more categories, displays an icon array of the groups represented in the table. grouped_data should be a table of frequencies or counts, such as a table created by calling the groups method on some table. individuals_name is your name for the individual members of the dataset. For example, if we're talking about a population of people, individuals_name should be "people". For example: display_grouped_icon_array(marbles.groups(make_array("color", "size")), "marbles") visualizations.display_combinations(grouped_data, individuals_name=individuals_name) def display_crosstab_icon_array(crosstabulation, x_label, individuals_name): Given a crosstabulation table, displays an icon array of the groups represented in the table. crosstabulation should be a table of frequencies or counts created by calling pivot on some table. x_label should be the label of the categories listed as columns (on the "x axis" when the crosstabulation table is printed). individuals_name is your name for the individual members of the dataset. For example, if we're talking about a population of people, individuals_name should be "people". For example: display_crosstab_icon_array(marbles.pivot("color", "size"), "color", "marbles") display_grouped_icon_array(visualizations.pivot_table_to_groups(crosstabulation, x_label), individuals_name) Explanation: We've included some code to display something called an icon array. The functions in the cell below create icon arrays from various kinds of tables. Refer back to this cell later when you need to make an icon array. End of explanation # Run this cell. display_grouped_icon_array(marbles.groups(make_array("color", "size")), "marbles") Explanation: Here's an icon array of all the marbles, grouped by color and size: End of explanation ... Explanation: Note that the icon colors don't correspond to the colors of the marbles they represent. You (the marble) should imagine that you are a random draw from these 13 icons. Question 2.2. Make an icon array of the marbles, grouped only by color. End of explanation probability_green = ... _ = tests.grade("q23") Explanation: Knowing nothing else about yourself, you're equally likely to be any of the marbles pictured. Question 2.3. What's the probability that you're a green marble? Calculate this by hand (using Python for arithmetic) by looking at your icon array. End of explanation display_grouped_icon_array(marbles.groups(make_array("color", "size")), "marbles") Explanation: 2.1. Conditional probability Suppose you overhear Samantha saying that you're a large marble. (Little-known fact: though marbles lack eyes, they possess rudimentary ears.) Does this somehow change the likelihood that you're green? Let's find out. Go back to the full icon array, displayed below for convenience. End of explanation # Just run this cell. 
display_grouped_icon_array(marbles.where("size", "large").group("color"), "large marbles") Explanation: In question 2.3, we assumed you were equally likely to be any of the marbles, because we didn't know any better. That's why we looked at all the marbles to compute the probability you were green. But assuming you're a large marble, we can eliminate some of these possibilities. In particular, you can't be a small green marble or a small red marble. You're still equally likely to be any of the remaining marbles, because you don't know anything that says otherwise. So here's an icon array of those remaining possibilities: End of explanation probability_green_given_large = ... _ = tests.grade("q211") Explanation: Question 2.1.1. What's the probability you're a green marble, knowing that you're a large marble? Calculate it by hand, using the icon array. End of explanation # Make an icon array to help you compute the answer. ... # Now compute the answer. probability_large_given_green = ... _ = tests.grade("q212") Explanation: You should have found that this is different from the probability that you're a green marble, which you computed earlier. The distribution of colors among the large marbles is a little different from the distribution of colors among all the marbles. Question 2.1.2. Suppose instead Samantha had said you're a green marble. What's the probability you're large? Make an icon array to help you compute this probability, then compute it. Hint: Look at the code we wrote to generate an icon array for question 2.1.1. End of explanation # Just run this cell. The next cell is where you should write your answer. display_grouped_icon_array(marbles.groups(make_array("color", "size")), "marbles") Explanation: Question 2.1.3. How could you answer the last two questions just by looking at the full icon array? (You can run the cell below to see it again.) End of explanation people = Table().with_columns( "cancer status", make_array("sick", "sick", "healthy", "healthy"), "test status", make_array("positive", "negative", "positive", "negative"), "count", make_array(90, 10, 198, 9702)) people Explanation: Write your answer here, replacing this text. 3. Cancer screening Now let's look at a much more realistic application. Background Medical tests are an important but surprisingly controversial topic. For years, women have been advised to get regular mammograms (tests for breast cancer). Today, there is controversy over whether the tests are useful at all. Part of the problem with such tests is that they are not perfectly reliable. Someone without cancer, or with only a benign form of cancer, can see a positive result on a test for cancer. Someone with cancer can receive a negative result. ("Positive" means "pointing toward cancer," so in this context it's bad!) Doctors and patients often deal poorly with the first case, called false positives. For example, a patient may receive dangerous treatment like chemotherapy or radiation despite having no cancer or, as happens more frequently, having a cancer that would not have impacted her health. Conditional probability is a good way to think about such situations. For example, you can compute the chance that you have cancer, given the result of a test, by combining information from different probability distributions. You'll see that the chance you have cancer can be far from 100% even if you have a positive test result from a test that is usually accurate. 3.1. 
Basic cancer statistics Suppose that, in a representative group of 10,000 people who are tested for cancer ("representative" meaning that the frequencies of different things are the same as the frequencies in the whole population): 1. 100 have cancer. 2. Among those 100, 90 have positive results on a cancer test and 10 have negative results. ("Negative" means "not pointing toward cancer.") 3. The other 9,900 don't have cancer. 4. Among these, 198 have positive results on a cancer test and the other 9,702 have negative results. (So 198 see "false positive" results.) Below we've generated a table with data from these 10,000 hypothetical people. End of explanation cancer = ... cancer _ = tests.grade("q311") Explanation: One way to visualize this dataset is with a contingency table, which you've seen before. Question 3.1.1. Create a contingency table that looks like this: |cancer status|negative|positive| |-|-|-| |sick||| |healthy|||| ...with the count of each group filled in, according to what we've told you above. The counts in the 4 boxes should sum to 10,000. Hint: Use pivot with the sum function. End of explanation ... Explanation: Question 3.1.2. Display the people data in an icon array. The name of the population members should be "people who've taken a cancer test". End of explanation by_health = people.select(0, 2).group(0, sum).relabeled(1, 'count') display_grouped_icon_array(by_health, "people who've taken a cancer test") Explanation: Now let's think about how you can use this kind of information when you're tested for cancer. Before you know any information about yourself, you could imagine yourself as a uniform random sample of one of the 10,000 people in this imaginary population of people who have been tested. What's the chance that you have cancer, knowing nothing else about yourself? It's $\frac{100}{10000}$, or 1%. We can see that more directly with this icon array: End of explanation # We first made an icon array in the 2 lines below. by_test = ... display_grouped_icon_array(by_test, "people who've taken a cancer test") # Fill in the probabiliy of having a positive test result. probability_positive_test = ... _ = tests.grade("q313") Explanation: Question 3.1.3. What's the chance that you have a positive test result, knowing nothing else about yourself? Hint: Make an icon array. End of explanation # Just run this cell. display_grouped_icon_array(people.where("test status", are.equal_to("positive")).drop(1), "people who have a positive test result") Explanation: 3.2. Interpreting test results Suppose you have a positive test result. This means you can now narrow yourself down to being part of one of two groups: 1. The people with cancer who have a positive test result. 2. The people without cancer who have a positive test result. Here's an icon array for those two groups: End of explanation # Set this to one of the numbers above. rough_prob_sick_given_positive = ... _ = tests.grade("q321") Explanation: The conditional probability that you have cancer given your positive test result is the chance that you're in the first group, assuming you're in one of these two groups. Question 3.2.1. Eyeballing it, is the conditional probability that you have cancer given your positive test result closest to: 1. 9/10 2. 2/3 3. 1/2 4. 1/3 5. 1/100 End of explanation prob_sick_given_positive = ... prob_sick_given_positive _ = tests.grade("q322") Explanation: Question 3.2.2. Now write code to calculate that probability exactly, using the original contingency table you wrote. 
End of explanation # The full icon array is given here for your convenience. # Write your answer in the next cell. display_grouped_icon_array(people, "people who've taken a cancer test") Explanation: Question 3.2.3. Look at the full icon array again. Using that, how would you compute (roughly) the conditional probability of cancer given a positive test? End of explanation # Hint: You may find these two tables useful: has_cancer = cancer.where("cancer status", are.equal_to("sick")) no_cancer = cancer.where("cancer status", are.equal_to("healthy")) X = .01 Y = ... Z = ... print('X:', X, ' Y:', Y, ' Z:', Z) _ = tests.grade("q41") # For your convenience, you can run this cell to run all the tests at once! import os _ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')] # Run this cell to submit your work *after* you have passed all of the test cells. # It's ok to run this cell multiple times. Only your final submission will be scored. !TZ=America/Los_Angeles jupyter nbconvert --output=".lab10_$(date +%m%d_%H%M)_submission.html" lab10.ipynb Explanation: Write your answer here, replacing this text. Question 3.2.4. Is your answer to question 3.2.2 bigger than the overall proportion of people in the population who have cancer? Does that make sense? Write your answer here, replacing this text. 4. Tree diagrams A tree diagram is another useful visualization for conditional probability. It is easiest to draw a tree diagram when the probabilities are presented in a slightly different way. For example, people often summarize the information in your cancer table using 3 numbers: The overall probability of having cancer is X. (This is called the base rate or marginal probability of the disease.) Given that you have cancer, the probability of a positive test result is Y. (This is called the sensitivity of the test. Higher values of Y mean the test is more useful.) Given that you don't have cancer, the probability of a positive test result is Z. (This is called the false positive rate of the test. Higher values of Z mean the test is less useful.) This corresponds to this tree diagram: /\ / \ 1-X / \ X / \ no cancer cancer / \1-Z / \ Z / \ Y/ \ 1-Y / \ / \ + - + - You already saw that the base rate of cancer (which we'll call X for short) was .01 in the previous section. Y and Z can be computed using the same method you used to compute the conditional probability of cancer given a positive test result. Question 4.1. Compute Y and Z for the data in section 3. You can use an icon array or compute them only with code. You can run the tests to see the right answers. End of explanation
3,672
Given the following text description, write Python code to implement the functionality described below step by step Description: An Introduction to py-Goldsberry py-Goldsberry is a Python package that makes it easy to interface with the http Step1: py-goldsberry is designed to work in conjuntion with Pandas. Each function within the package returns data in a format that is easily converted to a Pandas DataFrame. To get started, let's get a list of all of the players who were on an NBA roster during the 2015-16 season PlayerIDs Currently, the PlayerList() function defaults to the current season. We start by creating an object, players, that we will use to scrape player data. Step2: We can manipulate the players object to get data from different seasons by changing the API parameters and then re-running the query of the website. For example, if we want to get a list of players who were on an NBA roster during the 1990-91 season, we set the Season parameter to 1990-91 using the .get_new_data() method of the players class as follows. Step3: Once we get the raw data from the website, we need to save it as a dataframe to a new object. Step4: Each class in py-Goldsberry works in a similar fashion. When instantiating each class, the class makes some assumptions about the parameters to use to query the NBA website and executes the query. If you want to change the query after instantiation, you can change the query parameters and then re-query the database with .get_new_data(). Under the hood, the .get_new_data() method takes any number of keyword arguments that it then translates to api parameters. As a sanity check, it will raise an exception if you try to set a parameter that the specific query does not take. Each class takes a specific set of parameters. py-Goldsberry is built to include a list of each parameter as well as a default value. I'm working on a dictionary of parameters and possible values each can take. Look for it to be posted in the near future. Until then, you can access the raw parameter dictionary by calling the .get_parameter_items() method of each class. This gives you the possible values that the query can take. As you saw above, you can pass in keyword arguments with the keyword being the parameter name and the argument being the desired value to change the default value of the paramters. Step5: In the case of the PlayersList() class, you can get a historical list of players by changing the value of 'IsOnlyCurrentSeason' from 1 to 0. Step6: By default, Goldsberry is set to pull data from the current year. If you are interested in alternative data from the get-go, you can set the default parameters do your desired values upon insantiation of the class. Let's checkout an example of getting the All-Time player list from a brand new object Step7: Well, it looks like these data frames aren't quite identical. Why is that? Take a look at the ROSTERSTATUS column. When we first asked for the all time players, remember we had set the base year to 1990-91? Alaa Abdelnaby was actually on a roster during that season (Portland to be specific) so he has a value of 1 in the ROSTERSTATUS column. Since he was not in the league during the current season, he has a 0 in that column for the second pull. Let's compare just the names and see if we get an exact match. That will further reinforce that we have the same data, but we are looking at it from diffent points in time.
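Following the cell-construction description above, a minimal sketch of helper functions for building the dropout-wrapped, stacked LSTM cell might look like the code below. The helper names and the stacking step are assumptions for illustration; the notebook itself only specifies the BasicLSTMCell and DropoutWrapper calls quoted above.

```python
import tensorflow as tf

def build_lstm_cell(num_units, keep_prob):
    """One LSTM layer with dropout applied to its outputs."""
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

def build_stacked_cell(num_units, num_layers, batch_size, keep_prob):
    """Stack several dropout-wrapped LSTM layers and get a zeroed initial state."""
    cell = tf.contrib.rnn.MultiRNNCell(
        [build_lstm_cell(num_units, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    return cell, initial_state
```

At inference time, keep_prob is simply set to 1.0 so the wrapper passes activations through unchanged.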
Python Code: import goldsberry import pandas as pd goldsberry.__version__ Explanation: An Introduction to py-Goldsberry py-Goldsberry is a Python package that makes it easy to interface with the http://stats.nba.com and retrieve the data in a more analyzable format. This is the first in a series of tutorials that walk through the different modules of the packages and how to use each to get different types of data. If you've made it this far, you're probably less interested in reading about the package and more interested in actually using it. Installation If you don't have the package installed, use pip install get the latest version pip install py-goldsberry pip install --upgrade py-goldsberry When you have py-goldsberry installed, you can load the package and check the version number End of explanation players = goldsberry.PlayerList() players2015 = pd.DataFrame(players.players()) players2015.head() Explanation: py-goldsberry is designed to work in conjuntion with Pandas. Each function within the package returns data in a format that is easily converted to a Pandas DataFrame. To get started, let's get a list of all of the players who were on an NBA roster during the 2015-16 season PlayerIDs Currently, the PlayerList() function defaults to the current season. We start by creating an object, players, that we will use to scrape player data. End of explanation players.get_new_data(Season = '1990-91') Explanation: We can manipulate the players object to get data from different seasons by changing the API parameters and then re-running the query of the website. For example, if we want to get a list of players who were on an NBA roster during the 1990-91 season, we set the Season parameter to 1990-91 using the .get_new_data() method of the players class as follows. End of explanation players1990 = pd.DataFrame(players.players()) players1990.head() Explanation: Once we get the raw data from the website, we need to save it as a dataframe to a new object. End of explanation players.get_parameter_items() Explanation: Each class in py-Goldsberry works in a similar fashion. When instantiating each class, the class makes some assumptions about the parameters to use to query the NBA website and executes the query. If you want to change the query after instantiation, you can change the query parameters and then re-query the database with .get_new_data(). Under the hood, the .get_new_data() method takes any number of keyword arguments that it then translates to api parameters. As a sanity check, it will raise an exception if you try to set a parameter that the specific query does not take. Each class takes a specific set of parameters. py-Goldsberry is built to include a list of each parameter as well as a default value. I'm working on a dictionary of parameters and possible values each can take. Look for it to be posted in the near future. Until then, you can access the raw parameter dictionary by calling the .get_parameter_items() method of each class. This gives you the possible values that the query can take. As you saw above, you can pass in keyword arguments with the keyword being the parameter name and the argument being the desired value to change the default value of the paramters. End of explanation players.get_new_data(IsOnlyCurrentSeason = 0) playersAllTime = pd.DataFrame(players.players()) playersAllTime.head() Explanation: In the case of the PlayersList() class, you can get a historical list of players by changing the value of 'IsOnlyCurrentSeason' from 1 to 0. 
End of explanation new_playersAllTime = pd.DataFrame(goldsberry.PlayerList(IsOnlyCurrentSeason=0).players()) new_playersAllTime.head() playersAllTime.equals(new_playersAllTime) Explanation: By default, Goldsberry is set to pull data from the current year. If you are interested in alternative data from the get-go, you can set the default parameters do your desired values upon insantiation of the class. Let's checkout an example of getting the All-Time player list from a brand new object End of explanation playersAllTime.loc[:, 'DISPLAY_FIRST_LAST'].equals(new_playersAllTime.loc[:, 'DISPLAY_FIRST_LAST']) Explanation: Well, it looks like these data frames aren't quite identical. Why is that? Take a look at the ROSTERSTATUS column. When we first asked for the all time players, remember we had set the base year to 1990-91? Alaa Abdelnaby was actually on a roster during that season (Portland to be specific) so he has a value of 1 in the ROSTERSTATUS column. Since he was not in the league during the current season, he has a 0 in that column for the second pull. Let's compare just the names and see if we get an exact match. That will further reinforce that we have the same data, but we are looking at it from diffent points in time. End of explanation
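One more usage sketch building on the ROSTERSTATUS point above (not part of the original walkthrough): filtering each all-time pull on that column makes the difference between the two DataFrames explicit. It assumes playersAllTime and new_playersAllTime are still in memory and that ROSTERSTATUS is stored as a numeric flag; if the API returns it as a string, compare against '1' instead.

```python
# Players flagged as being on a roster in the season each pull was anchored to.
active_1990 = playersAllTime[playersAllTime['ROSTERSTATUS'] == 1]
active_now = new_playersAllTime[new_playersAllTime['ROSTERSTATUS'] == 1]

print(len(active_1990), 'players flagged active in the 1990-91-based pull')
print(len(active_now), 'players flagged active in the current-season pull')

# Names flagged in the 1990-91-based pull but not in the current one.
sorted(set(active_1990['DISPLAY_FIRST_LAST']) - set(active_now['DISPLAY_FIRST_LAST']))[:10]
```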
Given the following text description, write Python code to implement the functionality described below step by step Description: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. Step3: And we can see the characters encoded as integers. Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. Step5: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. Step7: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. Step8: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. 
Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob) Step9: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Step10: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Step11: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. Step12: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. 
We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Step13: Hyperparameters Here I'm defining the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular Step14: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Step15: Saved checkpoints Read up on saving and loading checkpoints here Step16: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. Step17: Here, pass in the path to a checkpoint and sample from the network.
Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation with open('anna.txt', 'r') as f: text=f.read() vocab = sorted(set(text)) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. End of explanation text[:100] Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. End of explanation encoded[:100] Explanation: And we can see the characters encoded as integers. End of explanation len(vocab) Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. End of explanation def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch and number of batches we can make characters_per_batch = n_seqs * n_steps n_batches = len(arr)//characters_per_batch # Keep only enough characters to make full batches arr = arr[:n_batches * characters_per_batch] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n:n+n_steps] # The targets, shifted by one y = np.zeros_like(x) y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] yield x, y Explanation: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/[email protected]" width=500px> <br> We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep. After that, we need to split arr into $N$ sequences. 
You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this: python y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] where x is the input batch and y is the target batch. The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide. End of explanation batches = get_batches(encoded, 10, 50) x, y = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. End of explanation def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, keep_prob Explanation: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. End of explanation def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. 
Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' ### Build the LSTM Cell def build_cell(lstm_size, keep_prob): # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)]) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state Explanation: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(num_units) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)]) ``` Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell. We also need to create an initial cell state of all zeros. This can be done like so python initial_state = cell.zero_state(batch_size, tf.float32) Below, we implement the build_lstm function to create these LSTM cells and the initial state. End of explanation def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- x: Input tensor in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. 
# That is, the shape should be batch_size*num_steps rows by lstm_size columns seq_output = tf.concat(lstm_output, axis=1) x = tf.reshape(seq_output, [-1, in_size]) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(out_size)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name='predictions') return out, logits Explanation: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. End of explanation def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per batch_size per step y_one_hot = tf.one_hot(targets, num_classes) y_reshaped = tf.reshape(y_one_hot, logits.get_shape()) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) loss = tf.reduce_mean(loss) return loss Explanation: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. End of explanation def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. 
Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer Explanation: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. End of explanation class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(self.inputs, num_classes) # Run each sequence step through the RNN and collect the outputs outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) Explanation: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. End of explanation batch_size = 200 # Sequences per batch num_steps = 50 # Number of sequence steps per batch lstm_size = 128 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.01 # Learning rate keep_prob = 0.5 # Dropout keep probability Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. 
num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer) Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are: The number of parameters in your model. This is printed when you start training. The size of your dataset. 1MB file is approximately 1 million characters. These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger. I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. 
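To make the "number of parameters" advice concrete, here is a back-of-the-envelope helper. It is not part of the original notebook; it just assumes the BasicLSTMCell parameterization used above (one (input + hidden) by 4*hidden kernel plus a 4*hidden bias per layer) and the single softmax layer from build_output.

```python
def approx_param_count(num_classes, lstm_size, num_layers):
    """Rough parameter count for the char-RNN defined above."""
    total = 0
    input_dim = num_classes                    # the first layer sees one-hot characters
    for _ in range(num_layers):
        # BasicLSTMCell: kernel of shape (input_dim + lstm_size, 4 * lstm_size) plus bias.
        total += (input_dim + lstm_size) * 4 * lstm_size + 4 * lstm_size
        input_dim = lstm_size                  # deeper layers see the previous layer's output
    total += lstm_size * num_classes + num_classes   # softmax weights and bias
    return total

# e.g. approx_param_count(len(vocab), lstm_size, num_layers)
```

Comparing that number against the size of the training text (roughly one parameter per character, as suggested above) is a quick sanity check before committing to a long training run.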
End of explanation epochs = 10 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) Explanation: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt End of explanation tf.train.get_checkpoint_state('checkpoints') Explanation: Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables End of explanation def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. 
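If the pruning step is hard to picture, here is a tiny self-contained illustration of what pick_top_n does, using a made-up probability vector over six "characters" (this cell is an added example, not from the original notebook):

```python
import numpy as np

probs = np.array([0.02, 0.40, 0.05, 0.30, 0.03, 0.20])
trimmed = probs.copy()
trimmed[np.argsort(trimmed)[:-3]] = 0    # zero out everything but the 3 largest entries
trimmed = trimmed / trimmed.sum()        # renormalise so it is a distribution again
next_index = np.random.choice(len(trimmed), p=trimmed)
print(trimmed, next_index)
```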
End of explanation tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) Explanation: Here, pass in the path to a checkpoint and sample from the network. End of explanation
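As a small, optional extension (not in the original notebook): once a checkpoint has been picked, it can be handy to dump a few samples with different priming strings to a file for side-by-side comparison. The sketch below reuses the sample function, checkpoint, lstm_size and vocab defined above; the priming strings and file name are arbitrary.

```python
primes = ['Far', 'Anna', 'The ']
with open('samples.txt', 'w') as out:
    for prime in primes:
        out.write('--- prime: {!r} ---\n'.format(prime))
        out.write(sample(checkpoint, 500, lstm_size, len(vocab), prime=prime))
        out.write('\n\n')
```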
Given the following text description, write Python code to implement the functionality described below step by step Description: Zobrazování dat s knihovnou Matplolib Matplotlib je knihovna napsaná v Pythonu pro vizualizování dat různými způsoby. Jedná se o nejpoužívanější knihovnu v Pythonu pro zobrazování dat. Knihovny umožňuje export obrázků v různých formátech (png, eps, pdf, ...) v kvalitě vhodné pro vědecké publikace. Následuje import knihovny a její nastavení pro Jupyter (kreslení inline obrázků). Knihovna Numpy je importována také, pro snadné generování syntetických (umělých) dat. Step1: Kreslení bodů a řad Nejčastější případ zobrazování dat v technických aplikacích je asi spojnicový a bodový (scatter) graf. Minimální příklady Následují dva minimalistické příklady, pro pochopení jak používat Matplotlib. V těchto příkladech schválně není nic navíc, aby bylo evidentní jaké příkazy slouží k samotnému kreslení dat. Step2: Popisky, legenda, titulek, velikost, rozsah, mřížka Následuje komplexní příklad, v kterém je demonstrováno jak nastavit/doplnit do grafu všechny podstatné náležitosti. Step3: Styl značek, spojnic Další příklad demonstruje jak nastavovat vzhled značkek a spojnic. Všimněte si zadávání barvy a tvaru markeru Step4: Ostatní druhy grafů Mimo spojnicových a bodových grafů existují ještě různé další způsoby zobrazení dat. Několik populárních způsobů je ukázáno na následujících příkladech. Sloupcový graf Následuje příklad sloupcového grafu. Step5: Histogram Histogram je sloupcový graf, který zobrazuje četnost výskytu nějaké hodnoty v datech. Je to velice častý způsob analýzy dat. Z tohoto důvodu je v Matplotlibu připravena vlastní funkce, která rovnou kreslí výsledek analýzy. Step6: Boxplot Boxplot - krabicový graf je nástroj pro zjednodušené zobrazení rozložení hodnot v nějakém výběru. Jeden box představuje jednu sadu dat (skupinu, vzorek). Následuje příklad - srovnání dvou skupin - x, y. Step7: Koláčový graf Koláčové grafy jsou tak známé, že je není třeba více představovat. Následuje příklad. Step8: Více grafů v jednom okně Často je požadavek na zobrazení více grafů pohromadě ve skupině. Aby uživatel nemusel obrázky skládat ručně, Matplotlib dokáže grafy uspořádat sám podle instrukcí. Pozice do které se má kreslit (subplot) v sestavě grafů je určena třemi čísly. První číslo představuje počet řádků v sestavě, durhé číslo představuje počet sloupců v sestavě a poslední číslo pořadí dané pozici v sestavě. Následuje jednoduchý příklad - dva sloupce, jeden řádek. Step9: Následuje příklad, který využívá postupné změny počtu řádků a sloupců k tomu aby vytvořil složitější sestavu. Step10: Poznámka
Python Code: # inline plots %matplotlib inline # import matplotlib as plt acronym import matplotlib.pylab as plt # import numpy as np acronym import numpy as np Explanation: Zobrazování dat s knihovnou Matplolib Matplotlib je knihovna napsaná v Pythonu pro vizualizování dat různými způsoby. Jedná se o nejpoužívanější knihovnu v Pythonu pro zobrazování dat. Knihovny umožňuje export obrázků v různých formátech (png, eps, pdf, ...) v kvalitě vhodné pro vědecké publikace. Následuje import knihovny a její nastavení pro Jupyter (kreslení inline obrázků). Knihovna Numpy je importována také, pro snadné generování syntetických (umělých) dat. End of explanation # synthetic data x = np.linspace(-10, 10, 100)**3 # plotting plt.plot(x) plt.show() # synthetic data x = np.random.normal(0, 2, 20) y = np.random.normal(0, 2, 20) # plotting plt.plot(x, y, "o") plt.show() Explanation: Kreslení bodů a řad Nejčastější případ zobrazování dat v technických aplikacích je asi spojnicový a bodový (scatter) graf. Minimální příklady Následují dva minimalistické příklady, pro pochopení jak používat Matplotlib. V těchto příkladech schválně není nic navíc, aby bylo evidentní jaké příkazy slouží k samotnému kreslení dat. End of explanation # synthetic data x = np.linspace(-10, 10, 100) y1 = x**3 y2 = x**2 # plotting plt.figure(figsize=(12,5)) # create figure with size in inches plt.plot(x, y1, label="$y=x^3$") # plot y1 plt.plot(x, y2, label="$y=x^3$") # plot y2 plt.title("$y=f(x)$") # main title plt.xlabel("x [-]") # x axis label plt.ylabel("y [-]") # y axis label plt.xlim(-7.5, 10) # limits of x axis plt.ylim(-750, 750) # limits of y axis plt.grid() # show grid plt.legend() # show legend plt.show() Explanation: Popisky, legenda, titulek, velikost, rozsah, mřížka Následuje komplexní příklad, v kterém je demonstrováno jak nastavit/doplnit do grafu všechny podstatné náležitosti. End of explanation # synthetic data x = np.linspace(-10, 10, 25) y1 = x**3 y2 = x**2 y3 = x**4 / 5 # plotting plt.figure(figsize=(12,7)) # set size plt.plot(x, y1, "ro", label="$y=x^3$") # plot y1 plt.plot(x, y2, "b^-", linewidth=6, markersize=15, label="$y=x^3$") # plot y2 plt.plot(x, y3, "k:", linewidth=5, label="$y=x^4/5$") # plot y3 plt.legend() # show legend plt.show() Explanation: Styl značek, spojnic Další příklad demonstruje jak nastavovat vzhled značkek a spojnic. Všimněte si zadávání barvy a tvaru markeru: 'ro' - červená (red), kulatý marker (tvar o). 'b^-' - modrá (blue), horní trojůhelník (tvar ^), plná čára (značka -) 'k:' - černá (black), tečkovaná čára (značka :) End of explanation # synthetic data values = [121, 56, 41, 31] # values of bars years = [2015, 2016, 2017, 2018] # position of bars # plotting plt.bar(years, values, align='center') plt.xticks(years, years) plt.show() Explanation: Ostatní druhy grafů Mimo spojnicových a bodových grafů existují ještě různé další způsoby zobrazení dat. Několik populárních způsobů je ukázáno na následujících příkladech. Sloupcový graf Následuje příklad sloupcového grafu. End of explanation # synthetic data with normal distribution x = np.random.normal(0, 2, 1000) # create and plot histogram plt.hist(x, bins=20) plt.show() Explanation: Histogram Histogram je sloupcový graf, který zobrazuje četnost výskytu nějaké hodnoty v datech. Je to velice častý způsob analýzy dat. Z tohoto důvodu je v Matplotlibu připravena vlastní funkce, která rovnou kreslí výsledek analýzy. 
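A side note that is not in the original notebook: when only the bin counts are needed, without drawing anything, NumPy's np.histogram returns the same counts and bin edges that plt.hist plots. A minimal sketch with the same kind of synthetic data:

```python
import numpy as np

x = np.random.normal(0, 2, 1000)
counts, edges = np.histogram(x, bins=20)
print(counts)       # how many samples fall into each of the 20 bins
print(edges[:3])    # the first few bin edges
```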
End of explanation # synthetic data with normal distribution x = np.random.normal(0, 2, 1000) y = np.random.normal(1, 1, 1000) # basic plot plt.boxplot([x,y], labels=["x", "y"]) plt.show() Explanation: Boxplot Boxplot - krabicový graf je nástroj pro zjednodušené zobrazení rozložení hodnot v nějakém výběru. Jeden box představuje jednu sadu dat (skupinu, vzorek). Následuje příklad - srovnání dvou skupin - x, y. End of explanation labels = ['apples', 'oranges', 'pears'] # classes values = [121, 56, 41] # values for classes plt.pie(values, labels=labels) # pie chart plt.legend() plt.axis('equal') # unscale to 1:1 plt.show() Explanation: Koláčový graf Koláčové grafy jsou tak známé, že je není třeba více představovat. Následuje příklad. End of explanation # synthetic data x = np.random.normal(0, 2, 20) y = np.random.normal(0, 2, 20) plt.figure(figsize=(8,5)) # set size # plotting plt.subplot(121) plt.plot(x, y, "ob") plt.subplot(122) plt.plot(y, x, "or") plt.show() Explanation: Více grafů v jednom okně Často je požadavek na zobrazení více grafů pohromadě ve skupině. Aby uživatel nemusel obrázky skládat ručně, Matplotlib dokáže grafy uspořádat sám podle instrukcí. Pozice do které se má kreslit (subplot) v sestavě grafů je určena třemi čísly. První číslo představuje počet řádků v sestavě, durhé číslo představuje počet sloupců v sestavě a poslední číslo pořadí dané pozici v sestavě. Následuje jednoduchý příklad - dva sloupce, jeden řádek. End of explanation # synthetic data xb = np.random.normal(0, 2, 1000) yb = np.random.normal(1, 1, 1000) x0 = np.random.normal(0, 2, 20) y0 = np.random.normal(0, 2, 20) plt.figure(figsize=(10,10)) # set size # first row plt.subplot(311) plt.plot(x0, "-xk") plt.xlabel("x [-]") plt.ylabel("y [-]") # second row plt.subplot(323) plt.hist(xb, bins=20, color="b") plt.xlabel("x [-]") plt.ylabel("y [-]") plt.subplot(324) plt.hist(yb, bins=20, color="r") plt.xlabel("x [-]") plt.ylabel("y [-]") # third row plt.subplot(337) plt.pie(values, autopct='%1.1f%%', shadow=True,startangle=140) plt.subplot(338) plt.boxplot([x,y], labels=["x", "y"]) plt.subplot(339) plt.pie(values, labels=labels) # adjust plot placement to make it nicer plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None) plt.show() Explanation: Následuje příklad, který využívá postupné změny počtu řádků a sloupců k tomu aby vytvořil složitější sestavu. End of explanation # set style ggplot plt.style.use('ggplot') # syntetic data x = np.linspace(-10, 10, 100) y1 = x**3 y2 = x**2 # plotting plt.figure(figsize=(12,5)) # create figure with size in inches plt.plot(x, y1, label="$y=x^3$") # plot y1 plt.plot(x, y2, label="$y=x^3$") # plot y2 plt.title("$y=f(x)$") # main title plt.xlabel("x [-]") # x axis label plt.ylabel("y [-]") # y axis label plt.xlim(-7.5, 10) # limits of x axis plt.ylim(-750, 750) # limits of y axis plt.legend() # show legend plt.show() Explanation: Poznámka: Pří skládání grafů dohromady může někdy dojít k překrývání pospisků os a tilků grafů navzájem. Tento problém je možné ošetřit ručním přizpůsobením mezer mezi grafy (viz předposlední řádek v posledním příkladě). Někdy je možné vylepšit rozložení grafu jen pomocí zavolání příkazu plt.tight_layout() před příkazem plt.show(). Grafické styly Matplotlib umožňuje používat grafické styly, které mění nastavení celých grafů. Následuje ukázka jednoho populárního stylu. End of explanation
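The introduction above mentions that Matplotlib can export figures in formats such as PNG, EPS or PDF in publication quality, but the notebook never shows the call. A minimal, hedged sketch (file names and DPI are arbitrary):

```python
import numpy as np
import matplotlib.pylab as plt

x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.savefig('figure.png', dpi=300)   # raster export
plt.savefig('figure.pdf')            # vector export
plt.close()
```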
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow warmup This is a notebook to get you started with TensorFlow. Step5: Graph visualisation This is for visualizing a TF graph in an iPython notebook; the details are not interesting. (Borrowed from the DeepDream iPython notebook) Step6: The execution model TensorFlow allows you to specify graphs representing computations and provides a runtime for efficiently executing those graphs across a range of hardware. The graph nodes are Ops and the edges are Tensors. Step7: Ops Every node in the computation graph corresponds to an op. tf.constant, tf.sub and tf.add are Ops. There are many built-in Ops for low-level manipulation of numeric Tensors, e.g. Step8: Session The actual computations are carried out in a Session. Each session has exactly one graph, but it is completely valid to have multiple disconnected subgraphs in the same graph. The same graph can be used to initialize two different Sessions, yielding two independent environments with independent states. Unless specified otherwise, nodes and edges are added to the default graph. By default, a Session will use the default graph. Step9: Variables Variables maintain state in a Session across multiple calls to Session.run(). You add a variable to the graph by constructing an instance of the class tf.Variable. For example, model parameters (weights and biases) are stored in Variables. We train the model with multiple calls to Session.run(), and each call updates the model parameters. For more information on Variables see https Step10: Placeholders So far you have seen Variables, but there is a more basic construct Step11: At execution time, we feed data into the graph using a feed_dict Step12: The variable we added represents a variable in the computational graph, but is not an instance of the variable. The computational graph represents a program, and the variable will exist when we run the graph in a session. The value of the variable is stored in the session. Take a guess Step13: The output might surprise you Step14: Queues Queues are TensorFlow’s primitives for writing asynchronous code. Queues provide Ops with queue semantics. Queue Ops, like all Ops, need to be executed to do anything. Are often used for asynchronously processing data (e.g., an input pipeline with data augmentation). Queues are stateful graph nodes. The state is associated with a session. There are several different types of queues, e.g., FIFOQueue and RandomShuffleQueue. See the Threading and Queues for more details. Note Step15: Exercise
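The description mentions both Newton's method and the secant method, but the code below only implements the Newton update. As a complement, here is a minimal sketch of the secant iteration for the same function f(x) = -x * e**(-x) + 0.2 used throughout; the starting guesses and tolerance are illustrative choices, not taken from the original notebook.

```python
import numpy as np

def f(x):
    return -x * np.e ** -x + 0.2

# Secant update: Newton's formula with the derivative replaced by a finite
# difference built from the two most recent iterates.
x_prev, x_curr = 1.0, 2.0        # two illustrative starting guesses
for _ in range(50):
    denom = f(x_curr) - f(x_prev)
    if denom == 0:
        break
    x_prev, x_curr = x_curr, x_curr - f(x_curr) * (x_curr - x_prev) / denom
    if abs(f(x_curr)) < 1e-4:
        break
print('Aproximacao: %.6f' % x_curr)
```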
Python Code: import numpy as np import tensorflow as tf Explanation: TensorFlow warmup This is a notebook to get you started with TensorFlow. End of explanation # This is for graph visualization. from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): Strip large constant values from graph_def. strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): Visualize TensorFlow graph. if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> .format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> .format(code.replace('"', '&quot;')) display(HTML(iframe)) Explanation: Graph visualisation This is for visualizing a TF graph in an iPython notebook; the details are not interesting. (Borrowed from the DeepDream iPython notebook) End of explanation # This code only creates the graph. No computation is done yet. tf.reset_default_graph() x = tf.constant(7.0, name="x") y = tf.add(x, tf.constant(2.0, name="y"), name="add_op") z = tf.subtract(x, tf.constant(2.0, name="z"), name="sub_op") w = tf.multiply(y, tf.constant(3.0)) # If no name is given, TF will chose a unique name for us. # Visualize the graph. show_graph(tf.get_default_graph().as_graph_def()) Explanation: The execution model TensorFlow allows you to specify graphs representing computations and provides a runtime for efficiently executing those graphs across a range of hardware. The graph nodes are Ops and the edges are Tensors. End of explanation # We can also use shorthand syntax # Notice the default names TF chooses for us. tf.reset_default_graph() x = tf.constant(7.0) y = x + 2 z = x - 2 w = y * 3 # Visualize the graph. show_graph(tf.get_default_graph().as_graph_def()) Explanation: Ops Every node in the computation graph corresponds to an op. tf.constant, tf.sub and tf.add are Ops. There are many built-in Ops for low-level manipulation of numeric Tensors, e.g.: Arithmetic (with matrix and complex number support) Tensor operations (reshape, reduction, casting) Image manipulation (cropping, sizing, coloring, ...) Batching (arranging training examples into batches) Almost every object in TensorFlow is an op. Even things that don't look like they are! TensorFlow uses the op abstraction for a surprising range of things: Queues Variables Variable initializers This can be confusing at first. For now, remember that because many things are Ops, some things have to be done in a somewhat non-obvious fashion. A list of TF Ops can be found at https://www.tensorflow.org/api_docs/python/. Tensors x, y, w and z are Tensors - a description of a multidimensional array. A Tensor is a symbolic handle to one of the outputs of an Operation. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow tf.Session. 
Tensor shapes can usually be derived from the computation graph. This is called shape inference. For example, if you perform a matrix multiply of a [4,2] and a [2,3] Tensor, then TensorFlow infers that the output Tensor has shape [4,3]. End of explanation tf.reset_default_graph() x = tf.constant(7.0, name="x") y = tf.add(x, tf.constant(2.0, name="y"), name="add_op") z = y * 3.0 # Create a session, which is the context for running a graph. with tf.Session() as sess: # When we call sess.run(y) the session is computing the value of Tensor y. print(sess.run(y)) print(sess.run(z)) Explanation: Session The actual computations are carried out in a Session. Each session has exactly one graph, but it is completely valid to have multiple disconnected subgraphs in the same graph. The same graph can be used to initialize two different Sessions, yielding two independent environments with independent states. Unless specified otherwise, nodes and edges are added to the default graph. By default, a Session will use the default graph. End of explanation tf.reset_default_graph() # tf.get_variable returns a tf.Variable object. Creating such objects directly # is possible, but does not have a sharing mechanism. Hence, tf.get_variable is # preferred. x = tf.get_variable("x", shape=[], initializer=tf.zeros_initializer()) assign_x = tf.assign(x, 10, name="assign_x") z = tf.add(x, 1, name="z") # Variables in TensorFlow need to be initialized first. The following op # conveniently takes care of that and initializes all variables. init = tf.global_variables_initializer() # Visualize the graph. show_graph(tf.get_default_graph().as_graph_def()) Explanation: Variables Variables maintain state in a Session across multiple calls to Session.run(). You add a variable to the graph by constructing an instance of the class tf.Variable. For example, model parameters (weights and biases) are stored in Variables. We train the model with multiple calls to Session.run(), and each call updates the model parameters. For more information on Variables see https://www.tensorflow.org/programmers_guide/variables End of explanation tf.reset_default_graph() x = tf.placeholder("float", None) y = x * 2 # Visualize the graph. show_graph(tf.get_default_graph().as_graph_def()) Explanation: Placeholders So far you have seen Variables, but there is a more basic construct: the placeholder. A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders. End of explanation with tf.Session() as session: result = session.run(y, feed_dict={x: [1, 2, 3]}) print(result) Explanation: At execution time, we feed data into the graph using a feed_dict: for each placeholder, it contains the value we want to assign to it. This can be useful for batching up data, as you will see later. End of explanation with tf.Session() as sess: # Assign an initial value to the instance of the variable in this session, # determined by the initializer provided above. sess.run(init) print (sess.run(z)) Explanation: The variable we added represents a variable in the computational graph, but is not an instance of the variable. The computational graph represents a program, and the variable will exist when we run the graph in a session. The value of the variable is stored in the session. Take a guess: what is the output of the code below? 
End of explanation with tf.Session() as sess: # When we create a new session we need to initialize all Variables again. sess.run(init) sess.run(assign_x) print (sess.run(z)) Explanation: The output might surprise you: it's 1.0! The op assign_x is not a dependency of x or z, and hence is never evaluated. One way to solve this problem is: End of explanation tf.reset_default_graph() q = tf.FIFOQueue(3, "float", name="q") initial_enqueue = q.enqueue_many(([0., 0., 0.],), name="init") x = q.dequeue() y = x + 1 q_inc = q.enqueue([y]) with tf.Session() as session: session.run(initial_enqueue) outputs = [] for _ in range(20): _, y_val = session.run([q_inc, y]) outputs.append(y_val) print(outputs) # Visualize the graph. show_graph(tf.get_default_graph().as_graph_def()) Explanation: Queues Queues are TensorFlow’s primitives for writing asynchronous code. Queues provide Ops with queue semantics. Queue Ops, like all Ops, need to be executed to do anything. Are often used for asynchronously processing data (e.g., an input pipeline with data augmentation). Queues are stateful graph nodes. The state is associated with a session. There are several different types of queues, e.g., FIFOQueue and RandomShuffleQueue. See the Threading and Queues for more details. Note: You probably will never need to directly use these low level implementations of queues yourself. Do note, however, that several important operations (for example, reading and batching) are implemented as queues. End of explanation tf.reset_default_graph() number_to_check = 29 # Define graph. a = tf.Variable(number_to_check, dtype=tf.int32) pred = tf.equal(0, tf.mod(a, 2)) b = tf.cast( tf.cond( pred, lambda: tf.div(a, 2), lambda: tf.add(tf.multiply(a, 3), 1)), tf.int32) assign_op = tf.assign(a, b) with tf.Session() as session: # 1. Implement graph execution. pass # Simple solution without queue. tf.reset_default_graph() number_to_check = 29 # Define graph. a = tf.Variable(number_to_check, dtype=tf.int32) pred = tf.equal(0, tf.mod(a, 2)) b = tf.cast( tf.cond( pred, lambda: tf.div(a, 2), lambda: tf.add(tf.multiply(a, 3), 1)), tf.int32) assign_op = tf.assign(a, b) with tf.Session() as session: session.run(tf.global_variables_initializer()) print(session.run(a)) _, b_val = session.run([assign_op, b]) print(b_val) while (b_val != 1): _, b_val = session.run([assign_op, b]) print(b_val) # Solution with queue. tf.reset_default_graph() number_to_check = 29 # Define graph. q = tf.FIFOQueue(3, tf.int32, name="q") initial_enqueue = q.enqueue(number_to_check) a = q.dequeue() pred = tf.equal(0, tf.mod(a, 2)) b = tf.cast(tf.cond(pred, lambda: tf.div(a, 2), lambda: tf.add(tf.multiply(a, 3), 1)), tf.int32) q_op = q.enqueue([b]) with tf.Session() as session: session.run(initial_enqueue) _, a_val, b_val = session.run([q_op, a, b]) print(a_val) print(b_val) while (b_val != 1): _, b_val = session.run([q_op, b]) print(b_val) Explanation: Exercise: Collatz Conjecture And now some fun! Collatz conjecture states that after applying the following rule $f(n) = \begin{cases} n/2 &\text{if } n \equiv 0 \pmod{2},\ 3n+1 & \text{if } n\equiv 1 \pmod{2} .\end{cases}$ a finite number of times to a given number, we will end up at $1$ (cf. https://xkcd.com/710/). Implement the checking routine in TensorFlow (i.e. implement some code that given a number, checks that it satisfies Collatz conjecture). Bonus: use a queue. End of explanation
3,676
Given the following text description, write Python code to implement the functionality described below step by step Description: Revisão Zero de função Minimos de função Metodo da bissecao Step1: Posição Falsa Só troca a funcao $(A + B) / 2$ por $(A * f(B) - B * f(A)) / (f(B) - f(A))$ Step2: Metodos de newton e secante
Python Code: %matplotlib inline import numpy as np from matplotlib import pyplot as plt # Segundo passo: Definir função def f(x): return -x * np.e ** -x + 0.2 def p(A, B): plt.xlabel('x') plt.ylabel('y = f(x)') plt.title('Zero de funcoes') plt.grid() plt.plot(x, y) [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([A, A], [ymin, ymax], 'k-') plt.plot([B, B], [ymin, ymax], 'k-') # Terceiro passo: Visualizar função x = np.linspace(0, 10, 100) y = f(x) p(1, 10) plt.show() A = 1.0 B = 10.0 x_tol = 0.0001 x_prev = B y_tol = 0.0001 #for i in range(5): while True: # Termo de recursao xi = (A + B) / 2.0 p(A, B) [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([xi, xi], [ymin, ymax], '--') plt.show() print('Aproximacao: %.6lf' %f(xi)) if (f(A) * f(xi)) < 0: B = xi elif (f(A) * f(xi)) == 0: print('Raiz encontrada em %6.f' % xi) else: A = xi if abs(f(xi)) < y_tol: break if abs(x_prev - xi) < x_tol: break x_prev = xi plt.show(block=True) Explanation: Revisão Zero de função Minimos de função Metodo da bissecao End of explanation %matplotlib inline import numpy as np from matplotlib import pyplot as plt # Segundo passo: Definir função def f(x): return -x * np.e ** -x + 0.2 def p(A, B): plt.xlabel('x') plt.ylabel('y = f(x)') plt.title('Zero de funcoes') plt.grid() plt.plot(x, y) [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([A, A], [ymin, ymax], 'k-') plt.plot([B, B], [ymin, ymax], 'k-') # Terceiro passo: Visualizar função x = np.linspace(0.0, 10.0, 100) y = f(x) p(1.0, 10.0) plt.show() A = 1.0 B = 10.0 x_tol = 0.001 x_prev = B y_tol = 0.0001 while True: # Termo de recursao xi = (A * f(B) - B * f(A)) / (f(B) - f(A)) p(A, B) [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([xi, xi], [ymin, ymax], '--') plt.show() print('Aproximacao: %.6lf' %f(xi)) if (f(A) * f(xi)) < 0: B = xi elif (f(A) * f(xi)) == 0: print('Raiz encontrada em %6.f' % xi) else: A = xi if abs(f(xi)) < y_tol: break if abs(x_prev - xi) < x_tol: break x_prev = xi plt.show(block=True) Explanation: Posição Falsa Só troca a funcao $(A + B) / 2$ por $(A * f(B) - B * f(A)) / (f(B) - f(A))$ End of explanation %matplotlib inline import numpy as np from matplotlib import pyplot as plt # Segundo passo: Definir função def f(x): return -x * np.e ** -x + 0.2 def f_linha(x): return np.e ** -x * (x - 1) def p(A, B): plt.xlabel('x') plt.ylabel('y = f(x)') plt.title('Zero de funcoes') plt.grid() plt.plot(x, y) if A != 0 and B != 0: [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([A, A], [ymin, ymax], 'k-') plt.plot([B, B], [ymin, ymax], 'k-') # Terceiro passo: Visualizar função x = np.linspace(0.0, 10.0, 100) y = f(x) p(1.0, 10.0) plt.show() A = 1.0 B = 10.0 xi = 2.0 x_tol = 0.0001 y_tol = 0.0001 x_prev = xi while True: p(0, 0) [xmin, xmax, ymin, ymax] = plt.axis() plt.plot([xi, xi], [ymin, ymax], '--') plt.show() print('Aproximacao: %.6lf' %f(xi)) # Termo de recursao xi = xi - f(xi) / f_linha(xi) if abs(f(xi)) < y_tol: break if abs(x_prev - xi) < x_tol: break x_prev = xi plt.show(block=True) Explanation: Metodos de newton e secante End of explanation
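The heading above mentions both Newton's method and the secant method, but only Newton's method is implemented. Below is a minimal sketch of the secant iteration for the same f(x) = -x·e^(-x) + 0.2; it replaces the analytic derivative f_linha with the slope between the two most recent iterates, so no derivative is needed. The starting points, tolerances and iteration cap are illustrative choices, not values from the original notebook.

import numpy as np

def f(x):
    return -x * np.e ** -x + 0.2

# Secant method: Newton's update with f'(x) approximated by the slope
# between the last two points.
x0, x1 = 1.0, 2.0
x_tol, y_tol = 0.0001, 0.0001
for _ in range(100):
    denom = f(x1) - f(x0)
    if denom == 0.0:
        break
    x2 = x1 - f(x1) * (x1 - x0) / denom
    print('Aproximacao: %.6f' % f(x2))
    if abs(f(x2)) < y_tol or abs(x2 - x1) < x_tol:
        break
    x0, x1 = x1, x2

Starting from these points the iteration converges to the same root near x ≈ 2.54 that the bisection and Newton loops above find.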
3,677
Given the following text description, write Python code to implement the functionality described below step by step Description: <table class="ee-notebook-buttons" align="left"><td> <a target="_blank" href="http Step1: Request body The request body is an instance of an EarthEngineAsset. This is where the path to the COG is specified, along with other useful properties. Note that the image is a small area exported from the composite made in this example script. See this doc for details on exporting a COG. Earth Engine will determine the bands, geometry, and other relevant information from the metadata of the TIFF. The only other fields that are accepted when creating a COG-backed asset are properties, start_time, and end_time. Step2: Send the request Make the POST request to the Earth Engine CreateAsset endpoint.
Python Code: # This has details about the Earth Engine Python Authenticator client. from ee import oauth from google_auth_oauthlib.flow import Flow import json # Build the `client_secrets.json` file by borrowing the # Earth Engine python authenticator. client_secrets = { 'web': { 'client_id': oauth.CLIENT_ID, 'client_secret': oauth.CLIENT_SECRET, 'redirect_uris': [oauth.REDIRECT_URI], 'auth_uri': 'https://accounts.google.com/o/oauth2/auth', 'token_uri': 'https://accounts.google.com/o/oauth2/token' } } # Write to a json file. client_secrets_file = 'client_secrets.json' with open(client_secrets_file, 'w') as f: json.dump(client_secrets, f, indent=2) # Start the flow using the client_secrets.json file. flow = Flow.from_client_secrets_file(client_secrets_file, scopes=oauth.SCOPES, redirect_uri=oauth.REDIRECT_URI) # Get the authorization URL from the flow. auth_url, _ = flow.authorization_url(prompt='consent') # Print instructions to go to the authorization URL. oauth._display_auth_instructions_with_print(auth_url) print('\n') # The user will get an authorization code. # This code is used to get the access token. code = input('Enter the authorization code: \n') flow.fetch_token(code=code) # Get an authorized session from the flow. session = flow.authorized_session() Explanation: <table class="ee-notebook-buttons" align="left"><td> <a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_asset_from_cloud_geotiff.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_asset_from_cloud_geotiff.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table> Cloud GeoTiff Backed Earth Engine Assets Note: The REST API contains new and advanced features that may not be suitable for all users. If you are new to Earth Engine, please get started with the JavaScript guide. Earth Engine can load images from Cloud Optimized GeoTiffs (COGs) in Google Cloud Storage (learn more). This notebook demonstrates how to create Earth Engine assets backed by COGs. An advantage of COG-backed assets is that the spatial and metadata fields of the image will be indexed at asset creation time, making the image more performant in collections. (In contrast, an image created through ee.Image.loadGeoTIFF and put into a collection will require a read of the GeoTiff for filtering operations on the collection.) A disadvantage of COG-backed assets is that they may be several times slower than standard assets when used in computations. To create a COG-backed asset, make a POST request to the Earth Engine CreateAsset endpoint. As shown in the following, this request must be authorized to create an asset in your user folder. Start an authorized session To be able to make an Earth Engine asset in your user folder, you need to be able to authenticate as you when you make the request. The Earth Engine Python authenticator can be leveraged as a client app that is able to pass your credentials along. Follow the instructions in the cell output to authenticate. (Note that this auth flow is not supported if this notebook is being run in playgroud mode; make a copy before proceeding). 
For more details, see this guide on obtaining credentials in this manner, this reference on the Flow module, this reference for the client secrets format, and oauth.py from the Earth Engine python library. End of explanation # Request body as a dictionary. request = { 'type': 'IMAGE', 'gcs_location': { 'uris': ['gs://ee-docs-demos/COG_demo.tif'] }, 'properties': { 'source': 'https://code.earthengine.google.com/d541cf8b268b2f9d8f834c255698201d' }, 'startTime': '2016-01-01T00:00:00.000000000Z', 'endTime': '2016-12-31T15:01:23.000000000Z', } from pprint import pprint pprint(json.dumps(request)) Explanation: Request body The request body is an instance of an EarthEngineAsset. This is where the path to the COG is specified, along with other useful properties. Note that the image is a small area exported from the composite made in this example script. See this doc for details on exporting a COG. Earth Engine will determine the bands, geometry, and other relevant information from the metadata of the TIFF. The only other fields that are accepted when creating a COG-backed asset are properties, start_time, and end_time. End of explanation # Where Earth Engine assets are kept. project_folder = 'earthengine-legacy' # Your user folder name and new asset name. asset_id = 'users/user_folder_name/asset_name' url = 'https://earthengine.googleapis.com/v1alpha/projects/{}/assets?assetId={}' response = session.post( url = url.format(project_folder, asset_id), data = json.dumps(request) ) pprint(json.loads(response.content)) Explanation: Send the request Make the POST request to the Earth Engine CreateAsset endpoint. End of explanation
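Once the POST returns, it is worth confirming that the new asset is actually usable. The sketch below assumes the request succeeded (HTTP 200) and that the earthengine-api client library is installed and already authenticated in this environment; asset_id is the same placeholder defined above.

# Fail loudly if the CreateAsset call did not succeed.
if response.status_code != 200:
    raise RuntimeError('Asset creation failed: %s' % response.content)

# Load the new COG-backed asset with the regular client library to confirm
# that Earth Engine can read its bands and footprint.
import ee
ee.Initialize()  # assumes credentials are already configured for ee
image = ee.Image(asset_id)
print(image.bandNames().getInfo())
print(image.geometry().bounds().getInfo())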
3,678
Given the following text description, write Python code to implement the functionality described below step by step Description: Read surfer grids There are 3 varieties Step1: Another ASCII Step2: A binary file
Python Code: !head ../data/Surfer/surfer-6-ascii-tiny.grd import gio da = gio.read_surfer('../data/Surfer/surfer-6-ascii-tiny.grd') da da.max() Explanation: Read surfer grids There are 3 varieties: Surfer 6 binary Surfer 6 ASCII Surfer 7 binary In theory we can read all of these, but I don't have a Surfer 6 binary file to test on. A small ASCII file From the docs. End of explanation import gio da = gio.read_surfer('../data/Surfer/surfer-6-ascii.grd') da da.plot() Explanation: Another ASCII End of explanation import gio da = gio.read_surfer('../data/Surfer/WDS1_Si_TAP_Quant.grd') da da.plot() da.shape Explanation: A binary file End of explanation
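The .plot(), .max() and .shape calls above suggest that gio.read_surfer returns an xarray DataArray. Assuming that is the case, the grid can be inspected and saved with standard xarray methods; the NetCDF filename below is only an example.

import gio

da = gio.read_surfer('../data/Surfer/surfer-6-ascii.grd')

# Quick summary of the grid: dimension names, shape and value range.
print(da.dims, da.shape)
print(float(da.min()), float(da.max()))

# Write the grid to a self-describing format for use outside gio.
da.to_netcdf('surfer-6-ascii.nc')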
3,679
Given the following text description, write Python code to implement the functionality described below step by step Description: To make a better wedge This notebook is an update to the notebook entitled "To make a wedge" featured in the blog post, To make a wedge, on December 12, 2013. Start by importing Numpy and Matplotlib's pyplot module in the usual way Step1: Import the ricker wavelet function from bruges Step2: Make a wedge Step3: Let's make a more generic wedge that will handle any 3 layer case we want to make. Step4: Plotting the synthetic Step5: We can make use of the awesome apply_along_axis in Numpy to avoid looping over all the traces. https
Python Code: import numpy as np % matplotlib inline import matplotlib.pyplot as plt Explanation: To make a better wedge This notebook is an update to the notebook entitled "To make a wedge" featured in the blog post, To make a wedge, on December 12, 2013. Start by importing Numpy and Matplotlib's pyplot module in the usual way: End of explanation from bruges.filters import ricker Explanation: Import the ricker wavelet function from bruges: End of explanation from IPython.display import Image Explanation: Make a wedge End of explanation Image('images/generic_wedge.png', width=600) defaults = {'ta1':150, 'tb1':30, 'dta':50, 'dtb':50, 'xa1':100, 'xa2':100, 'dx':1, 'mint':0, 'maxt': 600, 'dt':1, 'minx':0, 'maxx': 500} def make_upper_boundary(**kw): x = kw['maxx']-kw['minx'] t0 = kw['ta1'] x2 = np.arange(1, x-(kw['xa2']+kw['xa1']), kw['dx']) m2 = kw['dta']/x2[-1] seg1 = np.ones(int(kw['xa1']/kw['dx'])) seg3 = np.ones(int(kw['xa2']/kw['dx'])) seg2 = x2 * m2 interface = t0 + np.concatenate((seg1, seg2, kw['dta']+seg3)) return interface def make_lower_boundary(**kw): x = kw['maxx']-kw['minx'] t1 = kw['ta1'] + kw['tb1'] x2 = np.arange(1, x-(kw['xa2']+kw['xa1']), kw['dx']) m2 = (kw['dta']+kw['dtb'])/x2[-1] seg1 = np.ones(int(kw['xa1']/kw['dx'])) seg3 = np.ones(int(kw['xa2']/kw['dx'])) seg2 = x2 * m2 interface = t1 + np.concatenate((seg1, seg2, seg2[-1]+seg3)) return interface def make_wedge(kwargs): upper_interface = make_upper_boundary(**kwargs) lower_interface = make_lower_boundary(**kwargs) return upper_interface, lower_interface def plot_interfaces(ax, upper, lower, **kw): ax.plot(upper,'-r') ax.plot(lower,'-b') ax.set_ylim(0,600) ax.set_xlim(kw['minx'],kw['maxx']) ax.invert_yaxis() upper, lower = make_wedge(defaults) f = plt.figure() ax = f.add_subplot(111) plot_interfaces(ax, upper, lower, **defaults) def make_meshgrid(**kw): upper, lower = make_wedge(defaults) t = np.arange(kw['mint'], kw['maxt']-1, kw['dt']) x = np.arange(kw['minx'], kw['maxx']-1, kw['dx']) xv, yv = np.meshgrid(x, t, sparse=False, indexing='ij') return xv, yv xv, yv = make_meshgrid(**defaults) conditions = {'upper': yv.T < upper, 'middle': (yv.T >= upper) & (yv.T <= lower), 'lower': yv.T > lower } labels = {'upper': 1, 'middle':2, 'lower': 3} d = yv.T.copy() for name, cond in conditions.items(): d[cond] = labels[name] plt.imshow(d, cmap='copper') vp = np.array([3300., 3200., 3300.]) rho = np.array([2600., 2550., 2650.]) AI = vp*rho AI model = d.copy() model[model == 1] = AI[0] model[model == 2] = AI[1] model[model == 3] = AI[2] def wvlt(f): return ricker(0.512, 0.001, f) def conv(a): return np.convolve(wvlt(f), a, mode='same') plt.imshow(model, cmap='Spectral') plt.colorbar() plt.title('Impedances') Explanation: Let's make a more generic wedge that will handle any 3 layer case we want to make. 
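Because all of the geometry lives in the defaults dictionary defined in the code above, making a different wedge is just a matter of overriding a few entries before calling the same helpers. The parameter values below are arbitrary examples, not part of the original notebook.

# A thicker wedge with shorter flat shoulders, reusing the same helpers.
params = dict(defaults, tb1=60, dtb=100, xa1=50, xa2=50)
upper2, lower2 = make_wedge(params)

fig = plt.figure()
ax = fig.add_subplot(111)
plot_interfaces(ax, upper2, lower2, **params)
plt.show()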
End of explanation # These are just some plotting parameters rc_params = {'cmap':'RdBu', 'vmax':0.05, 'vmin':-0.05, 'aspect':0.75} txt_params = {'fontsize':12, 'color':'black', 'horizontalalignment':'center', 'verticalalignment':'center'} tx = [0.85*defaults['maxx'],0.85*defaults['maxx'],0.85*defaults['maxx']] ty = [(defaults['ta1'] + defaults['dta'])/2, defaults['ta1'] + defaults['dta'] + (defaults['dtb']/1.33), defaults['maxt']-(defaults['maxt'] - defaults['ta1'] - defaults['dta'] - defaults['dtb'])/2] rock_names = ['shale1', 'sand', 'shale2'] defaults['ta1'], defaults['dta'], defaults['dtb']/1.25 rc = (model[1:] - model[:-1]) / (model[1:] + model[:-1]) Explanation: Plotting the synthetic End of explanation freqs = np.array([7,14,21]) f, axs = plt.subplots(1,len(freqs), figsize=(len(freqs)*5,6)) for i, f in enumerate(freqs): axs[i].imshow(np.apply_along_axis(conv, 0, rc), **rc_params) [axs[i].text(tx[j], ty[j], rock_names[j], **txt_params) for j in range(3)] plot_interfaces(axs[i], upper, lower, **defaults) axs[i].set_ylim(defaults['maxt'],defaults['mint']) axs[i].set_title( f'{f} Hz wavelet' ) axs[i].grid(alpha=0.5) Explanation: We can make use of the awesome apply_along_axis in Numpy to avoid looping over all the traces. https://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html End of explanation
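For readers unfamiliar with np.apply_along_axis, the call above is just a vectorised way of convolving the wavelet with every trace (column) of rc. The explicit loop below does the same thing and is only meant as a sketch to show the equivalence; it assumes rc and conv are still defined from the cells above.

import numpy as np

# Explicit-loop equivalent of np.apply_along_axis(conv, 0, rc).
synth_loop = np.zeros_like(rc)
for j in range(rc.shape[1]):
    synth_loop[:, j] = conv(rc[:, j])

synth_vec = np.apply_along_axis(conv, 0, rc)
print(np.allclose(synth_loop, synth_vec))  # expected: True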
3,680
Given the following text description, write Python code to implement the functionality described below step by step Description: E2E ML on GCP Step1: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. Step2: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step3: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs Step4: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas Step5: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. Step6: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step7: Only if your bucket doesn't already exist Step8: Finally, validate access to your Cloud Storage bucket by examining its contents Step9: Service Account If you don't know your service account, try to get your service account using gcloud command by executing the second cell below. Step10: Set service account access for Vertex AI Pipelines Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account. Step11: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Step12: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. 
Step13: Downloading the data After creating the bucket, the cell below will download the dataset into a CSV file and save it in GCS Step16: Construct pipeline components Create component Step19: Create component Step22: Train the BigQuery ML model To train the BigQuery ML model, you the following Step23: Create component Step25: Create component Step26: Create component Step27: Construct component Step28: Construct the rapid prototyoing pipeline Next, you construct pipeline, as follows Step29: Compile and execute the pipeline Finally, you compile the pipleline and then execute it with the following pipeline parameters Step30: Wait for the pipeline to complete Currently, your pipeline is running asynchronous by using the submit() method. To have run it synchronously, you would have invoked the run() method. In this last step, you block on the asynchronously executed waiting for completion using the wait() method. Step31: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ! pip3 install --quiet --upgrade google-cloud-aiplatform {USER_FLAG} -q ! pip3 install {USER_FLAG} --quiet -U google-cloud-pipeline-components==1.0 kfp -q ! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-bigquery -q Explanation: E2E ML on GCP: MLOps stage 3 : Get started with rapid prototyping with AutoML and BQML <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_rapid_prototyping_bqml_automl.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_rapid_prototyping_bqml_automl.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage3/get_started_with_rapid_prototyping_bqml_automl.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use Vertex AI Pipelines to rapid prototype a model using both AutoML and BQML, do an evaluation comparison, for a baseline, before progressing to a custom model. <img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl_and_bqml.png" /> Dataset The Abalone Dataset <img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/dataset.png" /> <p>Dataset Credits</p> <p>Dua, D. and Graff, C. (2019). UCI Machine Learning Repository <a href="http://archive.ics.uci.edu/ml">http://archive.ics.uci.edu/ml</a>. Irvine, CA: University of California, School of Information and Computer Science.</p> <p><a href="https://archive.ics.uci.edu/ml/datasets/abalone">Direct link</a></p> Attribute Information: <p>Given is the attribute name, attribute type, the measurement unit and a brief description. 
The number of rings is the value to predict: either as a continuous value or as a classification problem.</p> <body> <table> <tr> <th>Name</th> <th>Data Type</th> <th>Measurement Unit</th> <th>Description</th> </tr> <tr> <td>Sex</td> <td>nominal</td> <td>--</td> <td>M, F, and I (infant)</td> </tr> <tr> <td>Length</td> <td>continuous</td> <td>mm</td> <td>Longest shell measurement</td> </tr> <tr> <td>Diameter</td> <td>continuous</td> <td>mm</td> <td>perpendicular to length</td> </tr> <tr> <td>Height</td> <td>continuous</td> <td>mm</td> <td>with meat in shell</td> </tr> <tr> <td>Whole weight</td> <td>continuous</td> <td>grams</td> <td>whole abalone</td> </tr> <tr> <td>Shucked weight</td> <td>continuous</td> <td>grams</td> <td>weight of meat</td> </tr> <tr> <td>Viscera weight</td> <td>continuous</td> <td>grams</td> <td>gut weight (after bleeding)</td> </tr> <tr> <td>Shell weight</td> <td>continuous</td> <td>grams</td> <td>after being dried</td> </tr> <tr> <td>Rings</td> <td>integer</td> <td>--</td> <td>+1.5 gives the age in years</td> </tr> </table> </body> Objective In this tutorial, you learn how to use Vertex AI Predictions for rapid prototyping a model. This tutorial uses the following Google Cloud ML services: Vertex AI Pipelines Vertex AI AutoML Vertex AI BigQuery ML Google Cloud Pipeline Components The steps performed include: Creating a BigQuery and Vertex AI training dataset. Training a BigQuery ML and AutoML model. Extracting evaluation metrics from the BigQueryML and AutoML models. Selecting the best trained model. Deploying the best trained model. Testing the deployed model infrastructure. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the packages required for executing this notebook. End of explanation # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. 
This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Vertex AI Workbench, then don't execute this code IS_COLAB = "google.colab" in sys.modules if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv( "DL_ANACONDA_HOME" ): if "google.colab" in sys.modules: IS_COLAB = True from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation import os PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation REGION = "[your-region]" # @param {type:"string"} if REGION == "[your-region]": REGION = "us-central1" Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training or prediction with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions. 
End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"} BUCKET_URI = f"gs://{BUCKET_NAME}" if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]": BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_URI Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_URI Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"} if ( SERVICE_ACCOUNT == "" or SERVICE_ACCOUNT is None or SERVICE_ACCOUNT == "[your-service-account]" ): # Get your service account from gcloud if not IS_COLAB: shell_output = !gcloud auth list 2>/dev/null SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip() if IS_COLAB: shell_output = ! gcloud projects describe $PROJECT_ID # print("shell_output=", shell_output) project_number = shell_output[-1].split(":")[1].strip().replace("'", "") SERVICE_ACCOUNT = f"{project_number}[email protected]" print("Service Account:", SERVICE_ACCOUNT) Explanation: Service Account If you don't know your service account, try to get your service account using gcloud command by executing the second cell below. End of explanation ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI Explanation: Set service account access for Vertex AI Pipelines Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account. End of explanation import sys from datetime import datetime from typing import NamedTuple import google.cloud.aiplatform as aip from google.cloud import bigquery from kfp import dsl from kfp.v2 import compiler from kfp.v2.dsl import Artifact, Input, Metrics, Output, component Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI) Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation DATA_FOLDER = f"{BUCKET_URI}/data" RAW_INPUT_DATA = f"{DATA_FOLDER}/abalone.csv" ! 
gsutil cp gs://cloud-samples-data/vertex-ai/community-content/datasets/abalone/abalone.data {RAW_INPUT_DATA} Explanation: Downloading the data After creating the bucket, the cell below will download the dataset into a CSV file and save it in GCS End of explanation @component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery"]) def import_data_to_bigquery( project: str, bq_location: str, bq_dataset: str, gcs_data_uri: str, raw_dataset: Output[Artifact], table_name_prefix: str = "abalone", ): Outputs: output['uri'] # BigQuery table from google.cloud import bigquery # Construct a BigQuery client object. client = bigquery.Client(project=project, location=bq_location) def load_dataset(gcs_uri, table_id): Load CSV data into BigQuery table job_config = bigquery.LoadJobConfig( schema=[ bigquery.SchemaField("Sex", "STRING"), bigquery.SchemaField("Length", "NUMERIC"), bigquery.SchemaField("Diameter", "NUMERIC"), bigquery.SchemaField("Height", "NUMERIC"), bigquery.SchemaField("Whole_weight", "NUMERIC"), bigquery.SchemaField("Shucked_weight", "NUMERIC"), bigquery.SchemaField("Viscera_weight", "NUMERIC"), bigquery.SchemaField("Shell_weight", "NUMERIC"), bigquery.SchemaField("Rings", "NUMERIC"), ], skip_leading_rows=1, # The source format defaults to CSV, so the line below is optional. source_format=bigquery.SourceFormat.CSV, ) print(f"Loading {gcs_uri} into {table_id}") load_job = client.load_table_from_uri( gcs_uri, table_id, job_config=job_config ) # Make an API request. load_job.result() # Waits for the job to complete. destination_table = client.get_table(table_id) # Make an API request. print("Loaded {} rows.".format(destination_table.num_rows)) def create_dataset_if_not_exist(bq_dataset_id, bq_location): print( "Checking for existence of bq dataset. If it does not exist, it creates one" ) dataset = bigquery.Dataset(bq_dataset_id) dataset.location = bq_location dataset = client.create_dataset(dataset, exists_ok=True, timeout=300) print(f"Created dataset {dataset.full_dataset_id} @ {dataset.location}") bq_dataset_id = f"{project}.{bq_dataset}" create_dataset_if_not_exist(bq_dataset_id, bq_location) raw_table_name = f"{table_name_prefix}_raw" table_id = f"{project}.{bq_dataset}.{raw_table_name}" print("Deleting any tables that might have the same name on the dataset") client.delete_table(table_id, not_found_ok=True) print("will load data to table") load_dataset(gcs_data_uri, table_id) raw_dataset_uri = f"bq://{table_id}" raw_dataset.uri = raw_dataset_uri Explanation: Construct pipeline components Create component: Import CSV data to BigQuery table This component takes the csv file and imports it to a table in BigQuery, as follows: If the dataset does not exist, the dataset is created. If the table within the dataset exists, the table is deleted and recreated. The CSV data is imported into the table. This component returns the BigQuery table raw_dataset as an artifact. 
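For interactive debugging outside the pipeline, the imported table can be sanity-checked with the BigQuery client once the component has run. This is a sketch that assumes the dataset name rapid_prototype and location US used later in this notebook, plus the default abalone table prefix.

from google.cloud import bigquery

# Count the rows that landed in the raw table created by the component above.
bq_client = bigquery.Client(project=PROJECT_ID, location="US")
table_id = f"{PROJECT_ID}.rapid_prototype.abalone_raw"
for row in bq_client.query(f"SELECT COUNT(*) AS n FROM `{table_id}`").result():
    print(f"{table_id} contains {row.n} rows")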
End of explanation @component( base_image="python:3.9", packages_to_install=["google-cloud-bigquery"], ) def split_dataset( raw_dataset: Input[Artifact], bq_location: str, ) -> NamedTuple( "bqml_split", [ ("dataset_uri", str), ("dataset_bq_uri", str), ("test_dataset_uri", str), ], ): from collections import namedtuple from google.cloud import bigquery raw_dataset_uri = raw_dataset.uri table_name = raw_dataset_uri.split("bq://")[-1] print(table_name) raw_dataset_uri = table_name.split(".") print(raw_dataset_uri) project = raw_dataset_uri[0] bq_dataset = raw_dataset_uri[1] bq_raw_table = raw_dataset_uri[2] client = bigquery.Client(project=project, location=bq_location) def split_dataset(table_name_dataset): training_dataset_table_name = f"{project}.{bq_dataset}.{table_name_dataset}" split_query = f CREATE OR REPLACE TABLE `{training_dataset_table_name}` AS SELECT Sex, Length, Diameter, Height, Whole_weight, Shucked_weight, Viscera_weight, Shell_weight, Rings, CASE(ABS(MOD(FARM_FINGERPRINT(TO_JSON_STRING(f)), 10))) WHEN 9 THEN 'TEST' WHEN 8 THEN 'VALIDATE' ELSE 'TRAIN' END AS split_col FROM `{project}.{bq_dataset}.abalone_raw` f dataset_uri = f"{project}.{bq_dataset}.{bq_raw_table}" print("Splitting the dataset") query_job = client.query(split_query) # Make an API request. query_job.result() print(dataset_uri) print(split_query.replace("\n", " ")) return training_dataset_table_name def create_test_view(training_dataset_table_name, test_view_name="dataset_test"): view_uri = f"{project}.{bq_dataset}.{test_view_name}" query = f CREATE OR REPLACE VIEW `{view_uri}` AS SELECT Sex, Length, Diameter, Height, Whole_weight, Shucked_weight, Viscera_weight, Shell_weight, Rings FROM `{training_dataset_table_name}` f WHERE f.split_col = 'TEST' print(f"Creating view for --> {test_view_name}") print(query.replace("\n", " ")) query_job = client.query(query) # Make an API request. query_job.result() return view_uri table_name_dataset = "dataset" dataset_uri = split_dataset(table_name_dataset) test_dataset_uri = create_test_view(dataset_uri) dataset_bq_uri = "bq://" + dataset_uri print(f"dataset: {dataset_uri}") result_tuple = namedtuple( "bqml_split", ["dataset_uri", "dataset_bq_uri", "test_dataset_uri"], ) return result_tuple( dataset_uri=str(dataset_uri), dataset_bq_uri=str(dataset_bq_uri), test_dataset_uri=str(test_dataset_uri), ) Explanation: Create component: Split the dataset into train, test and eval For this pipeline, you set aside a portion of the dataset for test evaluation. While both AutoML and BQML will automatically split then datasets, in this example you will explicitly split the datasets into: TRAIN EVALUATE TEST AutoML and BigQuery ML use different nomenclatures for data splits: Learn more about How BigQuery ML splits the data. Learn more about How AutoML splits the data. You create the component split_dataset(), to psuedo randomly split the dataset. First, you add a new column split_col to identify for each example which split the example belongs to. Then you use the psuedo random method to assign each example to one of the three datasets. The column values TRAIN, TEST and EVALUATE are recognized by both AutoML and BigQuery ML for data set splits. Finally, you create a separate table view for the test split. As input, the component takes the dataset Artifact from the import_data_to_bigquery() component and as output returns: dataset_uri: The BigQuery URI to the dataset in the form: project.dataset.table. 
dataset_bq_uri: The BigQueryn URI to the dataset in the form: bq://project.dataset.table. test_dataset_uri: The BigQuery URI to the test view of the dataset in the form: project.dataset.table. Terminology <ul> <li>Model trials <p>The training set is used to train models with different preprocessing, architecture, and hyperparameter option combinations. These models are evaluated on the validation set for quality, which guides the exploration of additional option combinations. The best parameters and architectures determined in the parallel tuning phase are used to train two ensemble models as described below.</p> </li> <li>Candidate Model <p>AutoML and BigQuery ML services train a candidate model for evaluation, using the training and validation dataset splits. The services generates the final model evaluation metrics on the respective model, using the test dataset split. This is the first time in the process that the test set is used. This approach ensures that the final evaluation metrics are an unbiased reflection of how well the final trained model will perform in production.</p> </li> <li>Serving (blessed) model <p>The candidate model with the best evaluation metrics. This model is the one that you use to request predictions.</p> </li> </ul> End of explanation # Note, this is a static function -- not a component def _create_model_query( project_id: str, bq_dataset: str, training_data_uri: str, model_name: str = "linear_regression_model_prototyping", ) -> str: model_uri = f"{project_id}.{bq_dataset}.{model_name}" model_options = OPTIONS ( MODEL_TYPE='LINEAR_REG', input_label_cols=['Rings'], DATA_SPLIT_METHOD='CUSTOM', DATA_SPLIT_COL='split_col' ) query = f CREATE OR REPLACE MODEL `{model_uri}` {model_options} AS SELECT Sex, Length, Diameter, Height, Whole_weight, Shucked_weight, Viscera_weight, Shell_weight, Rings, CASE(split_col) WHEN 'TEST' THEN TRUE ELSE FALSE END AS split_col FROM `{training_data_uri}`; print(query.replace("\n", " ")) return query Explanation: Train the BigQuery ML model To train the BigQuery ML model, you the following: Construct the CREATE MODEL query using a static Python function _create_model_query(), which runs in the context of the pipeline. Call the prebuilt component BigQueryCreateModelOp, with the constructed query, to train the BigQuery ML model. For this tutorial, you use a simple linear regression model on BQML. For a full list of models supported by BQML, look here: End-to-end user journey for each model. As pointed out before, BQML and AutoML use different split terminologies, so we do an adaptation of the <i>split_col</i> column directly on the SELECT portion of the CREATE model query: When the value of DATA_SPLIT_METHOD is 'CUSTOM', the corresponding column should be of type BOOL. The rows with TRUE or NULL values are used as evaluation data. Rows with FALSE values are used as training data. 
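The generated SQL can also be prototyped directly against BigQuery before wiring it into the pipeline, which makes it faster to iterate on the model OPTIONS. The sketch below assumes the split component has already produced the rapid_prototype.dataset table used later in this notebook.

from google.cloud import bigquery

# Run the CREATE MODEL statement produced by _create_model_query by hand.
bq_client = bigquery.Client(project=PROJECT_ID, location="US")
create_sql = _create_model_query(
    project_id=PROJECT_ID,
    bq_dataset="rapid_prototype",
    training_data_uri=f"{PROJECT_ID}.rapid_prototype.dataset",
)
job = bq_client.query(create_sql)
job.result()  # blocks until BigQuery ML finishes training
print("BQML training job finished:", job.job_id)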
End of explanation @component(base_image="python:3.9") def interpret_bqml_evaluation_metrics( bqml_evaluation_metrics: Input[Artifact], metrics: Output[Metrics] ): import math metadata = bqml_evaluation_metrics.metadata for r in metadata["rows"]: rows = r["f"] schema = metadata["schema"]["fields"] output = {} for metric, value in zip(schema, rows): metric_name = metric["name"] val = float(value["v"]) output[metric_name] = val metrics.log_metric(metric_name, val) if metric_name == "mean_squared_error": rmse = math.sqrt(val) metrics.log_metric("root_mean_squared_error", rmse) metrics.log_metric("framework", "BQML") print(output) Explanation: Create component: Interpreting the BigQuery ML model evaluation Next, you create a component to interpret the evaluation metrics from BigQueryEvaluateModelJobOp for the purpose of making a apple-to-apple comparison with evaluation metrics that are obtained from the AutoML model. The output of the pre-built component will be a table with the metrics obtained by BigQuery ML when training the model. In your BigQuery console, they look like the image below. <img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/bqml-evaluate.png?"> BigQuery ML does not give you a root mean squared error to the list of metrics. In this component, you manually add it to the metrics dictionary, and output the updated metrics dictionary as an Artifact. Learn more about BigQuery ML evaluation metrics. End of explanation @component( base_image="python:3.9", packages_to_install=[ "google-cloud-aiplatform", ], ) def interpret_automl_evaluation_metrics( region: str, model: Input[Artifact], metrics: Output[Metrics] ): ' For a list of available regression metrics, go here: gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml. More information on available metrics for different types of models: https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl import google.cloud.aiplatform.gapic as gapic # Get a reference to the Model Service client client_options = {"api_endpoint": f"{region}-aiplatform.googleapis.com"} model_service_client = gapic.ModelServiceClient(client_options=client_options) model_resource_name = model.metadata["resourceName"] model_evaluations = model_service_client.list_model_evaluations( parent=model_resource_name ) model_evaluation = list(model_evaluations)[0] available_metrics = [ "meanAbsoluteError", "meanAbsolutePercentageError", "rSquared", "rootMeanSquaredError", "rootMeanSquaredLogError", ] output = dict() for x in available_metrics: val = model_evaluation.metrics.get(x) output[x] = val metrics.log_metric(str(x), float(val)) metrics.log_metric("framework", "AutoML") print(output) Explanation: Create component: Interpreting the AutoML model evaluation Next, you create a component to interpret the evaluation metrics from the AutoML training of the model. Similar to BQML, AutoML also generates metrics during its model creation. These can be accessed in the UI, as seen below: <img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl-evaluate.png" /> Currently, there is not a pre-built-component to access these metrics programmatically. Instead, in this component you use the Vertex AI GAPIC (Google API Compiler), which auto-generates low-level gRPC interfaces to the AutoML evaluation service. 
End of explanation @component(base_image="python:3.9") def select_best_model( metrics_bqml: Input[Metrics], metrics_automl: Input[Metrics], thresholds_dict_str: str, best_metrics: Output[Metrics], reference_metric_name: str = "rmse", ) -> NamedTuple( "Outputs", [ ("deploy_decision", str), ("best_model", str), ("metric", float), ("metric_name", str), ], ): import json from collections import namedtuple best_metric = float("inf") best_model = None # BQML and AutoML use different metric names. metric_possible_names = [] if reference_metric_name == "mae": metric_possible_names = ["meanAbsoluteError", "mean_absolute_error"] elif reference_metric_name == "rmse": metric_possible_names = ["rootMeanSquaredError", "root_mean_squared_error"] metric_bqml = float("inf") metric_automl = float("inf") print(metrics_bqml.metadata) print(metrics_automl.metadata) for x in metric_possible_names: try: metric_bqml = metrics_bqml.metadata[x] print(f"Metric bqml: {metric_bqml}") except: print(f"{x} does not exist int the BQML dictionary") try: metric_automl = metrics_automl.metadata[x] print(f"Metric automl: {metric_automl}") except: print(f"{x} does not exist on the AutoML dictionary") # Change condition if higher is better. print(f"Comparing BQML ({metric_bqml}) vs AutoML ({metric_automl})") if metric_bqml <= metric_automl: best_model = "bqml" best_metric = metric_bqml best_metrics.metadata = metrics_bqml.metadata else: best_model = "automl" best_metric = metric_automl best_metrics.metadata = metrics_automl.metadata thresholds_dict = json.loads(thresholds_dict_str) deploy = False # Change condition if higher is better. if best_metric < thresholds_dict[reference_metric_name]: deploy = True if deploy: deploy_decision = "true" else: deploy_decision = "false" print(f"Which model is best? {best_model}") print(f"What metric is being used? {reference_metric_name}") print(f"What is the best metric? {best_metric}") print(f"What is the threshold to deploy? {thresholds_dict_str}") print(f"Deploy decision: {deploy_decision}") Outputs = namedtuple( "Outputs", ["deploy_decision", "best_model", "metric", "metric_name"] ) return Outputs( deploy_decision=deploy_decision, best_model=best_model, metric=best_metric, metric_name=reference_metric_name, ) Explanation: Create component: model selection Next, you create a component select_best_model() to compare the AutoML and BigQuery ML model evaluations, and choose between each model which one has the best metrics. Before the selection, the AutoML and BigQuery ML are candidate models, and the selected model is the blessed model. This component takes the following parameters: metrics_bqml: The metrics Artifact for the interpreted BigQuery ML model evaluation. metrics_automl: The metrics Artifact for the interpreted AutoML model evaluation. thresholds_dict_str: The metric threshold for decision to deploy the model. `reference_metric_name: The consolidated AutoML+BigQueryML metric names. This component returns the Artifact: deploy_decision: Whether to deploy a model -- exceeded minimum metric threshold. best_model: The blessed AutoML or BigQuery ML model to deploy. metric: The metric value of the best (blessed) model. metric_name: The name of the corresponding metric. Note: BigQuery and AutoML use different evaluation metric names, hence why you had to do a mapping of these different nomenclatures. 
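The deploy gate inside select_best_model reduces to a comparison between the winning model's metric and the value passed in thresholds_dict_str. The snippet below illustrates that logic in plain Python with made-up metric values; real values come from the two evaluation components.

import json

# Illustrative numbers only.
bqml_rmse = 2.31
automl_rmse = 2.18
thresholds_dict_str = '{"rmse": 2.5}'

best_model = "bqml" if bqml_rmse <= automl_rmse else "automl"
best_rmse = min(bqml_rmse, automl_rmse)
deploy = best_rmse < json.loads(thresholds_dict_str)["rmse"]
print(best_model, best_rmse, "deploy" if deploy else "do not deploy")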
End of explanation @component(base_image="python:3.9", packages_to_install=["google-cloud-aiplatform"]) def validate_infrastructure( endpoint: Input[Artifact], ) -> NamedTuple( "validate_infrastructure_output", [("instance", str), ("prediction", float)] ): import json from collections import namedtuple from google.cloud import aiplatform from google.protobuf import json_format from google.protobuf.struct_pb2 import Value def treat_uri(uri): return uri[uri.find("projects/") :] def request_prediction(endp, instance): instance = json_format.ParseDict(instance, Value()) instances = [instance] parameters_dict = {} parameters = json_format.ParseDict(parameters_dict, Value()) response = endp.predict(instances=instances, parameters=parameters) print("deployed_model_id:", response.deployed_model_id) print("predictions: ", response.predictions) # The predictions are a google.protobuf.Value representation of the model's predictions. predictions = response.predictions for pred in predictions: if type(pred) is dict and "value" in pred.keys(): # AutoML predictions prediction = pred["value"] elif type(pred) is list: # BQML Predictions return different format prediction = pred[0] return prediction endpoint_uri = endpoint.uri treated_uri = treat_uri(endpoint_uri) instance = { "Sex": "M", "Length": 0.33, "Diameter": 0.255, "Height": 0.08, "Whole_weight": 0.205, "Shucked_weight": 0.0895, "Viscera_weight": 0.0395, "Shell_weight": 0.055, } instance_json = json.dumps(instance) print("Will use the following instance: " + instance_json) endpoint = aiplatform.Endpoint(treated_uri) prediction = request_prediction(endpoint, instance) result_tuple = namedtuple( "validate_infrastructure_output", ["instance", "prediction"] ) return result_tuple(instance=str(instance_json), prediction=float(prediction)) Explanation: Construct component: validate the serving infrastructure Next, you create the component validate_infrastructure(). After the best (blessed) model has been deployed, we will validate the endpoint by making a simple prediction to it. End of explanation DISPLAY_NAME = "rapid-prototyping" @dsl.pipeline(name=DISPLAY_NAME, description="Rapid Prototyping") def train_pipeline( project: str, gcs_input_file_uri: str, region: str, bq_dataset: str, bq_location: str, bqml_model_export_location: str, bqml_serving_container_image_uri: str, endpoint_display_name: str, thresholds_dict_str: str, ): from google_cloud_pipeline_components import aiplatform as gcc_aip from google_cloud_pipeline_components.types import artifact_types from google_cloud_pipeline_components.v1.bigquery import ( BigqueryCreateModelJobOp, BigqueryEvaluateModelJobOp, BigqueryExportModelJobOp) from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp, ModelDeployOp) from google_cloud_pipeline_components.v1.model import ModelUploadOp from kfp.v2.components import importer_node # Imports data to BigQuery using a custom component. import_data_op = import_data_to_bigquery( project=project, bq_location=bq_location, bq_dataset=bq_dataset, gcs_data_uri=gcs_input_file_uri, ) # Splits the BQ dataset using a custom component. split_dataset_op = split_dataset( import_data_op.outputs["raw_dataset"], bq_location=bq_location ) bqml_training_data = split_dataset_op.outputs["dataset_uri"] # Generates the query to create a BQML table. create_model_query = _create_model_query( project_id=project, bq_dataset=bq_dataset, training_data_uri=bqml_training_data ) # Builds BQML model using pre-built-component. 
bqml_create_model_op = BigqueryCreateModelJobOp( project=project, location=bq_location, query=create_model_query ) bqml_model = bqml_create_model_op.outputs["model"] # Gathers BQML evaluation metrics using a pre-built-component. bqml_evaluate_op = BigqueryEvaluateModelJobOp( project=project, location=bq_location, model=bqml_model ) bqml_eval_metrics_raw = bqml_evaluate_op.outputs["evaluation_metrics"] # Analyzes evaluation BQML metrics using a custom component. interpret_bqml_evaluation_metrics_op = interpret_bqml_evaluation_metrics( bqml_evaluation_metrics=bqml_eval_metrics_raw ) bqml_eval_metrics = interpret_bqml_evaluation_metrics_op.outputs["metrics"] # Exports the BQML model to a GCS bucket using a pre-built-component. bqml_export_op = BigqueryExportModelJobOp( project=project, location=bq_location, model=bqml_model, model_destination_path=bqml_model_export_location, ).after(bqml_create_model_op) bqml_exported_gcs_path = bqml_export_op.outputs["exported_model_path"] import_unmanaged_model_task = importer_node.importer( artifact_uri=bqml_exported_gcs_path, artifact_class=artifact_types.UnmanagedContainerModel, metadata={ "containerSpec": { "imageUri": BQML_SERVING_CONTAINER_IMAGE_URI, }, }, ).after(bqml_export_op) # Uploads the recently exported the BQML model from GCS into Vertex AI using a pre-built-component. bqml_model_upload_op = ModelUploadOp( project=project, location=region, display_name=DISPLAY_NAME + "_bqml", unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"], ).after(import_unmanaged_model_task) bqml_vertex_model = bqml_model_upload_op.outputs["model"] # Creates a Vertex AI Tabular dataset using a pre-built-component. dataset_create_op = gcc_aip.TabularDatasetCreateOp( project=project, location=region, display_name=DISPLAY_NAME, bq_source=split_dataset_op.outputs["dataset_bq_uri"], ) # Trains an AutoML Tables model using a pre-built-component. automl_training_op = gcc_aip.AutoMLTabularTrainingJobRunOp( project=project, location=region, display_name=f"{DISPLAY_NAME}_automl", optimization_prediction_type="regression", optimization_objective="minimize-rmse", predefined_split_column_name="split_col", dataset=dataset_create_op.outputs["dataset"], target_column="Rings", column_transformations=[ {"categorical": {"column_name": "Sex"}}, {"numeric": {"column_name": "Length"}}, {"numeric": {"column_name": "Diameter"}}, {"numeric": {"column_name": "Height"}}, {"numeric": {"column_name": "Whole_weight"}}, {"numeric": {"column_name": "Shucked_weight"}}, {"numeric": {"column_name": "Viscera_weight"}}, {"numeric": {"column_name": "Shell_weight"}}, {"numeric": {"column_name": "Rings"}}, ], ) automl_model = automl_training_op.outputs["model"] # Analyzes evaluation AutoML metrics using a custom component. automl_eval_op = interpret_automl_evaluation_metrics( region=region, model=automl_model ) automl_eval_metrics = automl_eval_op.outputs["metrics"] # 1) Decides which model is best (AutoML vs BQML); # 2) Determines if the best model meets the deployment condition. best_model_task = select_best_model( metrics_bqml=bqml_eval_metrics, metrics_automl=automl_eval_metrics, thresholds_dict_str=thresholds_dict_str, ) # If the deploy condition is True, then deploy the best model. with dsl.Condition( best_model_task.outputs["deploy_decision"] == "true", name="deploy_decision", ): # Creates a Vertex AI endpoint using a pre-built-component. 
endpoint_create_op = EndpointCreateOp( project=project, location=region, display_name=endpoint_display_name, ).after(best_model_task) # In case the BQML model is the best... with dsl.Condition( best_model_task.outputs["best_model"] == "bqml", name="deploy_bqml", ): # Deploys the BQML model (now on Vertex AI) to the recently created endpoint using a pre-built component. model_deploy_bqml_op = ModelDeployOp( endpoint=endpoint_create_op.outputs["endpoint"], model=bqml_vertex_model, deployed_model_display_name=DISPLAY_NAME + "_best_bqml", dedicated_resources_machine_type="n1-standard-2", dedicated_resources_min_replica_count=2, dedicated_resources_max_replica_count=2, traffic_split={ "0": 100 }, # newly deployed model gets 100% of the traffic ).set_caching_options(False) # Sends an online prediction request to the recently deployed model using a custom component. validate_infrastructure( endpoint=endpoint_create_op.outputs["endpoint"] ).set_caching_options(False).after(model_deploy_bqml_op) # In case the AutoML model is the best... with dsl.Condition( best_model_task.outputs["best_model"] == "automl", name="deploy_automl", ): # Deploys the AutoML model to the recently created endpoint using a pre-built component. model_deploy_automl_op = ModelDeployOp( endpoint=endpoint_create_op.outputs["endpoint"], model=automl_model, deployed_model_display_name=DISPLAY_NAME + "_best_automl", dedicated_resources_machine_type="n1-standard-2", dedicated_resources_min_replica_count=2, dedicated_resources_max_replica_count=2, traffic_split={ "0": 100 }, # newly deployed model gets 100% of the traffic ).set_caching_options(False) # Sends an online prediction request to the recently deployed model using a custom component. validate_infrastructure( endpoint=endpoint_create_op.outputs["endpoint"] ).set_caching_options(False).after(model_deploy_automl_op) Explanation: Construct the rapid prototyoing pipeline Next, you construct pipeline, as follows: Data Preparation import_data_to_bigquery : Import the CSV data into a BigQuery table. split_dataset: Split the imported dataset into train, test and evaluation sets. Note: Once the dataset is split, the training and evaluation of the BigQuery ML and AutoML models happens in parallel. BigQuery ML model training _create_model_query: Construct the query for training a BigQuery ML model. BigqueryCreateModelJobOp: Train a BigQuery ML model. BigqueryEvaluateModelJobOp: Evaluate the BigQuery ML model. interpret_bqml_evaluation_metrics: Obtain the evaluation metrics for the BigQuery ML model to do apple-to-apple comparison with AutoML model. BigqueryExportModelJobOp: Export the BigQuery ML model artifacts to Cloud Storage location. ModelUploadOp: Upload the exported BigQuery ML model to a Vertex AI model resource. AutoML model training TabularDatasetCreateOp: Create a Vertex AI dataset from the BigQuery table. AutoMLTabularTrainingJobRunOp: Train an AutoML model. interpret_automl_evaluation_metrics: Obtain the evaluation metrics for the AutoML model to do apple-to-apple comparison with BigQuery model. Evaluation - select_best_model: Select the BigQuery ML or AutoML model with the best metric evaluation Deployment EndpointCreateOp: Create a Vertex AI endpoint for deploying the best (blessed model). ModelDeployOp: Deploy the corresponding model to a Vertex AI endpoint. validate_infrastructure: Validate the deployed model serving infrastructure. 
End of explanation PIPELINE_JSON_PKG_PATH = "rapid_prototyping.json" PIPELINE_ROOT = f"{BUCKET_URI}/pipeline_root" image_prefix = REGION.split("-")[0] BQML_SERVING_CONTAINER_IMAGE_URI = ( f"{image_prefix}-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest" ) BQ_DATASET = "rapid_prototype" # j90wipxexhrgq3cquanc5" # @param {type:"string"} BQ_LOCATION = "US" # @param {type:"string"} BQ_LOCATION = BQ_LOCATION.upper() BQML_EXPORT_LOCATION = f"{BUCKET_URI}/artifacts/bqml" ENDPOINT_DISPLAY_NAME = f"{DISPLAY_NAME}_endpoint" compiler.Compiler().compile( pipeline_func=train_pipeline, package_path=PIPELINE_JSON_PKG_PATH, ) pipeline_params = { "project": PROJECT_ID, "region": REGION, "gcs_input_file_uri": RAW_INPUT_DATA, "bq_dataset": BQ_DATASET, "bq_location": BQ_LOCATION, "bqml_model_export_location": BQML_EXPORT_LOCATION, "bqml_serving_container_image_uri": BQML_SERVING_CONTAINER_IMAGE_URI, "endpoint_display_name": ENDPOINT_DISPLAY_NAME, "thresholds_dict_str": '{"rmse": 2.5}', } print(pipeline_params) pipeline_job = aip.PipelineJob( display_name=DISPLAY_NAME, template_path=PIPELINE_JSON_PKG_PATH, pipeline_root=PIPELINE_ROOT, parameter_values=pipeline_params, enable_caching=False, ) response = pipeline_job.submit() Explanation: Compile and execute the pipeline Finally, you compile the pipleline and then execute it with the following pipeline parameters: project: Your project ID. region: Your region for the project. gcs_input_file_uri: The Cloud Storage location of the CSV input data. bq_dataset: Your name for the BigQuery dataset. bq_location: Your region for the BigQuery dataset. bqml_model_export_location: The Cloud Storage location to export the BigQuery ML model artifacts to. bqml_serving_container_image_uri: The deployment (serving) image for the exported BigQuery ML model. endpoint_display_name: The human readable display name for the endpoint for the deployed blessed model. thresholds_dict_str: The evaluation metrics minimum threshold for a candidate model to be considered for a blessed model. End of explanation pipeline_job.wait() Explanation: Wait for the pipeline to complete Currently, your pipeline is running asynchronous by using the submit() method. To have run it synchronously, you would have invoked the run() method. In this last step, you block on the asynchronously executed waiting for completion using the wait() method. End of explanation delete_bucket = True print("Will delete endpoint") endpoints = aip.Endpoint.list( filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time" ) endpoint = endpoints[0] endpoint.undeploy_all() aip.Endpoint.delete(endpoint.resource_name) print("Deleted endpoint:", endpoint) print("Will delete models") suffix_list = ["bqml", "automl", "best"] for suffix in suffix_list: try: model_display_name = f"{DISPLAY_NAME}_{suffix}" print("Will delete model with name " + model_display_name) models = aip.Model.list( filter=f"display_name={model_display_name}", order_by="create_time" ) model = models[0] aip.Model.delete(model) print("Deleted model:", model) except Exception as e: print(e) print("Will delete Vertex dataset") datasets = aip.TabularDataset.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) dataset = datasets[0] aip.TabularDataset.delete(dataset) print("Deleted Vertex dataset:", dataset) pipelines = aip.PipelineJob.list( filter=f"pipeline_name={DISPLAY_NAME}", order_by="create_time" ) pipeline = pipelines[0] aip.PipelineJob.delete(pipeline) print("Deleted pipeline:", pipeline) # Construct a BigQuery client object. 
bq_client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION) # TODO(developer): Set dataset_id to the ID of the dataset to fetch. dataset_id = f"{PROJECT_ID}.{BQ_DATASET}" print(f"Will delete BQ dataset '{dataset_id}' from location {BQ_LOCATION}.") # Use the delete_contents parameter to delete a dataset and its contents. # Use the not_found_ok parameter to not receive an error if the dataset has already been deleted. bq_client.delete_dataset( dataset_id, delete_contents=True, not_found_ok=True ) # Make an API request. print(f"Deleted BQ dataset '{dataset_id}' from location {BQ_LOCATION}.") if delete_bucket or os.getenv("IS_TESTING"): ! gsutil rm -r $BUCKET_URI Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: End of explanation
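As a small optional variation on the cleanup cells above (an addition, not part of the original tutorial), each deletion can be wrapped so that a missing or already-deleted resource does not abort the remaining steps. The helper below reuses the same aip client and DISPLAY_NAME as the code above.

# Optional cleanup helper -- an addition, not from the original tutorial.
def try_cleanup(step_name, fn):
    # Run one cleanup step; report and continue if the resource is already gone.
    try:
        fn()
        print(f"Cleaned up: {step_name}")
    except Exception as e:
        print(f"Skipping {step_name}: {e}")

def delete_endpoints():
    for endpoint in aip.Endpoint.list(filter=f"display_name={DISPLAY_NAME}_endpoint"):
        endpoint.undeploy_all()
        endpoint.delete()

try_cleanup("endpoints", delete_endpoints)
try_cleanup(
    "pipeline jobs",
    lambda: aip.PipelineJob.list(filter=f"pipeline_name={DISPLAY_NAME}")[0].delete(),
)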
3,681
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2021 The TensorFlow Authors. Step1: TensorFlow Ranking Keras pipeline for distributed training <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Import TensorFlow Ranking library and useful libraries through the notebook. Step3: Data preparation Download training, test data, and vocabulary file. Step4: Here, the dataset is saved in a ranking-specific ExampleListWithContext (ELWC) format. Detailed in the next section, shows how to generate and store data in the ELWC format. ELWC Data Formats for Ranking The data for a single question consists of a list of query_tokens representing the question (the "context"), and a list of answers (the "examples"). Each answer is represented as a list of document_tokens and a relevance score. The following code shows a simplified representation of a question's data Step8: The data files, downloaded in the previous section, contain a serialized protobuffer representation of this sort of data. These protobuffers are quite long when viewed as text, but encode the same data. Step9: While the text format is verbose, protos can be efficiently serialized to a byte string (and parsed back into a proto) Step10: The following parser configuration parses the binary representation into a dictionary of tensors Step11: Note with ELWC, you could also generate size and/or mask features to indicate the valid size and/or to mask out the valid entries in the list as long as size_feature_name and/or mask_feature_name are defined. The above parser is defined in tfr.data and wrapped in our predefined dataset builder tfr.keras.pipeline.BaseDatasetBuilder. Overview of the ranking pipeline Follow the steps depicted in the figure below to train a ranking model with ranking pipeline. In particular, this example uses the tfr.keras.model.FeatureSpecInputCreator and tfr.keras.pipeline.BaseDatasetBuilder defined specific for the datasets with feature_spec. Create a model builder Instead of directly building a tf.keras.Model object, create a model_builder, which is called in the ranking pipeline to build the tf.keras.Model, as all training parameters must be defined under the strategy.scope (called in train_and_validate function in ranking pipeline) in order to train with distributed strategies. This framework uses the keras functional api to build models, where inputs (tf.keras.Input), preprocessors (tf.keras.layers.experimental.preprocessing), and scorer (tf.keras.Sequential) are required to define the model. Specify Features Feature Specification are TensorFlow abstractions that are used to capture rich information about each feature. Create feature specifications for context features, example features, and labels, consistent with the input formats for ranking, such as ELWC format. The default_value of label_spec feature is set to -1 to take care of the padding items to be masked out. Step12: Define input_creator input_creator create dictionaries of context and example tf.keras.Inputs for input features defined in context_feature_spec and example_feature_spec. 
Step13: Callling the input_creator returns the dictionaries of Keras-Tensors, that are used as the inputs when building the model Step14: Define preprocessor In the preprocessor, the input tokens are converted to a one-hot vector through the String Lookup preprocessing layer and then embeded as an embedding vector through the Embedding preprocessing layer. Finally, compute an embedding vector for the full sentence by the average of token embeddings. Step15: Note that the vocabulary uses the same tokenizer that BERT does. You could also use BertTokenizer to tokenize the raw sentences. Step16: Define scorer This example uses a Deep Neural Network (DNN) univariate scorer, predefined in TensorFlow Ranking. Step17: Make model_builder In addition to input_creator, preprocessor, and scorer, specify the mask feature name to take the mask feature generated in datasets. Step18: Check the model architecture, Step19: Create a dataset builder A dataset_builder is designed to create datasets for training and validation and to define signatures for exporting trained model as tf.function. Specify data hyperparameters Define the hyperparameters to be used to build datasets in dataset_builder by creating a dataset_hparams object. Load training dataset at /tmp/train.tfrecords with tf.data.TFRecordDataset reader. In each batch, each feature tensor has a shape (batch_size, list_size, feature_sizes) with batch_size equal to 32 and list_size equal to 50. Validate with the test data at /tmp/test.tfrecords at the same batch_size and list_size. Step20: Make dataset_builder TensorFlow Ranking provides a pre-defined SimpleDatasetBuilder to generate datasets from ELWC using feature_specs. As a mask feature is used to determine valid examples in each padded list, must specify the mask_feature_name consistent with the mask_feature_name used in model_builder. Step21: Create a ranking pipeline A ranking_pipeline is an optimized ranking model training package that implement distributed training, export model as tf.function, and integrate useful callbacks including tensorboard and restoring upon failures. Specify pipeline hyperparameters Specify the hyperparameters to be used to run the pipeline in ranking_pipeline by creating a pipeline_hparams object. Train the model with approx_ndcg_loss at learning rate equal to 0.05 for 5 epoch with 1000 steps in each epoch using MirroredStrategy. Evaluate the model on the validation dataset for 100 steps after each epoch. Save the trained model under /tmp/ranking_model_dir. Step22: Define ranking_pipeline TensorFlow Ranking provides a pre-defined SimplePipeline to support model training with distributed strategies. Step23: Train and evaluate the model The train_and_validate function evaluates the trained model on the validation dataset after every epoch. Step24: Launch TensorBoard Step25: <!-- <img class="tfo-display-only-on-site" src="https Step26: Load the saved model and run a prediction. Step27: Check the top 5 answers for question number 4.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2021 The TensorFlow Authors. End of explanation !pip install -q tensorflow-ranking tensorflow-serving-api !pip install -U "tensorflow-text==2.8.*" Explanation: TensorFlow Ranking Keras pipeline for distributed training <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/ranking/tutorials/ranking_dnn_distributed"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/ranking/blob/master/docs/tutorials/ranking_dnn_distributed.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/ranking/blob/master/docs/tutorials/ranking_dnn_distributed.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/ranking/docs/tutorials/ranking_dnn_distributed.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> TensorFlow Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. However, building and deploying a learning to rank model to operate at scale creates additional challenges beyond simply designing a model. The Ranking library provides workflow utility classes for building distributed training for large-scale ranking applications. For more information about these features, see the TensorFlow Ranking Overview. This tutorial shows you how to build a ranking model that enables a distributed processing strategy by using the Ranking library's support for a pipeline processing architecture. Note: An advanced version of this code is also available as a Python script. The script version supports flags for hyperparameters, and advanced use-cases like Document Interaction Network. ANTIQUE dataset In this tutorial, you will build a ranking model for ANTIQUE, a question-answering dataset. Given a query, and a list of answers, the objective is to rank the answers with optimal rank related metrics, such as NDCG. For more details about ranking metrics, review evaluation measures offline metrics. ANTIQUE is a publicly available dataset for open-domain non-factoid question answering, collected from Yahoo! answers. Each question has a list of answers, whose relevance are graded on a scale of 0-4, 0 for irrelevant and 4 for fully relevant. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with default values. The dataset is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on arXiv. Setup Download and install the TensorFlow Ranking and TensorFlow Serving packages. 
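Before moving on, to make the rank related metrics mentioned above concrete, here is a small self-contained illustration (an addition, not part of the tutorial code) of how DCG and NDCG can be computed for a single toy answer list with graded relevance labels in the 0-4 range, using one common gain and discount convention.

import numpy as np

def dcg(relevance_in_ranked_order):
    # Discounted cumulative gain with the common 2^rel - 1 gain and log2 rank discount.
    rel = np.asarray(relevance_in_ranked_order, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    return np.sum((2.0 ** rel - 1.0) / np.log2(ranks + 1.0))

def ndcg(relevance_in_ranked_order):
    # Normalize by the DCG of the ideally ordered list.
    ideal = sorted(relevance_in_ranked_order, reverse=True)
    ideal_dcg = dcg(ideal)
    return dcg(relevance_in_ranked_order) / ideal_dcg if ideal_dcg > 0 else 0.0

# A toy ranked list of graded relevances; a perfectly ordered list would score 1.0.
print(ndcg([4, 0, 2, 0, 1]))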
End of explanation import pathlib import tensorflow as tf import tensorflow_ranking as tfr import tensorflow_text as tf_text from tensorflow_serving.apis import input_pb2 from google.protobuf import text_format Explanation: Import TensorFlow Ranking library and useful libraries through the notebook. End of explanation !wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords" !wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords" !wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt" Explanation: Data preparation Download training, test data, and vocabulary file. End of explanation example_list_with_context = { "context": { "query_tokens": ["this", "is", "a", "question"] }, "examples": [ { "document_tokens": ["this", "is", "a", "relevant", "answer"], "relevance": [4] }, { "document_tokens": ["irrelevant", "data"], "relevance": [0] } ] } Explanation: Here, the dataset is saved in a ranking-specific ExampleListWithContext (ELWC) format. Detailed in the next section, shows how to generate and store data in the ELWC format. ELWC Data Formats for Ranking The data for a single question consists of a list of query_tokens representing the question (the "context"), and a list of answers (the "examples"). Each answer is represented as a list of document_tokens and a relevance score. The following code shows a simplified representation of a question's data: End of explanation CONTEXT = text_format.Parse( features { feature { key: "query_tokens" value { bytes_list { value: ["this", "is", "a", "question"] } } } }, tf.train.Example()) EXAMPLES = [ text_format.Parse( features { feature { key: "document_tokens" value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } } } feature { key: "relevance" value { int64_list { value: 4 } } } }, tf.train.Example()), text_format.Parse( features { feature { key: "document_tokens" value { bytes_list { value: ["irrelevant", "data"] } } } feature { key: "relevance" value { int64_list { value: 0 } } } }, tf.train.Example()), ] ELWC = input_pb2.ExampleListWithContext() ELWC.context.CopyFrom(CONTEXT) for example in EXAMPLES: example_features = ELWC.examples.add() example_features.CopyFrom(example) print(ELWC) Explanation: The data files, downloaded in the previous section, contain a serialized protobuffer representation of this sort of data. These protobuffers are quite long when viewed as text, but encode the same data. 
End of explanation serialized_elwc = ELWC.SerializeToString() print(serialized_elwc) Explanation: While the text format is verbose, protos can be efficiently serialized to a byte string (and parsed back into a proto) End of explanation def parse_elwc(elwc): return tfr.data.parse_from_example_list( [elwc], list_size=2, context_feature_spec={"query_tokens": tf.io.RaggedFeature(dtype=tf.string)}, example_feature_spec={ "document_tokens": tf.io.RaggedFeature(dtype=tf.string), "relevance": tf.io.FixedLenFeature(shape=[], dtype=tf.int64, default_value=0) }, size_feature_name="_list_size_", mask_feature_name="_mask_") parse_elwc(serialized_elwc) Explanation: The following parser configuration parses the binary representation into a dictionary of tensors: End of explanation context_feature_spec = { "query_tokens": tf.io.RaggedFeature(dtype=tf.string), } example_feature_spec = { "document_tokens": tf.io.RaggedFeature(dtype=tf.string), } label_spec = ( "relevance", tf.io.FixedLenFeature(shape=(1,), dtype=tf.int64, default_value=-1) ) Explanation: Note with ELWC, you could also generate size and/or mask features to indicate the valid size and/or to mask out the valid entries in the list as long as size_feature_name and/or mask_feature_name are defined. The above parser is defined in tfr.data and wrapped in our predefined dataset builder tfr.keras.pipeline.BaseDatasetBuilder. Overview of the ranking pipeline Follow the steps depicted in the figure below to train a ranking model with ranking pipeline. In particular, this example uses the tfr.keras.model.FeatureSpecInputCreator and tfr.keras.pipeline.BaseDatasetBuilder defined specific for the datasets with feature_spec. Create a model builder Instead of directly building a tf.keras.Model object, create a model_builder, which is called in the ranking pipeline to build the tf.keras.Model, as all training parameters must be defined under the strategy.scope (called in train_and_validate function in ranking pipeline) in order to train with distributed strategies. This framework uses the keras functional api to build models, where inputs (tf.keras.Input), preprocessors (tf.keras.layers.experimental.preprocessing), and scorer (tf.keras.Sequential) are required to define the model. Specify Features Feature Specification are TensorFlow abstractions that are used to capture rich information about each feature. Create feature specifications for context features, example features, and labels, consistent with the input formats for ranking, such as ELWC format. The default_value of label_spec feature is set to -1 to take care of the padding items to be masked out. End of explanation input_creator = tfr.keras.model.FeatureSpecInputCreator( context_feature_spec, example_feature_spec) Explanation: Define input_creator input_creator create dictionaries of context and example tf.keras.Inputs for input features defined in context_feature_spec and example_feature_spec. 
End of explanation input_creator() Explanation: Callling the input_creator returns the dictionaries of Keras-Tensors, that are used as the inputs when building the model: End of explanation class LookUpTablePreprocessor(tfr.keras.model.Preprocessor): def __init__(self, vocab_file, vocab_size, embedding_dim): self._vocab_file = vocab_file self._vocab_size = vocab_size self._embedding_dim = embedding_dim def __call__(self, context_inputs, example_inputs, mask): list_size = tf.shape(mask)[1] lookup = tf.keras.layers.StringLookup( max_tokens=self._vocab_size, vocabulary=self._vocab_file, mask_token=None) embedding = tf.keras.layers.Embedding( input_dim=self._vocab_size, output_dim=self._embedding_dim, embeddings_initializer=None, embeddings_constraint=None) # StringLookup and Embedding are shared over context and example features. context_features = { key: tf.reduce_mean(embedding(lookup(value)), axis=-2) for key, value in context_inputs.items() } example_features = { key: tf.reduce_mean(embedding(lookup(value)), axis=-2) for key, value in example_inputs.items() } return context_features, example_features _VOCAB_FILE = '/tmp/vocab.txt' _VOCAB_SIZE = len(pathlib.Path(_VOCAB_FILE).read_text().split()) preprocessor = LookUpTablePreprocessor(_VOCAB_FILE, _VOCAB_SIZE, 20) Explanation: Define preprocessor In the preprocessor, the input tokens are converted to a one-hot vector through the String Lookup preprocessing layer and then embeded as an embedding vector through the Embedding preprocessing layer. Finally, compute an embedding vector for the full sentence by the average of token embeddings. End of explanation tokenizer = tf_text.BertTokenizer(_VOCAB_FILE) example_tokens = tokenizer.tokenize("Hello TensorFlow!".lower()) print(example_tokens) print(tokenizer.detokenize(example_tokens)) Explanation: Note that the vocabulary uses the same tokenizer that BERT does. You could also use BertTokenizer to tokenize the raw sentences. End of explanation scorer = tfr.keras.model.DNNScorer( hidden_layer_dims=[64, 32, 16], output_units=1, activation=tf.nn.relu, use_batch_norm=True) Explanation: Define scorer This example uses a Deep Neural Network (DNN) univariate scorer, predefined in TensorFlow Ranking. End of explanation model_builder = tfr.keras.model.ModelBuilder( input_creator=input_creator, preprocessor=preprocessor, scorer=scorer, mask_feature_name="example_list_mask", name="antique_model", ) Explanation: Make model_builder In addition to input_creator, preprocessor, and scorer, specify the mask feature name to take the mask feature generated in datasets. End of explanation model = model_builder.build() tf.keras.utils.plot_model(model, expand_nested=True) Explanation: Check the model architecture, End of explanation dataset_hparams = tfr.keras.pipeline.DatasetHparams( train_input_pattern="/tmp/train.tfrecords", valid_input_pattern="/tmp/test.tfrecords", train_batch_size=32, valid_batch_size=32, list_size=50, dataset_reader=tf.data.TFRecordDataset) Explanation: Create a dataset builder A dataset_builder is designed to create datasets for training and validation and to define signatures for exporting trained model as tf.function. Specify data hyperparameters Define the hyperparameters to be used to build datasets in dataset_builder by creating a dataset_hparams object. Load training dataset at /tmp/train.tfrecords with tf.data.TFRecordDataset reader. In each batch, each feature tensor has a shape (batch_size, list_size, feature_sizes) with batch_size equal to 32 and list_size equal to 50. 
Validate with the test data at /tmp/test.tfrecords at the same batch_size and list_size. End of explanation dataset_builder = tfr.keras.pipeline.SimpleDatasetBuilder( context_feature_spec, example_feature_spec, mask_feature_name="example_list_mask", label_spec=label_spec, hparams=dataset_hparams) ds_train = dataset_builder.build_train_dataset() ds_train.element_spec Explanation: Make dataset_builder TensorFlow Ranking provides a pre-defined SimpleDatasetBuilder to generate datasets from ELWC using feature_specs. As a mask feature is used to determine valid examples in each padded list, must specify the mask_feature_name consistent with the mask_feature_name used in model_builder. End of explanation pipeline_hparams = tfr.keras.pipeline.PipelineHparams( model_dir="/tmp/ranking_model_dir", num_epochs=5, steps_per_epoch=1000, validation_steps=100, learning_rate=0.05, loss="approx_ndcg_loss", strategy="MirroredStrategy") Explanation: Create a ranking pipeline A ranking_pipeline is an optimized ranking model training package that implement distributed training, export model as tf.function, and integrate useful callbacks including tensorboard and restoring upon failures. Specify pipeline hyperparameters Specify the hyperparameters to be used to run the pipeline in ranking_pipeline by creating a pipeline_hparams object. Train the model with approx_ndcg_loss at learning rate equal to 0.05 for 5 epoch with 1000 steps in each epoch using MirroredStrategy. Evaluate the model on the validation dataset for 100 steps after each epoch. Save the trained model under /tmp/ranking_model_dir. End of explanation ranking_pipeline = tfr.keras.pipeline.SimplePipeline( model_builder, dataset_builder=dataset_builder, hparams=pipeline_hparams) Explanation: Define ranking_pipeline TensorFlow Ranking provides a pre-defined SimplePipeline to support model training with distributed strategies. End of explanation ranking_pipeline.train_and_validate(verbose=1) Explanation: Train and evaluate the model The train_and_validate function evaluates the trained model on the validation dataset after every epoch. End of explanation %load_ext tensorboard %tensorboard --logdir="/tmp/ranking_model_dir" --port 12345 Explanation: Launch TensorBoard End of explanation ds_test = dataset_builder.build_valid_dataset() # Get input features from the first batch of the test data for x, y in ds_test.take(1): break Explanation: <!-- <img class="tfo-display-only-on-site" src="https://user-images.githubusercontent.com/18746174/136845677-8cd41b8f-0a1a-4b38-b905-779966839e5f.png" /> --> Generate predictions and evaluate Get the test data. End of explanation loaded_model = tf.keras.models.load_model("/tmp/ranking_model_dir/export/latest_model") # Predict ranking scores scores = loaded_model.predict(x) min_score = tf.reduce_min(scores) scores = tf.where(tf.greater_equal(y, 0.), scores, min_score - 1e-5) # Sort the answers by scores sorted_answers = tfr.utils.sort_by_scores( scores, [tf.strings.reduce_join(x['document_tokens'], -1, separator=' ')])[0] Explanation: Load the saved model and run a prediction. End of explanation question = tf.strings.reduce_join( x['query_tokens'][4, :], -1, separator=' ').numpy() top_answers = sorted_answers[4, :5].numpy() print( f'Q: {question.decode()}\n' + '\n'.join([f'A{i+1}: {ans.decode()}' for i, ans in enumerate(top_answers)])) Explanation: Check the top 5 answers for question number 4. End of explanation
3,682
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Introduction In Carola Lilienthal's talk about architecture and technical debt at Herbstcampus 2017, I was reminded that I wanted to implement some of the examples of her book "Long-lived software systems" (available only in German) with jQAssistant. Especially the visualization of the dependencies between different business domains seems like a great starting point to try out some stuff Step3: The request returns all the corresponding subdomain for each type. Combined with the approach in Java Type Dependency Analysis, we can now visualize the dependencies between the various subdomains Step6: In the output, we see the dependencies between the various subdomains I've altered the visualization just a little bit so that we can see bidirectional dependencies as well. Those are green and red at the same time and appear more dominant than unidirectional dependencies. From the visualization above, we can see that the creator subdomain is used by Java source code from the subdomains comment, site, scheduling, mail and framework. The first four make perfectly sense because if you create one of those content types in the application, they are created by some person (they are "personalized" content). Whereas todo and files are user agnostic content types and thus don't have any dependencies on creator (that's a tricky situation in retrospect). What's could look like a mess are the dependencies from and to framework. In the pseudo subdomain framework are some base classes for all the data objects that get persistent in a data store. That explains the outbound dependency of creator. The inbound dependencies from framework to creator are needed for the central dependency injection configuration of the application. Where it get's interesting is the following visualization of the dependencies of the subdomain site Step8: A more sophisticated use case Even if there aren't any package naming conventions, you can identify some structure for example in class names or in your inheritance hierarchy that points you towards your subdomains in the code (if that isn't possible as well Step11: First, let's assume that we have some subdomains of our business domain we know about Step14: Like in the simple example, the graph looks now like this Step16: Bonus Dependencies between subdomains
Python Code: import py2neo import pandas as pd query= MATCH (:Jar:Archive)-[:CONTAINS]->(type:Type) RETURN type.fqn AS type, SPLIT(type.fqn, ".")[2] AS subdomain graph = py2neo.Graph() subdomaininfo = pd.DataFrame(graph.run(query).data()) subdomaininfo.head() Explanation: Introduction In Carola Lilienthal's talk about architecture and technical debt at Herbstcampus 2017, I was reminded that I wanted to implement some of the examples of her book "Long-lived software systems" (available only in German) with jQAssistant. Especially the visualization of the dependencies between different business domains seems like a great starting point to try out some stuff: The green connections between the modules show the downward dependencies to other modules and the red one the upward dependencies. This visualization can help you if you want to further modularize your system towards your business or subdomains or to identify unwanted dependencies between modules. At the same time, I started the Java Type Dependency Analysis and realized that there it is only a smart step to analyze dependencies between business domains. What's missing is the information which type belong to which business domain. We'll find out now! A simple case study Once, I've developed an party planning application called DropOver (that didn't go live, but that's another story). We wrote that web application in Java and paid especially attention to structuring the code along the business' subdomain "partying". This led to this package structure that resembles the main parts of the application: The application's main entry point is a site for a party including location, time, the site's creator and so on. A user can comment on a site as well as add some specific widgets like todo lists, scheduling or files upload and also gets notified by the mail feature. And there is a special package framework were all the cross-cutting concerns are placed like the dependency injection configuration or common, technical software elements. The main point to take away here is that thanks to the alignment of the package structure along the business' subdomain it's easy to determine the business domain for a software entity. It's the 3rd position in the Java package name: at.dropover.&lt;subdomain&gt;. This information item can easily be used to retrieve the information about the subdomain. Software from a graph's perspective I've built the web application, scanned the software artifact (a standard JAR file that we export for integration testing purposes) with jQAssistant command line tool (with jqassistant.sh scan -f dropover-classesjar in this case) and started the server (with jqassistant.sh server). Taking a look in the accompanied Neo4j Browser, we can see the graph that jQAssistant stored in Neo4j. E. g. we can display the relationship between the JAR file and the contained Java types: In the following, I set up the connection between my Python glue code and the Neo4j database. The query executed lists simply all Java types of the application (respectivley the JAR artifact). 
As mentioned above, we can also get the information about the subdomain derived from the package name: End of explanation import json query= MATCH (:Jar:Archive)-[:CONTAINS]-> (type:Type)-[:DEPENDS_ON]->(directDependency:Type) <-[:CONTAINS]-(:Jar:Archive) RETURN SPLIT(type.fqn, ".")[2] AS name, COLLECT(DISTINCT SPLIT(directDependency.fqn, ".")[2]) AS imports graph = py2neo.Graph() json_data = graph.run(query).data() with open ( "vis/flare-imports.json", mode='w') as json_file: json_file.write(json.dumps(json_data, indent=3)) json_data[:2] Explanation: The request returns all the corresponding subdomain for each type. Combined with the approach in Java Type Dependency Analysis, we can now visualize the dependencies between the various subdomains: End of explanation query= MATCH (type:Type) WHERE type.fqn STARTS WITH "at.dropover" WITH DISTINCT type MATCH (d1:Domain:Business)<-[:BELONGS_TO]-(type:Type), (type)-[:DEPENDS_ON*0..1]->(directDependency:Type), (directDependency)-[:BELONGS_TO]->(d2:Business:Domain) RETURN d1.name as name, COLLECT(DISTINCT d2.name) as imports json_data = graph.run(query).data() import json with open ( "vis/flare-imports.json", mode='w') as json_file: json_file.write(json.dumps(json_data, indent=3)) json_data[:2] query= MATCH (type:Type) WHERE type.fqn STARTS WITH "at.dropover" WITH DISTINCT type MATCH (d1:Domain:Business)<-[:BELONGS_TO]-(type:Type), (type)-[r:DEPENDS_ON*0..1]->(directDependency:Type), (directDependency)-[:BELONGS_TO]->(d2:Business:Domain) RETURN d1.name as name, d2.name, COUNT(r) as number json_data = graph.run(query).data() df = pd.DataFrame(json_data) data = df.to_dict(orient='split')['data'] with open ( "vis/chord_data.json", mode='w') as json_file: json_file.write(json.dumps(data, indent=3)) data[:5] Explanation: In the output, we see the dependencies between the various subdomains I've altered the visualization just a little bit so that we can see bidirectional dependencies as well. Those are green and red at the same time and appear more dominant than unidirectional dependencies. From the visualization above, we can see that the creator subdomain is used by Java source code from the subdomains comment, site, scheduling, mail and framework. The first four make perfectly sense because if you create one of those content types in the application, they are created by some person (they are "personalized" content). Whereas todo and files are user agnostic content types and thus don't have any dependencies on creator (that's a tricky situation in retrospect). What's could look like a mess are the dependencies from and to framework. In the pseudo subdomain framework are some base classes for all the data objects that get persistent in a data store. That explains the outbound dependency of creator. The inbound dependencies from framework to creator are needed for the central dependency injection configuration of the application. 
Where it get's interesting is the following visualization of the dependencies of the subdomain site: End of explanation import py2neo import pandas as pd query= MATCH (:Project)-[:CONTAINS]->(artifact:Artifact)-[:CONTAINS]->(type:Type) RETURN type.fqn as fqn, type.name as name graph = py2neo.Graph() subdomaininfo = pd.DataFrame(graph.run(query).data()) subdomaininfo.head() Explanation: A more sophisticated use case Even if there aren't any package naming conventions, you can identify some structure for example in class names or in your inheritance hierarchy that points you towards your subdomains in the code (if that isn't possible as well: I wrote my Master's thesis about mining cohesive concepts from source code via text mining, so you could use that as well :-D . And at a last resort, you have to do the mapping manually...). Let's see how this could work by mapping business subdomains to the class names of the Spring PetClinic project. We also have a list of all types in our application: End of explanation subdomains = ['Owner', 'Pet', 'Visit', 'Vet', 'Specialty', 'Clinic'] def determine_subdomain(name): for feature in subdomains: if feature in name: return feature return "Framework" subdomaininfo['subdomain'] = subdomaininfo['name'].apply(determine_subdomain) subdomaininfo.head() query= UNWIND {subdomaininfo} as info MERGE (subdomain:Domain:Business { name: info.subdomain }) WITH info, subdomain MATCH (n:Type { fqn: info.fqn}) MERGE (n)-[:BELONGS_TO]->(subdomain) RETURN n.fqn as type_fqn, subdomain.name as subdomain result = graph.run(query, subdomaininfo=subdomaininfo.to_dict(orient='records')).data() pd.DataFrame(result).head() query= MATCH (:Project)-[:CONTAINS]->(artifact:Artifact)-[:CONTAINS]->(type:Type) WHERE // we don't want thgo analyze test artifacts NOT artifact.type = "test-jar" WITH DISTINCT type, artifact MATCH (d1:Domain:Business)<-[:BELONGS_TO]-(type:Type), (type)-[r:DEPENDS_ON*0..1]->(directDependency:Type), (directDependency)-[:BELONGS_TO]->(d2:Business:Domain), (directDependency)<-[:CONTAINS]-(artifact) RETURN d1.name as name, d2.name, COUNT(r) as number json_data = graph.run(query).data() df = pd.DataFrame(json_data) df.to_dict(orient='split') Explanation: First, let's assume that we have some subdomains of our business domain we know about: End of explanation import pandas as pd pd.DataFrame(json_data).head() query= MATCH (:Project)-[:CONTAINS]->(artifact:Artifact)-[:CONTAINS]->(type:Type) WHERE // we don't want to analyze test artifacts NOT artifact.type = "test-jar" WITH DISTINCT type, artifact MATCH (d1:Domain:Business)<-[:BELONGS_TO]-(type:Type), (type)-[:DEPENDS_ON*0..1]->(directDependency:Type), (directDependency)-[:BELONGS_TO]->(d2:Business:Domain), (directDependency)<-[:CONTAINS]-(artifact) RETURN d1.name as name, COLLECT(DISTINCT d2.name) as imports json_data = graph.run(query).data() import json with open ( "vis/flare-imports.json", mode='w') as json_file: json_file.write(json.dumps(json_data, indent=3)) json_data query= MATCH (:Project)-[:CONTAINS]->(artifact:Artifact)-[:CONTAINS]->(type:Type) WHERE // we don't want to analyze test artifacts NOT artifact.type = "test-jar" WITH DISTINCT type, artifact MATCH (d1:Domain:Business)<-[:BELONGS_TO]-(type:Type), (type)-[r:DEPENDS_ON*0..1]->(directDependency:Type), (directDependency)-[:BELONGS_TO]->(d2:Business:Domain), (directDependency)<-[:CONTAINS]-(artifact) RETURN d1.name as name, d2.name, COUNT(r) as number json_data = graph.run(query).data() df = pd.DataFrame(json_data) data = 
df.to_dict(orient='split')['data'] with open ( "vis/chord_data.json", mode='w') as json_file: json_file.write(json.dumps(data, indent=3)) data[:5] Explanation: Like in the simple example, the graph looks now like this: End of explanation query= MATCH (t1:Type)-[:BELONGS_TO]->(s1:Subdomain), (t2:Type)-[:BELONGS_TO]->(s2:Subdomain), (t1)-[:DEPENDS_ON]->(t2) WHERE s1.name <> s2.name MERGE (s1)-[:DEPENDS_ON]->(s2) RETURN s1.name, s2.name pd.DataFrame(graph.run(query).data()).head() Explanation: Bonus Dependencies between subdomains End of explanation
3,683
Given the following text description, write Python code to implement the functionality described below step by step Description: NLP Information Extraction SparkContext and SparkSession Step1: Simple NLP pipeline architecture Reference Step2: Create a spark data frame to store raw text Use the nltk.sent_tokenize() function to split text into sentences. Step3: Tokenization and POS tagging Step4: Transform data Step5: Chunking Chunking is the process of segmenting and labeling multitokens. The following example shows how to do a noun phrase chunking on the tagged words data frame from the previous step. First we define a udf function which chunks noun phrases from a list of pos-tagged words. Step6: Transform data
Python Code: from pyspark import SparkContext sc = SparkContext(master = 'local') from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName("Python Spark SQL basic example") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() Explanation: NLP Information Extraction SparkContext and SparkSession End of explanation import nltk from nltk.corpus import gutenberg milton_paradise = gutenberg.raw('milton-paradise.txt') Explanation: Simple NLP pipeline architecture Reference: Bird, Steven, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. " O'Reilly Media, Inc.", 2009. Example data The raw text is from the gutenberg corpus from the nltk package. The fileid is milton-paradise.txt. Get the data Raw text End of explanation import pandas as pd pdf = pd.DataFrame({ 'sentences': nltk.sent_tokenize(milton_paradise) }) df = spark.createDataFrame(pdf) df.show(n=5) Explanation: Create a spark data frame to store raw text Use the nltk.sent_tokenize() function to split text into sentences. End of explanation from pyspark.sql.functions import udf from pyspark.sql.types import * ## define udf function def sent_to_tag_words(sent): wordlist = nltk.word_tokenize(sent) tagged_words = nltk.pos_tag(wordlist) return(tagged_words) ## define schema for returned result from the udf function ## the returned result is a list of tuples. schema = ArrayType(StructType([ StructField('f1', StringType()), StructField('f2', StringType()) ])) ## the udf function sent_to_tag_words_udf = udf(sent_to_tag_words, schema) Explanation: Tokenization and POS tagging End of explanation df_tagged_words = df.select(sent_to_tag_words_udf(df.sentences).alias('tagged_words')) df_tagged_words.show(5) Explanation: Transform data End of explanation import nltk from pyspark.sql.functions import udf from pyspark.sql.types import * # define a udf function to chunk noun phrases from pos-tagged words grammar = "NP: {<DT>?<JJ>*<NN>}" chunk_parser = nltk.RegexpParser(grammar) chunk_parser_udf = udf(lambda x: str(chunk_parser.parse(x)), StringType()) Explanation: Chunking Chunking is the process of segmenting and labeling multitokens. The following example shows how to do a noun phrase chunking on the tagged words data frame from the previous step. First we define a udf function which chunks noun phrases from a list of pos-tagged words. End of explanation df_NP_chunks = df_tagged_words.select(chunk_parser_udf(df_tagged_words.tagged_words).alias('NP_chunk')) df_NP_chunks.show(2, truncate=False) Explanation: Transform data End of explanation
3,684
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step22: Encoding Implement encoding_layer() to create a Encoder RNN layer Step26: Decoding - Training Create a training decoding layer Step30: Decoding - Inference Create inference decoder Step34: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note Step37: Build the Neural Network Apply the functions you implemented above to Step38: Neural Network Training Hyperparameters Tune the following parameters Step40: Build the Graph Build the graph using the neural network you implemented. Step44: Batch and pad the source and target sequences Step47: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step49: Save Parameters Save the batch_size and save_path parameters for inference. Step51: Checkpoint Step54: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. Step56: Translate This will translate translate_sentence from English to French.
Python Code: # sequence_to_sequence_implementation course assignment was used a lot to finish this hw # A live help person highly suggested I worked through it again. --- 10000% correct. this was vital ### AKA the UDACITY seq2seq assignment, /deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (10, 110) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function # I couldn't remember what eos stood for (too many acronyms to remember) so I googled it #https://www.tensorflow.org/tutorials/seq2seq # end-of-senence (eos) # asked a live support about this. He / she directed me to https://github.com/nicolas-ivanov/tf_seq2seq_chatbot/issues/15 # # Ok, setup the stuff that is known to be needed first source_id_text = [] target_id_text = [] end_of_seq = target_vocab_to_int['<EOS>'] # had "eos" at first and it gave an error. Changing to EOS. ## Update: doesn't fix, / issue is something else. #look at data strcuture #print("================") #print(source_text) #print("================") #source_id_text = enumerate(source_text.split('\n')) #source_id_text = for tacos in (source_text.split('\n')) #source_id_text = source_text.split('\n') #print(source_id_text) #print(np.) print("================") source_id_textsen = source_text.split('\n') target_id_textsen = target_text.split('\n') #for sentence in (source_id_textsen): # for word in sentence.split(): # I think this is OK. 
default *should be spaces* #print("test:"+word) #source_id_text = word #source_id_text = source_vocab_to_int[word] # source_id_text.append([source_vocab_to_int[word]]) #print(len(source_id_text)) #for sentence in (target_id_textsen): # for word in sentence.split(): # #pass # #target_id_text = target_vocab_to_int[word] # target_id_text.append(target_vocab_to_int[word]) # target_id_text.append(end_of_seq) #### WHY AM I STILL GETTING 60 something and an error saying it should just be four values in # source_id_text #How did I just break this.... It jus t worked # for sentence in (source_id_textsen): # source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]] # for sentence in (target_id_textsen): # target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [end_of_seq]] # Live help said the following is the same. Added here for future reference if a similar problem is encountered after the course. source_id_text = [[source_vocab_to_int[word] for word in seq.split()] for seq in source_text.split('\n')] target_id_text = [[target_vocab_to_int[word] for word in seq.split()] + [end_of_seq] for seq in target_text.split('\n')] return source_id_text, target_id_text # do an enummeration for print("================") return (source_id_text, target_id_text) #None, None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. 
:return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) #https://www.tensorflow.org/api_docs/python/tf/placeholder #float32 issue at end of project, chaning things to int32 where possible??? Input = tf.placeholder(dtype=tf.int32,shape=[None,None],name="input") Target = tf.placeholder(dtype=tf.int32,shape=[None,None],name="target") lr = tf.placeholder(dtype=tf.float32,name="lr") taretlength = tf.placeholder(dtype=tf.int32,name="target_sequence_length") kp = tf.placeholder(dtype=tf.float32,name="keep_prob") #maxseq = tf.placeholder(dtype.float32,name='max_target_len') maxseq = tf.reduce_max(taretlength,name='max_target_len') sourceseqlen = tf.placeholder(dtype=tf.int32,shape=[None],name='source_sequence_length') # TODO: Implement Function return Input, Target, lr, kp, taretlength, maxseq, sourceseqlen DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation ### From the UDACITYclass assignment: ########################################## # Process the input we'll feed to the decoder #def process_decoder_input(target_data, vocab_to_int, batch_size): # '''Remove the last word id from each batch and concat the <GO> to the begining of each batch''' # ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) # dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)## #return dec_input# ###udacity/hw/deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb ##################################### def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # done: Implement Function # this is to be sliced just like one would do with numpy # to do that, https://www.tensorflow.org/api_docs/python/tf/strided_slice is used. 
# ref to verify this is the rigth func: https://stackoverflow.com/questions/41380126/what-does-tf-strided-slice-do #strided_slice( #input_, # begin, #end, #strides=None, #begin_mask=0, #end_mask=0, #ellipsis_mask=0, #new_axis_mask=0, #shrink_axis_mask=0, #var=None, #name=None # #) #ret = tf.strided_slice(input_=target_data,begin=[0],end=[batch_size],) # FROM UDACITY seq2seq assignment #ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) #dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1) #return dec_input ret = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) #ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), target_data], 1) ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ret], 1) return ret #None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) # done: Implement Function ################## ## ## This is simlar to 2.1 Encoder of the UDACITY seq2seq hw #def encoding_layer(input_data, rnn_size, num_layers, source_sequence_length, source_vocab_size, encoding_embedding_size): # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state # ## ########## # the respective documents for this cell are: #https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper #https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn #rrnoutput= #rrnstate= #embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, rnn_size, encoding_embedding_size) embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) #tf.contrib.layers.embed_sequence() def make_cell(rnn_size): #https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) # I had AN INSANE AMMOUNT OF ERRORS BECAUSE I ACCIDENTALLY EDINTED THIS LINE TO HAVE PROB INSTEAD OF THE DROPOUT. >.> no good error codes enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell,output_keep_prob=keep_prob) # Not sure which one. Probably not input. 
EIther output or state.. #input_keep_prob: unit Tensor or float between 0 and 1, input keep probability; if it is constant and 1, no input dropout will be added. #output_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. #state_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. State dropout is performed on the output states of the cell. return enc_cell enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state #return None, None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation ###### # SUPEEEER TRICKY UDACITY! # I spent half a day trying to figure out why I had cryptic errors - turns out only # Tensorflow 1.1 can run this. # not 1.0 . Not 1.2. # wasting my time near the submission deadline even though my code is OK. # Used the UDACITY sequence_to_sequence_implementation as reference for this # did find operation (ctrl+f) fro "rainingHelper" # Found decoding_layer(...) function which seems to address this cell's requirements def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id # done: Implement Function #from seq 2 seq: # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) #encoder_state ... ameError: name 'enc_state' is not defined # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder #NameError: name 'max_target_sequence_length' is not defined ... 
same deal training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) #ValueError: too many values to unpack (expected 2) #training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation ######################### # # Searched course tutorial Seq2seq again, same functtion as last code cell # # See below: with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_letter_to_int['<EOS>']) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return training_decoder_output, inference_decoder_output def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id # done: Implement Function #### BASED STRONGLY ON CLASS COURSEWORK, THE SEQ2SEQ material #https://www.tensorflow.org/api_docs/python/tf/tile #NameError: name 'target_letter_to_int' is not defined #start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') #start_tokens = tf.tile(tf.constant(['<GO>'], dtype=tf.int32), [batch_size], name='start_tokens') start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. 
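    # Note the contrast with decoding_layer_train above: TrainingHelper feeds the
    # ground-truth target embeddings at every step, while GreedyEmbeddingHelper feeds
    # back the argmax of the previous step's own output (looked up in dec_embeddings),
    # so no target sequence is required at inference time.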
#https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper #inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,target_letter_to_int['<EOS>']) #NameError: name 'target_letter_to_int' is not defined inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,end_of_sequence_id) # Basic decoder #enc_state # encoder_state changed naes #https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,inference_helper,encoder_state,output_layer) # Perform dynamic decoding using the decoder #https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length) return inference_decoder_output#None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation # ## # Again, as suggested by a Uedacity TA (live support), SEQ 2 SEQ # Largely based on the decoding_layer in the udadcity seq2seq tutorial/example material. # See here: def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size, target_sequence_length, max_target_sequence_length, enc_state, dec_input): # 1. Decoder Embedding target_vocab_size = len(target_letter_to_int) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. 
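        # start_tokens above is just the single <GO> id tiled batch_size times, so every
        # sequence in the batch begins decoding from the same start symbol.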
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_letter_to_int['<EOS>']) # Basic decoder inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, enc_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return training_decoder_output, inference_decoder_output def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # TODO: Implement Function # 1. Decoder Embedding #NameError: name 'target_letter_to_int' is not defined #target_vocab_size = len(target_letter_to_int) # already param dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # 2. Construct the decoder cell def make_cell(rnn_size): dec_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return dec_cell dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) # 3. Dense layer to translate the decoder's output at each time # step into a choice from the target vocabulary output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) # 4. Set up a training decoder and an inference decoder # Training Decoder with tf.variable_scope("decode"): # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder #NameError: name 'enc_state' is not defined training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) # 5. Inference Decoder # Reuses the same parameters trained by the training process with tf.variable_scope("decode", reuse=True): #NameError: name 'target_letter_to_int' is not defined #target_vocab_to_int is the closest equivalent start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') # Helper for the inference process. 
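        # Decoding stops for a sequence once the <EOS> id is emitted, and the
        # maximum_iterations argument passed to dynamic_decode below caps the rollout
        # at max_target_sequence_length as a safety net.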
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, target_vocab_to_int['<EOS>']) # Basic decoder #NameError: name 'enc_state' is not defined inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return training_decoder_output, inference_decoder_output #return None, None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) # done: Implement Function #ENcode RNN_output, RNN_state= encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) #process Preprocessedtargetdata=process_decoder_input(target_data, target_vocab_to_int, batch_size) #decode reta,retb= decoding_layer(Preprocessedtargetdata, RNN_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return reta,retb#None DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. 
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation # PHyper parameters are expected to be of similar range to those of the Seq 2 seq lesson # Number of Epochs epochs = 16 #60 #None # Batch Size batch_size = 256 #None # RNN Size rnn_size = 50#None # Number of Layers num_layers = 2#None # Embedding Size encoding_embedding_size = 256 #15None decoding_embedding_size = 256 #None # Learning Rate learning_rate = 0.01# None # Dropout Keep Probability keep_probability = 0.75 # reasoning: should be more than 50/50.. but it should still be able to drop values so it can search #None display_step = 32#None Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
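In that cell, tf.sequence_mask turns each target length into a 0/1 weight vector -- a target of length 3 padded to length 5 gets [1, 1, 1, 0, 0] -- so sequence_loss averages cross-entropy only over real tokens and ignores <PAD> positions, and tf.clip_by_value then caps each gradient to [-1, 1] before the Adam update is applied.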
End of explanation DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths Explanation: Batch and pad the source and target sequences End of explanation DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed 
data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation

DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)

Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation

DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

Explanation: Checkpoint
End of explanation

def sentence_to_seq(sentence, vocab_to_int):
    
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    
    # Lowercase the sentence, split on whitespace, and map each word to its id,
    # falling back to the <UNK> id for words that are not in the vocabulary.
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]

DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)

Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation

#translate_sentence = 'he saw a old yellow truck .'
#Why does this have a typo in it? It should be "He saw AN old, yellow truck."
translate_sentence = "There once was a man from Nantucket."

DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)
    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
    translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
                                         target_sequence_length: [len(translate_sentence)*2]*batch_size,
                                         source_sequence_length: [len(translate_sentence)]*batch_size,
                                         keep_prob: 1.0})[0]
print('Input')
print('  Word Ids:      {}'.format([i for i in translate_sentence]))
print('  English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print('  Word Ids:      {}'.format([i for i in translate_logits]))
print('  French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))

Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
3,685
Given the following text description, write Python code to implement the functionality described below step by step Description: FAQ This document will address frequently asked questions not addressed in other pages of the documentation. How do I install cobrapy? Please see the INSTALL.rst file. How do I cite cobrapy? Please cite the 2013 publication Step1: The Model.repair function will rebuild the necessary indexes Step2: How do I delete a gene? That depends on what precisely you mean by delete a gene. If you want to simulate the model with a gene knockout, use the cobra.maniupulation.delete_model_genes function. The effects of this function are reversed by cobra.manipulation.undelete_model_genes. Step3: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene. How do I change the reversibility of a Reaction? Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds. Step4: Trying to set it directly will result in an error or warning Step5: The way to change the reversibility is to change the bounds to make the reaction irreversible. Step6: How do I generate an LP file from a COBRA model? While the cobrapy does not include python code to support this feature directly, many of the bundled solvers have this capability. Create the problem with one of these solvers, and use its appropriate function. Please note that unlike the LP file format, the MPS file format does not specify objective direction and is always a minimzation. Some (but not all) solvers will rewrite the maximization as a minimzation.
Python Code: from __future__ import print_function import cobra.test model = cobra.test.create_test_model() for metabolite in model.metabolites: metabolite.id = "test_" + metabolite.id try: model.metabolites.get_by_id(model.metabolites[0].id) except KeyError as e: print(repr(e)) Explanation: FAQ This document will address frequently asked questions not addressed in other pages of the documentation. How do I install cobrapy? Please see the INSTALL.rst file. How do I cite cobrapy? Please cite the 2013 publication: 10.1186/1752-0509-7-74 How do I rename reactions or metabolites? TL;DR Use Model.repair afterwards When renaming metabolites or reactions, there are issues because cobra indexes based off of ID's, which can cause errors. For example: End of explanation model.repair() model.metabolites.get_by_id(model.metabolites[0].id) Explanation: The Model.repair function will rebuild the necessary indexes End of explanation model = cobra.test.create_test_model() PGI = model.reactions.get_by_id("PGI") print("bounds before knockout:", (PGI.lower_bound, PGI.upper_bound)) cobra.manipulation.delete_model_genes(model, ["STM4221"]) print("bounds after knockouts", (PGI.lower_bound, PGI.upper_bound)) Explanation: How do I delete a gene? That depends on what precisely you mean by delete a gene. If you want to simulate the model with a gene knockout, use the cobra.maniupulation.delete_model_genes function. The effects of this function are reversed by cobra.manipulation.undelete_model_genes. End of explanation model = cobra.test.create_test_model() model.reactions.get_by_id("PGI").reversibility Explanation: If you want to actually remove all traces of a gene from a model, this is more difficult because this will require changing all the gene_reaction_rule strings for reactions involving the gene. How do I change the reversibility of a Reaction? Reaction.reversibility is a property in cobra which is computed when it is requested from the lower and upper bounds. End of explanation try: model.reactions.get_by_id("PGI").reversibility = False except Exception as e: print(repr(e)) Explanation: Trying to set it directly will result in an error or warning: End of explanation model.reactions.get_by_id("PGI").lower_bound = 10 model.reactions.get_by_id("PGI").reversibility Explanation: The way to change the reversibility is to change the bounds to make the reaction irreversible. End of explanation model = cobra.test.create_test_model() # glpk through cglpk glp = cobra.solvers.cglpk.create_problem(model) glp.write("test.lp") glp.write("test.mps") # will not rewrite objective # gurobi gurobi_problem = cobra.solvers.gurobi_solver.create_problem(model) gurobi_problem.write("test.lp") gurobi_problem.write("test.mps") # rewrites objective # cplex cplex_problem = cobra.solvers.cplex_solver.create_problem(model) cplex_problem.write("test.lp") cplex_problem.write("test.mps") # rewrites objective Explanation: How do I generate an LP file from a COBRA model? While the cobrapy does not include python code to support this feature directly, many of the bundled solvers have this capability. Create the problem with one of these solvers, and use its appropriate function. Please note that unlike the LP file format, the MPS file format does not specify objective direction and is always a minimzation. Some (but not all) solvers will rewrite the maximization as a minimzation. End of explanation
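# Added sketch (not part of the original FAQ): the same Model.repair step shown for
# metabolites applies after renaming reactions, since reactions are indexed by ID too.
model = cobra.test.create_test_model()
model.reactions[0].id = "renamed_" + model.reactions[0].id
model.repair()
model.reactions.get_by_id(model.reactions[0].id)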
3,686
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>datetime library</h1> <li>Time is linear <li>progresses as a straightline trajectory from the big bag <li>to now and into the future <li>日期库官方说明 https Step1: <li>How much time has passed? Step2: <h4>Obviously that's not going to work. </h4> <h4>We can't do date operations on strings</h4> <h4>Let's see what happens with datetime</h4> Step3: <li>datetime objects understand time <h3>The datetime library contains several useful types</h3> <li>date Step4: <h3>For a cleaner output</h3> Step5: <h3>datetime.datetime</h3> Step6: <h4>datetime objects can check validity</h4> <li>A ValueError exception is raised if the object is invalid</li> Step7: <h3>datetime.timedelta</h3> <h4>Used to store the duration between two points in time</h4> Step8: <h3>datetime.time</h3> Step9: <h4>You can do arithmetic operations on datetime objects</h4> <li>You can use timedelta objects to calculate new dates or times from a given date Step10: <li>But you can't use timedelta on time objects. If you do, you'll get a TypeError exception Step11: <h2>datetime and strings</h2> <h4>datetime.strptime</h4> <li>datetime.strptime() Step12: <h4>datetime.strftime</h4> <li>The strftime function flips the strptime function. It converts a datetime object to a string <li>with the specified format
Python Code: d1 = "10/24/2017" d2 = "11/24/2016" max(d1,d2) Explanation: <h1>datetime library</h1> <li>Time is linear <li>progresses as a straightline trajectory from the big bag <li>to now and into the future <li>日期库官方说明 https://docs.python.org/3.5/library/datetime.html <h3>Reasoning about time is important in data analysis</h3> <li>Analyzing financial timeseries data <li>Looking at commuter transit passenger flows by time of day <li>Understanding web traffic by time of day <li>Examining seaonality in department store purchases <h3>The datetime library</h3> <li>understands the relationship between different points of time <li>understands how to do operations on time <h3>Example:</h3> <li>Which is greater? "10/24/2017" or "11/24/2016" End of explanation d1 - d2 Explanation: <li>How much time has passed? End of explanation import datetime d1 = datetime.date(2016,11,24) d2 = datetime.date(2017,10,24) max(d1,d2) print(d2 - d1) Explanation: <h4>Obviously that's not going to work. </h4> <h4>We can't do date operations on strings</h4> <h4>Let's see what happens with datetime</h4> End of explanation import datetime century_start = datetime.date(2000,1,1) today = datetime.date.today() print(century_start,today) print("We are",today-century_start,"days into this century") print(type(century_start)) print(type(today)) Explanation: <li>datetime objects understand time <h3>The datetime library contains several useful types</h3> <li>date: stores the date (month,day,year) <li>time: stores the time (hours,minutes,seconds) <li>datetime: stores the date as well as the time (month,day,year,hours,minutes,seconds) <li>timedelta: duration between two datetime or date objects <h3>datetime.date</h3> End of explanation print("We are",(today-century_start).days,"days into this century") Explanation: <h3>For a cleaner output</h3> End of explanation century_start = datetime.datetime(2000,1,1,0,0,0) time_now = datetime.datetime.now() print(century_start,time_now) print("we are",time_now - century_start,"days, hour, minutes and seconds into this century") Explanation: <h3>datetime.datetime</h3> End of explanation some_date=datetime.date(2015,2,29) #some_date =datetime.date(2016,2,29) #some_time=datetime.datetime(2015,2,28,23,60,0) Explanation: <h4>datetime objects can check validity</h4> <li>A ValueError exception is raised if the object is invalid</li> End of explanation century_start = datetime.datetime(2050,1,1,0,0,0) time_now = datetime.datetime.now() time_since_century_start = time_now - century_start print("days since century start",time_since_century_start.days) print("seconds since century start",time_since_century_start.total_seconds()) print("minutes since century start",time_since_century_start.total_seconds()/60) print("hours since century start",time_since_century_start.total_seconds()/60/60) Explanation: <h3>datetime.timedelta</h3> <h4>Used to store the duration between two points in time</h4> End of explanation date_and_time_now = datetime.datetime.now() time_now = date_and_time_now.time() print(time_now) Explanation: <h3>datetime.time</h3> End of explanation today=datetime.date.today() five_days_later=today+datetime.timedelta(days=5) print(five_days_later) now=datetime.datetime.today() five_minutes_and_five_seconds_later = now + datetime.timedelta(minutes=5,seconds=5) print(five_minutes_and_five_seconds_later) now=datetime.datetime.today() five_minutes_and_five_seconds_earlier = now+datetime.timedelta(minutes=-5,seconds=-5) print(five_minutes_and_five_seconds_earlier) Explanation: <h4>You can do 
arithmetic operations on datetime objects</h4> <li>You can use timedelta objects to calculate new dates or times from a given date End of explanation time_now=datetime.datetime.now().time() #Returns the time component (drops the day) print(time_now) thirty_seconds=datetime.timedelta(seconds=30) time_later=time_now+thirty_seconds #Bug or feature? #But this is Python #And we can always get around something by writing a new function! #Let's write a small function to get around this problem def add_to_time(time_object,time_delta): import datetime temp_datetime_object = datetime.datetime(500,1,1,time_object.hour,time_object.minute,time_object.second) print(temp_datetime_object) return (temp_datetime_object+time_delta).time() #And test it time_now=datetime.datetime.now().time() thirty_seconds=datetime.timedelta(seconds=30) print(time_now,add_to_time(time_now,thirty_seconds)) Explanation: <li>But you can't use timedelta on time objects. If you do, you'll get a TypeError exception End of explanation date='01-Apr-03' date_object=datetime.datetime.strptime(date,'%d-%b-%y') print(date_object) #Unfortunately, there is no similar thing for time delta #So we have to be creative! bus_travel_time='2:15:30' hours,minutes,seconds=bus_travel_time.split(':') x=datetime.timedelta(hours=int(hours),minutes=int(minutes),seconds=int(seconds)) print(x) #Or write a function that will do this for a particular format def get_timedelta(time_string): hours,minutes,seconds = time_string.split(':') import datetime return datetime.timedelta(hours=int(hours),minutes=int(minutes),seconds=int(seconds)) Explanation: <h2>datetime and strings</h2> <h4>datetime.strptime</h4> <li>datetime.strptime(): grabs time from a string and creates a date or datetime or time object <li>The programmer needs to tell the function what format the string is using <li> See http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html for how to specify the format End of explanation now = datetime.datetime.now() string_now = datetime.datetime.strftime(now,'%m/%d/%y %H:%M:%S') print(now,string_now) print(str(now)) #Or you can use the default conversion Explanation: <h4>datetime.strftime</h4> <li>The strftime function flips the strptime function. It converts a datetime object to a string <li>with the specified format End of explanation
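# Added illustration (format codes are the standard strftime/strptime directives):
import datetime
d = datetime.datetime(2017, 10, 24, 9, 30, 0)
print(datetime.datetime.strftime(d, '%A %B %d, %Y %H:%M'))  # e.g. Tuesday October 24, 2017 09:30
print(datetime.datetime.strptime('24-Oct-17 09:30', '%d-%b-%y %H:%M'))  # 2017-10-24 09:30:00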
3,687
Given the following text description, write Python code to implement the functionality described below step by step Description: SmartSheet Sheet To BigQuery Move sheet data into a BigQuery table. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project Step2: 3. Enter SmartSheet Sheet To BigQuery Recipe Parameters Specify SmartSheet token. Locate the ID of a sheet by viewing its properties. Provide a BigQuery dataset ( must exist ) and table to write the data into. StarThinker will automatically map the correct schema. Modify the values below for your use case, can be done multiple times, then click play. Step3: 4. Execute SmartSheet Sheet To BigQuery This does NOT need to be modified unless you are changing the recipe, click play.
Python Code: !pip install git+https://github.com/google/starthinker Explanation: SmartSheet Sheet To BigQuery Move sheet data into a BigQuery table. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation FIELDS = { 'auth_read':'user', # Credentials used for reading data. 'auth_write':'service', # Credentials used for writing data. 'token':'', # Retrieve from SmartSheet account settings. 'sheet':'', # Retrieve from sheet properties. 'dataset':'', # Existing BigQuery dataset. 'table':'', # Table to create from this report. 'schema':'', # Schema provided in JSON list format or leave empty to auto detect. 'link':True, # Add a link to each row as the first column. } print("Parameters Set To: %s" % FIELDS) Explanation: 3. Enter SmartSheet Sheet To BigQuery Recipe Parameters Specify SmartSheet token. Locate the ID of a sheet by viewing its properties. Provide a BigQuery dataset ( must exist ) and table to write the data into. StarThinker will automatically map the correct schema. Modify the values below for your use case, can be done multiple times, then click play. 
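For example, a hypothetical schema value could be [ { "name": "row_id", "type": "INTEGER" }, { "name": "status", "type": "STRING" } ]; leaving schema empty lets the recipe auto detect it from the sheet.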
End of explanation from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'smartsheet':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}}, 'token':{'field':{'name':'token','kind':'string','order':2,'default':'','description':'Retrieve from SmartSheet account settings.'}}, 'sheet':{'field':{'name':'sheet','kind':'string','order':3,'description':'Retrieve from sheet properties.'}}, 'link':{'field':{'name':'link','kind':'boolean','order':7,'default':True,'description':'Add a link to each row as the first column.'}}, 'out':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'dataset','kind':'string','order':4,'default':'','description':'Existing BigQuery dataset.'}}, 'table':{'field':{'name':'table','kind':'string','order':5,'default':'','description':'Table to create from this report.'}}, 'schema':{'field':{'name':'schema','kind':'json','order':6,'description':'Schema provided in JSON list format or leave empty to auto detect.'}} } } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) Explanation: 4. Execute SmartSheet Sheet To BigQuery This does NOT need to be modified unless you are changing the recipe, click play. End of explanation
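# Note (added): json_set_fields walks the TASKS structure above and substitutes each
# {'field': {...}} placeholder with the matching entry from FIELDS before execute()
# runs the recipe, which is why only FIELDS needs editing between runs.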
3,688
Given the following text description, write Python code to implement the functionality described below step by step Description: <link rel="stylesheet" href="reveal.js/css/theme/sky.css" id="theme"> Step1: <h1>Machine Learning</h1> <h5>and</h5> <h1>Probabilistic Programming</h1> <tiny>Colin Carroll, Kensho</tiny> Raw Data Step2: Model Step3: What is going on here? Two models Step4: Transform raw data to features Train a model Measure how accurate you expect the model to be Turning raw data into features Surprisingly hard not to peek at the future. Step5: Nonlinear models Your features may depend nonlinearly on the raw data! Step6: Turning features into a model When using linear regression we assume that $$ \mathbf{score} = w_1 \cdot \mathbf{avg_score} + w_2 \cdot \mathbf{fg_pct} + \cdots + w_m \cdot \mathbf{win_pct} $$ Try to find $(w_1, w_2, \ldots, w_m)$. More concisely Try to find a $\mathbf{w}$ satisfying $X\mathbf{w} = y$. Step7: What can we say about linear regression? Linear regression minimizes the sum of squared errors $$\sum (\mathbf{x}_j \cdot \mathbf{w} - y_j)^2$$ Linear regression finds the most likely weights Given our data, $(X, \mathbf{y})$, Bayes Rule says that $$ P(\mathbf{w} | X, \mathbf{y}) = \frac{P(X, \mathbf{y} | \mathbf{w}) p(\mathbf{w})}{P(X, \mathbf{y})} $$ Linear regression is geometrically pleasant (and syntactically terrifying) $X\mathbf{w}$ is the nearest point to $\mathbf{y}$ in the $m$-dimensional subspace of $\mathbb{R}^n$ spanned by the columns of $X$. More guarantees* Step8: How wrong will I be? Step9: Cross validation, testing, overfitting... Step10: Logistic regression, briefly Step11: Instead of $$ \mathbf{y} = X\mathbf{w}, $$ $$ \mathbf{y} = \sigma \left(X\mathbf{w}\right) $$ What is $\sigma$? $$ \sigma(x) = \frac{1}{1 + e^{-x}} $$ Step12: Challenger Disaster <img src='oring.jpg'></img> Challenger dataset
Python Code: import matplotlib %matplotlib inline from bokeh.plotting import figure, show, ColumnDataSource from bokeh.models import HoverTool from bokeh.io import output_notebook, save from clean_data import (get_models, predict, explain_model, LATEST_DATA as results_2016, get_df, get_features, regression_target, get_training_data, predict_winner, predict_scores) import numpy as np import numpy.random as nr from scipy.special import logit, expit import pandas as pd from sklearn.pipeline import make_pipeline from sklearn.linear_model import LinearRegression, LogisticRegression from sklearn.preprocessing import PolynomialFeatures pd.options.display.float_format = '{:,.1f}'.format output_notebook() WIDTH = 800 HEIGHT = 768 reg, clf = get_models(cv=False) reg_cv, clf_cv = get_models(cv=True) def plot_scores(reg, n=1000): sub = get_training_data(results_2016).sample(n=n) df_in = get_features(sub) preds = reg.predict(df_in) sub['predicted'] = preds[:, 0] sub['predicted_first'] = preds[:, 0].round().astype(int) sub['predicted_second'] = preds[:, 1].round().astype(int) source = ColumnDataSource(data=sub) dot = figure(title="", tools="", toolbar_location=None, width=WIDTH, x_axis_label='Actual Score', y_axis_label='Predicted Score') dot.circle(x='score_first', y='predicted', size=15, fill_alpha=0.3, source=source) min_coord = max(sub.score_first.min(), sub.predicted.min()) max_coord = min(sub.score_first.max(), sub.predicted.max()) dot.segment(min_coord, min_coord, max_coord, max_coord, line_width=5, color='black', line_cap="round") dot.xaxis.major_label_text_font_size = "20pt" dot.yaxis.major_label_text_font_size = "20pt" dot.xaxis.axis_label_text_font_size="20pt" dot.yaxis.axis_label_text_font_size="20pt" save(dot, 'scores.html') show(dot) def plot_residuals(reg): sub = get_training_data(results_2016) df_in = get_features(sub) preds = reg.predict(df_in) sub['error'] = preds[:, 0] - sub['score_first'] hist, edges = np.histogram(sub.error, density=True, bins=50) bars = figure(title="", tools="", toolbar_location=None, width=WIDTH, x_axis_label='Error', y_axis_label='') bars.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="#036564", line_color="#033649") bars.xaxis.major_label_text_font_size = "20pt" bars.xaxis.axis_label_text_font_size="20pt" bars.yaxis.visible = False save(bars, 'residuals.html') show(bars) def plot_logit(): x = np.linspace(1e-3, 1-1e-3, 100) y = logit(x) p1 = figure(title="", tools="") p1.line(x, y) p1.segment(x, 0, x, y) save(p1, 'logit.html') show(p1) def plot_sigmoid(): x = np.linspace(-4, 4, 100) y = expit(x) p1 = figure(title="", tools="") p1.line(x, y) p1.segment(0, y, x, y) save(p1, 'sigmoid.html') show(p1) def plot_overfitting(line=False): nr.seed(42) x = nr.random(30) y = 2 * x + 0.1 * nr.randn(x.shape[0]) dot = figure(title="", tools="", toolbar_location=None, width=WIDTH, x_axis_label='x', y_axis_label='y', x_range=(0, 1), y_range=(-0.5, 2.5)) dot.circle(x=x, y=y, size=15, fill_alpha=0.3) if line: pipe = make_pipeline(PolynomialFeatures(degree=25), LinearRegression(fit_intercept=False)) model = pipe.fit(np.atleast_2d(x).T, y) new_x = np.linspace(0, 1, 1000) preds = model.predict(np.atleast_2d(new_x).T) dot.line(x=new_x, y=preds, color='red') save(dot, 'overfitting.html') show(dot) def plot_orings(line=False): df = pd.read_csv('orings.csv') df.Temperature += nr.randn(df.Temperature.size) dot = figure(title="O-Ring Failures", tools="", toolbar_location=None, width=WIDTH, x_axis_label='Temperature', y_axis_label='O-Ring Problems') 
dot.circle(x='Temperature', y='Failure', size=15, fill_alpha=0.3, source=df) save(dot, 'orings.html') show(dot) Explanation: <link rel="stylesheet" href="reveal.js/css/theme/sky.css" id="theme"> End of explanation get_df(2016).head() Explanation: <h1>Machine Learning</h1> <h5>and</h5> <h1>Probabilistic Programming</h1> <tiny>Colin Carroll, Kensho</tiny> Raw Data End of explanation predict(reg, clf, 'North Carolina', 'Connecticut') Explanation: Model End of explanation predict_winner(clf, 'North Carolina', 'Connecticut') predict_scores(reg, 'North Carolina', 'Connecticut') Explanation: What is going on here? Two models: Classification Regression End of explanation get_features(get_training_data(results_2016)).head() Explanation: Transform raw data to features Train a model Measure how accurate you expect the model to be Turning raw data into features Surprisingly hard not to peek at the future. End of explanation plot_logit() Explanation: Nonlinear models Your features may depend nonlinearly on the raw data! End of explanation explain_model(reg) Explanation: Turning features into a model When using linear regression we assume that $$ \mathbf{score} = w_1 \cdot \mathbf{avg_score} + w_2 \cdot \mathbf{fg_pct} + \cdots + w_m \cdot \mathbf{win_pct} $$ Try to find $(w_1, w_2, \ldots, w_m)$. More concisely Try to find a $\mathbf{w}$ satisfying $X\mathbf{w} = y$. End of explanation plot_scores(reg, 1000) Explanation: What can we say about linear regression? Linear regression minimizes the sum of squared errors $$\sum (\mathbf{x}_j \cdot \mathbf{w} - y_j)^2$$ Linear regression finds the most likely weights Given our data, $(X, \mathbf{y})$, Bayes Rule says that $$ P(\mathbf{w} | X, \mathbf{y}) = \frac{P(X, \mathbf{y} | \mathbf{w}) p(\mathbf{w})}{P(X, \mathbf{y})} $$ Linear regression is geometrically pleasant (and syntactically terrifying) $X\mathbf{w}$ is the nearest point to $\mathbf{y}$ in the $m$-dimensional subspace of $\mathbb{R}^n$ spanned by the columns of $X$. More guarantees*: If there is no noise, the true $\mathbf{w}$ will be recovered $\mathbf{w}$ is unique $\mathbf{w}$ exists (*not actually guaranteed) Evaluating Fit End of explanation plot_residuals(reg) Explanation: How wrong will I be? End of explanation plot_overfitting() plot_overfitting(line=True) Explanation: Cross validation, testing, overfitting... End of explanation predict_winner(clf, 'North Carolina', 'Connecticut') Explanation: Logistic regression, briefly End of explanation plot_sigmoid() Explanation: Instead of $$ \mathbf{y} = X\mathbf{w}, $$ $$ \mathbf{y} = \sigma \left(X\mathbf{w}\right) $$ What is $\sigma$? $$ \sigma(x) = \frac{1}{1 + e^{-x}} $$ End of explanation plot_orings() Explanation: Challenger Disaster <img src='oring.jpg'></img> Challenger dataset End of explanation
3,689
Given the following text description, write Python code to implement the functionality described below step by step Description: GA4GH 1000 Genomes Reads Protocol Example This example illustrates how to access alignment data made available using a GA4GH interface. Initialize the client In this step we create a client object which will be used to communicate with the server. It is initialized using the URL. Step1: Search read group sets Read group sets are logical containers for read groups similar to BAM. We can obtain read group sets via a search_read_group_sets request. Observe that this request takes as it's main parameter dataset_id, which was obtained using the example in 1kg_metadata_service using a search_datasets request. Step2: Note Step3: Note, like in the previous example. Only a selected amount of parameters are selected for illustration, the data returned by the server is far richer, this format is only to have a more aesthetic presentation. Search reads This request returns reads were the read group set names we obtained above. The reference ID provided corresponds to chromosome 1 as obtained from the 1kg_reference_service examples. A search_reads request searches for read alignments in a region using start and end coordinates.
Python Code: import ga4gh_client.client as client c = client.HttpClient("http://1kgenomes.ga4gh.org") Explanation: GA4GH 1000 Genomes Reads Protocol Example This example illustrates how to access alignment data made available using a GA4GH interface. Initialize the client In this step we create a client object which will be used to communicate with the server. It is initialized using the URL. End of explanation counter = 0 for read_group_set in c.search_read_group_sets(dataset_id="WyIxa2dlbm9tZXMiXQ"): counter += 1 if counter < 4: print "Read Group Set: {}".format(read_group_set.name) print "id: {}".format(read_group_set.id) print "dataset_id: {}".format(read_group_set.dataset_id) print "Aligned Read Count: {}".format(read_group_set.stats.aligned_read_count) print "Unaligned Read Count: {}\n".format(read_group_set.stats.unaligned_read_count) for read_group in read_group_set.read_groups: print " Read group:" print " id: {}".format(read_group.id) print " Name: {}".format(read_group.name) print " Description: {}".format(read_group.description) print " Biosample Id: {}\n".format(read_group.bio_sample_id) else: break Explanation: Search read group sets Read group sets are logical containers for read groups similar to BAM. We can obtain read group sets via a search_read_group_sets request. Observe that this request takes as it's main parameter dataset_id, which was obtained using the example in 1kg_metadata_service using a search_datasets request. End of explanation read_group_set = c.get_read_group_set(read_group_set_id="WyIxa2dlbm9tZXMiLCJyZ3MiLCJOQTE5Njc4Il0") print "Read Group Set: {}".format(read_group_set.name) print "id: {}".format(read_group_set.id) print "dataset_id: {}".format(read_group_set.dataset_id) print "Aligned Read Count: {}".format(read_group_set.stats.aligned_read_count) print "Unaligned Read Count: {}\n".format(read_group_set.stats.unaligned_read_count) for read_group in read_group_set.read_groups: print " Read Group: {}".format(read_group.name) print " id: {}".format(read_group.bio_sample_id) print " bio_sample_id: {}\n".format(read_group.bio_sample_id) Explanation: Note: only a small subset of elements is being illustrated, the data returned by the servers is richer, that is, it contains other informational fields which may be of interest. Get read group set Similarly, we can obtain a specific Read Group Set by providing a specific identifier. End of explanation for read_group in read_group_set.read_groups: print "Alignment from {}\n".format(read_group.name) alignment = c.search_reads(read_group_ids=[read_group.id], start=0, end=1000000, reference_id="WyJOQ0JJMzciLCIxIl0").next() print " id: {}".format(alignment.id) print " fragment_name: {}".format(alignment.fragment_name) print " aligned_sequence: {}\n".format(alignment.aligned_sequence) Explanation: Note, like in the previous example. Only a selected amount of parameters are selected for illustration, the data returned by the server is far richer, this format is only to have a more aesthetic presentation. Search reads This request returns reads were the read group set names we obtained above. The reference ID provided corresponds to chromosome 1 as obtained from the 1kg_reference_service examples. A search_reads request searches for read alignments in a region using start and end coordinates. End of explanation
3,690
Given the following text description, write Python code to implement the functionality described below step by step Description: Batch Normalization – Lesson What is it? What are it's benefits? How do we add it to a network? Let's see it work! What are you hiding? What is Batch Normalization?<a id='theory'></a> Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization Step6: Neural network classes for testing The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions. About the code Step9: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines. We add batch normalization to layers inside the fully_connected function. Here are some important points about that code Step10: Comparisons between identical networks, with and without batch normalization The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights. Step11: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations. If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.) The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations. Step12: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note Step13: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights. Step14: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. 
But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate. The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens. Step15: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast. The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights. Step16: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy. The cell below shows a similar pair of networks trained for only 2000 iterations. Step17: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced. The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights. Step18: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all. The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights. Step19: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization. However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster. Step20: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights. Step21: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights. Step22: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a> Step23: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. 
The network without it never gets anywhere. The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights. Step24: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy. The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a> Step25: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck. The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights. Step26: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%. Full Disclosure Step27: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.) The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights. Step29: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. Note Step31: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points Step32: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training. Step33: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. Note
Python Code: # Import necessary packages import tensorflow as tf import tqdm import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Import MNIST data so we have something for our experiments from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) Explanation: Batch Normalization – Lesson What is it? What are it's benefits? How do we add it to a network? Let's see it work! What are you hiding? What is Batch Normalization?<a id='theory'></a> Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch. Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network. For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3. When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network). Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models. Benefits of Batch Normalization<a id="benefits"></a> Batch normalization optimizes network training. It has been shown to have several benefits: 1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. 2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. 3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights. 4. 
Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again. 5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great. 6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. 7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization. Batch Normalization in TensorFlow<a id="implementation_1"></a> This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization. End of explanation class NeuralNet: def __init__(self, initial_weights, activation_fn, use_batch_norm): Initializes this object, creating a TensorFlow graph using the given parameters. :param initial_weights: list of NumPy arrays or Tensors Initial values for the weights for every layer in the network. We pass these in so we can create multiple networks with the same starting weights to eliminate training differences caused by random initialization differences. The number of items in the list defines the number of layers in the network, and the shapes of the items in the list define the number of nodes in each layer. e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would create a network with 784 inputs going into a hidden layer with 256 nodes, followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activate function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param use_batch_norm: bool Pass True to create a network that uses batch normalization; False otherwise Note: this network will not use batch normalization on layers that do not have an activation function. # Keep track of whether or not this network uses batch normalization. 
self.use_batch_norm = use_batch_norm self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm" # Batch normalization needs to do different calculations during training and inference, # so we use this placeholder to tell the graph which behavior to use. self.is_training = tf.placeholder(tf.bool, name="is_training") # This list is just for keeping track of data we want to plot later. # It doesn't actually have anything to do with neural nets or batch normalization. self.training_accuracies = [] # Create the network graph, but it will not actually have any real values until after you # call train or test self.build_network(initial_weights, activation_fn) def build_network(self, initial_weights, activation_fn): Build the graph. The graph still needs to be trained via the `train` method. :param initial_weights: list of NumPy arrays or Tensors See __init__ for description. :param activation_fn: Callable See __init__ for description. self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]]) layer_in = self.input_layer for weights in initial_weights[:-1]: layer_in = self.fully_connected(layer_in, weights, activation_fn) self.output_layer = self.fully_connected(layer_in, initial_weights[-1]) def fully_connected(self, layer_in, initial_weights, activation_fn=None): Creates a standard, fully connected layer. Its number of inputs and outputs will be defined by the shape of `initial_weights`, and its starting weight values will be taken directly from that same parameter. If `self.use_batch_norm` is True, this layer will include batch normalization, otherwise it will not. :param layer_in: Tensor The Tensor that feeds into this layer. It's either the input to the network or the output of a previous layer. :param initial_weights: NumPy array or Tensor Initial values for this layer's weights. The shape defines the number of nodes in the layer. e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 outputs. :param activation_fn: Callable or None (default None) The non-linearity used for the output of the layer. If None, this layer will not include batch normalization, regardless of the value of `self.use_batch_norm`. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. # Since this class supports both options, only use batch normalization when # requested. However, do not use it on the final layer, which we identify # by its lack of an activation function. if self.use_batch_norm and activation_fn: # Batch normalization uses weights as usual, but does NOT add a bias term. This is because # its calculations include gamma and beta variables that make the bias term unnecessary. # (See later in the notebook for more details.) weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) # Apply batch normalization to the linear combination of the inputs and weights batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) # Now apply the activation function, *after* the normalization. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. 
weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None): Trains the model on the MNIST training dataset. :param session: Session Used to run training graph operations. :param learning_rate: float Learning rate used during gradient descent. :param training_batches: int Number of batches to train. :param batches_per_sample: int How many batches to train before sampling the validation accuracy. :param save_model_as: string or None (default None) Name to use if you want to save the trained model. # This placeholder will store the target labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define loss and optimizer cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer)) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) if self.use_batch_norm: # If we don't include the update ops as dependencies on the train step, the # tf.layers.batch_normalization layers won't update their population statistics, # which will cause the model to fail at inference time with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) # Train for the appropriate number of batches. (tqdm is only for a nice timing display) for i in tqdm.tqdm(range(training_batches)): # We use batches of 60 just because the original paper did. You can use any size batch you like. batch_xs, batch_ys = mnist.train.next_batch(60) session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) # Periodically test accuracy against the 5k validation images and store it for plotting later. if i % batches_per_sample == 0: test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) self.training_accuracies.append(test_accuracy) # After training, report accuracy against test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy)) # If you want to use this model later for inference instead of having to retrain it, # just construct it with the same parameters and then pass this file to the 'test' function if save_model_as: tf.train.Saver().save(session, save_model_as) def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None): Trains a trained model on the MNIST testing dataset. :param session: Session Used to run the testing graph operations. :param test_training_accuracy: bool (default False) If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. Note: in real life, *always* perform inference using the population mean and variance. 
This parameter exists just to support demonstrating what happens if you don't. :param include_individual_predictions: bool (default True) This function always performs an accuracy test against the entire test set. But if this parameter is True, it performs an extra test, doing 200 predictions one at a time, and displays the results and accuracy. :param restore_from: string or None (default None) Name of a saved model if you want to test with previously saved weights. # This placeholder will store the true labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # If provided, restore from a previously saved model if restore_from: tf.train.Saver().restore(session, restore_from) # Test against all of the MNIST test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images, labels: mnist.test.labels, self.is_training: test_training_accuracy}) print('-'*75) print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy)) # If requested, perform tests predicting individual values rather than batches if include_individual_predictions: predictions = [] correct = 0 # Do 200 predictions, 1 at a time for i in range(200): # This is a normal prediction using an individual test case. However, notice # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`. # Remember that will tell it whether it should use the batch mean & variance or # the population estimates that were calucated while training the model. pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy], feed_dict={self.input_layer: [mnist.test.images[i]], labels: [mnist.test.labels[i]], self.is_training: test_training_accuracy}) correct += corr predictions.append(pred[0]) print("200 Predictions:", predictions) print("Accuracy on 200 samples:", correct/200) Explanation: Neural network classes for testing The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions. About the code: This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization. It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train. End of explanation def plot_training_accuracies(*args, **kwargs): Displays a plot of the accuracies calculated during training to demonstrate how many iterations it took for the model(s) to converge. :param args: One or more NeuralNet objects You can supply any number of NeuralNet objects as unnamed arguments and this will display their training accuracies. Be sure to call `train` the NeuralNets before calling this function. :param kwargs: You can supply any named parameters here, but `batches_per_sample` is the only one we look for. 
It should match the `batches_per_sample` value you passed to the `train` function. fig, ax = plt.subplots() batches_per_sample = kwargs['batches_per_sample'] for nn in args: ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample), nn.training_accuracies, label=nn.name) ax.set_xlabel('Training steps') ax.set_ylabel('Accuracy') ax.set_title('Validation Accuracy During Training') ax.legend(loc=4) ax.set_ylim([0,1]) plt.yticks(np.arange(0, 1.1, 0.1)) plt.grid(True) plt.show() def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500): Creates two networks, one with and one without batch normalization, then trains them with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies. :param use_bad_weights: bool If True, initialize the weights of both networks to wildly inappropriate weights; if False, use reasonable starting weights. :param learning_rate: float Learning rate used during gradient descent. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activate function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param training_batches: (default 50000) Number of batches to train. :param batches_per_sample: (default 500) How many batches to train before sampling the validation accuracy. # Use identical starting weights for each network to eliminate differences in # weight initialization as a cause for differences seen in training performance # # Note: The networks will use these weights to define the number of and shapes of # its layers. The original batch normalization paper used 3 hidden layers # with 100 nodes in each, followed by a 10 node output layer. These values # build such a network, but feel free to experiment with different choices. # However, the input size should always be 784 and the final output should be 10. if use_bad_weights: # These weights should be horrible because they have such a large standard deviation weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,10), scale=5.0).astype(np.float32) ] else: # These weights should be good because they have such a small standard deviation weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] # Just to make sure the TensorFlow's default graph is empty before we start another # test, because we don't bother using different graphs or scoping and naming # elements carefully in this sample code. 
tf.reset_default_graph() # build two versions of same network, 1 without and 1 with batch normalization nn = NeuralNet(weights, activation_fn, False) bn = NeuralNet(weights, activation_fn, True) # train and test the two models with tf.Session() as sess: tf.global_variables_initializer().run() nn.train(sess, learning_rate, training_batches, batches_per_sample) bn.train(sess, learning_rate, training_batches, batches_per_sample) nn.test(sess) bn.test(sess) # Display a graph of how validation accuracies changed during training # so we can compare how the models trained and when they converged plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample) Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines. We add batch normalization to layers inside the fully_connected function. Here are some important points about that code: 1. Layers with batch normalization do not include a bias term. 2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.) 3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later. 4. We add the normalization before calling the activation function. In addition to that code, the training step is wrapped in the following with statement: python with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference. Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization. Batch Normalization Demos<a id='demos'></a> This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights. Code to support testing The following two functions support the demos we run in the notebook. The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see it that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots. The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really imporant thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. 
This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights. End of explanation train_and_test(False, 0.01, tf.nn.relu) Explanation: Comparisons between identical networks, with and without batch normalization The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights. End of explanation train_and_test(False, 0.01, tf.nn.relu, 2000, 50) Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations. If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.) The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations. End of explanation train_and_test(False, 0.01, tf.nn.sigmoid) Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.) In the above example, you should also notice that the networks trained fewer batches per second then what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights. End of explanation train_and_test(False, 1, tf.nn.relu) Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights. End of explanation train_and_test(False, 1, tf.nn.relu) Explanation: Now we're using ReLUs again, but with a larger learning rate. 
The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate. The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens. End of explanation train_and_test(False, 1, tf.nn.sigmoid) Explanation: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast. The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights. End of explanation train_and_test(False, 1, tf.nn.sigmoid, 2000, 50) Explanation: In this example, we switched to a sigmoid activation function. It appears to hande the higher learning rate well, with both networks achieving high accuracy. The cell below shows a similar pair of networks trained for only 2000 iterations. End of explanation train_and_test(False, 2, tf.nn.relu) Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced. The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights. End of explanation train_and_test(False, 2, tf.nn.sigmoid) Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all. The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights. End of explanation train_and_test(False, 2, tf.nn.sigmoid, 2000, 50) Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization. However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster. End of explanation train_and_test(True, 0.01, tf.nn.relu) Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose randome values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights. End of explanation train_and_test(True, 0.01, tf.nn.sigmoid) Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. 
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights. End of explanation train_and_test(True, 1, tf.nn.relu) Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a> End of explanation train_and_test(True, 1, tf.nn.sigmoid) Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere. The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights. End of explanation train_and_test(True, 2, tf.nn.relu) Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time tro train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy. The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a> End of explanation train_and_test(True, 2, tf.nn.sigmoid) Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck. The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights. End of explanation train_and_test(True, 1, tf.nn.relu) Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%. Full Disclosure: Batch Normalization Doesn't Fix Everything Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run. This section includes two examples that show runs when batch normalization did not help at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights. End of explanation train_and_test(True, 2, tf.nn.relu) Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.) 
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights. End of explanation def fully_connected(self, layer_in, initial_weights, activation_fn=None): Creates a standard, fully connected layer. Its number of inputs and outputs will be defined by the shape of `initial_weights`, and its starting weight values will be taken directly from that same parameter. If `self.use_batch_norm` is True, this layer will include batch normalization, otherwise it will not. :param layer_in: Tensor The Tensor that feeds into this layer. It's either the input to the network or the output of a previous layer. :param initial_weights: NumPy array or Tensor Initial values for this layer's weights. The shape defines the number of nodes in the layer. e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 outputs. :param activation_fn: Callable or None (default None) The non-linearity used for the output of the layer. If None, this layer will not include batch normalization, regardless of the value of `self.use_batch_norm`. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. if self.use_batch_norm and activation_fn: # Batch normalization uses weights as usual, but does NOT add a bias term. This is because # its calculations include gamma and beta variables that make the bias term unnecessary. weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) num_out_nodes = initial_weights.shape[-1] # Batch normalization adds additional trainable variables: # gamma (for scaling) and beta (for shifting). gamma = tf.Variable(tf.ones([num_out_nodes])) beta = tf.Variable(tf.zeros([num_out_nodes])) # These variables will store the mean and variance for this layer over the entire training set, # which we assume represents the general population distribution. # By setting `trainable=False`, we tell TensorFlow not to modify these variables during # back propagation. Instead, we will assign values to these variables ourselves. pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False) pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False) # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero. # This is the default value TensorFlow uses. epsilon = 1e-3 def batch_norm_training(): # Calculate the mean and variance for the data coming out of this layer's linear-combination step. # The [0] defines an array of axes to calculate over. batch_mean, batch_variance = tf.nn.moments(linear_output, [0]) # Calculate a moving average of the training data's mean and variance while training. # These will be used during inference. # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter # "momentum" to accomplish this and defaults it to 0.99 decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer. # This is necessary because the those two operations are not actually in the graph # connecting the linear_output and batch_normalization layers, # so TensorFlow would otherwise just skip them. 
with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): # During inference, use the our estimated population mean and variance to normalize the layer return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon) # Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute # the operation returned from `batch_norm_training`; otherwise it will execute the graph # operation returned from `batch_norm_inference`. batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference) # Pass the batch-normalized layer output through the activation function. # The literature states there may be cases where you want to perform the batch normalization *after* # the activation function, but it is difficult to find any uses of that in practice. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures. Batch Normalization: A Detailed Look<a id='implementation_2'></a> The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer. We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ $$ \mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i $$ We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. 
If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation. $$ \sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2 $$ Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.) $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice. In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) The next section shows you how to implement the math directly. Batch normalization without the tf.layers package Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package. However, if you would like to implement batch normalization at a lower level, the following code shows you how. 
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package. 1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before. End of explanation def batch_norm_test(test_training_accuracy): :param test_training_accuracy: bool If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] tf.reset_default_graph() # Train the model bn = NeuralNet(weights, tf.nn.relu, True) # First train the network with tf.Session() as sess: tf.global_variables_initializer().run() bn.train(sess, 0.01, 2000, 2000) bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True) Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points: It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function. It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights. Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly. TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization. tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation. We use the tf.nn.moments function to calculate the batch mean and variance. 2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization: python if self.use_batch_norm: with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) Our new version of fully_connected handles updating the population statistics directly. 
That means you can also simplify your code by replacing the above if/else condition with just this line: python train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) 3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training: python return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon) return gamma * normalized_linear_output + beta And replace this line in batch_norm_inference: python return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon) return gamma * normalized_linear_output + beta As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$: $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ And the second line is a direct translation of the following equation: $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. Why the difference between training and inference? In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that? First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at at time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training). End of explanation batch_norm_test(True) Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training. End of explanation batch_norm_test(False) Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. 
That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions. To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculated during training. End of explanation
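To make the batch-size-of-one problem concrete, here is a small standalone sketch (an addition to this walkthrough, not part of the original notebook) that applies the batch-normalization formula to a single-sample batch; the numbers are illustrative only. With one sample, the batch mean equals the sample itself, so the normalized value is always zero and the layer output collapses to beta, whereas stored population statistics preserve the input's information.
import numpy as np

epsilon = 1e-3
gamma, beta = 1.0, 0.0                      # the initial values used above
single_batch = np.array([[0.73]])           # a "batch" containing one linear output

batch_mean = single_batch.mean()            # equals the sample itself
batch_variance = single_batch.var()         # always 0 for a single sample
normalized = (single_batch - batch_mean) / np.sqrt(batch_variance + epsilon)
print(normalized)                           # [[0.]] -> output is gamma*0 + beta = beta

# Using population statistics accumulated during training instead keeps the
# input's information (the values below are hypothetical).
pop_mean, pop_variance = 0.5, 0.04
normalized_inference = (single_batch - pop_mean) / np.sqrt(pop_variance + epsilon)
print(gamma * normalized_inference + beta)  # a value that actually depends on the input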
3,691
Given the following text description, write Python code to implement the functionality described below step by step Description: Sklearn sklearn.linear_model linear_model
Python Code: from matplotlib.colors import ListedColormap from sklearn import cross_validation, datasets, linear_model, metrics import numpy as np %pylab inline Explanation: Sklearn sklearn.liner_model linear_model: * RidgeClassifier * SGDClassifier * SGDRegressor * LinearRegression * LogisticRegression * Lasso * etc документация: http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model примеры: http://scikit-learn.org/stable/modules/linear_model.html#linear-model End of explanation blobs = datasets.make_blobs(centers = 2, cluster_std = 5.5, random_state=1) colors = ListedColormap(['red', 'blue']) pylab.figure(figsize(8, 8)) pylab.scatter([x[0] for x in blobs[0]], [x[1] for x in blobs[0]], c=blobs[1], cmap=colors) train_data, test_data, train_labels, test_labels = cross_validation.train_test_split(blobs[0], blobs[1], test_size = 0.3, random_state = 1) Explanation: Генерация данных End of explanation #создание объекта - классификатора ridge_classifier = linear_model.RidgeClassifier(random_state = 1) #обучение классификатора ridge_classifier.fit(train_data, train_labels) #применение обученного классификатора ridge_predictions = ridge_classifier.predict(test_data) print test_labels print ridge_predictions #оценка качества классификации metrics.accuracy_score(test_labels, ridge_predictions) ridge_classifier.coef_ ridge_classifier.intercept_ Explanation: Линейная классификация RidgeClassifier End of explanation log_regressor = linear_model.LogisticRegression(random_state = 1) log_regressor.fit(train_data, train_labels) lr_predictions = log_regressor.predict(test_data) lr_proba_predictions = log_regressor.predict_proba(test_data) print test_labels print lr_predictions print lr_proba_predictions print metrics.accuracy_score(test_labels, lr_predictions) print metrics.accuracy_score(test_labels, ridge_predictions) Explanation: LogisticRegression End of explanation ridge_scoring = cross_validation.cross_val_score(ridge_classifier, blobs[0], blobs[1], scoring = 'accuracy', cv = 10) lr_scoring = cross_validation.cross_val_score(log_regressor, blobs[0], blobs[1], scoring = 'accuracy', cv = 10) lr_scoring print 'Ridge mean:{}, max:{}, min:{}, std:{}'.format(ridge_scoring.mean(), ridge_scoring.max(), ridge_scoring.min(), ridge_scoring.std()) print 'Log mean:{}, max:{}, min:{}, std:{}'.format(lr_scoring.mean(), lr_scoring.max(), lr_scoring.min(), lr_scoring.std()) Explanation: Оценка качества по cross-validation cross_val_score End of explanation scorer = metrics.make_scorer(metrics.accuracy_score) cv_strategy = cross_validation.StratifiedShuffleSplit(blobs[1], n_iter = 20 , test_size = 0.3, random_state = 2) ridge_scoring = cross_validation.cross_val_score(ridge_classifier, blobs[0], blobs[1], scoring = scorer, cv = cv_strategy) lr_scoring = cross_validation.cross_val_score(log_regressor, blobs[0], blobs[1], scoring = scorer, cv = cv_strategy) print 'Ridge mean:{}, max:{}, min:{}, std:{}'.format(ridge_scoring.mean(), ridge_scoring.max(), ridge_scoring.min(), ridge_scoring.std()) print 'Log mean:{}, max:{}, min:{}, std:{}'.format(lr_scoring.mean(), lr_scoring.max(), lr_scoring.min(), lr_scoring.std()) Explanation: cross_val_score с заданными scorer и cv_strategy End of explanation
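A practical note on the code above: it uses Python 2 print statements (under Python 3 they need parentheses), and the sklearn.cross_validation module it imports was deprecated and later removed from scikit-learn; its functions now live in sklearn.model_selection. The snippet below is a hedged sketch of the equivalent modern calls, reusing the blobs data and the log_regressor estimator defined above; only the import paths and the splitter construction change.
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedShuffleSplit
from sklearn.metrics import make_scorer, accuracy_score

# same split as above, with the modern import path
train_data, test_data, train_labels, test_labels = train_test_split(
    blobs[0], blobs[1], test_size=0.3, random_state=1)

# the splitter no longer takes the labels in its constructor;
# they are passed to cross_val_score instead
cv_strategy = StratifiedShuffleSplit(n_splits=20, test_size=0.3, random_state=2)
scorer = make_scorer(accuracy_score)
lr_scoring = cross_val_score(log_regressor, blobs[0], blobs[1],
                             scoring=scorer, cv=cv_strategy)
print(lr_scoring.mean())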
3,692
Given the following text description, write Python code to implement the functionality described below step by step Description: Assessing Cox model fit using residuals (work in progress) This tutorial is on some common use cases of the (many) residuals of the Cox model. We can use residuals to diagnose a model's poor fit to a dataset, and improve an existing model's fit. Step1: Martingale residuals Defined as Step2: Deviance residuals One problem with martingale residuals is that they are not symmetric around 0. Deviance residuals are a transform of martingale residuals that makes them symmetric. Roughly symmetric around zero, with approximate standard deviation equal to 1. Positive values mean that the patient died sooner than expected. Negative values mean that the patient lived longer than expected (or were censored). Very large or small values are likely outliers.
Python Code: df = load_rossi() df['age_strata'] = pd.cut(df['age'], np.arange(0, 80, 5)) df = df.drop('age', axis=1) cph = CoxPHFitter() cph.fit(df, 'week', 'arrest', strata=['age_strata', 'wexp']) cph.print_summary() cph.plot(); Explanation: Assessing Cox model fit using residuals (work in progress) This tutorial is on some common use cases of the (many) residuals of the Cox model. We can use resdiuals to diagnose a model's poor fit to a dataset, and improve an existing model's fit. End of explanation r = cph.compute_residuals(df, 'martingale') r.head() r.plot.scatter( x='week', y='martingale', c=np.where(r['arrest'], '#008fd5', '#fc4f30'), alpha=0.75 ) Explanation: Martingale residuals Defined as: $$ \delta_i - \Lambda(T_i) \ = \delta_i - \beta_0(T_i)\exp(\beta^T x_i)$$ where $T_i$ is the total observation time of subject $i$ and $\delta_i$ denotes whether they died under observation of not (event_observed in lifelines). From [1]: Martingale residuals take a value between $[1,−\inf]$ for uncensored observations and $[0,−\inf]$ for censored observations. Martingale residuals can be used to assess the true functional form of a particular covariate (Thernau et al. (1990)). It is often useful to overlay a LOESS curve over this plot as they can be noisy in plots with lots of observations. Martingale residuals can also be used to assess outliers in the data set whereby the survivor function predicts an event either too early or too late, however, it's often better to use the deviance residual for this. From [2]: Positive values mean that the patient died sooner than expected (according to the model); negative values mean that the patient lived longer than expected (or were censored). End of explanation r = cph.compute_residuals(df, 'deviance') r.head() r.plot.scatter( x='week', y='deviance', c=np.where(r['arrest'], '#008fd5', '#fc4f30'), alpha=0.75 ) r = r.join(df.drop(['week', 'arrest'], axis=1)) plt.scatter(r['prio'], r['deviance'], color=np.where(r['arrest'], '#008fd5', '#fc4f30')) r = cph.compute_residuals(df, 'delta_beta') r.head() r = r.join(df[['week', 'arrest']]) r.head() plt.scatter(r['week'], r['prio'], color=np.where(r['arrest'], '#008fd5', '#fc4f30')) Explanation: Deviance residuals One problem with martingale residuals is that they are not symetric around 0. Deviance residuals are a transform of martingale residuals them symetric. Roughly symmetric around zero, with approximate standard deviation equal to 1. Positive values mean that the patient died sooner than expected. Negative values mean that the patient lived longer than expected (or were censored). Very large or small values are likely outliers. End of explanation
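The cells above assume that lifelines, pandas, numpy and matplotlib have already been imported, and the explanation suggests overlaying a LOESS curve on the noisy martingale-residual plot. Below is a hedged sketch of both: the import block these cells rely on, plus a LOESS overlay using statsmodels' lowess smoother applied to the cph model and df data frame fitted above (the frac value is an arbitrary choice, not something prescribed by the original tutorial).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from statsmodels.nonparametric.smoothers_lowess import lowess

# martingale residuals against time, with a LOESS trend line on top
r = cph.compute_residuals(df, 'martingale')
smoothed = lowess(r['martingale'], r['week'], frac=0.6)  # returns (x, smoothed y) pairs sorted by x
plt.scatter(r['week'], r['martingale'],
            c=np.where(r['arrest'], '#008fd5', '#fc4f30'), alpha=0.5)
plt.plot(smoothed[:, 0], smoothed[:, 1], color='k')
plt.xlabel('week')
plt.ylabel('martingale residual')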
3,693
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 1 - The Python Data Model In this chapter, Mr. Ramalho discusses the Python Data Model. Framework here doesn't mean something like Django or Pyramid, but more about how language features and the core libraries fit together and the underlying philosophy that ties them together. It reminded me a lot about when Josh Bloch talks about the Java Collections Framework. In Java, you're expected to override or implement standard classes or interfaces to work with the framework. In Python, we implement the dunder methods such as __getitem__, which will then get called by the framework when we apply the [] operator to our class. Same goes for __len__, which gets called when people call len() on our class (by the way, the book has a great explanation on why len is not a method. Certainly something that at first seems to be inconsistent for someone coming from Java. If you haven't read the book, you really should). Now for examples Step1: He talks about other dunder methods as well. In Java, Object's toString method is certainly one of the most well-known methods. In Python, this would be __str__ and __repr__, two methods that are similar but have slightly different goals. An example may make this clearer.
Python Code: class DaysOfWeek: def __init__(self): self._days = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"] def __getitem__(self, position): return self._days[position] def __len__(self): return len(self._days) days_of_week = DaysOfWeek() "'{0}' is day number {1} of the {2} days.".format(days_of_week[1], 1, len(days_of_week)) Explanation: Chapter 1 - The Python Data Model In this chapter, Mr. Ramalho discusses the Python Data Model. Framework here doesn't mean something like Django or Pyramid, but more about how language features and the core libraries fit together and the underlying philosphy that tie them together. It reminded me a lot about when Josh Bloch talks about the Java Collections Framework. In Java, you're expected to override or implement standard classes or interfaces to work with the framework. In Python, we implement the dunder methods such as getitem, which will then get called by the framework when we apply the [] operator to our class. Same goes for len, which gets called when people call len() on our class (by the way, the book has a great explanation on why len is not a method. Certainly something that at first seems to be inconsistent for someone coming from Java. If you haven't read the book, you really should). Now for examples: End of explanation from datetime import timedelta class Duration: def __init__(self, milliseconds): self._duration_in_milliseconds = milliseconds def __repr__(self): # return unambiguous string that mimics the source code to construct the object back return "Duration({0})".format(self._duration_in_milliseconds) def __str__(self): return str(timedelta(milliseconds=self._duration_in_milliseconds)) a_billion_milliseconds = Duration(1_000_000_000) "'{0}' is unambiguous, '{1}' is human-friendly".format(repr(a_billion_milliseconds), str(a_billion_milliseconds)) Explanation: He talks about other dunder methods as well. In Java, Object's toString method is certainly one of the most well-known methods. In Python, this would be str and repr, two methods that are similar but have slightly different goals. An example may make this clearer. End of explanation
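One more payoff of the data model follows directly from the DaysOfWeek class above, and a short added sketch makes it visible: once __getitem__ is implemented, Python's fallback protocols make the object iterable and give it membership tests for free, without writing __iter__ or __contains__. This reuses the DaysOfWeek class defined above.
import random

days_of_week = DaysOfWeek()          # the class defined above

for day in days_of_week:             # iteration falls back to __getitem__ with 0, 1, 2, ...
    print(day)

print("mardi" in days_of_week)       # membership test works via the same fallback -> True
print(random.choice(days_of_week))   # random.choice only needs __len__ and __getitem__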
3,694
Given the following text description, write Python code to implement the functionality described below step by step Description: 1A.data - DataFrame et Matrice Les DataFrame se sont imposés pour manipuler les données avec le module pandas. Le module va de la manipulation des données jusqu'au calcul d'une régresion linéaire. Avec cette façon de représenter les données, associée à des un ensemble de méthodes couramment utilisées, ce qu'on faisait en une ou deux boucles se fait maintenant en une seule fonction. Cette séance contient beaucoup d'exemples et peu d'exercices. Il est conseillé de supprimer toutes les sorties et de les exécuter une à une. Step1: L'introduction ne contient pas d'éléments nécessaires à la réalisation du TD. Trouver chaussure à ses stats La programmation est omni-présente lorsqu'on manipule des données. On leur applique des traitements parfois standards, souvent adaptés pour la circonstance. On souhaite toujours programmer le moins possible mais aussi ne pas avoir à réapprendre un langage à chaque fois qu'on doit manipuler les données. Le logiciel MATLAB a proposé voici 30 ans un premier environnement de travail facilitant le calcul matriciel et ce standard s'est imposé depuis. Comme MATLAB est un logiciel payant, des équivalents open source et gratuits ont été développés. Ils proposent tous le calcul matriciel, la possibilité de visualiser, un environnement de développement. Ils différent pas des performances différentes et des éventails d'extensions différentes. R Step2: Avec une valeur manquante Step3: NaN est une convention pour une valeur manquante. On extrait la variable prix Step4: Ou Step5: Pour extraire plusieurs colonnes Step6: Pour prendre la transposée (voir aussi DataFrame.transpose) Step7: Lecture et écriture de DataFrame Aujourd'hui, on n'a plus besoin de réécrire soi-même une fonction de lecture ou d'écriture de données présentées sous forme de tables. Il existe des fonctions plus génériques qui gère un grand nombre de cas. Cette section présente brièvement les fonctions qui permettent de lire/écrire un DataFrame aux formats texte/Excel. On reprend l'exemple de section précédente. L'instruction encoding=utf-8 n'est pas obligatoire mais conseillée lorsque les données contiennent des accents (voir read_csv). Step8: On peut récupérer des données directement depuis Internet ou une chaîne de caractères et afficher le début (head) ou la fin (tail). Le code qui suit est ce qu'on écrirait d'habitude Step9: Et pout éviter les erreurs de connexion internet, les données font partie intégrante du module Step10: La fonction describe permet d'en savoir un peu plus sur les colonnes numériques de cette table. Step11: DataFrame et Index On désigne généralement une colonne ou variable par son nom. Les lignes peuvent être désignées par un entier. Step12: On extrait une ligne (loc) Step13: Mais il est possible d'utiliser une colonne ou plusieurs colonnes comme index (set_index) Step14: On peut maintenant désigner une ligne par une date Step15: Il est possible d'utiliser plusieurs colonnes comme index Step16: Si on veut changer l'index ou le supprimer (reset_index) Step17: Les index sont particulièrement utiles lorsqu'il s'agit de fusionner deux tables. Pour des petites tables, la plupart du temps, il est plus facile de s'en passer. 
Notation avec le symbole Step18: On peut sélectionner un sous-ensemble de lignes Step19: On extrait la même plage mais avec deux colonnes seulement Step20: Le même code pour lequel on renomme les colonnes extraites Step21: Exercice 1 Step22: Manipuler un DataFrame Step23: filter Filter consiste à sélectionner un sous-ensemble de lignes du dataframe. Pour filter sur plusieurs conditions, il faut utiliser les opérateurs logique & (et), | (ou), ~ (non) (voir Mapping Operators to Functions). filter, mask,where pandas Step24: union union = concaténation de deux DataFrame (qui n'ont pas nécessaire les mêmes colonnes). On peut concaténer les lignes ou les colonnes. concat Merge, join, and concatenate Step25: sort Sort = trier sort Step26: group by Cette opération consiste à grouper les lignes qui partagent une caractéristique commune (une ou ou plusieurs valeurs par exemple). Sur chaque groupe, on peut calculer une somme, une moyenne... groupby sum, cumsum, mean, count SQL GROUP BY Group By Step27: Si les nom des colonnes utilisées lors de l'opération ne sont pas mentionnés, implicitement, c'est l'index qui sera choisi. On peut aussi aggréger les informations avec une fonction personnalisée. Step28: Ou encore considérer des aggrégations différentes pour chaque colonne Step29: join (merge ou fusion) Fusionner deux tables consiste à apparier les lignes de la première table avec celle de la seconde si certaines colonnes de ces lignes partagent les mêmes valeurs. On distingue quatre cas Step30: On souhaite ajouter une colonne pays aux marathons se déroulant dans les villes suivanes. Step31: pivot (tableau croisé dynamique) Cette opération consiste à créer une seconde table en utilisant utiliser les valeurs d'une colonne comme nom de colonnes. | A | B | C | | --- | --- | --- | | A1 | B1 | C1 | | A1 | B2 | C2 | | A2 | B1 | C3 | | A2 | B2 | C4 | | A2 | B3 | C5 | L'opération pivot(A,B,C) donnera Step32: Il existe une méthode qui effectue l'opération inverse Step33: Matrix, Array (numpy) Le module le plus populaire sous Python est numpy. Il propose deux containers Matrix et Array qui facilitent le calcul matriciel. Ce module est écrit en C++, Fortran. Il sera plus rapide que tout code écrit en Python. De nombreuses modules Python s'appuient sur numpy Step34: Il y a deux types d'objets, array et matrix. Le type matrix se comporte comme on peut l'attendre d'une matrice. Le type array est plus générique et autorise plus de deux dimensions. Les opérateurs qui s'y appliquent ne comportent pas comme ceux d'une matrice, en particulier la multiplication qui se fait terme à terme pour un tableau. Step35: Un tableau en plusieurs dimensions Step36: Quelques liens pour apprendre à manipuler ces objets Step37: Pour d'autres fonctionnalités aléatoires Step38: La conversion réciproque est aussi simple mais il faut préciser les noms des colonnes qui ne sont pas mémorisées dans l'objet numpy.array Step39: Exercice 3 Step40: On veut construire le modèle
Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() Explanation: 1A.data - DataFrame et Matrice Les DataFrame se sont imposés pour manipuler les données avec le module pandas. Le module va de la manipulation des données jusqu'au calcul d'une régresion linéaire. Avec cette façon de représenter les données, associée à des un ensemble de méthodes couramment utilisées, ce qu'on faisait en une ou deux boucles se fait maintenant en une seule fonction. Cette séance contient beaucoup d'exemples et peu d'exercices. Il est conseillé de supprimer toutes les sorties et de les exécuter une à une. End of explanation import pandas l = [ { "date":"2014-06-22", "prix":220.0, "devise":"euros" }, { "date":"2014-06-23", "prix":221.0, "devise":"euros" },] df = pandas.DataFrame(l) df Explanation: L'introduction ne contient pas d'éléments nécessaires à la réalisation du TD. Trouver chaussure à ses stats La programmation est omni-présente lorsqu'on manipule des données. On leur applique des traitements parfois standards, souvent adaptés pour la circonstance. On souhaite toujours programmer le moins possible mais aussi ne pas avoir à réapprendre un langage à chaque fois qu'on doit manipuler les données. Le logiciel MATLAB a proposé voici 30 ans un premier environnement de travail facilitant le calcul matriciel et ce standard s'est imposé depuis. Comme MATLAB est un logiciel payant, des équivalents open source et gratuits ont été développés. Ils proposent tous le calcul matriciel, la possibilité de visualiser, un environnement de développement. Ils différent pas des performances différentes et des éventails d'extensions différentes. R : la référence pour les statisticiens, il est utilisé par tous les chercheurs dans ce domaine. SciLab : développé par l'INRIA. Octave : clone open source de MATLAB, il n'inclut pas autant de librairies mais il est gratuit. Julia : c'est le plus jeune, il est plus rapide mais ses librairies sont moins nombreuses. Ils sont tous performants en qui concerne le calcul numérique, ils le sont beaucoup moins lorsqu'il s'agit de faire des traitements qui ne sont pas numériques (traiter du texte par exemple) car ils n'ont pas été prévus pour cela à la base (à l'exception de Julia peut être qui est plus jeune Python v. Clojure v. Julia). Le langage Python est devenu depuis 2012 une alternative intéressante pour ces raisons (voir également Why Python?) : Il propose les même fonctionnalités de base (calcul matriciel, graphiques, environnement). Python est plus pratique pour tout ce qui n'est pas numérique (fichiers, web, server web, SQL, ...). La plupart des librairies connues et écrites en C++ ont été portée sous Python. Il est plus facile de changer un composant important en Python (numpy par exemple) si le nouveau est plus efficace. Un inconvénient peut-être est qu'il faut installer plusieurs extensions avant de pouvoir commencer à travailler (voir Installation de Python) : numpy : calcul matriciel pandas : DataFrame jupyter : notebooks (comme celui-ci) matplotlib : graphiques scikit-learn : machine learning, statistique descriptive statsmodels : statistiques descriptives Optionnels : Spyder : environnement type R, MATLAB, ... scipy : autres traitements numériques (voir NumPy vs. SciPy vs. other packages) dask : dataframe distribué et capables de gérer des gros volumes de données (> 5Go) Les environnements Python évoluent très vite, les modules mentionnés ici sont tous maintenus mais il eut en surgir de nouveau très rapidement. 
Quelques environnements à suivre : Python Tools for Visual Studio : environnement de développement pour Visual Studio PyCharm : n'inclut pas les graphiques mais est assez agréable pour programmer IEP : écrit en Python PyDev : extension pour Eclipse WingIDE Si vous ne voulez pas programmer, il existe des alternatives. C'est assez performant sur de petits jeux de données mais cela devient plus complexe dès qu'on veut programmer car le code doit tenir compte des spécificités de l'outil. Orange : écrit en Python Weka : écrit en Java (le pionnier) dataiku : startup française RapidMiner : version gratuite et payante AzureML : solution Microsoft de workflow de données C'est parfois plus pratique pour commencer mais mal commode si on veut automatiser un traitrment pour répéter la même tâche de façon régulière. Pour les travaux pratiques à l'ENSAE, j'ai choisi les notebooks : c'est une page blanche où on peut mélanger texte, équations, graphiques, code et exécution de code. Taille de DataFrame Les DataFrame en Python sont assez rapides lorsqu'il y a moins de 10 millions d'observations et que le fichier texte qui décrit les données n'est pas plus gros que 10 Mo. Au delà, il faut soit être patient, soit être astucieux comme ici : DataFrame et SQL, Data Wrangling with Pandas. Valeurs manquantes Lorsqu'on récupère des données, il peut arriver qu'une valeur soit manquante. Missing Data Working with missing data DataFrame (pandas) Quelques liens : An Introduction to Pandas Un Data Frame est un objet qui est présent dans la plupart des logiciels de traitements de données, c'est une matrice, chaque colonne est de même type (nombre, dates, texte), elle peut contenir des valeurs manquantes. On peut considérer chaque colonne comme les variables d'une table (pandas.Dataframe - cette page contient toutes les méthodes de la classe). End of explanation l = [ { "date":"2014-06-22", "prix":220.0, "devise":"euros" }, { "date":"2014-06-23", "devise":"euros" },] df = pandas.DataFrame(l) df Explanation: Avec une valeur manquante : End of explanation df.prix Explanation: NaN est une convention pour une valeur manquante. On extrait la variable prix : End of explanation df["prix"] Explanation: Ou : End of explanation df [["date","prix"]] Explanation: Pour extraire plusieurs colonnes : End of explanation df.T Explanation: Pour prendre la transposée (voir aussi DataFrame.transpose) : End of explanation import pandas l = [ { "date":"2014-06-22", "prix":220.0, "devise":"euros" }, { "date":"2014-06-23", "prix":221.0, "devise":"euros" },] df = pandas.DataFrame(l) # écriture au format texte df.to_csv("exemple.txt",sep="\t",encoding="utf-8", index=False) # on regarde ce qui a été enregistré with open("exemple.txt", "r", encoding="utf-8") as f : text = f.read() print(text) # on enregistre au format Excel df.to_excel("exemple.xlsx", index=False) # on ouvre Excel sur ce fichier (sous Windows) from pyquickhelper.loghelper import run_cmd from pyquickhelper.loghelper.run_cmd import skip_run_cmd out,err = run_cmd("exemple.xlsx", wait = False) Explanation: Lecture et écriture de DataFrame Aujourd'hui, on n'a plus besoin de réécrire soi-même une fonction de lecture ou d'écriture de données présentées sous forme de tables. Il existe des fonctions plus génériques qui gère un grand nombre de cas. Cette section présente brièvement les fonctions qui permettent de lire/écrire un DataFrame aux formats texte/Excel. On reprend l'exemple de section précédente. 
L'instruction encoding=utf-8 n'est pas obligatoire mais conseillée lorsque les données contiennent des accents (voir read_csv). End of explanation if False: import pandas, urllib.request furl = urllib.request.urlopen("http://www.xavierdupre.fr/enseignement/complements/marathon.txt") df = pandas.read_csv(furl, sep="\t", names=["ville", "annee", "temps","secondes"]) df.head() Explanation: On peut récupérer des données directement depuis Internet ou une chaîne de caractères et afficher le début (head) ou la fin (tail). Le code qui suit est ce qu'on écrirait d'habitude : End of explanation from ensae_teaching_cs.data import marathon import pandas df = pandas.read_csv(marathon(filename=True), sep="\t", names=["ville", "annee", "temps","secondes"]) df.head() Explanation: Et pout éviter les erreurs de connexion internet, les données font partie intégrante du module : End of explanation df.describe() Explanation: La fonction describe permet d'en savoir un peu plus sur les colonnes numériques de cette table. End of explanation import pandas l = [ { "date":"2014-06-22", "prix":220.0, "devise":"euros" }, { "date":"2014-06-23", "prix":221.0, "devise":"euros" },] df = pandas.DataFrame(l) df Explanation: DataFrame et Index On désigne généralement une colonne ou variable par son nom. Les lignes peuvent être désignées par un entier. End of explanation df.iloc[1] Explanation: On extrait une ligne (loc) : End of explanation dfi = df.set_index("date") dfi Explanation: Mais il est possible d'utiliser une colonne ou plusieurs colonnes comme index (set_index) : End of explanation dfi.loc["2014-06-23"] Explanation: On peut maintenant désigner une ligne par une date : End of explanation df = pandas.DataFrame([ {"prénom":"xavier", "nom":"dupré", "arrondissement":18}, {"prénom":"clémence", "nom":"dupré", "arrondissement":15 } ]) dfi = df.set_index(["nom","prénom"]) dfi.loc["dupré","xavier"] Explanation: Il est possible d'utiliser plusieurs colonnes comme index : End of explanation dfi.reset_index(drop=False, inplace=True) # le mot-clé drop pour garder ou non les colonnes servant d'index # inplace signifie qu'on modifie l'instance et non qu'une copie est modifiée # donc on peut aussi écrire dfi2 = dfi.reset_index(drop=False) dfi.set_index(["nom", "arrondissement"],inplace=True) dfi Explanation: Si on veut changer l'index ou le supprimer (reset_index) : End of explanation from ensae_teaching_cs.data import marathon import pandas df = pandas.read_csv(marathon(filename=True), sep="\t", names=["ville", "annee", "temps","secondes"]) df.head() Explanation: Les index sont particulièrement utiles lorsqu'il s'agit de fusionner deux tables. Pour des petites tables, la plupart du temps, il est plus facile de s'en passer. Notation avec le symbole : Le symbole : désigne une plage de valeurs. End of explanation df[3:6] Explanation: On peut sélectionner un sous-ensemble de lignes : End of explanation df.loc[3:6,["annee","temps"]] Explanation: On extrait la même plage mais avec deux colonnes seulement : End of explanation sub = df.loc[3:6,["annee","temps"]] sub.columns = ["year","time"] sub Explanation: Le même code pour lequel on renomme les colonnes extraites : End of explanation import pandas, io # ... Explanation: Exercice 1 : créer un fichier Excel On souhaite récupérer les données donnees_enquete_2003_television.txt (source : INSEE). 
POIDSLOG : Pondération individuelle relative POIDSF : Variable de pondération individuelle cLT1FREQ : Nombre d'heures en moyenne passées à regarder la télévision cLT2FREQ : Unité de temps utilisée pour compter le nombre d'heures passées à regarder la télévision, cette unité est représentée par les quatre valeurs suivantes 0 : non concerné 1 : jour 2 : semaine 3 : mois Ensuite, on veut : Supprimer les colonnes vides Obtenir les valeurs distinctes pour la colonne cLT2FREQ Modifier la matrice pour enlever les lignes pour lesquelles l'unité de temps (cLT2FREQ) n'est pas renseignée ou égale à zéro. Sauver le résultat au format Excel. Vous aurez peut-être besoin des fonctions suivantes : numpy.isnan DataFrame.apply DataFrame.fillna ou DataFrame.isnull DataFrame.copy End of explanation from ensae_teaching_cs.data import marathon import pandas df = pandas.read_csv(marathon(), sep="\t", names=["ville", "annee", "temps","secondes"]) print(df.columns) print("villes",set(df.ville)) print("annee",list(set(df.annee))[:10],"...") Explanation: Manipuler un DataFrame : filtrer, union, sort, group by, join, pivot Si la structure DataFrame s'est imposée, c'est parce qu'on effectue toujours les mêmes opérations. Chaque fonction cache une boucle ou deux dont le coût est précisé en fin de ligne : filter : on sélectionne un sous-ensemble de lignes qui vérifie une condition $\rightarrow O(n)$ union : concaténation de deux jeux de données $\rightarrow O(n_1 + n_2)$ sort : tri $\rightarrow O(n \ln n)$ group by : grouper des lignes qui partagent une valeur commune $\rightarrow O(n)$ join : fusionner deux jeux de données en associant les lignes qui partagent une valeur commune $\rightarrow \in [O(n_1 + n_2), O(n_1 n_2)]$ pivot : utiliser des valeurs présentes dans colonne comme noms de colonnes $\rightarrow O(n)$ Les 5 premières opérations sont issues de la logique de manipulation des données avec le langage SQL (ou le logiciel SAS). La dernière correspond à un tableau croisé dynamique. Pour illustrer ces opérations, on prendre le DataFrame suivant : End of explanation subset = df [ df.annee == 1971 ] subset.head() subset = df [ (df.annee == 1971) & (df.ville == "BOSTON") ] subset.head() Explanation: filter Filter consiste à sélectionner un sous-ensemble de lignes du dataframe. Pour filter sur plusieurs conditions, il faut utiliser les opérateurs logique & (et), | (ou), ~ (non) (voir Mapping Operators to Functions). filter, mask,where pandas: filter rows of DataFrame with operator chaining Indexing and Selecting Data End of explanation concat_ligne = pandas.concat((df,df)) df.shape,concat_ligne.shape concat_col = pandas.concat((df,df), axis=1) df.shape,concat_col.shape Explanation: union union = concaténation de deux DataFrame (qui n'ont pas nécessaire les mêmes colonnes). On peut concaténer les lignes ou les colonnes. concat Merge, join, and concatenate End of explanation tri = df.sort_values( ["annee", "ville"], ascending=[0,1]) tri.head() Explanation: sort Sort = trier sort End of explanation gr = df.groupby("annee") gr nb = gr.count() nb.sort_index(ascending=False).head() nb = gr.sum() nb.sort_index(ascending=False).head(n=2) nb = gr.mean() nb.sort_index(ascending=False).head(n=3) Explanation: group by Cette opération consiste à grouper les lignes qui partagent une caractéristique commune (une ou ou plusieurs valeurs par exemple). Sur chaque groupe, on peut calculer une somme, une moyenne... 
groupby sum, cumsum, mean, count SQL GROUP BY Group By: split-apply-combine group by customisé End of explanation def max_entier(x): return int(max(x)) nb = df[["annee","secondes"]].groupby("annee").agg(max_entier).reset_index() nb.tail(n=3) Explanation: Si les nom des colonnes utilisées lors de l'opération ne sont pas mentionnés, implicitement, c'est l'index qui sera choisi. On peut aussi aggréger les informations avec une fonction personnalisée. End of explanation nb = df[["annee","ville","secondes"]].groupby("annee").agg({ "ville":len, "secondes":max_entier}) nb.tail(n=3) Explanation: Ou encore considérer des aggrégations différentes pour chaque colonne : End of explanation from IPython.display import Image Image("patates.png") Explanation: join (merge ou fusion) Fusionner deux tables consiste à apparier les lignes de la première table avec celle de la seconde si certaines colonnes de ces lignes partagent les mêmes valeurs. On distingue quatre cas : INNER JOIN - inner : on garde tous les appariements réussis LEFT OUTER JOIN - left : on garde tous les appariements réussis et les lignes non appariées de la table de gauche RIGHT OUTER JOIN - right : on garde tous les appariements réussis et les lignes non appariées de la table de droite FULL OUTER JOIN - outer : on garde tous les appariements réussis et les lignes non appariées des deux tables Exemples et documentation : * merging, joining * join * merge ou DataFrame.merge * jointures SQL - illustrations avec graphiques en patates Si les noms des colonnes utilisées lors de la fusion ne sont pas mentionnés, implicitement, c'est l'index qui sera choisi. Pour les grandes tables (> 100.000 lignes), il est fortement recommandés d'ajouter un index s'il n'existe pas avant de fusionner. A quoi correspondent les quatre cas suivants : End of explanation values = [ {"V":'BOSTON', "C":"USA"}, {"V":'NEW YORK', "C":"USA"}, {"V":'BERLIN', "C":"Germany"}, {"V":'LONDON', "C":"UK"}, {"V":'PARIS', "C":"France"}] pays = pandas.DataFrame(values) pays dfavecpays = df.merge(pays, left_on="ville", right_on="V") pandas.concat([dfavecpays.head(n=2),dfavecpays.tail(n=2)]) Explanation: On souhaite ajouter une colonne pays aux marathons se déroulant dans les villes suivanes. End of explanation piv = df.pivot("annee","ville","temps") pandas.concat([piv[20:23],piv[40:43],piv.tail(n=3)]) Explanation: pivot (tableau croisé dynamique) Cette opération consiste à créer une seconde table en utilisant utiliser les valeurs d'une colonne comme nom de colonnes. | A | B | C | | --- | --- | --- | | A1 | B1 | C1 | | A1 | B2 | C2 | | A2 | B1 | C3 | | A2 | B2 | C4 | | A2 | B3 | C5 | L'opération pivot(A,B,C) donnera : | A | B1 | B2 | B3 | | --- | --- | --- | --- | | A1 | C1 | C2 | | | A2 | C3 | C4 | C5 | pivot Reshaping and Pivot Tables Tableau croisé dynamique - wikipédia On applique cela aux marathons où on veut avoir les villes comme noms de colonnes et une année par lignes. End of explanation from datetime import datetime, time from ensae_teaching_cs.data import marathon import pandas df = pandas.read_csv(marathon(), sep="\t", names=["ville", "annee", "temps","secondes"]) df = df [["ville", "annee", "temps"]] # on enlève la colonne secondes pour la recréer df["secondes"] = df.apply( lambda r : (datetime.strptime(r.temps,"%H:%M:%S") - \ datetime(1900,1,1)).total_seconds(), axis=1) df.head() Explanation: Il existe une méthode qui effectue l'opération inverse : Dataframe.stack. 
Exercice 2 : moyennes par groupes Toujours avec le même jeu de données (marathon.txt), on veut ajouter une ligne à la fin du tableau croisé dynamique contenant la moyenne en secondes des temps des marathons pour chaque ville. Dates Les dates sont souvent compliquées à gérer car on n'utilise pas le mêmes format dans tous les pays. Pour faire simple, je recommande deux options : Soit convertir les dates/heures au format chaînes de caractères AAAA-MM-JJ hh:mm:ss:ms qui permet de trier les dates par ordre croissant. Soit convertir les dates/heures au format datetime (date) ou timedelta (durée) (voir Quelques notions sur les dates, format de date/heure). Par exemple, voici le code qui a permis de générer la colonne seconde de la table marathon : End of explanation import numpy print("int","\n",numpy.matrix([[1, 2], [3, 4,]])) print("float","\n",numpy.matrix([[1, 2], [3, 4.1]])) print("str","\n",numpy.matrix([[1, 2], [3, '4']])) Explanation: Matrix, Array (numpy) Le module le plus populaire sous Python est numpy. Il propose deux containers Matrix et Array qui facilitent le calcul matriciel. Ce module est écrit en C++, Fortran. Il sera plus rapide que tout code écrit en Python. De nombreuses modules Python s'appuient sur numpy : SciPy, pandas, scikit-learn, matplotlib, ... Il y a deux différences entre un DataFrame et un tableau numpy : Il n'y a pas d'index sur les lignes autre que l'index entier de la ligne. Tous les types doivent être identiques (tous entier, tous réels, tous str). Il n'y a pas de mélange possible. C'est à cette condition que les calculs sont aussi rapides. End of explanation m1 = numpy.matrix( [[0.0,1.0],[1.0,0.0]]) print("multiplication de matrices\n",m1 * m1) m2 = numpy.array([[0.0,1.0],[1.0,0.0]]) print("multiplication de tableaux (terme à terme)\n",m2 * m2) Explanation: Il y a deux types d'objets, array et matrix. Le type matrix se comporte comme on peut l'attendre d'une matrice. Le type array est plus générique et autorise plus de deux dimensions. Les opérateurs qui s'y appliquent ne comportent pas comme ceux d'une matrice, en particulier la multiplication qui se fait terme à terme pour un tableau. End of explanation cube = numpy.array( [ [[0.0,1.0],[1.0,0.0]], [[0.0,1.0],[1.0,0.0]] ] ) print(cube.shape) cube Explanation: Un tableau en plusieurs dimensions : End of explanation # la matrice nulle numpy.zeros( (3,4) ) # la matrice de 1 numpy.ones( (3,4) ) # la matrice identité numpy.identity( 3 ) # la matrice aléatoire numpy.random.random( (3,4)) Explanation: Quelques liens pour apprendre à manipuler ces objets : opérations avec numpy.matrix Numpy - multidimensional data arrays NUmpy Tutorial classe numpy.matrix classe numpy.array matrices nulle, identité, aléatoire On utilise beaucoup les fonctions suivantes pour créer une matrice ou un tableau particulier. End of explanation from pandas import read_csv import numpy from datetime import datetime, time from ensae_teaching_cs.data import marathon df = read_csv(marathon(filename=True), sep="\t", names=["ville", "annee", "temps","secondes"]) arr = df[["annee","secondes"]].values # retourne un array (et non un matrix) mat = numpy.matrix(arr) print(type(arr),type(mat)) arr[:2,:] Explanation: Pour d'autres fonctionnalités aléatoires : numpy.random. 
Quelques fonctions fréquemment utilisées column_stack : pour assembler des colonnes les unes à côté des autres vstack : pour assembler des lignes les unes à la suite des autres de DataFrame à numpy Le plus simple est sans doute d'utiliser pandas pour lire un fichier texte et d'utiliser la propriété values pour convertir tout ou partie du DataFrame en numpy.matrix. End of explanation import pandas df2 = pandas.DataFrame(arr, columns=["annee", "secondes"]) df2.head(n=2) Explanation: La conversion réciproque est aussi simple mais il faut préciser les noms des colonnes qui ne sont pas mémorisées dans l'objet numpy.array : End of explanation from pandas import read_csv from datetime import datetime, time from ensae_teaching_cs.data import marathon df = read_csv(marathon(filename=True), sep="\t", names=["ville", "annee", "temps","secondes"]) df = df [ (df["ville"] == "BERLIN") | (df["ville"] == "PARIS") ] for v in ["PARIS","BERLIN"]: df["est" + v] = df.apply( lambda r : 1 if r["ville"] == v else 0, axis=1) df.head(n = 3) Explanation: Exercice 3 : régression linéaire On souhaite implémenter une régression qui se traduit par le problème suivant : $Y=XA+\epsilon$. La solution est donnée par la formule matricielle : $A^*=(X'X)^{-1}X'Y$. On prépare les données suivantes. End of explanation import pandas writer = pandas.ExcelWriter('tou_example.xlsx') df.to_excel(writer, 'Data 0') df.to_excel(writer, 'Data 1') writer.save() Explanation: On veut construire le modèle : $secondes = a_0 \; annee + a_1 \; stPARIS + a_2 \; estBERLIN$. En appliquant la formule ci-dessus, déterminer les coefficients $a_0,a_1,a_2$. Annexes Créer un fichier Excel avec plusieurs feuilles La page Allow ExcelWriter() to add sheets to existing workbook donne plusieurs exemples d'écriture. End of explanation
3,695
Given the following text description, write Python code to implement the functionality described below step by step Description: Notebook as a Step using Notebooks Executor The following sample shows how to use the notebook executor as part of a Vertex AI Libraries and Variables Step1: Prerequisites You need to configure your project as detailed in https Step2: For additional details about
Python Code: !which pip !pip install kfp --upgrade -q !pip install --upgrade google-cloud-aiplatform -q !pip install --upgrade google-cloud-pipeline-components -q import kfp import os from datetime import datetime from google.cloud import aiplatform from kfp.v2 import compiler import google.cloud.aiplatform as aip kfp.__version__ # Variables PROJECT_ID = '<YOUR_PROJECT_ID>' REGION = 'us-central1' # This is where the KMS and Metastore resides from the project configuration ROOT_PATH = ".." PIPELINE_ROOT_PATH = f'gs://{PROJECT_ID}-vertex-root' PACKAGE_PATH = 'notebook-as-a-step-sample-pipeline.json' RUNNABLES_PATH = './runnables' COMPONENT_YAML_PATH = os.path.join(ROOT_PATH, 'component.yaml') if not os.path.isfile(COMPONENT_YAML_PATH): print(f'COMPONENT_YAML_PATH does not exist') WORKING_BUCKET_NAME = f'{PROJECT_ID}-naas' INPUT_NOTEBOOK_FILE = f'gs://{WORKING_BUCKET_NAME}/runnables/run_create_bucket.ipynb' OUTPUT_NOTEBOOK_FOLDER = f'gs://{WORKING_BUCKET_NAME}/outputs' INPUT_NOTEBOOK_FILE !gcloud config set project "{PROJECT_ID}" !gcloud config list # The service account used by the Pipelines must have access to this bucket. !gsutil ls "{PIPELINE_ROOT_PATH}" || gsutil mb -l "{REGION}" "{PIPELINE_ROOT_PATH}" !gsutil cp -r "{RUNNABLES_PATH}/*" "gs://{WORKING_BUCKET_NAME}/runnables" !gsutil ls "gs://{WORKING_BUCKET_NAME}/runnables" Explanation: Notebook as a Step using Notebooks Executor The following sample shows how to use the notebook executor as part of a Vertex AI Libraries and Variables End of explanation @kfp.dsl.pipeline( name="notebook-as-a-step-sample", pipeline_root=PIPELINE_ROOT_PATH) def pipeline( project: str, execution_id: str, input_notebook_file:str, output_notebook_folder:str, location:str, master_type:str, container_image_uri:str): execute_notebook_component = kfp.components.load_component_from_file(COMPONENT_YAML_PATH) execute_notebook_op = execute_notebook_component( project=project, execution_id=execution_id, input_notebook_file=input_notebook_file, output_notebook_folder=output_notebook_folder, location=location, master_type=master_type, container_image_uri=container_image_uri, parameters=f'PROJECT_ID={project},EXECUTION_ID={execution_id}' ) Explanation: Prerequisites You need to configure your project as detailed in https://cloud.google.com/vertex-ai/docs/pipelines/configure-project End of explanation NOW = datetime.now().strftime("%Y%m%d%H%M%S") JOB_ID=f'naas-{NOW}' EXECUTION_ID=f'naas_{NOW}' compiler.Compiler().compile( pipeline_func=pipeline, package_path=PACKAGE_PATH) job = aip.PipelineJob( display_name='notebook-executor-pipeline', template_path=PACKAGE_PATH, job_id=JOB_ID, parameter_values={ 'project': PROJECT_ID, 'execution_id': EXECUTION_ID, 'input_notebook_file': INPUT_NOTEBOOK_FILE, 'output_notebook_folder': OUTPUT_NOTEBOOK_FOLDER, 'location': 'us-central1', 'master_type': 'n1-standard-4', 'container_image_uri': 'gcr.io/deeplearning-platform-release/base-cpu' }, ) job.submit() Explanation: For additional details about: - Google Cloud Notebook Executor template, see the ExecutionTemplate API documentation. - Pipeline types, see kfp.dsl.types. - Notebook parameters, see Papermill parameters Run pipeline End of explanation
3,696
Given the following text description, write Python code to implement the functionality described below step by step Description: Non-parametric between conditions cluster statistic on single trial power This script shows how to compare clusters in time-frequency power estimates between conditions. It uses a non-parametric statistical procedure based on permutations and cluster level statistics. The procedure consists of Step1: Set parameters Step2: Factor to downsample the temporal dimension of the TFR computed by tfr_morlet. Decimation occurs after frequency decomposition and can be used to reduce memory usage (and possibly comptuational time of downstream operations such as nonparametric statistics) if you don't need high spectrotemporal resolution. Step3: Compute statistic Step4: View time-frequency plots
Python Code: # Authors: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.time_frequency import tfr_morlet from mne.stats import permutation_cluster_test from mne.datasets import sample print(__doc__) Explanation: Non-parametric between conditions cluster statistic on single trial power This script shows how to compare clusters in time-frequency power estimates between conditions. It uses a non-parametric statistical procedure based on permutations and cluster level statistics. The procedure consists of: extracting epochs for 2 conditions compute single trial power estimates baseline line correct the power estimates (power ratios) compute stats to see if the power estimates are significantly different between conditions. End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' tmin, tmax = -0.2, 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) include = [] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False, include=include, exclude='bads') ch_name = 'MEG 1332' # restrict example to one channel # Load condition 1 reject = dict(grad=4000e-13, eog=150e-6) event_id = 1 epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=True) epochs_condition_1.pick_channels([ch_name]) # Load condition 2 event_id = 2 epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=True) epochs_condition_2.pick_channels([ch_name]) Explanation: Set parameters End of explanation decim = 2 freqs = np.arange(7, 30, 3) # define frequencies of interest n_cycles = 1.5 tfr_epochs_1 = tfr_morlet(epochs_condition_1, freqs, n_cycles=n_cycles, decim=decim, return_itc=False, average=False) tfr_epochs_2 = tfr_morlet(epochs_condition_2, freqs, n_cycles=n_cycles, decim=decim, return_itc=False, average=False) tfr_epochs_1.apply_baseline(mode='ratio', baseline=(None, 0)) tfr_epochs_2.apply_baseline(mode='ratio', baseline=(None, 0)) epochs_power_1 = tfr_epochs_1.data[:, 0, :, :] # only 1 channel as 3D matrix epochs_power_2 = tfr_epochs_2.data[:, 0, :, :] # only 1 channel as 3D matrix Explanation: Factor to downsample the temporal dimension of the TFR computed by tfr_morlet. Decimation occurs after frequency decomposition and can be used to reduce memory usage (and possibly comptuational time of downstream operations such as nonparametric statistics) if you don't need high spectrotemporal resolution. 
End of explanation threshold = 6.0 T_obs, clusters, cluster_p_values, H0 = \ permutation_cluster_test([epochs_power_1, epochs_power_2], n_permutations=100, threshold=threshold, tail=0) Explanation: Compute statistic End of explanation times = 1e3 * epochs_condition_1.times # change unit to ms evoked_condition_1 = epochs_condition_1.average() evoked_condition_2 = epochs_condition_2.average() plt.figure() plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43) plt.subplot(2, 1, 1) # Create new stats image with only significant clusters T_obs_plot = np.nan * np.ones_like(T_obs) for c, p_val in zip(clusters, cluster_p_values): if p_val <= 0.05: T_obs_plot[c] = T_obs[c] plt.imshow(T_obs, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', cmap='gray') plt.imshow(T_obs_plot, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', cmap='RdBu_r') plt.xlabel('Time (ms)') plt.ylabel('Frequency (Hz)') plt.title('Induced power (%s)' % ch_name) ax2 = plt.subplot(2, 1, 2) evoked_contrast = mne.combine_evoked([evoked_condition_1, evoked_condition_2], weights=[1, -1]) evoked_contrast.plot(axes=ax2, time_unit='s') plt.show() Explanation: View time-frequency plots End of explanation
3,697
Given the following text description, write Python code to implement the functionality described below step by step Description: Multigroup Mode Part I Step1: We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark. This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy). Note Step2: We will now add the scattering matrix data. Note Step3: Now that the UO2 data has been created, we can move on to the remaining materials using the same process. However, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem. For now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk. Step4: Generate 2-D C5G7 Problem Input Files To build the actual 2-D model, we will first begin by creating the materials.xml file. First we need to define materials that will be used in the problem. In other notebooks, either nuclides or elements were added to materials at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use the Material.add_macroscopic method to specify a macroscopic object. Unlike for nuclides and elements, we do not need provide information on atom/weight percents as no number densities are needed. When assigning macroscopic objects to a material, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when a macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required. Aside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks. Step5: Now we can go ahead and produce a materials.xml file for use by OpenMC Step6: Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file. We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers. Step7: The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types Step8: Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined. After that we can create the final cell to contain the entire core. Step9: Before we commit to the geometry, we should view it using the Python API's plotting capability Step10: OK, it looks pretty good, let's go ahead and write the file Step11: We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin. 
Step12: With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain. This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more! Step13: Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood. Step14: Results Visualization Now that we have run the simulation, let's look at the fission rate and flux tallies that we tallied.
Python Code: import os import matplotlib.pyplot as plt import matplotlib.colors as colors import numpy as np import openmc %matplotlib inline Explanation: Multigroup Mode Part I: Introduction This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D C5G7 benchmark model using the openmc.MGXSLibrary class to create the supporting data library on the fly. Generate MGXS Library End of explanation # Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant) groups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8)) uo2_xsdata = openmc.XSdata('uo2', groups) uo2_xsdata.order = 0 # When setting the data let the object know you are setting the data for a temperature of 294K. uo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1, 3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.) uo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2, 3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.) uo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2, 1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.) uo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2, 4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.) uo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7, 0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.) Explanation: We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark. This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy). Note: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section. End of explanation # The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns # (i.e., below the diagonal is up-scattering). scatter_matrix = \ [[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0], [0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0], [0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0], [0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0], [0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8], [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2], [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]] scatter_matrix = np.array(scatter_matrix) scatter_matrix = np.rollaxis(scatter_matrix, 0, 3) uo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.) Explanation: We will now add the scattering matrix data. Note: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing Groups, Scattering Order] to best enable other scattering representations. 
We will follow the more familiar approach in this notebook, and then use numpy's numpy.rollaxis function to change the ordering to what we need (scattering order on the inner dimension). End of explanation # Initialize the library mg_cross_sections_file = openmc.MGXSLibrary(groups) # Add the UO2 data to it mg_cross_sections_file.add_xsdata(uo2_xsdata) # And write to disk mg_cross_sections_file.export_to_hdf5('mgxs.h5') Explanation: Now that the UO2 data has been created, we can move on to the remaining materials using the same process. However, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem. For now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk. End of explanation # For every cross section data set in the library, assign an openmc.Macroscopic object to a material materials = {} for xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']: materials[xs] = openmc.Material(name=xs) materials[xs].set_density('macro', 1.) materials[xs].add_macroscopic(xs) Explanation: Generate 2-D C5G7 Problem Input Files To build the actual 2-D model, we will first begin by creating the materials.xml file. First we need to define materials that will be used in the problem. In other notebooks, either nuclides or elements were added to materials at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use the Material.add_macroscopic method to specify a macroscopic object. Unlike for nuclides and elements, we do not need provide information on atom/weight percents as no number densities are needed. When assigning macroscopic objects to a material, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when a macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required. Aside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks. End of explanation # Instantiate a Materials collection, register all Materials, and export to XML materials_file = openmc.Materials(materials.values()) # Set the location of the cross sections file to our pre-written set materials_file.cross_sections = 'c5g7.h5' materials_file.export_to_xml() Explanation: Now we can go ahead and produce a materials.xml file for use by OpenMC End of explanation # Create the surface used for each pin pin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf') # Create the cells which will be used to represent each pin type. 
cells = {} universes = {} for material in materials.values(): # Create the cell for the material inside the cladding cells[material.name] = openmc.Cell(name=material.name) # Assign the half-spaces to the cell cells[material.name].region = -pin_surf # Register the material with this cell cells[material.name].fill = material # Repeat the above for the material outside the cladding (i.e., the moderator) cell_name = material.name + '_moderator' cells[cell_name] = openmc.Cell(name=cell_name) cells[cell_name].region = +pin_surf cells[cell_name].fill = materials['water'] # Finally add the two cells we just made to a Universe object universes[material.name] = openmc.Universe(name=material.name) universes[material.name].add_cells([cells[material.name], cells[cell_name]]) Explanation: Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file. We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers. End of explanation lattices = {} # Instantiate the UO2 Lattice lattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly') lattices['UO2 Assembly'].dimension = [17, 17] lattices['UO2 Assembly'].lower_left = [-10.71, -10.71] lattices['UO2 Assembly'].pitch = [1.26, 1.26] u = universes['uo2'] g = universes['guide_tube'] f = universes['fiss_chamber'] lattices['UO2 Assembly'].universes = \ [[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u], [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u], [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u], [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]] # Create a containing cell and universe cells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly') cells['UO2 Assembly'].fill = lattices['UO2 Assembly'] universes['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly') universes['UO2 Assembly'].add_cell(cells['UO2 Assembly']) # Instantiate the MOX Lattice lattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly') lattices['MOX Assembly'].dimension = [17, 17] lattices['MOX Assembly'].lower_left = [-10.71, -10.71] lattices['MOX Assembly'].pitch = [1.26, 1.26] m = universes['mox43'] n = universes['mox7'] o = universes['mox87'] g = universes['guide_tube'] f = universes['fiss_chamber'] lattices['MOX Assembly'].universes = \ [[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m], [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m], [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m], [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m], [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m], [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m], [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m], [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m], [m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m], [m, n, n, o, o, o, o, o, o, o, o, o, o, o, 
n, n, m], [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m], [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m], [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m], [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m], [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m], [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m], [m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]] # Create a containing cell and universe cells['MOX Assembly'] = openmc.Cell(name='MOX Assembly') cells['MOX Assembly'].fill = lattices['MOX Assembly'] universes['MOX Assembly'] = openmc.Universe(name='MOX Assembly') universes['MOX Assembly'].add_cell(cells['MOX Assembly']) # Instantiate the reflector Lattice lattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly') lattices['Reflector Assembly'].dimension = [1,1] lattices['Reflector Assembly'].lower_left = [-10.71, -10.71] lattices['Reflector Assembly'].pitch = [21.42, 21.42] lattices['Reflector Assembly'].universes = [[universes['water']]] # Create a containing cell and universe cells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly') cells['Reflector Assembly'].fill = lattices['Reflector Assembly'] universes['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly') universes['Reflector Assembly'].add_cell(cells['Reflector Assembly']) Explanation: The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types End of explanation lattices['Core'] = openmc.RectLattice(name='3x3 core lattice') lattices['Core'].dimension= [3, 3] lattices['Core'].lower_left = [-32.13, -32.13] lattices['Core'].pitch = [21.42, 21.42] r = universes['Reflector Assembly'] u = universes['UO2 Assembly'] m = universes['MOX Assembly'] lattices['Core'].universes = [[u, m, r], [m, u, r], [r, r, r]] # Create boundary planes to surround the geometry min_x = openmc.XPlane(x0=-32.13, boundary_type='reflective') max_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum') min_y = openmc.YPlane(y0=-32.13, boundary_type='vacuum') max_y = openmc.YPlane(y0=+32.13, boundary_type='reflective') # Create root Cell root_cell = openmc.Cell(name='root cell') root_cell.fill = lattices['Core'] # Add boundary planes root_cell.region = +min_x & -max_x & +min_y & -max_y # Create root Universe root_universe = openmc.Universe(name='root universe', universe_id=0) root_universe.add_cell(root_cell) Explanation: Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined. After that we can create the final cell to contain the entire core. 
End of explanation root_universe.plot(origin=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500), color_by='material') Explanation: Before we commit to the geometry, we should view it using the Python API's plotting capability End of explanation # Create Geometry and set root Universe geometry = openmc.Geometry(root_universe) # Export to "geometry.xml" geometry.export_to_xml() Explanation: OK, it looks pretty good, let's go ahead and write the file End of explanation tallies_file = openmc.Tallies() # Instantiate a tally Mesh mesh = openmc.RegularMesh() mesh.dimension = [17 * 2, 17 * 2] mesh.lower_left = [-32.13, -10.71] mesh.upper_right = [+10.71, +32.13] # Instantiate tally Filter mesh_filter = openmc.MeshFilter(mesh) # Instantiate the Tally tally = openmc.Tally(name='mesh tally') tally.filters = [mesh_filter] tally.scores = ['fission'] # Add tally to collection tallies_file.append(tally) # Export all tallies to a "tallies.xml" file tallies_file.export_to_xml() Explanation: We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin. End of explanation # OpenMC simulation parameters batches = 150 inactive = 50 particles = 5000 # Instantiate a Settings object settings_file = openmc.Settings() settings_file.batches = batches settings_file.inactive = inactive settings_file.particles = particles # Tell OpenMC this is a multi-group problem settings_file.energy_mode = 'multi-group' # Set the verbosity to 6 so we dont see output for every batch settings_file.verbosity = 6 # Create an initial uniform spatial source distribution over fissionable zones bounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50] uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True) settings_file.source = openmc.Source(space=uniform_dist) # Tell OpenMC we want to run in eigenvalue mode settings_file.run_mode = 'eigenvalue' # Export to "settings.xml" settings_file.export_to_xml() Explanation: With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain. This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more! End of explanation # Run OpenMC openmc.run() Explanation: Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood. End of explanation # Load the last statepoint file and keff value sp = openmc.StatePoint('statepoint.' + str(batches) + '.h5') # Get the OpenMC pin power tally data mesh_tally = sp.get_tally(name='mesh tally') fission_rates = mesh_tally.get_values(scores=['fission']) # Reshape array to 2D for plotting fission_rates.shape = mesh.dimension # Normalize to the average pin power fission_rates /= np.mean(fission_rates[fission_rates > 0.]) # Force zeros to be NaNs so their values are not included when matplotlib calculates # the color scale fission_rates[fission_rates == 0.] 
= np.nan # Plot the pin powers plt.figure() plt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower') plt.colorbar() plt.title('Pin Powers') plt.show() Explanation: Results Visualization Now that we have run the simulation, let's look at the fission rate tally that we recorded. End of explanation
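The statepoint comment above mentions the keff value, but the snippet only pulls the pin-power tally. As a small optional sketch (an addition, not part of the original notebook), the combined eigenvalue can be read from the same statepoint object; note that the exact form of k_combined (a ufloat versus a value/uncertainty pair) depends on the OpenMC version.
# Report the combined k-effective estimate stored in the statepoint.
# `sp` is the openmc.StatePoint opened above; `k_combined` holds the combined estimator.
keff = sp.k_combined
print('Combined k-effective =', keff)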
3,698
Given the following text description, write Python code to implement the functionality described below step by step Description: NLL Curves Step1: Graph training error as a function of average NLL over epochs LR = learning rate {0.1, 0.01, 0.001} SZ = size of the hidden layer and the embedding size {100, 200, 250} Step2: Conclusion Best performance with a larger embedding size (250) and a learning rate of 0.01. The concern now is overfitting.
Python Code: %matplotlib inline import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt import pandas as pd pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns sns.set_style("whitegrid") sns.set_context("poster") with open('learning_rates.txt', 'r') as f: lines = f.readlines() values = [] for l in lines: if "," in l: values.append(list(map(float, l.split(",")))) else: values.append(float(l)) learning_rates = [] sizes = [] nlls = [] for idx, v in enumerate(values): if idx % 21 == 0: learning_rates.append(v[1]) sizes.append(v[0]) elif idx % 21 == 1: nlls.append(values[idx:idx+20]) else: pass Explanation: NLL Curves End of explanation f, ax = plt.subplots(3,3, sharex=True) X = range(1, 21) for i in range(len(nlls)): a = ax[i // 3][i % 3] a.plot(X, nlls[i]) a.set_title("LR: %s, SZ: %s" % (learning_rates[i], sizes[i])) a.set_ylabel("Average NLL") a.set_xlabel("Epochs") a.set_ylim([0.5, 2.6]) plt.tight_layout() Explanation: Graph training error as a function of average NLL over epochs LR = learning rate {0.1, 0.01, 0.001} SZ = size of the hidden layer and the embedding size {100, 200, 250} End of explanation with open('test.txt', 'r') as f: data = f.readlines() data = [x.split('\t')[:2] for x in data] data = [(int(x), float(y)) for (x,y) in data] x = [d[0] for d in data] y = [d[1] for d in data] plt.plot(x, y) Explanation: Conclusion Best performance with a larger embedding size (250) and a learning rate of 0.01. The concern now is overfitting. End of explanation
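As a quick numerical check on the conclusion above (an addition, not part of the original notebook), the best configuration can also be picked out directly from the parsed curves instead of reading it off the plots:
# Find the configuration whose final-epoch average NLL is lowest.
# Uses the `nlls`, `learning_rates`, and `sizes` lists built above.
final_nll = np.array([run[-1] for run in nlls])
best = int(np.argmin(final_nll))
print("Lowest final NLL: LR=%s, SZ=%s (NLL=%.3f)"
      % (learning_rates[best], sizes[best], final_nll[best]))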
3,699
Given the following text description, write Python code to implement the functionality described below step by step Description: Sorting Network Sets Frequently a set of Networks is recorded while changing some other variable, such as voltage, current, or time. So... now you have this set of data and you want to look at how some feature evolves, or calculate some representative statistics. This example demonstrates how to do this using NetworkSets. Generate some Data For the purpose of this example we use a predefined skrf.Media object to generate some networks, and save them as a series of touchstone files. Each file is named with a timestamp, generated with the convenience function rf.now_string().
Python Code: from time import sleep import skrf as rf %matplotlib inline from pylab import * rf.stylely() !rm -rf tmp !mkdir tmp wg = rf.wr10 # just a dummy media object to generate data wg.frequency.npoints = 101 for k in range(10): # timestamp generated with `rf.now_string()` ntwk = wg.random(name=rf.now_string()+'.s1p') ntwk.s = k*ntwk.s ntwk.write_touchstone(dir='tmp') sleep(.1) Explanation: Sorting Network Sets Frequently a set of Networks is recorded while changing some other variable, such as voltage, current, or time. So... now you have this set of data and you want to look at how some feature evolves, or calculate some representative statistics. This example demonstrates how to do this using NetworkSets. Generate some Data For the purpose of this example we use a predefined skrf.Media object to generate some networks, and save them as a series of touchstone files. Each file is named with a timestamp, generated with the convenience function rf.now_string(). End of explanation ls tmp Explanation: Let's take a look at what we made End of explanation ns = rf.NS.from_dir('tmp') ns.ntwk_set Explanation: Not sorted (default) When created using NetworkSet.from_dir(), the NetworkSet stores each Network in random order End of explanation ns.sort() ns.ntwk_set Explanation: Sort it End of explanation ns = rf.NetworkSet.from_dir('tmp') ns.sort(key = lambda x: x.name.split('.')[0]) ns.ntwk_set Explanation: Sorting using key argument You can also pass a function through the key argument, which allows you to sort on arbitrary properties. For example, we could sort based on the sub-second field of the name, End of explanation ns.sort() dt_idx = [rf.now_string_2_dt(k.name) for k in ns] dt_idx Explanation: Extracting Datetimes You can also convert the ntwk names to datetime objects, in case you want to plot something with pandas or do some other processing. There is a companion function to rf.now_string() which is rf.now_string_2_dt(). How creative.. End of explanation import pandas as pd dates = pd.DatetimeIndex(dt_idx) # create a function to pull out S11 in degrees at a specific frequency s_deg_at = lambda s:{s: [k[s].s_deg[0,0,0] for k in ns]} for f in ['80ghz', '90ghz', '100ghz']: df = pd.DataFrame(s_deg_at(f), index=dates) df.plot(ax=gca()) title('Phase Evolution in Time') ylabel('S11 (deg)') Explanation: Put into a Pandas DataFrame and Plot The next step is to slice the network set along the time axis. For example we may want to look at S11 phase, at a few different frequencies. This can be done with the following script. Note that NetworkSets can be sliced by frequency with human readable strings, just like Networks. End of explanation mat = array([k.s_db.flatten() for k in ns]) mat.shape Explanation: Visualizing Behavior with signature It may be of use to visualize the evolution of a scalar component of the network set over all frequencies. This can be done with a little bit of array manipulation and imshow. For example, if we take the magnitude in dB for each network, and create a 2D matrix from this, End of explanation freq = ns[0].frequency # creates x and y scales extent = [freq.f_scaled[0], freq.f_scaled[-1], len(ns), 0] # make the image imshow(mat, aspect='auto', extent=extent, interpolation='nearest') # label things grid(0) freq.labelXAxis() ylabel('Network #') cbar = colorbar() cbar.set_label('Magnitude (dB)') Explanation: This array has shape ('Number of Networks', 'Number of frequency points'). This can be visualized with imshow. Most of the code below just adds labels and axis scales. 
End of explanation ns.signature(component='s_db', vs_time=True, cbar_label='Magnitude (dB)') Explanation: This process is automated with the method NetworkSet.signature(). It even has a vs_time parameter which will automatically create the DateTime index from the Networks' names, if they were written by rf.now_string(). End of explanation
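The introduction also mentions calculating representative statistics for the set. As a short optional sketch (an addition, not part of the original notebook; property and method names may differ slightly between scikit-rf versions), NetworkSet exposes ensemble statistics and uncertainty-bound plots directly:
# Ensemble statistics over the 10 networks in `ns`.
mean_ntwk = ns.mean_s      # element-wise mean of the s-parameters, returned as a Network
std_ntwk = ns.std_s        # element-wise standard deviation
# Plot the mean |S11| in dB with uncertainty bounds for the whole set.
figure()
ns.plot_uncertainty_bounds_s_db(m=0, n=0)
title('S11 of the set with uncertainty bounds')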